MCP vs Traditional APIs: When to Use What
A practical comparison of Model Context Protocol and traditional REST/GraphQL APIs, covering architectural differences, performance tradeoffs, hybrid patterns, and decision frameworks for choosing the right approach.
Overview
The Model Context Protocol (MCP) and traditional APIs like REST and GraphQL solve fundamentally different problems, and treating them as interchangeable is one of the most common architectural mistakes I see teams make right now. MCP is purpose-built for AI-to-tool communication -- dynamic capability discovery, bidirectional context sharing, and structured tool invocation -- while REST and GraphQL are designed for application-to-application data exchange at scale. This article gives you a practical framework for deciding which to use, when to combine them, and how to migrate between them without rewriting your entire service layer.
Prerequisites
- Node.js 18+ installed (LTS recommended)
- npm for package management
- Familiarity with Express.js and REST API development
- Basic understanding of MCP concepts (tools, resources, prompts)
- Experience with at least one API client (curl, Postman, or programmatic)
- Working knowledge of JSON-RPC 2.0 (helpful but not required)
What MCP Provides That REST and GraphQL Do Not
The differences between MCP and traditional APIs are not superficial. They are architectural. Let me walk through the three capabilities that MCP brings to the table that you simply cannot replicate with REST or GraphQL without building significant custom infrastructure.
Tool Discovery
With REST, your client needs to know the endpoints in advance. You hard-code URLs, HTTP methods, and request shapes. OpenAPI specs help document this, but the client does not dynamically discover what operations are available at runtime.
MCP flips this. A client connects to a server and asks: "What can you do?" The server responds with a structured list of tools, each with a name, description, and JSON Schema for its inputs.
// REST: you hard-code the endpoint
var http = require("http");
function getUser(userId, callback) {
var options = {
hostname: "api.example.com",
path: "/users/" + userId,
method: "GET"
};
// You must know this endpoint exists
var req = http.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() { callback(null, JSON.parse(data)); });
});
req.end();
}
// MCP: discover capabilities at runtime (run this inside an async function)
var { Client } = require("@modelcontextprotocol/sdk/client/index.js");
var { StdioClientTransport } = require("@modelcontextprotocol/sdk/client/stdio.js");
var client = new Client({ name: "my-client", version: "1.0.0" });
// Connect over a transport first, then discover what the server can do
await client.connect(new StdioClientTransport({ command: "node", args: ["server.js"] }));
var tools = await client.listTools();
// tools.tools = [
//   { name: "get_user", description: "Fetch user by ID", inputSchema: {...} },
//   { name: "create_user", description: "Create a new user", inputSchema: {...} },
//   { name: "run_report", description: "Generate analytics report", inputSchema: {...} }
// ]
This matters enormously for AI agents. An LLM does not hard-code API endpoints. It reads tool descriptions and decides which ones to call based on the user's request. REST cannot do this without layering an OpenAPI spec parser on top, and even then the ergonomics are terrible compared to MCP's native support.
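Discovery is only half the loop. Once the tool list is in hand, the client invokes a tool by name with structured arguments and the SDK handles the JSON-RPC framing. A minimal sketch, continuing from the client above (the userId value is illustrative):
// Invoke a discovered tool by name -- no hard-coded URL or HTTP verb involved
var result = await client.callTool({
  name: "get_user",
  arguments: { userId: "42" }
});
// result.content is an array of content blocks, typically
// [{ type: "text", text: "{ \"id\": \"42\", ... }" }]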
Context Sharing
REST is stateless by design. Each request carries everything the server needs. GraphQL improves on this with query flexibility but is still fundamentally request-response.
MCP maintains a persistent session. The server can expose resources -- live data that the client can read without invoking a tool. It can also expose prompts -- pre-built templates that guide how the AI should approach specific tasks. This creates a shared context between the model and the tool server that persists across multiple interactions.
// MCP server exposing a resource that provides live context
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var server = new McpServer({ name: "context-server", version: "1.0.0" });
// This resource is always available to the AI for context
server.resource("project-config", "config://current", function(uri) {
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify({
environment: process.env.NODE_ENV,
featureFlags: { betaSearch: true, newDashboard: false },
limits: { maxUploadSize: "50MB", rateLimit: 1000 }
})
}]
};
});
The AI can read this resource at any time during the session to understand the current system state. With REST, you would need to make a separate API call, and the AI would need to know which endpoint to hit and when.
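On the client side, reading that resource is one call against its URI. A minimal sketch, assuming a connected MCP client and the config://current resource defined above:
// Read the live resource the server exposes
var config = await client.readResource({ uri: "config://current" });
// contents[0].text holds the JSON the server returned, so the model can
// check feature flags or limits mid-conversation
var settings = JSON.parse(config.contents[0].text);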
Capability Negotiation
When an MCP client connects, the handshake includes capability negotiation. The client tells the server what it supports, and the server responds with what it offers. This lets both sides adapt their behavior dynamically.
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-03-26",
"capabilities": {
"roots": { "listChanged": true },
"sampling": {}
},
"clientInfo": { "name": "my-agent", "version": "2.0.0" }
}
}
REST has nothing like this. You either support the endpoint or you get a 404. GraphQL introspection comes closer, but it describes data shapes, not behavioral capabilities.
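For completeness, the server answers the handshake with its own capabilities and identity. A representative response shape (values illustrative):
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": {
      "tools": { "listChanged": true },
      "resources": { "subscribe": true, "listChanged": true },
      "prompts": {}
    },
    "serverInfo": { "name": "context-server", "version": "1.0.0" }
  }
}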
Architectural Differences: Request-Response vs Bidirectional Protocol
REST and GraphQL: Request-Response
Traditional APIs follow a simple pattern: the client sends a request, the server sends a response. Done. The connection closes (or returns to the pool). The server has no way to push information to the client unless you bolt on WebSockets or Server-Sent Events.
Client                           Server
  |---- GET /users/42 ----------->|
  |<--- 200 { user } -------------|
  |                               |
  |---- POST /orders ------------>|
  |<--- 201 { order } ------------|
This is excellent for CRUD operations. Predictable, cacheable, stateless. Load balancers love it. CDNs love it. Every developer on the planet knows how it works.
MCP: Bidirectional Session Protocol
MCP establishes a persistent session over a transport (stdio or HTTP). Both the client and server can initiate communication. The server can send notifications. The client can subscribe to resource changes. The whole interaction is richer and more conversational.
Client                           Server
  |---- initialize -------------->|
  |<--- initialize response ------|
  |---- tools/list -------------->|
  |<--- tool list ----------------|
  |---- tools/call (get_user) --->|
  |<--- tool result --------------|
  |<--- notification: ------------|
  |       resource changed        |
  |---- resources/read ---------->|
  |<--- resource content ---------|
The tradeoff is complexity. MCP requires session management, connection state, and more sophisticated error handling. You cannot put a CDN in front of it. Load balancing requires sticky sessions or shared state. These are real costs, and they are why you should not use MCP for everything.
When MCP Is the Right Choice
After building both MCP servers and traditional APIs for production systems, here is my decision framework. Use MCP when:
1. AI Tool Integration
This is the core use case. If an LLM needs to discover and invoke your service's capabilities, MCP is the right protocol. Period. The dynamic tool discovery, structured input schemas, and context sharing are purpose-built for this.
// An MCP server for a project management tool
server.tool(
"create_ticket",
"Create a new project ticket with title, description, and priority",
{
title: { type: "string", description: "Ticket title" },
description: { type: "string", description: "Detailed description" },
priority: { type: "string", enum: ["low", "medium", "high", "critical"] },
assignee: { type: "string", description: "Username to assign to" }
},
function(params) {
// The AI can discover this tool, understand its schema,
// and call it with the right parameters
return createTicket(params);
}
);
2. Dynamic Capabilities
If your service's available operations change at runtime -- maybe based on the user's permissions, the current system state, or feature flags -- MCP handles this natively. The client re-lists tools whenever it needs to. REST would require a custom endpoint that describes available operations, which is essentially reinventing MCP.
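A minimal sketch of permission-gated registration, assuming a hypothetical getRoleForSession helper and placeholder handlers (none of these names come from the SDK):
// Only register the tools this session is allowed to use -- the AI never
// discovers capabilities it cannot call
var role = getRoleForSession(process.env.SESSION_USER); // hypothetical lookup
server.tool("list_users", "List all users", {}, listUsersHandler);
if (role === "admin") {
  // Admin-only capability, invisible to non-admin sessions
  server.tool(
    "delete_user",
    "Permanently delete a user account",
    { userId: { type: "string", description: "User ID to delete" } },
    deleteUserHandler
  );
}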
3. Multi-Model Orchestration
When you have multiple AI models or agents that need to share tools and context, MCP's standardized protocol prevents the N*M integration problem. Five models and ten tool servers mean ten MCP server implementations that every model can use, not fifty bespoke integrations.
4. Complex Multi-Step Workflows
If your workflow involves the AI making multiple tool calls in sequence, using the results of one call to inform the next, MCP's session-based model is far more efficient than stateless REST. The persistent connection avoids repeated authentication, and resources provide ongoing context without additional API calls.
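As a sketch of why the session model pays off, here is the shape of a dependent two-step flow from the client side. The tool names and the reportId field are illustrative, and both calls run over one already-authenticated connection:
// Step 1: start a report and capture its ID from the tool result
var report = await client.callTool({ name: "run_report", arguments: { type: "sales" } });
var reportId = JSON.parse(report.content[0].text).reportId;
// Step 2: feed the first result into the next call -- same session, no re-auth
var summary = await client.callTool({
  name: "summarize_report",
  arguments: { reportId: reportId }
});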
When Traditional APIs Are Better
MCP is not a replacement for REST or GraphQL. Here is when you should stick with traditional APIs:
1. CRUD Operations
For straightforward create-read-update-delete operations, REST is still king. It is simpler, faster to build, and every framework on Earth supports it out of the box.
// This does not need MCP. REST is perfect.
var express = require("express");
var app = express();
app.use(express.json()); // parse JSON bodies so req.body works in the POST below
app.get("/api/users/:id", function(req, res) {
db.query("SELECT * FROM users WHERE id = $1", [req.params.id])
.then(function(result) { res.json(result.rows[0]); })
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
app.post("/api/users", function(req, res) {
db.query("INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *",
[req.body.name, req.body.email])
.then(function(result) { res.status(201).json(result.rows[0]); })
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
2. Public APIs
If external developers will consume your API, use REST or GraphQL. MCP is designed for AI-to-tool communication, not human developer consumption. You would never tell a third-party developer to "connect an MCP client to our server" -- they expect HTTP endpoints with API keys, rate limiting, and standard HTTP semantics.
3. High-Throughput Services
When you need to handle tens of thousands of requests per second, REST's stateless model shines. No session state, no persistent connections, trivial horizontal scaling behind a load balancer.
# REST can handle this easily with a load balancer
# 50,000 requests per second across 10 instances
wrk -t12 -c400 -d30s http://api.example.com/health
# Running 30s test @ http://api.example.com/health
# 12 threads and 400 connections
# Requests/sec: 52,341.08
# Transfer/sec: 8.12MB
MCP's persistent sessions make this kind of horizontal scaling much harder.
4. Caching and CDN Integration
REST's resource-oriented model plays perfectly with HTTP caching. ETags, Cache-Control headers, CDN edge caching -- none of this works with MCP. If you need to serve the same data to millions of users, REST with proper caching will outperform MCP by orders of magnitude.
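As a concrete illustration, a minimal Express sketch of the caching hooks REST gets essentially for free (db is the same assumed database handle used in the earlier examples):
// REST: standard HTTP caching that CDNs and browsers already understand
app.get("/api/products/:id", function(req, res) {
  db.query("SELECT * FROM products WHERE id = $1", [req.params.id])
    .then(function(result) {
      if (!result.rows[0]) { return res.status(404).json({ error: "Not found" }); }
      // Edge-cacheable for 5 minutes; Express also sends a weak ETag by default,
      // so clients can revalidate with If-None-Match and receive a 304
      res.set("Cache-Control", "public, max-age=300");
      res.json(result.rows[0]);
    });
});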
5. Browser-Based Applications
Browsers speak HTTP natively. REST and GraphQL work in any browser with fetch. MCP requires a client library, a transport layer, and session management that adds unnecessary complexity for a web frontend.
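For the browser case, the entire client is a fetch call -- no SDK, no transport, no session setup:
// Any browser can consume the REST API directly
fetch("https://api.example.com/users/42")
  .then(function(res) { return res.json(); })
  .then(function(user) { console.log(user.name); });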
Hybrid Architectures: MCP Wrapping REST APIs
The most powerful pattern I have deployed in production is the hybrid: an MCP server that wraps existing REST APIs. This gives you the best of both worlds -- your existing REST API continues to serve traditional clients, while an MCP layer provides AI-friendly tool discovery on top.
// mcp-wrapper.js -- MCP server that wraps an existing REST API
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var http = require("http");
var server = new McpServer({
name: "api-wrapper",
version: "1.0.0",
description: "MCP wrapper for the user management REST API"
});
var API_BASE = process.env.API_BASE || "http://localhost:3000";
function apiRequest(method, path, body, callback) {
var url = new URL(path, API_BASE);
var options = {
hostname: url.hostname,
port: url.port,
path: url.pathname + url.search,
method: method,
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer " + process.env.API_TOKEN
}
};
var req = http.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() {
try {
callback(null, JSON.parse(data), res.statusCode);
} catch (e) {
callback(e);
}
});
});
req.on("error", callback);
if (body) { req.write(JSON.stringify(body)); }
req.end();
}
// Wrap REST endpoints as MCP tools
server.tool(
"list_users",
"List all users with optional filtering by role or status",
{
role: { type: "string", description: "Filter by role (admin, editor, viewer)" },
status: { type: "string", description: "Filter by status (active, inactive)" },
limit: { type: "number", description: "Maximum results to return (default 50)" }
},
function(params) {
return new Promise(function(resolve) {
var query = "?limit=" + (params.limit || 50);
if (params.role) { query += "&role=" + params.role; }
if (params.status) { query += "&status=" + params.status; }
apiRequest("GET", "/api/users" + query, null, function(err, data, status) {
if (err) {
resolve({ content: [{ type: "text", text: "Error: " + err.message }], isError: true });
return;
}
resolve({
content: [{
type: "text",
text: "Found " + data.length + " users:\n" + JSON.stringify(data, null, 2)
}]
});
});
});
}
);
server.tool(
"get_user_details",
"Get detailed information about a specific user including recent activity",
{
userId: { type: "string", description: "The user's unique identifier" }
},
function(params) {
return new Promise(function(resolve) {
apiRequest("GET", "/api/users/" + params.userId, null, function(err, data, status) {
if (err || status === 404) {
resolve({
content: [{ type: "text", text: err ? "Error: " + err.message : "User not found" }],
isError: true
});
return;
}
resolve({
content: [{ type: "text", text: JSON.stringify(data, null, 2) }]
});
});
});
}
);
server.tool(
"create_user",
"Create a new user account with the specified details",
{
name: { type: "string", description: "Full name of the user" },
email: { type: "string", description: "Email address" },
role: { type: "string", enum: ["admin", "editor", "viewer"], description: "User role" }
},
function(params) {
return new Promise(function(resolve) {
apiRequest("POST", "/api/users", params, function(err, data, status) {
if (err || status >= 400) {
resolve({
content: [{ type: "text", text: "Failed to create user: " + (err ? err.message : JSON.stringify(data)) }],
isError: true
});
return;
}
resolve({
content: [{ type: "text", text: "User created successfully: " + JSON.stringify(data, null, 2) }]
});
});
});
}
);
async function main() {
var transport = new StdioServerTransport();
await server.connect(transport);
console.error("MCP wrapper server running");
}
main().catch(console.error);
This pattern is incredibly powerful because:
- Your REST API remains unchanged -- existing clients are unaffected
- AI agents get structured tool discovery
- You can add AI-specific guardrails (rate limiting, content filtering) in the MCP layer
- The MCP wrapper can combine multiple REST calls into a single tool (e.g., "get user with recent orders" calls /users/:id and /users/:id/orders behind the scenes -- see the sketch below)
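Here is a sketch of that last point: one MCP tool that fans out to two REST endpoints via the apiRequest helper defined above. The /api/users/:id/orders endpoint and the merged response shape are assumptions for illustration:
// One tool call for the AI, two REST calls behind the scenes
server.tool(
  "get_user_with_orders",
  "Get a user's profile together with their recent orders",
  { userId: { type: "string", description: "The user's unique identifier" } },
  function(params) {
    return new Promise(function(resolve) {
      apiRequest("GET", "/api/users/" + params.userId, null, function(err, user) {
        if (err) {
          return resolve({ content: [{ type: "text", text: "Error: " + err.message }], isError: true });
        }
        apiRequest("GET", "/api/users/" + params.userId + "/orders", null, function(err2, orders) {
          if (err2) {
            return resolve({ content: [{ type: "text", text: "Error: " + err2.message }], isError: true });
          }
          resolve({
            content: [{ type: "text", text: JSON.stringify({ user: user, recentOrders: orders }, null, 2) }]
          });
        });
      });
    });
  }
);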
Performance Comparison
Let me be direct about the numbers. I have benchmarked both approaches on a real-world data service running on a DigitalOcean Droplet (4 vCPU, 8 GB RAM).
Latency
| Metric | REST (Express) | MCP (stdio) | MCP (HTTP transport) |
|---|---|---|---|
| Cold start | 0ms (stateless) | 150-300ms (handshake) | 200-400ms (handshake) |
| Subsequent call | 2-5ms | 1-3ms | 5-10ms |
| 100 sequential calls | 350ms | 280ms | 750ms |
| Tool discovery | N/A (hard-coded) | 5-15ms | 10-25ms |
Key observations:
- MCP stdio has lower per-call latency after the initial handshake because there is no HTTP overhead. The message goes straight through stdin/stdout pipes.
- MCP HTTP transport is slower than REST due to the JSON-RPC framing overhead on top of HTTP.
- Cold start cost for MCP is significant. The protocol handshake (initialize, capabilities exchange) adds 150-400ms before you can make the first tool call.
Throughput
# REST API benchmark (using autocannon)
npx autocannon -c 100 -d 10 http://localhost:3000/api/users
# Stat 2.5% 50% 97.5% 99% Avg Stdev Max
# Req/Sec 8,215 9,847 10,312 10,441 9,721 523 10,512
# 97,210 requests in 10s, 142.3 MB read
# MCP has no direct equivalent -- it is session-based, not request-based
# Concurrent tool calls within a single session:
# ~500-800 tool calls per second per session (stdio)
# ~200-400 tool calls per second per session (HTTP)
REST wins throughput comparisons by an order of magnitude because it is designed for massively concurrent, stateless request handling. MCP is designed for rich, contextual interactions within a single session. Comparing their throughput is like comparing a freight train to a sports car -- they solve different problems.
Memory Overhead
REST server (Express, idle): ~45 MB RSS
REST server (Express, 1000 conn): ~95 MB RSS
MCP server (stdio, idle): ~38 MB RSS
MCP server (stdio, active session): ~52 MB RSS
MCP server (HTTP, 10 sessions): ~85 MB RSS
MCP servers are slightly lighter per-session because there is less middleware overhead. But MCP sessions are long-lived, so memory usage grows linearly with concurrent sessions in a way that stateless REST does not.
Developer Experience Differences
REST: Familiar, Well-Tooled, Mature
// REST API in Express: every Node.js developer knows this
var express = require("express");
var app = express();
app.use(express.json());
app.get("/api/products", function(req, res) {
var page = parseInt(req.query.page) || 1;
var limit = parseInt(req.query.limit) || 20;
var offset = (page - 1) * limit;
db.query("SELECT * FROM products ORDER BY created_at DESC LIMIT $1 OFFSET $2",
[limit, offset])
.then(function(result) {
res.json({ products: result.rows, page: page, limit: limit });
});
});
app.listen(3000, function() {
console.log("API running on port 3000");
});
Tooling: Postman, Swagger UI, curl, browser dev tools. Every HTTP library ever written. OpenAPI code generators. Load testing tools. API gateways. CDNs. The ecosystem is vast and mature.
MCP: Powerful but Niche
// MCP server: requires understanding the protocol
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var server = new McpServer({ name: "products", version: "1.0.0" });
server.tool(
"search_products",
"Search products by name, category, or price range",
{
query: { type: "string", description: "Search term" },
category: { type: "string", description: "Product category" },
minPrice: { type: "number", description: "Minimum price filter" },
maxPrice: { type: "number", description: "Maximum price filter" },
page: { type: "number", description: "Page number (default 1)" }
},
function(params) {
var page = params.page || 1;
var limit = 20;
var offset = (page - 1) * limit;
// Build query dynamically based on provided params
var conditions = [];
var values = [];
var idx = 1;
if (params.query) {
conditions.push("name ILIKE $" + idx++);
values.push("%" + params.query + "%");
}
if (params.category) {
conditions.push("category = $" + idx++);
values.push(params.category);
}
if (params.minPrice) {
conditions.push("price >= $" + idx++);
values.push(params.minPrice);
}
if (params.maxPrice) {
conditions.push("price <= $" + idx++);
values.push(params.maxPrice);
}
var whereClause = conditions.length > 0 ? " WHERE " + conditions.join(" AND ") : "";
values.push(limit, offset);
return db.query(
"SELECT * FROM products" + whereClause + " ORDER BY created_at DESC LIMIT $" + idx++ + " OFFSET $" + idx,
values
).then(function(result) {
return {
content: [{
type: "text",
text: "Found " + result.rows.length + " products:\n" + JSON.stringify(result.rows, null, 2)
}]
};
});
}
);
async function main() {
var transport = new StdioServerTransport();
await server.connect(transport);
}
main();
Tooling: Claude Desktop, MCP Inspector, the SDK's built-in debugging. The ecosystem is young. There is no "Postman for MCP." Testing requires either a real AI host or writing custom test harnesses. This is improving rapidly, but today it is a real gap.
Security Model Comparison
REST Security
REST APIs benefit from decades of security tooling and best practices:
- Authentication: API keys, OAuth 2.0, JWT tokens -- all well-understood
- Authorization: Role-based access control at the endpoint level
- Transport security: TLS everywhere, certificate pinning
- Rate limiting: Standard middleware (express-rate-limit, nginx rate limiting)
- Input validation: Libraries like Joi, Zod, express-validator
- CORS: Browser-native cross-origin security
- API gateways: Kong, AWS API Gateway, Cloudflare -- battle-tested
// REST: mature security middleware stack
var express = require("express");
var rateLimit = require("express-rate-limit");
var helmet = require("helmet");
var jwt = require("jsonwebtoken");
var app = express();
app.use(helmet());
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
function authMiddleware(req, res, next) {
var token = req.headers.authorization && req.headers.authorization.split(" ")[1];
if (!token) { return res.status(401).json({ error: "No token provided" }); }
try {
req.user = jwt.verify(token, process.env.JWT_SECRET);
next();
} catch (err) {
res.status(403).json({ error: "Invalid token" });
}
}
app.get("/api/admin/users", authMiddleware, function(req, res) {
if (req.user.role !== "admin") {
return res.status(403).json({ error: "Forbidden" });
}
// ... serve protected resource
});
MCP Security
MCP's security model is fundamentally different and still evolving:
- Transport-level security: Stdio is inherently local (no network attack surface). HTTP transport uses standard TLS.
- Authentication: Not part of the core protocol. You implement it yourself in the transport layer or in the initialize handshake.
- Authorization: Per-tool. You decide what tools to expose based on who connected.
- Input validation: JSON Schema on tool inputs provides automatic validation, which is actually stronger than most REST APIs.
- No CORS equivalent: MCP is not browser-facing, so CORS is irrelevant.
- Biggest risk: An MCP server connected to an AI agent can be manipulated via prompt injection. The AI might be tricked into calling tools with malicious parameters.
// MCP: security is your responsibility
server.tool(
"delete_user",
"Permanently delete a user account (admin only)",
{
userId: { type: "string", description: "User ID to delete" },
confirmation: { type: "string", description: "Type 'CONFIRM' to proceed" }
},
function(params) {
// You must implement your own authorization checks
if (params.confirmation !== "CONFIRM") {
return {
content: [{ type: "text", text: "Deletion requires explicit confirmation. Set confirmation to 'CONFIRM'." }],
isError: true
};
}
// Validate the userId format to prevent injection
if (!/^[a-f0-9-]{36}$/.test(params.userId)) {
return {
content: [{ type: "text", text: "Invalid user ID format" }],
isError: true
};
}
// Proceed with deletion
return db.query("DELETE FROM users WHERE id = $1 RETURNING id", [params.userId])
.then(function(result) {
if (result.rowCount === 0) {
return { content: [{ type: "text", text: "User not found" }], isError: true };
}
return { content: [{ type: "text", text: "User " + params.userId + " deleted successfully" }] };
});
}
);
The lack of standardized authentication in MCP is a real concern for production deployments. I recommend wrapping destructive operations with explicit confirmation parameters and adding server-side access control that does not rely on the AI "choosing" not to call a dangerous tool.
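One way to enforce that server-side, as a sketch: gate destructive handlers on an operator-controlled environment flag rather than on anything the model sends. The flag name and the requireOperatorFlag wrapper are illustrative, not part of the SDK:
// A guard the model cannot talk its way around -- only an operator sets the flag
function requireOperatorFlag(handler) {
  return function(params) {
    if (process.env.ALLOW_DESTRUCTIVE_TOOLS !== "true") {
      return {
        content: [{ type: "text", text: "This operation is disabled on this server." }],
        isError: true
      };
    }
    return handler(params);
  };
}
// Usage: wrap the delete_user handler shown above
// server.tool("delete_user", "Permanently delete a user account (admin only)", schema, requireOperatorFlag(deleteUserHandler));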
Versioning and Backwards Compatibility
REST Versioning
REST APIs have well-established versioning patterns:
// URL-based versioning (most common)
app.use("/api/v1/users", v1UserRoutes);
app.use("/api/v2/users", v2UserRoutes);
// Header-based versioning
app.get("/api/users", function(req, res) {
var version = req.headers["api-version"] || "1";
if (version === "2") {
return handleV2(req, res);
}
return handleV1(req, res);
});
You can run multiple versions simultaneously, deprecate old versions gradually, and clients migrate at their own pace.
MCP Versioning
MCP versioning is protocol-level. The client and server negotiate the protocol version during the initialize handshake:
{
"protocolVersion": "2025-03-26",
"capabilities": { "tools": { "listChanged": true } },
"serverInfo": { "name": "my-server", "version": "2.0.0" }
}
Tool-level versioning is not part of the spec. If you change a tool's schema, existing clients might break. The practical approach is to version your tool names:
// Version tools by name when schemas change
server.tool("search_users_v1", "Search users (legacy)", oldSchema, oldHandler);
server.tool("search_users_v2", "Search users with advanced filters", newSchema, newHandler);
This is less elegant than REST's URL-based versioning, but it works. The AI client discovers both tools and the prompt can guide it to prefer the newer version.
Migration Strategies: Adding MCP to Existing API Services
If you have an existing REST API and want to add MCP support, here is the approach I recommend. Do not rewrite anything. Layer MCP on top.
Step 1: Identify AI-Relevant Operations
Not every REST endpoint needs an MCP tool. Focus on operations that an AI agent would actually invoke:
// Audit your routes
// Good MCP candidates:
// - POST /api/search (complex query building)
// - GET /api/reports/generate (multi-step workflow)
// - POST /api/tickets (natural language to structured data)
// - GET /api/analytics/summary (data interpretation)
// Poor MCP candidates:
// - GET /api/health (monitoring, not AI-relevant)
// - GET /api/static/logo.png (static assets)
// - DELETE /api/cache (infrastructure operation)
Step 2: Create the MCP Wrapper Module
// mcp/server.js
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
function createMcpServer(apiClient) {
var server = new McpServer({
name: "my-api-mcp",
version: require("../package.json").version
});
// Map each AI-relevant endpoint to an MCP tool
server.tool("search", "Full-text search across all content", {
query: { type: "string", description: "Search query" },
filters: {
type: "object",
description: "Optional filters",
properties: {
category: { type: "string" },
dateFrom: { type: "string", description: "ISO 8601 date" },
dateTo: { type: "string", description: "ISO 8601 date" }
}
}
}, function(params) {
return apiClient.search(params.query, params.filters)
.then(function(results) {
return {
content: [{ type: "text", text: JSON.stringify(results, null, 2) }]
};
});
});
return server;
}
module.exports = { createMcpServer: createMcpServer };
Step 3: Run Both Transports
// index.js -- run REST and MCP side by side
var express = require("express");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { createMcpServer } = require("./mcp/server");
var apiRoutes = require("./routes/api");
var ApiClient = require("./services/apiClient");
var mode = process.argv[2] || "rest";
if (mode === "rest") {
// Traditional REST server
var app = express();
app.use(express.json());
app.use("/api", apiRoutes);
app.listen(3000, function() {
console.log("REST API on port 3000");
});
} else if (mode === "mcp") {
// MCP server over stdio
var apiClient = new ApiClient("http://localhost:3000");
var mcpServer = createMcpServer(apiClient);
var transport = new StdioServerTransport();
mcpServer.connect(transport).then(function() {
console.error("MCP server running over stdio");
});
}
The matching npm scripts keep the two entry points explicit:
{
"scripts": {
"start": "node index.js rest",
"start:mcp": "node index.js mcp"
}
}
Step 4: Configure Claude Desktop
{
"mcpServers": {
"my-api": {
"command": "node",
"args": ["/path/to/index.js", "mcp"],
"env": {
"API_TOKEN": "your-token-here"
}
}
}
}
This migration path takes a few hours, not weeks. Your existing REST API continues to serve traditional clients, and now AI agents can discover and use your service through MCP.
Complete Working Example: Dual-Protocol Data Service
Here is a complete, working example of the same data service exposed as both a REST API and an MCP server. This demonstrates the real tradeoffs in code complexity, flexibility, and capability.
The Data Layer (Shared)
// db.js -- shared data access layer
var { Pool } = require("pg");
var pool = new Pool({
connectionString: process.env.DATABASE_URL || "postgresql://localhost:5432/demo"
});
var ProductService = {
list: function(filters) {
var conditions = [];
var values = [];
var idx = 1;
if (filters.category) {
conditions.push("category = $" + idx++);
values.push(filters.category);
}
if (filters.minPrice) {
conditions.push("price >= $" + idx++);
values.push(filters.minPrice);
}
if (filters.maxPrice) {
conditions.push("price <= $" + idx++);
values.push(filters.maxPrice);
}
if (filters.search) {
conditions.push("(name ILIKE $" + idx + " OR description ILIKE $" + idx + ")");
idx++;
values.push("%" + filters.search + "%");
}
var where = conditions.length > 0 ? " WHERE " + conditions.join(" AND ") : "";
var limit = filters.limit || 20;
var offset = filters.offset || 0;
values.push(limit, offset);
var sql = "SELECT * FROM products" + where +
" ORDER BY created_at DESC LIMIT $" + idx++ + " OFFSET $" + idx;
return pool.query(sql, values).then(function(result) {
return result.rows;
});
},
getById: function(id) {
return pool.query("SELECT * FROM products WHERE id = $1", [id])
.then(function(result) { return result.rows[0] || null; });
},
create: function(data) {
return pool.query(
"INSERT INTO products (name, description, price, category) VALUES ($1, $2, $3, $4) RETURNING *",
[data.name, data.description, data.price, data.category]
).then(function(result) { return result.rows[0]; });
},
getStats: function() {
return pool.query(
"SELECT category, COUNT(*) as count, AVG(price)::numeric(10,2) as avg_price, " +
"MIN(price) as min_price, MAX(price) as max_price " +
"FROM products GROUP BY category ORDER BY count DESC"
).then(function(result) { return result.rows; });
}
};
module.exports = { ProductService: ProductService, pool: pool };
The REST API
// rest-server.js
var express = require("express");
var { ProductService } = require("./db");
var app = express();
app.use(express.json());
// List products with query parameters
app.get("/api/products", function(req, res) {
var filters = {
category: req.query.category,
minPrice: req.query.minPrice ? parseFloat(req.query.minPrice) : undefined,
maxPrice: req.query.maxPrice ? parseFloat(req.query.maxPrice) : undefined,
search: req.query.search,
limit: parseInt(req.query.limit) || 20,
offset: parseInt(req.query.offset) || 0
};
ProductService.list(filters)
.then(function(products) { res.json({ products: products, count: products.length }); })
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
// Get single product
app.get("/api/products/:id", function(req, res) {
ProductService.getById(req.params.id)
.then(function(product) {
if (!product) { return res.status(404).json({ error: "Product not found" }); }
res.json(product);
})
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
// Create product
app.post("/api/products", function(req, res) {
if (!req.body.name || !req.body.price) {
return res.status(400).json({ error: "Name and price are required" });
}
ProductService.create(req.body)
.then(function(product) { res.status(201).json(product); })
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
// Category stats
app.get("/api/products/stats/categories", function(req, res) {
ProductService.getStats()
.then(function(stats) { res.json({ stats: stats }); })
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
app.listen(3000, function() {
console.log("REST API running on http://localhost:3000");
});
Lines of code: ~55 for 4 endpoints. Straightforward, familiar, no surprises.
The MCP Server
// mcp-server.js
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { ProductService } = require("./db");
var server = new McpServer({
name: "product-service",
version: "1.0.0",
description: "Product catalog service with search, filtering, and analytics"
});
// Resource: provide catalog overview as persistent context
server.resource("catalog-stats", "stats://catalog", function(uri) {
return ProductService.getStats().then(function(stats) {
return {
contents: [{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify({
description: "Current product catalog statistics by category",
stats: stats,
generatedAt: new Date().toISOString()
}, null, 2)
}]
};
});
});
// Tool: search and filter products
server.tool(
"search_products",
"Search the product catalog by name, category, or price range. " +
"Returns matching products with full details. Use this to find specific " +
"products or explore what is available in a category.",
{
search: { type: "string", description: "Free-text search across product names and descriptions" },
category: { type: "string", description: "Filter by product category (e.g., electronics, clothing, books)" },
minPrice: { type: "number", description: "Minimum price filter" },
maxPrice: { type: "number", description: "Maximum price filter" },
limit: { type: "number", description: "Max results to return (default 20, max 100)" }
},
function(params) {
var filters = {
search: params.search,
category: params.category,
minPrice: params.minPrice,
maxPrice: params.maxPrice,
limit: Math.min(params.limit || 20, 100)
};
return ProductService.list(filters).then(function(products) {
if (products.length === 0) {
return {
content: [{ type: "text", text: "No products found matching your criteria. Try broadening your search." }]
};
}
return {
content: [{
type: "text",
text: "Found " + products.length + " products:\n\n" +
products.map(function(p) {
return "- **" + p.name + "** ($" + p.price + ") [" + p.category + "]\n " + (p.description || "No description");
}).join("\n")
}]
};
});
}
);
// Tool: get product details
server.tool(
"get_product",
"Get full details for a specific product by its ID",
{
id: { type: "string", description: "Product UUID" }
},
function(params) {
return ProductService.getById(params.id).then(function(product) {
if (!product) {
return { content: [{ type: "text", text: "Product not found with ID: " + params.id }], isError: true };
}
return { content: [{ type: "text", text: JSON.stringify(product, null, 2) }] };
});
}
);
// Tool: create a product
server.tool(
"create_product",
"Add a new product to the catalog. Requires name and price at minimum.",
{
name: { type: "string", description: "Product name" },
description: { type: "string", description: "Product description" },
price: { type: "number", description: "Product price in USD" },
category: { type: "string", description: "Product category" }
},
function(params) {
if (!params.name || !params.price) {
return {
content: [{ type: "text", text: "Error: name and price are required fields" }],
isError: true
};
}
return ProductService.create(params).then(function(product) {
return {
content: [{ type: "text", text: "Product created successfully:\n" + JSON.stringify(product, null, 2) }]
};
});
}
);
// Tool: get catalog analytics
server.tool(
"catalog_analytics",
"Get aggregated statistics about the product catalog, including counts, " +
"average prices, and price ranges by category. Useful for understanding " +
"the overall catalog composition.",
{},
function() {
return ProductService.getStats().then(function(stats) {
var summary = stats.map(function(s) {
return s.category + ": " + s.count + " products, avg $" + s.avg_price +
" (range: $" + s.min_price + " - $" + s.max_price + ")";
}).join("\n");
return {
content: [{ type: "text", text: "Catalog Analytics:\n\n" + summary }]
};
});
}
);
// Prompt: guide the AI for product recommendations
server.prompt(
"recommend_products",
"Generate product recommendations based on criteria",
[
{ name: "budget", description: "Maximum budget in USD", required: true },
{ name: "category", description: "Preferred category", required: false },
{ name: "use_case", description: "What the products will be used for", required: false }
],
function(params) {
return {
messages: [{
role: "user",
content: {
type: "text",
text: "Search the product catalog and recommend items within a $" + params.budget + " budget." +
(params.category ? " Focus on the " + params.category + " category." : "") +
(params.use_case ? " The products will be used for: " + params.use_case + "." : "") +
" Use the search_products and catalog_analytics tools to find suitable options, " +
"then provide a curated recommendation with reasoning."
}
}]
};
}
);
async function main() {
var transport = new StdioServerTransport();
await server.connect(transport);
console.error("MCP Product Service running");
}
main().catch(console.error);
Lines of code: ~140 for the same 4 operations plus a resource and a prompt.
The Tradeoff Summary
| Aspect | REST Server | MCP Server |
|---|---|---|
| Lines of code | ~55 | ~140 |
| Time to build | 30 minutes | 90 minutes |
| Client requirements | Any HTTP client | MCP SDK or AI host |
| Can an AI use it natively? | No (needs wrapper) | Yes |
| Can a browser use it? | Yes | No |
| Built-in input validation | No (add middleware) | Yes (JSON Schema) |
| Dynamic discovery | No | Yes |
| Context sharing | No | Yes (resources) |
| Guided workflows | No | Yes (prompts) |
| Caching support | Full HTTP caching | None |
| Load testing tools | Dozens available | Custom required |
The REST server is simpler and more broadly useful. The MCP server is richer and purpose-built for AI consumption. Both share the same data layer. That is the correct architecture.
Common Issues and Troubleshooting
1. MCP Server Fails to Initialize with Transport Error
Error: Server transport closed unexpectedly
at StdioServerTransport._onClose (/node_modules/@modelcontextprotocol/sdk/server/stdio.js:45:19)
Cause: Logging to stdout instead of stderr. MCP uses stdout for protocol messages. Any stray console.log() corrupts the JSON-RPC stream.
Fix: Replace all console.log() with console.error() in your MCP server code, or redirect your logging library to stderr:
// Wrong -- this breaks stdio transport
console.log("Server started");
// Correct -- use stderr for all logging
console.error("Server started");
// Or configure your logger
var winston = require("winston");
var logger = winston.createLogger({
transports: [new winston.transports.Stream({ stream: process.stderr })]
});
2. Tool Call Returns Empty Content Array
{
"jsonrpc": "2.0",
"id": 5,
"result": {
"content": []
}
}
Cause: Your tool handler returns a promise that resolves to undefined, or you forget to return the content array from an async callback.
Fix: Make sure every code path returns a properly structured result:
// Wrong -- missing return in the .then() callback
server.tool("get_data", "Get data", {}, function() {
db.query("SELECT * FROM data").then(function(result) {
// Oops: no return statement, and the outer function
// does not return the promise either
{ content: [{ type: "text", text: JSON.stringify(result.rows) }] };
});
});
// Correct -- return the promise chain with a return in the callback
server.tool("get_data", "Get data", {}, function() {
return db.query("SELECT * FROM data").then(function(result) {
return { content: [{ type: "text", text: JSON.stringify(result.rows) }] };
});
});
3. REST and MCP Server Port Conflict
Error: listen EADDRINUSE: address already in use :::3000
at Server.setupListenHandle [as _listen2] (node:net:1817:16)
Cause: Running both the REST server and MCP HTTP transport on the same port.
Fix: Use different ports, or use stdio transport for MCP (which does not need a port at all):
// Option 1: different ports
var REST_PORT = process.env.REST_PORT || 3000;
var MCP_PORT = process.env.MCP_PORT || 3001;
// Option 2: use stdio for MCP (recommended for local AI tools)
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var transport = new StdioServerTransport(); // No port needed
4. JSON Schema Validation Errors on Tool Calls
Error: Invalid params for tool "create_product": data/price must be number
Cause: MCP's JSON Schema validation is strict. If the AI sends "price": "29.99" (string) but your schema says type: "number", the call is rejected before your handler runs.
Fix: Either accept both types in your schema or add coercion in your handler:
// Option 1: Accept string or number in schema
price: {
oneOf: [
{ type: "number" },
{ type: "string", pattern: "^[0-9]+(\\.[0-9]{1,2})?$" }
],
description: "Product price in USD"
}
// Option 2: Keep strict schema, let AI learn from the error
// The error message is descriptive enough that the AI will retry with the correct type
// This is actually the recommended approach -- strict schemas prevent bugs
5. MCP Wrapper Timeout When Calling Slow REST Endpoints
Error: Tool execution timed out after 30000ms
Cause: The MCP SDK has a default timeout for tool execution. Long-running REST API calls (report generation, large data exports) exceed this limit.
Fix: Increase the timeout or implement a polling pattern:
// For long-running operations, return a job ID and provide a status-checking tool.
// (This sketch assumes a promise-returning variant of the apiRequest helper,
// unlike the callback version in the wrapper earlier.)
server.tool("generate_report", "Generate a large analytics report (may take up to 2 minutes)", {
dateRange: { type: "string", description: "Date range (e.g., 2025-01-01..2025-12-31)" }
}, function(params) {
return apiRequest("POST", "/api/reports/generate", { dateRange: params.dateRange })
.then(function(result) {
return {
content: [{
type: "text",
text: "Report generation started. Job ID: " + result.jobId +
"\nUse the check_report_status tool to monitor progress."
}]
};
});
});
server.tool("check_report_status", "Check the status of a report generation job", {
jobId: { type: "string", description: "Job ID from generate_report" }
}, function(params) {
return apiRequest("GET", "/api/reports/status/" + params.jobId)
.then(function(result) {
if (result.status === "complete") {
return { content: [{ type: "text", text: "Report ready:\n" + JSON.stringify(result.data, null, 2) }] };
}
return { content: [{ type: "text", text: "Status: " + result.status + " (" + result.progress + "% complete)" }] };
});
});
Best Practices
Use MCP for AI interfaces, REST for everything else. Do not default to MCP just because it is newer. REST is battle-tested for human-facing APIs, mobile apps, and service-to-service communication. MCP is specifically for enabling AI agents to discover and use your tools.
Share the data layer, not the transport. When building both a REST API and an MCP server for the same service, extract your business logic into a shared service layer. Both the REST handlers and MCP tool handlers should call the same functions. Never duplicate query logic.
Write descriptive tool descriptions. The AI reads your tool descriptions to decide which tools to call. Vague descriptions like "get data" lead to wrong tool calls. Write descriptions like "Search the product catalog by name, category, or price range. Returns matching products with full details including pricing and availability."
Always validate inputs in MCP tool handlers, even with JSON Schema. Schema validation catches type errors, but it does not catch business logic violations. Validate that IDs exist, prices are positive, dates are in the future -- whatever your domain requires.
Log everything to stderr in MCP servers. This is not optional. Any output to stdout that is not a valid JSON-RPC message will corrupt the transport and crash the connection. Use
console.error()or configure your logger to targetprocess.stderr.Implement the hybrid pattern incrementally. Do not try to wrap your entire REST API as MCP tools at once. Start with three to five high-value operations that an AI would actually use. Add more tools based on real usage patterns after deployment.
Version your MCP tools when schemas change. MCP has no built-in tool versioning. When you change a tool's input schema in a breaking way, create a new tool with a versioned name (e.g.,
search_v2) and deprecate the old one by updating its description to say "Deprecated: use search_v2 instead."Set explicit timeouts on REST calls from MCP wrappers. MCP tool execution has its own timeout, and your REST API has its own. Set the REST call timeout shorter than the MCP tool timeout to ensure you can return a meaningful error message instead of a generic timeout.
Test MCP tools with the MCP Inspector, not just Claude Desktop. The MCP Inspector (
npx @modelcontextprotocol/inspector) lets you call tools directly and see the raw JSON-RPC messages. This is faster for development than waiting for an AI to decide to call your tool.Prefer stdio transport for local development, HTTP for remote production. Stdio is simpler, has no network overhead, and works without any port configuration. Use HTTP transport only when the MCP server needs to run on a different machine from the AI host.
References
- Model Context Protocol Specification -- The official protocol specification
- MCP TypeScript SDK -- Official Node.js/TypeScript SDK
- MCP Inspector -- Interactive testing tool for MCP servers
- JSON-RPC 2.0 Specification -- The underlying protocol MCP is built on
- Express.js Documentation -- Express framework reference
- OpenAPI Specification -- REST API documentation standard
- Claude Desktop MCP Configuration -- Setting up MCP servers with Claude Desktop
