Debugging MCP Connections and Transport Issues

A practical guide to debugging Model Context Protocol connections and transport issues, covering stdio, SSE, and HTTP transports, JSON-RPC message inspection, and building custom debug tooling.

Overview

MCP connections fail silently more often than they fail loudly. When your AI tool call vanishes into the void -- no response, no error, just nothing -- you need systematic debugging techniques that go beyond adding console.log and hoping for the best. This guide covers every layer of the MCP communication stack: transport debugging for stdio, SSE, and streamable HTTP; JSON-RPC message inspection; capability negotiation failures; and building custom debug middleware that gives you full visibility into what is actually happening on the wire.

Prerequisites

  • Node.js 18+ installed (LTS recommended)
  • Familiarity with the Model Context Protocol and its architecture (see the MCP Fundamentals article)
  • Experience building at least one MCP server or client in Node.js
  • Basic understanding of JSON-RPC 2.0 message format
  • The @modelcontextprotocol/sdk package (v1.x or later)
  • Access to a terminal with curl, jq, and standard Unix tools

Understanding MCP Transport Layers

MCP is transport-agnostic, but in practice you will encounter three transport mechanisms. Each one has its own failure modes, and knowing which layer is breaking is the first step to fixing the problem.

stdio Transport

The stdio transport is the workhorse of local MCP integrations. The host application (typically Claude Desktop or an MCP client) spawns your server as a child process. JSON-RPC messages flow over stdin (host to server) and stdout (server to host). Stderr is reserved for logging.

Host Process
  └── Child Process (your MCP server)
        stdin  ← JSON-RPC requests from host
        stdout → JSON-RPC responses to host
        stderr → Debug logs (never parsed as protocol messages)

The critical rule: anything written to stdout that is not a valid JSON-RPC message will break the connection. This is the single most common cause of stdio transport failures.
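
If you are writing a raw stdio server without the SDK, the framing rule is concrete: exactly one JSON-RPC message per line on stdout, and everything else on stderr. A minimal sketch:

// Manual framing for a raw stdio server (the SDK's StdioServerTransport
// does this for you -- shown here only to make the rule concrete)
function sendMessage(message) {
  // One complete JSON object, terminated by a single newline
  process.stdout.write(JSON.stringify(message) + "\n");
}

function logDebug(text) {
  // Logs must never touch stdout, or the client's parser will choke
  process.stderr.write("[DEBUG] " + text + "\n");
}

sendMessage({ jsonrpc: "2.0", id: 1, result: { ok: true } });
logDebug("response for request 1 written to stdout");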

SSE Transport (Legacy)

The original SSE transport uses two HTTP channels: a GET request that opens a Server-Sent Events stream for server-to-client messages, and a POST endpoint for client-to-server messages. The SSE stream stays open as a long-lived connection.

Client                          Server
  │                               │
  ├── GET /sse ──────────────────►│  (opens SSE stream)
  │◄──────────── SSE events ──────┤  (server pushes messages)
  │                               │
  ├── POST /messages ────────────►│  (client sends requests)
  │◄──────────── 200 OK ─────────┤

SSE connections are fragile. Proxies, load balancers, and CDNs all love to kill long-lived HTTP connections. If you are running behind nginx, AWS ALB, or Cloudflare, you have probably already hit this.
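
If you need a reference point for where those two channels live in code, here is a single-session wiring sketch. It assumes Express and the SDK's SSEServerTransport, and it is deliberately minimal rather than production-ready:

// Minimal single-session SSE wiring (sketch); port and paths match
// the curl examples later in this guide
var express = require("express");
var { Server } = require("@modelcontextprotocol/sdk/server/index.js");
var { SSEServerTransport } = require("@modelcontextprotocol/sdk/server/sse.js");

var server = new Server(
  { name: "sse-demo", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

var app = express();
var transport = null;

app.get("/sse", function(req, res) {
  // Channel 1: long-lived SSE stream for server-to-client messages
  transport = new SSEServerTransport("/messages", res);
  server.connect(transport);
});

app.post("/messages", function(req, res) {
  // Channel 2: client-to-server JSON-RPC requests for the active session
  if (!transport) {
    res.status(400).send("No active SSE session");
    return;
  }
  transport.handlePostMessage(req, res);
});

app.listen(3001);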

Streamable HTTP Transport

The newer streamable HTTP transport consolidates everything into a single endpoint. The client sends POST requests with JSON-RPC payloads and the server can respond with either a direct JSON response or upgrade the response to an SSE stream. This is more firewall-friendly and easier to debug.

Client                              Server
  │                                    │
  ├── POST /mcp (JSON-RPC) ──────────►│
  │◄──────────── JSON response ────────┤  (simple request/response)
  │                                    │
  ├── POST /mcp (JSON-RPC) ──────────►│
  │◄──────────── SSE stream ───────────┤  (streaming response)
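
Because everything flows through one endpoint, you can probe it with a plain HTTP request. The sketch below uses Node 18's built-in fetch; the /mcp path is an assumption, so match it to whatever your server exposes:

// probe-streamable-http.js -- quick connectivity probe (sketch)
var initRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "probe", version: "1.0.0" }
  }
};

fetch("http://localhost:3001/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers generally expect clients to accept both response modes
    "Accept": "application/json, text/event-stream"
  },
  body: JSON.stringify(initRequest)
}).then(function(res) {
  console.error("Status: " + res.status);
  var contentType = res.headers.get("content-type") || "";
  console.error("Content-Type: " + contentType);
  if (contentType.indexOf("application/json") !== -1) {
    return res.json().then(function(body) {
      console.error(JSON.stringify(body, null, 2));
    });
  }
  // text/event-stream means the server chose to stream the response;
  // inspect it with curl -N or the message logger instead
}).catch(function(err) {
  console.error("Probe failed: " + err.message);
});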

Common Connection Failure Patterns

Before we get into specific debugging techniques, here is a taxonomy of the failures I see most often. Knowing which category your bug falls into saves hours.

Category 1: Transport Never Connects
The client cannot establish a connection to the server. The server process never starts, the port is not listening, or the stdio pipes are not wired correctly.

Category 2: Transport Connects, Initialization Fails
The connection is established but the initialize handshake fails. Version mismatch, missing capabilities, or the server crashes during setup.

Category 3: Transport Works, Messages Get Lost
The connection is up and initialization succeeds, but some messages never arrive. Buffer issues, message framing errors, or the server writes non-JSON data to stdout.

Category 4: Transport Drops Mid-Session
Everything works for a while, then the connection dies. Timeouts, OOM kills, unhandled exceptions, or network interruptions (for HTTP transports).


Logging and Tracing MCP Messages

The most important debugging tool is full message logging. You need to see every byte that crosses the wire. Here is a minimal message logger that wraps any MCP transport:

// debug-logger.js
var fs = require("fs");
var path = require("path");

function createMessageLogger(logFile) {
  var stream = fs.createWriteStream(logFile, { flags: "a" });
  var sequence = 0;

  function log(direction, data) {
    sequence++;
    var entry = {
      seq: sequence,
      timestamp: new Date().toISOString(),
      direction: direction,
      size: Buffer.byteLength(JSON.stringify(data)),
      message: data
    };
    stream.write(JSON.stringify(entry) + "\n");
  }

  return {
    logIncoming: function(data) { log("incoming", data); },
    logOutgoing: function(data) { log("outgoing", data); },
    close: function() { stream.end(); }
  };
}

module.exports = { createMessageLogger: createMessageLogger };

Attach it to your server by wrapping the transport: intercept send before connecting, and wrap the onmessage handler after the SDK assigns it during connect.

var { Server } = require("@modelcontextprotocol/sdk/server/index.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { createMessageLogger } = require("./debug-logger.js");

var logger = createMessageLogger("/tmp/mcp-debug.log");

var server = new Server(
  { name: "my-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

var transport = new StdioServerTransport();

// Wrap outgoing messages before connecting
var originalSend = transport.send.bind(transport);
transport.send = function(message) {
  logger.logOutgoing(message);
  return originalSend(message);
};

// The SDK assigns transport.onmessage during connect, so wrap the
// incoming handler only after the connection is established
server.connect(transport).then(function() {
  var originalOnMessage = transport.onmessage;
  transport.onmessage = function(message) {
    logger.logIncoming(message);
    if (originalOnMessage) {
      originalOnMessage(message);
    }
  };
});

The log file will contain a complete record of every message, with sequence numbers and timestamps. When something goes wrong, you can trace exactly where the conversation broke:

{"seq":1,"timestamp":"2026-02-08T14:32:01.445Z","direction":"incoming","size":142,"message":{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"claude-desktop","version":"0.7.3"}}}}
{"seq":2,"timestamp":"2026-02-08T14:32:01.448Z","direction":"outgoing","size":198,"message":{"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05","capabilities":{"tools":{}},"serverInfo":{"name":"my-server","version":"1.0.0"}}}}
{"seq":3,"timestamp":"2026-02-08T14:32:01.450Z","direction":"incoming","size":45,"message":{"jsonrpc":"2.0","method":"notifications/initialized"}}

Debugging stdio Transport Issues

The stdio transport is deceptively simple, and that simplicity is what makes it dangerous. There is no HTTP status code to check, no connection timeout to configure. Either bytes flow through the pipes or they do not.

Problem: Server Writes to stdout

This is the number one stdio debugging issue. If your server writes anything to stdout that is not a JSON-RPC message, the client parser will choke. Common culprits:

  • console.log() statements (these write to stdout by default)
  • A library that prints warnings or banners to stdout
  • A native module that writes directly to file descriptor 1

The fix: redirect all logging to stderr.

// Override console.log to use stderr
var originalLog = console.log;
console.log = function() {
  console.error.apply(console, arguments);
};

// Or better: use a proper logger that targets stderr
var debugLog = function() {
  var args = Array.prototype.slice.call(arguments);
  var message = args.map(function(a) {
    return typeof a === "object" ? JSON.stringify(a) : String(a);
  }).join(" ");
  process.stderr.write("[DEBUG] " + message + "\n");
};

Problem: Server Process Crashes Silently

When a stdio server crashes, the host sees the pipes close. But without capturing stderr, you have no idea why. Always capture stderr from your child process:

// client-side: capturing server stderr
var { spawn } = require("child_process");

var serverProcess = spawn("node", ["server.js"], {
  stdio: ["pipe", "pipe", "pipe"]
});

serverProcess.stderr.on("data", function(chunk) {
  process.stderr.write("[SERVER STDERR] " + chunk.toString());
});

serverProcess.on("exit", function(code, signal) {
  console.error("Server exited: code=" + code + " signal=" + signal);
});

serverProcess.on("error", function(err) {
  console.error("Failed to spawn server:", err.message);
});

Problem: Message Framing Issues

JSON-RPC messages over stdio are newline-delimited. Each message is a single line of JSON followed by \n. If your message spans multiple lines or lacks the trailing newline, the parser will either hang (waiting for more data) or fail (invalid JSON).

A useful debugging technique is to intercept raw bytes on the pipe:

// Raw byte inspector for stdin
var rawBuffer = "";

process.stdin.on("data", function(chunk) {
  var hex = chunk.toString("hex").match(/.{1,2}/g).join(" ");
  process.stderr.write("[RAW IN] bytes=" + chunk.length + " hex=" + hex + "\n");
  process.stderr.write("[RAW IN] text=" + JSON.stringify(chunk.toString()) + "\n");
});

This will reveal issues like embedded null bytes, Windows-style \r\n line endings causing parse failures, or partial message delivery.
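
If the raw dump reveals \r\n line endings or messages split across chunks, a more tolerant framing loop helps isolate the problem. This is a diagnostic sketch for raw servers; the SDK transport does its own framing:

// Tolerant newline-delimited framing: buffer partial chunks, strip \r,
// and report parse failures without crashing
var pendingInput = "";

process.stdin.on("data", function(chunk) {
  pendingInput += chunk.toString("utf8");

  var newlineIndex;
  while ((newlineIndex = pendingInput.indexOf("\n")) !== -1) {
    var line = pendingInput.slice(0, newlineIndex).replace(/\r$/, "");
    pendingInput = pendingInput.slice(newlineIndex + 1);

    if (line.trim() === "") {
      continue; // ignore blank lines
    }

    try {
      var message = JSON.parse(line);
      process.stderr.write("[FRAME] Parsed id=" + message.id + " method=" + message.method + "\n");
    } catch (err) {
      process.stderr.write("[FRAME] Invalid JSON (" + line.length + " bytes): " + err.message + "\n");
    }
  }
});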

Problem: PATH and Environment Issues

When Claude Desktop or another host spawns your server, it may not inherit your full shell environment. A common failure is the node binary not being found because the PATH in the spawned environment is different from your terminal.

Check the Claude Desktop config file:

{
  "mcpServers": {
    "my-server": {
      "command": "/usr/local/bin/node",
      "args": ["/absolute/path/to/server.js"],
      "env": {
        "NODE_ENV": "development",
        "DATABASE_URL": "postgresql://localhost/mydb"
      }
    }
  }
}

Always use absolute paths for both the command and the script. Relative paths are relative to the host application's working directory, which is almost never what you want.
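
A few stderr lines at the top of your server make these environment differences visible the moment the host spawns it:

// Log the spawned environment to stderr at startup to catch PATH and cwd surprises
process.stderr.write("[ENV] node=" + process.version + "\n");
process.stderr.write("[ENV] cwd=" + process.cwd() + "\n");
process.stderr.write("[ENV] PATH=" + (process.env.PATH || "(unset)") + "\n");
process.stderr.write("[ENV] NODE_ENV=" + (process.env.NODE_ENV || "(unset)") + "\n");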


Debugging SSE Transport Issues

SSE connections add network-layer complexity on top of the protocol layer. Here are the techniques I use.

Verifying SSE Connectivity with curl

Before debugging your application code, verify the transport works with raw HTTP:

# Test the SSE endpoint
curl -N -H "Accept: text/event-stream" http://localhost:3001/sse

# You should see an endpoint event first:
# event: endpoint
# data: /messages?sessionId=abc123

If this hangs or returns an error, the problem is at the transport level, not the protocol level.

Sending a Test Message

Once you have the session endpoint, send an initialize request:

curl -X POST \
  "http://localhost:3001/messages?sessionId=abc123" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2024-11-05",
      "capabilities": {},
      "clientInfo": { "name": "curl-test", "version": "1.0.0" }
    }
  }'

Watch the SSE stream for the response. If you see the initialize result come back, the transport is working correctly.

Proxy and Load Balancer Issues

SSE connections are long-lived HTTP connections. Many infrastructure components have opinions about long-lived connections, and most of those opinions involve killing them.

nginx: The default proxy_read_timeout is 60 seconds. If your MCP session is idle for more than 60 seconds, nginx will close the connection.

location /sse {
    proxy_pass http://localhost:3001;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    proxy_cache off;
    proxy_read_timeout 86400s;  # 24 hours
    proxy_send_timeout 86400s;
}

AWS ALB: The idle timeout defaults to 60 seconds. You can raise it (up to 4000 seconds) in the load balancer's attributes, but you should also implement keepalive pings.

Cloudflare: Cloudflare has a hard 100-second timeout for SSE connections that you cannot change on the free tier. If you are behind Cloudflare, you must implement reconnection logic or use the streamable HTTP transport instead.

Implementing SSE Keepalive

To prevent proxies from killing your SSE connections, send periodic comment lines. SSE comments (lines starting with :) are ignored by the EventSource parser but keep the TCP connection alive:

// In your SSE server
function startKeepalive(res) {
  var interval = setInterval(function() {
    try {
      res.write(": keepalive\n\n");
    } catch (err) {
      clearInterval(interval);
    }
  }, 30000); // Every 30 seconds

  res.on("close", function() {
    clearInterval(interval);
  });

  return interval;
}

Inspecting JSON-RPC Message Flow

When the transport works but something is still wrong, you need to inspect the protocol-level conversation. MCP has a specific message sequence that must be followed, and any deviation causes failures.

The Required Initialization Sequence

Every MCP session starts with this exact sequence:

Client → Server:  initialize (request)
Server → Client:  initialize result (response)
Client → Server:  notifications/initialized (notification)

Only after this sequence completes can the client send tool calls, resource reads, or any other request. If your server tries to send a notification before receiving the initialized notification from the client, the behavior is undefined.

Here is a message sequence validator:

// sequence-validator.js
function createSequenceValidator() {
  var state = "awaiting_initialize";
  var errors = [];

  function validate(direction, message) {
    var method = message.method;
    var isRequest = message.id !== undefined && method !== undefined;
    var isResponse = message.id !== undefined && (message.result !== undefined || message.error !== undefined);
    var isNotification = method !== undefined && message.id === undefined;

    switch (state) {
      case "awaiting_initialize":
        if (direction === "incoming" && method === "initialize") {
          state = "awaiting_initialize_response";
        } else {
          errors.push({
            state: state,
            expected: "initialize request from client",
            got: method || "response",
            message: message
          });
        }
        break;

      case "awaiting_initialize_response":
        if (direction === "outgoing" && isResponse) {
          state = "awaiting_initialized_notification";
        } else {
          errors.push({
            state: state,
            expected: "initialize response from server",
            got: method || "unknown",
            message: message
          });
        }
        break;

      case "awaiting_initialized_notification":
        if (direction === "incoming" && method === "notifications/initialized") {
          state = "ready";
        } else {
          errors.push({
            state: state,
            expected: "notifications/initialized from client",
            got: method || "unknown",
            message: message
          });
        }
        break;

      case "ready":
        // All messages are valid in the ready state
        break;
    }

    return errors.length === 0;
  }

  return {
    validate: validate,
    getErrors: function() { return errors; },
    getState: function() { return state; }
  };
}

module.exports = { createSequenceValidator: createSequenceValidator };
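
Wire the validator into the same transport wrappers as the message logger. The attachSequenceValidator helper below is illustrative, not part of the SDK; call it after server.connect(transport) so the SDK's own onmessage handler is already in place:

// attach-validator.js -- sketch that hooks the sequence validator into a transport
var { createSequenceValidator } = require("./sequence-validator.js");

function attachSequenceValidator(transport) {
  var validator = createSequenceValidator();

  function report() {
    var errors = validator.getErrors();
    if (errors.length > 0) {
      process.stderr.write("[SEQUENCE] state=" + validator.getState() +
        " last error=" + JSON.stringify(errors[errors.length - 1]) + "\n");
    }
  }

  var originalSend = transport.send.bind(transport);
  transport.send = function(message) {
    validator.validate("outgoing", message);
    report();
    return originalSend(message);
  };

  var originalOnMessage = transport.onmessage;
  transport.onmessage = function(message) {
    validator.validate("incoming", message);
    report();
    if (originalOnMessage) {
      originalOnMessage(message);
    }
  };

  return validator;
}

module.exports = { attachSequenceValidator: attachSequenceValidator };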

Timeout and Reconnection Debugging

MCP does not define standard timeouts at the protocol level, which means every host and client implements its own. Here is how to debug timeout-related failures.

Identifying Timeout Issues

Timeout failures manifest as one of two symptoms:

  1. The client gives up waiting: The host reports a tool call timeout. Your server log shows the request arriving but the response never being consumed.
  2. The transport drops: The underlying connection closes. For stdio, the pipes close. For SSE, the EventSource fires an error event.

Instrument your tool handlers with timing:

server.setRequestHandler("tools/call", function(request) {
  var startTime = Date.now();
  var toolName = request.params.name;

  process.stderr.write("[TIMING] Tool call started: " + toolName + " at " + new Date().toISOString() + "\n");

  return executeToolCall(request.params).then(function(result) {
    var duration = Date.now() - startTime;
    process.stderr.write("[TIMING] Tool call completed: " + toolName + " in " + duration + "ms\n");

    if (duration > 10000) {
      process.stderr.write("[WARNING] Tool call " + toolName + " took " + duration + "ms -- approaching timeout threshold\n");
    }

    return result;
  }).catch(function(err) {
    var duration = Date.now() - startTime;
    process.stderr.write("[TIMING] Tool call failed: " + toolName + " in " + duration + "ms -- " + err.message + "\n");
    throw err;
  });
});

Client-Side Reconnection

For HTTP transports, implement exponential backoff reconnection:

var { Client } = require("@modelcontextprotocol/sdk/client/index.js");
var { SSEClientTransport } = require("@modelcontextprotocol/sdk/client/sse.js");

function createReconnectingClient(serverUrl, options) {
  var maxRetries = (options && options.maxRetries) || 5;
  var baseDelay = (options && options.baseDelay) || 1000;
  var retryCount = 0;
  var client = null;

  function connect() {
    return new Promise(function(resolve, reject) {
      process.stderr.write("[RECONNECT] Attempt " + (retryCount + 1) + "/" + maxRetries + "\n");

      var transport = new SSEClientTransport(new URL(serverUrl));

      transport.onerror = function(err) {
        process.stderr.write("[RECONNECT] Transport error: " + err.message + "\n");

        if (retryCount < maxRetries) {
          retryCount++;
          var delay = baseDelay * Math.pow(2, retryCount - 1);
          process.stderr.write("[RECONNECT] Retrying in " + delay + "ms\n");
          setTimeout(function() {
            connect().then(resolve).catch(reject);
          }, delay);
        } else {
          reject(new Error("Max reconnection attempts exceeded"));
        }
      };

      transport.onclose = function() {
        process.stderr.write("[RECONNECT] Connection closed\n");
      };

      client = new Client({ name: "reconnecting-client", version: "1.0.0" }, {});
      client.connect(transport).then(function() {
        retryCount = 0;
        process.stderr.write("[RECONNECT] Connected successfully\n");
        resolve(client);
      }).catch(function(err) {
        process.stderr.write("[RECONNECT] Connect failed: " + err.message + "\n");
        transport.onerror(err);
      });
    });
  }

  return { connect: connect };
}

Capability Negotiation Failures

During the initialize handshake, the client and server exchange capability declarations. If the client asks for a capability the server does not support, or the server declares a capability it cannot actually fulfill, things break in confusing ways.

Debugging Capability Mismatch

Log the full initialization exchange:

server.setRequestHandler("initialize", function(request) {
  var clientCapabilities = request.params.capabilities;
  var clientInfo = request.params.clientInfo;
  var protocolVersion = request.params.protocolVersion;

  process.stderr.write("[INIT] Client: " + clientInfo.name + " v" + clientInfo.version + "\n");
  process.stderr.write("[INIT] Protocol version: " + protocolVersion + "\n");
  process.stderr.write("[INIT] Client capabilities: " + JSON.stringify(clientCapabilities, null, 2) + "\n");

  var serverCapabilities = {
    tools: {},
    resources: { subscribe: true },
    prompts: {}
  };

  process.stderr.write("[INIT] Server capabilities: " + JSON.stringify(serverCapabilities, null, 2) + "\n");

  return {
    protocolVersion: "2024-11-05",
    capabilities: serverCapabilities,
    serverInfo: { name: "my-server", version: "1.0.0" }
  };
});

A common mistake is declaring resources: { subscribe: true } in your capabilities but never implementing the resources/subscribe handler. The client will attempt to subscribe to resource changes, and when the handler is missing, the SDK returns an internal error.
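
One way to catch this during development is a startup self-check that compares the capabilities you declare against the handlers you actually register. The registerHandler wrapper below is an illustrative convention (following the string-based handler style used throughout this guide), not an SDK API:

// capability-check.js -- sketch of a declared-vs-registered capability audit
var DECLARED_CAPABILITIES = {
  tools: {},
  resources: { subscribe: true },
  prompts: {}
};

var registeredMethods = [];

function registerHandler(server, method, handler) {
  registeredMethods.push(method);
  server.setRequestHandler(method, handler);
}

function checkCapabilityCoverage() {
  var required = [];
  if (DECLARED_CAPABILITIES.tools) {
    required.push("tools/list", "tools/call");
  }
  if (DECLARED_CAPABILITIES.resources) {
    required.push("resources/list", "resources/read");
    if (DECLARED_CAPABILITIES.resources.subscribe) {
      required.push("resources/subscribe");
    }
  }
  if (DECLARED_CAPABILITIES.prompts) {
    required.push("prompts/list", "prompts/get");
  }

  required.forEach(function(method) {
    if (registeredMethods.indexOf(method) === -1) {
      process.stderr.write("[CAPABILITY] Declared but no handler registered: " + method + "\n");
    }
  });
}

module.exports = { registerHandler: registerHandler, checkCapabilityCoverage: checkCapabilityCoverage };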

Protocol Version Mismatches

If the client sends protocolVersion: "2025-03-15" and your server only understands "2024-11-05", the server should respond with a version it does support, and the client then decides whether to continue or disconnect. The SDK handles this negotiation automatically, but if you are building a raw server, you need to check:

var SUPPORTED_VERSIONS = ["2024-11-05", "2025-03-26"];

function checkProtocolVersion(requestedVersion) {
  if (SUPPORTED_VERSIONS.indexOf(requestedVersion) === -1) {
    process.stderr.write("[INIT] Unsupported protocol version: " + requestedVersion + "\n");
    process.stderr.write("[INIT] Supported versions: " + SUPPORTED_VERSIONS.join(", ") + "\n");
    return false;
  }
  return true;
}

Tool Invocation Errors

Tool calls can fail at multiple levels: the request might be malformed, the tool might not exist, the input might fail schema validation, or the tool handler might throw an exception. Each produces different error messages and requires different debugging approaches.

Missing Tool Handler

If the client sends a request for a method the server never registered a handler for -- for example, tools/call on a server that declared the tools capability but registered no handler -- the SDK returns a JSON-RPC MethodNotFound error:

{
  "jsonrpc": "2.0",
  "id": 5,
  "error": {
    "code": -32601,
    "message": "Method not found"
  }
}

That is the JSON-RPC-level error. A missing individual tool is reported differently: the tools/call handler runs, fails to find the tool, and returns a tool result with isError: true at the MCP level:

{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "content": [
      { "type": "text", "text": "Unknown tool: nonexistent_tool" }
    ],
    "isError": true
  }
}

Schema Validation Failures

When tool input fails schema validation, add detailed validation logging:

var Ajv = require("ajv");
var ajv = new Ajv({ allErrors: true });

function validateToolInput(toolName, inputSchema, args) {
  var validate = ajv.compile(inputSchema);
  var valid = validate(args);

  if (!valid) {
    var errorDetails = validate.errors.map(function(err) {
      return err.instancePath + " " + err.message;
    }).join("; ");

    process.stderr.write("[VALIDATION] Tool " + toolName + " input validation failed:\n");
    process.stderr.write("[VALIDATION] Input: " + JSON.stringify(args) + "\n");
    process.stderr.write("[VALIDATION] Errors: " + errorDetails + "\n");
    process.stderr.write("[VALIDATION] Schema: " + JSON.stringify(inputSchema) + "\n");

    return { valid: false, errors: errorDetails };
  }

  return { valid: true, errors: null };
}
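
To surface validation failures to the model instead of silently dropping the call, wire the validator into your tools/call handler and return an isError result. Here getToolSchema and executeToolCall are hypothetical stand-ins for your own schema registry and dispatcher:

server.setRequestHandler("tools/call", function(request) {
  var toolName = request.params.name;
  var args = request.params.arguments || {};
  var schema = getToolSchema(toolName); // hypothetical: look up the registered inputSchema

  var check = validateToolInput(toolName, schema, args);
  if (!check.valid) {
    // Report the failure as a tool-level error so the model can correct its input
    return {
      content: [{ type: "text", text: "Invalid input for " + toolName + ": " + check.errors }],
      isError: true
    };
  }

  return executeToolCall(request.params); // hypothetical: your actual tool dispatcher
});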

Using MCP Inspector for Debugging

Anthropic provides the MCP Inspector, a browser-based debugging tool that acts as a visual MCP client. It is the fastest way to interactively test your server.

Installing and Running the Inspector

npx @modelcontextprotocol/inspector

This starts a web UI on http://localhost:5173. From there you can:

  1. Connect to a stdio server by specifying the command and arguments
  2. Connect to an SSE/HTTP server by providing the URL
  3. View the initialization exchange
  4. Browse available tools, resources, and prompts
  5. Execute tool calls with custom arguments
  6. See the raw JSON-RPC messages in real time

Connecting to a stdio Server

In the Inspector UI, enter:

  • Transport Type: stdio
  • Command: node
  • Arguments: /absolute/path/to/your/server.js
  • Environment Variables: any required env vars

The Inspector will spawn your server and show the initialization handshake. If initialization fails, you will see the exact error message.

Connecting to an HTTP Server

Start your server first, then in the Inspector:

  • Transport Type: SSE (or Streamable HTTP)
  • URL: http://localhost:3001/sse (or your server's endpoint)

The Inspector connects and displays all available capabilities. You can click on any tool to see its schema and execute it with test arguments.

Inspector Tips

  • Use the "Raw Messages" tab to see the exact JSON-RPC traffic
  • The Inspector shows timing for each request/response pair
  • If a tool call hangs, the Inspector shows a pending spinner -- this tells you the server received the request but has not responded
  • You can send malformed requests to test your server's error handling

Building Custom Debug Middleware

For production debugging, you need something more sophisticated than log files. Here is a debug middleware layer that intercepts all MCP messages, validates them, tracks timing, and exposes a health check endpoint.

// mcp-debug-middleware.js
var fs = require("fs");
var http = require("http");

function createDebugMiddleware(options) {
  var logFile = (options && options.logFile) || "/tmp/mcp-debug.log";
  var healthPort = (options && options.healthPort) || 9090;
  var maxLogEntries = (options && options.maxLogEntries) || 10000;

  var state = {
    connected: false,
    initialized: false,
    messagesIn: 0,
    messagesOut: 0,
    errors: 0,
    lastActivity: null,
    toolCalls: {},
    recentMessages: [],
    startTime: Date.now()
  };

  var logStream = fs.createWriteStream(logFile, { flags: "a" });

  function recordMessage(direction, message) {
    var timestamp = new Date().toISOString();
    var entry = {
      timestamp: timestamp,
      direction: direction,
      method: message.method || null,
      id: message.id !== undefined ? message.id : null,
      hasError: !!message.error,
      size: JSON.stringify(message).length
    };

    state.lastActivity = timestamp;
    state.recentMessages.push(entry);

    if (state.recentMessages.length > 100) {
      state.recentMessages.shift();
    }

    if (direction === "in") {
      state.messagesIn++;
    } else {
      state.messagesOut++;
    }

    if (message.error) {
      state.errors++;
    }

    // Track tool call timing
    if (message.method === "tools/call" && direction === "in") {
      var toolName = message.params && message.params.name;
      if (toolName) {
        if (!state.toolCalls[toolName]) {
          state.toolCalls[toolName] = { count: 0, totalMs: 0, errors: 0, pending: {} };
        }
        state.toolCalls[toolName].pending[message.id] = Date.now();
      }
    }

    // Match tool responses
    if (direction === "out" && message.id !== undefined) {
      Object.keys(state.toolCalls).forEach(function(toolName) {
        var pending = state.toolCalls[toolName].pending;
        if (pending[message.id]) {
          var duration = Date.now() - pending[message.id];
          state.toolCalls[toolName].count++;
          state.toolCalls[toolName].totalMs += duration;
          if (message.error || (message.result && message.result.isError)) {
            state.toolCalls[toolName].errors++;
          }
          delete pending[message.id];
        }
      });
    }

    if (message.method === "initialize" && direction === "in") {
      state.connected = true;
    }

    if (message.method === "notifications/initialized" && direction === "in") {
      state.initialized = true;
    }

    logStream.write(JSON.stringify({ timestamp: timestamp, direction: direction, message: message }) + "\n");
  }

  // Health check HTTP server
  var healthServer = http.createServer(function(req, res) {
    if (req.url === "/health") {
      var uptime = Math.floor((Date.now() - state.startTime) / 1000);
      var health = {
        status: state.initialized ? "ready" : (state.connected ? "connecting" : "waiting"),
        uptime: uptime + "s",
        messages: {
          in: state.messagesIn,
          out: state.messagesOut,
          errors: state.errors
        },
        lastActivity: state.lastActivity,
        tools: {}
      };

      Object.keys(state.toolCalls).forEach(function(toolName) {
        var tc = state.toolCalls[toolName];
        health.tools[toolName] = {
          calls: tc.count,
          avgMs: tc.count > 0 ? Math.round(tc.totalMs / tc.count) : 0,
          errors: tc.errors,
          pending: Object.keys(tc.pending).length
        };
      });

      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(health, null, 2));
    } else if (req.url === "/messages") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(state.recentMessages, null, 2));
    } else {
      res.writeHead(404);
      res.end("Not Found");
    }
  });

  healthServer.listen(healthPort, function() {
    process.stderr.write("[DEBUG] Health check available at http://localhost:" + healthPort + "/health\n");
    process.stderr.write("[DEBUG] Recent messages at http://localhost:" + healthPort + "/messages\n");
  });

  return {
    recordIncoming: function(message) { recordMessage("in", message); },
    recordOutgoing: function(message) { recordMessage("out", message); },
    getState: function() { return state; },
    close: function() {
      logStream.end();
      healthServer.close();
    }
  };
}

module.exports = { createDebugMiddleware: createDebugMiddleware };
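
The middleware does not hook itself up. Wire it into your transport with the same wrap-after-connect pattern used for the message logger earlier:

var { Server } = require("@modelcontextprotocol/sdk/server/index.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { createDebugMiddleware } = require("./mcp-debug-middleware.js");

var debug = createDebugMiddleware({ logFile: "/tmp/mcp-debug.log", healthPort: 9090 });

var server = new Server(
  { name: "my-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

var transport = new StdioServerTransport();

// Capture outgoing messages before connecting
var originalSend = transport.send.bind(transport);
transport.send = function(message) {
  debug.recordOutgoing(message);
  return originalSend(message);
};

// The SDK assigns transport.onmessage during connect, so wrap it afterwards
server.connect(transport).then(function() {
  var originalOnMessage = transport.onmessage;
  transport.onmessage = function(message) {
    debug.recordIncoming(message);
    if (originalOnMessage) {
      originalOnMessage(message);
    }
  };
});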

Query the health endpoint while your server is running:

curl -s http://localhost:9090/health | jq .

Output:

{
  "status": "ready",
  "uptime": "142s",
  "messages": {
    "in": 23,
    "out": 23,
    "errors": 1
  },
  "lastActivity": "2026-02-08T14:35:12.881Z",
  "tools": {
    "query_database": {
      "calls": 8,
      "avgMs": 234,
      "errors": 0,
      "pending": 0
    },
    "read_file": {
      "calls": 5,
      "avgMs": 12,
      "errors": 1,
      "pending": 0
    }
  }
}

Monitoring MCP Connections in Production

In production, you cannot attach a debugger or tail a log file interactively. You need structured monitoring that feeds into your existing observability stack.

Structured Logging for Log Aggregators

Format your MCP debug logs as structured JSON that tools like Datadog, Grafana Loki, or CloudWatch can parse:

function createStructuredLogger(serverName) {
  function emit(level, event, data) {
    var entry = {
      timestamp: new Date().toISOString(),
      level: level,
      service: "mcp-server",
      server: serverName,
      event: event
    };

    Object.keys(data || {}).forEach(function(key) {
      entry[key] = data[key];
    });

    process.stderr.write(JSON.stringify(entry) + "\n");
  }

  return {
    info: function(event, data) { emit("info", event, data); },
    warn: function(event, data) { emit("warn", event, data); },
    error: function(event, data) { emit("error", event, data); }
  };
}

var log = createStructuredLogger("my-mcp-server");

// Usage in tool handlers
log.info("tool.call.start", { tool: "query_database", requestId: request.id });
log.info("tool.call.complete", { tool: "query_database", requestId: request.id, durationMs: 234 });
log.error("tool.call.failed", { tool: "query_database", requestId: request.id, error: err.message });

Connection Health Metrics

Track connection lifecycle events for alerting:

function trackConnectionHealth(transport) {
  var connectionStart = Date.now();

  transport.onclose = function() {
    var duration = Date.now() - connectionStart;
    log.warn("connection.closed", {
      durationMs: duration,
      durationHuman: Math.floor(duration / 1000) + "s"
    });
  };

  transport.onerror = function(err) {
    log.error("connection.error", {
      error: err.message,
      stack: err.stack
    });
  };

  // Periodic health beacon
  setInterval(function() {
    var uptime = Date.now() - connectionStart;
    log.info("connection.heartbeat", {
      uptimeMs: uptime,
      memoryMB: Math.round(process.memoryUsage().heapUsed / 1024 / 1024)
    });
  }, 60000);
}

Complete Working Example: MCP Debug Toolkit

Here is a comprehensive debug toolkit that wraps any MCP server with full observability. Drop it into your project and start it instead of your raw server:

// debug-server-wrapper.js
var { Server } = require("@modelcontextprotocol/sdk/server/index.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var fs = require("fs");
var http = require("http");
var path = require("path");

// ============================================================
// Configuration
// ============================================================
var CONFIG = {
  logFile: process.env.MCP_DEBUG_LOG || "/tmp/mcp-debug.log",
  healthPort: parseInt(process.env.MCP_DEBUG_PORT || "9090", 10),
  verbose: process.env.MCP_DEBUG_VERBOSE === "true",
  maxRecentMessages: 200
};

// ============================================================
// Debug Logger
// ============================================================
var logStream = fs.createWriteStream(CONFIG.logFile, { flags: "a" });
var sequence = 0;

function debugLog(level, component, message, data) {
  var entry = {
    seq: ++sequence,
    ts: new Date().toISOString(),
    level: level,
    component: component,
    msg: message
  };

  if (data) {
    entry.data = data;
  }

  var line = JSON.stringify(entry);
  logStream.write(line + "\n");

  if (CONFIG.verbose) {
    process.stderr.write("[" + level.toUpperCase() + "] [" + component + "] " + message + "\n");
  }
}

// ============================================================
// Message Inspector
// ============================================================
var stats = {
  startTime: Date.now(),
  initialized: false,
  messagesIn: 0,
  messagesOut: 0,
  errors: 0,
  toolCalls: {},
  recentMessages: [],
  initializationTime: null
};

function inspectMessage(direction, message) {
  var timestamp = Date.now();
  var method = message.method || null;
  var id = message.id !== undefined ? message.id : null;

  var entry = {
    seq: sequence,
    ts: new Date(timestamp).toISOString(),
    dir: direction,
    method: method,
    id: id,
    size: JSON.stringify(message).length
  };

  if (message.error) {
    entry.error = {
      code: message.error.code,
      message: message.error.message
    };
    stats.errors++;
    debugLog("error", "protocol", "Error response", entry.error);
  }

  if (direction === "in") {
    stats.messagesIn++;
  } else {
    stats.messagesOut++;
  }

  // Track initialization
  if (method === "initialize" && direction === "in") {
    stats.initStart = timestamp;
    debugLog("info", "lifecycle", "Initialization started", {
      clientInfo: message.params && message.params.clientInfo,
      protocolVersion: message.params && message.params.protocolVersion
    });
  }

  if (method === "notifications/initialized" && direction === "in") {
    stats.initialized = true;
    stats.initializationTime = timestamp - (stats.initStart || timestamp);
    debugLog("info", "lifecycle", "Initialization complete", {
      durationMs: stats.initializationTime
    });
  }

  // Track tool calls
  if (method === "tools/call" && direction === "in") {
    var toolName = message.params && message.params.name;
    if (toolName) {
      if (!stats.toolCalls[toolName]) {
        stats.toolCalls[toolName] = {
          count: 0,
          totalMs: 0,
          minMs: Infinity,
          maxMs: 0,
          errors: 0,
          lastCall: null,
          pending: {}
        };
      }
      stats.toolCalls[toolName].pending[id] = timestamp;
      debugLog("info", "tool", "Tool call started: " + toolName, {
        requestId: id,
        args: CONFIG.verbose ? message.params.arguments : undefined
      });
    }
  }

  // Match tool call responses
  if (direction === "out" && id !== null) {
    Object.keys(stats.toolCalls).forEach(function(tn) {
      var tc = stats.toolCalls[tn];
      if (tc.pending[id]) {
        var duration = timestamp - tc.pending[id];
        tc.count++;
        tc.totalMs += duration;
        tc.minMs = Math.min(tc.minMs, duration);
        tc.maxMs = Math.max(tc.maxMs, duration);
        tc.lastCall = new Date(timestamp).toISOString();

        if (message.error || (message.result && message.result.isError)) {
          tc.errors++;
          debugLog("error", "tool", "Tool call failed: " + tn, {
            requestId: id,
            durationMs: duration,
            error: message.error || "isError flag set"
          });
        } else {
          debugLog("info", "tool", "Tool call completed: " + tn, {
            requestId: id,
            durationMs: duration
          });
        }

        delete tc.pending[id];
      }
    });
  }

  // Maintain recent messages buffer
  stats.recentMessages.push(entry);
  if (stats.recentMessages.length > CONFIG.maxRecentMessages) {
    stats.recentMessages = stats.recentMessages.slice(-CONFIG.maxRecentMessages);
  }
}

// ============================================================
// Connection Health Check Server
// ============================================================
function startHealthServer() {
  var healthServer = http.createServer(function(req, res) {
    res.setHeader("Access-Control-Allow-Origin", "*");

    if (req.url === "/health") {
      var uptime = Math.floor((Date.now() - stats.startTime) / 1000);
      var mem = process.memoryUsage();

      var health = {
        status: stats.initialized ? "ready" : "initializing",
        uptime: uptime + "s",
        pid: process.pid,
        memory: {
          heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
          heapTotalMB: Math.round(mem.heapTotal / 1024 / 1024),
          rssMB: Math.round(mem.rss / 1024 / 1024)
        },
        protocol: {
          messagesIn: stats.messagesIn,
          messagesOut: stats.messagesOut,
          errors: stats.errors,
          initializationMs: stats.initializationTime
        },
        tools: {}
      };

      Object.keys(stats.toolCalls).forEach(function(toolName) {
        var tc = stats.toolCalls[toolName];
        health.tools[toolName] = {
          calls: tc.count,
          avgMs: tc.count > 0 ? Math.round(tc.totalMs / tc.count) : 0,
          minMs: tc.minMs === Infinity ? 0 : tc.minMs,
          maxMs: tc.maxMs,
          errors: tc.errors,
          pending: Object.keys(tc.pending).length,
          lastCall: tc.lastCall
        };
      });

      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(health, null, 2) + "\n");

    } else if (req.url === "/messages") {
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify(stats.recentMessages, null, 2) + "\n");

    } else if (req.url === "/log") {
      res.writeHead(200, { "Content-Type": "text/plain" });
      var logContent = "";
      try {
        logContent = fs.readFileSync(CONFIG.logFile, "utf8");
        var lines = logContent.split("\n").filter(Boolean);
        res.end(lines.slice(-100).join("\n") + "\n");
      } catch (err) {
        res.end("Error reading log: " + err.message + "\n");
      }

    } else {
      res.writeHead(404);
      res.end("Not Found. Available endpoints: /health, /messages, /log\n");
    }
  });

  healthServer.listen(CONFIG.healthPort, function() {
    debugLog("info", "health", "Health server started", { port: CONFIG.healthPort });
    process.stderr.write("[MCP-DEBUG] Health: http://localhost:" + CONFIG.healthPort + "/health\n");
    process.stderr.write("[MCP-DEBUG] Messages: http://localhost:" + CONFIG.healthPort + "/messages\n");
    process.stderr.write("[MCP-DEBUG] Log: http://localhost:" + CONFIG.healthPort + "/log\n");
    process.stderr.write("[MCP-DEBUG] Log file: " + CONFIG.logFile + "\n");
  });

  healthServer.on("error", function(err) {
    process.stderr.write("[MCP-DEBUG] Health server failed to start: " + err.message + "\n");
  });

  return healthServer;
}

// ============================================================
// Wrap and Start Server
// ============================================================
function wrapServer(server) {
  var healthServer = startHealthServer();

  var transport = new StdioServerTransport();

  // Intercept incoming messages
  var origOnMessage = null;
  var origSend = transport.send.bind(transport);

  // Override send to capture outgoing messages
  transport.send = function(message) {
    inspectMessage("out", message);
    return origSend(message);
  };

  // Connect and then intercept the message handler
  server.connect(transport).then(function() {
    debugLog("info", "lifecycle", "Transport connected");

    // Wrap the onmessage handler that the SDK sets up
    if (transport.onmessage) {
      origOnMessage = transport.onmessage;
      transport.onmessage = function(message) {
        inspectMessage("in", message);
        origOnMessage(message);
      };
    }
  });

  // Cleanup on exit
  process.on("SIGINT", function() {
    debugLog("info", "lifecycle", "Server shutting down (SIGINT)");
    logStream.end();
    healthServer.close();
    process.exit(0);
  });

  process.on("SIGTERM", function() {
    debugLog("info", "lifecycle", "Server shutting down (SIGTERM)");
    logStream.end();
    healthServer.close();
    process.exit(0);
  });

  process.on("uncaughtException", function(err) {
    debugLog("error", "runtime", "Uncaught exception: " + err.message, {
      stack: err.stack
    });
    logStream.end();
    process.exit(1);
  });
}

// ============================================================
// Example: Wrap a real server
// ============================================================
var server = new Server(
  { name: "debug-demo", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Register a sample tool for demonstration
server.setRequestHandler("tools/list", function() {
  return {
    tools: [
      {
        name: "echo",
        description: "Echo back the input for testing",
        inputSchema: {
          type: "object",
          properties: {
            message: { type: "string", description: "Message to echo" }
          },
          required: ["message"]
        }
      }
    ]
  };
});

server.setRequestHandler("tools/call", function(request) {
  var toolName = request.params.name;

  if (toolName === "echo") {
    return {
      content: [
        { type: "text", text: "Echo: " + request.params.arguments.message }
      ]
    };
  }

  return {
    content: [{ type: "text", text: "Unknown tool: " + toolName }],
    isError: true
  };
});

// Start with debug wrapper
wrapServer(server);

Run it with environment variables to control behavior:

# Basic usage
node debug-server-wrapper.js

# Verbose mode with custom log file
MCP_DEBUG_VERBOSE=true MCP_DEBUG_LOG=./my-debug.log node debug-server-wrapper.js

# Custom health check port
MCP_DEBUG_PORT=9091 node debug-server-wrapper.js

Then in another terminal:

# Check overall health
$ curl -s http://localhost:9090/health | jq .
{
  "status": "ready",
  "uptime": "87s",
  "pid": 42318,
  "memory": {
    "heapUsedMB": 14,
    "heapTotalMB": 22,
    "rssMB": 48
  },
  "protocol": {
    "messagesIn": 12,
    "messagesOut": 12,
    "errors": 0,
    "initializationMs": 3
  },
  "tools": {
    "echo": {
      "calls": 4,
      "avgMs": 2,
      "minMs": 1,
      "maxMs": 5,
      "errors": 0,
      "pending": 0,
      "lastCall": "2026-02-08T14:36:22.103Z"
    }
  }
}

# View recent message flow
$ curl -s http://localhost:9090/messages | jq '.[0:3]'
[
  {
    "seq": 1,
    "ts": "2026-02-08T14:34:55.201Z",
    "dir": "in",
    "method": "initialize",
    "id": 1,
    "size": 142
  },
  {
    "seq": 2,
    "ts": "2026-02-08T14:34:55.204Z",
    "dir": "out",
    "method": null,
    "id": 1,
    "size": 198
  },
  {
    "seq": 3,
    "ts": "2026-02-08T14:34:55.206Z",
    "dir": "in",
    "method": "notifications/initialized",
    "id": null,
    "size": 45
  }
]

Common Issues & Troubleshooting

Issue 1: "Server process exited with code 1"

Error message from Claude Desktop:

MCP server "my-server" failed to start: Server process exited with code 1

Cause: The server script throws an error on startup before the transport connects. Common reasons: missing environment variables, unresolved module imports, or syntax errors.

Fix: Run the server manually in your terminal to see the actual error:

node /path/to/server.js 2>&1

If it requires stdin input (because it is waiting for JSON-RPC messages), it will just sit there -- that is normal. If it crashes immediately, you will see the error message.

Issue 2: "ECONNREFUSED" on SSE/HTTP Transport

Error message:

Error: connect ECONNREFUSED 127.0.0.1:3001
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1595:16)

Cause: The MCP server is not running or is listening on a different port or interface.

Fix: Verify the server is listening:

# Check if anything is listening on port 3001
lsof -i :3001

# Or on Windows
netstat -ano | findstr 3001

# Test connectivity
curl -v http://localhost:3001/sse

Common pitfall: the server binds only to 127.0.0.1 while the client connects from a different host or container, or the server runs inside Docker and the port is not mapped to the host.

Issue 3: Tool Call Returns Empty Result

What you see: The LLM receives a tool response, but the content is empty or undefined. The tool handler runs without errors but the result never reaches the model.

Error in log:

{"jsonrpc":"2.0","id":7,"result":{"content":[]}}

Cause: The tool handler returns a promise that resolves to undefined, or the response object is missing the content array. The SDK does not validate response shapes.

Fix: Always return a properly structured result:

// Wrong - returns undefined
server.setRequestHandler("tools/call", function(request) {
  doSomething(request.params.arguments);
  // Missing return!
});

// Right - explicit return with content array
server.setRequestHandler("tools/call", function(request) {
  var result = doSomething(request.params.arguments);
  return {
    content: [
      { type: "text", text: String(result) }
    ]
  };
});

Issue 4: SSE Connection Drops After 60 Seconds of Inactivity

What you see: The connection works fine during active use, but if the user pauses for a minute, the next tool call fails with a connection error.

Error:

EventSource connection closed unexpectedly
Error: SSE connection lost, no reconnection attempted

Cause: A reverse proxy (nginx, HAProxy, or cloud load balancer) is closing idle connections after its configured timeout.

Fix: Implement keepalive comments in the SSE stream (shown in the SSE debugging section above), and increase proxy timeouts. Also implement client-side reconnection logic.

Issue 5: "Cannot find module" When Spawned by Claude Desktop

Error in Claude Desktop logs:

Error: Cannot find module '/Users/me/projects/my-server/server.js'

Cause: The working directory when Claude Desktop spawns the process is not what you expect. Relative paths in the config resolve relative to the host application's working directory.

Fix: Use absolute paths everywhere in your Claude Desktop config:

{
  "mcpServers": {
    "my-server": {
      "command": "/Users/me/.nvm/versions/node/v20.17.0/bin/node",
      "args": ["/Users/me/projects/my-server/server.js"]
    }
  }
}

Best Practices

  • Always log to stderr, never stdout. This is non-negotiable for stdio transport servers. One stray console.log will corrupt the message stream and produce baffling errors. Redirect console.log to console.error as the very first line of your server.

  • Implement a health check endpoint for HTTP transport servers. Even if it is just a /health route that returns {"status":"ok"}, this gives you something to monitor, something to curl, and something for load balancers to probe.

  • Log the full initialization exchange during development. The initialize handshake is where most connection issues surface. Log the client info, protocol version, and requested capabilities so you can diagnose version mismatches immediately.

  • Use the MCP Inspector before integrating with a host application. The Inspector eliminates transport as a variable -- it handles stdio and SSE correctly out of the box. If your server works in the Inspector but fails in Claude Desktop, the problem is in your host configuration, not your server code.

  • Add timing to every tool handler. Log the start time, end time, and duration of every tool call. When a user reports "the AI is slow," you need data showing whether the bottleneck is in your tool execution or in the model's processing time. Without timing data, you are guessing.

  • Validate tool response shapes before sending them. The SDK does not enforce that your tool response includes a content array with properly typed entries. A missing content field or a type typo will cause the model to receive garbage. Validate your own responses; a minimal checker is sketched after this list.

  • Implement graceful shutdown. Trap SIGTERM and SIGINT, flush your log buffers, close database connections, and then exit. An MCP server that gets killed mid-response leaves the client hanging until its timeout fires.

  • Pin your @modelcontextprotocol/sdk version. The SDK is evolving rapidly. A minor version bump can change transport behavior, message validation, or error handling. Pin your dependency and test explicitly before upgrading.

  • Test with both stdio and HTTP transports. Even if you only deploy one, testing with both catches bugs where you accidentally write to stdout, depend on HTTP headers that do not exist in stdio, or assume a transport-specific feature is universal.

