MCP Tool Creation Patterns and Best Practices

A comprehensive guide to creating MCP tools in Node.js, covering input validation, error handling, tool composition, long-running operations, database tools, and testing strategies.

Overview

MCP tools are the primary way AI models interact with the outside world -- querying databases, manipulating files, calling APIs, and orchestrating workflows. Building tools that technically work is easy; building tools that an LLM can use effectively requires deliberate design decisions around naming, descriptions, schema design, and error surfaces. This article covers every pattern I have shipped in production MCP servers, from single-purpose stateless tools to complex stateful compositions with progress reporting and cancellation.

Prerequisites

  • Node.js 18+ installed (LTS recommended)
  • npm for package management
  • The MCP SDK (@modelcontextprotocol/sdk) installed
  • Familiarity with JSON Schema and Zod
  • Basic understanding of MCP architecture (hosts, clients, servers) -- see the Building Production-Ready MCP Servers article
  • A working Claude Desktop installation for testing

Anatomy of an MCP Tool

Every MCP tool has four components. Understanding what each one does -- and more importantly, how the LLM interprets each one -- is the foundation for everything else in this article.

Name

The tool name is a machine identifier. It must be unique within the server and should follow snake_case convention. The model uses this name to decide which tool to invoke, so make it descriptive enough to disambiguate but short enough to not waste context.

Description

This is the most important field. The model reads this description to understand when and why to use the tool. Vague descriptions produce vague tool calls. Write descriptions like API documentation for a developer who has never seen your codebase.
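As a concrete before-and-after (both strings are illustrative; the second reuses the get_user description from the example below):

```javascript
// Vague -- the model must guess what "data" covers and when to call the tool
var badDescription = "Gets data for a user.";

// Specific -- states what the tool does, what it returns, and how it fails
var goodDescription =
  "Retrieve a user record by ID. Returns the user's name, email, role, " +
  "and account status. Returns an error if the user does not exist.";
```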

Input Schema

Defined using Zod in the SDK, the input schema tells the model what arguments the tool accepts, their types, constraints, and defaults. Every property should have a .describe() annotation. The model generates arguments based on these descriptions.

Handler

The async function that executes when the tool is called. It receives validated arguments and must return a content array of text or image blocks.

Here is the basic structure:

var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { z } = require("zod");

var server = new McpServer({
  name: "example-server",
  version: "1.0.0"
});

server.tool(
  "get_user",                          // name
  "Retrieve a user record by ID. Returns the user's name, email, role, " +
  "and account status. Returns an error if the user does not exist.",  // description
  {
    user_id: z.string().describe("The unique user identifier (UUID format)")
  },                                    // input schema
  async function(args) {                // handler
    var user = await db.users.findById(args.user_id);
    if (!user) {
      return {
        content: [{ type: "text", text: "Error: User not found with ID " + args.user_id }],
        isError: true
      };
    }
    return {
      content: [{ type: "text", text: JSON.stringify(user, null, 2) }]
    };
  }
);

Designing Tool Interfaces for LLM Consumption

The single biggest mistake I see in MCP tool design is treating the LLM like a human user. It is not. An LLM does not read tooltips, does not hover over input fields, and does not have the context you have when you wrote the tool. Everything it knows comes from the tool name, description, and parameter descriptions.

Be Explicit About Return Values

Do not write "Returns user data." Write "Returns a JSON object with fields: id (string), name (string), email (string), role (enum: admin|editor|viewer), created_at (ISO 8601 timestamp). Returns an error message if no user matches the given ID."

Specify Units and Formats

If a parameter accepts a date, say "ISO 8601 format (e.g., 2026-01-15)." If it accepts a size, say "File size in bytes." If it accepts a duration, say "Duration in milliseconds."

Document Side Effects

If a tool modifies state, say so explicitly. "Creates a new task in the project database. This action cannot be undone." The model needs to know whether a tool call is safe to retry or whether it has irreversible consequences.

Keep Parameter Counts Low

I aim for 5 parameters or fewer per tool. If you need more, the tool is probably doing too much. Split it into multiple tools or use a nested object parameter to group related options.

// Bad: too many top-level parameters
server.tool("create_deployment", "...", {
  service: z.string(),
  version: z.string(),
  environment: z.string(),
  replicas: z.number(),
  cpu_limit: z.string(),
  memory_limit: z.string(),
  health_check_path: z.string(),
  health_check_interval: z.number(),
  env_vars: z.record(z.string()),
  labels: z.record(z.string())
}, handler);

// Better: grouped into logical objects
server.tool("create_deployment", "...", {
  service: z.string().describe("Service name"),
  version: z.string().describe("Docker image tag to deploy"),
  environment: z.enum(["staging", "production"]).describe("Target environment"),
  resources: z.object({
    replicas: z.number().min(1).max(20).default(2),
    cpu: z.string().default("500m"),
    memory: z.string().default("512Mi")
  }).describe("Resource allocation settings").optional(),
  health_check: z.object({
    path: z.string().default("/health"),
    interval_seconds: z.number().default(30)
  }).describe("Health check configuration").optional()
}, handler);

Input Validation with JSON Schema

The MCP SDK uses Zod for schema definition, which gives you runtime validation for free. But validation is not just about types -- it is about communicating constraints to the model.

server.tool(
  "query_logs",
  "Search application logs by time range and severity. Returns matching log " +
  "entries sorted by timestamp descending. Maximum 1000 results per query.",
  {
    start_time: z.string()
      .describe("Start of time range in ISO 8601 format (e.g., 2026-01-15T00:00:00Z)"),
    end_time: z.string()
      .describe("End of time range in ISO 8601 format. Must be after start_time.")
      .optional(),
    severity: z.enum(["debug", "info", "warn", "error", "fatal"])
      .describe("Minimum severity level to include")
      .default("info"),
    service: z.string()
      .describe("Service name to filter by (e.g., 'api-gateway', 'auth-service')")
      .optional(),
    search_text: z.string()
      .describe("Full-text search query. Supports simple keywords, not regex.")
      .optional(),
    limit: z.number()
      .min(1)
      .max(1000)
      .default(100)
      .describe("Maximum number of log entries to return")
  },
  async function(args) {
    // Zod has already validated types, enums, and ranges
    // Add business logic validation
    if (args.end_time && new Date(args.end_time) <= new Date(args.start_time)) {
      return {
        content: [{ type: "text", text: "Error: end_time must be after start_time" }],
        isError: true
      };
    }

    var results = await logStore.query(args);
    return {
      content: [{
        type: "text",
        text: "Found " + results.total + " matching entries (showing " +
              results.entries.length + "):\n\n" +
              JSON.stringify(results.entries, null, 2)
      }]
    };
  }
);

Notice how I add contextual information to the response -- "Found 847 matching entries (showing 100)" -- so the model knows there are more results it has not seen. This kind of metadata is critical for the LLM to make good decisions about follow-up actions.

Error Handling Patterns

MCP defines two categories of errors, and understanding the distinction will save you hours of debugging.

User-Facing Errors (isError: true)

These are errors the model should see and reason about. They are returned as normal tool results with the isError flag set. The model can read the error message, understand what went wrong, and decide how to proceed -- retry with different parameters, try a different tool, or explain the problem to the user.

server.tool("delete_file", "...", { path: z.string() },
  async function(args) {
    try {
      var stats = await fs.promises.stat(args.path);
      if (stats.isDirectory()) {
        return {
          content: [{ type: "text", text: "Error: Cannot delete '" + args.path +
            "' because it is a directory. Use delete_directory instead." }],
          isError: true
        };
      }
      await fs.promises.unlink(args.path);
      return {
        content: [{ type: "text", text: "Successfully deleted " + args.path }]
      };
    } catch (err) {
      if (err.code === "ENOENT") {
        return {
          content: [{ type: "text", text: "Error: File not found: " + args.path }],
          isError: true
        };
      }
      if (err.code === "EACCES") {
        return {
          content: [{ type: "text", text: "Error: Permission denied: " + args.path }],
          isError: true
        };
      }
      // Unexpected errors should still be surfaced as user-facing errors
      // but with less implementation detail
      return {
        content: [{ type: "text", text: "Error: Failed to delete file: " + err.message }],
        isError: true
      };
    }
  }
);

System Errors (thrown exceptions)

If your handler throws an exception, the SDK catches it and returns it as a JSON-RPC error. The model may or may not see the details depending on the client implementation. Use thrown exceptions only for truly unexpected failures -- database connection lost, SDK bugs, out of memory. For anything the model might need to react to, return an isError: true result instead.

// This pattern gives the model actionable information
return {
  content: [{ type: "text", text: "Error: Database query timed out after 30 seconds. " +
    "Try narrowing the date range or adding more specific filters." }],
  isError: true
};

// This pattern gives the model nothing useful
throw new Error("ETIMEDOUT");

The rule is simple: if the model could do something differently to avoid the error, it is a user-facing error. If there is nothing the model can do, it is a system error.
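Because the user-facing shape is so repetitive, it helps to wrap it in a small helper so handlers stay terse. A minimal sketch (the toolError name is mine, not part of the SDK):

```javascript
// Hypothetical helper: wraps a message in the user-facing error shape
function toolError(message) {
  return {
    content: [{ type: "text", text: "Error: " + message }],
    isError: true
  };
}

// Inside a handler:
// if (!user) return toolError("User not found with ID " + args.user_id);
```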

Tool Composition

Real-world MCP servers often need tools that build on each other. A "generate report" tool might internally call the same logic as "list tasks" and "get task details." There are two approaches.

Shared Logic (Recommended)

Extract the shared logic into standalone functions and call them from multiple tool handlers. This is the cleanest approach because each tool handler stays thin and testable.

// Shared business logic
async function fetchTasks(filters) {
  var query = "SELECT * FROM tasks WHERE 1=1";
  var params = [];
  var paramIndex = 1;

  if (filters.status) {
    query += " AND status = $" + paramIndex++;
    params.push(filters.status);
  }
  if (filters.assignee) {
    query += " AND assignee = $" + paramIndex++;
    params.push(filters.assignee);
  }
  if (filters.project_id) {
    query += " AND project_id = $" + paramIndex++;
    params.push(filters.project_id);
  }

  query += " ORDER BY created_at DESC";

  if (filters.limit) {
    query += " LIMIT $" + paramIndex++;
    params.push(filters.limit);
  }

  var result = await pool.query(query, params);
  return result.rows;
}

// Tool 1: List tasks
server.tool("list_tasks", "...", { /* schema */ },
  async function(args) {
    var tasks = await fetchTasks(args);
    return {
      content: [{ type: "text", text: JSON.stringify(tasks, null, 2) }]
    };
  }
);

// Tool 2: Generate report (uses the same logic)
server.tool("generate_report", "...", { /* schema */ },
  async function(args) {
    var tasks = await fetchTasks({ project_id: args.project_id });
    var report = buildReport(tasks);
    return {
      content: [{ type: "text", text: report }]
    };
  }
);

Tool Chaining via the Model

The alternative is to let the model call multiple tools in sequence. This is appropriate when the composition logic is non-deterministic -- the model needs to decide what to do based on intermediate results. You do not need to build this into your server; the model will naturally chain tool calls when given a set of complementary tools.

Stateful vs. Stateless Tools

Most tools should be stateless. They receive input, do work, return output. No server-side session, no in-memory cache that accumulates across calls. Stateless tools are easier to test, easier to debug, and work correctly when a model retries or re-runs a conversation.

However, some use cases genuinely need state. A tool that starts a long-running process and later checks its status. A tool that paginates through large result sets. A tool that builds up a complex object across multiple calls.

// Stateful: pagination cursor
var cursors = new Map();

server.tool(
  "list_records",
  "List records with pagination. First call returns results and a cursor_id. " +
  "Pass cursor_id in subsequent calls to get the next page. " +
  "Cursors expire after 5 minutes of inactivity.",
  {
    collection: z.string().describe("Collection name to query"),
    page_size: z.number().min(1).max(100).default(25),
    cursor_id: z.string().optional().describe("Cursor ID from a previous call to get the next page")
  },
  async function(args) {
    var offset = 0;

    if (args.cursor_id) {
      var cursor = cursors.get(args.cursor_id);
      if (!cursor) {
        return {
          content: [{ type: "text", text: "Error: Cursor expired or invalid. " +
            "Start a new query without cursor_id." }],
          isError: true
        };
      }
      offset = cursor.offset;
    }

    var results = await db.collection(args.collection)
      .find({})
      .skip(offset)
      .limit(args.page_size + 1)  // fetch one extra to detect "has more"
      .toArray();

    var hasMore = results.length > args.page_size;
    if (hasMore) results.pop();

    var response = {
      records: results,
      total_returned: results.length,
      has_more: hasMore
    };

    if (hasMore) {
      var newCursorId = require("crypto").randomUUID();
      cursors.set(newCursorId, {
        offset: offset + args.page_size,
        created: Date.now()
      });
      response.cursor_id = newCursorId;

      // Clean up expired cursors
      var fiveMinutesAgo = Date.now() - (5 * 60 * 1000);
      cursors.forEach(function(value, key) {
        if (value.created < fiveMinutesAgo) cursors.delete(key);
      });
    }

    return {
      content: [{ type: "text", text: JSON.stringify(response, null, 2) }]
    };
  }
);

If you must use state, always include expiration and cleanup logic. MCP connections can drop without warning, and you do not want a memory leak from abandoned cursors.
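An alternative to the per-request cleanup shown above is a background sweep on a timer. A sketch, assuming the same cursors map and 5-minute TTL as the pagination example:

```javascript
var CURSOR_TTL_MS = 5 * 60 * 1000;
var cursors = new Map();

// Sweep expired cursors once a minute instead of on every request
var sweep = setInterval(function() {
  var cutoff = Date.now() - CURSOR_TTL_MS;
  cursors.forEach(function(value, key) {
    if (value.created < cutoff) cursors.delete(key);
  });
}, 60 * 1000);

// Do not let the sweep timer keep the process alive on its own
sweep.unref();
```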

Long-Running Tool Patterns

Some tools take a long time -- running a test suite, importing a large dataset, generating a complex report. MCP supports progress reporting and cancellation for these scenarios.

Progress Reporting

The handler receives a second argument (shown as extra in the examples below) carrying request metadata, including the progress token the client sent with the call. If the client sent one, you can report progress back incrementally.

server.tool(
  "import_csv",
  "Import a CSV file into the database. Reports progress during import. " +
  "Large files (>10MB) may take several minutes.",
  {
    file_path: z.string().describe("Absolute path to the CSV file"),
    table_name: z.string().describe("Target database table name"),
    skip_header: z.boolean().default(true).describe("Skip the first row as header")
  },
  async function(args, extra) {
    var fs = require("fs");
    var readline = require("readline");

    // Count total lines for progress
    var totalLines = 0;
    var countStream = fs.createReadStream(args.file_path);
    var countReader = readline.createInterface({ input: countStream });
    await new Promise(function(resolve) {
      countReader.on("line", function() { totalLines++; });
      countReader.on("close", resolve);
    });

    if (args.skip_header) totalLines--;

    var processedLines = 0;
    var errors = [];
    var dataStream = fs.createReadStream(args.file_path);
    var dataReader = readline.createInterface({ input: dataStream });
    var isFirstLine = true;
    var headers = null;

    for await (var line of dataReader) {
      if (isFirstLine && args.skip_header) {
        headers = line.split(",");
        isFirstLine = false;
        continue;
      }
      isFirstLine = false;

      try {
        await insertRow(args.table_name, headers, line.split(","));
        processedLines++;

        // Report progress every 100 rows
        if (processedLines % 100 === 0 && extra.reportProgress) {
          await extra.reportProgress({
            progress: processedLines,
            total: totalLines
          });
        }
      } catch (err) {
        errors.push("Row " + (processedLines + 1) + ": " + err.message);
      }
    }

    var summary = "Import complete: " + processedLines + "/" + totalLines +
      " rows imported successfully.";
    if (errors.length > 0) {
      summary += "\n\n" + errors.length + " errors:\n" + errors.slice(0, 10).join("\n");
      if (errors.length > 10) {
        summary += "\n... and " + (errors.length - 10) + " more errors";
      }
    }

    return { content: [{ type: "text", text: summary }] };
  }
);

Cancellation

For cancellable operations, check the abort signal periodically in your processing loop:

server.tool("run_tests", "...", { suite: z.string() },
  async function(args, extra) {
    var tests = await discoverTests(args.suite);

    for (var i = 0; i < tests.length; i++) {
      // Check for cancellation before each test
      if (extra.signal && extra.signal.aborted) {
        return {
          content: [{ type: "text", text: "Test run cancelled after " + i +
            "/" + tests.length + " tests." }]
        };
      }

      await runTest(tests[i]);

      if (extra.reportProgress) {
        await extra.reportProgress({ progress: i + 1, total: tests.length });
      }
    }

    return { content: [{ type: "text", text: "All " + tests.length + " tests passed." }] };
  }
);

File System Tools

File system tools are among the most commonly needed in MCP servers. Here are battle-tested patterns.

Safe File Reading

Always validate paths and handle encoding correctly:

var path = require("path");
var fs = require("fs");

var ALLOWED_ROOT = "/home/user/projects";

function validatePath(filePath) {
  var resolved = path.resolve(filePath);
  if (!resolved.startsWith(ALLOWED_ROOT)) {
    return { valid: false, error: "Access denied: path is outside allowed directory" };
  }
  return { valid: true, resolved: resolved };
}

server.tool(
  "read_file",
  "Read the contents of a file. Returns the full text content. " +
  "For binary files, returns a base64-encoded string. " +
  "Files larger than 1MB will be truncated with a warning.",
  {
    path: z.string().describe("Absolute file path to read"),
    encoding: z.enum(["utf8", "base64"]).default("utf8")
      .describe("File encoding. Use base64 for binary files.")
  },
  async function(args) {
    var check = validatePath(args.path);
    if (!check.valid) {
      return { content: [{ type: "text", text: "Error: " + check.error }], isError: true };
    }

    try {
      var stats = await fs.promises.stat(check.resolved);
      var maxSize = 1024 * 1024;  // 1MB
      var truncated = false;

      if (stats.size > maxSize && args.encoding === "utf8") {
        truncated = true;
      }

      var content;
      if (truncated) {
        var handle = await fs.promises.open(check.resolved, "r");
        var buffer = Buffer.alloc(maxSize);
        await handle.read(buffer, 0, maxSize, 0);
        await handle.close();
        content = buffer.toString("utf8");
      } else {
        content = await fs.promises.readFile(check.resolved, args.encoding);
      }

      var header = "File: " + check.resolved + " (" + stats.size + " bytes)";
      if (truncated) {
        header += "\nWARNING: File truncated to 1MB. Total size: " + stats.size + " bytes.";
      }

      return {
        content: [{ type: "text", text: header + "\n\n" + content }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Error reading file: " + err.message }],
        isError: true
      };
    }
  }
);

File Search

var { glob } = require("glob");

server.tool(
  "search_files",
  "Search for files matching a glob pattern within the project directory. " +
  "Returns file paths, sizes, and modification dates. Maximum 500 results.",
  {
    pattern: z.string().describe("Glob pattern (e.g., '**/*.js', 'src/**/*.test.ts')"),
    root_dir: z.string().describe("Root directory to search from (absolute path)")
  },
  async function(args) {
    var check = validatePath(args.root_dir);
    if (!check.valid) {
      return { content: [{ type: "text", text: "Error: " + check.error }], isError: true };
    }

    var files = await glob(args.pattern, {
      cwd: check.resolved,
      absolute: true,
      nodir: true,
      maxDepth: 10,
      ignore: ["**/node_modules/**", "**/.git/**"]
    });

    if (files.length === 0) {
      return {
        content: [{ type: "text", text: "No files found matching pattern: " + args.pattern }]
      };
    }

    var limited = files.slice(0, 500);
    var results = [];
    for (var i = 0; i < limited.length; i++) {
      var stats = await fs.promises.stat(limited[i]);
      results.push({
        path: limited[i],
        size: stats.size,
        modified: stats.mtime.toISOString()
      });
    }

    var output = "Found " + files.length + " files";
    if (files.length > 500) output += " (showing first 500)";
    output += ":\n\n" + JSON.stringify(results, null, 2);

    return { content: [{ type: "text", text: output }] };
  }
);

Database Query Tools

Database tools are powerful but dangerous. The cardinal rule: never let the model write raw SQL that gets executed directly.

Safe Parameterized Queries

var { Pool } = require("pg");
var pool = new Pool({ connectionString: process.env.DATABASE_URL });

server.tool(
  "query_customers",
  "Search for customers by name, email, or status. Returns customer records " +
  "with ID, name, email, status, and signup date. Maximum 50 results.",
  {
    search: z.string().optional()
      .describe("Search term to match against name or email (case-insensitive)"),
    status: z.enum(["active", "inactive", "suspended"]).optional()
      .describe("Filter by account status"),
    sort_by: z.enum(["name", "email", "created_at"]).default("created_at")
      .describe("Field to sort results by"),
    sort_order: z.enum(["asc", "desc"]).default("desc"),
    limit: z.number().min(1).max(50).default(20)
  },
  async function(args) {
    var query = "SELECT id, name, email, status, created_at FROM customers WHERE 1=1";
    var params = [];
    var paramIndex = 1;

    if (args.search) {
      query += " AND (name ILIKE $" + paramIndex + " OR email ILIKE $" + paramIndex + ")";
      params.push("%" + args.search + "%");
      paramIndex++;
    }

    if (args.status) {
      query += " AND status = $" + paramIndex;
      params.push(args.status);
      paramIndex++;
    }

    // sort_by is from an enum, so it is safe to interpolate
    query += " ORDER BY " + args.sort_by + " " + args.sort_order;
    query += " LIMIT $" + paramIndex;
    params.push(args.limit);

    try {
      var result = await pool.query(query, params);
      return {
        content: [{
          type: "text",
          text: "Found " + result.rowCount + " customers:\n\n" +
                JSON.stringify(result.rows, null, 2)
        }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Database error: " + err.message }],
        isError: true
      };
    }
  }
);

The key insight: the sort_by field is safe to interpolate because it comes from a Zod enum -- the model can only send one of the three allowed values. User-supplied strings always go through parameterized query placeholders.

Read-Only Database Tools

For reporting and analytics tools, use a read-only connection or run the query inside a READ ONLY transaction:

server.tool("run_report_query", "...", { /* schema */ },
  async function(args) {
    var client = await pool.connect();
    try {
      // READ ONLY only takes effect inside a transaction, so open one
      await client.query("BEGIN TRANSACTION READ ONLY");
      await client.query("SET LOCAL statement_timeout = '30s'");
      var result = await client.query(args.query, args.params);
      await client.query("COMMIT");
      return {
        content: [{ type: "text", text: JSON.stringify(result.rows, null, 2) }]
      };
    } catch (err) {
      await client.query("ROLLBACK");
      return {
        content: [{ type: "text", text: "Database error: " + err.message }],
        isError: true
      };
    } finally {
      client.release();
    }
  }
);

HTTP/API Wrapper Tools

Wrapping external APIs as MCP tools is one of the most common patterns. The tool abstracts away authentication, rate limiting, and error mapping.

var https = require("https");

var GITHUB_TOKEN = process.env.GITHUB_TOKEN;

server.tool(
  "github_list_issues",
  "List open issues from a GitHub repository. Returns issue number, title, " +
  "author, labels, and creation date. Sorted by most recently created.",
  {
    owner: z.string().describe("Repository owner (user or organization)"),
    repo: z.string().describe("Repository name"),
    labels: z.string().optional()
      .describe("Comma-separated label names to filter by (e.g., 'bug,urgent')"),
    state: z.enum(["open", "closed", "all"]).default("open"),
    per_page: z.number().min(1).max(100).default(30)
  },
  async function(args) {
    var url = "https://api.github.com/repos/" + args.owner + "/" + args.repo +
      "/issues?state=" + args.state + "&per_page=" + args.per_page;

    if (args.labels) {
      url += "&labels=" + encodeURIComponent(args.labels);
    }

    try {
      var response = await httpGet(url, {
        "Authorization": "Bearer " + GITHUB_TOKEN,
        "Accept": "application/vnd.github.v3+json",
        "User-Agent": "mcp-github-tools"
      });

      if (response.statusCode === 404) {
        return {
          content: [{ type: "text", text: "Error: Repository " + args.owner +
            "/" + args.repo + " not found or not accessible." }],
          isError: true
        };
      }

      if (response.statusCode === 403) {
        var remaining = response.headers["x-ratelimit-remaining"];
        return {
          content: [{ type: "text", text: "Error: GitHub API rate limit exceeded. " +
            "Remaining: " + remaining + ". Resets at: " +
            new Date(response.headers["x-ratelimit-reset"] * 1000).toISOString() }],
          isError: true
        };
      }

      var issues = JSON.parse(response.body);
      var simplified = issues.map(function(issue) {
        return {
          number: issue.number,
          title: issue.title,
          author: issue.user.login,
          labels: issue.labels.map(function(l) { return l.name; }),
          created_at: issue.created_at,
          comments: issue.comments
        };
      });

      return {
        content: [{
          type: "text",
          text: "Found " + simplified.length + " issues:\n\n" +
                JSON.stringify(simplified, null, 2)
        }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Error calling GitHub API: " + err.message }],
        isError: true
      };
    }
  }
);

// Helper function for HTTPS GET requests
function httpGet(url, headers) {
  return new Promise(function(resolve, reject) {
    var parsedUrl = new URL(url);
    var options = {
      hostname: parsedUrl.hostname,
      path: parsedUrl.pathname + parsedUrl.search,
      method: "GET",
      headers: headers
    };
    var req = https.request(options, function(res) {
      var body = "";
      res.on("data", function(chunk) { body += chunk; });
      res.on("end", function() {
        resolve({ statusCode: res.statusCode, headers: res.headers, body: body });
      });
    });
    req.on("error", reject);
    req.end();
  });
}

The pattern here is important: translate HTTP status codes into meaningful MCP error messages. Do not return raw HTTP errors. The model does not know what a 403 means in the context of GitHub rate limiting unless you tell it.
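One way to keep that translation in a single place is a status-to-message map. A hypothetical helper (the messages are examples of actionable wording, not GitHub's own):

```javascript
// Hypothetical helper: map common GitHub API status codes to messages
// the model can act on, instead of surfacing raw HTTP errors
var STATUS_MESSAGES = {
  401: "GitHub token is missing or invalid. Check the GITHUB_TOKEN environment variable.",
  404: "Repository not found or not accessible with the current token.",
  422: "GitHub rejected the request parameters. Check owner, repo, and label names."
};

function describeGithubError(statusCode) {
  return STATUS_MESSAGES[statusCode] ||
    "GitHub API returned unexpected status " + statusCode + ".";
}
```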

Tool Naming Conventions and Discoverability

Tool naming directly affects how well the model selects the right tool. After building dozens of MCP servers, here are the conventions I have settled on:

Use Verb-Noun Format

  • create_task not task_create
  • list_users not users or get_all_users
  • search_files not find_files or file_search

Group Related Tools with Prefixes

When you have a suite of related tools, use a consistent prefix:

db_query_customers
db_insert_customer
db_update_customer
db_delete_customer

github_list_repos
github_list_issues
github_create_issue
github_add_comment

Common Verbs and Their Semantics

Verb       Meaning                                  Side Effects
get        Retrieve a single item by ID             None
list       Retrieve multiple items with filters     None
search     Full-text or fuzzy search                None
create     Create a new resource                    Yes -- creates data
update     Modify an existing resource              Yes -- modifies data
delete     Remove a resource                        Yes -- destructive
run        Execute a process or command             Varies
generate   Create derived content                   Usually none

Testing MCP Tools

Testing MCP tools requires testing at three levels: unit tests for business logic, integration tests for the tool handler, and end-to-end tests against a running server.

Unit Testing Business Logic

Extract your business logic from the handler and test it independently:

// tasks.js -- business logic
async function createTask(pool, data) {
  var result = await pool.query(
    "INSERT INTO tasks (title, description, status, assignee, project_id) " +
    "VALUES ($1, $2, $3, $4, $5) RETURNING *",
    [data.title, data.description, data.status || "todo", data.assignee, data.project_id]
  );
  return result.rows[0];
}

module.exports = { createTask: createTask };

// tasks.test.js
var { createTask } = require("./tasks");
var assert = require("assert");

describe("createTask", function() {
  var mockPool;

  beforeEach(function() {
    mockPool = {
      query: async function(sql, params) {
        return {
          rows: [{
            id: "abc-123",
            title: params[0],
            description: params[1],
            status: params[2],
            assignee: params[3],
            project_id: params[4],
            created_at: new Date().toISOString()
          }]
        };
      }
    };
  });

  it("should create a task with default status", async function() {
    var task = await createTask(mockPool, {
      title: "Fix login bug",
      description: "Users cannot log in with special characters",
      assignee: "shane",
      project_id: "proj-1"
    });

    assert.strictEqual(task.title, "Fix login bug");
    assert.strictEqual(task.status, "todo");
  });
});

Integration Testing with the MCP Client

Use the SDK's InMemoryTransport to test the full tool handler without a real server:

var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { Client } = require("@modelcontextprotocol/sdk/client/index.js");
var { InMemoryTransport } = require("@modelcontextprotocol/sdk/inMemory.js");

describe("MCP Tool Integration", function() {
  var server;
  var client;

  beforeEach(async function() {
    server = new McpServer({ name: "test-server", version: "1.0.0" });

    // Register your tools on the server
    registerTools(server);

    client = new Client({ name: "test-client", version: "1.0.0" });

    var [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
    await Promise.all([
      server.connect(serverTransport),
      client.connect(clientTransport)
    ]);
  });

  it("should list available tools", async function() {
    var result = await client.listTools();
    assert.ok(result.tools.length > 0);

    var taskTool = result.tools.find(function(t) { return t.name === "create_task"; });
    assert.ok(taskTool, "create_task tool should be registered");
    assert.ok(taskTool.description.length > 20, "description should be detailed");
  });

  it("should create a task successfully", async function() {
    var result = await client.callTool({
      name: "create_task",
      arguments: {
        title: "Write unit tests",
        description: "Cover all edge cases",
        project_id: "proj-1"
      }
    });

    assert.ok(!result.isError);
    var content = JSON.parse(result.content[0].text);
    assert.strictEqual(content.title, "Write unit tests");
  });

  it("should return error for missing required fields", async function() {
    try {
      await client.callTool({ name: "create_task", arguments: {} });
      assert.fail("Should have thrown");
    } catch (err) {
      assert.ok(err.message.includes("required"));
    }
  });
});

Run these tests with:

npx mocha --timeout 10000 tests/**/*.test.js

Output:

  MCP Tool Integration
    ✓ should list available tools (23ms)
    ✓ should create a task successfully (45ms)
    ✓ should return error for missing required fields (12ms)

  3 passing (287ms)
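To make the suites repeatable, a scripts block along these lines can go in package.json (the unit/integration directory split is an assumption about your layout):

```json
{
  "scripts": {
    "test": "mocha --timeout 10000 tests/**/*.test.js",
    "test:unit": "mocha tests/unit/**/*.test.js",
    "test:integration": "mocha --timeout 10000 tests/integration/**/*.test.js"
  }
}
```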

Complete Working Example: Project Management MCP Server

Here is a complete MCP server implementing a project management tool suite. It demonstrates every pattern covered in this article: input validation, error handling, tool composition, progress reporting, and database queries.

// server.js -- Project Management MCP Server
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { z } = require("zod");
var { Pool } = require("pg");

// Database connection
var pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,
  idleTimeoutMillis: 30000
});

var server = new McpServer({
  name: "project-manager",
  version: "1.0.0",
  description: "Project management tools for creating, tracking, and reporting on tasks"
});

// ============================================================
// Tool 1: create_task
// ============================================================
server.tool(
  "create_task",
  "Create a new task in a project. Returns the created task with its " +
  "auto-generated ID and timestamps. The task starts in 'todo' status " +
  "unless otherwise specified.",
  {
    title: z.string().min(1).max(200)
      .describe("Task title (1-200 characters)"),
    description: z.string().max(5000).optional()
      .describe("Detailed task description (max 5000 characters)"),
    project_id: z.string()
      .describe("Project ID this task belongs to (UUID format)"),
    assignee: z.string().optional()
      .describe("Username of the person assigned to this task"),
    priority: z.enum(["low", "medium", "high", "critical"]).default("medium")
      .describe("Task priority level"),
    due_date: z.string().optional()
      .describe("Due date in ISO 8601 format (e.g., 2026-03-15)")
  },
  async function(args) {
    try {
      // Verify project exists
      var projectCheck = await pool.query(
        "SELECT id, name FROM projects WHERE id = $1", [args.project_id]
      );
      if (projectCheck.rows.length === 0) {
        return {
          content: [{ type: "text", text: "Error: Project not found with ID " +
            args.project_id + ". Use list_projects to see available projects." }],
          isError: true
        };
      }

      var result = await pool.query(
        "INSERT INTO tasks (title, description, project_id, assignee, priority, " +
        "status, due_date, created_at, updated_at) " +
        "VALUES ($1, $2, $3, $4, $5, 'todo', $6, NOW(), NOW()) RETURNING *",
        [args.title, args.description || null, args.project_id,
         args.assignee || null, args.priority, args.due_date || null]
      );

      var task = result.rows[0];
      return {
        content: [{
          type: "text",
          text: "Task created successfully:\n\n" + JSON.stringify(task, null, 2)
        }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Error creating task: " + err.message }],
        isError: true
      };
    }
  }
);

// ============================================================
// Tool 2: list_tasks
// ============================================================
server.tool(
  "list_tasks",
  "List tasks with optional filters. Returns tasks sorted by creation date " +
  "(newest first). Supports filtering by project, status, assignee, and " +
  "priority. Maximum 100 results per request.",
  {
    project_id: z.string().optional()
      .describe("Filter by project ID"),
    status: z.enum(["todo", "in_progress", "review", "done", "cancelled"]).optional()
      .describe("Filter by task status"),
    assignee: z.string().optional()
      .describe("Filter by assignee username"),
    priority: z.enum(["low", "medium", "high", "critical"]).optional()
      .describe("Filter by priority level"),
    limit: z.number().min(1).max(100).default(25)
      .describe("Maximum number of tasks to return")
  },
  async function(args) {
    var query = "SELECT id, title, status, priority, assignee, due_date, " +
                "created_at FROM tasks WHERE 1=1";
    var params = [];
    var paramIndex = 1;

    if (args.project_id) {
      query += " AND project_id = $" + paramIndex++;
      params.push(args.project_id);
    }
    if (args.status) {
      query += " AND status = $" + paramIndex++;
      params.push(args.status);
    }
    if (args.assignee) {
      query += " AND assignee = $" + paramIndex++;
      params.push(args.assignee);
    }
    if (args.priority) {
      query += " AND priority = $" + paramIndex++;
      params.push(args.priority);
    }

    query += " ORDER BY created_at DESC LIMIT $" + paramIndex;
    params.push(args.limit);

    try {
      var result = await pool.query(query, params);

      // Derive a count query with the same filters. Stripping everything from
      // ORDER BY onward also removes the LIMIT placeholder, so the limit
      // param is sliced off below. This works because the query is built as
      // a single line with ORDER BY always preceding LIMIT.
      var countQuery = query.replace(
        /SELECT .* FROM/,
        "SELECT COUNT(*) as total FROM"
      ).replace(/ ORDER BY.*/, "");
      var countResult = await pool.query(countQuery, params.slice(0, -1));
      var total = parseInt(countResult.rows[0].total, 10);

      return {
        content: [{
          type: "text",
          text: "Found " + total + " tasks (showing " + result.rows.length + "):\n\n" +
                JSON.stringify(result.rows, null, 2)
        }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Error listing tasks: " + err.message }],
        isError: true
      };
    }
  }
);

// ============================================================
// Tool 3: update_task_status
// ============================================================
server.tool(
  "update_task_status",
  "Update the status of a task. Valid transitions: todo -> in_progress; " +
  "in_progress -> review or back to todo; review -> done or back to " +
  "in_progress; done -> in_progress (reopen); cancelled -> todo (restore). " +
  "Any of todo, in_progress, or review can also move to cancelled. " +
  "Returns the updated task.",
  {
    task_id: z.string().describe("Task ID to update (UUID format)"),
    status: z.enum(["todo", "in_progress", "review", "done", "cancelled"])
      .describe("New status for the task"),
    comment: z.string().max(1000).optional()
      .describe("Optional comment explaining the status change")
  },
  async function(args) {
    try {
      // Fetch current task
      var current = await pool.query(
        "SELECT * FROM tasks WHERE id = $1", [args.task_id]
      );
      if (current.rows.length === 0) {
        return {
          content: [{ type: "text", text: "Error: Task not found with ID " + args.task_id }],
          isError: true
        };
      }

      var task = current.rows[0];
      var validTransitions = {
        "todo": ["in_progress", "cancelled"],
        "in_progress": ["review", "todo", "cancelled"],
        "review": ["done", "in_progress", "cancelled"],
        "done": ["in_progress"],
        "cancelled": ["todo"]
      };

      var allowed = validTransitions[task.status] || [];
      if (allowed.indexOf(args.status) === -1) {
        return {
          content: [{ type: "text", text: "Error: Invalid status transition from '" +
            task.status + "' to '" + args.status + "'. Allowed transitions: " +
            allowed.join(", ") }],
          isError: true
        };
      }

      // Update the task
      var result = await pool.query(
        "UPDATE tasks SET status = $1, updated_at = NOW() WHERE id = $2 RETURNING *",
        [args.status, args.task_id]
      );

      // Record the transition in task_history (only when a comment is provided)
      if (args.comment) {
        await pool.query(
          "INSERT INTO task_history (task_id, from_status, to_status, comment, created_at) " +
          "VALUES ($1, $2, $3, $4, NOW())",
          [args.task_id, task.status, args.status, args.comment]
        );
      }

      return {
        content: [{
          type: "text",
          text: "Task status updated from '" + task.status + "' to '" +
            args.status + "':\n\n" + JSON.stringify(result.rows[0], null, 2)
        }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Error updating task: " + err.message }],
        isError: true
      };
    }
  }
);

// ============================================================
// Tool 4: search_tasks
// ============================================================
server.tool(
  "search_tasks",
  "Full-text search across task titles and descriptions. Uses PostgreSQL " +
  "full-text search for relevance ranking. Returns the top matching tasks " +
  "ordered by relevance score.",
  {
    query: z.string().min(2)
      .describe("Search query. Supports multiple words (AND logic). Minimum 2 characters."),
    project_id: z.string().optional()
      .describe("Limit search to a specific project"),
    limit: z.number().min(1).max(50).default(10)
      .describe("Maximum number of results")
  },
  async function(args) {
    try {
      // plainto_tsquery parses the raw query safely and ANDs the words
      // together; raw to_tsquery would throw a syntax error on input
      // containing punctuation like apostrophes or ampersands
      var sql = "SELECT id, title, description, status, priority, assignee, " +
        "ts_rank(to_tsvector('english', title || ' ' || COALESCE(description, '')), " +
        "plainto_tsquery('english', $1)) as relevance " +
        "FROM tasks WHERE to_tsvector('english', title || ' ' || COALESCE(description, '')) " +
        "@@ plainto_tsquery('english', $1)";

      var params = [args.query];
      var paramIndex = 2;

      if (args.project_id) {
        sql += " AND project_id = $" + paramIndex++;
        params.push(args.project_id);
      }

      sql += " ORDER BY relevance DESC LIMIT $" + paramIndex;
      params.push(args.limit);

      var result = await pool.query(sql, params);

      if (result.rows.length === 0) {
        return {
          content: [{ type: "text", text: "No tasks found matching query: '" +
            args.query + "'" }]
        };
      }

      return {
        content: [{
          type: "text",
          text: "Found " + result.rows.length + " matching tasks:\n\n" +
                JSON.stringify(result.rows, null, 2)
        }]
      };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Search error: " + err.message }],
        isError: true
      };
    }
  }
);

// ============================================================
// Tool 5: generate_project_report
// ============================================================
server.tool(
  "generate_project_report",
  "Generate a comprehensive status report for a project. Includes task " +
  "counts by status, overdue tasks, recent activity, and team workload. " +
  "This tool may take 5-10 seconds for large projects.",
  {
    project_id: z.string().describe("Project ID to generate report for"),
    include_details: z.boolean().default(false)
      .describe("Include individual task details in the report (more verbose)")
  },
  async function(args, extra) {
    try {
      // Verify project exists
      var project = await pool.query(
        "SELECT * FROM projects WHERE id = $1", [args.project_id]
      );
      if (project.rows.length === 0) {
        return {
          content: [{ type: "text", text: "Error: Project not found: " + args.project_id }],
          isError: true
        };
      }

      if (extra.reportProgress) {
        await extra.reportProgress({ progress: 1, total: 5 });
      }

      // Status breakdown
      var statusCounts = await pool.query(
        "SELECT status, COUNT(*) as count FROM tasks " +
        "WHERE project_id = $1 GROUP BY status ORDER BY status",
        [args.project_id]
      );

      if (extra.reportProgress) {
        await extra.reportProgress({ progress: 2, total: 5 });
      }

      // Overdue tasks
      var overdue = await pool.query(
        "SELECT id, title, assignee, due_date FROM tasks " +
        "WHERE project_id = $1 AND due_date < NOW() AND status NOT IN ('done', 'cancelled') " +
        "ORDER BY due_date ASC",
        [args.project_id]
      );

      if (extra.reportProgress) {
        await extra.reportProgress({ progress: 3, total: 5 });
      }

      // Team workload
      var workload = await pool.query(
        "SELECT assignee, COUNT(*) as active_tasks FROM tasks " +
        "WHERE project_id = $1 AND status IN ('todo', 'in_progress', 'review') " +
        "AND assignee IS NOT NULL GROUP BY assignee ORDER BY active_tasks DESC",
        [args.project_id]
      );

      if (extra.reportProgress) {
        await extra.reportProgress({ progress: 4, total: 5 });
      }

      // Recent activity (last 7 days)
      var recent = await pool.query(
        "SELECT id, title, status, updated_at FROM tasks " +
        "WHERE project_id = $1 AND updated_at > NOW() - INTERVAL '7 days' " +
        "ORDER BY updated_at DESC LIMIT 20",
        [args.project_id]
      );

      if (extra.reportProgress) {
        await extra.reportProgress({ progress: 5, total: 5 });
      }

      // Build report
      var report = "# Project Report: " + project.rows[0].name + "\n\n";
      report += "Generated: " + new Date().toISOString() + "\n\n";

      report += "## Task Status Breakdown\n\n";
      var totalTasks = 0;
      statusCounts.rows.forEach(function(row) {
        report += "- **" + row.status + "**: " + row.count + "\n";
        totalTasks += parseInt(row.count, 10);
      });
      report += "- **Total**: " + totalTasks + "\n\n";

      report += "## Overdue Tasks (" + overdue.rows.length + ")\n\n";
      if (overdue.rows.length === 0) {
        report += "No overdue tasks. Nice work!\n\n";
      } else {
        overdue.rows.forEach(function(task) {
          report += "- **" + task.title + "** (assigned: " +
            (task.assignee || "unassigned") + ", due: " + task.due_date + ")\n";
        });
        report += "\n";
      }

      report += "## Team Workload\n\n";
      workload.rows.forEach(function(row) {
        report += "- **" + row.assignee + "**: " + row.active_tasks + " active tasks\n";
      });
      report += "\n";

      report += "## Recent Activity (Last 7 Days)\n\n";
      recent.rows.forEach(function(task) {
        report += "- " + task.title + " [" + task.status + "] - " + task.updated_at + "\n";
      });

      if (args.include_details) {
        var allTasks = await pool.query(
          "SELECT * FROM tasks WHERE project_id = $1 ORDER BY status, priority DESC",
          [args.project_id]
        );
        report += "\n## All Tasks (Detailed)\n\n";
        report += JSON.stringify(allTasks.rows, null, 2);
      }

      return { content: [{ type: "text", text: report }] };
    } catch (err) {
      return {
        content: [{ type: "text", text: "Error generating report: " + err.message }],
        isError: true
      };
    }
  }
);

// ============================================================
// Server startup
// ============================================================
async function main() {
  // Verify database connection
  try {
    await pool.query("SELECT 1");
    console.error("Database connection verified");
  } catch (err) {
    console.error("WARNING: Database not available: " + err.message);
    console.error("Tools requiring database will return errors");
  }

  var transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Project Manager MCP server running on stdio");
}

main().catch(function(err) {
  console.error("Fatal error:", err);
  process.exit(1);
});

The database schema for this server:

CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name VARCHAR(200) NOT NULL,
  description TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE tasks (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  title VARCHAR(200) NOT NULL,
  description TEXT,
  project_id UUID NOT NULL REFERENCES projects(id),
  assignee VARCHAR(100),
  priority VARCHAR(20) NOT NULL DEFAULT 'medium',
  status VARCHAR(20) NOT NULL DEFAULT 'todo',
  due_date DATE,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
  updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE task_history (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  task_id UUID NOT NULL REFERENCES tasks(id),
  from_status VARCHAR(20) NOT NULL,
  to_status VARCHAR(20) NOT NULL,
  comment TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Index for full-text search
CREATE INDEX idx_tasks_fts ON tasks
  USING gin(to_tsvector('english', title || ' ' || COALESCE(description, '')));

-- Index for common query patterns
CREATE INDEX idx_tasks_project_status ON tasks(project_id, status);
CREATE INDEX idx_tasks_assignee ON tasks(assignee) WHERE assignee IS NOT NULL;
CREATE INDEX idx_tasks_due_date ON tasks(due_date) WHERE due_date IS NOT NULL;

Configure Claude Desktop to use this server:

{
  "mcpServers": {
    "project-manager": {
      "command": "node",
      "args": ["server.js"],
      "cwd": "/path/to/project-manager",
      "env": {
        "DATABASE_URL": "postgresql://user:pass@localhost:5432/projects"
      }
    }
  }
}

Common Issues & Troubleshooting

1. Tool Not Appearing in Claude Desktop

Symptom: You register a tool but it does not show up in Claude's tool list.

Error: Server "project-manager" failed to start

Cause: Your server is writing non-JSON-RPC output to stdout. All log messages must go to stderr.

Fix:

// Wrong -- this breaks the protocol
console.log("Server starting...");

// Correct -- use stderr for all logging
console.error("Server starting...");
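A small helper makes this habit systematic. This sketch (the JSON-line format is my choice, not an SDK requirement) emits structured log entries on stderr only:

```javascript
// Structured logging that never touches stdout. Every entry is a single
// JSON line on stderr, so it cannot corrupt the JSON-RPC stream.
function log(level, message, fields) {
  var entry = Object.assign({
    level: level,
    message: message,
    timestamp: new Date().toISOString()
  }, fields || {});
  process.stderr.write(JSON.stringify(entry) + "\n");
}

log("info", "Server starting...", { transport: "stdio" });
```

Point any logging framework you use at the same stream; the moment a library defaults to stdout, the connection dies.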

2. Zod Validation Errors Not Reaching the Model

Symptom: The model retries the same tool call repeatedly with the same arguments and gets cryptic errors.

McpError: Invalid params: Expected string, received number at "user_id"

Cause: Zod validation errors are thrown as protocol-level errors, not tool results. Some clients do not surface these well. The model sees a generic failure and retries.

Fix: Add explicit validation in your handler with user-friendly error messages, in addition to the Zod schema:

async function(args) {
  if (!args.user_id || typeof args.user_id !== "string") {
    return {
      content: [{ type: "text", text: "Error: user_id must be a non-empty string" }],
      isError: true
    };
  }
  // ... rest of handler
}
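To avoid duplicating these checks in every handler, a wrapper (a hypothetical helper, not part of the SDK) can run the schema's safeParse and turn failures into readable tool-level errors:

```javascript
// Wraps a tool handler so schema failures become tool results with
// isError: true (which the model can read and correct on retry), rather
// than protocol-level exceptions. Works with any Zod schema object, since
// it only relies on the standard safeParse() method.
function withValidation(schema, handler) {
  return async function(args, extra) {
    var parsed = schema.safeParse(args || {});
    if (!parsed.success) {
      var issues = parsed.error.issues.map(function(issue) {
        return issue.path.join(".") + ": " + issue.message;
      }).join("; ");
      return {
        content: [{ type: "text", text: "Invalid arguments -- " + issues }],
        isError: true
      };
    }
    return handler(parsed.data, extra);
  };
}
```

Wrap the handler at registration time, building the Zod object schema from the same shape you already pass to server.tool(). The SDK still validates first; the wrapper catches whatever a lenient client lets through.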

3. Database Connection Pool Exhaustion

Symptom: Tools start timing out after sustained use.

Error: Connection terminated due to connection timeout
Error: sorry, too many clients already

Cause: Tool handlers acquiring database connections but not releasing them on error paths.

Fix: Always use try/finally or connection pooling with automatic release:

// Wrong -- leaks connection on error
async function(args) {
  var client = await pool.connect();
  var result = await client.query("SELECT * FROM users");
  client.release();  // Never reached if query throws
  return { content: [{ type: "text", text: JSON.stringify(result.rows) }] };
}

// Correct -- always releases connection
async function(args) {
  var client = await pool.connect();
  try {
    var result = await client.query("SELECT * FROM users");
    return { content: [{ type: "text", text: JSON.stringify(result.rows) }] };
  } finally {
    client.release();
  }
}

// Best -- use pool.query() which handles connection lifecycle automatically
async function(args) {
  var result = await pool.query("SELECT * FROM users");
  return { content: [{ type: "text", text: JSON.stringify(result.rows) }] };
}
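Pool-level limits complement the handler-side fixes. This configuration sketch uses node-postgres options; the numeric values are assumptions to tune per workload:

```javascript
var { Pool } = require("pg");

var pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                       // hard cap on concurrent connections
  idleTimeoutMillis: 30000,      // recycle idle clients after 30s
  connectionTimeoutMillis: 5000, // fail fast when the pool is exhausted
  statement_timeout: 30000       // Postgres aborts any query over 30s
});
```

With connectionTimeoutMillis set, a leaked connection surfaces as a fast, explicit error instead of an indefinite hang.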

4. Large Response Bodies Causing Timeouts

Symptom: Tools that return large datasets cause the client to hang or crash.

Error: Request timed out after 60000ms

Cause: Returning megabytes of JSON in a single tool response. The client and model both have limits on response size.

Fix: Implement pagination and response size limits:

async function(args) {
  var results = await pool.query("SELECT * FROM logs LIMIT $1", [args.limit]);

  var responseText = JSON.stringify(results.rows, null, 2);

  // Cap response size at 100KB
  if (responseText.length > 100000) {
    var truncatedRows = results.rows.slice(0, Math.floor(results.rows.length / 2));
    responseText = JSON.stringify(truncatedRows, null, 2);
    responseText += "\n\n[Response truncated. " + results.rows.length +
      " total results available. Use a smaller limit or more specific filters.]";
  }

  return { content: [{ type: "text", text: responseText }] };
}
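The single halving above can still overshoot when individual rows are large. A small helper (hypothetical, not from the SDK) can halve repeatedly until the serialized payload fits the cap:

```javascript
// Shrink a result set until its pretty-printed JSON fits under maxBytes.
// Returns the serialized text plus counts the caller can report back to
// the model so it knows to refine its query.
function capRows(rows, maxBytes) {
  var kept = rows.slice();
  var text = JSON.stringify(kept, null, 2);
  while (text.length > maxBytes && kept.length > 1) {
    kept = kept.slice(0, Math.ceil(kept.length / 2));
    text = JSON.stringify(kept, null, 2);
  }
  return {
    text: text,
    kept: kept.length,
    total: rows.length,
    truncated: kept.length < rows.length
  };
}
```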

5. Stale In-Memory State Across Reconnections

Symptom: Cursor-based pagination or session state breaks when the client reconnects.

Error: Cursor expired or invalid. Start a new query without cursor_id.

Cause: MCP clients may disconnect and reconnect to your server at any time. If your state is stored in a Map or in-memory object, it is lost on restart.

Fix: Either persist state to a database, or design tools to be fully stateless using offset-based pagination instead of cursor-based:

server.tool("list_records", "...", {
  offset: z.number().default(0).describe("Number of records to skip"),
  limit: z.number().default(25)
}, async function(args) {
  var result = await pool.query(
    "SELECT * FROM records ORDER BY id LIMIT $1 OFFSET $2",
    [args.limit, args.offset]
  );
  return {
    content: [{ type: "text", text: JSON.stringify({
      records: result.rows,
      offset: args.offset,
      limit: args.limit,
      next_offset: args.offset + result.rows.length,
      has_more: result.rows.length === args.limit
    }, null, 2) }]
  };
});

Best Practices

  • Write descriptions for the model, not for humans. The model reads tool and parameter descriptions to decide when and how to use each tool. Be specific about inputs, outputs, side effects, and error cases. A well-written description eliminates 90% of tool misuse.

  • Return structured data, not raw text. Even when the final output is a text block, include metadata like result counts, pagination info, and timestamps. The model uses this context to decide whether to make follow-up calls.

  • Always validate paths against an allowed directory. File system tools should resolve all paths and check they fall within a configured root. Path traversal via ../../etc/passwd is the most common MCP security issue I have seen.

  • Use parameterized queries exclusively for user-supplied values. Never interpolate model-generated strings into SQL. Even though the model is not a malicious user, it can generate unexpected input. The only exception is values from Zod enums, which are constrained to a fixed set at validation time.

  • Keep tools focused and composable. A tool that does one thing well is more useful than a tool that tries to handle every case. The model is excellent at chaining multiple simple tools together. Give it get_task and update_task rather than get_and_maybe_update_task.

  • Log everything to stderr. stdout is reserved for JSON-RPC messages. Any stray console.log will corrupt the protocol stream and crash the connection. Redirect your logging framework to stderr.

  • Test tools with the InMemoryTransport. The SDK provides an in-memory transport that lets you test the full tool lifecycle -- registration, discovery, argument validation, and execution -- without starting a real server. This catches issues that unit tests miss.

  • Set statement timeouts on database queries. A runaway query can lock up your entire MCP server. Run SET statement_timeout = '30s' on each connection, or use pool-level configuration, to limit query execution time.

  • Cap response sizes. The model has a context window limit, and sending it 50MB of JSON is worse than useless. Implement hard limits on response size (I use 100KB as a default) and tell the model how many results were truncated so it can refine its query.

  • Design for reconnection. MCP connections can drop without warning. Any state that lives only in memory will be lost. Either make tools stateless or persist state to a durable store. If you must use in-memory state, document the expiration behavior in the tool description.
