Building a Code Analysis MCP Server

Build a code analysis MCP server with AST parsing, dependency graphing, complexity metrics, and code search capabilities.

Overview

The Model Context Protocol (MCP) gives AI assistants structured access to external tools and data sources. Point that capability at code analysis and you get something genuinely useful: an AI that can parse your code into an AST, trace your dependency graph, calculate complexity metrics, and search your codebase with precision instead of guessing from context-window snippets. This article walks through building a production-grade MCP server in Node.js that provides real static analysis capabilities, not toy examples.

Prerequisites

  • Node.js v18 or later
  • Familiarity with Express.js or any HTTP server framework in Node
  • Basic understanding of Abstract Syntax Trees (ASTs)
  • Claude Desktop or another MCP-compatible client installed
  • A codebase you want to analyze (we will use a sample Express project)

Install the core dependencies before we begin:

npm init -y
npm install @modelcontextprotocol/sdk zod acorn acorn-walk madge
  • @modelcontextprotocol/sdk - Official MCP SDK for building servers
  • zod - Schema validation library the SDK uses to describe tool parameters
  • acorn - Fast, lightweight JavaScript parser that produces ESTree-compliant ASTs
  • acorn-walk - AST traversal utilities for acorn
  • madge - Dependency graph generation from module imports/requires

Designing Tools for Code Analysis

An MCP server exposes three primitives: tools (functions the AI can call), resources (data the AI can read), and prompts (templates for common interactions). For code analysis, the design breaks down naturally:

Primitive   Use Case                                  Example
Tool        Active analysis that takes parameters     analyze_complexity, search_code, parse_ast
Resource    Static or semi-static project data        File tree, dependency graph, project config
Prompt      Guided analysis workflows                 "Review this file for anti-patterns"

The mistake I see most people make is cramming everything into tools. Resources are cheaper for the AI to consume because they do not require a function call round-trip. If the data does not change between requests (like your project's file tree), expose it as a resource.
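
Prompts do not appear again in this article, so for completeness, here is a minimal sketch of registering one once the server skeleton from the next section is in place. The prompt name and wording are illustrative, and the handler assumes the SDK's prompt callback shape:

// Sketch: an MCP prompt template for a guided review workflow
server.prompt(
  "review-for-anti-patterns",
  "Review a file for common anti-patterns",
  { filePath: z.string().describe("File to review") },
  function (params) {
    return {
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: "Review " + params.filePath + " for anti-patterns. " +
                "Call detect_smells and analyze_complexity first, then summarize the findings."
        }
      }]
    };
  }
);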

Setting Up the MCP Server Skeleton

Let us start with the foundation. The MCP SDK handles the JSON-RPC protocol layer so you can focus on analysis logic.

// server.js
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { z } = require("zod");
var fs = require("fs");
var path = require("path");
var acorn = require("acorn");
var walk = require("acorn-walk");

var server = new McpServer({
  name: "code-analysis",
  version: "1.0.0"
});

// We will register tools and resources in the following sections

var transport = new StdioServerTransport();
server.connect(transport).then(function () {
  console.error("Code Analysis MCP Server running on stdio");
});

Note the console.error instead of console.log. MCP servers communicate over stdio, so stdout is reserved for the protocol. All logging must go to stderr.

AST Parsing with Acorn

The AST is the foundation of every analysis tool we build. Acorn parses JavaScript into an ESTree-compliant tree structure that we can walk, query, and measure.

// lib/ast-parser.js
var acorn = require("acorn");
var walk = require("acorn-walk");
var fs = require("fs");

function parseFile(filePath) {
  var source = fs.readFileSync(filePath, "utf-8");
  var ast;
  try {
    ast = acorn.parse(source, {
      ecmaVersion: 2022,
      sourceType: "module",
      locations: true,
      allowHashBang: true,
      // Tolerate common syntax that strict parsing rejects
      allowReturnOutsideFunction: true
    });
  } catch (err) {
    // Fall back to script mode if module parsing fails
    try {
      ast = acorn.parse(source, {
        ecmaVersion: 2022,
        sourceType: "script",
        locations: true,
        allowHashBang: true,
        allowReturnOutsideFunction: true
      });
    } catch (fallbackErr) {
      return {
        error: "Parse failed: " + fallbackErr.message,
        line: fallbackErr.loc ? fallbackErr.loc.line : null,
        column: fallbackErr.loc ? fallbackErr.loc.column : null
      };
    }
  }
  return { ast: ast, source: source };
}

function extractFunctions(ast) {
  var functions = [];

  walk.simple(ast, {
    FunctionDeclaration: function (node) {
      functions.push({
        name: node.id ? node.id.name : "<anonymous>",
        line: node.loc.start.line,
        endLine: node.loc.end.line,
        params: node.params.length,
        bodyLength: node.loc.end.line - node.loc.start.line + 1
      });
    },
    FunctionExpression: function (node) {
      functions.push({
        name: node.id ? node.id.name : "<anonymous>",
        line: node.loc.start.line,
        endLine: node.loc.end.line,
        params: node.params.length,
        bodyLength: node.loc.end.line - node.loc.start.line + 1
      });
    },
    ArrowFunctionExpression: function (node) {
      functions.push({
        name: "<arrow>",
        line: node.loc.start.line,
        endLine: node.loc.end.line,
        params: node.params.length,
        bodyLength: node.loc.end.line - node.loc.start.line + 1
      });
    }
  });

  return functions;
}

module.exports = { parseFile: parseFile, extractFunctions: extractFunctions };

The dual-mode parsing (module first, then script fallback) is important. Real codebases mix CommonJS and ESM freely, and you do not want your analysis server crashing on a stray require() in a file declared as a module.
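
To see how this module plugs into the server, here is a sketch of a list_functions tool that wraps parseFile and extractFunctions. The tool name and wiring are illustrative (the complete server later in the article inlines its analysis instead), and it assumes the server, z, and path variables from the skeleton:

// Sketch: expose the parser as an MCP tool (illustrative wiring)
var { parseFile, extractFunctions } = require("./lib/ast-parser");

var projectRoot = process.env.PROJECT_ROOT || process.cwd();

server.tool(
  "list_functions",
  "List every function in a JavaScript file with its location and parameter count",
  {
    filePath: z.string().describe("Path to the JavaScript file (relative to project root)")
  },
  function (params) {
    var parsed = parseFile(path.resolve(projectRoot, params.filePath));
    if (parsed.error) {
      return { content: [{ type: "text", text: "Parse error: " + parsed.error }] };
    }
    return {
      content: [{ type: "text", text: JSON.stringify(extractFunctions(parsed.ast), null, 2) }]
    };
  }
);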

Implementing Complexity Metrics

Cyclomatic complexity measures the number of linearly independent paths through a function. Every if, for, while, case, catch, &&, ||, and ?: adds a path. A function with complexity above 10 is a candidate for refactoring. Above 20, it is almost certainly a problem.
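
For intuition, here is a tiny hypothetical function and how the counting works:

// Complexity = 1 (base) + 1 (if) + 1 (&&) + 1 (ternary) = 4
function shippingCost(order) {
  if (order.items.length === 0 && order.digital !== true) {  // +1 if, +1 &&
    return 0;
  }
  return order.total > 100 ? 0 : 5;                          // +1 ternary
}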

// lib/complexity.js
var walk = require("acorn-walk");

function calculateCyclomaticComplexity(ast, functionNode) {
  var complexity = 1; // Base path

  walk.simple(functionNode, {
    IfStatement: function () { complexity++; },
    ConditionalExpression: function () { complexity++; },
    ForStatement: function () { complexity++; },
    ForInStatement: function () { complexity++; },
    ForOfStatement: function () { complexity++; },
    WhileStatement: function () { complexity++; },
    DoWhileStatement: function () { complexity++; },
    SwitchCase: function (node) {
      // Default case does not add complexity
      if (node.test !== null) complexity++;
    },
    CatchClause: function () { complexity++; },
    LogicalExpression: function (node) {
      if (node.operator === "&&" || node.operator === "||") {
        complexity++;
      }
    }
  });

  return complexity;
}

function analyzeFileComplexity(ast, source) {
  var lines = source.split("\n");
  var totalLines = lines.length;
  var codeLines = lines.filter(function (line) {
    var trimmed = line.trim();
    return trimmed.length > 0 && !trimmed.startsWith("//") && !trimmed.startsWith("*");
  }).length;
  var commentLines = lines.filter(function (line) {
    var trimmed = line.trim();
    return trimmed.startsWith("//") || trimmed.startsWith("*") || trimmed.startsWith("/*");
  }).length;

  var functionComplexities = [];

  walk.simple(ast, {
    FunctionDeclaration: function (node) {
      functionComplexities.push({
        name: node.id ? node.id.name : "<anonymous>",
        line: node.loc.start.line,
        complexity: calculateCyclomaticComplexity(ast, node),
        length: node.loc.end.line - node.loc.start.line + 1
      });
    },
    FunctionExpression: function (node) {
      functionComplexities.push({
        name: node.id ? node.id.name : "<anonymous>",
        line: node.loc.start.line,
        complexity: calculateCyclomaticComplexity(ast, node),
        length: node.loc.end.line - node.loc.start.line + 1
      });
    }
  });

  // Sort by complexity descending so the worst offenders appear first
  functionComplexities.sort(function (a, b) {
    return b.complexity - a.complexity;
  });

  return {
    totalLines: totalLines,
    codeLines: codeLines,
    commentLines: commentLines,
    commentRatio: commentLines / (codeLines || 1),
    functions: functionComplexities,
    averageComplexity: functionComplexities.length > 0
      ? functionComplexities.reduce(function (sum, f) { return sum + f.complexity; }, 0) / functionComplexities.length
      : 0,
    maxComplexity: functionComplexities.length > 0 ? functionComplexities[0].complexity : 0
  };
}

module.exports = {
  calculateCyclomaticComplexity: calculateCyclomaticComplexity,
  analyzeFileComplexity: analyzeFileComplexity
};

When Claude asks your MCP server to analyze a file, it gets back structured data like this:

{
  "totalLines": 342,
  "codeLines": 278,
  "commentLines": 31,
  "commentRatio": 0.111,
  "functions": [
    { "name": "processTransaction", "line": 45, "complexity": 14, "length": 67 },
    { "name": "validateInput", "line": 112, "complexity": 9, "length": 34 },
    { "name": "formatResponse", "line": 200, "complexity": 3, "length": 12 }
  ],
  "averageComplexity": 8.67,
  "maxComplexity": 14
}

That is far more useful than "this file looks complex." The AI can now make specific, data-backed recommendations.

Detecting Code Smells and Anti-Patterns

Static analysis shines when you codify institutional knowledge about what "bad" looks like. Here are patterns I have seen cause real production incidents:

// lib/code-smells.js
var walk = require("acorn-walk");

var SMELL_RULES = [
  {
    id: "deeply-nested",
    name: "Deep nesting",
    description: "Code nested more than 4 levels indicates complex logic that should be refactored",
    severity: "warning"
  },
  {
    id: "long-function",
    name: "Long function",
    description: "Functions over 50 lines are harder to test and maintain",
    severity: "warning"
  },
  {
    id: "too-many-params",
    name: "Too many parameters",
    description: "Functions with more than 4 parameters should use an options object",
    severity: "info"
  },
  {
    id: "console-log",
    name: "Console.log in production code",
    description: "Use a proper logging library instead of console.log",
    severity: "info"
  },
  {
    id: "empty-catch",
    name: "Empty catch block",
    description: "Swallowing errors silently hides bugs",
    severity: "error"
  },
  {
    id: "magic-number",
    name: "Magic number",
    description: "Numeric literals other than 0 and 1 should be named constants",
    severity: "info"
  }
];

function detectSmells(ast, source) {
  var smells = [];

  // Detect deep nesting
  function checkNestingDepth(node, depth) {
    if (depth > 4) {
      smells.push({
        rule: "deeply-nested",
        severity: "warning",
        line: node.loc.start.line,
        message: "Nesting depth of " + depth + " exceeds threshold of 4"
      });
    }
    var bodyStatements = [];
    if (node.consequent && node.consequent.body) {
      bodyStatements = node.consequent.body;
    } else if (node.body && node.body.body) {
      bodyStatements = node.body.body;
    }
    bodyStatements.forEach(function (child) {
      if (child.type === "IfStatement" || child.type === "ForStatement" ||
          child.type === "WhileStatement" || child.type === "ForOfStatement") {
        checkNestingDepth(child, depth + 1);
      }
    });
  }

  walk.simple(ast, {
    IfStatement: function (node) { checkNestingDepth(node, 1); },
    ForStatement: function (node) { checkNestingDepth(node, 1); },
    WhileStatement: function (node) { checkNestingDepth(node, 1); }
  });

  // Detect long functions
  walk.simple(ast, {
    FunctionDeclaration: function (node) {
      var length = node.loc.end.line - node.loc.start.line + 1;
      if (length > 50) {
        smells.push({
          rule: "long-function",
          severity: "warning",
          line: node.loc.start.line,
          message: "Function '" + (node.id ? node.id.name : "anonymous") +
                   "' is " + length + " lines long (threshold: 50)"
        });
      }
    }
  });

  // Detect too many parameters
  walk.simple(ast, {
    FunctionDeclaration: function (node) {
      if (node.params.length > 4) {
        smells.push({
          rule: "too-many-params",
          severity: "info",
          line: node.loc.start.line,
          message: "Function '" + (node.id ? node.id.name : "anonymous") +
                   "' has " + node.params.length + " parameters (threshold: 4)"
        });
      }
    }
  });

  // Detect empty catch blocks
  walk.simple(ast, {
    CatchClause: function (node) {
      if (node.body.body.length === 0) {
        smells.push({
          rule: "empty-catch",
          severity: "error",
          line: node.loc.start.line,
          message: "Empty catch block silently swallows errors"
        });
      }
    }
  });

  // Detect console.log usage
  walk.simple(ast, {
    CallExpression: function (node) {
      if (node.callee.type === "MemberExpression" &&
          node.callee.object.name === "console" &&
          node.callee.property.name === "log") {
        smells.push({
          rule: "console-log",
          severity: "info",
          line: node.loc.start.line,
          message: "console.log found - use a structured logger in production"
        });
      }
    }
  });

  return smells;
}

module.exports = { detectSmells: detectSmells, SMELL_RULES: SMELL_RULES };
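
The magic-number rule is declared in SMELL_RULES but never detected above. A naive sketch that flags every numeric literal other than 0, 1, and -1 could look like this (refinements such as skipping constant declarations are left out):

// lib/magic-numbers.js - naive magic-number detection, not wired into detectSmells
var walk = require("acorn-walk");

function detectMagicNumbers(ast) {
  var smells = [];
  walk.simple(ast, {
    Literal: function (node) {
      if (typeof node.value === "number" &&
          [0, 1, -1].indexOf(node.value) === -1) {
        smells.push({
          rule: "magic-number",
          severity: "info",
          line: node.loc.start.line,
          message: "Magic number " + node.value + " should be a named constant"
        });
      }
    }
  });
  return smells;
}

module.exports = { detectMagicNumbers: detectMagicNumbers };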

Building a Dependency Analysis Tool

Understanding how files depend on each other is critical in large codebases: a change in one file can cascade through dozens of dependents. We use AST analysis for precision; the madge package installed earlier can complement it with whole-project graph output and circular-dependency detection (see the sketch after the example output below).

// lib/dependency-analyzer.js
var walk = require("acorn-walk");
var fs = require("fs");
var path = require("path");

function extractDependencies(ast, filePath) {
  var deps = {
    requires: [],
    imports: [],
    builtins: [],
    external: [],
    local: []
  };

  var builtinModules = [
    "fs", "path", "http", "https", "url", "util", "os",
    "crypto", "stream", "events", "child_process", "cluster",
    "net", "tls", "dns", "readline", "zlib", "buffer",
    "querystring", "string_decoder", "assert", "timers"
  ];

  walk.simple(ast, {
    CallExpression: function (node) {
      // Detect require() calls
      if (node.callee.name === "require" &&
          node.arguments.length === 1 &&
          node.arguments[0].type === "Literal") {
        var moduleName = node.arguments[0].value;
        deps.requires.push({
          module: moduleName,
          line: node.loc.start.line
        });

        if (builtinModules.indexOf(moduleName) !== -1 ||
            moduleName.startsWith("node:")) {
          deps.builtins.push(moduleName.replace("node:", ""));
        } else if (moduleName.startsWith(".") || moduleName.startsWith("/")) {
          var resolved = resolveLocalModule(filePath, moduleName);
          deps.local.push({
            module: moduleName,
            resolved: resolved,
            line: node.loc.start.line
          });
        } else {
          deps.external.push({
            module: moduleName.split("/")[0],
            full: moduleName,
            line: node.loc.start.line
          });
        }
      }
    },
    ImportDeclaration: function (node) {
      var moduleName = node.source.value;
      deps.imports.push({
        module: moduleName,
        line: node.loc.start.line,
        specifiers: node.specifiers.map(function (s) {
          return s.local.name;
        })
      });
    }
  });

  return deps;
}

function resolveLocalModule(fromFile, relativePath) {
  var dir = path.dirname(fromFile);
  var resolved = path.resolve(dir, relativePath);
  var extensions = [".js", ".json", ".node", "/index.js"];

  for (var i = 0; i < extensions.length; i++) {
    var candidate = resolved + extensions[i];
    if (fs.existsSync(candidate)) {
      return candidate;
    }
  }

  // Check if the path itself exists (e.g., already has extension)
  if (fs.existsSync(resolved)) {
    return resolved;
  }

  return resolved + " (unresolved)";
}

function buildDependencyGraph(rootDir, fileList) {
  var graph = {};

  fileList.forEach(function (filePath) {
    var parsed = require("./ast-parser").parseFile(filePath);
    if (parsed.error) return;

    var deps = extractDependencies(parsed.ast, filePath);
    var relativePath = path.relative(rootDir, filePath);
    graph[relativePath] = {
      local: deps.local.map(function (d) {
        return path.relative(rootDir, d.resolved);
      }),
      external: deps.external.map(function (d) { return d.module; }),
      builtins: deps.builtins
    };
  });

  return graph;
}

module.exports = {
  extractDependencies: extractDependencies,
  buildDependencyGraph: buildDependencyGraph
};

The dependency graph output looks like this for a typical Express project:

{
  "app.js": {
    "local": ["routes/home.js", "routes/articles.js", "routes/contact.js"],
    "external": ["express", "body-parser", "helmet"],
    "builtins": ["path"]
  },
  "routes/articles.js": {
    "local": ["../models/dataAccess.js", "../utils/slugify.js"],
    "external": ["express", "contentful"],
    "builtins": ["url"]
  }
}
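
For whole-project visualization and circular-dependency detection, madge can complement the per-file AST approach. A minimal sketch, assuming madge's promise-based API and a local Graphviz install for the image step:

// Sketch: whole-project dependency graph with madge
var madge = require("madge");

madge("app.js", { baseDir: process.cwd() })
  .then(function (res) {
    console.error(JSON.stringify(res.obj(), null, 2));   // adjacency map of modules
    console.error("Circular dependencies:", res.circular());
    return res.image("dependency-graph.svg");            // optional; requires Graphviz
  })
  .then(function (imagePath) {
    console.error("Graph written to " + imagePath);
  });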

Building a Code Search Tool

Text search across a codebase is the most frequently used tool in practice. When Claude needs to find where a function is defined, where an error message originates, or which files reference a specific API endpoint, search is the first thing it reaches for.

// lib/code-search.js
var fs = require("fs");
var path = require("path");

var DEFAULT_IGNORE = [
  "node_modules", ".git", "dist", "build", "coverage",
  ".nyc_output", ".cache", "__pycache__"
];

function searchFiles(rootDir, pattern, options) {
  options = options || {};
  var maxResults = options.maxResults || 100;
  var fileGlob = options.filePattern || "*.js";
  var caseSensitive = options.caseSensitive !== false;
  var ignoreList = options.ignore || DEFAULT_IGNORE;

  var regex;
  try {
    regex = new RegExp(pattern, caseSensitive ? "g" : "gi");
  } catch (err) {
    return { error: "Invalid regex: " + err.message, results: [] };
  }

  var results = [];
  var filesSearched = 0;
  var startTime = Date.now();

  function shouldIgnore(name) {
    return ignoreList.indexOf(name) !== -1 || name.startsWith(".");
  }

  function matchesGlob(fileName, glob) {
    // Simple glob matching for common patterns
    if (glob === "*") return true;
    if (glob.startsWith("*.")) {
      return fileName.endsWith(glob.substring(1));
    }
    return fileName === glob;
  }

  function walkDir(dir) {
    if (results.length >= maxResults) return;

    var entries;
    try {
      entries = fs.readdirSync(dir, { withFileTypes: true });
    } catch (err) {
      return; // Permission denied, skip
    }

    entries.forEach(function (entry) {
      if (results.length >= maxResults) return;
      if (shouldIgnore(entry.name)) return;

      var fullPath = path.join(dir, entry.name);

      if (entry.isDirectory()) {
        walkDir(fullPath);
      } else if (entry.isFile() && matchesGlob(entry.name, fileGlob)) {
        filesSearched++;
        var content;
        try {
          content = fs.readFileSync(fullPath, "utf-8");
        } catch (err) {
          return;
        }

        var lines = content.split("\n");
        lines.forEach(function (line, index) {
          if (results.length >= maxResults) return;
          regex.lastIndex = 0;
          if (regex.test(line)) {
            results.push({
              file: path.relative(rootDir, fullPath),
              line: index + 1,
              content: line.trim(),
              column: line.search(regex)
            });
          }
        });
      }
    });
  }

  walkDir(rootDir);

  return {
    results: results,
    filesSearched: filesSearched,
    totalResults: results.length,
    truncated: results.length >= maxResults,
    elapsed: Date.now() - startTime + "ms"
  };
}

module.exports = { searchFiles: searchFiles };

This pure-JS search holds up for projects of up to roughly 50,000 files. Beyond that, shell out to ripgrep:

var { execSync } = require("child_process");
var path = require("path");

function ripgrepSearch(rootDir, pattern, options) {
  options = options || {};
  var maxResults = options.maxResults || 100;
  var fileGlob = options.filePattern || "*.js";

  var cmd = "rg --json --max-count " + maxResults +
            " --glob \"" + fileGlob + "\"" +
            " \"" + pattern.replace(/"/g, '\\"') + "\"" +
            " \"" + rootDir + "\"";

  try {
    var output = execSync(cmd, {
      encoding: "utf-8",
      timeout: 30000,
      maxBuffer: 10 * 1024 * 1024
    });

    var results = output.split("\n")
      .filter(function (line) { return line.trim(); })
      .map(function (line) { return JSON.parse(line); })
      .filter(function (entry) { return entry.type === "match"; })
      .map(function (entry) {
        return {
          file: path.relative(rootDir, entry.data.path.text),
          line: entry.data.line_number,
          content: entry.data.lines.text.trim()
        };
      });

    return { results: results, totalResults: results.length };
  } catch (err) {
    if (err.status === 1) {
      return { results: [], totalResults: 0 }; // No matches
    }
    return { error: err.message, results: [] };
  }
}

Exposing a File Tree as an MCP Resource

Resources in MCP are URI-addressable data that the AI can read without making a tool call. A file tree resource lets the AI understand project structure upfront.

// lib/file-tree.js
var fs = require("fs");
var path = require("path");

function buildFileTree(rootDir, options) {
  options = options || {};
  var maxDepth = options.maxDepth || 6;
  var ignoreList = options.ignore || [
    "node_modules", ".git", "dist", "build", "coverage", ".cache"
  ];

  var stats = { files: 0, directories: 0, totalSize: 0 };

  function walk(dir, depth) {
    if (depth > maxDepth) return { name: "...", truncated: true };

    var entries;
    try {
      entries = fs.readdirSync(dir, { withFileTypes: true });
    } catch (err) {
      return { name: path.basename(dir), error: err.code };
    }

    var children = [];
    entries.sort(function (a, b) {
      // Directories first, then alphabetical
      if (a.isDirectory() && !b.isDirectory()) return -1;
      if (!a.isDirectory() && b.isDirectory()) return 1;
      return a.name.localeCompare(b.name);
    });

    entries.forEach(function (entry) {
      if (ignoreList.indexOf(entry.name) !== -1) return;
      if (entry.name.startsWith(".")) return;

      var fullPath = path.join(dir, entry.name);

      if (entry.isDirectory()) {
        stats.directories++;
        children.push({
          name: entry.name,
          type: "directory",
          children: walk(fullPath, depth + 1).children || []
        });
      } else if (entry.isFile()) {
        var fileStat = fs.statSync(fullPath);
        stats.files++;
        stats.totalSize += fileStat.size;
        children.push({
          name: entry.name,
          type: "file",
          size: fileStat.size,
          extension: path.extname(entry.name)
        });
      }
    });

    return { name: path.basename(dir), type: "directory", children: children };
  }

  var tree = walk(rootDir, 0);
  tree.stats = stats;
  return tree;
}

module.exports = { buildFileTree: buildFileTree };
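
Registering this module with the server takes a few lines; the URI and registration below mirror the complete example later in the article:

// Sketch: expose the file tree module as an MCP resource
var { buildFileTree } = require("./lib/file-tree");

server.resource(
  "project-tree",
  "file://project/tree",
  { description: "Project file tree with sizes and directory structure" },
  function () {
    var tree = buildFileTree(process.env.PROJECT_ROOT || process.cwd(), { maxDepth: 6 });
    return {
      contents: [{
        uri: "file://project/tree",
        mimeType: "application/json",
        text: JSON.stringify(tree, null, 2)
      }]
    };
  }
);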

Handling Large Codebases Efficiently

Performance matters when your MCP server is analyzing real projects. A medium-sized Node.js project might have 500 JavaScript files. A monorepo could have 10,000. Here are the techniques that actually work:

1. Lazy Parsing. Do not parse every file on startup. Parse on demand and cache the AST:

var astCache = {};
var CACHE_TTL = 60000; // 1 minute

function getCachedAst(filePath) {
  var cached = astCache[filePath];
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    var stat = fs.statSync(filePath);
    if (stat.mtimeMs <= cached.mtime) {
      return cached.result;
    }
  }

  var result = parseFile(filePath);
  var stat = fs.statSync(filePath);
  astCache[filePath] = {
    result: result,
    timestamp: Date.now(),
    mtime: stat.mtimeMs
  };
  return result;
}

2. File Filtering. Never walk into node_modules, .git, or build directories. This alone cuts 95% of files in a typical project.

3. Stream Processing. For search operations over large files, use streams instead of reading entire files into memory:

var fs = require("fs");
var readline = require("readline");

function searchFileStream(filePath, regex) {
  return new Promise(function (resolve) {
    var results = [];
    var lineNumber = 0;
    var rl = readline.createInterface({
      input: fs.createReadStream(filePath, { encoding: "utf-8" }),
      crlfDelay: Infinity
    });

    rl.on("line", function (line) {
      lineNumber++;
      regex.lastIndex = 0;
      if (regex.test(line)) {
        results.push({ line: lineNumber, content: line.trim() });
      }
    });

    rl.on("close", function () {
      resolve(results);
    });
  });
}

4. Result Limits. Always cap results. An unbounded search for var across a codebase will return thousands of matches and blow up the AI's context window. Default to 100 and let the caller request more, as sketched below.
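
A small helper (the name is illustrative) keeps the limit-and-truncate convention consistent across tools:

// Sketch: cap result arrays before serializing them into a tool response
function capResults(items, limit) {
  limit = limit || 100;
  return {
    results: items.slice(0, limit),
    totalResults: items.length,
    truncated: items.length > limit
  };
}

// Usage inside a tool handler:
// var payload = capResults(allMatches, params.maxResults);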

Complete Working Example

Here is the full MCP server that wires everything together. Save this as server.js and it is ready to connect to Claude Desktop.

// server.js - Complete Code Analysis MCP Server
var { McpServer } = require("@modelcontextprotocol/sdk/server/mcp.js");
var { StdioServerTransport } = require("@modelcontextprotocol/sdk/server/stdio.js");
var { z } = require("zod");
var fs = require("fs");
var path = require("path");
var acorn = require("acorn");
var walk = require("acorn-walk");

var PROJECT_ROOT = process.env.PROJECT_ROOT || process.cwd();

var server = new McpServer({
  name: "code-analysis",
  version: "1.0.0"
});

// ========== TOOL: Analyze Complexity ==========
server.tool(
  "analyze_complexity",
  "Analyze cyclomatic complexity and code metrics for a JavaScript file",
  {
    filePath: z.string().describe("Path to the JavaScript file (relative to project root)")
  },
  function (params) {
    var fullPath = path.resolve(PROJECT_ROOT, params.filePath);

    if (!fs.existsSync(fullPath)) {
      return { content: [{ type: "text", text: "Error: File not found: " + fullPath }] };
    }

    var source = fs.readFileSync(fullPath, "utf-8");
    var ast;
    try {
      ast = acorn.parse(source, {
        ecmaVersion: 2022,
        sourceType: "module",
        locations: true,
        allowReturnOutsideFunction: true
      });
    } catch (e) {
      try {
        ast = acorn.parse(source, {
          ecmaVersion: 2022,
          sourceType: "script",
          locations: true,
          allowReturnOutsideFunction: true
        });
      } catch (e2) {
        return { content: [{ type: "text", text: "Parse error: " + e2.message }] };
      }
    }

    var functions = [];
    walk.simple(ast, {
      FunctionDeclaration: function (node) {
        var complexity = 1;
        walk.simple(node, {
          IfStatement: function () { complexity++; },
          ConditionalExpression: function () { complexity++; },
          ForStatement: function () { complexity++; },
          ForInStatement: function () { complexity++; },
          ForOfStatement: function () { complexity++; },
          WhileStatement: function () { complexity++; },
          DoWhileStatement: function () { complexity++; },
          SwitchCase: function (n) { if (n.test !== null) complexity++; },
          CatchClause: function () { complexity++; },
          LogicalExpression: function (n) {
            if (n.operator === "&&" || n.operator === "||") complexity++;
          }
        });
        functions.push({
          name: node.id ? node.id.name : "<anonymous>",
          line: node.loc.start.line,
          complexity: complexity,
          length: node.loc.end.line - node.loc.start.line + 1
        });
      },
      FunctionExpression: function (node) {
        var complexity = 1;
        walk.simple(node, {
          IfStatement: function () { complexity++; },
          ConditionalExpression: function () { complexity++; },
          ForStatement: function () { complexity++; },
          WhileStatement: function () { complexity++; },
          SwitchCase: function (n) { if (n.test !== null) complexity++; },
          CatchClause: function () { complexity++; },
          LogicalExpression: function (n) {
            if (n.operator === "&&" || n.operator === "||") complexity++;
          }
        });
        functions.push({
          name: node.id ? node.id.name : "<anonymous>",
          line: node.loc.start.line,
          complexity: complexity,
          length: node.loc.end.line - node.loc.start.line + 1
        });
      }
    });

    functions.sort(function (a, b) { return b.complexity - a.complexity; });

    var lines = source.split("\n");
    var result = {
      file: params.filePath,
      totalLines: lines.length,
      codeLines: lines.filter(function (l) {
        var t = l.trim();
        return t.length > 0 && !t.startsWith("//");
      }).length,
      functions: functions,
      maxComplexity: functions.length > 0 ? functions[0].complexity : 0,
      averageComplexity: functions.length > 0
        ? Math.round(functions.reduce(function (s, f) {
            return s + f.complexity;
          }, 0) / functions.length * 100) / 100
        : 0
    };

    return {
      content: [{ type: "text", text: JSON.stringify(result, null, 2) }]
    };
  }
);

// ========== TOOL: Search Code ==========
server.tool(
  "search_code",
  "Search for a pattern across the codebase using regex",
  {
    pattern: z.string().describe("Regex pattern to search for"),
    filePattern: z.string().optional().describe("File glob pattern (default: *.js)"),
    maxResults: z.number().optional().describe("Max results to return (default: 50)")
  },
  function (params) {
    var maxResults = params.maxResults || 50;
    var fileGlob = params.filePattern || "*.js";
    var ignoreList = ["node_modules", ".git", "dist", "build", "coverage"];

    var regex;
    try {
      regex = new RegExp(params.pattern, "g");
    } catch (err) {
      return { content: [{ type: "text", text: "Invalid regex: " + err.message }] };
    }

    var results = [];
    var filesSearched = 0;

    function walkDir(dir) {
      if (results.length >= maxResults) return;
      var entries;
      try {
        entries = fs.readdirSync(dir, { withFileTypes: true });
      } catch (e) { return; }

      entries.forEach(function (entry) {
        if (results.length >= maxResults) return;
        if (ignoreList.indexOf(entry.name) !== -1) return;
        if (entry.name.startsWith(".")) return;

        var fullPath = path.join(dir, entry.name);
        if (entry.isDirectory()) {
          walkDir(fullPath);
        } else if (entry.isFile() && entry.name.endsWith(fileGlob.replace("*", ""))) {
          filesSearched++;
          try {
            var content = fs.readFileSync(fullPath, "utf-8");
            var lines = content.split("\n");
            lines.forEach(function (line, idx) {
              if (results.length >= maxResults) return;
              regex.lastIndex = 0;
              if (regex.test(line)) {
                results.push({
                  file: path.relative(PROJECT_ROOT, fullPath),
                  line: idx + 1,
                  content: line.trim().substring(0, 200)
                });
              }
            });
          } catch (e) { /* skip unreadable files */ }
        }
      });
    }

    walkDir(PROJECT_ROOT);

    var output = {
      query: params.pattern,
      filesSearched: filesSearched,
      totalResults: results.length,
      truncated: results.length >= maxResults,
      results: results
    };

    return {
      content: [{ type: "text", text: JSON.stringify(output, null, 2) }]
    };
  }
);

// ========== TOOL: Analyze Dependencies ==========
server.tool(
  "analyze_dependencies",
  "Extract require/import dependencies from a JavaScript file",
  {
    filePath: z.string().describe("Path to the JavaScript file (relative to project root)")
  },
  function (params) {
    var fullPath = path.resolve(PROJECT_ROOT, params.filePath);

    if (!fs.existsSync(fullPath)) {
      return { content: [{ type: "text", text: "Error: File not found: " + fullPath }] };
    }

    var source = fs.readFileSync(fullPath, "utf-8");
    var ast;
    try {
      ast = acorn.parse(source, {
        ecmaVersion: 2022, sourceType: "module",
        locations: true, allowReturnOutsideFunction: true
      });
    } catch (e) {
      try {
        ast = acorn.parse(source, {
          ecmaVersion: 2022, sourceType: "script",
          locations: true, allowReturnOutsideFunction: true
        });
      } catch (e2) {
        return { content: [{ type: "text", text: "Parse error: " + e2.message }] };
      }
    }

    var builtins = [
      "fs", "path", "http", "https", "url", "util", "os",
      "crypto", "stream", "events", "child_process", "net"
    ];

    var deps = { requires: [], imports: [], local: [], external: [], builtin: [] };

    walk.simple(ast, {
      CallExpression: function (node) {
        if (node.callee.name === "require" &&
            node.arguments.length === 1 &&
            node.arguments[0].type === "Literal") {
          var mod = node.arguments[0].value;
          deps.requires.push({ module: mod, line: node.loc.start.line });

          if (builtins.indexOf(mod) !== -1 || mod.startsWith("node:")) {
            deps.builtin.push(mod);
          } else if (mod.startsWith(".") || mod.startsWith("/")) {
            deps.local.push(mod);
          } else {
            deps.external.push(mod.split("/")[0]);
          }
        }
      },
      ImportDeclaration: function (node) {
        var mod = node.source.value;
        deps.imports.push({ module: mod, line: node.loc.start.line });
      }
    });

    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          file: params.filePath,
          dependencies: deps,
          summary: {
            totalRequires: deps.requires.length,
            totalImports: deps.imports.length,
            localModules: deps.local.length,
            externalPackages: deps.external.length,
            builtinModules: deps.builtin.length
          }
        }, null, 2)
      }]
    };
  }
);

// ========== TOOL: Detect Code Smells ==========
server.tool(
  "detect_smells",
  "Detect code smells and anti-patterns in a JavaScript file",
  {
    filePath: z.string().describe("Path to the JavaScript file (relative to project root)")
  },
  function (params) {
    var fullPath = path.resolve(PROJECT_ROOT, params.filePath);

    if (!fs.existsSync(fullPath)) {
      return { content: [{ type: "text", text: "Error: File not found" }] };
    }

    var source = fs.readFileSync(fullPath, "utf-8");
    var ast;
    try {
      ast = acorn.parse(source, {
        ecmaVersion: 2022, sourceType: "module",
        locations: true, allowReturnOutsideFunction: true
      });
    } catch (e) {
      try {
        ast = acorn.parse(source, {
          ecmaVersion: 2022, sourceType: "script",
          locations: true, allowReturnOutsideFunction: true
        });
      } catch (e2) {
        return { content: [{ type: "text", text: "Parse error: " + e2.message }] };
      }
    }

    var smells = [];

    // Empty catch blocks
    walk.simple(ast, {
      CatchClause: function (node) {
        if (node.body.body.length === 0) {
          smells.push({
            rule: "empty-catch", severity: "error",
            line: node.loc.start.line,
            message: "Empty catch block silently swallows errors"
          });
        }
      }
    });

    // Long functions
    walk.simple(ast, {
      FunctionDeclaration: function (node) {
        var len = node.loc.end.line - node.loc.start.line + 1;
        if (len > 50) {
          smells.push({
            rule: "long-function", severity: "warning",
            line: node.loc.start.line,
            message: "Function '" + (node.id ? node.id.name : "anon") +
                     "' is " + len + " lines (threshold: 50)"
          });
        }
      }
    });

    // Too many params
    walk.simple(ast, {
      FunctionDeclaration: function (node) {
        if (node.params.length > 4) {
          smells.push({
            rule: "too-many-params", severity: "info",
            line: node.loc.start.line,
            message: "Function has " + node.params.length + " parameters (use options object)"
          });
        }
      }
    });

    // console.log
    walk.simple(ast, {
      CallExpression: function (node) {
        if (node.callee.type === "MemberExpression" &&
            node.callee.object.name === "console" &&
            node.callee.property.name === "log") {
          smells.push({
            rule: "console-log", severity: "info",
            line: node.loc.start.line,
            message: "console.log found - use a structured logger"
          });
        }
      }
    });

    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          file: params.filePath,
          smells: smells,
          summary: {
            total: smells.length,
            errors: smells.filter(function (s) { return s.severity === "error"; }).length,
            warnings: smells.filter(function (s) { return s.severity === "warning"; }).length,
            info: smells.filter(function (s) { return s.severity === "info"; }).length
          }
        }, null, 2)
      }]
    };
  }
);

// ========== RESOURCE: File Tree ==========
server.resource(
  "project-tree",
  "file://project/tree",
  { description: "Complete file tree of the project" },
  function () {
    var ignoreList = ["node_modules", ".git", "dist", "build", "coverage"];

    function buildTree(dir, depth) {
      if (depth > 5) return [];
      var entries;
      try {
        entries = fs.readdirSync(dir, { withFileTypes: true });
      } catch (e) { return []; }

      var items = [];
      entries.sort(function (a, b) {
        if (a.isDirectory() && !b.isDirectory()) return -1;
        if (!a.isDirectory() && b.isDirectory()) return 1;
        return a.name.localeCompare(b.name);
      });

      entries.forEach(function (entry) {
        if (ignoreList.indexOf(entry.name) !== -1) return;
        if (entry.name.startsWith(".")) return;

        var fullPath = path.join(dir, entry.name);
        if (entry.isDirectory()) {
          items.push({
            name: entry.name,
            type: "directory",
            children: buildTree(fullPath, depth + 1)
          });
        } else {
          items.push({
            name: entry.name,
            type: "file",
            size: fs.statSync(fullPath).size
          });
        }
      });

      return items;
    }

    var tree = buildTree(PROJECT_ROOT, 0);
    return {
      contents: [{
        uri: "file://project/tree",
        text: JSON.stringify(tree, null, 2),
        mimeType: "application/json"
      }]
    };
  }
);

// ========== START SERVER ==========
var transport = new StdioServerTransport();
server.connect(transport).then(function () {
  console.error("Code Analysis MCP Server started");
  console.error("Project root: " + PROJECT_ROOT);
});

Connecting to Claude Desktop

Add this to your Claude Desktop configuration file at ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows):

{
  "mcpServers": {
    "code-analysis": {
      "command": "node",
      "args": ["C:/path/to/your/server.js"],
      "env": {
        "PROJECT_ROOT": "C:/path/to/your/project"
      }
    }
  }
}

Restart Claude Desktop. You should see the code analysis tools appear in the tools panel. Now you can ask Claude things like "analyze the complexity of my routes/articles.js file" and it will call the analyze_complexity tool directly.

Example Session Output

When you ask Claude to analyze a file, the tool returns structured data:

> Analyze the complexity of routes/articles.js

The file has 342 total lines with 278 lines of code. Here are the
complexity results:

- processArticle (line 45): complexity 14 - HIGH. This function has
  14 independent paths. Consider extracting the validation logic
  into a separate function.
- getArticleBySlug (line 112): complexity 9 - MODERATE. The fallback
  chain for slug resolution adds paths.
- formatResponse (line 200): complexity 3 - LOW. Clean and simple.

Average complexity: 8.67
File has 2 code smells: 1 empty catch block (line 89), 1 console.log
(line 156).

Common Issues and Troubleshooting

1. Parse errors on JSX or TypeScript files

SyntaxError: Unexpected token (14:8)
  at Parser.pp$4.raise (acorn/src/location.js:159:13)

Acorn only parses standard JavaScript. For JSX, install acorn-jsx. For TypeScript, you need a different parser entirely. The pragmatic solution is to detect file extensions and skip unsupported files with a clear message:

var SUPPORTED_EXTENSIONS = [".js", ".mjs", ".cjs"];

function canParse(filePath) {
  var ext = path.extname(filePath).toLowerCase();
  if (SUPPORTED_EXTENSIONS.indexOf(ext) === -1) {
    return {
      supported: false,
      reason: "Extension " + ext + " not supported. Use acorn-jsx for .jsx or @typescript-eslint/parser for .ts files."
    };
  }
  return { supported: true };
}

2. MCP server not appearing in Claude Desktop

Error: spawn node ENOENT

This means Claude Desktop cannot find the node binary. Use the full path to node in your config:

{
  "mcpServers": {
    "code-analysis": {
      "command": "C:/Program Files/nodejs/node.exe",
      "args": ["C:/projects/mcp-server/server.js"]
    }
  }
}

On macOS, the path is typically /usr/local/bin/node or whatever running which node reports.

3. Out of memory on large dependency graphs

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

This happens when you try to parse every file in a monorepo at once. Solution: set NODE_OPTIONS in your MCP config and implement pagination:

{
  "env": {
    "NODE_OPTIONS": "--max-old-space-size=4096",
    "PROJECT_ROOT": "/path/to/project"
  }
}

Also, add depth limits to your dependency graph traversal and only analyze files the AI actually asks about.
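
One way to implement that pagination is to add offset and limit parameters to heavy tools and slice the result set. A sketch using the same registration style as the complete server (collectJsFiles is a hypothetical helper that walks the project and returns file paths):

// Sketch: paginated variant of a heavy analysis tool
server.tool(
  "list_project_files",
  "List analyzable files in pages to avoid oversized responses",
  {
    offset: z.number().optional().describe("Index of the first file to return (default: 0)"),
    limit: z.number().optional().describe("Maximum files per page (default: 200)")
  },
  function (params) {
    var offset = params.offset || 0;
    var limit = params.limit || 200;
    var files = collectJsFiles(PROJECT_ROOT); // hypothetical helper
    var page = files.slice(offset, offset + limit);
    return {
      content: [{
        type: "text",
        text: JSON.stringify({
          total: files.length,
          offset: offset,
          returned: page.length,
          hasMore: offset + limit < files.length,
          files: page
        }, null, 2)
      }]
    };
  }
);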

4. Stale results after file changes

Tool returned complexity of 14 for processTransaction, but I just
refactored it. The tool is reporting old results.

The AST cache is the culprit. Check the file's mtime before returning cached results:

function getCachedAst(filePath) {
  var cached = astCache[filePath];
  if (cached) {
    var currentStat = fs.statSync(filePath);
    if (currentStat.mtimeMs <= cached.mtime) {
      return cached.result;
    }
  }

  // Cache miss or stale - reparse
  var result = parseFile(filePath);
  var stat = fs.statSync(filePath);
  astCache[filePath] = {
    result: result,
    mtime: stat.mtimeMs
  };
  return result;
}

5. Regex special characters in search patterns

SyntaxError: Invalid regular expression: /user.find({/: Unexpected token {

User-provided search patterns often contain characters that are special in regex. Provide a literal search mode:

function escapeRegex(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// In your search tool handler:
var searchPattern = params.literal
  ? escapeRegex(params.pattern)
  : params.pattern;

Best Practices

  • Keep tools focused. One tool should do one thing well. Do not build a single analyze tool that takes a mode parameter to switch between 10 different behaviors. Claude works better with 5 specific tools than 1 kitchen-sink tool.

  • Return structured data, not prose. Let the AI turn your JSON into natural language. When your tool returns { "complexity": 14, "threshold": 10 }, the AI can say "this function's complexity of 14 exceeds the recommended threshold of 10." If your tool returns "The complexity is high," the AI has nothing to work with.

  • Set hard limits on output size. MCP tool responses consume tokens in the AI's context window. Cap search results at 50-100 matches. Truncate file contents to 10,000 lines. Return summaries for large datasets with the option to drill down.

  • Validate all file paths. Never trust paths from the AI. Resolve them against your project root and verify they do not escape it. A path traversal bug in an MCP server gives the AI access to your entire filesystem:

function safePath(projectRoot, requestedPath) {
  var root = path.resolve(projectRoot);
  var resolved = path.resolve(root, requestedPath);
  // Require the resolved path to be the root itself or live beneath it.
  // Appending path.sep prevents "/project-evil" from matching "/project".
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error("Path traversal detected: " + requestedPath);
  }
  return resolved;
}
  • Handle binary files gracefully. Your search and parse tools will encounter images, compiled files, and other binary data. Check before reading:

function isBinaryFile(filePath) {
  var buffer = Buffer.alloc(512);
  var fd = fs.openSync(filePath, "r");
  var bytesRead = fs.readSync(fd, buffer, 0, 512, 0);
  fs.closeSync(fd);

  for (var i = 0; i < bytesRead; i++) {
    if (buffer[i] === 0) return true;
  }
  return false;
}

  • Use resources for stable data, tools for dynamic analysis. The file tree changes rarely during a conversation. Expose it as a resource. Complexity analysis depends on which file the AI wants to examine. Make it a tool.

  • Log tool invocations to stderr. When debugging why the AI is getting unexpected results, you need to see exactly what it asked for and what you returned. Structured logging to stderr is invisible to the MCP protocol but invaluable for debugging:

function logToolCall(toolName, params, result) {
  console.error(JSON.stringify({
    timestamp: new Date().toISOString(),
    tool: toolName,
    params: params,
    resultSize: JSON.stringify(result).length
  }));
}

  • Test tools independently before connecting to Claude. Write a simple test harness that calls your tool handlers directly. Do not debug MCP protocol issues and analysis logic issues at the same time:

// test-tools.js
var assert = require("assert");
var { analyzeFileComplexity } = require("./lib/complexity");
var { parseFile } = require("./lib/ast-parser");

var result = parseFile("./test-fixtures/simple.js");
assert(!result.error, "Should parse without error");

var complexity = analyzeFileComplexity(result.ast, result.source);
assert(complexity.maxComplexity < 20, "Test fixture should have low complexity");
console.log("All tests passed");
