MCP Server Marketplace: Publishing and Distribution
Guide to packaging, publishing, and distributing MCP servers via npm with auto-configuration and marketplace listing strategies.
Overview
The Model Context Protocol (MCP) has created a new category of distributable software: context servers that extend what AI assistants can see and do. Publishing an MCP server is not the same as publishing a typical npm package. You need CLI installers, auto-configuration for multiple clients, structured metadata for discovery, and a documentation strategy that speaks to both humans and AI. This guide covers the full pipeline from packaging an MCP server to getting it into the hands of users through npm, registries, and emerging marketplace platforms.
Prerequisites
- Node.js v18+ installed
- npm account with publishing access
- Working MCP server (stdio transport)
- Familiarity with MCP protocol concepts (tools, resources, prompts)
- Basic understanding of Claude Desktop configuration
- Experience publishing npm packages
The Emerging MCP Server Ecosystem
The MCP server ecosystem is still young, but it is growing fast. As of early 2026, there are hundreds of MCP servers available across npm, GitHub, and dedicated directories. The ecosystem has settled into a few clear patterns.
Transport standardization. Most published servers use stdio transport. This is the simplest model for distribution because the client spawns the server as a child process. No ports, no network configuration, no firewall issues. SSE and streamable HTTP transports exist for remote servers, but stdio dominates the package distribution story.
Discovery problem. Finding MCP servers is still harder than it should be. npm search does not understand MCP metadata. GitHub topic tags help but require manual curation. Several community directories have emerged, and Anthropic maintains an official registry. The servers that get adopted are the ones that make installation trivially easy.
Trust and security. MCP servers run with the same permissions as the user. A malicious server can read files, make network requests, or execute arbitrary code. This makes trust signals critical: verified npm publishers, source code transparency, clear permission documentation, and community reputation all matter.
The winners in this ecosystem will be the servers that solve real problems, install in one command, and document exactly what they do and what access they need.
Packaging MCP Servers for npm Distribution
An MCP server destined for npm needs a specific structure. Here is a production-ready layout:
my-mcp-server/
  bin/
    cli.js
    server.js
  lib/
    tools/
      search.js
      analyze.js
    resources/
      config.js
    prompts/
      summarize.js
    index.js
  test/
    tools.test.js
    integration.test.js
  docs/
    tools.md
    resources.md
    prompts.md
  package.json
  README.md
  CHANGELOG.md
  LICENSE
  .npmignore
The bin/ directory is critical. You need two entry points: cli.js for the installer and setup commands, and server.js for the actual MCP server process that clients will spawn.
Here is a minimal bin/server.js:
#!/usr/bin/env node
var McpServer = require("../lib/index.js");
var server = new McpServer();
server.start().catch(function(err) {
process.stderr.write("Failed to start server: " + err.message + "\n");
process.exit(1);
});
The shebang line is mandatory. Without it, Windows users will get cryptic errors about the file not being a valid Win32 application, and Unix users will see shell syntax errors when the OS falls back to running the script with sh instead of node.
Your .npmignore should exclude development files but keep documentation:
test/
.github/
.eslintrc*
.prettierrc*
*.test.js
coverage/
.env
.env.*
node_modules/
Do not ignore docs/ or CHANGELOG.md. Users browsing the package on npm should see comprehensive documentation.
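An alternative to .npmignore is an explicit files whitelist in package.json. npm then ships only what you list, plus a handful of files it always includes (package.json, the README, and the LICENSE). A whitelist matching the layout above would look like this:
{
  "files": [
    "bin/",
    "lib/",
    "docs/",
    "CHANGELOG.md"
  ]
}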
Creating a Discoverable package.json
The package.json is your primary metadata surface. MCP-specific fields go in a custom mcp key, but the standard fields matter just as much for discoverability.
{
"name": "@grizzlypeak/mcp-code-analyzer",
"version": "1.4.2",
"description": "MCP server that provides code analysis tools for JavaScript and TypeScript projects",
"keywords": [
"mcp",
"mcp-server",
"model-context-protocol",
"code-analysis",
"javascript",
"typescript",
"ai-tools",
"claude",
"llm"
],
"bin": {
"mcp-code-analyzer": "./bin/cli.js",
"mcp-code-analyzer-server": "./bin/server.js"
},
"main": "./lib/index.js",
"scripts": {
"start": "node bin/server.js",
"test": "node --test test/*.test.js",
"test:integration": "node --test test/integration.test.js",
"lint": "eslint lib/ bin/",
"prepublishOnly": "npm test && npm run lint"
},
"mcp": {
"transport": "stdio",
"tools": [
{
"name": "analyze_complexity",
"description": "Analyze cyclomatic complexity of JavaScript/TypeScript files",
"inputSchema": {
"type": "object",
"properties": {
"filePath": {
"type": "string",
"description": "Path to the file to analyze"
}
},
"required": ["filePath"]
}
},
{
"name": "find_duplicates",
"description": "Detect duplicate code blocks across a project directory",
"inputSchema": {
"type": "object",
"properties": {
"directory": {
"type": "string",
"description": "Root directory to scan"
},
"threshold": {
"type": "number",
"description": "Similarity threshold 0-1 (default 0.8)"
}
},
"required": ["directory"]
}
}
],
"resources": [
{
"uri": "config://analysis-rules",
"name": "Analysis Rules",
"description": "Current analysis configuration and rule set"
}
],
"prompts": [
{
"name": "code_review",
"description": "Generate a structured code review for a file or directory"
}
],
"permissions": [
"fs:read",
"fs:write:./reports"
],
"clients": ["claude-desktop", "claude-code", "continue", "cline"]
},
"engines": {
"node": ">=18.0.0"
},
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/grizzlypeak/mcp-code-analyzer.git"
},
"homepage": "https://github.com/grizzlypeak/mcp-code-analyzer#readme",
"bugs": {
"url": "https://github.com/grizzlypeak/mcp-code-analyzer/issues"
},
"author": "Shane Larson <[email protected]>",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.2.0"
},
"devDependencies": {
"eslint": "^8.56.0"
}
}
Several things to note here. The keywords array should always include mcp, mcp-server, and model-context-protocol. These are the terms people search for. The mcp.permissions field is not part of the official spec yet, but several registries and clients are starting to read it. Declaring your permissions upfront builds trust.
The bin field registers two commands: the CLI for setup and the server binary for MCP clients. Both are required for a good distribution story.
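Once the package is published, confirm that keyword searches actually surface it (npm's search index can lag, so results vary):
npm search mcp-server code-analysis
npm view @grizzlypeak/mcp-code-analyzer keywords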
Building a CLI Installer
The CLI installer is what turns your server from "download and manually configure" into "one command and done." This is the single biggest factor in adoption. Here is a production CLI installer:
#!/usr/bin/env node
var fs = require("fs");
var path = require("path");
var os = require("os");
var childProcess = require("child_process");
var PACKAGE_NAME = "@grizzlypeak/mcp-code-analyzer";
var SERVER_BIN = "mcp-code-analyzer-server";
var DISPLAY_NAME = "Code Analyzer";
var commands = {
install: installServer,
uninstall: uninstallServer,
status: checkStatus,
doctor: runDiagnostics,
help: showHelp
};
var command = process.argv[2] || "help";
if (!commands[command]) {
console.error("Unknown command: " + command);
console.error('Run "mcp-code-analyzer help" for usage information.');
process.exit(1);
}
commands[command]();
function installServer() {
var clients = detectInstalledClients();
if (clients.length === 0) {
console.log("No supported MCP clients detected.");
console.log("Supported clients: Claude Desktop, Claude Code, Continue, Cline");
console.log("");
console.log("You can manually configure your client with:");
console.log(" Server command: " + resolveServerPath());
console.log(" Transport: stdio");
process.exit(0);
}
console.log("Detected MCP clients: " + clients.map(function(c) { return c.name; }).join(", "));
console.log("");
var installed = 0;
var failed = 0;
clients.forEach(function(client) {
try {
client.configure();
console.log(" [OK] " + client.name + " configured successfully");
installed++;
} catch (err) {
console.error(" [FAIL] " + client.name + ": " + err.message);
failed++;
}
});
console.log("");
console.log("Installation complete. " + installed + " client(s) configured, " + failed + " failed.");
if (installed > 0) {
console.log("Restart your MCP client(s) to activate the server.");
}
}
function uninstallServer() {
var clients = detectInstalledClients();
clients.forEach(function(client) {
try {
client.remove();
console.log(" [OK] Removed from " + client.name);
} catch (err) {
console.error(" [FAIL] " + client.name + ": " + err.message);
}
});
}
function checkStatus() {
var serverPath = resolveServerPath();
console.log("Server binary: " + serverPath);
console.log("Binary exists: " + fs.existsSync(serverPath));
var clients = detectInstalledClients();
clients.forEach(function(client) {
var configured = client.isConfigured();
console.log(client.name + ": " + (configured ? "configured" : "not configured"));
});
}
function runDiagnostics() {
console.log("Running diagnostics...\n");
// Check Node version
var nodeVersion = process.version;
var major = parseInt(nodeVersion.slice(1).split(".")[0], 10);
console.log("Node.js version: " + nodeVersion + (major >= 18 ? " [OK]" : " [WARN: v18+ recommended]"));
// Check server binary
var serverPath = resolveServerPath();
var binaryExists = fs.existsSync(serverPath);
console.log("Server binary: " + (binaryExists ? "[OK]" : "[MISSING] " + serverPath));
// Try spawning the server
if (binaryExists) {
try {
var result = childProcess.spawnSync(process.execPath, [serverPath, "--version"], {
timeout: 5000,
encoding: "utf8"
});
if (result.status === 0) {
console.log("Server spawn test: [OK] " + result.stdout.trim());
} else {
console.log("Server spawn test: [FAIL] Exit code " + result.status);
if (result.stderr) {
console.log(" stderr: " + result.stderr.trim());
}
}
} catch (err) {
console.log("Server spawn test: [FAIL] " + err.message);
}
}
// Check client configurations
var clients = detectInstalledClients();
if (clients.length === 0) {
console.log("\nNo MCP clients detected on this system.");
} else {
console.log("\nClient configurations:");
clients.forEach(function(client) {
var configured = client.isConfigured();
console.log(" " + client.name + ": " + (configured ? "[OK]" : "[NOT CONFIGURED]"));
});
}
}
function showHelp() {
console.log("Usage: mcp-code-analyzer <command>");
console.log("");
console.log("Commands:");
console.log(" install Configure MCP clients to use this server");
console.log(" uninstall Remove server configuration from MCP clients");
console.log(" status Check current installation status");
console.log(" doctor Run diagnostics and check for common issues");
console.log(" help Show this help message");
}
function resolveServerPath() {
  // Resolve the server entry script that ships next to cli.js in the
  // package's bin/ directory. Using __dirname avoids guessing at npm's
  // platform-specific shims (.cmd wrappers on Windows, symlinks elsewhere).
  return path.join(__dirname, "server.js");
}
function detectInstalledClients() {
var clients = [];
// Claude Desktop
var claudeConfigPath = getClaudeDesktopConfigPath();
if (claudeConfigPath) {
clients.push(createClaudeDesktopClient(claudeConfigPath));
}
// Claude Code (checks for claude CLI in PATH)
try {
var which = childProcess.spawnSync(
process.platform === "win32" ? "where" : "which",
["claude"],
{ encoding: "utf8", timeout: 3000 }
);
if (which.status === 0) {
clients.push(createClaudeCodeClient());
}
} catch (e) {
// Claude Code not found, skip
}
return clients;
}
function getClaudeDesktopConfigPath() {
var configDir;
if (process.platform === "darwin") {
configDir = path.join(os.homedir(), "Library", "Application Support", "Claude");
} else if (process.platform === "win32") {
configDir = path.join(process.env.APPDATA || "", "Claude");
} else {
configDir = path.join(os.homedir(), ".config", "claude");
}
var configFile = path.join(configDir, "claude_desktop_config.json");
// Check if the directory exists (meaning Claude Desktop is likely installed)
if (fs.existsSync(configDir)) {
return configFile;
}
return null;
}
function createClaudeDesktopClient(configPath) {
return {
name: "Claude Desktop",
configure: function() {
var config = {};
if (fs.existsSync(configPath)) {
var raw = fs.readFileSync(configPath, "utf8");
config = JSON.parse(raw);
}
if (!config.mcpServers) {
config.mcpServers = {};
}
var serverCommand = resolveGlobalNpxPath();
config.mcpServers[PACKAGE_NAME] = {
command: serverCommand.command,
args: serverCommand.args
};
var configDir = path.dirname(configPath);
if (!fs.existsSync(configDir)) {
fs.mkdirSync(configDir, { recursive: true });
}
fs.writeFileSync(configPath, JSON.stringify(config, null, 2), "utf8");
},
remove: function() {
if (!fs.existsSync(configPath)) return;
var raw = fs.readFileSync(configPath, "utf8");
var config = JSON.parse(raw);
if (config.mcpServers && config.mcpServers[PACKAGE_NAME]) {
delete config.mcpServers[PACKAGE_NAME];
fs.writeFileSync(configPath, JSON.stringify(config, null, 2), "utf8");
}
},
isConfigured: function() {
if (!fs.existsSync(configPath)) return false;
var raw = fs.readFileSync(configPath, "utf8");
var config = JSON.parse(raw);
return !!(config.mcpServers && config.mcpServers[PACKAGE_NAME]);
}
};
}
function createClaudeCodeClient() {
return {
name: "Claude Code",
configure: function() {
var serverPath = resolveServerPath();
var result = childProcess.spawnSync("claude", [
"mcp", "add",
PACKAGE_NAME,
"--transport", "stdio",
"--", "node", serverPath
], { encoding: "utf8", timeout: 10000 });
if (result.status !== 0) {
throw new Error(result.stderr || "Failed to add server to Claude Code");
}
},
remove: function() {
childProcess.spawnSync("claude", [
"mcp", "remove", PACKAGE_NAME
], { encoding: "utf8", timeout: 10000 });
},
isConfigured: function() {
var result = childProcess.spawnSync("claude", [
"mcp", "list"
], { encoding: "utf8", timeout: 10000 });
return result.stdout && result.stdout.indexOf(PACKAGE_NAME) !== -1;
}
};
}
function resolveGlobalNpxPath() {
  // Use npx to run the server. The --package flag is required because the
  // server bin name differs from the package name; npx fetches the package
  // on demand if it is not already installed.
  if (process.platform === "win32") {
    return {
      command: "cmd",
      args: ["/c", "npx", "-y", "--package", PACKAGE_NAME, SERVER_BIN]
    };
  }
  return {
    command: "npx",
    args: ["-y", "--package", PACKAGE_NAME, SERVER_BIN]
  };
}
Run the installer after a global npm install:
npm install -g @grizzlypeak/mcp-code-analyzer
mcp-code-analyzer install
Expected output:
Detected MCP clients: Claude Desktop, Claude Code
[OK] Claude Desktop configured successfully
[OK] Claude Code configured successfully
Installation complete. 2 client(s) configured, 0 failed.
Restart your MCP client(s) to activate the server.
The doctor command is invaluable for support. When users report issues, tell them to run mcp-code-analyzer doctor and share the output. This eliminates most of the back-and-forth debugging.
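A healthy install produces output along these lines (paths and versions differ per machine, and the spawn-test version echo assumes the server binary handles --version, as recommended under Best Practices):
Running diagnostics...

Node.js version: v20.11.0 [OK]
Server binary: [OK]
Server spawn test: [OK] 1.4.2

Client configurations:
  Claude Desktop: [OK]
  Claude Code: [NOT CONFIGURED]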
Auto-Configuration for Claude Desktop and Other Clients
The Claude Desktop configuration lives in a JSON file at a platform-specific path:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
- Linux: ~/.config/claude/claude_desktop_config.json
The configuration format:
{
"mcpServers": {
"@grizzlypeak/mcp-code-analyzer": {
"command": "npx",
"args": ["-y", "@grizzlypeak/mcp-code-analyzer-server"],
"env": {
"ANALYZER_DEPTH": "5"
}
}
}
}
When your installer writes to this file, you must handle several edge cases:
- The file does not exist yet (first MCP server being configured)
- The file exists but is empty
- The file has existing servers that must be preserved
- The JSON is malformed (user hand-edited it incorrectly)
Here is a robust configuration writer:
function safeWriteConfig(configPath, serverName, serverConfig) {
var config = {};
if (fs.existsSync(configPath)) {
var raw = fs.readFileSync(configPath, "utf8").trim();
if (raw.length > 0) {
try {
config = JSON.parse(raw);
} catch (parseErr) {
// Back up the broken config before overwriting
var backupPath = configPath + ".backup." + Date.now();
fs.copyFileSync(configPath, backupPath);
console.warn("Warning: Existing config was malformed. Backed up to " + backupPath);
config = {};
}
}
}
if (!config.mcpServers || typeof config.mcpServers !== "object") {
config.mcpServers = {};
}
config.mcpServers[serverName] = serverConfig;
var configDir = path.dirname(configPath);
if (!fs.existsSync(configDir)) {
fs.mkdirSync(configDir, { recursive: true });
}
// Write to a temp file first, then rename (atomic write)
var tmpPath = configPath + ".tmp";
fs.writeFileSync(tmpPath, JSON.stringify(config, null, 2), "utf8");
fs.renameSync(tmpPath, configPath);
return config;
}
The atomic write pattern (write to temp file, then rename) prevents corruption if the process is killed mid-write. This matters more than you might think -- users often Ctrl+C installers impatiently.
For environment variables, give users a way to pass them during install:
mcp-code-analyzer install --env ANALYZER_DEPTH=5 --env API_KEY=sk-xxx
Handle this in the CLI:
function parseEnvArgs(argv) {
var env = {};
var i = 0;
while (i < argv.length) {
if (argv[i] === "--env" && argv[i + 1]) {
var parts = argv[i + 1].split("=");
var key = parts[0];
var value = parts.slice(1).join("=");
env[key] = value;
i += 2;
} else {
i++;
}
}
return Object.keys(env).length > 0 ? env : undefined;
}
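Wiring the pieces together, the parsed env map just needs to be attached to the server entry before it is written. Here is a sketch using the helpers defined above (and assuming Claude Desktop is installed, so getClaudeDesktopConfigPath returns a path):
function installWithEnv() {
  var envVars = parseEnvArgs(process.argv.slice(3));
  var entry = {
    command: "npx",
    args: ["-y", "--package", PACKAGE_NAME, SERVER_BIN]
  };
  if (envVars) {
    entry.env = envVars;
  }
  var configPath = getClaudeDesktopConfigPath();
  if (configPath) {
    safeWriteConfig(configPath, PACKAGE_NAME, entry);
  }
}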
Documenting Tools, Resources, and Prompts for Marketplace Listings
Documentation for MCP servers needs to serve three audiences: humans browsing a registry, AI assistants reading tool descriptions, and developers integrating your server programmatically.
Create a structured docs/ directory:
# docs/tools.md
## analyze_complexity
Analyzes the cyclomatic complexity of JavaScript and TypeScript source files.
**Input:**
| Parameter | Type | Required | Description |
|-----------|--------|----------|----------------------------|
| filePath | string | Yes | Path to the file to analyze |
**Output:**
Returns a JSON object with complexity metrics per function:
- `functionName`: Name of the function
- `complexity`: Cyclomatic complexity score (1 = simple, 10+ = complex)
- `lineStart`: Starting line number
- `lineEnd`: Ending line number
**Example:**
Tool: analyze_complexity

Input:
{ "filePath": "/project/src/auth.js" }

Output:
{
  "file": "/project/src/auth.js",
  "functions": [
    { "functionName": "validateToken", "complexity": 4, "lineStart": 12, "lineEnd": 38 },
    { "functionName": "refreshSession", "complexity": 7, "lineStart": 40, "lineEnd": 89 }
  ],
  "averageComplexity": 5.5
}
For your README, include a capabilities summary table that registries can parse:
## Capabilities
| Type | Name | Description |
|----------|--------------------|------------------------------------------------|
| Tool | analyze_complexity | Analyze cyclomatic complexity of source files |
| Tool | find_duplicates | Detect duplicate code blocks in a project |
| Resource | analysis-rules | Current analysis configuration and rules |
| Prompt | code_review | Generate structured code review |
## Permissions Required
| Permission | Reason |
|-----------------|-------------------------------------------|
| fs:read | Read source files for analysis |
| fs:write:reports| Write analysis reports to ./reports dir |
The tool descriptions in your MCP server code are equally important. They are what the AI assistant reads to decide when to use your tool:
var server = new McpServer({
name: "@grizzlypeak/mcp-code-analyzer",
version: "1.4.2"
});
server.tool(
"analyze_complexity",
"Analyze the cyclomatic complexity of a JavaScript or TypeScript source file. " +
"Returns complexity scores for each function in the file. " +
"Use this when the user asks about code complexity, maintainability, " +
"or wants to identify functions that need refactoring.",
{
filePath: {
type: "string",
description: "Absolute path to the .js or .ts file to analyze"
}
},
function(params) {
return analyzeFile(params.filePath);
}
);
Notice the description includes when to use the tool, not just what it does. This helps the AI assistant select the right tool from potentially dozens of available options.
Versioning and Changelog Best Practices
MCP servers need stricter versioning discipline than typical libraries because breaking changes can silently corrupt AI workflows. Users may not even realize their MCP server updated until their assistant starts failing.
Follow these rules:
Major version bump (2.0.0): Tool names changed, tool input schemas changed in backward-incompatible ways, resources removed or renamed, transport type changed.
Minor version bump (1.5.0): New tools added, new optional parameters on existing tools, new resources or prompts added, performance improvements.
Patch version bump (1.4.3): Bug fixes, documentation updates, dependency updates with no behavior change.
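npm's built-in version command keeps package.json, the git commit, and the tag in sync. A typical flow for cutting a minor release, assuming a clean working tree:
# Bumps package.json to 1.5.0, commits the change, and creates tag v1.5.0
npm version minor -m "Release %s"

# Push the commit and tag, then publish
git push --follow-tags
npm publish --access public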
Your CHANGELOG should be structured for both humans and automated tooling:
# Changelog
## [1.5.0] - 2026-02-10
### Added
- New `find_dead_code` tool for detecting unreachable code paths
- Support for `.tsx` files in `analyze_complexity` tool
- `--env` flag for passing environment variables during install
### Changed
- Improved accuracy of duplicate detection algorithm (threshold now uses AST comparison)
- `analyze_complexity` now includes JSDoc complexity annotations
### Fixed
- Server crash when analyzing files with circular imports
- Windows path handling in `find_duplicates` tool
## [1.4.2] - 2026-01-28
### Fixed
- CLI installer now correctly detects Claude Desktop on Linux
- Atomic config file writes prevent corruption on interrupted install
Pin your @modelcontextprotocol/sdk dependency to a specific minor version range. The SDK is still evolving, and a major SDK update could break your server silently:
{
"dependencies": {
"@modelcontextprotocol/sdk": "~1.2.0"
}
}
Use tilde (~) not caret (^) for the SDK dependency. You want patch updates but not minor version bumps until you have tested them.
Testing Across Different MCP Clients
MCP clients have subtle differences in how they spawn servers, handle timeouts, and interpret responses. You need to test across at least Claude Desktop, Claude Code, and one third-party client.
Here is a test harness that simulates MCP client behavior:
var childProcess = require("child_process");
var path = require("path");
var assert = require("assert");
var test = require("node:test");
var SERVER_PATH = path.join(__dirname, "..", "bin", "server.js");
function createMcpClient(serverPath) {
var proc = childProcess.spawn(process.execPath, [serverPath], {
stdio: ["pipe", "pipe", "pipe"],
env: Object.assign({}, process.env, {
NODE_ENV: "test"
})
});
var responseBuffer = "";
var pendingCallbacks = {};
var nextId = 1;
proc.stdout.on("data", function(chunk) {
responseBuffer += chunk.toString();
// MCP uses newline-delimited JSON
var lines = responseBuffer.split("\n");
responseBuffer = lines.pop(); // Keep incomplete line in buffer
lines.forEach(function(line) {
line = line.trim();
if (!line) return;
try {
var msg = JSON.parse(line);
if (msg.id && pendingCallbacks[msg.id]) {
pendingCallbacks[msg.id](null, msg);
delete pendingCallbacks[msg.id];
}
} catch (e) {
// Not JSON, might be a log line on stdout (bad practice but common)
}
});
});
return {
send: function(method, params, callback) {
var id = nextId++;
var message = JSON.stringify({
jsonrpc: "2.0",
id: id,
method: method,
params: params || {}
}) + "\n";
pendingCallbacks[id] = callback;
proc.stdin.write(message);
// Timeout after 10 seconds
setTimeout(function() {
if (pendingCallbacks[id]) {
pendingCallbacks[id](new Error("Request timed out after 10s"));
delete pendingCallbacks[id];
}
}, 10000);
},
notify: function(method, params) {
  // Fire-and-forget JSON-RPC notification: no id, no response expected
  proc.stdin.write(JSON.stringify({
    jsonrpc: "2.0",
    method: method,
    params: params || {}
  }) + "\n");
},
close: function() {
proc.stdin.end();
proc.kill();
}
};
}
test("server responds to initialize", function(t, done) {
var client = createMcpClient(SERVER_PATH);
client.send("initialize", {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-harness", version: "1.0.0" }
}, function(err, response) {
assert.ifError(err);
assert.ok(response.result);
assert.ok(response.result.serverInfo);
assert.ok(response.result.capabilities);
client.close();
done();
});
});
test("tool call returns valid result", function(t, done) {
var client = createMcpClient(SERVER_PATH);
client.send("initialize", {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-harness", version: "1.0.0" }
}, function(err) {
assert.ifError(err);
client.send("initialized", {}, function() {
client.send("tools/call", {
name: "analyze_complexity",
arguments: { filePath: path.join(__dirname, "fixtures", "sample.js") }
}, function(toolErr, toolResponse) {
assert.ifError(toolErr);
assert.ok(toolResponse.result);
assert.ok(Array.isArray(toolResponse.result.content));
client.close();
done();
});
});
});
test("server handles malformed input gracefully", function(t, done) {
var client = createMcpClient(SERVER_PATH);
client.send("initialize", {
protocolVersion: "2024-11-05",
capabilities: {},
clientInfo: { name: "test-harness", version: "1.0.0" }
}, function(err) {
assert.ifError(err);
client.notify("notifications/initialized");
client.send("tools/call", {
name: "analyze_complexity",
arguments: { filePath: 12345 }
}, function(toolErr, toolResponse) {
// Should return an error, not crash the server
assert.ok(toolResponse.error || (toolResponse.result && toolResponse.result.isError));
client.close();
done();
});
});
});
Run this against your server:
node --test test/integration.test.js
Expected output:
TAP version 13
# Subtest: server responds to initialize
ok 1 - server responds to initialize (234ms)
# Subtest: tool call returns valid result
ok 2 - tool call returns valid result (512ms)
# Subtest: server handles malformed input gracefully
ok 3 - server handles malformed input gracefully (189ms)
1..3
# tests 3
# pass 3
# fail 0
Also test platform-specific issues. Windows path separators, long path names, and spaces in directory names cause the most cross-platform bugs. Create test fixtures with paths like test/fixtures/path with spaces/sample.js.
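Here is a sketch of such a test, reusing the createMcpClient harness above (the generated fixture content is illustrative):
var fs = require("fs");
var os = require("os");

test("tool handles paths with spaces", function(t, done) {
  // Build a throwaway fixture whose directory and file names contain spaces
  var dir = fs.mkdtempSync(path.join(os.tmpdir(), "mcp fixtures "));
  var file = path.join(dir, "sample with spaces.js");
  fs.writeFileSync(file, "function add(a, b) { return a + b; }\nmodule.exports.add = add;\n", "utf8");
  var client = createMcpClient(SERVER_PATH);
  client.send("initialize", {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "test-harness", version: "1.0.0" }
  }, function(err) {
    assert.ifError(err);
    client.notify("notifications/initialized");
    client.send("tools/call", {
      name: "analyze_complexity",
      arguments: { filePath: file }
    }, function(toolErr, toolResponse) {
      assert.ifError(toolErr);
      assert.ok(toolResponse.result && !toolResponse.result.isError);
      client.close();
      fs.rmSync(dir, { recursive: true, force: true });
      done();
    });
  });
});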
Monetization Models for MCP Servers
The MCP server ecosystem is still figuring out monetization, but several models are emerging.
Open core. Publish the basic server for free on npm. Offer a paid tier with additional tools, higher rate limits, or premium resources. Gate access with an API key passed as an environment variable:
function checkLicense(apiKey) {
if (!apiKey) {
  // Resolve so callers can treat every return value as a Promise
  return Promise.resolve({ tier: "free", toolLimit: 3 });
}
// Validate against your license server
var https = require("https");
return new Promise(function(resolve, reject) {
var req = https.get(
"https://api.grizzlypeaksoftware.com/licenses/validate?key=" + apiKey,
function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
try {
resolve(JSON.parse(body));
} catch (e) {
resolve({ tier: "free", toolLimit: 3 });
}
});
}
);
req.on("error", function() {
resolve({ tier: "free", toolLimit: 3 });
});
});
}
Usage-based pricing. Track tool invocations and charge per call. This works well for servers that wrap expensive APIs (databases, cloud services, specialized analysis tools).
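A lightweight starting point is to wrap each tool handler with a counter and flush usage on an interval. In this sketch the reporting endpoint and METERING_KEY variable are placeholders for whatever billing backend you use:
// Sketch: count tool invocations in-process and flush them periodically.
var usageCounts = {};

function metered(toolName, handler) {
  return function(params) {
    usageCounts[toolName] = (usageCounts[toolName] || 0) + 1;
    return handler(params);
  };
}

setInterval(function() {
  var snapshot = usageCounts;
  usageCounts = {};
  if (Object.keys(snapshot).length === 0) return;
  // Fire-and-forget; never block or crash the server over metering
  fetch("https://api.example.com/usage", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ key: process.env.METERING_KEY, counts: snapshot })
  }).catch(function() {});
}, 60000).unref(); // unref so metering never keeps the process alive
Register tools with metered("analyze_complexity", handler) instead of the bare handler, and keep metering strictly fire-and-forget so a billing outage never breaks the user's assistant.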
Sponsorware. Open source the server but gate early access to new features behind GitHub Sponsors or Patreon. Once you hit a funding goal, the feature becomes public.
Marketplace commission. List on an MCP marketplace that handles payments and takes a percentage. This is the model most likely to scale as dedicated MCP marketplaces mature.
The important thing is to separate your server logic from your licensing logic. Make it easy to swap monetization strategies as the ecosystem evolves.
Building a Registry/Directory for MCP Servers
If you are building a registry or directory for MCP servers, here is a minimal but functional implementation:
var express = require("express");
var fs = require("fs");
var path = require("path");
var app = express();
app.use(express.json());
var DATA_DIR = path.join(__dirname, "data");
var REGISTRY_FILE = path.join(DATA_DIR, "registry.json");
function loadRegistry() {
if (!fs.existsSync(REGISTRY_FILE)) {
return { servers: [] };
}
return JSON.parse(fs.readFileSync(REGISTRY_FILE, "utf8"));
}
function saveRegistry(registry) {
if (!fs.existsSync(DATA_DIR)) {
fs.mkdirSync(DATA_DIR, { recursive: true });
}
fs.writeFileSync(REGISTRY_FILE, JSON.stringify(registry, null, 2), "utf8");
}
// Submit a server to the registry
app.post("/api/servers", function(req, res) {
var submission = req.body;
// Validate required fields
var required = ["name", "version", "description", "repository", "transport"];
var missing = required.filter(function(field) {
return !submission[field];
});
if (missing.length > 0) {
return res.status(400).json({
error: "Missing required fields: " + missing.join(", ")
});
}
// Validate tools have proper schemas
if (submission.tools) {
var invalidTools = submission.tools.filter(function(tool) {
return !tool.name || !tool.description;
});
if (invalidTools.length > 0) {
return res.status(400).json({
error: "All tools must have name and description fields"
});
}
}
var registry = loadRegistry();
// Check for existing entry
var existingIndex = registry.servers.findIndex(function(s) {
return s.name === submission.name;
});
var entry = {
name: submission.name,
version: submission.version,
description: submission.description,
repository: submission.repository,
npm: submission.npm || null,
transport: submission.transport,
tools: submission.tools || [],
resources: submission.resources || [],
prompts: submission.prompts || [],
permissions: submission.permissions || [],
categories: submission.categories || [],
author: submission.author || "Unknown",
license: submission.license || "MIT",
submittedAt: new Date().toISOString(),
downloads: 0,
verified: false
};
if (existingIndex >= 0) {
entry.downloads = registry.servers[existingIndex].downloads;
entry.verified = registry.servers[existingIndex].verified;
registry.servers[existingIndex] = entry;
} else {
registry.servers.push(entry);
}
saveRegistry(registry);
res.status(201).json({ message: "Server registered successfully", entry: entry });
});
// Search the registry
app.get("/api/servers", function(req, res) {
var registry = loadRegistry();
var results = registry.servers;
// Filter by search query
if (req.query.q) {
var query = req.query.q.toLowerCase();
results = results.filter(function(server) {
return server.name.toLowerCase().indexOf(query) !== -1 ||
server.description.toLowerCase().indexOf(query) !== -1 ||
(server.categories || []).some(function(c) {
return c.toLowerCase().indexOf(query) !== -1;
});
});
}
// Filter by transport type
if (req.query.transport) {
results = results.filter(function(server) {
return server.transport === req.query.transport;
});
}
// Filter by category
if (req.query.category) {
var cat = req.query.category;
results = results.filter(function(server) {
return (server.categories || []).indexOf(cat) !== -1;
});
}
// Sort by downloads (most popular first)
results.sort(function(a, b) {
return b.downloads - a.downloads;
});
// Pagination
var page = parseInt(req.query.page, 10) || 1;
var limit = parseInt(req.query.limit, 10) || 20;
var offset = (page - 1) * limit;
res.json({
total: results.length,
page: page,
limit: limit,
servers: results.slice(offset, offset + limit)
});
});
// Get a specific server's details
app.get("/api/servers/:name", function(req, res) {
var registry = loadRegistry();
var server = registry.servers.find(function(s) {
return s.name === req.params.name;
});
if (!server) {
return res.status(404).json({ error: "Server not found" });
}
// Increment download counter (for install tracking)
if (req.query.install) {
server.downloads++;
saveRegistry(registry);
}
res.json(server);
});
var PORT = process.env.PORT || 3000;
app.listen(PORT, function() {
console.log("MCP Registry running on port " + PORT);
});
This gives you a searchable, filterable registry. In production you would add authentication for submissions, automated npm metadata fetching, server health checks, and user reviews.
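Automated npm metadata fetching, for example, can come straight from the public registry; custom package.json fields such as mcp are generally preserved in the registry document. A sketch using Node 18's global fetch (the function name is illustrative):
// Sketch: enrich a submission with metadata from the public npm registry.
function fetchNpmMetadata(packageName) {
  var encoded = packageName.replace(/\//g, "%2F"); // keep the @ in scoped names
  return fetch("https://registry.npmjs.org/" + encoded)
    .then(function(res) {
      if (!res.ok) {
        throw new Error("npm registry lookup failed: " + res.status);
      }
      return res.json();
    })
    .then(function(meta) {
      var latestTag = meta["dist-tags"] && meta["dist-tags"].latest;
      var latest = latestTag ? meta.versions[latestTag] : null;
      return {
        version: latestTag || null,
        description: latest ? latest.description : null,
        license: latest ? latest.license : null,
        // custom package.json fields such as "mcp" usually come through
        mcp: latest && latest.mcp ? latest.mcp : null
      };
    });
}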
Complete Working Example
Here is a fully packaged MCP server ready for npm publishing. This is a "code metrics" server that analyzes JavaScript files.
lib/index.js - The MCP server implementation:
var McpServer = require("@modelcontextprotocol/sdk/server/mcp.js").McpServer;
var StdioServerTransport = require("@modelcontextprotocol/sdk/server/stdio.js").StdioServerTransport;
var fs = require("fs");
var path = require("path");
var z = require("zod");
function createServer() {
var server = new McpServer({
name: "mcp-code-metrics",
version: require("../package.json").version
});
// Tool: Count lines of code
server.tool(
"count_lines",
"Count lines of code, comments, and blanks in a JavaScript file. " +
"Use when the user wants to understand file size or code density.",
{
filePath: z.string().describe("Absolute path to a .js file")
},
function(params) {
try {
var content = fs.readFileSync(params.filePath, "utf8");
var lines = content.split("\n");
var stats = { total: lines.length, code: 0, comments: 0, blank: 0 };
lines.forEach(function(line) {
var trimmed = line.trim();
if (trimmed.length === 0) {
stats.blank++;
} else if (trimmed.startsWith("//") || trimmed.startsWith("/*") || trimmed.startsWith("*")) {
stats.comments++;
} else {
stats.code++;
}
});
return {
content: [{
type: "text",
text: JSON.stringify(stats, null, 2)
}]
};
} catch (err) {
return {
content: [{
type: "text",
text: "Error: " + err.message
}],
isError: true
};
}
}
);
// Tool: List exported functions
server.tool(
"list_exports",
"List all exported functions and variables from a Node.js module. " +
"Use when the user wants to understand a module's public API.",
{
filePath: z.string().describe("Absolute path to a .js file")
},
function(params) {
try {
var content = fs.readFileSync(params.filePath, "utf8");
var exports = [];
// Match module.exports patterns
var assignPattern = /module\.exports\.(\w+)\s*=/g;
var match;
while ((match = assignPattern.exec(content)) !== null) {
exports.push({ name: match[1], type: "named" });
}
// Match exports.X patterns (skip module.exports.X, already counted above)
var namedPattern = /(?<!\.)exports\.(\w+)\s*=/g;
while ((match = namedPattern.exec(content)) !== null) {
exports.push({ name: match[1], type: "named" });
}
return {
content: [{
type: "text",
text: JSON.stringify({
file: params.filePath,
exports: exports,
count: exports.length
}, null, 2)
}]
};
} catch (err) {
return {
content: [{ type: "text", text: "Error: " + err.message }],
isError: true
};
}
}
);
// Resource: Server configuration
server.resource(
"config",
"config://settings",
{
description: "Current server configuration and supported file types",
mimeType: "application/json"
},
function() {
return {
contents: [{
uri: "config://settings",
mimeType: "application/json",
text: JSON.stringify({
supportedExtensions: [".js", ".mjs", ".cjs"],
maxFileSize: "10MB",
version: require("../package.json").version
}, null, 2)
}]
};
}
);
return server;
}
module.exports = {
createServer: createServer,
start: function() {
var server = createServer();
var transport = new StdioServerTransport();
return server.connect(transport);
}
};
bin/server.js - The server entry point:
#!/usr/bin/env node
var mcpCodeMetrics = require("../lib/index.js");
mcpCodeMetrics.start().catch(function(err) {
process.stderr.write("mcp-code-metrics: " + err.message + "\n");
process.exit(1);
});
Publishing to npm:
# Verify package contents before publishing
npm pack --dry-run
# Output:
# npm notice
# npm notice package: @grizzlypeak/[email protected]
# npm notice Tarball Contents
# npm notice 1.2kB bin/cli.js
# npm notice 342B bin/server.js
# npm notice 4.1kB lib/index.js
# npm notice 1.8kB package.json
# npm notice 3.2kB README.md
# npm notice 1.1kB CHANGELOG.md
# npm notice 1.1kB LICENSE
# npm notice === Tarball Details ===
# npm notice name: @grizzlypeak/mcp-code-metrics
# npm notice version: 1.0.0
# npm notice package size: 4.8 kB
# npm notice unpacked size: 12.8 kB
# npm notice total files: 7
# Publish (use --access public for scoped packages)
npm publish --access public
After publishing, a user installs and sets up the server in one shot:
npm install -g @grizzlypeak/mcp-code-metrics
mcp-code-metrics install
The server is now available in Claude Desktop and Claude Code. No manual JSON editing required.
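To double-check the result (the exact claude mcp list output format varies by version):
# Claude Code: the server name should appear in the list
claude mcp list

# Claude Desktop (macOS path from the table earlier): the entry should be present
cat ~/Library/Application\ Support/Claude/claude_desktop_config.json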
Common Issues and Troubleshooting
1. "spawn ENOENT" When Client Tries to Start Server
Error: spawn npx ENOENT
at ChildProcess._handle.onexit (node:internal/child_process:286:19)
This means the MCP client cannot find npx in its PATH. Claude Desktop on macOS does not inherit your shell's PATH. Fix this by using absolute paths in the configuration:
{
"mcpServers": {
"mcp-code-metrics": {
"command": "/usr/local/bin/node",
"args": ["/usr/local/lib/node_modules/@grizzlypeak/mcp-code-metrics/bin/server.js"]
}
}
}
Or on Windows, use the full path to node.exe:
{
"mcpServers": {
"mcp-code-metrics": {
"command": "C:\\Program Files\\nodejs\\node.exe",
"args": ["C:\\Users\\shane\\AppData\\Roaming\\npm\\node_modules\\@grizzlypeak\\mcp-code-metrics\\bin\\server.js"]
}
}
}
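Your installer can generate these absolute paths itself, since it already runs under the Node binary the user has installed. A sketch (the helper name is illustrative):
var path = require("path");

// process.execPath is the absolute path of the running node binary, and
// server.js ships next to cli.js in the package's bin/ directory.
function absolutePathEntry() {
  return {
    command: process.execPath,
    args: [path.join(__dirname, "server.js")]
  };
}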
2. Server Starts But No Tools Appear in Client
[MCP] Server "mcp-code-metrics" connected but reported 0 tools
This usually means the server is writing log output to stdout, which corrupts the JSON-RPC communication. MCP servers MUST use stderr for logging, never stdout:
// WRONG - breaks MCP communication
console.log("Server starting...");
// CORRECT - use stderr for logs
process.stderr.write("Server starting...\n");
// Or redirect console.log to stderr
console.log = function() {
var args = Array.prototype.slice.call(arguments);
process.stderr.write(args.join(" ") + "\n");
};
3. "EPERM: operation not permitted" on Windows Config Write
Error: EPERM: operation not permitted, rename 'C:\Users\shane\AppData\Roaming\Claude\claude_desktop_config.json.tmp' -> 'C:\Users\shane\AppData\Roaming\Claude\claude_desktop_config.json'
This happens when Claude Desktop has the config file open and locked. The installer should catch this and fall back to a direct write:
try {
fs.renameSync(tmpPath, configPath);
} catch (renameErr) {
if (renameErr.code === "EPERM" || renameErr.code === "EBUSY") {
// File is locked, fall back to direct write
fs.writeFileSync(configPath, fs.readFileSync(tmpPath, "utf8"), "utf8");
fs.unlinkSync(tmpPath);
} else {
throw renameErr;
}
}
4. npm Publish Fails With "402 Payment Required"
npm ERR! 402 Payment Required - PUT https://registry.npmjs.org/@grizzlypeak%2fmcp-code-metrics
npm ERR! 402 Payment Required - You must sign up for private packages
Scoped packages (@org/package) default to private on npm. You need to explicitly set public access:
npm publish --access public
Or add it to your package.json permanently:
{
"publishConfig": {
"access": "public"
}
}
5. Server Hangs on Windows After Tool Call
[MCP] Tool call "count_lines" timed out after 30000ms
This is often caused by synchronous file operations blocking the event loop on Windows when reading large files through network drives or OneDrive-synced folders. Switch to async operations:
var fsPromises = require("fs").promises;
function countLinesAsync(filePath) {
return fsPromises.readFile(filePath, "utf8").then(function(content) {
var lines = content.split("\n");
// ... process lines
return stats;
});
}
Best Practices
Always use stdio transport for published servers. HTTP/SSE transports require network configuration that defeats the purpose of easy distribution. Reserve HTTP transports for servers deployed as shared services.
Include a doctor command in your CLI. The time you spend building diagnostics will be repaid tenfold in reduced support tickets. Check Node version, binary paths, client configurations, and run a test spawn all in one command.
Write all logs to stderr, never stdout. This is the single most common bug in published MCP servers. stdout is reserved for JSON-RPC protocol messages. A single stray console.log breaks everything.
Back up existing configuration before modifying it. Your installer should never destroy a user's existing MCP client configuration. Read the current config, merge your server entry, and write it back. Keep a .backup copy.
Test with npm pack before every publish. Run npm pack --dry-run to see exactly what files will be included. Check for accidentally included .env files, node_modules, or test fixtures that inflate the package size.
Declare your permissions explicitly. Even though MCP does not enforce a permission model yet, documenting what file system access, network access, or environment variables your server needs builds trust and prepares for future permission systems.
Pin your SDK dependency with tilde, not caret. The MCP SDK is still evolving rapidly. A minor version bump in the SDK could change protocol behavior. Use ~1.2.0 to get patches but not minor updates.
Version your tool schemas as part of your API contract. If you change a tool's input schema, that is a breaking change for AI workflows that depend on it. Treat tool schemas with the same respect as REST API contracts.
Support --version and --help flags on both binaries. Users and automated tools will call your server binary with --version to check what is installed. Make sure both bin/cli.js and bin/server.js handle these flags gracefully instead of crashing.
Automate publishing with GitHub Actions. Tag a release, run tests, publish to npm, and update your registry listing in one pipeline. Manual publishes lead to forgotten changelog entries and version mismatches. A minimal workflow sketch follows.
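A minimal workflow sketch, assuming an NPM_TOKEN secret is configured in the repository (file name and versions are illustrative):
# .github/workflows/publish.yml
name: Publish to npm
on:
  push:
    tags:
      - "v*"
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      - run: npm test
      - run: npm publish --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}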
References
- Model Context Protocol Specification - Official MCP protocol documentation
- MCP TypeScript SDK - Official TypeScript/JavaScript SDK for MCP
- npm Publishing Guide - npm documentation on package publishing
- Claude Desktop MCP Configuration - How to configure MCP servers in Claude Desktop
- Semantic Versioning - SemVer specification for version numbering
- MCP Server Examples - Official collection of reference MCP server implementations