How to Build an MCP Server with TypeScript

If you're building AI-powered applications in 2025, you've probably heard about MCP. Maybe you've even wondered why everyone suddenly cares about yet another protocol. Here's the thing: learning how to build an MCP server isn't just another checkbox on your developer skills list—it's becoming table stakes for connecting AI assistants to the real world.

I've spent the last few months building MCP integrations for various projects, and I'll walk you through what actually matters. No fluff, just the practical bits you need to get a working server up and running.

What MCP Is and When You Need It

Model Context Protocol (MCP) is an open standard that Anthropic released in November 2024. Think of it as a universal adapter between AI assistants and external systems—databases, APIs, file systems, whatever your application needs to touch.

The architecture is straightforward: a host application (Claude Desktop, an IDE, your custom agent) contains MCP clients that connect to MCP servers. Those servers expose capabilities from external systems over JSON-RPC 2.0. Before MCP existed, connecting an AI to five different tools meant building five custom integrations. Now you build to one protocol, and it works everywhere.

MCP defines three core primitives. Tools are functions the AI can call—fetching data, running calculations, triggering actions. Resources expose read-only data identified by URIs (think file contents or database records). Prompts are reusable templates for common interaction patterns.
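
To make the wire format concrete, here's roughly what a tool invocation looks like as a JSON-RPC 2.0 request and response (the tool name and arguments are hypothetical, and some metadata is trimmed):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_weather",
    "arguments": { "city": "Sydney" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "22°C and sunny" }]
  }
}

You rarely write these messages yourself; the SDK handles the serialisation. But knowing the shape makes protocol logs much easier to read.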

The adoption curve has been steep. OpenAI announced support in March 2025, Google DeepMind followed in April 2025, and by late 2025 the protocol moved to the Linux Foundation with backing from AWS, Microsoft, and Cloudflare. The TypeScript SDK alone sees millions of weekly downloads. This isn't experimental anymore—it's becoming the standard way AI assistants interact with external systems.

MCP vs Skills—Choosing the Right Approach

Here's something most tutorials skip: MCP isn't your only option for extending AI capabilities. Anthropic introduced Agent Skills in October 2025, and understanding when to use each saves real headaches.

MCP provides connectivity. It connects AI to external systems through a standardised protocol. You build and run servers. There's infrastructure involved.

Skills provide procedural knowledge. They teach Claude how to perform specific workflows through Markdown files. No servers, no infrastructure—just a SKILL.md file with instructions and examples.
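
For contrast, a minimal Skill really is just a Markdown file. A rough sketch, assuming the documented frontmatter fields (name and description); everything else is freeform instructions:

---
name: code-review-checklist
description: Apply the team's code review checklist when reviewing pull requests
---

When reviewing a pull request:
1. Check that new public functions have tests.
2. Flag any hardcoded secrets or credentials.
3. Confirm error messages don't leak internal details.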

When I'm building AI projects, here's my mental model: MCP is for connectivity, Skills are for procedures. Need to query a database in real-time? MCP. Want Claude to follow your company's code review checklist? Skills. The most powerful setups combine both—MCP pulls data from your CRM while a Skill structures how that data gets analysed and reported.

Choose MCP when you need live data from external systems, want cross-platform support (Claude Desktop, Claude Code, custom agents), or you're building shared infrastructure. Choose Skills when you're teaching Claude organisational knowledge, creating repeatable task automation, or want something deployed in minutes rather than hours.

There's also a token efficiency angle. MCP tool definitions stay in context with every request—useful for discoverability, but adds overhead. Skills use progressive disclosure: a tiny descriptor loads initially, and the full instructions only get pulled when Claude determines they're relevant. For specialised workflows you use occasionally, Skills can dramatically reduce token consumption.

Project Setup and Dependencies

Let's build something. You'll need Node.js 18+ and about ten minutes.

mkdir my-mcp-server && cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D @types/node typescript

The official TypeScript SDK (@modelcontextprotocol/sdk) is currently at version 1.25.2. Zod handles schema validation—the SDK requires it as a peer dependency.

Configure your package.json for ES modules:

{
  "type": "module",
  "scripts": {
    "build": "tsc && chmod 755 build/index.js",
    "inspector": "npx @modelcontextprotocol/inspector build/index.js"
  }
}

And tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src/**/*"]
}

Nothing exotic here. ES2022 target, Node16 module resolution, strict mode because we're not animals.

Building a Tool-Based MCP Server

The SDK provides a high-level McpServer class that handles most use cases cleanly. Create src/index.ts:

#!/usr/bin/env node
import {
  McpServer,
  ResourceTemplate,
} from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "demo-server",
  version: "1.0.0",
});

// Register a tool with Zod validation
server.registerTool(
  "calculate",
  {
    title: "Calculator",
    description: "Perform basic arithmetic",
    inputSchema: {
      operation: z.enum(["add", "subtract", "multiply", "divide"]),
      a: z.number().describe("First number"),
      b: z.number().describe("Second number"),
    },
  },
  async ({ operation, a, b }) => {
    let result: number;
    switch (operation) {
      case "add":
        result = a + b;
        break;
      case "subtract":
        result = a - b;
        break;
      case "multiply":
        result = a * b;
        break;
      case "divide":
        if (b === 0) throw new Error("Division by zero");
        result = a / b;
        break;
    }
    return { content: [{ type: "text", text: String(result) }] };
  },
);

// Register a dynamic resource
server.registerResource(
  "greeting",
  new ResourceTemplate("greeting://{name}", { list: undefined }),
  { title: "Greeting", description: "Generate personalised greeting" },
  async (uri, { name }) => ({
    contents: [
      {
        uri: uri.href,
        text: `Hello, ${name}!`,
        mimeType: "text/plain",
      },
    ],
  }),
);

// Start the server
const transport = new StdioServerTransport();
await server.connect(transport);

Build with npm run build, then test using the MCP Inspector:

npm run inspector

The Inspector opens a UI where you can invoke your tools and resources interactively. It's genuinely useful for debugging—far better than squinting at JSON-RPC logs.

Error Handling and Debugging

Here's a mistake that'll cost you an hour the first time: console.log() breaks the protocol. With the stdio transport, the server talks to its client over stdout, so anything else written there corrupts the JSON-RPC stream. Use console.error() for debug output; it goes to stderr and stays out of the protocol stream.

// Correct - writes to stderr
console.error("Debug info:", someData);

// Wrong - corrupts protocol messages
console.log("Debug info");

For structured error handling, the SDK provides McpError with standard JSON-RPC error codes:

import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";

server.registerTool(
  "divide",
  {
    inputSchema: { a: z.number(), b: z.number() },
  },
  async ({ a, b }) => {
    if (b === 0) {
      throw new McpError(
        ErrorCode.InvalidParams,
        "Division by zero is not allowed",
      );
    }
    return { content: [{ type: "text", text: String(a / b) }] };
  },
);

The common error codes you'll use: InvalidParams for bad inputs, InternalError for unexpected failures, and MethodNotFound when a client requests something you don't support.

One pattern I've found useful: wrap all tool handlers in a try-catch that logs to stderr before re-throwing. When something fails at 2am, you'll want those logs.

server.registerTool(
  "risky-operation",
  {
    inputSchema: { data: z.string() },
  },
  async ({ data }) => {
    try {
      const result = await doSomethingRisky(data);
      return { content: [{ type: "text", text: result }] };
    } catch (error) {
      console.error(`[risky-operation] Failed:`, error);
      if (error instanceof McpError) throw error;
      // In strict mode the caught value is typed `unknown`, so narrow it first
      const message = error instanceof Error ? error.message : String(error);
      throw new McpError(
        ErrorCode.InternalError,
        `Operation failed: ${message}`,
      );
    }
  },
);

Deploying to Production

MCP supports two transport mechanisms with different deployment implications.

Stdio transport communicates over standard input/output. The client spawns your server as a subprocess. Microsecond latency, no network overhead, but limited to local deployments. This is what Claude Desktop uses by default.

Streamable HTTP (added in the March 2025 spec revision) enables remote deployments through a single HTTP endpoint. Works with serverless platforms, supports horizontal scaling. Use this for anything production-facing.
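
To make the remote option concrete, here's a minimal stateless sketch using Express alongside the SDK's StreamableHTTPServerTransport. Express is an extra dependency (npm install express), and treat this as a starting point rather than a production setup: no auth, no session management.

import express from "express";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  // Stateless mode: a fresh transport per request, no session IDs
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });
  res.on("close", () => transport.close());
  await server.connect(transport); // `server` is the McpServer from earlier
  await transport.handleRequest(req, res, req.body);
});

app.listen(3000);

For stateful deployments you'd supply a sessionIdGenerator and reuse transports across requests; the official SDK docs walk through that pattern.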

For Docker deployment:

FROM node:20-slim
WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY build/ ./build/

ENV NODE_ENV=production
USER node

CMD ["node", "build/index.js"]

Configure Claude Desktop to run it:

{
  "mcpServers": {
    "my-server": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "my-mcp-server:latest"]
    }
  }
}

A few security essentials: always validate inputs against injection attacks, bind local servers to 127.0.0.1 only, and never store secrets in configuration files—use environment variables or a proper secrets manager.
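
The last point mostly comes down to reading configuration from the environment at startup and failing fast when it's missing. A minimal sketch (the variable name is hypothetical):

// Hypothetical secret: injected via the environment, never hardcoded
const apiKey = process.env.WEATHER_API_KEY;
if (!apiKey) {
  // stderr only: stdout is reserved for protocol messages
  console.error("WEATHER_API_KEY is not set");
  process.exit(1);
}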

The MCP spec mandates OAuth 2.1 with PKCE for authenticated remote servers. If you're exposing an MCP server over HTTP to external clients, implementing proper authentication isn't optional—it's required by the spec. For internal tools running locally via stdio, the attack surface is smaller, but input validation still matters. Assume the AI will eventually send something unexpected.

What's Next

You've got a working MCP server. The natural next steps: connect it to a real data source, add authentication for production use, maybe explore the Streamable HTTP transport for remote deployments.

The official documentation covers advanced topics like sampling (where servers can request LLM completions from clients), elicitation for gathering user input, and the experimental Tasks API for long-running async operations.

MCP is still evolving—v2 of the TypeScript SDK is in development with a stable release expected in Q1 2026—but the fundamentals are solid. Build something useful, ship it, iterate. That's still how this works.

Thomas Wiegold

AI Solutions Developer & Full-Stack Engineer with 14+ years of experience building custom AI systems, chatbots, and modern web applications. Based in Sydney, Australia.
