MCP: The USB-C of AI — How Model Context Protocol Is Connecting Everything
In November 2024, Anthropic quietly open-sourced a small but ambitious project: the Model Context Protocol. It did not make headlines. There was no keynote, no viral demo, no hype cycle. It was just a spec, a few SDKs, and an idea — what if every AI model could talk to every tool and data source through a single, universal standard?
Fast forward to early 2026, and MCP is downloaded roughly 100 million times per month. Over 5,500 MCP servers exist in the wild. OpenAI, Google, and Microsoft have all adopted it. Anthropic donated the protocol to the Linux Foundation for neutral governance. MCP has become, in remarkably short order, the USB-C of AI.
This is the story of how it happened, how MCP works under the hood, and how to start building with it today.
What Is MCP?
The Model Context Protocol is an open standard that defines how AI applications — LLMs, agents, coding assistants, chatbots — connect to external tools and data sources. It provides a universal interface so that any AI client can discover, authenticate with, and invoke any MCP-compatible server without custom integration code.
The simplest analogy is USB-C. Before USB-C, every device had its own connector. Your phone had Micro-USB, your laptop had Thunderbolt, your camera had Mini-USB, your printer had USB-B. If you wanted to connect two devices, you needed to find the right cable — and pray you had the right adapter. It was a mess.
USB-C solved this by defining one connector that works for everything: charging, data transfer, video output, audio. One cable, universal compatibility.
MCP does the same thing for AI. Before MCP, if you wanted Claude to access your database, you wrote a custom plugin. If you wanted GPT to read your files, you wrote a different custom plugin. If you wanted Gemini to call your API, you wrote yet another custom plugin. Every AI model had its own way of connecting to tools, and none of them were compatible.
MCP replaces all of that with one protocol. Build an MCP server once, and it works with every AI client that speaks MCP — Claude, ChatGPT, Copilot, Cursor, your own custom agent, and anything else that comes along.
Why MCP Won
Plenty of standards have been proposed in AI. Most of them went nowhere. MCP succeeded for a specific combination of reasons.
1. Timing
MCP arrived exactly when AI agents were taking off. Through 2024 and 2025, every major AI lab shipped agent capabilities — tool use, multi-step reasoning, autonomous execution. But every implementation was bespoke. Agents needed a standard way to discover and invoke tools, and nothing existed. MCP filled the vacuum at precisely the right moment.
2. Simplicity
MCP is built on JSON-RPC 2.0, the same lightweight protocol that powers the Language Server Protocol (LSP) in every code editor. It is not a complex, heavyweight framework. It is a thin, well-defined messaging format that is easy to implement in any language. If you have ever built a REST API, you can build an MCP server in an afternoon.
3. Backing
Anthropic open-sourced MCP from day one, but the real inflection point came when competitors adopted it. OpenAI integrated MCP into ChatGPT and the Agents SDK. Google added MCP support to Gemini. Microsoft brought it to Copilot and VS Code. When your competitors adopt your standard, it stops being your standard and becomes the standard.
4. Open Governance
Anthropic donated MCP to the Linux Foundation, ensuring that no single company controls the protocol. This neutral governance model — the same approach that made Kubernetes, GraphQL, and OpenTelemetry successful — gave enterprise adopters the confidence that MCP would not become a proprietary lock-in play.
5. Developer Experience
The TypeScript and Python SDKs were excellent from the start. The spec was clear and well-documented. Building an MCP server felt familiar, not alien. The ecosystem grew organically because it was genuinely easy to contribute to.
How MCP Works
At its core, MCP follows a client-server architecture. AI applications (hosts) contain MCP clients that connect to MCP servers. Each server exposes capabilities through three primitives.
The Architecture
The MCP architecture has three layers. At the top is the Host — the AI application like Claude Desktop, VS Code, or your custom agent. The Host contains one or more MCP Clients, each maintaining a one-to-one connection with an MCP Server. Servers connect to the actual resources: databases, APIs, file systems, and other external services.
┌─────────────────────────────────────────────────┐
│ HOST APPLICATION │
│ (Claude, VS Code, Agent) │
│ │
│ ┌─────────────┐ ┌─────────────┐ │
│ │ MCP Client │ │ MCP Client │ ... │
│ └──────┬──────┘ └──────┬──────┘ │
└─────────┼────────────────┼──────────────────────┘
│ JSON-RPC │ JSON-RPC
▼ ▼
┌──────────────┐ ┌──────────────┐
│ MCP Server │ │ MCP Server │
│ (Database) │ │ (File Sys) │
└──────┬───────┘ └──────┬───────┘
│ │
▼ ▼
┌──────────────┐ ┌──────────────┐
│ PostgreSQL │ │ Local Files │
└──────────────┘ └──────────────┘The Three Primitives
Resources — Data that the server exposes for the AI to read. Think of these as GET endpoints. A database server might expose tables as resources. A file system server exposes files and directories. Resources are identified by URIs and can be listed and read by the client.
Tools — Actions that the AI can invoke. Think of these as POST endpoints. A database server might expose a query tool that executes SQL. A GitHub server might expose a create_issue tool. Tools have defined input schemas (JSON Schema) and return structured results.
Prompts — Reusable prompt templates that the server provides. These are pre-built instructions optimized for specific tasks. A code analysis server might offer a "review this pull request" prompt template that structures the AI's analysis in a consistent way.
Transport Layers
MCP supports multiple transport mechanisms for the JSON-RPC messages.
stdio — The server runs as a local subprocess. The client communicates via standard input/output. This is the simplest transport and works well for local tools like file system access or local database connections. No network configuration required.
Streamable HTTP — The server runs as a remote HTTP service. The client sends requests via HTTP POST and receives responses, with optional Server-Sent Events (SSE) for streaming. This is the transport for remote services, cloud-hosted tools, and shared MCP servers.
The Protocol Flow
A typical MCP interaction follows this sequence:
- The client connects to the server and they exchange capabilities during initialization.
- The client calls tools/list to discover what tools are available.
- The AI model decides which tool to call based on the user's request and the tool descriptions.
- The client sends a tools/call request with the tool name and arguments.
- The server executes the action and returns the result.
- The AI model incorporates the result into its response.
// Simplified MCP protocol exchange (JSON-RPC 2.0)
// 1. Client → Server: Initialize
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-03-26",
"clientInfo": { "name": "my-agent", "version": "1.0.0" },
"capabilities": {}
}
}
// 2. Server → Client: Initialize response
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-03-26",
"serverInfo": { "name": "weather-server", "version": "1.0.0" },
"capabilities": { "tools": {} }
}
}
// 3. Client → Server: List available tools
{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/list"
}
// 4. Server → Client: Tool definitions
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"tools": [
{
"name": "get_weather",
"description": "Get current weather for a city",
"inputSchema": {
"type": "object",
"properties": {
"city": { "type": "string", "description": "City name" }
},
"required": ["city"]
}
}
]
}
}
// 5. Client → Server: Call a tool
{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "get_weather",
"arguments": { "city": "Toronto" }
}
}
// 6. Server → Client: Tool result
{
"jsonrpc": "2.0",
"id": 3,
"result": {
"content": [
{
"type": "text",
"text": "Toronto: -5°C, light snow, humidity 78%"
}
]
}
}Building Your First MCP Server
Let us build a practical MCP server from scratch. We will create a weather service that exposes a tool for getting current weather data and a resource for listing supported cities.
Step 1: Project Setup
mkdir mcp-weather-server && cd mcp-weather-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --initStep 2: Define the Server
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
// Create the MCP server
const server = new McpServer({
name: "weather-server",
version: "1.0.0",
description: "An MCP server that provides weather information",
});
// Simulated weather data (replace with a real API in production)
const weatherData: Record<string, { temp: number; condition: string; humidity: number }> = {
toronto: { temp: -5, condition: "light snow", humidity: 78 },
london: { temp: 8, condition: "overcast", humidity: 85 },
tokyo: { temp: 12, condition: "clear", humidity: 45 },
sydney: { temp: 28, condition: "sunny", humidity: 60 },
"new york": { temp: 2, condition: "partly cloudy", humidity: 55 },
};
// ── Register a Tool ──────────────────────────────────────
server.tool(
"get_weather",
"Get the current weather for a specified city",
{
city: z.string().describe("The city name to get weather for"),
units: z
.enum(["celsius", "fahrenheit"])
.optional()
.default("celsius")
.describe("Temperature units"),
},
async ({ city, units }) => {
const data = weatherData[city.toLowerCase()];
if (!data) {
return {
content: [
{
type: "text" as const,
text: `Weather data not available for "${city}". Supported cities: ${Object.keys(weatherData).join(", ")}`,
},
],
isError: true,
};
}
const temp =
units === "fahrenheit"
? Math.round(data.temp * 9 / 5 + 32)
: data.temp;
const symbol = units === "fahrenheit" ? "°F" : "°C";
return {
content: [
{
type: "text" as const,
text: `Weather in ${city}: ${temp}${symbol}, ${data.condition}, humidity ${data.humidity}%`,
},
],
};
}
);
// ── Register a Resource ──────────────────────────────────
server.resource(
"supported-cities",
"weather://cities",
{ description: "List of all cities with available weather data" },
async () => ({
contents: [
{
uri: "weather://cities",
mimeType: "application/json",
text: JSON.stringify(Object.keys(weatherData), null, 2),
},
],
})
);
// ── Register a Prompt ────────────────────────────────────
server.prompt(
"weather_report",
"Generate a weather report for a city",
{ city: z.string().describe("City to report on") },
async ({ city }) => ({
messages: [
{
role: "user" as const,
content: {
type: "text" as const,
text: `Please provide a detailed weather report for ${city}. Use the get_weather tool to fetch the current conditions, then write a brief, friendly summary suitable for a morning briefing.`,
},
},
],
})
);
// ── Start the Server ─────────────────────────────────────
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("Weather MCP Server running on stdio");
}
main().catch(console.error);Step 3: Configure for Claude Desktop
To use this server with Claude Desktop or any MCP-compatible client, add it to the client's configuration.
{
"mcpServers": {
"weather": {
"command": "npx",
"args": ["tsx", "/path/to/mcp-weather-server/src/index.ts"]
}
}
}Once configured, the AI client will automatically discover the server's tools, resources, and prompts during initialization. When a user asks about the weather, the AI will see the get_weather tool and use it.
Building an MCP Client
Most developers will consume MCP servers rather than build clients from scratch — Claude Desktop, VS Code, and similar hosts handle the client side. But understanding how a client works is valuable for building custom AI applications and agents.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
async function main() {
// Create a client transport that spawns the server as a subprocess
const transport = new StdioClientTransport({
command: "npx",
args: ["tsx", "./path/to/weather-server/src/index.ts"],
});
// Create and connect the client
const client = new Client(
{ name: "my-ai-agent", version: "1.0.0" },
{ capabilities: {} }
);
await client.connect(transport);
// Discover available tools
const { tools } = await client.listTools();
console.log("Available tools:");
for (const tool of tools) {
console.log(` - ${tool.name}: ${tool.description}`);
}
// Discover available resources
const { resources } = await client.listResources();
console.log("\nAvailable resources:");
for (const resource of resources) {
console.log(` - ${resource.uri}: ${resource.description}`);
}
// Call a tool
const result = await client.callTool({
name: "get_weather",
arguments: { city: "Toronto", units: "celsius" },
});
console.log("\nWeather result:", result.content);
// Read a resource
const cities = await client.readResource({
uri: "weather://cities",
});
console.log("\nSupported cities:", cities.contents);
await client.close();
}
main().catch(console.error);Integrating MCP with an LLM Agent
The real power of MCP emerges when you wire it into an agent loop. The client discovers tools from MCP servers and passes them to the LLM, which decides when and how to use them.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import Anthropic from "@anthropic-ai/sdk";
const anthropic = new Anthropic();
async function agentWithMcp() {
// Connect to one or more MCP servers
const weatherClient = await connectMcpServer(
"npx", ["tsx", "./weather-server/src/index.ts"]
);
const fileClient = await connectMcpServer(
"npx", ["tsx", "./filesystem-server/src/index.ts"]
);
// Gather tools from all connected servers
const allTools = [
...(await weatherClient.listTools()).tools,
...(await fileClient.listTools()).tools,
];
// Convert MCP tool definitions to Anthropic format
const anthropicTools = allTools.map((tool) => ({
name: tool.name,
description: tool.description || "",
input_schema: tool.inputSchema,
}));
// Run the agent loop
const messages: Anthropic.MessageParam[] = [
{
role: "user",
content:
"What is the weather in Toronto? Save a summary to weather-report.txt.",
},
];
while (true) {
const response = await anthropic.messages.create({
model: "claude-sonnet-4-20250514",
max_tokens: 4096,
tools: anthropicTools,
messages,
});
if (response.stop_reason === "tool_use") {
const toolUse = response.content.find(
(block) => block.type === "tool_use"
);
if (!toolUse || toolUse.type !== "tool_use") break;
// Route the tool call to the right MCP server
const mcpClient = findServerForTool(toolUse.name, {
weather: weatherClient,
file: fileClient,
});
const result = await mcpClient.callTool({
name: toolUse.name,
arguments: toolUse.input as Record<string, unknown>,
});
messages.push({ role: "assistant", content: response.content });
messages.push({
role: "user",
content: [
{
type: "tool_result",
tool_use_id: toolUse.id,
content: JSON.stringify(result.content),
},
],
});
} else {
// Agent has finished
const textBlock = response.content.find(
(block) => block.type === "text"
);
if (textBlock && textBlock.type === "text") {
console.log("Agent response:", textBlock.text);
}
break;
}
}
await weatherClient.close();
await fileClient.close();
}
async function connectMcpServer(command: string, args: string[]) {
const transport = new StdioClientTransport({ command, args });
const client = new Client(
{ name: "mcp-agent", version: "1.0.0" },
{ capabilities: {} }
);
await client.connect(transport);
return client;
}
function findServerForTool(
toolName: string,
servers: Record<string, Client>
): Client {
// In production, maintain a tool → server mapping built during discovery
if (toolName.startsWith("get_weather")) return servers.weather;
return servers.file;
}
agentWithMcp().catch(console.error);Real-World MCP Use Cases
MCP servers have exploded across every domain where AI needs to interact with external systems. Here are the patterns seeing the most adoption.
Database Access
MCP servers for PostgreSQL, MySQL, SQLite, and MongoDB let AI agents query databases directly. Instead of copying data into a prompt, the agent can run SQL queries, explore schemas, and analyze results interactively. This is transformative for data analysis workflows.
// Example: PostgreSQL MCP server tool registration
server.tool(
"query",
"Execute a read-only SQL query against the database",
{
sql: z.string().describe("The SQL query to execute (SELECT only)"),
},
async ({ sql }) => {
// Safety: only allow SELECT statements
if (!sql.trim().toUpperCase().startsWith("SELECT")) {
return {
content: [{ type: "text" as const, text: "Only SELECT queries are allowed." }],
isError: true,
};
}
const result = await pool.query(sql);
return {
content: [
{
type: "text" as const,
text: JSON.stringify(result.rows, null, 2),
},
],
};
}
);File System and Code Analysis
File system MCP servers give agents the ability to read, write, search, and navigate codebases. Combined with code analysis tools like tree-sitter or LSP servers, agents can understand code structure at a deep level — finding definitions, tracing call graphs, and identifying dependencies.
API Integrations
MCP servers for GitHub, Jira, Slack, Linear, Notion, and dozens of other services let agents interact with the tools teams already use. An agent can create GitHub issues, update Jira tickets, send Slack notifications, and pull data from Notion — all through standard MCP tool calls.
Monitoring and Observability
MCP servers for Datadog, Grafana, PagerDuty, and CloudWatch let agents monitor infrastructure, query metrics, and respond to incidents. An on-call agent can check dashboards, read logs, and correlate errors across services without a human manually navigating multiple UIs.
Browser Automation
MCP servers built on Playwright or Puppeteer give agents the ability to browse the web, fill forms, take screenshots, and extract data from web pages. This enables workflows like automated testing, web scraping, and UI verification.
The MCP Ecosystem
The MCP ecosystem has grown at a remarkable pace. As of early 2026, there are over 5,500 community-built MCP servers covering everything from cloud infrastructure to creative tools.
Discovery and Registries
Finding MCP servers has been addressed through several registries and directories. The official MCP server registry, community-maintained lists on GitHub, and package manager integrations (npm, pip) make it straightforward to discover servers for common use cases.
Popular MCP Servers
- Filesystem — Read, write, and search files with configurable permission boundaries
- GitHub — Full GitHub API access: repos, issues, PRs, actions, code search
- PostgreSQL — Read-only database access with schema exploration
- Puppeteer / Playwright — Browser automation and web scraping
- Slack — Channel management, message sending, and search
- Google Drive — Document access, search, and management
- Docker — Container management and log access
- Kubernetes — Cluster inspection, pod management, and log streaming
- Sentry — Error tracking, issue management, and performance monitoring
- Stripe — Payment data access, invoice management, and subscription queries
Building for the Registry
Publishing an MCP server follows the same patterns as publishing any npm or pip package. The MCP specification includes a server manifest format that makes discovery and configuration automatic.
{
"name": "@myorg/mcp-analytics",
"version": "1.0.0",
"description": "MCP server for product analytics",
"mcp": {
"transport": ["stdio"],
"capabilities": {
"tools": true,
"resources": true,
"prompts": true
}
},
"bin": {
"mcp-analytics": "./dist/index.js"
}
}Security Considerations
MCP servers expose real capabilities to AI systems. Security is not optional — it is foundational. The protocol and ecosystem have evolved several layers of defense.
Authentication and Authorization
MCP supports OAuth 2.1 for remote servers, allowing proper authentication flows. Servers can require specific scopes, validate tokens, and enforce per-user permissions. For local stdio servers, the security boundary is the operating system process model.
// Example: OAuth-protected MCP server
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
const server = new McpServer({
name: "secure-api-server",
version: "1.0.0",
});
// Tool with permission checks
server.tool(
"delete_record",
"Delete a record from the database",
{ id: z.string().describe("Record ID to delete") },
async ({ id }, { authInfo }) => {
// Verify the caller has delete permissions
if (!authInfo?.scopes?.includes("records:delete")) {
return {
content: [
{
type: "text" as const,
text: "Permission denied: requires records:delete scope",
},
],
isError: true,
};
}
await db.records.delete(id);
return {
content: [{ type: "text" as const, text: `Record ${id} deleted.` }],
};
}
);Sandboxing and Trust Boundaries
MCP encourages a layered trust model. Local stdio servers run in a sandboxed subprocess with limited file system access. Remote servers communicate over HTTPS with proper certificate validation. Clients should prompt users before executing destructive operations and maintain audit logs of all tool invocations.
Prompt Injection Defense
Because MCP servers return data that feeds into LLM context, they are a potential vector for indirect prompt injection. A malicious data source could include instructions that try to manipulate the AI. Defenses include input sanitization, output validation, and keeping tool results in structured formats that the LLM treats as data rather than instructions.
Best Practices
- Apply the principle of least privilege — give servers only the permissions they need.
- Use read-only access by default and require explicit opt-in for write operations.
- Implement rate limiting and request size limits on remote servers.
- Log all tool invocations for audit trails.
- Validate all inputs with strict schemas (Zod, JSON Schema).
- Use human-in-the-loop confirmation for destructive or irreversible actions.
What Is Next for MCP
MCP is evolving rapidly. The protocol is still young, and several major developments are on the horizon.
Streamable HTTP Transport
The original SSE-based remote transport is being replaced by Streamable HTTP, which supports bidirectional streaming, better error handling, and session resumption. This makes remote MCP servers more robust and performant, especially for long-running operations.
Improved Authentication
The authentication story is maturing with full OAuth 2.1 support, including PKCE flows, dynamic client registration, and token refresh. This makes it practical to build multi-tenant MCP servers that serve different users with different permissions.
Agent-to-Agent Communication
One of the most exciting developments is using MCP as a protocol for agents to communicate with each other. Instead of just connecting AI to tools, MCP can connect AI to AI — enabling multi-agent systems where specialized agents collaborate through well-defined interfaces.
Elicitation
The proposed elicitation capability allows MCP servers to request additional information from the user during tool execution. If a server needs clarification or confirmation, it can ask the client to prompt the user, enabling interactive workflows that go beyond simple request-response patterns.
Registry and Discovery Standards
As the ecosystem grows, standardized registry and discovery protocols will make it easier to find, evaluate, and configure MCP servers. Imagine a future where your AI agent can discover and connect to new tools on the fly, much like a browser discovers web services today.
Conclusion
MCP started as a small open-source project from Anthropic and became the standard protocol for connecting AI to the world. It succeeded because it was simple, open, and arrived at exactly the right time.
The implications are profound. Before MCP, every AI integration was a custom project. After MCP, integrations are composable, reusable, and universal. A tool built for Claude works with GPT works with Gemini works with your custom agent. The ecosystem effect is just beginning — 5,500 servers today will be 50,000 by the end of the year.
For developers, the message is clear: learn MCP now. If you build tools, APIs, or services, expose them via MCP. If you build AI applications, consume MCP servers. If you build agents, use MCP as your universal tool interface. The standard has won. The only question is how quickly you adopt it.
The USB-C analogy is apt, but it may actually undersell the impact. USB-C standardized physical connectors. MCP is standardizing the interface between artificial intelligence and the entire digital world. Every database, every API, every file system, every service — all accessible through one protocol.
The best infrastructure is invisible. MCP is becoming the invisible layer that connects AI to everything — and like the best standards, once it is everywhere, we will wonder how we ever built anything without it.