MCP, From Pidgin to Protocol

Before MCP, every AI integration was its own pidgin — improvised, simplified, just enough to get one client talking to one tool. Five floors from the first shared word to a real protocol: what MCP is, when to use it, every server worth running, the failure modes teams hit.


Before MCP, AI integrations were an N-by-M graveyard. Five AI clients times twenty internal systems meant a hundred bespoke adapters: Claude-to-GitHub, Cursor-to-Postgres, ChatGPT-to-Linear, internal-agent-to-Slack, and a pile of JSON schemas rotting in five repos. Every model talked to every tool through a different tunnel that someone had to maintain forever.

MCP is the protocol that ended the graveyard. Anthropic shipped it in November 2024 as an open standard for connecting AI applications to external systems. As of May 2026 it has 97 million monthly SDK downloads, 81,000 GitHub stars, 17,000+ servers indexed across registries, and is in production at 78 percent of enterprise AI teams. OpenAI adopted it in May 2025. Google followed for Gemini. Anthropic donated the protocol to the Linux Foundation in December 2025. Microsoft, AWS, Cloudflare, and Bloomberg all signed on as supporters of the new Agentic AI Foundation.

This is the companion piece for MCP. Where the RAG article walked retrieval up five floors of understanding, this one walks the protocol — the new lingua franca for AI talking to the world — from a first shared word to its diplomatic frontier. By the time you reach the top, you'll know what MCP is, when to use it instead of REST or vendor function calling, every server worth running, every alternative worth knowing about, and the failure modes most teams discover the hard way.


Floor 1 — First Word (ELI5)

An AI model is like a very smart kid locked in a room with no hands. MCP is a standard way to give it safe, labeled doors to things outside the room: files, calendars, GitHub, databases, browsers, support tickets. The important part isn't magic intelligence. The important part is that every door works the same way, no matter which AI is opening it.


TL;DR (engineer level)

📋
MCP in six bullets:

• What. A JSON-RPC 2.0 client/server protocol for exposing tools, resources, and prompts to AI hosts. Think LSP for agent context, not "REST replacement."

• When. When more than one AI client needs the same integration. When the integration combines actions, context, prompts, and permissions. When portability and discovery matter more than raw throughput.

• How. Local servers run over stdio; production remote servers run over Streamable HTTP with OAuth 2.1.

• When NOT. When a single backend call, a public REST endpoint, a vendor SDK, or a deterministic workflow is enough. MCP adds discovery, transport, auth, schemas, consent, and a security surface — that overhead has to earn its place.

• Killer use case. Not thin REST wrappers. It's packaging context, actions, auth, tool descriptions, prompts, and operational boundaries into one reusable agent interface that every client can speak.

• Blood price. Tool overload kills accuracy. Tool descriptions are prompt-injection surfaces. Local servers inherit ugly host privileges. Registries verify identity, not safety. "We support MCP" is now marketing fog.

Floor 2 — What problem MCP actually solves

Three years ago, the bottleneck of AI was the model. The model couldn't reason, the model couldn't follow instructions, the model couldn't keep its facts straight. Today, that's mostly fixed. The bottleneck moved.

The product is no longer the model. The product is the model plus context plus permissions plus actions plus audit. A model that can't read your repo, query your tickets, inspect production logs, or act inside approved boundaries is just a haunted autocomplete box. RAG solved part of this — the read side. MCP solves the other part — the read-and-write side, the part where the model actually does things in the world.

Why now? Because the integration tax was about to crush the ecosystem. Every IDE, every chatbot, every agent framework, every internal AI tool was inventing its own way to plug into the same handful of systems. GitHub. Slack. Jira. Postgres. Filesystem. Browser. The math was unsustainable: N AI clients times M integrations, and every cell in the matrix was someone's quarterly OKR.

MCP attacks the integration tax by inverting the relationship. A system exposes one MCP server. Any MCP-capable host — Claude Desktop, Cursor, VS Code, OpenAI Agents SDK, Gemini, your internal agent shell — discovers what that server offers and calls it through the same protocol. Not perfect. Not safe by default. Not a silver bullet. But a shared shape is enough to stop burning weeks on adapters and start asking the real question: which capabilities should an AI be allowed to use, and under what conditions?


Floor 3 — How it actually works

MCP has three roles: host, client, and server. The host is the AI application — Claude Desktop, Cursor, VS Code with Copilot, an OpenAI Agent, your internal shell. The MCP client is the connector inside that host. The MCP server is the program exposing context and actions. One host runs one client per server.

The wire format is JSON-RPC 2.0. Initialization negotiates protocol version, identity, and capabilities. After that, the client can list tools, read resources, fetch prompt templates, subscribe to changes, and invoke tools. MCP is stateful at the protocol layer — initialization matters, capabilities matter, and HTTP sessions may carry an MCP-Session-Id header for continuity.

Two transports matter today. Stdio is local: the host launches the server as a subprocess and talks to it over standard input and output. This is why every bad MCP install guide begins with "npx -y something something." Streamable HTTP is the production path: POST and GET to one endpoint, optional Server-Sent Events for streaming, support for resumability, session headers, and standard HTTP auth. It replaced an older HTTP+SSE transport that still haunts compatibility layers.
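
A minimal sketch of that stdio flow from the client side, using the official Python SDK (the mcp package). The reference filesystem server, its list_directory tool, and the paths are placeholder choices, and the exact SDK surface can shift between versions:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch a local server as a subprocess and talk to it over stdin/stdout.
# The reference filesystem server is used here purely as an example.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # version + capability negotiation
            tools = await session.list_tools()  # discover what the server offers
            print([t.name for t in tools.tools])
            # Invoke one tool by name with JSON arguments.
            result = await session.call_tool("list_directory", {"path": "/tmp"})
            print(result.content)

asyncio.run(main())
```

Everything after initialize() is ordinary JSON-RPC under the hood; the SDK just hides the framing and the capability bookkeeping.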

The server exposes three primitives:

  • Tools — model-invoked functions. create_issue, query_database, open_browser, search_docs.
  • Resources — readable context. Files, docs, database rows, logs, objects.
  • Prompts — reusable templates and workflows the server publishes. Underrated.
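
To make the primitives concrete, here is a minimal sketch of a server exposing one of each, using the FastMCP helper from the official Python SDK. The server name, tool, resource URI, and prompt are illustrative, not a recommended design:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("support-desk")  # hypothetical server name

@mcp.tool()
def create_issue(title: str, body: str) -> str:
    """Tool: a model-invoked action (here, a stubbed ticket creator)."""
    return f"Created issue: {title}"

@mcp.resource("tickets://recent")
def recent_tickets() -> str:
    """Resource: readable context the host can pull into the conversation."""
    return "No open tickets."

@mcp.prompt()
def triage_incident(service: str) -> str:
    """Prompt: a reusable workflow template the client can surface to users."""
    return (
        f"Triage the current incident on {service}. "
        "Read tickets://recent first, then call create_issue with your findings."
    )

if __name__ == "__main__":
    mcp.run()  # stdio by default; a host launches this file as a subprocess
```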

The client exposes three primitives back. Roots negotiate filesystem boundaries. Sampling lets the server ask the client to run model inference on its behalf — the server gets model judgment without owning API keys. Elicitation lets the server ask the client to collect input from the user. These are powerful and dangerous. Treat them as loaded weapons, not plugin candy.

Two principles most tutorials skip. First: a tool list is not a capability surface — what matters is whether the model can pick the right one under load. Add 87 poorly named tools and accuracy collapses; the model isn't confused by the task, it's confused by the menu. Second: prompts are probably more valuable than tools. Tools do actions. Prompts encode operational judgment. A great MCP server doesn't just expose "create_issue" — it exposes "triage production incident" with the right tools and the right context already wired in.


The server landscape (2026, opinionated)

Reference servers (official, learning material)

The official modelcontextprotocol/servers repo distinguishes live reference servers from archived historical ones. The live set: Everything (a protocol testbed), Fetch, Filesystem, Git, Memory, Sequential Thinking, Time. Archived historical examples that the ecosystem still copies the shape of: AWS KB Retrieval, Brave Search, GitHub, GitLab, Google Drive, Google Maps, PostgreSQL, Puppeteer, Redis, Sentry, Slack, SQLite. Honest take: reference servers are learning material and primitives, not enterprise architecture. Filesystem and Git are useful. Production teams should prefer vendor-maintained servers.

Vendor-maintained servers (where this becomes real)

  • GitHub (official GitHub MCP server): repository, issue, PR, Actions, and platform operations. Inherits caller permissions. The canonical "vendor should own its own MCP surface" case.
  • Linear (https://mcp.linear.app/mcp): hosted authenticated server. Streamable HTTP, OAuth 2.1, dynamic client registration, direct bearer tokens. Clean SaaS MCP.
  • Cloudflare (remote MCP infrastructure, 2,500+ endpoints): Workers stack with OAuth provider support, MCP server portals, and Access-backed auth. Internal workforce adoption documented.
  • Sourcegraph (code search and analysis MCP): multi-repo code intelligence as agent-callable tools.
  • Sentry (observability and MCP instrumentation): tracking tool executions, prompt retrievals, and resource access. If your agent can act, you need traces — otherwise you're debugging a séance.
  • Zapier (MCP endpoint for Zapier accounts): thousands of apps reachable behind one automation layer. Not glamorous, but it matters.
  • OpenAI (docs MCP plus ChatGPT/API remote MCP): both producer and consumer. Joined the steering committee May 2025.
  • Google (developer docs, Colab, Data Commons MCPs): ADK uses MCP toolsets natively.

Honest take: vendor-hosted remote MCP is what makes the protocol survivable. Local npx toys got MCP adopted. Remote OAuth-backed servers make it enterprise-relevant.

Clients (the host applications)

  • Claude Desktop (Anthropic's reference app): full primitives. The reference UX.
  • Claude Code (Anthropic's CLI): full primitives. The serious operator's surface.
  • Cursor (AI-first IDE): local and remote MCP, OAuth-backed Streamable HTTP.
  • VS Code + GitHub Copilot (Microsoft IDE): native agent-mode MCP. The mass-market entry point.
  • Continue / Cline / Zed / Windsurf (other IDEs and IDE plugins): Continue supports stdio, SSE, and streamable-http. Zed treats MCP as "context servers."
  • OpenAI Agents SDK / Responses API (OpenAI's orchestration surface): hosted and local MCP patterns. Enterprise-ready.
  • Gemini CLI / Google ADK (Google's agent surfaces): MCP toolsets are first-class.
  • Sourcegraph Cody (AI for code search): tools and resources only — no sampling or elicitation.

Honest take: client support is broad but not equivalent. "Supports MCP" can mean tools only, no prompts, weak resources, no sampling, broken OAuth, no dynamic discovery, or terrible approval UX. Always ask the precise capability checklist before you trust the marketing.

SDKs

TypeScript, Python, C#, and Go are Tier 1. Java and Rust are Tier 2. Swift, Ruby, and PHP are Tier 3. Kotlin is TBD. TypeScript and Python are the practical defaults — that's where the community is and where the docs are richest. Go is strong for infrastructure servers. Rust is still more "serious systems people are watching this" than fastest path to ship.

Registries and marketplaces

The official MCP Registry launched in preview September 2025 — it stores standardized metadata, not server code. It verifies namespaces via GitHub, DNS, and HTTP, and explicitly delegates deeper security scanning to package registries and downstream aggregators. Third-party directories: mcp.so, Smithery, PulseMCP, Glama, OpenTools, MCPHub. Honest take: registries solve discovery, not trust. A directory is not a security review. A verified namespace means "this is probably the claimed publisher" — it does not mean "this server will not exfiltrate your tokens." Registry presence is a gravestone label, not an autopsy.


Real production cases

MCP is not theoretical. The following systems run on it, in production, today.

  • Claude Desktop / Claude Code: Anthropic made MCP mainstream by putting local servers into Claude Desktop and later expanding remote and server workflows. The reference UX: install a server, grant access, let Claude use its tools.
  • Cursor: coding agents reaching external tools — repo context, issue trackers, docs, browser tools, design systems, database inspection. Where MCP feels natural: the IDE is already an agent host.
  • GitHub Copilot Chat: agent mode reads repository state, creates issues, lists PRs, and performs GitHub actions with inherited permissions. The canonical vendor-owns-its-own-surface case.
  • Cloudflare Workers + remote MCP: moved the conversation from local subprocesses to deployable remote MCP. Workers ships OAuth provider support and MCP server portals; internal Cloudflare adoption uses Access-backed auth and discovery portals.
  • Linear: hosted authenticated server for issues, projects, and comments. Streamable HTTP + OAuth 2.1 + dynamic client registration. Clean SaaS MCP from a vendor that took it seriously.
  • Zapier MCP: a single endpoint that gives any AI client access to thousands of apps in the Zapier graph. Quiet but enormous.
  • Sentry MCP monitoring: instrumentation for tracking tool executions, prompt retrievals, and resource access in MCP servers. Production agents must be traceable; without it you're debugging blind.
  • Anthropic, Block, and OpenAI as co-founders of the AAIF: the December 2025 Linux Foundation donation, with Microsoft, AWS, Cloudflare, Bloomberg, and Google as supporters. The political move that made MCP survivable.

Should I use MCP? A decision tree

Ask these in order. Don't skip ahead. Most projects that fail to ship MCP do so because they answered question 4 first.

  1. Will more than one AI client use this integration? If yes → MCP. If no, keep reading.
  2. Is the integration for normal software clients too (humans, services)? → Build a REST API or OpenAPI surface. Add an MCP wrapper later if agent demand shows up.
  3. Does the vendor already ship an SDK that handles auth, retries, pagination, webhooks, and domain objects? → Use the SDK. Wrap with function calling locally. Don't reinvent.
  4. One agent app, three to ten tools, one model provider you control? → Function calling. MCP is overkill.
  5. Is the workflow strictly deterministic — same input, same call, every time? → Queue, workflow engine, or direct API. Don't put a model in the middle.
  6. Otherwise: MCP. Start with one server. Hook it into Claude Desktop or Cursor. Iterate. Add OAuth before you put it on the internet.

Brutal rule: if your MCP server is just curl with theatrical lighting, you probably built the wrong thing. The best MCP servers are not API wrappers — they're opinionated capability products with prompts and tools aligned around real workflows.


Alternatives — when each one wins

MCP isn't the only way to plug an AI into the world. Five alternatives are worth knowing, with clear when-each-wins logic.

  • REST / OpenAPI. Wins when: you need a public-facing interface for humans and services, high throughput, strict contracts, caching, existing auth. MCP wins when: the consumer is an AI host that needs discovery, schemas, prompts, resources, and permission mediation.
  • Vendor function calling (OpenAI, Anthropic, Google). Wins when: one app, one model provider, one controlled tool set; simpler, cheaper, closer to the model. MCP wins when: the same integration must be usable from Claude, Cursor, VS Code, OpenAI Agents, Gemini, and internal hosts.
  • OpenAPI-as-tool-schema. Wins when: you already have a REST API and want quick tool generation. MCP wins when: the integration needs richer context, local resources, prompt templates, subscriptions, sampling, roots, or multi-client reuse.
  • LangChain tools. Wins when: you're inside a LangChain Python app with custom chains. MCP wins when: you need to cross ecosystem boundaries; a LangChain tool is app-local, an MCP server is client-portable.
  • OpenAI plugins (deprecated) or ChatGPT GPTs. Wins when: you want end-user distribution inside ChatGPT and simple consumer workflows. MCP wins when: you're building infrastructure, IDE integrations, cross-client tooling, or internal agent platforms.
  • Agent SDKs (OpenAI Agents, Mastra, Vercel AI SDK, Google ADK). Wins when: the problem is orchestration (model loops, memory, evals, deployment, tracing, handoffs). MCP wins when: this isn't a contest; MCP is not an agent framework, it's the tool/context interface those frameworks consume. Agent SDK + MCP is the right pairing.
  • No-code agent builders (Zapier, Make, n8n, Copilot Studio). Wins when: the buyer wants workflows, not architecture. MCP wins when: engineers need portable, inspectable, versioned capability surfaces.

The dirty secret of 2026: most production AI assistants are MCP plus a couple of vendor SDKs plus an agent framework on top, not pure MCP. The categories blur. The right system picks two or three of these and routes between them.


Floor 4 — Advanced (the production-grade patterns)

OAuth 2.1 and the auth problem

Remote MCP lives or dies on auth. The current authorization spec is built around OAuth 2.1-style flows for HTTP transports — the MCP server acts as a protected resource, clients discover authorization metadata via a well-known endpoint, dynamic client registration and client metadata documents exist for interoperability. This is good. It is not solved. Enterprise auth still wants SSO, device posture, scopes per tool, audit, revocation, and policy engines. Most of that lives outside the MCP spec, in the surrounding identity stack. The companion piece on workload identity federation covers the part of this story that MCP itself doesn't address.
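
A rough sketch of the discovery handshake the current spec describes: an unauthenticated request gets a 401, the client reads protected-resource metadata (RFC 9728) to find the authorization server, then runs a standard OAuth 2.1 flow against it. The URL is hypothetical, and a real client should parse the WWW-Authenticate header rather than assuming the well-known path; treat this as an illustration, not a drop-in client:

```python
from urllib.parse import urlsplit

import httpx

def discover_auth(mcp_url: str) -> dict:
    # 1. Unauthenticated call: a compliant remote server answers 401 and points
    #    at its protected resource metadata via the WWW-Authenticate header.
    resp = httpx.post(mcp_url, json={"jsonrpc": "2.0", "id": 1, "method": "ping"})
    assert resp.status_code == 401

    # 2. Fetch protected resource metadata to learn which authorization
    #    servers guard this MCP server.
    parts = urlsplit(mcp_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    resource_meta = httpx.get(f"{origin}/.well-known/oauth-protected-resource").json()

    # 3. Fetch the authorization server's own metadata (RFC 8414) and run a
    #    normal authorization-code + PKCE flow from there.
    auth_server = resource_meta["authorization_servers"][0]
    return httpx.get(f"{auth_server}/.well-known/oauth-authorization-server").json()

# Usage (hypothetical endpoint):
# discover_auth("https://mcp.example.com/mcp")
```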

Multi-server orchestration and the tool overload problem

Connect GitHub, Linear, Slack, Postgres, Sentry, browser automation, internal docs, and a couple of custom servers, and the model sees a cathedral of tools. Tool search, namespacing, dynamic loading, and gateways become mandatory. Agents are not going to load 600 tools into context. They will search capability registries. Anthropic has been pushing tool search and programmatic calling patterns; OpenAI's Agents SDK supports hosted and local MCP composition. Direction is obvious: capability discovery is the next layer of plumbing.
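
Nothing in the spec solves this today, so many teams put a thin selection layer between "every connected server" and "what the model actually sees." A deliberately crude sketch of that idea using keyword overlap; real systems use embeddings, a dedicated tool-search tool, or a gateway, and every name here is hypothetical:

```python
def select_tools(all_tools: list[dict], task: str, limit: int = 12) -> list[dict]:
    """Rank aggregated tools by keyword overlap with the task and keep a small slice.

    all_tools: [{"name": ..., "description": ...}, ...] collected across servers.
    """
    task_words = set(task.lower().split())

    def score(tool: dict) -> int:
        text = f"{tool['name']} {tool.get('description', '')}".lower()
        return sum(1 for word in task_words if word in text)

    ranked = sorted(all_tools, key=score, reverse=True)
    return ranked[:limit]  # only this slice becomes the model's tool menu

# Usage: select_tools(tools_from_all_servers, "triage the Sentry alert for checkout")
```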

Sampling, roots, and elicitation — the dangerous primitives

Sampling is under-discussed. It lets a server ask the client to run model inference, with text or audio or image content. Useful when the server needs model judgment but shouldn't hold API keys. Also a security nightmare if the client can't clearly show the prompt, the model, the data exposure, and the result path.

Roots are the filesystem boundary primitive. They're not sandboxing — they're a negotiated list of places the server may operate. Good clients enforce them. Lazy clients merely confess them. Production deployments need actual sandboxes underneath.
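
If you write a server that honors roots, enforce them server-side instead of trusting the client to do it. A minimal path check, with the caveat that a real deployment still wants an OS-level sandbox underneath:

```python
from pathlib import Path

def is_within_roots(requested: str, roots: list[str]) -> bool:
    """True only if the resolved path sits under one of the negotiated roots."""
    target = Path(requested).resolve()
    for root in roots:
        # resolve() collapses '..' segments before the containment check (Python 3.9+).
        if target.is_relative_to(Path(root).resolve()):
            return True
    return False

assert is_within_roots("/workspace/app/main.py", ["/workspace"])
assert not is_within_roots("/workspace/../etc/passwd", ["/workspace"])
```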

Elicitation lets the server pause and ask the client to collect user input mid-flow. Powerful for human-in-the-loop workflows. Equally powerful for malicious servers to extract sensitive information through plausible-looking forms.

Resource subscriptions and living context

Static tool lists are fine for demos. Production systems change: tickets update, logs stream, schemas evolve, files move. Resource subscriptions and list-change notifications matter for living context. If your MCP server pretends nothing changes between calls, you're shipping a postcard, not a system.

Federation — the next fight

Not "one MCP server," but server portals, gateways, private registries, downstream marketplaces, enterprise allowlists, and capability brokers. This is where Cloudflare's MCP portal direction matters more than another toy server — turning MCP from a per-machine plumbing into a per-organization fabric is what makes it operationally serious.


Floor 5 — Cutting edge (politics, protocols, and failure modes)

MCP is Anthropic-born — that matters and is now contained

Standards are never politically neutral at birth. MCP came from Anthropic. But Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation on December 9, 2025, with Anthropic, Block, and OpenAI as co-founders, and Google, Microsoft, AWS, Cloudflare, and Bloomberg as supporters. That doesn't make MCP politically pure. It makes MCP survivable. The protocol is now governed by an open foundation, the spec lives in public, and the steering committee includes the labs whose models will be the largest consumers. This is the correct outcome for an interoperability protocol, and the timeline matters: from "Anthropic project" to "Linux Foundation property" in 13 months is fast.

MCP is LSP for AI, not REST 2.0

The right comparison isn't to REST or to OpenAPI. The right comparison is to LSP — the Language Server Protocol that lets any editor talk to any language server through one shared interface. Or DAP — the Debug Adapter Protocol that does the same for debuggers. Same JSON-RPC pattern. Same capability negotiation. Same client/server lifecycle. Same notifications. Different danger surface: language servers rarely ask an LLM to decide whether to send your production credentials to a SaaS endpoint. MCP inherits the elegance of LSP and the security implications of giving an autonomous agent permission to act on your behalf.

MCP is not A2A

MCP is agent-to-tool-and-context. A2A — agent-to-agent — is a different protocol for autonomous services to negotiate tasks, status, delegation, and handoff. If two services need to coordinate as peers, that's not naturally MCP work. You can fake it with tools. You should not build your whole agent society out of fake tools. Google's protocol guide positions MCP alongside A2A, AP2 (agent payments), and A2UI (agent-rendered UI) — all complementary, none interchangeable.

Unsolved gaps as of May 2026

  • Capability negotiation is too shallow for trust — no semantic risk labels, no standardized scopes.
  • Tool descriptions are prompt-injection surfaces, and the spec acknowledges this without solving it.
  • Registries authenticate names better than they authenticate behavior.
  • Permission UX varies wildly by client. Same server, different consent dialogs, different leaked context.
  • Multi-server tool overload silently degrades model accuracy below acceptable thresholds.
  • Local stdio servers inherit ugly host privileges — they run as your user, with your tokens, in your shell.
  • Remote MCP auth is improving but enterprise policy enforcement is still uneven.
  • Observability is bolted on after the first incident, because people enjoy pain.

Security failure modes (not theoretical)

A malicious MCP server can lie in tool descriptions. It can request broad scopes. It can exfiltrate context through tool outputs. It can poison prompt templates. It can leak tokens from environment variables. It can smuggle commands through unsafe subprocess handling. Supply-chain risk is brutal because most installs are an npx or uvx command copied from a directory page, and most directories don't audit the underlying code. The official spec's security section explicitly warns that tool descriptions are untrusted and that hosts must obtain consent for data access and tool execution. Almost nobody reads it.

The future: capability negotiation with teeth

Signed server metadata. Tool-level scopes. Semantic risk labels ("this tool can write to production"). Sandbox requirements. Provenance. Version pinning. Policy engines. Evals for tool-selection accuracy. Registries that say more than "this namespace is real." All of this is in motion. None of it is shipped. The MCP roadmap mentions stateless server operation and automatic discovery via MCP Server Cards for H2 2026 — that's the floor of what enterprise needs, not the ceiling.


When NOT to use MCP

  • A single internal function call. Use function calling.
  • To hide a bad API. Fix the API. MCP can't paper over a broken contract.
  • Destructive tools without scoped auth, audit logs, approval gates, idempotency, and rollback. The blast radius is too large.
  • Random community servers running on developer laptops with broad filesystem and token access. Read the source first. Or run them sandboxed.
  • Connecting every server to every agent. Tool bloat is a silent accuracy killer. Curate.
  • Deterministic workflow orchestration. Use a queue, a workflow engine, or a direct SDK. Don't put a model in the middle of "every step has to fire exactly once."
  • Treating roots as sandboxing. They're boundaries only when the client enforces them. They're not jails.
  • An MCP server with vague tool names. run_query is a loaded gun. read_only_customer_invoice_summary is a tool (see the sketch after this list).
  • Believing registry presence means safety. A registry entry is a gravestone label, not an autopsy.
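
To make the vague-tool-names rule concrete, here is the difference as it looks in server code. The tracker, table, and names are hypothetical; the point is the shape of the contract the model sees:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")  # hypothetical server

# The loaded gun: arbitrary SQL, nothing for the model or a reviewer to reason about.
# @mcp.tool()
# def run_query(sql: str) -> str: ...

@mcp.tool()
def read_only_customer_invoice_summary(customer_id: str, months: int = 3) -> str:
    """Return a read-only summary of one customer's invoices for the last N months."""
    # A fixed, parameterized query lives here; the model never writes SQL.
    return f"Invoice summary for {customer_id}, last {months} months"
```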

Hot takes nobody else will tell you

1. The best MCP servers are not API wrappers

They're opinionated capability products. "create_issue" is a function. "triage production incident with the on-call playbook attached" is a product. The first one ships in a weekend. The second one earns its place in your stack.

2. The model is rarely confused by the task

It's confused by your 87 poorly named tools. Every tool you add to the menu degrades selection accuracy on the tools you actually wanted called. Curate aggressively. Use tool search for large surfaces. Hide tools the agent shouldn't reach for.

3. Stdio is for laptops; remote is for platforms

Stdio servers are perfect for personal automations and laptop power-users. They are terrible for shared infrastructure — they inherit user privileges, they don't have proper auth, and they don't survive being moved off the original machine. Remote Streamable HTTP plus OAuth 2.1 is what turns MCP from a hobby into a platform.
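
In the official Python SDK, the jump from laptop to platform is mostly a transport flag plus real auth and TLS in front of it. A hedged sketch, with the flag name taken from current SDK docs and the OAuth layer deliberately left out:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-tools")  # hypothetical server

# ... register tools, resources, and prompts as usual ...

if __name__ == "__main__":
    # Same server, different transport: serve Streamable HTTP instead of stdio.
    # Put OAuth 2.1 and TLS in front before exposing this beyond localhost.
    mcp.run(transport="streamable-http")
```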

4. Prompts may end up more valuable than tools

Tools do actions. Prompts encode operational judgment. The MCP spec treats them as equal first-class primitives, but the ecosystem still over-indexes on tools because they're concrete. A great server publishes both — tools for what to do, prompts for when and why.

5. "We support MCP" is now marketing fog

Every vendor announces MCP support. Almost none ship the full spec. Before you trust the badge, ask: tools? resources? prompts? sampling? roots? OAuth? Streamable HTTP? dynamic registration? subscriptions? approval UX? audit logs? Once you ask, the fog clears.

6. Registries authenticate names, not behavior

A verified namespace means "this is probably the claimed publisher." It does not mean "this server will not exfiltrate your tokens." Registry entries are gravestone labels, not autopsies. Treat installation like installing an unsigned binary, because that's what you're doing.

7. MCP doesn't remove integration work — it moves it to the right boundary

Every "we'll just plug in MCP" decision underestimates auth, observability, and capability curation. MCP gives you one shared protocol; the work that used to be in a hundred adapters now lives in fewer servers, but it's still real engineering. The win is concentration, not elimination.


The bigger picture

RAG is the read layer of agentic AI. MCP is the read-and-write layer. Together they're how the model stops being a haunted autocomplete box and starts being something that can actually do work in your business. Eighteen months from now, every IDE, every chatbot, every internal agent, every consumer assistant will speak MCP — the same way every editor speaks LSP today, and every debugger speaks DAP. The protocol won't be the differentiator. The servers will.

This is part of the bet behind Sinapt — the agent-first knowledge layer I'm building. The Sinapt knowledge base exposes itself to every agent in your stack via MCP. One source of truth, queryable by Claude, Cursor, VS Code, ChatGPT, your internal agent shell, and whatever ships next month — without you writing a single per-client adapter. That's the company-scale thesis. The deep dive is in Sinapt and the Queryable Company. The product itself is at sinapt.ai.

If you take one thing from this article: MCP isn't a protocol you adopt. It's a protocol you grow into. Start with one server. Wire it to one client. Add OAuth before you put it on the internet. Curate your tool surface ruthlessly. Trace everything. Read the spec's security section more than once. The teams that win the next two years are the ones whose agents can act safely, not the ones with the longest tool list.

💬
Working with a team that wants to adopt AI-native workflows at scale? I help engineering teams build this capability — workflow design, knowledge architecture, team training, and embedded engineering. → AI-Native Engineering Consulting
📖
Related reading

RAG, From Crayons to PhD — the companion piece. RAG is the read layer for AI; MCP is the read+write layer. Same five-floor structure.

Sinapt and the Queryable Company — the company-wide queryable layer that exposes itself to every agent via MCP. Same primitives, different scale.

Claude Killed the API Key — what enterprise auth for remote MCP actually looks like, and why workload identity federation is the missing piece.