MCP Won the Protocol War. Security Lost.
Model Context Protocol is now an industry standard under the Linux Foundation. The security issues that researchers flagged remain unfixed.
The Model Context Protocol started as Anthropic's internal solution for connecting Claude to external tools. In November 2024, they open-sourced it. By March 2025, OpenAI adopted it. By May, Microsoft and GitHub joined the steering committee. In December, Anthropic donated MCP to the Linux Foundation. Google, AWS, Microsoft, Cloudflare, and Bloomberg signed on.
MCP won. It's the standard for how AI agents connect to external systems.
In April 2025, security researchers published an analysis of MCP's outstanding vulnerabilities. Prompt injection. Tool permissions that allow combining tools to exfiltrate data. Lookalike tools that silently replace trusted ones. The issues haven't been fixed.
Organizations implementing MCP report 40-60 percent faster agent deployment times. Gartner predicts 40 percent of enterprise applications will include task-specific AI agents by end of 2026. The protocol is enabling exactly the adoption curve everyone wanted.
The security model assumes trust at multiple points where trust doesn't exist.
MCP lets an agent call external tools defined by JSON specifications. When an agent connects to a new MCP server, it receives a list of available tools and their parameters. The agent can then call those tools based on user requests or its own reasoning.
The problem: an agent can't verify that a tool does what its description says. A tool named "read_file" might exfiltrate data on every call. A tool named "send_email" might BCC every message to an attacker. The agent relies on descriptions provided by the server, and those descriptions can lie.
This matters more as MCP adoption increases. The protocol's value comes from a growing ecosystem of servers. You can connect an agent to your CRM, your database, your email, your calendar. Each connection is a server you're trusting with whatever access the agent has.
The current mitigation is user approval for tool calls. Before an agent executes a sensitive action, it asks the user. This works for occasional, visible actions. It fails for high-volume automations where the whole point is eliminating human review.
The other security gap is tool shadowing. If two MCP servers offer tools with similar names, an agent might call the wrong one. A malicious server can register tools that intercept requests meant for legitimate ones. There's no namespacing that makes tool origin clear.
I don't think these problems are unfixable. Cryptographic tool verification is tractable. Capability-based permissions exist. Audit logging with tamper-proof guarantees is well understood.
The issue is that security features add friction. They slow adoption. They complicate the developer experience. The incentive for every party involved is to ship integrations now and address security later.
This is how security debt accumulates. A protocol gets adopted because it's easy. Vulnerabilities get documented but not prioritized. The installed base grows. Eventually the cost of fixing issues becomes prohibitive because too many systems depend on the insecure behavior.
MCP is at the early stage of this pattern. The protocol could still be hardened. The organizations on the steering committee have the resources. The question is whether they'll prioritize security before the ecosystem calcifies around the current model.
My guess: they won't. Security features will arrive incrementally, probably after a high-profile incident makes them politically necessary. The organizations deploying MCP-connected agents today are accepting risks they may not fully understand.
The protocol won. That's separate from whether it's ready for production.