Malte Wagenbach -March 2026
Right now, every major technology company is building infrastructure for AI agents to talk to each other. Anthropic released the Model Context Protocol. Google countered with the Agent-to-Agent Protocol. The Linux Foundation launched the Agentic AI Foundation with backing from AWS, Microsoft, OpenAI, and 97 other organizations. Mastercard is testing agent-to-agent payments in Australia. The W3C has a working group on agent communication standards.
The vision is seductive: autonomous AI agents discovering each other across the internet, negotiating complex tasks, executing multi-step workflows without human intervention. Your personal agent books your trip by coordinating with airline agents, hotel agents, and payment agents -all through standardized protocols.
There is one problem with this vision. The agents cannot reliably do a single task yet.
We are designing the postal system before anyone has learned to write.
1 | What Is Actually Being Built
Let me be precise about what exists, because the protocols themselves are real engineering -it is the premise that is wrong.
Model Context Protocol (MCP) is a client-server protocol on JSON-RPC 2.0, open-sourced by Anthropic in November 2024. It standardizes how AI applications connect to external tools. A host (Claude, ChatGPT, Cursor) connects to servers that expose Tools (functions), Resources (data), and Prompts (templates). Transport runs over stdio or HTTP+SSE. Auth uses OAuth 2.1. By March 2026, it reports 97 million monthly SDK downloads and 10,000+ active servers. Every major AI platform supports it.
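To make the "client-server protocol on JSON-RPC 2.0" concrete, here is a minimal sketch of what an MCP tool invocation looks like on the wire. The `tools/call` method name and the JSON-RPC envelope follow the MCP spec; the tool name and its arguments are invented for illustration.

```python
import json

# A JSON-RPC 2.0 request from an MCP host to a server, asking it to
# invoke a tool. The envelope and "tools/call" method follow the MCP
# spec; the tool itself ("search_flights") is a hypothetical example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_flights",  # hypothetical tool exposed by a server
        "arguments": {"origin": "BER", "destination": "LIS"},
    },
}

# Over the stdio transport, each message travels as a line of JSON
# on the server process's stdin/stdout.
wire = json.dumps(request)
print(wire)
```

The same envelope carries the other MCP primitives mentioned above (resources and prompts have their own method names); only the `method` and `params` change.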
In December 2025, Anthropic donated MCP to the Agentic AI Foundation (AAIF) under the Linux Foundation -co-founded by Anthropic, Block, and OpenAI. Platinum members include AWS, Google, Microsoft, Cloudflare, and Bloomberg. Three projects at launch: MCP itself, Block's open-source agent "goose," and OpenAI's AGENTS.md specification (used in 20,000+ repos).
Agent-to-Agent Protocol (A2A) was announced by Google in April 2025, donated to the Linux Foundation in June 2025. Where MCP connects agents to tools (vertical), A2A connects agents to other agents (horizontal). It introduces Agent Cards -JSON metadata at /.well-known/agent-card.json describing an agent's capabilities, endpoints, and auth requirements. Think machine-readable business cards. IBM's competing protocol (ACP) merged into A2A in September 2025. Version 0.3 shipped with gRPC support. 150+ organizations endorse it.
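For readers who want to see the "machine-readable business card," here is an illustrative sketch of the JSON an agent might serve at /.well-known/agent-card.json. The field names approximate the shape of an A2A Agent Card but should be treated as an assumption, not a verbatim schema; the agent, URL, and skills are invented.

```python
import json

# Illustrative Agent Card: JSON metadata an A2A agent publishes so other
# agents can discover it. Field names approximate the spec's shape and are
# an assumption here, not a verbatim schema. Everything else is invented.
agent_card = {
    "name": "flight-booking-agent",
    "description": "Searches and books commercial flights.",
    "url": "https://agents.example.com/flights",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "search", "description": "Find flights matching constraints"},
        {"id": "book", "description": "Reserve a selected itinerary"},
    ],
}

print(json.dumps(agent_card, indent=2))
```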
Identity and trust layers remain fragmented. Mastercard's Verifiable Intent framework -the most production-ready approach -has processed real agent transactions in Australia and New Zealand. Decentralized Identifiers (DIDs) for agents exist as academic proposals (arXiv:2511.02841) but not as deployed standards. MCP uses OAuth 2.1. A2A has its own auth scheme. Nothing bridges them.
This is all competent engineering. The specs are clean. The governance is serious. The adoption numbers are real.
And almost none of it matters yet.
2 | The Competence Problem
Ask yourself: when was the last time you trusted an AI agent to complete a multi-step task without supervision?
Not generate text. Not answer a question. Actually do something -book a flight, file a document, negotiate a price, execute a transaction -end to end, autonomously, with real consequences if it gets it wrong.
If you have, you are in a vanishingly small minority. And you probably checked its work afterward.
This is the competence problem, and it is the actual bottleneck in the agentic future. Not communication between agents. Not protocol standardization. Not identity verification. The bottleneck is that individual agents are not reliable enough to be trusted with consequential actions.
The failure modes are well-documented:
Hallucinated actions. An agent told to book a hotel might confirm a reservation that does not exist, generate a fake confirmation number, and present it with full confidence. The model does not know the difference between completing a task and narrating the completion of a task.
Brittle tool use. Connect an agent to real APIs and watch what happens. A field that expects a date gets a timestamp. A required parameter gets skipped. An API returns an unexpected error format and the agent improvises a response instead of handling the failure. MCP makes it easier to connect to tools -it does not make the agent better at using them.
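The date-versus-timestamp failure above is easy to make concrete. A sketch, with invented names: a tool expects an ISO date string, a model passes a Unix timestamp, and strict validation at the tool boundary turns a silent mis-booking into an error the agent (or a human) can actually handle.

```python
from datetime import datetime

# Hypothetical tool boundary: the schema expects YYYY-MM-DD. Validating
# strictly here converts a model's bad input into a recoverable error
# instead of a confidently wrong booking. All names are illustrative.
def book_hotel(check_in: str) -> str:
    try:
        datetime.strptime(check_in, "%Y-%m-%d")
    except (ValueError, TypeError):
        raise ValueError(f"check_in must be YYYY-MM-DD, got {check_in!r}")
    return f"booking requested for {check_in}"

print(book_hotel("2026-03-14"))  # valid input passes through

try:
    book_hotel("1773266400")     # a Unix timestamp slipping past a loose schema
except ValueError as e:
    print("rejected:", e)
```

MCP standardizes how the tool is reached; nothing in the protocol forces either side to validate like this.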
Context collapse. Give an agent a 15-step task and observe which steps get lost. Models have finite context windows and finite attention. By step 8, the constraints from step 2 have faded. The agent completes the task -just not the task you asked for.
Compounding errors. In a single-agent system, one mistake is recoverable. In a multi-agent system, errors compound. Agent A misinterprets your intent. Agent B acts on that misinterpretation. Agent C commits real resources based on B's flawed output. By the time a human notices, three agents have confidently executed the wrong plan. The coordination infrastructure made the failure worse, not better.
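The compounding arithmetic is worth stating plainly. If each step succeeds independently with probability p, a k-step workflow succeeds end to end with probability p^k. The numbers below are illustrative, not measured benchmarks:

```python
# If each step succeeds independently with probability p, a k-step
# workflow succeeds end to end with probability p**k. Illustrative
# numbers only, not benchmarks of any real agent.
for p in (0.99, 0.95, 0.80):
    for k in (3, 10, 15):
        print(f"p={p:.2f}, steps={k}: end-to-end success = {p**k:.2f}")
```

At 80% per-step reliability, a 15-step workflow completes correctly about 4% of the time. Chaining agents multiplies the exponent.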
This is not a "current limitations" caveat to be hand-waved away. It is the central fact of AI agents in 2026.
3 | The Protocol Inversion
The industry has the dependency graph backwards.
The implicit theory is: build the coordination layer, and agent competence will follow. If agents can discover each other (Agent Cards), communicate through standard protocols (A2A), access tools cleanly (MCP), and verify identity (DIDs) -then the multi-agent internet emerges naturally.
This is an inversion. The actual dependency runs the other way:
Agent competence → useful tool use → useful agent coordination → useful identity/trust
You do not need A2A if a single agent cannot reliably complete a single task. You do not need Agent Cards if there are no agents worth discovering. You do not need Mastercard's Verifiable Intent if no one is willing to let an agent spend money unsupervised.
Every layer of the "agentic stack" presupposes that the layer below it works. None of them do. Not reliably. Not for consequential actions. Not without a human checking the output.
This is why the adoption numbers are misleading. MCP's 97 million SDK downloads and 10,000+ servers are real -but they mostly serve developer tools, where the human is still in the loop. Cursor uses MCP to connect your AI coding assistant to your codebase. Claude Code uses MCP to call local tools. These are legitimate, useful applications. They are also a human supervising an agent using tools, not an autonomous agent acting on your behalf.
The gap between "AI assistant uses MCP to help a developer" and "autonomous agent uses MCP + A2A to negotiate a contract with another agent" is not an incremental step. It is a category change. And no protocol bridges it.
4 | Why Smart People Build the Wrong Thing
If the competence problem is so obvious, why is the industry pouring resources into coordination protocols?
Protocols are legible. You can write a spec, publish it, form a foundation, count members, and announce adoption numbers. Competence is messy -it requires fundamental advances in reasoning, planning, and error recovery that nobody knows how to reliably achieve.
Protocols create moats. If your protocol becomes the standard, you control the rails. Google wants A2A to be the default because Google Cloud becomes the natural home for agent-to-agent orchestration. Anthropic wants MCP to win because every MCP server is a reason to use Claude. The Linux Foundation wants governance because governance is their business model. The incentives to build coordination infrastructure exist independent of whether anyone needs it yet.
Protocols look like progress. A protocol spec shipped is a milestone. A foundation launched is an announcement. 97 organizations joining is a press release. Meanwhile, making an agent 10% more reliable at a single task is invisible work that does not generate headlines.
History rhymes. In the late 1990s, the industry built elaborate SOAP/XML web service standards (WSDL, UDDI, WS-*) for service-to-service communication. A universal service registry where programs would discover and negotiate with each other -sound familiar? Most of it was abandoned in favor of simple REST APIs. The coordination layer was built before the things being coordinated were worth coordinating.
The same pattern plays out in every technology cycle. Infrastructure builders build what they know how to build. The hard problem -making the thing actually work -remains unsolved, dressed up in a new abstraction layer.

5 | What Would Actually Matter
If you wanted to make the agentic internet real, you would not start with protocols. You would start with these problems:
Reliable single-agent task completion. Can an agent book a real flight, with a real credit card, handling edge cases (sold out, wrong date format, payment failure), and succeed 99% of the time? Not 60%. Not 80%. 99%. Because at 80% reliability, no one will let an agent spend money. This is the hard problem, and it has nothing to do with MCP or A2A.
Verifiable execution. When an agent says "I booked your flight," can you verify that it actually happened without checking yourself? Not a confirmation message the agent generated -a cryptographic proof from the airline's system that a real reservation exists. This matters more than agent-to-agent trust. It is agent-to-human trust.
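What such a proof could look like, in miniature: the airline signs the reservation record, and the user verifies the signature instead of trusting the agent's narration. This is a toy sketch under loose assumptions -HMAC with a shared secret stands in for a real public-key signature scheme, and every name and value is invented.

```python
import hashlib
import hmac
import json

# Toy "verifiable execution": the airline signs the reservation record;
# the user checks the signature rather than the agent's claim. HMAC with
# a shared secret is a stand-in for a real public-key signature scheme.
SECRET = b"airline-demo-key"  # in reality: the airline's private signing key

def airline_confirm(reservation: dict) -> tuple[str, str]:
    payload = json.dumps(reservation, sort_keys=True)
    tag = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload, tag

def user_verify(payload: str, tag: str) -> bool:
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = airline_confirm({"pnr": "ABC123", "flight": "LH123"})
print(user_verify(payload, tag))                        # True: record is real
print(user_verify(payload.replace("ABC", "XYZ"), tag))  # False: fabricated
```

The point of the sketch is where verification happens: at the human's edge, against the provider's system, with the agent unable to forge the proof.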
Graceful failure and recovery. When an agent gets stuck -and it will -what happens? Does it hallucinate a completion? Fail silently? Escalate to a human with enough context to take over? The failure mode matters more than the success mode, because failure is the common case for complex tasks. No current protocol addresses this.
Scope-limited autonomy. The vision of fully autonomous agents is a distraction. What people actually need is agents that can do one thing really well within a defined scope -and know when they have left that scope. A booking agent that is excellent at flights and knows to stop when asked about insurance. The industry is building general coordination infrastructure for agents that would be more useful as narrow specialists.
None of these problems are sexy. None of them produce foundation announcements or protocol specs. All of them are prerequisites for the agentic internet to matter.
6 | The Real Timeline
Here is what I think actually happens:
2026-2027: MCP continues growing as a developer tool integration standard. Useful, legitimate, not revolutionary. A2A remains mostly a spec with pilot programs. The "agentic web" stays a conference talking point.
2028-2029: Agent competence improves enough that narrow, supervised agents handle specific tasks reliably -customer support, scheduling, data entry. They use MCP to connect to tools. They do not use A2A because there is no reason for them to talk to each other. The human is still the coordinator.
2030+: Maybe. If individual agents become reliable enough that trusting them with real autonomy is rational, then coordination protocols start to matter. A2A or its successor becomes relevant. Agent identity becomes necessary. But this depends entirely on solving the competence problem -which is an AI research problem, not an infrastructure problem.
The protocols being built today might end up being useful in five years. Or they might be the SOAP/XML of the 2020s -elaborate infrastructure for a future that arrived differently than expected.
Either way, the bottleneck is not the plumbing. It is what flows through it.
The Uncomfortable Truth
The agentic internet is being built by infrastructure companies because infrastructure is what they know how to build. Anthropic builds protocols. Google builds protocols. The Linux Foundation governs protocols. Mastercard builds payment rails.
None of them are solving the hard problem: making an AI agent reliable enough that you would trust it with your credit card and your calendar and your reputation, unsupervised.
Until that problem is solved, MCP is a nice developer tool. A2A is a well-written spec. Agent Cards are a clever idea. And the "agentic internet" is a story we tell at conferences while the agents themselves cannot reliably complete a three-step task.
The postal system is impressive. But first, someone has to learn to write.
For more on how institutional readiness shapes technological transitions, see Governing the Transition. For the broader infrastructure stack these protocols claim to serve, see The Transition Stack.