MCP: Giving Agents Access to Everything They Need
MCP: Giving Agents Access to Everything They Need
Launched by Anthropic in November 2024, the Model Context Protocol solves a fundamental problem: how do you give an AI agent secure, standardized access to tools, data, and workflows without building custom integrations for every single connection? [1]
MCP operates on three core primitives that mirror how human engineers work. Tools are functions the agent can call—like running code, sending emails, or querying APIs. Resources are data sources the agent can read—databases, files, documentation, or real-time feeds. Prompts are reusable templates that standardize how agents approach common tasks [3].
The protocol runs on JSON-RPC 2.0, which means it's lightweight, transport-agnostic, and familiar to any developer who's built web services. What makes it powerful is the two-way connection model: agents can both request access to resources and receive push notifications when data changes. Your deployment agent doesn't just check server status—it gets alerted the moment something breaks [1].
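To make the wire format concrete, here is a hedged sketch of two JSON-RPC 2.0 messages in the shape MCP uses. The method names (`tools/call`, `resources/read`) are from the MCP specification; the tool name, arguments, and resource URI are hypothetical stand-ins.

```python
import json

def make_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# An agent invoking a tool (a function it can call)...
call_tool = make_request(1, "tools/call", {
    "name": "run_tests",                      # hypothetical tool name
    "arguments": {"suite": "integration"},    # hypothetical arguments
})

# ...and reading a resource (a data source) by URI.
read_resource = make_request(2, "resources/read", {
    "uri": "file:///repo/README.md",          # hypothetical resource URI
})

wire = json.dumps(call_tool)
print(wire)
```

Because every MCP server and client speaks this same envelope, swapping one tool provider for another changes the `params`, not the plumbing.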
The adoption numbers tell the story. Enterprises are configuring MCP connections in 15-30 minutes and launching 50+ tool integrations within 90 days. Major platforms like VS Code, AWS, Sourcegraph, and Replit have already integrated MCP support. In December 2025, Anthropic donated the protocol to the Linux Foundation's Agentic AI Foundation, signaling this isn't just another vendor play—it's infrastructure [4].
Here's what this looks like in practice: A coding assistant agent uses MCP to access your Git repository (resource), call your testing framework (tool), and apply your code review template (prompt). Instead of three separate integrations, you get one standardized connection that works across any MCP-compatible system.
A2A: Agents That Actually Collaborate
While MCP handles the agent-to-tool relationship, Google's Agent2Agent (A2A) protocol tackles the harder problem: how do autonomous agents discover, communicate, and coordinate with each other across organizational boundaries? [2]
A2A, announced in April 2025, introduces Agent Cards—standardized profiles that live at .well-known/agent.json endpoints, similar to how websites publish robots.txt files. These cards advertise what an agent can do, what tasks it accepts, and how to authenticate with it. It's service discovery for the AI era [2].
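A minimal Agent Card might look like the sketch below. The field names follow the A2A Agent Card schema, but the agent itself (a security scanner), its URL, and its skill are hypothetical.

```python
import json

# Hedged sketch: a minimal Agent Card of the kind A2A publishes at
# /.well-known/agent.json. The endpoint and skill are made up for illustration.
agent_card = {
    "name": "SecurityScanAgent",
    "description": "Static and dependency scanning for container images",
    "url": "https://scanner.example.com/a2a",   # hypothetical A2A endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": True},
    "skills": [
        {
            "id": "scan-image",
            "name": "Scan container image",
            "description": "Run CVE and secret scans against an OCI image",
        }
    ],
}

# A client fetches this with a plain HTTP GET, then decides whether the
# advertised skills match the task it wants to delegate.
print(json.dumps(agent_card, indent=2))
```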
The protocol's task lifecycle model mirrors how engineering teams actually work. Tasks move through a defined set of states—Submitted, Working, Input-Required, Completed, Failed, Canceled—so both sides always agree on where a piece of work stands. Agents can exchange messages, share artifacts (files, JSON data, or rich text), and handle authentication through OAuth 2.0 with PKCE. Crucially, A2A supports asynchronous operations with push notifications—agents don't have to constantly poll each other for updates [5].
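The lifecycle can be sketched as a small state machine. The state names follow the A2A spec; the allowed-transition table below is our own simplification, not part of the protocol.

```python
from enum import Enum

class TaskState(str, Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

# Simplified transition table (an assumption for illustration).
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
                        TaskState.FAILED, TaskState.CANCELED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELED},
}

def advance(current, nxt):
    """Move a task to the next state, rejecting illegal jumps."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current.value} -> {nxt.value}")
    return nxt

state = TaskState.SUBMITTED
state = advance(state, TaskState.WORKING)
state = advance(state, TaskState.COMPLETED)
print(state.value)
```

Terminal states (Completed, Failed, Canceled) have no outgoing transitions, which is why `advance` rejects any move out of them.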
The backing is impressive: 50+ partners including Salesforce, Accenture, MongoDB, LangChain, SAP, Atlassian, McKinsey, and Deloitte. Google moved the protocol to the Linux Foundation in June 2025, making it vendor-neutral infrastructure [2].
Consider a software deployment scenario: Your CI/CD agent (Agent A) needs to coordinate with a security scanning agent (Agent B) and a notification agent (Agent C). With A2A, Agent A discovers the other agents via their published cards, submits tasks with specific requirements, and receives status updates as work progresses. No custom APIs, no vendor lock-in, no integration hell.
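Discovery in that scenario can be sketched as a capability match over published cards. The registry, agent names, and skill ids below are all hypothetical; a real client would fetch each card from the agent's `/.well-known/agent.json` endpoint instead of a local list.

```python
# Hedged sketch: capability-based discovery over Agent Cards.
REGISTRY = [
    {"name": "SecurityScanAgent", "skills": [{"id": "scan-image"}]},
    {"name": "NotifyAgent", "skills": [{"id": "send-notification"}]},
    {"name": "DocsAgent", "skills": [{"id": "render-docs"}]},
]

def find_agent(required_skill):
    """Return the first agent whose card advertises the required skill."""
    for card in REGISTRY:
        if any(s["id"] == required_skill for s in card["skills"]):
            return card["name"]
    return None

# The CI/CD agent resolves its collaborators before submitting tasks.
print(find_agent("scan-image"))          # SecurityScanAgent
print(find_agent("send-notification"))   # NotifyAgent
```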
How MCP and A2A Work Together: The Full Stack
The magic happens when you combine both protocols. MCP handles vertical integration (agent-to-tools), while A2A handles horizontal coordination (agent-to-agent). As one expert put it: "MCP is about grounding agents with tools and data. A2A is about letting agents work together across boundaries" [6].
Here's a real-world example that shows the power of this combination:
Scenario: Automated Customer Onboarding
- Orchestrator Agent receives a new customer signup
- Via MCP: Accesses customer database (resource) and validation tools (tools)
- Via A2A: Discovers and coordinates with specialized agents:
  - Identity Agent: Creates accounts in auth systems (uses MCP for LDAP/OAuth tools)
  - Provisioning Agent: Sets up infrastructure (uses MCP for cloud APIs)
  - Communication Agent: Sends welcome emails (uses MCP for email/SMS tools)
- Via A2A: All agents report progress back to orchestrator
- Via MCP: Orchestrator logs completion to audit system
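The flow above can be sketched end to end. Every function here is a hypothetical stand-in: `mcp_call` represents an MCP `tools/call` request and `a2a_delegate` an A2A task submission; no real SDK is used.

```python
def mcp_call(tool, **arguments):
    """Stand-in for a JSON-RPC tools/call to an MCP server."""
    return {"tool": tool, "arguments": arguments, "status": "ok"}

def a2a_delegate(agent, task):
    """Stand-in for submitting an A2A task and awaiting completion."""
    return {"agent": agent, "task": task, "state": "completed"}

def onboard(customer_id):
    # Vertical integration via MCP: ground the orchestrator in data and tools.
    record = mcp_call("customer_db.lookup", customer_id=customer_id)
    mcp_call("validate_signup", record=record)

    # Horizontal coordination via A2A: fan out to the specialist agents.
    results = [
        a2a_delegate("IdentityAgent", {"action": "create-account", "customer": customer_id}),
        a2a_delegate("ProvisioningAgent", {"action": "provision-infra", "customer": customer_id}),
        a2a_delegate("CommunicationAgent", {"action": "send-welcome", "customer": customer_id}),
    ]

    # Back to MCP for the audit trail once every task reports completed.
    if all(r["state"] == "completed" for r in results):
        mcp_call("audit_log.write", event="onboarding-complete", customer=customer_id)
    return results

results = onboard("cust-42")
print(len(results))  # 3
```

The shape is the point: MCP calls stay inside one agent's boundary, while A2A delegations cross it.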
Each agent is a specialist with deep tool access via MCP, but they coordinate as a team via A2A. It's like having a senior engineer who can use any tool (MCP) manage a distributed team across departments (A2A).
Real-World Implementation: What Actually Works
After building with both protocols, here's what we've learned works—and what doesn't.
Start with MCP for single-agent workflows. The Python SDKs are mature, and you can get a basic agent connected to your tools in under an hour. We've seen Nordic companies use this pattern for voice AI systems that need to access telephony APIs, customer databases, and notification services [1].
Add A2A for multi-agent coordination. The GitHub samples are solid, but expect to spend time on the orchestration layer. You'll need something that can translate between MCP-style tool calls and A2A-style task delegation. Think of it as building the "middle management" layer for your agent team [2].
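One way to sketch that middle layer: wrap a remote A2A agent so it looks like a local, MCP-style tool to the orchestrator. Everything here is hypothetical (the skill id, the delegate function, the artifact shape); it shows the adapter pattern, not a real SDK.

```python
def make_tool_from_agent(agent_name, skill_id, delegate_fn):
    """Expose an A2A skill as a plain callable with a tool-like signature."""
    def tool(**arguments):
        task = {"skill": skill_id, "input": arguments}
        result = delegate_fn(agent_name, task)     # A2A task submission
        return result["artifacts"]                 # surface artifacts as tool output
    tool.__name__ = f"{agent_name}.{skill_id}"
    return tool

# Fake delegate for illustration; a real one would POST the task to the
# agent's URL and poll or await a push notification for completion.
def fake_delegate(agent, task):
    return {"state": "completed", "artifacts": [{"echo": task["input"]}]}

scan = make_tool_from_agent("SecurityScanAgent", "scan-image", fake_delegate)
print(scan(image="registry/app:1.2"))
```

The orchestrator then routes everything through one tool-call interface and never needs to know which calls are local MCP tools and which are remote agents.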
The challenges are real. Latency compounds in agent chains—a five-agent workflow can easily hit 10+ seconds end-to-end. Debugging distributed agent systems is harder than debugging distributed microservices because agents make autonomous decisions you can't predict. State management across async agent conversations gets complex fast [7].
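The latency arithmetic is worth making explicit: sequential hops add, while independent subtasks fanned out in parallel cost only the slowest hop. The per-agent latencies below are made up but in a realistic range.

```python
# Hypothetical per-agent latencies for a five-agent workflow, in seconds.
hop_latencies_s = [1.8, 2.4, 1.5, 2.9, 1.6]

sequential = sum(hop_latencies_s)        # each agent waits on the previous one
parallel_fanout = max(hop_latencies_s)   # if the five calls are independent

print(f"sequential chain: {sequential:.1f}s")
print(f"parallel fan-out: {parallel_fanout:.1f}s")
```

This is why orchestrators that identify independent subtasks and fan them out concurrently matter so much in agent chains.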
But the value is immediate. One Nordic telecom we work with reduced their customer service integration complexity by 70% using MCP connections. Instead of maintaining 15+ custom API integrations, they have agents that can access any customer data or trigger any workflow through standardized MCP resources and tools.
The Nordic Opportunity: AI-Native Infrastructure
Nordic countries have always punched above their weight in infrastructure plays—from Ericsson's telecom dominance to Spotify's streaming architecture. MCP and A2A represent the same kind of foundational opportunity in the AI era.

The timing is perfect. The AI agents market is projected to grow from $5.9B in 2024 to $105.6B by 2034—a 38.5% CAGR [4]. But more importantly, 88% of executives are already piloting or scaling autonomous agents, and 85% are integrating agents into core workflows [4]. This isn't future tech—it's happening now.
Nordic companies building AI products have a unique advantage: deep expertise in protocol design and distributed systems, combined with pragmatic approaches to standardization. Companies like Telenor could build agent orchestration platforms that coordinate voice AI, network management, and customer service agents across their entire infrastructure.
The post-code era doesn't mean no more programming—it means programming at the agent orchestration level. Instead of writing functions, you're designing agent teams. Instead of debugging code, you're optimizing agent collaboration patterns. The judgment required is higher-level but more impactful.
What Changes When Agents Build the Software
We're witnessing the emergence of what we call "agent-native architecture"—systems designed from the ground up for AI agents to discover, connect, and collaborate. This isn't just automation of existing workflows; it's a fundamental shift in how software systems are composed and operated.
Traditional software development follows a pattern: requirements → design → code → test → deploy. Agent-native development follows a different pattern: capabilities → discovery → orchestration → adaptation. You don't build features—you compose agent teams with complementary capabilities that can adapt to changing requirements in real-time.
The protocols make this possible at scale. MCP ensures agents have reliable access to the tools and data they need. A2A ensures they can find and coordinate with other agents across organizational boundaries. Together, they create the "Internet of Agents"—a distributed system where AI agents can collaborate as easily as humans use web browsers.
The implications are profound. Software teams will shift from writing code to designing agent orchestrations. Product development will accelerate because agents can prototype, test, and iterate faster than human developers. But the judgment required—knowing which agents to deploy, how to structure their collaboration, and when to intervene—becomes more critical than ever.
As one expert noted: "We're at the beginning of what feels like the HTTP moment for AI agents" [6]. Just as HTTP enabled the web by standardizing how computers share information, MCP and A2A are standardizing how AI agents share capabilities. The companies that master these protocols first will build the platforms that define the next decade of software.
Code is becoming free. The judgment to orchestrate AI agents into effective teams—that's where the value lies.
Sources
[1] https://www.anthropic.com/news/model-context-protocol
[2] https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability
[3] https://modelcontextprotocol.io/docs/getting-started/intro
[4] https://natoma.ai/blog/the-emergence-of-ai-agent-protocols-comparing-anthropic-s-mcp-ibm-s-acp-and-google-s-a2a
[5] https://a2a-protocol.org/latest
[6] https://dr-arsanjani.medium.com/complementary-protocols-for-agentic-systems-understanding-googles-a2a-anthropic-s-mcp-47f5e66b6486
[7] https://www.gravitee.io/blog/googles-agent-to-agent-a2a-and-anthropics-model-context-protocol-mcp
[8] https://www.clarifai.com/blog/mcp-vs-a2a-clearly-explained
Want to go deeper?
We explore the frontier of AI-built software by actually building it. See what we're working on.