The MCP Foundation: Agents That Actually Use Tools
Model Context Protocol isn't just another API standard; it's the infrastructure that lets agents become productive team members. Launched by Anthropic in November 2024, MCP solves the fundamental problem of agent-tool integration through a clean JSON-RPC architecture [2].
The protocol defines three core components: Hosts (LLM applications like Claude), Clients (the connectors hosts embed), and Servers (data sources and tools). What makes this powerful is the bidirectional flow: agents can access resources like Google Drive files or GitHub repos, while clients can expose capabilities such as sampling and filesystem roots back to servers [4].
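The host-client-server split rides on ordinary JSON-RPC 2.0 messages. A minimal sketch of the framing follows, using `tools/list`, a method name from the public spec; the example tool and its schema are illustrative, not from any real server:

```typescript
// Sketch of MCP's JSON-RPC 2.0 framing: a host-side client asking a
// server which tools it offers, and the server's reply. The message
// envelope follows JSON-RPC 2.0; the tool contents are illustrative.

interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number;
  result?: Record<string, unknown>;
  error?: { code: number; message: string };
}

// Client -> server: discover available tools.
const listToolsRequest: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

// Server -> client: one tool, described by a JSON Schema the model can read.
const listToolsResponse: JsonRpcResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "getDocument",
        description: "Fetch a document from Google Drive by ID",
        inputSchema: {
          type: "object",
          properties: { documentId: { type: "string" } },
          required: ["documentId"],
        },
      },
    ],
  },
};
```

The JSON Schema in `inputSchema` is what lets the model decide, on its own, whether a tool fits the task at hand.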
But here's where it gets interesting for production deployments: MCP enables progressive disclosure. Instead of dumping entire databases into context windows, agents can query specific data points as needed. A Nordic healthcare system, for instance, could let epidemiology agents access patient databases through MCP servers while maintaining strict privacy controls—each query logged and auditable [3].
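Progressive disclosure can be sketched as a thin wrapper that records every resource request before executing it. The `auditedRead` helper, the log shape, and the URI below are hypothetical, standing in for an MCP `resources/read` round trip:

```typescript
// Sketch: query one data point at a time, logging each access first so
// even failed reads leave an audit trace. All names are illustrative.

type AuditEntry = { ts: string; agent: string; uri: string };

const auditLog: AuditEntry[] = [];

function auditedRead(
  agent: string,
  uri: string,
  fetchResource: (uri: string) => unknown,
): unknown {
  // Record the access before performing it.
  auditLog.push({ ts: new Date().toISOString(), agent, uri });
  return fetchResource(uri);
}

// Usage with a stub standing in for an MCP resources/read round trip.
const summary = auditedRead(
  "epidemiology-agent",
  "db://patients/cohort-2024/summary",
  (uri) => ({ uri, rows: 42 }),
);
```

The point is scope: the agent sees one aggregate at a time, never the whole database, and the log shows exactly which slices were touched.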
The code execution feature is particularly compelling. Agents write TypeScript files like ./servers/google-drive/getDocument.ts to interact with MCP tools, creating persistent, reviewable workflows rather than ephemeral API calls [3]. This isn't just more efficient; it's more trustworthy, because you can audit exactly what your agents are doing and why.
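A generated wrapper in that style might look like the following; `callMcpTool` and the `ToolCall` shape are hypothetical stand-ins for a real MCP client's `tools/call` round trip:

```typescript
// Sketch in the spirit of a generated ./servers/google-drive/getDocument.ts
// wrapper. `callMcpTool` is a hypothetical placeholder; a real file would
// go through an MCP client SDK.

interface ToolCall {
  server: string;
  tool: string;
  args: Record<string, unknown>;
}

async function callMcpTool(
  server: string,
  tool: string,
  args: Record<string, unknown>,
): Promise<ToolCall> {
  // Stub: a real client would send a JSON-RPC "tools/call" request here.
  return { server, tool, args };
}

export async function getDocument(documentId: string): Promise<ToolCall> {
  return callMcpTool("google-drive", "getDocument", { documentId });
}
```

Because the wrapper is a plain source file, it can be diffed, code-reviewed, and re-run, which is what makes the workflow persistent rather than ephemeral.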
Early adoption signals are strong. Block, Apollo, Zed, Replit, Codeium, and Sourcegraph have all integrated MCP support, with Claude Desktop providing native tooling [2]. The MCP-Bench project from Accenture offers standardized benchmarks for tool-using agents, giving teams concrete metrics for evaluation [7].
A2A: The Delegation Layer for Agent Teams
While MCP handles the vertical relationship between agents and tools, the Agent2Agent (A2A) protocol manages horizontal coordination between peer agents. Google's A2A, launched in April 2025, turns isolated AI workers into collaborative teams [1].
The architecture is elegantly simple: agents discover each other through JSON "Agent Cards" that describe capabilities and interfaces. Task lifecycles are managed through structured messaging over HTTP, Server-Sent Events, or JSON-RPC. The protocol is modality-agnostic, meaning text agents can coordinate with voice agents or vision systems seamlessly [8].
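Discovery can be sketched as a published Agent Card. The field names below follow A2A's public examples, but treat the exact schema, and the `calendar-agent` card itself, as illustrative:

```typescript
// Sketch of an A2A "Agent Card": the JSON document an agent publishes so
// peers can discover its capabilities and skills. Schema is illustrative.

interface AgentCard {
  name: string;
  description: string;
  url: string;
  version: string;
  capabilities: { streaming: boolean };
  skills: { id: string; name: string; description: string }[];
}

const schedulerCard: AgentCard = {
  name: "calendar-agent",
  description: "Schedules interviews and meetings",
  url: "https://agents.example.com/calendar",
  version: "1.0.0",
  capabilities: { streaming: true },
  skills: [
    {
      id: "schedule-interview",
      name: "Schedule interview",
      description: "Finds a free slot and books it for all participants",
    },
  ],
};
```

A delegating agent reads cards like this to decide which peer can take a task, the same way a service registry works in microservice architectures.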
Consider a hiring workflow: a primary agent receives a job requisition, then delegates interview scheduling to a calendar agent, background checks to a verification agent, and candidate assessment to a specialized evaluation agent. Each handoff is logged, creating an immutable audit trail of decisions and actions [1].
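The fan-out above can be sketched as a delegation log. The agent names come from the example; the `delegate` helper and log shape are hypothetical, since a real system would send A2A task messages over HTTP:

```typescript
// Sketch of the hiring workflow's delegation trail: the primary agent hands
// sub-tasks to specialists and every handoff is recorded. Illustrative only.

type Handoff = { from: string; to: string; task: string; at: string };

const handoffLog: Handoff[] = [];

function delegate(from: string, to: string, task: string): void {
  handoffLog.push({ from, to, task, at: new Date().toISOString() });
}

// Primary agent fans out the requisition.
delegate("primary-agent", "calendar-agent", "schedule interviews");
delegate("primary-agent", "verification-agent", "run background check");
delegate("primary-agent", "evaluation-agent", "assess candidate");
```

Append-only records like these are what turn a sequence of handoffs into an audit trail rather than a black box.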
Harrison Chase from LangChain captured the significance: "This is a shared protocol which meets the needs of agent builders" [5]. The ecosystem response has been swift—Atlassian's Rovo, Salesforce's Agentforce, and dozens of other platforms are building A2A support.
For Nordic organizations prioritizing data sovereignty, A2A's peer-to-peer architecture is crucial. Unlike centralized orchestration systems that require cloud coordination, A2A agents can operate entirely within local infrastructure while maintaining full interoperability.
The Hybrid Architecture: MCP + A2A in Production
The real power emerges when you combine both protocols. A2A handles delegation and coordination, while MCP manages tool access and data integration. This hybrid approach mirrors how human engineering teams actually work—managers delegate tasks, individual contributors use specialized tools.
A concrete example from Nordic enterprise deployments: An operations agent receives an infrastructure alert via A2A, delegates investigation to a monitoring agent, which uses MCP to query Prometheus servers and log systems. The monitoring agent identifies the root cause, delegates remediation to a deployment agent, which uses MCP to access Kubernetes APIs and execute fixes. Every step is logged, auditable, and reversible [5].
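The chain above can be sketched as a typed sequence of steps, each tagged with the protocol that carries it. All agent and action names here are illustrative:

```typescript
// Sketch of the alert-to-remediation chain: A2A carries delegation between
// agents, MCP carries each agent's tool access. Names are illustrative.

type Step = { agent: string; action: string; via: "a2a" | "mcp" };

function runIncidentWorkflow(alert: string): Step[] {
  return [
    { agent: "ops-agent", action: `delegate investigation of "${alert}"`, via: "a2a" },
    { agent: "monitoring-agent", action: "query Prometheus for error rates", via: "mcp" },
    { agent: "monitoring-agent", action: "delegate remediation", via: "a2a" },
    { agent: "deployment-agent", action: "roll back Kubernetes deployment", via: "mcp" },
  ];
}
```

Note the alternation: A2A whenever responsibility moves between agents, MCP whenever an agent touches an external system. That division is the whole hybrid pattern.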
The token economics are compelling. Traditional approaches would load entire system states into LLM context windows—expensive and slow. The MCP+A2A pattern keeps context minimal while enabling complex multi-step workflows. Teams report infrastructure costs dropping 60-80% compared to monolithic agent approaches.
Framework integration is accelerating. LangGraph provides graph-based orchestration, AutoGen enables conversational workflows, and CrewAI offers role-based team structures—all now supporting MCP and A2A protocols [6]. This means you can choose orchestration patterns that match your organizational structure rather than being locked into vendor-specific approaches.
Framework Wars: AutoGen, LangGraph, and CrewAI
The protocol standardization is reshaping the agent framework landscape. Each major framework is adapting MCP and A2A support while emphasizing different orchestration philosophies [6].
LangGraph excels at complex, branching workflows where agents need to backtrack and retry operations. Think compliance processes or scientific research where multiple hypothesis paths need exploration. The graph structure makes dependencies explicit and enables sophisticated error handling.
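The backtrack-and-retry pattern can be shown in a generic sketch. This is the shape of graph orchestration, not LangGraph's actual API: each node gets a bounded number of retries before the workflow fails:

```typescript
// Generic sketch of graph-style orchestration with retries. Each node may
// fail and be retried before the workflow advances. Not a real framework API.

type GraphNode = { name: string; run: () => boolean };

function runGraph(nodes: GraphNode[], maxRetries = 2): string[] {
  const trace: string[] = [];
  for (const node of nodes) {
    let ok = false;
    for (let attempt = 0; attempt <= maxRetries && !ok; attempt++) {
      trace.push(`${node.name}#${attempt}`);
      ok = node.run();
    }
    if (!ok) throw new Error(`node ${node.name} exhausted retries`);
  }
  return trace;
}

// A node that fails once, then succeeds on retry.
let calls = 0;
const trace = runGraph([
  { name: "gather-evidence", run: () => true },
  { name: "verify-hypothesis", run: () => ++calls > 1 },
]);
```

The trace makes every attempted path explicit, which is exactly the property compliance and research workflows need.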
AutoGen focuses on conversational coordination—agents that negotiate, debate, and reach consensus. This works well for creative tasks or strategic planning where multiple perspectives improve outcomes. The chat-based interface makes it accessible to non-technical stakeholders.
CrewAI emphasizes role-based teams that mirror human organizational structures. Each agent has defined responsibilities, reporting relationships, and performance metrics. This approach resonates with enterprises migrating existing processes to agent-based execution.
The choice matters less than the underlying protocols. MCP and A2A provide portability between frameworks, reducing vendor lock-in and enabling gradual migration strategies. Nordic teams are particularly focused on this flexibility given sovereignty requirements and smaller vendor ecosystems.
Production lessons are emerging from early adopters. One Nordic fintech reports spending $47,000 learning A2A/MCP infrastructure patterns—expensive education that highlights the importance of starting with clear use cases and building incrementally rather than attempting full-scale transformations [5].
Nordic Edge: Sovereignty, Auditability, and Local Deployment
Nordic organizations have unique advantages in the agent coordination era. Strong data protection frameworks, advanced local infrastructure, and cultural emphasis on transparency align perfectly with MCP/A2A architectures.

The sovereignty angle is particularly compelling. Both protocols support fully local deployment—no cloud dependencies, no data exfiltration, complete control over agent behavior. Nordic CTOs are leveraging this for sensitive applications like healthcare analytics, financial modeling, and government services.
Consider epidemiological modeling during health crises. Traditional approaches require either manual coordination between specialists or centralized systems that create privacy risks. MCP/A2A enables distributed agent teams where epidemiologists, data scientists, and policy experts each have specialized agents that coordinate seamlessly while keeping sensitive data within institutional boundaries.
The audit trail capabilities address regulatory requirements that are increasingly important across Nordic markets. Every agent interaction, tool usage, and delegation decision is logged with cryptographic integrity. This isn't just compliance theater—it enables continuous improvement of agent performance and identification of bias or errors in automated decision-making.
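One common way to get tamper-evident logs is a hash chain, where each record's hash covers its predecessor's. A minimal sketch using Node's standard crypto module, with an illustrative record shape:

```typescript
import { createHash } from "node:crypto";

// Sketch of a hash-chained audit log: each entry's hash covers the previous
// entry's hash, so altering any record breaks verification of the chain.

type AuditRecord = { agent: string; action: string; prevHash: string; hash: string };

const chain: AuditRecord[] = [];

function appendRecord(agent: string, action: string): AuditRecord {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(`${prevHash}|${agent}|${action}`)
    .digest("hex");
  const record = { agent, action, prevHash, hash };
  chain.push(record);
  return record;
}

function verifyChain(records: AuditRecord[]): boolean {
  let prev = "genesis";
  for (const r of records) {
    const expected = createHash("sha256")
      .update(`${prev}|${r.agent}|${r.action}`)
      .digest("hex");
    if (r.prevHash !== prev || r.hash !== expected) return false;
    prev = r.hash;
  }
  return true;
}

appendRecord("ops-agent", "delegated investigation");
appendRecord("monitoring-agent", "queried prometheus");
```

Any retroactive edit changes a hash and invalidates every later record, which is what makes the trail useful for regulators and for post-hoc bias analysis alike.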
Local hardware deployment is becoming economically viable. Nordic data centers offer competitive pricing for GPU clusters, and the efficiency gains from MCP token reduction make local agent teams cost-competitive with cloud alternatives while providing complete control over data and processing.
The Post-Code Reality: When Judgment Becomes the Bottleneck
The convergence of MCP and A2A represents something larger than protocol standardization—it's the infrastructure for post-code software development. When agents can write, test, and deploy code autonomously while coordinating through standardized protocols, the bottleneck shifts from implementation to judgment.
This aligns perfectly with Up North AI's thesis: "Code is free. Judgment isn't." The protocols make technical execution increasingly commoditized while amplifying the value of strategic thinking, ethical reasoning, and domain expertise. Nordic organizations that invest in judgment—clear requirements, robust testing, ethical frameworks—will leverage agent teams most effectively.
The implications extend beyond software. Agent coordination patterns will reshape how we think about organizational design, process optimization, and human-AI collaboration. The teams building these systems today are defining the operating principles for the next decade of business automation.
For Nordic CTOs, the window for experimentation is open but narrowing. The protocols are stable, the frameworks are maturing, and early adopters are establishing competitive advantages. The question isn't whether to adopt agent coordination—it's how quickly you can build the judgment systems to guide them effectively.
The future belongs to organizations that can orchestrate both human and artificial intelligence toward shared objectives. MCP and A2A provide the technical foundation. Everything else is judgment.
Sources
[1] https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability
[2] https://www.anthropic.com/news/model-context-protocol
[3] https://www.anthropic.com/engineering/code-execution-with-mcp
[4] https://modelcontextprotocol.io/specification/2025-06-18
[5] https://workos.com/blog/mcp-vs-a2a
[6] https://arxiv.org/html/2508.10146v1
[7] https://github.com/Accenture/mcp-bench
[8] https://docs.cloud.google.com/agent-builder/agent-engine/develop/a2a
Want to go deeper?
We explore the frontier of AI-built software by actually building it. See what we're working on.