AI agent proliferation and interoperability challenges
AI agents are expanding fast across organizations. They’re automating decision-making, customer engagement, and IT processes that once relied heavily on manual workflows. But as more businesses deploy these systems, a critical problem is emerging: agents built in different environments still can’t work together seamlessly. Off-the-shelf tools and custom-built agents often rely on distinct architectures and communication rules. Without a way to synchronize them, productivity gains remain limited.
Arnal Dayaratna, Research VP of Software Development at IDC, explained it clearly: “No one really knows” the definitive solution yet. The industry is searching for a universal data layer, something that allows these agents to exchange information and cooperate effectively. This layer would help standardize how agents communicate, ensuring smooth interaction even across different system types.
For executive leaders, the message is direct. Build your automation strategy with interoperability in mind. Don’t commit fully to closed ecosystems that may isolate you later. Instead, prioritize frameworks that add flexibility and security. Maintain a structure that can scale with new developments in AI agent communication. Fragmented systems will slow innovation, but those that can exchange data efficiently will have an edge as the market matures.
Incompatibility among competing agent communication protocols
AI agents today rely on software rules known as “protocols” to communicate and share information. Several major players are shaping these standards, each with a different vision. Anthropic developed the Model Context Protocol (MCP), which governs how agents access data and tools inside trusted platforms like Claude Desktop. Google’s Agent2Agent (A2A) takes a broader approach, using structured “agent cards” to identify and interact with other agents. Meanwhile, the open-source Agent Network Protocol (ANP) focuses on discovering agents across the open web.
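To make the contrast concrete, Google’s A2A approach centers on an “agent card”: a small JSON document an agent publishes so others can discover and call it. The sketch below approximates that shape in Python; the field names and the `summarize_card` helper are illustrative, not the authoritative A2A schema.

```python
# Approximate shape of an A2A-style agent card. Field names here are
# illustrative; consult the A2A specification for the real schema.
agent_card = {
    "name": "invoice-review-agent",
    "description": "Reviews supplier invoices and flags anomalies",
    "url": "https://agents.example.com/invoice-review",
    "capabilities": {"streaming": False},
    "skills": [
        {"id": "flag-anomalies", "description": "Detect unusual line items"}
    ],
}

def summarize_card(card: dict) -> str:
    """Return a one-line, human-readable summary of an agent card."""
    skills = ", ".join(s["id"] for s in card.get("skills", []))
    return f"{card['name']} @ {card['url']} (skills: {skills})"

print(summarize_card(agent_card))
```

The point of the card is discovery: another agent can fetch this document, read the declared skills, and decide how to interact, without sharing the publisher’s internal architecture.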
Microsoft’s research highlights a key point: none of these systems were built to talk to each other. Each assumes different trust boundaries, creating friction when enterprises want to use agents from multiple vendors. The result is a patchwork of integrations, where organizations must manually build APIs and connectors to bridge these protocols. This slows adoption and increases maintenance complexity.
For executives, this fragmentation is both risk and opportunity. The risk lies in committing to a technology that could soon lose compatibility or support. The opportunity lies in shaping internal architecture around adaptability, designing systems that can switch protocols or integrate new ones without disruption. By preparing for protocol evolution now, companies can move faster when the market inevitably standardizes around a few dominant systems.
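One common way to keep protocol choice swappable is a thin adapter layer: business logic codes against one internal interface, and each protocol gets its own adapter behind it. The sketch below shows the pattern only; `MCPAdapter` and `A2AAdapter` are hypothetical stand-ins, not the vendors’ actual client APIs.

```python
from abc import ABC, abstractmethod

class AgentTransport(ABC):
    """Protocol-agnostic interface the rest of the stack depends on."""
    @abstractmethod
    def send(self, target: str, message: dict) -> dict: ...

class MCPAdapter(AgentTransport):
    # Hypothetical: would wrap an MCP-style call behind the common interface.
    def send(self, target: str, message: dict) -> dict:
        return {"protocol": "mcp", "target": target, "payload": message}

class A2AAdapter(AgentTransport):
    # Hypothetical: would wrap an A2A-style task request the same way.
    def send(self, target: str, message: dict) -> dict:
        return {"protocol": "a2a", "target": target, "payload": message}

def route(transport: AgentTransport, target: str, message: dict) -> dict:
    # Business logic depends only on AgentTransport, so switching protocols
    # means swapping one constructor, not rewriting every integration.
    return transport.send(target, message)

print(route(MCPAdapter(), "crm-agent", {"task": "sync-contacts"}))
```

If the market later consolidates around a different standard, only a new adapter needs to be written; everything calling `route` stays untouched.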
The takeaway is clear: don’t wait for a universal standard to arrive. Build your systems as if one never will. Flexibility is your best hedge in a fast-moving AI landscape defined by rapid innovation and shifting technical foundations.
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.
Rapid evolution of protocol standards poses future risks
The AI agent ecosystem is changing with remarkable speed. What works today might be outdated in a few months. New tools and protocols are emerging faster than most IT teams can adapt. That level of evolution brings both promise and uncertainty. Steve Wilson, Chief AI Officer at Exabeam, made this clear when he noted that “all of these will get overturned and subsumed and it’s going to happen super fast.” He pointed out that even Anthropic’s Model Context Protocol (MCP) is already falling out of favor because developers are moving toward simpler, lightweight formats that allow faster development and deployment.
For large organizations, this rapid turnover can be disruptive. Investing heavily in one protocol can create technical debt when newer, more efficient systems replace it. The real challenge isn’t just keeping pace; it’s making sure infrastructure remains agile through constant updates. CIOs and CTOs who emphasize modular design and open integration will be better positioned to absorb these changes.
Executives should treat protocol flexibility as a strategic priority. The goal is not to choose a “perfect” system, but to create one that can evolve without major reengineering. Staying adaptable is what will keep enterprise-grade AI deployments sustainable in the face of near-continuous transformation. Companies that can shift direction quickly will be able to capture the benefits of new technology instead of being slowed down by it.
Necessity for experimentation and modular IT infrastructure
The pace of change in AI is forcing CIOs to rethink how they plan, test, and scale technology. Experimentation is no longer optional; it’s a competitive requirement. Jim Swanson, CIO at Johnson & Johnson, emphasized that without active experimentation, organizations risk falling behind. His team at J&J is already applying modular architectures and redesigning business workflows that reach across multiple platforms and datasets. This approach gives them the ability to pivot as new technologies and protocols emerge.
Swanson also underlined the importance of data quality and system maturity, stating that true enterprise value appears only when “you marry it with the quality of the data and maturity of the full technology stack inclusive of the AI component.” In practice, that means combining flexibility with discipline. Experimentation should not come at the cost of governance or security; it must align with long-term business architecture plans.
For executives, the key insight is to maintain modular infrastructure that enables rapid change without destabilizing critical systems. Encourage your teams to test new AI protocols and tools, but insist on frameworks that make switching costs low. A modular setup ensures that new tools can be integrated or replaced at speed, allowing the organization to stay current without major disruptions. Companies that operate this way don’t just react to change, they set the pace.
Implementing orchestration and control mechanisms to govern AI behavior
As AI agents become more autonomous, governance needs to evolve just as quickly. Without control systems, multiple agents operating across business functions can make inconsistent decisions or take conflicting actions. Carter Busse, CIO at Workato, advises CIOs to place strong orchestration, policy, and transactional controls between agents and enterprise systems. These controls ensure that the organization maintains full oversight over how AI behaves and interacts with business processes.
The objective is clear: ensure autonomy without losing authority. Businesses need layered governance structures that define how and when agents can access sensitive data, interact with critical systems, or trigger business processes. Proper orchestration guarantees that each AI action aligns with operational and compliance standards, giving leaders confidence that their systems behave predictably.
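In its simplest form, such a control is a default-deny policy gate that every proposed agent action must pass before it touches an enterprise system. The sketch below is a minimal illustration of that idea; the policy table, roles, and action names are invented for the example, not any specific product’s API.

```python
# Minimal policy gate between agents and enterprise systems: every proposed
# action is checked against an allowlist before execution. Unknown actions
# are denied by default. All names here are illustrative.
POLICY = {
    "read_customer_record": {"allowed_roles": {"support-agent", "billing-agent"}},
    "issue_refund": {"allowed_roles": {"billing-agent"}, "max_amount": 500},
}

def authorize(agent_role: str, action: str, params: dict) -> bool:
    """Return True only if the role may perform the action within limits."""
    rule = POLICY.get(action)
    if rule is None:
        return False  # default-deny: actions with no policy never execute
    if agent_role not in rule["allowed_roles"]:
        return False
    limit = rule.get("max_amount")
    if limit is not None and params.get("amount", 0) > limit:
        return False
    return True

print(authorize("billing-agent", "issue_refund", {"amount": 200}))
print(authorize("support-agent", "issue_refund", {"amount": 200}))
```

The default-deny stance is the key design choice: an autonomous agent can only do what the policy layer has explicitly permitted, which is what keeps autonomy from eroding authority.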
For executives, implementing these controls is not about slowing down AI adoption; it’s about enabling scale safely. When orchestration frameworks are in place, enterprises can deploy more agents across more tasks while keeping operations compliant and reliable. This approach allows innovation to proceed at speed without sacrificing accountability, stability, or trust in the organization’s digital decision-making systems.
Parallels between current AI standards and historical protocol wars
The current wave of AI protocol development reflects a familiar but accelerated pattern. Competing technical standards are emerging simultaneously, each backed by major organizations and developer communities. Over time, some will fade, and others will dominate. Arnal Dayaratna, Research VP at IDC, pointed out that these agentic systems are still in their infancy, meaning that consolidation is inevitable and will likely happen faster than previous technology movements.
This consolidation will reshape how enterprises adopt AI. Right now, fragmentation forces CIOs to build custom integrations and maintain interoperability between multiple agents and protocols. But as standards form and stabilize, the path will clear for smoother large-scale implementation. This stability will unlock more consistent governance, faster deployment cycles, and broader collaboration between AI systems from different vendors.
For executive decision-makers, the message is to prepare infrastructure and teams for the standards shift ahead. Avoid long-term investments tied to one protocol or vendor until the market shows signs of convergence. Stay engaged with emerging standards bodies and technology leaders driving interoperability. When consolidation arrives, companies that have tracked the progression closely will transition smoothly and capture early advantage from the unified, high-efficiency AI ecosystems that follow.
Main highlights
- Interoperability must guide AI strategy: CIOs should prioritize building flexible systems that allow AI agents from different environments to communicate effectively, ensuring scalability and reducing operational friction.
- Fragmented protocols need bridging solutions: With protocols like MCP, Agent2Agent, and ANP lacking native compatibility, leaders should invest in integration frameworks and prepare for industry consolidation while avoiding dependence on a single vendor.
- Adaptability protects long-term investments: Protocol standards are changing rapidly; executives should adopt modular infrastructures that can evolve without major rework, securing agility as new technologies emerge.
- Experimentation drives competitive advantage: IT leaders should encourage controlled experimentation with new AI systems while maintaining strong governance, ensuring both innovation and operational stability.
- Governance is non-negotiable for autonomous AI: CIOs must implement orchestration, policy, and control mechanisms to maintain visibility and authority over AI-driven actions across all business functions.
- Standardization is coming fast, prepare early: As the AI protocol market matures and consolidates, decision-makers should track developments closely and design systems ready to adapt quickly when dominant standards emerge.