Shared memory and context as the cornerstone of AI orchestration
AI reaches its full potential when it understands the bigger picture, not just isolated commands. Shared memory and context give AI that broader awareness. When systems can access company history, workflows, and relevant data instantly, they stop being reactive tools and start becoming proactive partners. This structure means tasks can be assigned without repetitive explanations or reloading background details. It saves time and keeps operations consistent across teams and departments.
Arnab Bose, Chief Product Officer at Asana, put it simply: shared context gives AI the “direct access from the get-go” that it needs to perform with purpose and precision. It’s not a futuristic concept; it’s a practical layer that brings together memory, governance, and trust. In practice, shared memory means decisions are faster, less fragmented, and supported by traceable context. Over time, this approach creates continuity between human decisions and AI-driven execution.
For executives, shared context isn’t just about workflow efficiency. It’s about creating an AI infrastructure where every new model, agent, and integration inherits the same understanding of the organization. That’s how scale happens. Governance mechanisms, such as review checkpoints and secured data layers, make sure that AI operates safely within company policy. When done right, shared memory becomes the foundation for long-term AI orchestration, one that increases transparency, speeds up collaboration, and keeps systems aligned with business goals.
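The governed shared-context idea above can be made concrete with a small sketch. This is illustrative only, with hypothetical names (`SharedContext`, `grant`, `read`), not an Asana API: agents read from one organizational memory, and every read passes a policy checkpoint so access stays auditable.

```python
# Hypothetical sketch of a shared-context layer with a governed read path.
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """Org-wide memory agents read from, instead of being re-briefed per task."""
    records: dict = field(default_factory=dict)         # key -> fact
    allowed_scopes: dict = field(default_factory=dict)  # agent -> set of keys

    def grant(self, agent: str, *keys: str) -> None:
        self.allowed_scopes.setdefault(agent, set()).update(keys)

    def read(self, agent: str, key: str) -> str:
        # Governance checkpoint: every access is policy-checked and traceable.
        if key not in self.allowed_scopes.get(agent, set()):
            raise PermissionError(f"{agent} may not read {key!r}")
        return self.records[key]

ctx = SharedContext(records={"q3_goal": "Ship onboarding revamp"})
ctx.grant("planner-agent", "q3_goal")
goal = ctx.read("planner-agent", "q3_goal")  # context inherited, no re-briefing
```

Any new agent added to the organization inherits the same memory through the same checkpoint, which is the scaling property the passage describes.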
AI agents as proactive, integrated team members
AI tools are evolving from simple support systems into true teammates. With Asana’s “AI Teammates,” the company has reframed how businesses can use intelligent systems, embedding them directly into teams rather than hosting them as separate utilities. Once activated, these agents gain access to the same permission levels and resources as human team members, including connected platforms like Microsoft 365 and Google Drive. This allows them to work alongside employees, not behind them, contributing directly to project progress or problem resolution.
Arnab Bose, Asana’s Chief Product Officer, explained that each AI teammate is designed to “manifest itself as a teammate,” integrating into the same communication and permission systems that human users rely on. It’s a shift toward collaboration where AI agents have shared access to team history and current tasks. This setup removes redundant action loops and boosts transparency: everyone can see what both humans and AI agents have done, documented in a unified system.
For C-suite leaders, the business implication is significant. Embedded AI builds consistency across teams and brings more accountability into digital workflows. It ensures that when AI takes part in decision-making or executes on assigned objectives, it operates clearly within the same guardrails as any trusted employee. Human checkpoints still exist, ensuring alignment and quality, but overall output becomes faster and more predictable. The key takeaway is simple: when AI is treated as part of the team, not an external add-on, enterprise orchestration becomes more efficient, traceable, and scalable.
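The "same guardrails as any trusted employee" principle can be sketched in a few lines. All names here are hypothetical, not Asana's implementation: an AI teammate added to a team passes through the identical membership and integration checks as a human member.

```python
# Hypothetical sketch: AI teammates inherit the same permission checks as humans.
class Team:
    def __init__(self, integrations: set[str]):
        self.integrations = integrations   # e.g. connected Microsoft 365, Google Drive
        self.members: list[str] = []

    def add_member(self, name: str) -> None:
        self.members.append(name)

    def can_access(self, member: str, integration: str) -> bool:
        # Humans and AI agents go through one and the same check.
        return member in self.members and integration in self.integrations

team = Team({"Microsoft 365", "Google Drive"})
team.add_member("dana@example.com")   # human member
team.add_member("ai-teammate")        # AI agent, same membership model
```

Because there is one access path rather than a separate AI side channel, auditing "who touched what" stays uniform across human and machine actors.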
A project in mind?
Schedule a 30-minute meeting with us.
Senior experts helping you move faster across product, engineering, cloud & AI.
Ensuring transparency and robust human oversight
Transparency is the cornerstone of trust when deploying enterprise-grade AI systems. Every action taken by an AI agent must be visible, traceable, and auditable. As Asana’s Chief Product Officer, Arnab Bose, explained, Asana designed its AI systems to maintain full visibility, recording both AI and human actions across workflows. The result is “ease of explainability” — a way for organizations to understand what decisions were made, why they were made, and how those outcomes were reached.
Human oversight remains critical. Asana integrates review checkpoints that allow teams to intervene, adjust, or refine an AI’s work in real time. This ensures that the technology operates within business objectives, compliance standards, and organizational values. Admins can edit, pause, or redirect agent behavior through built-in controls, keeping AI outputs aligned with company expectations.
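A minimal human-in-the-loop checkpoint of the kind described above might look like the following. This is a sketch under assumed names (`Verdict`, `run_with_checkpoint`), not Asana's mechanism: the AI's draft cannot proceed until a reviewer approves, revises, or pauses it, and the decision is logged for auditability.

```python
# Hypothetical sketch of a review checkpoint gating AI output.
from enum import Enum

class Verdict(Enum):
    APPROVE = "approve"
    REVISE = "revise"
    PAUSE = "pause"

def run_with_checkpoint(draft: str, review) -> str:
    """Every AI draft passes a human checkpoint; the verdict is recorded."""
    audit_log = []
    verdict, notes = review(draft)
    audit_log.append({"draft": draft, "verdict": verdict.value, "notes": notes})
    if verdict is Verdict.APPROVE:
        return draft
    if verdict is Verdict.REVISE:
        return notes            # the human-edited version replaces the draft
    raise RuntimeError("Agent paused pending further review")

final = run_with_checkpoint(
    "Draft launch plan v1",
    review=lambda d: (Verdict.REVISE, d + " (scope trimmed by PM)"),
)
```

The point of the pattern is that the pause and redirect controls sit in the execution path itself, not in an after-the-fact report.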
For executive leaders, transparency and oversight are strategic safeguards. They reduce the risks of unintended AI decisions while preserving accountability. This model does not slow innovation, it strengthens it. With documented actions and human-readable feedback loops, organizations can scale AI responsibly while maintaining complete control over performance. Balancing autonomy with oversight ensures that AI evolves as a reliable extension of workforce intelligence, not an uncontrolled force within it.
Navigating integration and authorization challenges
Integration remains one of the toughest barriers in enterprise AI adoption. Connecting AI systems across departments and platforms introduces complexity in authorization, data management, and security. As Arnab Bose from Asana noted, employees often struggle with OAuth authorization, the process of allowing AI access to sensitive systems such as Asana through APIs. Without clear enterprise standards, it’s difficult to determine which permissions are safe and which pose risks.
The lack of unified authorization management creates vulnerabilities and slows adoption. Bose suggested that centralizing credential control through corporate identity providers, or building a universal directory of approved AI agents, could streamline secure integrations. This approach would give IT teams full visibility into which agents operate within the enterprise network, what data they access, and how permissions are configured.
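The "universal directory of approved agents" idea can be sketched as follows. Names and scope strings are hypothetical, not a real identity-provider API: IT registers each agent with explicit OAuth-style scopes, and every token request is validated against that central registry.

```python
# Hypothetical sketch of centralized credential control for AI agents.
# IT pre-approves each agent and the exact scopes it may hold.
APPROVED_AGENTS = {
    "reporting-agent": {"asana:tasks.read"},
    "scheduler-agent": {"asana:tasks.read", "calendar:events.write"},
}

def issue_token(agent: str, requested_scopes: set[str]) -> dict:
    """Grant only pre-approved scopes; reject unknown agents and extra scopes."""
    approved = APPROVED_AGENTS.get(agent)
    if approved is None:
        raise PermissionError(f"{agent} is not in the enterprise directory")
    denied = requested_scopes - approved
    if denied:
        raise PermissionError(f"{agent} denied scopes: {sorted(denied)}")
    return {"agent": agent, "scopes": sorted(requested_scopes)}

token = issue_token("reporting-agent", {"asana:tasks.read"})
```

With this shape, IT gets the visibility Bose describes: one place that answers which agents exist, what data they can reach, and how their permissions are configured.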
For executives, the issue is less technical and more structural. AI orchestration succeeds when its security processes are simplified, transparent, and consistent. Without strong integration governance, innovation will always meet friction. Establishing enterprise-wide rules for authorization and approved AI agents is not only a security measure; it’s a foundation for scalable, compliant AI ecosystems. Leaders who achieve this balance will reduce operational risk while unlocking faster and safer automation across their organizations.
The need for a universal protocol for AI memory and collaboration
Right now, the AI industry lacks a unified framework that allows systems to share memory and context across different platforms. This absence of standards limits how effectively AI agents can collaborate or exchange information. Arnab Bose, Chief Product Officer at Asana, explained that without such a common protocol, every integration must be custom-built. Each connection becomes a separate project, slowing down innovation and fragmenting enterprise workflows.
A universal standard would allow AI agents from different platforms to interact seamlessly, exchanging contextual data without bespoke architecture. This capability would change enterprise AI orchestration from isolated deployments to a synchronized environment where multi-agent collaboration becomes the norm. It would streamline communication between applications and reduce time spent managing separate integrations.
For C-suite leaders, supporting or adopting such standardization is a strategic imperative. Interoperability unlocks efficiency, scalability, and consistency across departments and systems. Organizations that advocate for shared AI protocols will stay ahead, benefiting from faster integration cycles, improved data coherence, and stronger governance. This shift will also reduce reliance on vendor-specific solutions, giving enterprises more control over their AI ecosystems.
The promise and limitations of emerging standards like MCP
There are early signs of progress toward that universal integration layer. Anthropic’s Model Context Protocol (MCP) is one of the first attempts to enable AI agents to connect with multiple external systems through a single setup. Arnab Bose highlighted MCP as a promising development because it simplifies how AI agents exchange data and context, cutting down on repetitive, one-off connections between applications. MCP aims to create a single action path for integrations rather than a different custom connector for each system.
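The "single action path" can be illustrated with the message shape such a protocol standardizes. The JSON-RPC-style method and field names below follow MCP's published conventions, but this is an illustration of the idea, not a complete or authoritative client:

```python
# Illustrative only: one shared request format replaces per-app connectors.
import json

def mcp_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a tool invocation in a single, shared JSON-RPC-style format."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The same shape addresses any backing system; only the tool name and
# arguments change, not the integration plumbing.
req = mcp_call(1, "create_task", {"project": "Q3 Launch", "title": "Draft brief"})
```

Because every system speaks the same envelope, adding a new integration means registering a tool, not building a bespoke connector.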
However, according to Bose, MCP is not yet a complete solution. Broader adoption across industries and platforms will take time. Each enterprise still needs to assess the protocol’s compatibility with its existing systems, governance models, and security requirements. MCP’s real potential lies in how consistently it can be implemented across organizations and vendors willing to commit to a shared framework.
For business leaders, MCP represents both progress and a call for collaboration. The potential benefits are clear: simpler integrations, faster deployment, and stronger context transfer. But scaling that potential depends on alignment across enterprises and technology providers. Decision-makers should see emerging protocols like MCP as stepping stones toward a more connected, transparent, and efficient AI ecosystem, not as finalized solutions. The companies that engage early in shaping these standards will set the pace for how enterprise AI orchestration evolves.
Key executive takeaways
- Build a foundation of shared memory and context: Leaders should invest in shared context frameworks that give AI systems instant access to historical and operational data. This reduces redundancy and ensures faster, more accurate decision-making across the organization.
- Treat AI as a full team participant: Integrate AI agents directly into existing workflows so they operate as active contributors, not side tools. This drives collaboration, accelerates execution, and builds consistent progress tracking within teams.
- Maintain transparency through human oversight: Implement structured checkpoints and clear audit trails for all AI actions. This hybrid approach ensures accountability, strengthens trust, and keeps AI outcomes aligned with strategic goals.
- Simplify secure integration and authorization: Standardize AI access protocols and approvals through centralized identity management. Executives should prioritize creating internal directories of trusted AI agents to enhance security and streamline multi-platform integration.
- Drive interoperability with shared standards: Push for cross-platform AI collaboration by supporting industry-wide protocols for context and memory sharing. Leaders who adopt or help define these standards will gain efficiency, reduce integration costs, and expand scalability.
- Adopt emerging protocols like MCP strategically: Evaluate and pilot solutions such as Anthropic’s Model Context Protocol to unify AI interactions across systems. Early engagement offers a competitive edge, but leaders must balance innovation with governance and security readiness.