AI agents are rapidly expanding the enterprise attack surface
AI agents are connecting to more enterprise systems than any other software in history. They operate across networks, ingest data, and trigger actions faster than humans can track. Yet, the frameworks that keep human users in check don’t apply here. These agents run autonomously, creating new pathways for access, and therefore new entry points for attackers. Spiros Xanthos, Founder and CEO of Resolve AI, warned that the lack of governance structures around agentic AI leaves enterprises vulnerable. Jon Aniano, SVP of Product and CRM Applications at Zendesk, added that traditional security models still rely on human identity and oversight, not autonomous machine-to-machine operations.
The enterprise environment has evolved faster than the security controls meant to protect it. These systems move in microseconds, while human oversight works at a slower, decision-based pace. That imbalance gives attackers more room to exploit weak integration points and unsecured agent permissions. In short, the evolution of AI agents is outpacing the human capacity to secure them.
C-suite executives should view this shift as a leadership problem, not a purely technical one. Security can no longer focus only on firewalls and compliance checklists. It must now include dynamic governance for agents that act on behalf of the company. This requires new standards for access control, behavior tracking, and accountability. Forward-thinking leaders will make it a priority to establish frameworks that keep agentic AI aligned with corporate safety principles, long before a major incident forces the issue.
Model Context Protocol (MCP) simplifies integration
The Model Context Protocol (MCP) has become a favorite among enterprises because it makes it easier to connect multiple AI agents, tools, and data systems. It brings speed to integration, reducing time spent on setup and cross-system communication. But that same ease of connection is also what makes MCP risky. According to Spiros Xanthos from Resolve AI, MCP servers are “extremely permissive” — often allowing broader access than application programming interfaces (APIs), which have stricter, predefined security boundaries.
This permissiveness creates a problem. As organizations deploy multiple agents with distinct access privileges, they must manage a growing and complex permission matrix. Traditional security tools were never designed for continuous, autonomous interactions between machines. The few solutions that come close, such as Splunk’s fine-grained index-level access controls, still cater mainly to human operators. This gap leaves agent-based systems exposed to unintended data sharing, privilege escalation, and cross-agent misbehavior.
Executives should look closely at how their companies are adopting MCP and where governance needs to evolve. Agility and integration speed are valuable, but neither should outweigh the cost of losing control of internal systems. The most resilient enterprises will implement layered permissions, audit every agent’s access, and continuously reassess risk as systems become more interconnected. Leadership teams should make sure their security and engineering units align on this goal, balancing speed of innovation with control over exposure.
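The layered-permissions idea above can be sketched as a deny-by-default scope check that runs before any agent action. This is a minimal illustration, not part of any MCP SDK; the names (`AGENT_SCOPES`, `authorize`) and the scope strings are hypothetical.

```python
# A minimal sketch of agent-scoped permission checks (deny by default).
# All names and scopes here are illustrative assumptions, not an MCP API.

AGENT_SCOPES = {
    "billing-agent": {"invoices:read", "invoices:write"},
    "support-agent": {"tickets:read", "tickets:write", "invoices:read"},
}

def authorize(agent_id: str, scope: str) -> bool:
    """An agent may act only within scopes it was explicitly granted."""
    return scope in AGENT_SCOPES.get(agent_id, set())

# Checked before every tool or API call an agent attempts:
assert authorize("support-agent", "invoices:read")       # granted
assert not authorize("support-agent", "invoices:write")  # denied
```

The point of the sketch is the default: an unknown agent or unlisted scope resolves to "deny", which is the opposite of the permissive posture the article attributes to typical MCP servers.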
Accountability in AI-driven workflows is ambiguous
AI agents are now fully embedded in day-to-day business operations. They assist customer service teams, process transactions, and even handle parts of authentication workflows. What once involved a direct line between a user, a system, and recorded human oversight now includes multiple agents acting on independent logic. This creates confusion around accountability. Jon Aniano, SVP of Product and CRM Applications at Zendesk, explained that accountability becomes difficult when a human directs an AI to act, and the AI takes the wrong action. In such cases, it’s not immediately clear who is responsible: the human, the AI, or the enterprise governance model.
In customer service platforms, for instance, AI-driven efficiency has scaled beyond what most businesses anticipated. While this improves response times and user experiences, it also introduces complex risks. Mis-authentication, data exposure, or authorization errors can quickly escalate without clear fail-safes. Zendesk mitigates this by enforcing strict access scopes and explicitly sanctioned API-based actions. Still, these measures depend on human configuration and policy discipline, not on universally accepted technical standards. The gap points to an urgent need for a global framework that defines accountability across hybrid human-AI activity.
Executives should recognize that accountability gaps are not just operational issues; they represent governance and compliance risks. As AI takes over authentication and decision-making flows, leadership teams must invest in traceability. Every AI action needs to be logged, attributable, and auditable in real time. For industries bound by heavy regulation, such as finance and healthcare, maintaining partial human verification remains essential until technical standards mature. Trust in AI automation will depend on transparency, and that trust must be built deliberately through policy, documentation, and ongoing oversight.
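The traceability requirement above, that every AI action be logged, attributable, and auditable, can be sketched as an append-only audit trail that records both the acting agent and the human who directed it. The field names and in-memory log are illustrative assumptions.

```python
# A minimal sketch of attributable audit logging for agent actions.
# The schema and the in-memory list are assumptions; a real system
# would write to an append-only store with integrity guarantees.
import json
import time

AUDIT_LOG: list[str] = []

def record_action(agent_id: str, initiator: str, action: str, target: str) -> None:
    """Log which agent did what, on whose instruction, and when."""
    entry = {
        "ts": time.time(),
        "agent": agent_id,       # the acting AI agent
        "initiator": initiator,  # the human or system that directed it
        "action": action,
        "target": target,
    }
    AUDIT_LOG.append(json.dumps(entry))  # one JSON line per action

record_action("support-agent", "user:alice", "reset_password", "account:4711")
```

Keeping the human initiator alongside the agent identity is what makes the accountability question answerable after the fact: when an action goes wrong, the log shows both who asked and which agent executed.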
Enterprises remain cautious about fully authorizing autonomous AI
Full automation remains the goal for many organizations, but comfort with autonomous decision-making is still limited. Most businesses continue to rely on human supervision to validate AI-driven actions. The fear of unintended consequences, especially in production systems or regulated environments, holds enterprises back. Spiros Xanthos, Founder and CEO at Resolve AI, acknowledged that while autonomous agents may eventually surpass humans in trust and precision, enterprises are not ready to delegate full control. Resolve AI itself is experimenting with limited “standing authorizations” in low-risk tasks, such as coding support, where outcomes can be safely reviewed.
Human oversight ensures that any AI-driven action can be verified before it impacts critical systems. This deliberate restraint slows adoption but keeps operations stable. Even among companies eager to innovate, full AI autonomy is viewed as something to expand gradually. The challenge lies in scaling trust without compromising safety or compliance. As confidence grows through controlled deployments, standing authorizations may be extended, but only after consistent validation of reliability and behavioral predictability.
For executives, the path toward greater AI autonomy should be staged, verified, and continually monitored. Developing internal standards for risk tiers, from “safe to automate” to “require human validation”, will help enterprises evolve responsibly. This incremental approach builds confidence internally and externally. Leaders should encourage testing scenarios that combine efficiency with safety, allowing teams to refine controls before expanding autonomy. Strategic patience here is a strength; it preserves operational integrity while positioning the company for sustainable, long-term AI integration.
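The risk tiers described above can be sketched as a simple gate: actions classified as safe run autonomously, while everything else waits for human validation. The tier names mirror the text; the action names and dispatch logic are hypothetical.

```python
# A sketch of tiered autonomy gating, assuming two tiers as in the text.
# Action names and the mapping are illustrative, not a real product config.
RISK_TIERS = {
    "draft_reply": "safe_to_automate",
    "refund_payment": "require_human_validation",
}

def execute(action: str, human_approved: bool = False) -> str:
    # Unknown actions default to the cautious tier.
    tier = RISK_TIERS.get(action, "require_human_validation")
    if tier == "safe_to_automate":
        return "executed"
    if human_approved:
        return "executed_after_review"
    return "queued_for_review"
```

The design choice worth noting is the default: an action nobody has classified is treated as high-risk, so expanding autonomy requires an explicit decision rather than happening by omission.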
Existing security tools provide interim measures to manage and secure AI agent activities
Enterprises don’t need to wait for a new generation of security frameworks to begin addressing the risks of AI agents. Existing tools, when refined and properly configured, already offer transitional protection. Spiros Xanthos, Founder and CEO at Resolve AI, pointed to Splunk’s fine-grained index-level access controls as an example of a tool that can be adapted for agent-level governance. These provide segmented access permissions that restrict what each agent can interact with. Jon Aniano, SVP of Product and CRM Applications at Zendesk, added that Zendesk is taking a structured, cautious path, using declaratively designed API calls that explicitly define actions an agent may perform. This ensures that every expansion of AI capabilities is deliberate and validated through human oversight.
This controlled approach doesn’t eliminate risk, but it keeps exposure within manageable boundaries while standards and agent governance models mature. The goal isn’t to slow AI innovation; it’s to ensure it operates within transparent parameters. Enterprises can apply the same precision they use in traditional identity management to agent identities, defining roles, scopes, and approval thresholds for every interaction an agent performs in production environments.
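The declaratively sanctioned API calls described above can be sketched as an allowlist declared up front, against which every outgoing call is checked. This is a generic illustration of the pattern, not Zendesk's actual mechanism; the agent name, paths, and rate fields are assumptions.

```python
# A sketch of a declarative action allowlist: an agent may only issue
# API calls that were explicitly declared in advance. The schema,
# agent name, and endpoints are hypothetical.
ALLOWED_ACTIONS = {
    "ticket-agent": [
        {"method": "POST", "path": "/api/v2/tickets", "max_per_hour": 50},
        {"method": "GET",  "path": "/api/v2/tickets", "max_per_hour": 500},
    ],
}

def is_sanctioned(agent_id: str, method: str, path: str) -> bool:
    """True only if this exact method+path was declared for this agent."""
    return any(
        rule["method"] == method and rule["path"] == path
        for rule in ALLOWED_ACTIONS.get(agent_id, [])
    )
```

Because the allowed surface is data rather than code, it can be reviewed, audited, and expanded deliberately, which is the point of the cautious, declarative approach the article describes.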
For executives, the focus should be on balance: using existing technology efficiently while investing in scalable oversight mechanisms. These interim tools represent the bridge between today’s fragmented governance and tomorrow’s standardized frameworks. Leadership teams should insist on regular audits, event logging, and telemetry that can trace agent behavior at every stage. Incremental deployment, paired with formal testing, allows teams to gain confidence in security posture before expanding agent authority. AI adoption will continue to accelerate, but organizations that combine speed with disciplined control will be the ones that maintain trust and long-term resilience.
Key takeaways for leaders
- AI agents are expanding faster than enterprise security can adapt: Enterprise AI agents now hold deeper system access than any prior software, creating major new vulnerabilities. Leaders should invest in adaptive security frameworks that account for autonomous, non-human interactions before exposure widens.
- MCP’s convenience creates serious control gaps: Model Context Protocol (MCP) accelerates integration but sacrifices access discipline. Executives should balance speed with governance, enforcing layered permissions and continuous monitoring to limit unintended system exposure.
- Accountability in AI-driven actions is undefined: As AI systems make independent decisions, responsibility becomes blurred between human and machine. Leaders must mandate transparent logging, traceability, and audit mechanisms to maintain compliance and assign clear accountability.
- Enterprises remain cautious about full automation: Most organizations still rely on human oversight, particularly for high-risk decisions. Executives should phase AI autonomy carefully, testing in low-risk areas first, while reinforcing human review to protect core operations.
- Existing tools can stabilize security in the interim: Current technologies such as Splunk and declarative API controls can help manage AI access until new standards mature. Leaders should use these tools strategically, auditing agent behavior regularly and expanding permissions only after proven reliability.


