Agentic AI brings unprecedented complexity in identity and authorization management

AI agents are beginning to operate independently across core business systems, email, customer databases, and CRMs, executing tasks that usually require human authorization. The fundamental question is: under whose authority do these agents act? Nancy Wang, CTO at 1Password, described this as the key challenge. Authenticating an agent (confirming that it is allowed to exist) is relatively straightforward. Defining authorization (what that agent is permitted to do) is far more complex.

For enterprises, this is a governance issue hiding inside a technical problem. AI agents don’t just interact with data; they act on it. Traditional identity systems were built for humans logging in and out, not for autonomous systems that learn, adapt, and act continuously. The shift redefines access control from static permission lists to dynamic, context-aware authorizations that adapt as agents evolve.

Executives must recognize how critical this transition is. Poorly designed authorization frameworks will expose companies to breaches and liability at levels we’ve never seen before. Decision-makers should push for AI governance structures that include explicit rules for access scope, task approval, and audit logging at every action level. The enterprises that get this right will gain a security advantage comparable to the one early adopters of two-factor authentication enjoyed: trust as a default expectation.
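The shift from static permission lists to scoped, per-action authorization with audit logging can be sketched minimally. This is an illustration only: the agent name, action names, and data fields below are assumptions for the example, not any vendor's model.

```python
import time
from dataclasses import dataclass


@dataclass
class AgentGrant:
    """A scoped, time-bound grant for one agent (illustrative fields)."""
    agent_id: str
    allowed_actions: set
    expires_at: float  # unix timestamp when the grant lapses


audit_log = []  # every decision, allowed or not, is recorded


def authorize(grant: AgentGrant, action: str, resource: str) -> bool:
    """Check a single action against the grant and audit the decision."""
    allowed = action in grant.allowed_actions and time.time() < grant.expires_at
    audit_log.append({
        "agent": grant.agent_id,
        "action": action,
        "resource": resource,
        "allowed": allowed,
        "at": time.time(),
    })
    return allowed


# Hypothetical agent: may read CRM data for 5 minutes, nothing else.
grant = AgentGrant("crm-summarizer", {"read"}, time.time() + 300)
authorize(grant, "read", "crm/accounts")    # within scope and window
authorize(grant, "delete", "crm/accounts")  # denied: outside scope
```

The point of the sketch is that every decision, including denials, lands in the audit record, which is what makes per-action governance reviewable after the fact.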

The evolution of enterprise tools mirrors the journey of consumer password managers

Nancy Wang shared a familiar scenario at 1Password: the product started as a consumer tool but entered corporate systems organically as employees brought it into their workflow. People trust tools that work, and if a product proves reliable in a personal setting, it naturally scales into professional use. That same pattern is now playing out with AI. Individual developers and teams adopt generative AI agents for convenience, often faster than their IT departments can establish governance policies.

This pattern is not inherently dangerous but does demand executive attention. When consumer-grade AI tools enter enterprise stacks, the lines between sanctioned and unsanctioned usage blur. What begins as an efficiency win can become a serious security vulnerability if not brought under proper management. Wang pointed out that “agents also have secrets,” meaning these tools store or access credentials that must be guarded with the same rigor as human users’ passwords.

Leaders need to treat AI agents as first-class digital identities in their cybersecurity strategy. The goal is controlled empowerment, allowing innovation to move fast while maintaining transparency, traceability, and consistent standards across departments. As with the early rise of bring-your-own-device culture, ignoring informal adoption doesn’t stop it. Company policies should adapt to how people actually work, not how we assume they will.

In today’s enterprise environment, balancing autonomy with accountability will determine how safely and effectively organizations scale AI-driven productivity. Those who combine security with simplicity will lead.


Developer practices are exacerbating security risks when integrating AI into coding environments

The way developers interact with AI tools today is creating one of the largest new security risks in enterprise environments. Alex Stamos, Chief Product Officer at Corridor, pointed out that many developers still paste usernames, passwords, and API keys directly into AI prompts. This creates a direct exposure pathway for sensitive data because prompts are often processed or stored in external systems. It’s a basic error, but it happens frequently across industries.

Companies like 1Password are adapting. Nancy Wang explained that their systems automatically scan output code to detect any plain-text credentials and move them into secure storage before they persist. This automated protection aligns with how enterprises need to evolve: the security layer should be invisible to the user but always active. Still, Wang acknowledged a fundamental tension: if tools are difficult to set up or slow down workflows, people bypass them. This happens even in companies fully aware of security implications.
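A toy version of this kind of credential detection might look like the following. The patterns and function names are hypothetical illustrations of the general technique, not 1Password's actual scanner, which would use far more rules plus entropy analysis.

```python
import re

# Illustrative patterns for common credential shapes; a production
# scanner would combine many more rules with entropy checks.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
]


def scan_generated_code(code: str) -> list:
    """Return secret-looking substrings so they can be moved to a vault."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(code))
    return hits


sample = 'db_password = "hunter2"\nkey = "AKIAABCDEFGHIJKLMNOP"'
findings = scan_generated_code(sample)
```

In a real pipeline, each finding would be replaced with a vault reference before the code is committed, so the plain-text value never persists.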

Leaders need to understand this behavioral element. Developer productivity and cybersecurity must coexist without friction. Overly rigid controls damage velocity, while lax oversight opens the door to breaches. The executive priority should be investing in “secure usability”: making safe practices the easiest option rather than the most cumbersome one. This approach not only protects critical systems; it also keeps top engineering talent motivated and efficient.

AI coding agents pose unique challenges that traditional security scanning tools are not designed to handle

Most legacy security scanners were designed for static analysis, tools that inspect code line by line, flagging issues based on predefined rules. AI coding assistants don’t work that way. They generate, test, and rewrite code dynamically, hundreds of times faster than static systems can react. Alex Stamos of Corridor highlighted that even small detection errors, such as false positives, can derail how these AI systems operate. Once an AI model marks a piece of code as flawed, it continues to adjust its behavior based on that feedback, often amplifying the initial mistake.

This means security tools need a new operating model. Real-time scanning must balance speed and precision without interrupting the flow of generative coding. The challenge is technical, but its consequences are strategic. Failure to detect vulnerabilities quickly can compromise core systems; false alerts disrupt productivity and trust in development pipelines. Achieving sub-second response times while maintaining accuracy is now a fundamental expectation, not an enhancement.

For decision-makers, understanding this shift is essential. Security tooling is no longer just a backend feature; it’s becoming part of the active development environment. The AI layer that assists coders must also defend them. Enterprises that invest in adaptive, context-sensitive scanning will keep their development pipelines fast and secure, avoiding the setbacks of outdated tools that can’t keep up with real-time AI collaboration.

Current authorization frameworks are ill-equipped to manage AI agents’ expansive access rights

AI agents are being granted access far beyond what most enterprise systems were designed to handle. Spiros Xanthos, founder and CEO of Resolve AI, explained that these agents typically have more privileges than conventional applications, permissions that can expose data or allow unintended actions. This elevated access profile creates a direct security concern since an exploited agent could act on behalf of an attacker.

Nancy Wang, CTO of 1Password, noted that existing standards like SPIFFE and SPIRE, developed for securing workloads in containerized environments, are being tested for AI systems but don’t fully align with how autonomous agents function. These frameworks manage machine identities but lack the flexibility for dynamic, short-lived authorization that AI demands. Wang emphasized the need for scope-limited and time-bound credentials. In her words, access must be constrained to specific actions within defined windows, minimizing exposure when threats arise.
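Scope-limited, time-bound credentials of the kind Wang describes can be illustrated with a small signed-token sketch. The signing scheme, key handling, and claim names here are assumptions made for the demonstration (loosely JWT-shaped), not any vendor's actual format.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key"  # illustration only; production keys live in a KMS


def mint_credential(agent_id: str, scope: list, ttl_s: int) -> str:
    """Mint a signed credential limited to specific actions and a time window."""
    claims = {"sub": agent_id, "scope": scope, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig


def check(token: str, action: str) -> bool:
    """Verify the signature, then enforce scope and expiry for one action."""
    body, sig = token.split(".")
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return action in claims["scope"] and time.time() < claims["exp"]


# Hypothetical agent: may read tickets for two minutes, nothing more.
token = mint_credential("ticket-triage-agent", ["read:tickets"], ttl_s=120)
```

The design choice worth noting is that expiry and scope are enforced at every check, so a leaked credential is useless outside its narrow action set and time window.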

Executives should focus on establishing identity protocols built specifically for AI ecosystems. This means enforcing granular policy definitions, linking each agent’s identity, scope of work, and time of activity to a verifiable audit record. Broad, permanent access keys no longer meet modern security standards. The architecture must evolve toward contextual authorization, ensuring agents are empowered to act only where and when necessary.

Leaders should consider this an opportunity to modernize enterprise access management entirely. Authorizing tasks instead of roles not only reduces cybersecurity exposure but also increases operational control, aligning business actions more closely with verifiable intent.

Open standards will ultimately define the future of AI agent authorization, outpacing proprietary approaches

The race to secure AI ecosystems has spurred dozens of companies to push proprietary solutions. But Alex Stamos, Chief Product Officer at Corridor, stated clearly that none of these closed systems will dominate. Instead, open standards, particularly extensions to OpenID Connect (OIDC), are emerging as the strongest candidates for long-term adoption. Open systems foster interoperability and industry-wide trust, both essential in managing AI agent identities that must coordinate across multiple platforms and vendors.
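As one concrete illustration of why open token formats matter here, consider a hypothetical agent-delegation payload loosely modeled on the “act” (actor) claim from OAuth 2.0 Token Exchange (RFC 8693), which lets a token express both the human principal and the agent acting on their behalf. The exact claims for AI agents are still being standardized, so treat these field values as assumptions.

```python
# Sketch of an agent-delegation token payload. The "act" claim pattern
# comes from OAuth 2.0 Token Exchange (RFC 8693); the specific values
# below are illustrative, not a finalized AI-agent profile.
agent_token_claims = {
    "iss": "https://idp.example.com",      # identity provider that vouches for both
    "sub": "alice@example.com",            # the human the agent acts for
    "act": {"sub": "agent:expense-bot"},   # the agent actually performing the action
    "scope": "expenses:read expenses:submit",
    "exp": 1767225600,                     # short-lived by design
}
```

Because both the delegator and the delegate appear in one verifiable token, any relying party that understands the open standard can audit who did what on whose behalf, regardless of vendor.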

Enterprises should treat open standard adoption as a strategic decision. Proprietary approaches often limit flexibility, bind organizations to specific vendors, and create integration bottlenecks down the line. For businesses operating globally, the need for cross-platform consistency outweighs any short-term convenience from vendor-specific systems. Adhering to open standards ensures internal teams and external partners can interact securely and predictably.

Executives should direct their organizations to contribute to these open frameworks rather than waiting for a single vendor to dictate the course. Collaborative standards accelerate stability, reduce regulatory risk, and enable future innovation on top of secure foundations. The companies that move early to align their systems with emergent open identity protocols will gain both security resilience and market credibility as AI networks become more interconnected across industries.

At scale, even “edge cases” in identity management become systemic vulnerabilities

When systems operate at global scale, small flaws no longer remain isolated incidents. Alex Stamos, Chief Product Officer at Corridor and former Chief Information Security Officer at Facebook, shared that the platform faced around 700,000 account takeovers every day. This level of exposure demonstrates how minimal irregularities, when multiplied across billions of interactions, quickly evolve into major security threats. The same reality will shape the next phase of AI adoption, where each agent’s identity and authorization must be validated continuously and transparently.

As AI systems proliferate, what used to be considered exceptional or rare behavior becomes routine. Every misconfigured credential, unverified action, or unattended authorization can have real-world consequences, affecting not just data integrity but also user trust. Enterprises that view scale as a technical factor rather than a governance challenge will fall behind. Preventing these failures requires integrated oversight, proactive monitoring, and a shift toward automated identity validation capable of responding to anomalies at machine speed.

For executives, the lesson is clear: scalability must include security scalability. Every process, from identity verification to access control, needs to handle exponential growth without increasing risk. Traditional solutions based on periodic audits or manual approvals cannot handle billions of autonomous interactions per day. Investing in continuous, standards-based identity infrastructure is now a business priority, not an optional upgrade.

Leaders who act early to strengthen identity management frameworks before mass deployment of AI agents will establish their organizations as secure and reliable players in a rapidly evolving AI ecosystem. The cost of inaction will rise sharply as volume grows, and companies that underestimate the impact of scale on identity control may find themselves reacting to crises rather than preventing them.

Concluding thoughts

AI agents are not just another layer of software. They’re active participants in your systems, making decisions, moving data, and representing your organization in ways that demand clear governance. The identity and authorization challenges described throughout this piece are not theoretical; they’re structural. They define how securely, efficiently, and responsibly your organization will scale AI.

The lesson for leadership is direct. Relying on old frameworks built for static human users will not hold under the demands of adaptive, interconnected AI ecosystems. Enterprises that fail to act will be forced to retrofit solutions later at higher cost and lower control. The true differentiator isn’t how fast you adopt AI; it’s how intelligently you secure it.

Decision-makers should prioritize three things now: first, mandate modern identity standards built for autonomous systems; second, enforce time-bound, task-limited authorizations; and third, align with open, interoperable frameworks that can evolve with the technology.

AI will test every assumption we’ve made about access, control, and trust. The companies that lead this transformation won’t just be compliant or secure, they’ll set the foundation for the next generation of intelligent, resilient enterprises.

Alexander Procter

April 2, 2026

