Agentic AI requires a shift to behavioural and automated security approaches
We’re seeing a fundamental change in how security works in the age of artificial intelligence. This isn’t about defending a perimeter anymore; it’s about understanding behaviour. Agentic AI doesn’t just generate content the way traditional generative AI does. It acts autonomously. It makes decisions and executes tasks on its own, without asking for permission every time. That comes with a completely different risk profile.
Gee Rittenhouse, VP of Security Services at AWS, put it clearly: these AI agents look and behave a lot like human insiders from a security perspective. And that means if something goes wrong, the damage can unfold fast and in unpredictable ways. The boundary lines for these agents aren’t outside your systems. They’re buried deep inside the applications themselves. So conventional firewall-centric thinking doesn’t cut it anymore.
Security teams need to watch what these agents are doing. That requires shifting to behavioural observability, tracking what’s normal, spotting what deviates, and acting quickly. Security systems need to evolve in real time. Monitoring needs to be continuous and adaptive. This isn’t about blocking known threats; it’s about seeing unfamiliar ones as they happen.
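One way to picture behavioural observability is a rolling baseline per agent: learn what normal activity looks like, then flag deviations as they happen. The sketch below is illustrative only; the window size, warm-up count, and 3-sigma threshold are arbitrary assumptions, not a production detector.

```python
from collections import deque
from statistics import mean, stdev

class AgentBehaviourMonitor:
    """Illustrative sketch: flag agent activity that deviates from a
    rolling per-agent baseline. Window size and the 3-sigma threshold
    are example choices, not tuned values."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.window = window
        self.threshold = threshold
        self.history: dict[str, deque] = {}

    def observe(self, agent_id: str, actions_per_minute: float) -> bool:
        """Record one observation; return True if it looks anomalous."""
        hist = self.history.setdefault(agent_id, deque(maxlen=self.window))
        anomalous = False
        if len(hist) >= 10:  # require a minimal baseline before judging
            mu, sigma = mean(hist), stdev(hist)
            if sigma > 0 and abs(actions_per_minute - mu) > self.threshold * sigma:
                anomalous = True
        hist.append(actions_per_minute)
        return anomalous
```

An agent that normally performs around 10 actions per minute and suddenly jumps to 100 would be flagged on that observation, which is exactly the kind of "unfamiliar threat" signal the text describes.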
Agentic AI forces us to think faster and see farther. It doesn’t follow predictable rules, and that unpredictability demands smarter systems: ones that use anomaly detection and real-time insights to respond before harm is done. This is where AI helps security scale as quickly as the threats do. It’s not a defensive move; it’s about staying ahead.
Reinforcing traditional security basics under increased machine autonomy
Some fundamentals in security don’t change. Identity management. Least privilege. They’re still at the core. But here’s the difference now: when autonomous agents run without any human in the loop, a small slip in those basics can turn into a big problem very quickly.
Amy Herzog, AWS’s CISO, made a strong point. If you don’t get the basics right, the consequences hit hard and fast. Agents operate at machine speed. If a credential is too broad or if access isn’t properly restricted, agents won’t hesitate; they’ll act immediately. And if that action is wrong, it can scale damage in seconds.
What’s needed is tighter control. Short-lived credentials. Clearly scoped access rights. Minimal privileges. These are non-negotiables in environments where decisions are made by software, not humans. Security controls need to be fast, lean, and built into the system architecture from day one.
Leaders should be thinking about how quickly their systems can detect misuse, not just by people, but by autonomous functions that behave according to logic, not judgment. Speed works both ways. If you can’t keep up with the systems you’ve deployed, you’ve already lost pace.
This isn’t about going back to basics. It’s about reinforcing them with precision. In a world driven by AI actions, getting the fundamentals right is the only way to give your organization freedom to move fast, without leaving the back door open.
Defining autonomy boundaries and embedding trust within systems
Agentic systems will act without waiting for instructions. That’s their purpose. But giving autonomy doesn’t mean giving up control. The challenge isn’t stopping the agent from performing tasks; it’s defining exactly what those tasks should be, and what limits can’t be crossed.
Neha Rungta, Director of Applied Science at AWS, made the point clearly. If you assign an autonomous agent to handle customer refunds, how many refunds, and how much, should it be allowed to process without oversight? $100? $500? Once you set that answer, you’ve defined the agent’s operational boundary. That’s how you build trust, not by hoping it stays in line, but by giving it clear limits and enforcing them in-system.
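The refund example above maps directly to a guard that sits between the agent and the action. A minimal sketch, assuming a hypothetical $500 per-refund cap and a daily budget; both limits, and the escalation-by-exception pattern, are illustrative choices:

```python
class RefundBoundaryError(Exception):
    """Raised when an agent's requested refund crosses its boundary."""

class RefundAgentGuard:
    """Hypothetical guard wrapping an agent's refund capability.
    The per-refund cap and daily budget are example limits."""

    def __init__(self, per_refund_limit: float = 500.0, daily_limit: float = 2000.0):
        self.per_refund_limit = per_refund_limit
        self.daily_limit = daily_limit
        self.issued_today = 0.0

    def authorize(self, amount: float) -> bool:
        """Approve an in-boundary refund; raise to force human escalation."""
        if amount > self.per_refund_limit:
            raise RefundBoundaryError(
                f"refund ${amount:.2f} exceeds per-refund limit; escalate to a human"
            )
        if self.issued_today + amount > self.daily_limit:
            raise RefundBoundaryError("daily refund budget exhausted; escalate to a human")
        self.issued_today += amount
        return True
```

The point is that the boundary is enforced in-system: the agent cannot talk its way past the guard, and every out-of-bounds request becomes an explicit escalation rather than a silent action.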
That’s where tools like automated reasoning come in. These are systems that use math, not speculation, to prove that the agent can’t take unauthorized actions. It’s not subjective. You can prove the security outcomes using formal logic, which means you can trust the system’s decisions without guessing.
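Real automated-reasoning tools prove such properties over all possible behaviours. A toy flavour of the idea, purely for illustration: make the policy explicit as data, state an invariant ("no role may ever delete audit logs"), and check it exhaustively over every role. The roles, actions, and rules here are all invented for the example.

```python
# Toy policy model: an explicit allow-list; everything else is denied.
RULES = {
    ("refund-agent", "issue_refund"): True,
    ("refund-agent", "read_orders"): True,
    ("support-agent", "read_orders"): True,
}
ROLES = ["refund-agent", "support-agent", "admin-agent"]

def policy_permits(role: str, action: str) -> bool:
    """Default-deny lookup over the explicit rules."""
    return RULES.get((role, action), False)

def prove_no_audit_deletion() -> bool:
    """Exhaustively check the invariant 'delete_audit_log is never
    permitted' for every role -- a miniature stand-in for the formal
    proofs that real automated-reasoning tools perform."""
    return all(not policy_permits(role, "delete_audit_log") for role in ROLES)
```

Because the check covers every case rather than a sample of test inputs, the result is a guarantee over the model, which is the essential difference between proving a property and merely testing it.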
This approach is already being implemented. Amazon launched the AWS Security Agent to help builders embed security controls directly into application code from the beginning. That means restrictions, credentials, and permissions are hardcoded into the system design, not bolted on afterward.
For C-level leaders, this isn’t just a technical feature; it’s a governance requirement. You need to be confident that the automation you deploy won’t drift into unsafe behaviour, and that your security team isn’t left chasing down compliance breaches after the fact. Boundaries must be clear from the start. And trust must be built in, not assumed.
Transitioning security professionals from AI consumers to AI builders
There’s been a trend in cybersecurity toward using AI as a convenience tool. Ask a chatbot for summaries, have it skim some logs; useful, but reactive. That’s not enough anymore. The shift to autonomous agents isn’t just a technology upgrade; it’s a redefinition of how cybersecurity leadership needs to think.
Most security teams today still operate as consumers of AI technology. They use assistants and automated systems in a helper role. But Hart Rossman, Vice President at the Office of the CISO at AWS, called out the real opportunity: we need to build. That means creating security agents tailored to specific environments, not relying on generic ones.
AWS’s security incident response agent is already doing this. It identifies evidence, assembles insights across systems, and suggests next steps. This cuts investigation time from days to minutes. That matters. It reduces exposure time. It accelerates response. It makes your organization harder to hit, and far quicker to recover.
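One of the steps such an agent automates, assembling evidence across systems, is essentially a merge of per-source event streams into a single ordered timeline. A minimal sketch; the source names and event fields are assumptions for the example, not the agent's actual data model:

```python
from datetime import datetime

def assemble_timeline(sources: dict[str, list[dict]]) -> list[dict]:
    """Sketch of evidence assembly: merge events gathered from several
    systems into one timeline ordered by timestamp. Each event is
    assumed to carry an ISO-8601 'ts' field."""
    merged = [
        {"source": name, **event}
        for name, events in sources.items()
        for event in events
    ]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["ts"]))
```

Done by hand across a dozen systems, this correlation is what takes analysts days; done in code, it takes milliseconds, which is where the days-to-minutes reduction comes from.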
Executives need to think beyond dashboards and manual alerts. The vision is hands-off, proactive, and continuous. Predefined workflows shouldn’t drive your incident response. Agents should do it in real time, adapting as the threat changes.
If your security tools can’t operate intelligently and independently, they’re falling behind. The path forward isn’t to delegate responsibility to AI; it’s to reshape your teams so they create systems that perform under pressure. That’s the difference between meeting baseline compliance and actually securing a business built on speed and scale.
Enhancing defensive capabilities through advanced AI integration
There’s a lot of concern about attackers using AI to scale threats. That’s valid. But we should focus on the bigger opportunity: AI, especially agentic systems, gives defenders more speed, more context, and broader visibility than ever before. The advantage is shifting to the defense side.
According to Rittenhouse, AI systems like large language models (LLMs) handle security data at a scale that humans simply can’t match. These models can retain historical context across millions of events, track evolving patterns over time, and act on signals at machine speed. That’s not just automation; it’s intelligent execution.
This means security teams are no longer confined to fragmented dashboards or reactive triage. AI can consolidate massive datasets, detect early-stage threats, and trigger defensive measures automatically. When correctly integrated, these systems reduce noise and prioritize impacts that matter. That leads to tighter, faster responses and fewer blind spots.
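The noise-reduction step described here can be sketched as a simple triage function: collapse duplicate alerts, weight what remains by severity and frequency, and surface only the highest-impact signals. The field names (`source`, `signature`, `severity`) and the ranking key are assumptions for the example:

```python
def triage(alerts: list[dict], top_n: int = 3) -> list[dict]:
    """Illustrative noise reduction: deduplicate alerts into buckets,
    then rank buckets by severity and repeat count, keeping the top N.
    Alert field names are example assumptions."""
    buckets: dict[tuple, dict] = {}
    for alert in alerts:
        key = (alert["source"], alert["signature"])
        bucket = buckets.setdefault(key, {**alert, "count": 0})
        bucket["count"] += 1  # duplicates raise the count instead of the noise
    ranked = sorted(
        buckets.values(),
        key=lambda b: (b["severity"], b["count"]),
        reverse=True,
    )
    return ranked[:top_n]
```

Real platforms use far richer scoring, but the shape is the same: the analyst sees a short, prioritized list instead of a raw feed, which is how consolidation translates into fewer blind spots.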
For executives, the takeaway is this: AI isn’t optional in modern security. If you’re depending on manual workflows or legacy architecture to respond to dynamic threats, the gap between risk and response will keep growing. Agentic AI closes that gap. It operates continuously, scales with infrastructure, and adapts to new threat surfaces without constant re-engineering.
And while attackers are using AI, they typically don’t have access to the same level of infrastructure, intelligence, or system insight that defenders do. If you leverage the tools correctly, the odds tilt in your favor. Not by waiting until an attack happens, but by designing a system that sees it before it escalates. That’s how complex, AI-driven threats stop being disruptive, and start being manageable.
Key executive takeaways
- Shift to behavioural and automated security: Leaders should prioritize behavioural analysis and observability to effectively monitor autonomous agents, as traditional perimeter-based defenses no longer address agentic AI’s insider-like risks.
- Reinforce core identity and privilege controls: Executive teams must audit and tighten identity frameworks and privilege boundaries, ensuring credentials are short-lived and tightly scoped to avoid rapid-scale errors in machine-speed environments.
- Define autonomy boundaries with precision: Organizations should embed defined operational limits and trust mechanisms within autonomous systems to reduce the risk of unintended actions and support auditability at scale.
- Build AI-driven security, don’t just consume it: Security leaders should invest in building intelligent, agent-based tools tailored to their environments, shifting from reactive AI usage to proactive, integrated solutions that reduce incident response time dramatically.
- Use AI to strengthen defense posture: Executives must view AI as a force multiplier for defense, deploying LLMs and autonomous agents that monitor, contextualize, and act on threats in real time to reduce exposure and blind spots.