Generative AI is reshaping developer roles in code creation
Software engineering is entering a new era driven by generative AI. Developers are no longer limited to writing code line by line. Instead, they oversee intelligent systems that generate and deploy software faster than ever before. This shift from being directly involved in coding to supervising AI-driven development pipelines is redefining what engineering efficiency looks like.
Tom Scully, Principal Architect for Government and Critical Infrastructure at Palo Alto Networks (Asia-Pacific and Japan), describes this transition as moving from being “in the loop” to “on the loop.” AI agents now handle tasks like writing and testing code, while engineers ensure quality and correctness at scale. The faster pace of production challenges traditional quality assurance and security frameworks that were built for slower cycles of development.
Businesses can no longer rely solely on human review to maintain compliance and safety. The system itself must embed those controls. Decision-makers need to rethink governance models, invest in smarter oversight mechanisms, and strengthen accountability structures without slowing innovation.
The pace of change is undeniable. The Palo Alto Networks State of Cloud Security Report 2025 shows that 53% of organizations deploy code at least once a week, 17% do so daily, and 85% believe security slows down delivery. That tension between speed and safety is exactly where leadership focus is required: ensuring that automation accelerates progress without introducing hidden risks.
For executives, the objective is clear: use AI to drive operational velocity while maintaining full visibility into how that intelligence operates. Developers are becoming orchestrators of automation, and their oversight will determine whether organizations move forward safely at scale.
Integrated DevSecOps platforms must merge speed with robust security governance
As automation expands across development pipelines, the connection between software velocity and security integrity becomes more critical. Continuous integration and deployment now happen so quickly that manual security checks cannot keep up. Tom Scully argues that the future lies in shared DevSecOps platforms where security, operations, and infrastructure teams work from a single, unified foundation.
When governance controls, inspection tools, and automation coexist on one platform, organizations can operate securely at what Scully calls “machine speed.” The benefits are immediate: issues discovered in production can be traced back to specific points in the CI/CD pipeline, controls can be applied automatically, and compliance gaps can be closed instantly.
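To make that traceability concrete, the sketch below shows one way a pipeline could record every control it runs, so that an issue found in production can be walked back to the gate that should have caught it. It is a minimal illustration, not any vendor's API: the stage names, the GateResult fields, and the no_plaintext_secrets check are all hypothetical.

```python
# Minimal sketch of CI/CD security gates with a traceable audit trail.
# Stage names, fields, and checks are illustrative, not a product API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class GateResult:
    stage: str    # pipeline stage where the control ran
    check: str    # name of the control
    passed: bool
    detail: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def no_plaintext_secrets(artifact: str) -> GateResult:
    """Example control: block artifacts that embed private key material."""
    passed = "BEGIN PRIVATE KEY" not in artifact
    return GateResult("pre-deployment", "no_plaintext_secrets", passed,
                      "ok" if passed else "private key found in artifact")

def run_gates(artifact: str,
              gates: dict[str, list[Callable[[str], GateResult]]]) -> list[GateResult]:
    """Run every control at every stage; stop promotion on first failure.

    Each result is timestamped and stage-tagged, so a problem discovered
    in production can be traced to the exact point in the pipeline.
    """
    trail: list[GateResult] = []
    for stage, checks in gates.items():
        for check in checks:
            result = check(artifact)
            trail.append(result)
            if not result.passed:
                return trail  # block promotion; the trail shows where and why
    return trail

trail = run_gates('key = "BEGIN PRIVATE KEY..."',
                  {"pre-deployment": [no_plaintext_secrets]})
print(trail[-1])  # the failed gate, with stage and timestamp for the audit log
```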
For senior leaders, this integration should be viewed as a strategic necessity. It allows every team, from engineers to cybersecurity specialists, to operate with synchronized visibility. That coordination reduces risk and supports faster innovation without compromising trust or compliance.
Palo Alto Networks’ Prisma and Cortex platforms embody this approach, providing a model for proactive, end-to-end security across the entire software lifecycle. They combine continuous visibility with automated threat detection that spans code-to-cloud workflows.
Executives should ensure their organizations adopt similar unified architectures to remove silos between development, security, and operations. In doing so, they will achieve both rapid delivery and strong resilience: an outcome every forward-looking enterprise now requires to compete effectively in an AI-driven world.
Multi‑stage security validation is essential to mitigate risks in automated deployments
Automation moves fast, but security must move faster. As AI code generation accelerates software development, every stage, from writing code to runtime, must include embedded validation. Relying on end‑stage testing is no longer enough. The most secure organizations create layers of checks across the entire DevSecOps pipeline.
Tom Scully emphasizes this concept of defense‑in‑depth, where multiple safeguards are applied at the authoring, pre‑deployment, deployment, and runtime phases. These include static code analysis, configuration validation for deployment files, and posture checks that identify misconfigurations before code is released. Once in production, automated runtime security and red‑teaming tools work to detect and remediate any remaining issues.
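As a rough illustration of that layering, the sketch below stubs out one check per phase. Each function is a hypothetical stand-in for the real tooling Scully describes (a static analyzer at authoring time, a deployment-file validator pre-deployment, a posture scanner at deployment, and runtime detection), and the specific rules are assumptions chosen only to show the shape of the layers.

```python
# Defense-in-depth sketch: one stand-in check per phase. Real pipelines
# would call dedicated tools; these rules exist only to show the layering.
import re

def authoring_check(source: str) -> list[str]:
    """Static-analysis layer: flag obviously dangerous constructs."""
    return ["authoring: eval() call found"] if re.search(r"\beval\(", source) else []

def predeploy_check(deploy_config: dict) -> list[str]:
    """Configuration validation for deployment files."""
    return (["pre-deployment: container requests privileged mode"]
            if deploy_config.get("privileged") else [])

def deploy_check(posture: dict) -> list[str]:
    """Posture check: catch misconfigurations before release."""
    if posture.get("public_ingress") and not posture.get("waf_enabled"):
        return ["deployment: public ingress without a WAF"]
    return []

def runtime_check(event: dict) -> list[str]:
    """Runtime layer: automated detection over live telemetry."""
    if event.get("outbound_dest") not in event.get("allowlist", []):
        return ["runtime: unexpected outbound destination"]
    return []

# Each layer reports independently, so a miss in one does not blind the rest.
findings = (authoring_check("eval(user_input)")
            + predeploy_check({"privileged": True})
            + deploy_check({"public_ingress": True, "waf_enabled": False})
            + runtime_check({"outbound_dest": "203.0.113.9", "allowlist": ["10.0.0.5"]}))
print(findings)
```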
This systematic approach provides resilience against the increasing complexities of AI‑augmented software. Each layer reinforces the next, making the overall system stronger and more trustworthy. For executives, this translates to reduced exposure and faster incident response when issues occur. It also ensures that as automation scales, oversight and accountability remain intact.
Leaders should prioritize investments in automated testing and toolchains that can operate in real time alongside fast‑moving AI code generation. These systems make it possible to deliver software continuously without compromising the integrity that customers and regulators expect. Managing risk through multiple lines of control is no longer optional; it is the only way to ensure that speed does not sacrifice safety.
Human oversight remains indispensable in AI‑driven coding environments
Large language models are improving rapidly, yet they still produce inconsistent results. Code quality and security remain dependent on skilled human supervision. The role of developers and architects is transforming into one of guidance and validation, where they monitor, interpret, and correct the outputs of AI systems.
Tom Scully describes this as maintaining a “human in/on the loop” approach. This means keeping humans involved not just at the end of the development process but throughout the lifecycle: observing system performance, checking for errors, and confirming that automated decisions align with established standards. This level of involvement ensures accountability, especially in environments where machine‑generated code directly affects business‑critical infrastructure.
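One hypothetical way to encode that involvement is a review gate that auto-approves only low-risk AI changes and pauses everything else for a human decision. The sketch below is an illustration under stated assumptions: the ProposedChange fields, the scoring weights, and the 50-point threshold are placeholders for whatever policy a team actually defines.

```python
# Human-on-the-loop sketch: low-risk AI changes flow through (and are
# logged); high-risk ones wait for human sign-off. All thresholds are
# illustrative policy placeholders, not a prescribed standard.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedChange:
    summary: str
    touches_critical_path: bool   # e.g. auth, payments, infrastructure
    lines_changed: int

def risk_score(change: ProposedChange) -> int:
    score = 0
    if change.touches_critical_path:
        score += 50               # critical systems always escalate
    if change.lines_changed > 200:
        score += 25               # large diffs are harder to audit
    return score

def review_gate(change: ProposedChange,
                approve: Callable[[ProposedChange], bool]) -> bool:
    """Auto-approve low-risk changes; escalate the rest to a human."""
    if risk_score(change) < 50:
        print(f"auto-approved (logged for audit): {change.summary}")
        return True
    return approve(change)        # a human decides, and the decision is recorded

change = ProposedChange("AI refactor of payment retry logic",
                        touches_critical_path=True, lines_changed=340)
review_gate(change, approve=lambda c: False)  # stand-in for a real reviewer
```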
For executive teams, the message is straightforward: automation should not displace expertise. Skilled developers must continue to lead the process, ensuring that every AI‑driven cycle delivers secure, compliant, and reliable results. Human‑machine collaboration allows teams to capture the efficiency of automation while preserving human judgment where it matters most.
Decision‑makers should view this oversight not as a bottleneck but as a safeguard for innovation. The most successful organizations will be those that cultivate technical teams capable of supervising AI tools effectively, balancing autonomy, verification, and the ability to act when systems deviate from expected performance.
Automated feedback loops and scoring systems can enhance the quality and security of AI‑generated code
AI doesn’t perfect itself. The quality of AI‑generated code depends on continuous evaluation and improvement, which requires feedback loops and automated scoring mechanisms built directly into the development process. Organizations need systems that assess each output for accuracy, compliance, and security before it enters production.
Tom Scully outlines an approach that measures code outputs against defined security and quality standards. These automated loops can score code for vulnerabilities, check for policy violations, and evaluate whether it meets internal architectural requirements. When results fall short, prompts can be refined and the model retrained, creating a process of consistent advancement rather than static performance.
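A minimal sketch of such a loop follows, assuming a generic generate callable standing in for any code model; the scoring rules, the 80-point threshold, and the prompt-refinement step are illustrative assumptions rather than a specific product's behavior.

```python
# Score-and-refine sketch: evaluate each generation, feed findings back
# into the prompt, and never release output that misses the bar. The
# rules and threshold here are illustrative, not a real scanner.
from typing import Callable

def score_output(code: str) -> tuple[int, list[str]]:
    """Score generated code against simple security and quality rules."""
    score, issues = 100, []
    if "password" in code.lower():
        score -= 40
        issues.append("possible hardcoded credential")
    if "except:" in code:          # bare except hides failures
        score -= 20
        issues.append("bare except clause")
    return score, issues

def generate_with_feedback(generate: Callable[[str], str], prompt: str,
                           threshold: int = 80, max_rounds: int = 3) -> str | None:
    """Regenerate with refined prompts until the output clears the bar."""
    for _ in range(max_rounds):
        code = generate(prompt)
        score, issues = score_output(code)
        if score >= threshold:
            return code            # release candidate
        # feed the findings back so the next attempt avoids them
        prompt += "\nAvoid: " + "; ".join(issues)
    return None                    # refuse to ship code that never clears the bar

result = generate_with_feedback(lambda p: 'print("hello")', "write a greeting")
print(result)
```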
This oversight structure reduces risk by ensuring that risky or non‑compliant code never reaches the release stage. Scully also stresses implementing guardrails to detect and block issues such as personally identifiable information exposure, inadequate permissions, or toxic content before deployment. Automated checks provide a real‑time warning system that keeps the organization aligned with its compliance obligations.
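The sketch below shows what such a guardrail layer might look like in its simplest form: a set of blocking patterns applied before deployment. The patterns are deliberately crude stand-ins (real scanners for PII, permissions, and content issues are far more sophisticated), and every rule shown is an assumption for illustration.

```python
# Guardrail sketch: hard pre-deployment blocks for the issue classes
# named above. Patterns are simplified stand-ins for real scanners.
import re

GUARDRAILS = {
    "pii_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "pii_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "overbroad_permissions": re.compile(r"chmod\s+777|Action:\s*\*"),
}

def guardrail_check(artifact: str) -> list[str]:
    """Return the name of every guardrail the artifact trips."""
    return [name for name, pattern in GUARDRAILS.items()
            if pattern.search(artifact)]

violations = guardrail_check('contact = "admin@example.com"  # chmod 777 /srv')
if violations:
    # a tripped guardrail blocks the release outright, before deployment
    raise SystemExit(f"blocked: {violations}")
```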
For executives, integrating these feedback systems supports two priorities simultaneously: operational efficiency and risk management. The constant scoring and refinement process strengthens model reliability and helps maintain the trust of clients and regulators. Organizations that implement measurable feedback systems gain a sustainable advantage because they continuously improve both their technical output and their governance posture.
Executive‑level governance and clear AI security standards are essential for safe innovation
As AI takes on a central role in software engineering, governance must evolve with it. Standards and accountability need to be driven from the board level, not handled as an operational afterthought. Tom Scully advises executive teams to define company‑wide policies for AI use, link them to recognized frameworks, and ensure those policies are enforced through consistent review.
Frameworks such as ISO 27001 and the U.S. NIST Risk Management Framework offer established methods to maintain security and compliance alignment. These frameworks help organizations identify risks, design controls, and measure how AI systems meet both internal and regulatory requirements. Establishing these practices at the highest level gives organizations clarity about which models and tools are authorized, how assessments are conducted, and what happens when a vulnerability is detected.
Scully also emphasizes the need for continual monitoring, tracking runtime posture, maintaining updated model inventories, and documenting ongoing risk assessments. For C‑suite leaders, this translates into visibility and assurance that innovation is managed responsibly. It allows AI‑enabled teams to push ahead with automation and product development while keeping full alignment with corporate and regulatory standards.
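As a small illustration of what maintaining an updated model inventory could mean in practice, the record sketched below tracks approval status and assessment age for each model in use. The field names and the 90-day review window are assumptions; real programs would map such records to their ISO 27001 or NIST RMF control documentation.

```python
# Model-inventory sketch: one record per authorized model or tool.
# Field names and the 90-day window are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    name: str                      # model or tool identifier
    provider: str
    approved: bool                 # authorized under company AI policy
    last_risk_assessment: date
    open_findings: list[str] = field(default_factory=list)

    def assessment_overdue(self, today: date, max_age_days: int = 90) -> bool:
        """Flag records whose periodic risk assessment has lapsed."""
        return (today - self.last_risk_assessment).days > max_age_days

inventory = [ModelRecord("codegen-assistant", "ExampleAI", approved=True,
                         last_risk_assessment=date(2025, 1, 15))]
overdue = [m.name for m in inventory if m.assessment_overdue(date.today())]
print(f"models needing reassessment: {overdue}")
```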
Safe innovation depends on structured governance. When boards commit to transparent policies, consistent oversight, and enforceable standards, the organization gains freedom to innovate without compromising its ethical and security obligations. This balance between speed, oversight, and responsibility defines the next phase of growth for enterprises adopting AI in their software delivery pipelines.
Key takeaways for decision-makers
- AI is redefining developer roles: Automation is moving developers from code creators to system overseers. Leaders should invest in training and governance frameworks to manage AI‑driven development responsibly while preserving oversight and security standards.
- Unified DevSecOps platforms drive secure speed: Integrating security, operations, and infrastructure on one platform enables faster, safer deployments. Executives should back unified systems that merge automation, inspection, and governance to operate securely at scale.
- Layered validation protects against rapid‑fire risks: Security must be embedded at every development stage. Leaders should prioritize multi‑stage checks and automated QA systems that catch vulnerabilities before deployment to safeguard both speed and compliance.
- Human oversight remains non‑negotiable: AI can generate code rapidly but still needs expert supervision. Executives should ensure teams maintain hands‑on control, verifying outputs and maintaining accountability throughout the pipeline.
- Automated feedback loops improve output quality: Scoring and refining AI outputs through continuous feedback strengthens reliability and compliance. Leaders should invest in automated guardrails that block risky content and optimize coding models.
- AI governance starts at the board level: Safe innovation requires top‑down policies aligned with ISO 27001 and NIST frameworks. Executives should define approved tools, enforce oversight, and monitor posture to combine rapid innovation with disciplined control.