AI initiatives often neglect Agile values by emphasizing tools over people

Most organizations bringing artificial intelligence into their systems aren’t doing it right. They’re jumping headfirst into shiny tools and platforms while skipping the fundamentals: what problems are we solving, and who are we solving them for? The first value in the Agile Manifesto is simple: individuals and interactions over processes and tools. But when it comes to AI, that value is often ignored.

There’s a reason so many AI projects underdeliver. The mindset gets stuck on the latest model release or infrastructure upgrade and forgets the user. You can’t build transformative products on top of misaligned foundations. If the people on your teams aren’t enabled to explore, iterate, and speak up, AI efforts become showpieces rather than solutions. That’s not scalable, and it’s definitely not sustainable.

Business leaders say they understand AI’s significance. In fact, 84% acknowledge that AI will significantly impact their business models. But only 14% feel fully ready. That’s not just a readiness gap; it’s a clarity gap in how to bridge vision and execution. If your teams aren’t talking to each other, and if your systems are optimized for sleek demos rather than real problems, the tech won’t move the needle.

You don’t win with AI by chasing the tools. You win by enabling people, setting clear priorities, tightening feedback loops, and building a culture that aligns execution with outcomes. That’s how real adoption happens.

Miscommunication and narrow focus drive AI project failures

AI can’t fix problems you don’t understand in the first place. That’s where most of the failure begins: stakeholders aren’t aligned on goals, and developers end up working toward the wrong targets. It’s a coordination breakdown that leads to fragile or irrelevant AI models being deployed into production. No surprise when they don’t deliver.

RAND interviewed 65 experienced engineers and data scientists. The finding? Projects often fail not because the tech isn’t good enough, but because organizations push forward with poor data, weak deployment infrastructure, and AI systems optimized for irrelevant metrics. In some cases, models are deployed that don’t even fit into the broader business processes. That’s a waste of talent and capital.

Executives need to see AI not as a standalone investment, but as a systemic one. If you’re not involving key users and stakeholders from the beginning, if you’re not refining the actual problems AI should tackle, you’ll fall into the trap of automating noise instead of solving anything meaningful. Alignment is critical. Clarity at the top sets the foundation for successful AI at every layer.

You can’t brute force AI success with money or headcount. You get there by fostering transparency, asking better questions, and embedding AI into workflows that matter. That’s how these technologies scale beyond the pilot phase.

A disconnect between leadership expectations and developers’ realities

Let’s be direct. Most leaders overestimate the real-world impact of AI on developers’ productivity. They think dropping in AI tools, especially code generation, will solve velocity and satisfaction issues. That’s not happening. Two-thirds of developers have reported they haven’t seen any meaningful productivity gains from AI. That’s not a minor misalignment; that’s strategic drift.

Leaders want to help developers thrive, but they’re investing in the wrong areas. AI tools are being built and deployed without a clear understanding of what developers actually need. When developers are most productive, it’s not because something else is writing code for them. It’s because they have uninterrupted time to focus on coding, without searching for missing documentation or fixing accumulating technical debt.

Andrew Boyagi, Head of DevOps Evangelism at Atlassian, puts it clearly: “The leaders want to empower their devs to be happy, to be productive. But when they don’t understand how to do that, it’s a huge problem.” The problem isn’t attitude; it’s perspective. The focus shouldn’t be on replacing developer time; it should be on increasing its quality.

Executives need to start listening to their teams. AI can create leverage, but only when it’s applied in the right places. Ask what developers are spending their time on. Identify friction. Then apply AI where it removes those blockers. Everything else is just noise.

An overreliance on AI-generated code can compromise software quality and stability

The increased use of generative AI in software development has changed how fast teams can write code. That part is clear. What’s less discussed, but more important, is what that speed does to quality. Without strong automated testing and code review systems already in place, AI-generated code often gets pushed without proper validation. That leads to higher failure rates, slower recovery, and reduced long-term maintainability.

The data backs this up. The 2024 Accelerate State of DevOps report shows that AI adoption correlates with a 7.2% drop in delivery stability and a 1.5% reduction in throughput. That’s not marginal; it’s regression. Teams move faster, but fail harder and recover slower. If your systems aren’t stable, your product suffers. And worse, your team burns out fixing what shouldn’t have broken.

There’s also a deeper issue beyond failure metrics. AI-generated code is more difficult for humans to understand and maintain. When systems grow, that becomes a liability. Developers can’t fix issues they don’t fully grasp. Over time, the hidden cost of unreadable or unclear code turns into tech debt, which slows everything down.

If you want to scale AI in engineering, focus first on code quality and maintainability. Make sure your teams have the ability to review, test, and audit what AI produces. Let AI assist, but not decide. Quality doesn’t come from automation alone; it comes from skilled teams working with tools they trust. That’s how you build momentum that lasts.

AI shows promise in addressing developers’ documentation and code comprehension challenges

This is where AI is already creating real utility. Developers aren’t primarily losing time writing code; they’re losing it trying to understand outdated systems, hunting for documentation, and navigating technical debt. When AI is applied to these bottlenecks, it delivers measurable value. It reduces friction.

According to the Atlassian-DX DevEx survey, 69% of developers waste about eight hours a week due to inefficiencies like poor documentation and lack of system context. That’s an entire workday gone, every week. This is where AI can step in directly. Tools for automated, AI-generated documentation and code summarization are already making complex codebases easier to work with.
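To make that concrete, here is a minimal sketch of what AI-assisted code summarization can look like in practice. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and output file are illustrative placeholders, not a recommendation of any particular vendor or setup.

```python
# Minimal sketch: generate a plain-language summary for each Python module in a repo.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the
# environment; model, prompt, and output file name are illustrative choices.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Summarize this module for a developer who has never seen it: "
    "its purpose, key functions, and anything surprising. Be concise."
)

def summarize_module(source: str) -> str:
    """Ask the model for a short, human-readable summary of one source file."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whatever model your organization has approved
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

def document_repo(repo_root: str, out_file: str = "CODE_OVERVIEW.md") -> None:
    """Write a Markdown overview containing one summary per Python file."""
    sections = []
    for path in sorted(Path(repo_root).rglob("*.py")):
        summary = summarize_module(path.read_text(encoding="utf-8"))
        sections.append(f"## {path}\n\n{summary}\n")
    Path(out_file).write_text("\n".join(sections), encoding="utf-8")

if __name__ == "__main__":
    document_repo(".")
```

The specifics matter less than the pattern: the output lands where developers already look for context, and a human still reviews what the model writes before it becomes the team’s documentation.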

The 2024 DORA report supports this shift. It found that the only statistically significant positive outcome from AI adoption so far was a 7.5% improvement in documentation quality. That’s a big deal. It means that when AI is directed at clarity instead of quantity, it amplifies team performance.

C-suite leaders need to take this seriously. Stop thinking of AI as a productivity multiplier in abstract terms. Think of it as a tool to eliminate waste on the real tasks your teams deal with daily. AI that helps developers find the right information, understand unfamiliar code, and write documentation is worth more than another tool generating new lines of code. Solve the pain points, and the productivity will follow.

Psychological and cultural obstacles restrain effective AI experimentation

There’s a problem we’re not talking about enough. Developers are using AI tools, improving their productivity, but staying quiet about it. Why? They’re concerned that showing a performance boost might lead others to question whether they’re now expendable. That fear keeps progress under the radar.

Patrick Debois laid it out at DevOpsDays London. He shared that people are deliberately not reporting productivity gains from AI because they don’t want to draw attention to themselves. That’s not a tech issue; it’s a cultural one. And it holds back innovation at the team level.

This hesitation means companies are missing opportunities. If developers are hiding successful experiments with AI, organizations lose visibility into what’s actually working. That puts executives in a fog, making it hard to know where to invest or scale.

Leaders need to fix this, not with mandates, but by shaping the environment. AI adoption should feel safe, not risky. Teams need the freedom to test, share results, and iterate without worrying that success will hurt them. When people feel trusted, they accelerate growth. That’s how breakthrough tools actually make it into your workflow and stick.

Agile methodologies can support AI development if they are properly adapted

Agile works when you use it appropriately. The problem is that most organizations treat Agile like a set of rigid templates rather than a flexible framework. That’s especially limiting when applied to AI projects. AI development doesn’t follow the same patterns as traditional software engineering. The work is more experimental, the deliverables less predictable, and the milestones harder to quantify in short cycles.

One in five participants in a RAND study pointed directly to “rigid interpretations of Agile” as a barrier to AI success. That feedback isn’t surprising. Agile principles weren’t originally built to account for how data science and AI evolve. Machine learning models often change scope based on new data, and experimenting with these models doesn’t fit neatly into two-week sprints.

That doesn’t mean Agile is obsolete for AI teams. It just means it needs to be applied with more awareness. For example, Snigdha Satti, a business analyst, shared how the data science team at News UK struggled to plan their work using standard Agile routines. They had too many tasks, shifting priorities, and no clear backlog management. By breaking that cycle and adapting Agile to their needs (clarifying business goals, introducing focused standups, and refining collaboration), they created direction and focus.

Executives need to step in here. Don’t ask AI teams to fit into outdated Agile molds that slow them down. The goal isn’t to enforce process; it’s to support velocity without sacrificing clarity. AI work should stay iterative and collaborative, but with tailored planning cycles and space for research. If you want AI projects to succeed long-term, your development practices need to recognize that AI behaves fundamentally differently from traditional software systems. Be disciplined, but don’t be rigid.

The bottom line

Making AI work at scale isn’t about having the best models or the biggest budget. It’s about alignment between teams, tools, processes, and business goals. Most AI failures aren’t technical. They’re organizational. Miscommunication, misused frameworks like Agile, and leadership assumptions are what derail progress.

AI will keep evolving fast. But if your internal culture, workflows, and decision-making don’t evolve with it, the tech won’t translate into real impact. You need strategy connected to execution, not just ambition paired with investment.

The takeaway for leadership is clear: Listen to your teams. Build systems that prioritize clarity, trust, and adaptability. Let AI solve the right problems, not just the visible ones. And if an existing process like Agile isn’t working for your AI teams, adapt it. Flexibility backed by focus is what drives outcomes.

Your edge won’t come from adopting AI. It’ll come from using it better than those who adopt it for the headlines.

Alexander Procter

June 25, 2025
