Disconnect between executives and developers on AI adoption

Most executives are bullish on AI. That’s expected, and to a degree, justified. The promise is clear: better performance, faster delivery, reduced costs. But when fewer than half of employees agree that AI is actually working for them, you’ve got a real signal worth paying attention to.

According to recent data reported by Axios, 75% of company leaders think their AI rollout has been successful. But only 45% of employees agree. That’s a massive disconnect. It means you could be scaling tools that are actually slowing people down or adding friction. In engineering especially, top-down AI mandates aren’t landing well. Developers are being handed tools that don’t fit their workflows or solve their actual problems. That’s not productivity; that’s noise.

We’re not optimizing AI adoption if policies are being written in boardrooms without input from those doing the hands-on work. Maintaining performance expectations while failing to equip teams with useful, context-aware tools is a fast way to lose valuable talent, or worse, ship bad code.

If you’re in the C-suite, here’s the fix: close the gap. Spend real time understanding how your engineering teams work. Ask what’s broken, and listen when they tell you. Let your developers participate in the strategy. That’s the only way you’ll get the real benefits of AI at scale.

Competitive pressure drives AI adoption despite implementation challenges

Speed matters. No question. But moving fast without control isn’t leadership; it’s reaction. A lot of organizations are chasing AI adoption because they feel like they have to. Not because they’re ready.

Executives at companies like Meta, Salesforce, and AWS openly say that AI reduces the need for large development teams. It’s not subtle. The cost efficiency is clearly appealing. But cost-cutting is not a strategy. It’s a by-product of doing something well.

GitHub Copilot has been adopted by 77,000 organizations since late 2021, a figure Microsoft shared in its Q4 earnings. It’s not hype. It’s happening. And at Y Combinator, 25% of startups in the current cohort are building products with AI-generated code as the foundation. They’re not dabbling; they’re betting their entire engineering stack on it.

Simon Lau, Engineering Manager at ChargeLab, summed it up clearly: if your competitors are using AI and you’re not, you’re falling behind. That’s the reality now. So yes, smart executives are moving quickly. But speed without design creates complexity. And too much complexity kills momentum.

Adopt AI, but do it with clarity. Know what you’re solving for. Just chasing the tech won’t help; purposeful implementation will. Make sure your teams are actually equipped to succeed with the tools, not just expected to use them. That’s how you stay competitive without burning through your workforce along the way.

Declining developer confidence due to technical limitations

AI tools are good. But they’re not magic. And they’re definitely not infallible. Engineering teams are realizing this quickly. There’s a point where promise meets reality, and the reality is that today’s AI coding tools still need a lot of hand-holding.

Just look at the numbers. In Stack Overflow’s 2024 survey of 65,000 developers, 81% said AI tools improved productivity, and 58% reported increased efficiency. Good headline. But zoom in. A separate Harness survey showed 67% of engineering leaders now spend more time debugging AI-generated code than before. Another 59% say AI tools disrupt deployments at least half the time. Even worse, 68% report increased time spent fixing security issues introduced by AI-generated code.

The pattern is consistent: AI tools are producing more code, but too often it’s the wrong code. Developers still need to double-check outputs. They need to patch gaps. And when that code reaches production, the surface area for failure grows. Technical debt builds up fast when invalid logic slips through untested.

This is where developer trust starts to break down. When the tools are off target, people stop using them. Or worse, they use them reluctantly and spend more time cleaning up than coding.

If your teams are reporting more bugs, slower deploys, or extra QA cycles, listen closely. Don’t assume more output equals more progress. High volume doesn’t mean high value. Executives need to focus on quality over activity. Scale the tools that actually reduce complexity, not ones that just generate more of it.

Mandates focused on AI usage metrics are counterproductive

Metrics are important. But not all metrics are meaningful. Right now, too many leaders are measuring AI adoption by shallow indicators: code acceptance rates, frequency of AI use, or total lines of AI-generated code. These metrics say nothing about whether the tools are helping.

Developers are smart. They know when a tool isn’t actually making their work better. And when leadership starts pushing dashboards that reward superficial usage, the message is clear: use the tool, no matter what. Even if it breaks things or slows you down. That kills morale and results in poor engineering.

Plenty of developers are already speaking out. Some report being tracked through weekly usage reports, or even ranked on leaderboards by how much AI they use. That’s not a signal of success; it’s a red flag. When individual contribution is judged by how often someone uses a flawed tool, you’re not improving performance. You’re just enforcing it.

Most of these mandates come from outside the engineering organization. Executives often impose OKRs or usage targets without a functional understanding of the tools or their impact. This gap in understanding creates friction. It turns AI from an asset into an obligation.

C-suites need to step back and reevaluate what they’re encouraging. The goal isn’t more AI usage. The goal is better code, cleaner output, higher reliability. The right question isn’t “How often are we using AI?” It’s “Is it making the work better, faster, or safer?” If it’s not, then the metric doesn’t matter.
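
To make this concrete, here’s a minimal sketch in Python of what impact-based evaluation could look like next to a usage dashboard. Everything in it is hypothetical: the Change record, its fields, and the sample data. The point is the contrast between a usage rate, which only answers “how often,” and outcome metrics that answer “better, faster, safer.”

    from dataclasses import dataclass

    @dataclass
    class Change:
        """One production change. All fields are hypothetical illustrations."""
        used_ai: bool          # did AI-generated code go into this change?
        caused_incident: bool  # did the deploy fail or get rolled back?
        rework_hours: float    # time spent debugging or fixing after merge

    def ai_usage_rate(changes: list[Change]) -> float:
        """The vanity metric: 'How often are we using AI?'"""
        return sum(c.used_ai for c in changes) / len(changes)

    def change_failure_rate(changes: list[Change]) -> float:
        """An impact metric: 'Is the work safer?'"""
        return sum(c.caused_incident for c in changes) / len(changes)

    def avg_rework_hours(changes: list[Change]) -> float:
        """An impact metric: 'Is the work actually faster, end to end?'"""
        return sum(c.rework_hours for c in changes) / len(changes)

    changes = [
        Change(used_ai=True,  caused_incident=False, rework_hours=0.5),
        Change(used_ai=True,  caused_incident=True,  rework_hours=6.0),
        Change(used_ai=False, caused_incident=False, rework_hours=1.0),
        Change(used_ai=True,  caused_incident=False, rework_hours=0.0),
    ]

    print(f"AI usage rate:       {ai_usage_rate(changes):.0%}")        # looks great on a dashboard
    print(f"Change failure rate: {change_failure_rate(changes):.0%}")  # the question that matters
    print(f"Avg rework:          {avg_rework_hours(changes):.1f}h")    # ditto

Splitting the same records by used_ai takes this one step further: you can compare failure rates and rework with and without AI, instead of just celebrating adoption.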

Empowering developers enhances AI adoption

AI tools don’t fail because of their technology; they fail because of how they’re introduced. When developers are told what tools to use, how to use them, and when to use them without being part of the decision, adoption breaks down.

ChargeLab handled this differently, and it worked. Their engineering team, led by CTO Ehsan Mokhtari and Engineering Manager Simon Lau, avoided heavy mandates. They didn’t enforce one specific AI tool across the team. Instead, they created space for developers to experiment. Each engineer was free to choose what worked best for their tasks, whether it was GitHub Copilot, Windsurf, ChatGPT, Claude, or Cursor. The result? All 40 developers at the company now use AI coding tools daily, with an internal survey showing roughly a 40% increase in productivity.

This is how you get real adoption. Give technical people the autonomy to adapt tools to fit their own context. Let them lead within their domain. It turns out they know what they’re doing. They also know when something is useful and when it’s not.

Rajesh Jethwa, CTO of Digiterre, said it clearly: “People closest to the problems are the ones who have all the context, and context is what really matters when it comes to AI tools.” That’s a principle worth following. If your developers don’t have the freedom to shape how AI fits into their stack, you’re holding back the impact.

If your goal is higher efficiency, better output, and stronger team alignment, start by letting developers help define how to get there. Support from the top matters. But when that support comes with trust and flexibility, it actually works.

Cultural alignment beats forceful implementation

You don’t get long-term gains from AI by forcing it on teams. Progress follows alignment. Leadership that understands how engineering teams operate on the ground creates better outcomes.

At ChargeLab, Mokhtari didn’t just authorize AI adoption; he engaged directly with the tools. He built credibility by understanding what the developers were dealing with. That matters. Developers respect leaders willing to put time into the tools, not just the reports. Mokhtari also set a clear company goal of saving $1 million through effective AI use, but left room for each team to drive their own success based on their work. That’s smart leadership: define the destination, but leave space for tactical autonomy.

This kind of cultural approach doesn’t happen by accident. It happens when leaders are deliberate. Not with slogans or checklists, but by creating an environment where people can test, iterate, and share what works.

As Mokhtari pointed out: “You cannot really pull people forward and make them innovative. You have to foster the culture.” That means giving teams the right tools, but also the right conditions: time, space, and internal trust. When developers feel ownership in the process, they contribute more intelligently, and AI adds value without friction.

For executives, here’s the point: culture scales. Mandates don’t. If you want AI to drive performance in your organization, start with the people building the future you want. Everything else follows.

Main highlights

  • Misalignment weakens AI impact: Leaders see AI rollouts as successful, but fewer than half of employees agree. To avoid operational friction, executives should align strategy with on-the-ground engineering needs and feedback.
  • Pressure-led adoption risks failure: Companies accelerating AI adoption out of fear of falling behind often overlook readiness and execution quality. Leaders should focus on structured implementation, not just competitive urgency.
  • Developer trust in AI is fading: Frequent errors, time-consuming debugging, and rising security issues from AI tools are eroding developer confidence. Executives should prioritize tool reliability and fit for complex workflows before scaling.
  • Usage metrics don’t equal value: Tracking AI usage through code volume and acceptance rates ignores whether tools actually help. Leaders should shift from vanity metrics to impact-based evaluation that reflects code quality and real efficiency gains.
  • Autonomy drives better adoption: Teams that are trusted to choose and experiment with their own AI tools report stronger adoption and measurable productivity boosts. Executives should empower teams to tailor AI integration to their workflows.
  • Culture beats control in AI rollouts: Mandates from disconnected leadership fail, while open, trust-based environments yield higher engagement and innovation. Leaders should invest in culture, collaboration, and clear but flexible goals for sustainable results.

Alexander Procter

June 26, 2025

9 Min