Full automation of coding by AI is at least five years away

We’re not phasing out human developers anytime soon. A recent report from LessWrong shifts the forecast for fully autonomous AI-driven coding to around February 2032. That’s a meaningful delay when you consider the previous expectation had been 2027 or 2028. So, if you’re a CTO or CIO making decisions today, this gives you room to breathe, but not to sit still. It means human engineers remain central to your product cycle for the rest of the decade.

The definition used here is specific: a “superhuman coder” is an AI system that can outperform the best engineers on your team by a factor of 30 in speed, while using only 5% of the computational power those engineers normally use. This isn’t narrow automation of snippets or templated scripts. We’re talking about radical autonomy.
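To make that bar concrete, here’s a minimal sketch of the definition as a two-condition check, in Python. The 30x speed multiple and 5% compute fraction come straight from the forecast’s definition; the function name and inputs are purely illustrative.

```python
def is_superhuman_coder(speed_multiple: float, compute_fraction: float) -> bool:
    """True only if a system clears both bars at once: 30x the speed of the
    best human engineers while using at most 5% of their usual compute."""
    return speed_multiple >= 30.0 and compute_fraction <= 0.05

# Both conditions must hold together: a system 40x faster that still needs
# 20% of the engineers' compute doesn't qualify.
assert is_superhuman_coder(40.0, 0.20) is False
assert is_superhuman_coder(35.0, 0.04) is True
```

The point of the dual threshold is that speed alone isn’t enough; the system also has to be radically cheaper to run.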

This delay reflects how tough the real-world variables are. AI isn’t scaling its efficiency at the pace it was a few years ago. Machine learning’s bottlenecks around compute, energy, talent, and capital are now pushing back hard. While advances in transformer models and automation are real, their practical integration into complex software tasks hasn’t been linear. That’s a big reason timelines are moving.

For C-suite leadership, the message is simple: plan with AI, not around it. Developers will be at the center of software execution for years, supported, not replaced, by intelligent systems. So invest in your people while building out AI capabilities in ways that elevate human performance. The tech is moving fast, but not fast enough to outpace good strategic leadership focused on hybrid growth.

Revised forecasts stem from new modeling and increased pessimism regarding rapid AI advancement

The optimism around AI’s self-improvement curve hit a wall. LessWrong’s new forecast isn’t just a new guess; it’s the result of deliberate rethinking. The underlying model switched to what they’re calling “capability benchmark trend extrapolation”: using current AI performance on standard tests to project future capability, while recognizing that performance is flattening in some key areas.
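To make the method concrete, here’s a toy version of that kind of extrapolation, assuming (as METR’s time-horizon work does) that the length of task an AI can complete grows roughly exponentially. Every data point below is hypothetical, chosen only to show the mechanics, not the forecast’s actual inputs.

```python
import math

# Hypothetical benchmark history: year -> task horizon the best model can
# complete (in hours). These numbers are made up for illustration.
observations = {2023.0: 0.3, 2024.0: 1.0, 2025.0: 4.0}

# Fit a log-linear trend: log2(horizon) = slope * year + intercept.
years = list(observations)
logs = [math.log2(h) for h in observations.values()]
n = len(years)
mean_y, mean_l = sum(years) / n, sum(logs) / n
slope = sum((y - mean_y) * (l - mean_l) for y, l in zip(years, logs)) / \
        sum((y - mean_y) ** 2 for y in years)
intercept = mean_l - slope * mean_y

def crossing_year(target_hours: float) -> float:
    """Year when the extrapolated trend reaches a target task horizon."""
    return (math.log2(target_hours) - intercept) / slope

# e.g., when the trend line crosses a month-long (~160 work-hour) horizon
print(f"Projected crossing: {crossing_year(160.0):.1f}")
```

The fragility is visible in the structure itself: shift the slope slightly, or let the trend flatten, and the crossing date moves by years. That’s exactly the flattening the revised forecast now weighs.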

Before this, the mood in the AI R&D space was more aggressive. Models were improving fast, compute resources seemed infinite, and automation’s upward momentum looked unstoppable. But those assumptions didn’t hold up under closer analysis. The revised model now accounts for something more sober: slowdowns in compute growth, chip supply concerns, rising energy constraints, and real limits on research automation.

Specifically, the LessWrong team built this around METR’s time horizon suite (METR-HRS), the most comprehensive benchmarking tool available today for estimating how much compute is required to reach AGI. They point out that even with the best benchmarks, real-world capability lags behind test performance. They’ve projected a one-year slowdown in model training efficiency and a two-year slowdown in AI research automation, driven by structural bottlenecks in innovation and diminishing returns in software gains.
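A back-of-envelope way to see how those slowdown assumptions push a milestone out: stretch the assumed doubling time and compare arrival estimates. The numbers below are illustrative stand-ins, not the forecast’s actual parameters.

```python
import math

def years_to_milestone(current_hours: float, target_hours: float,
                       doubling_time_years: float) -> float:
    """Years until capability reaches the target horizon, given how long
    each doubling of the task horizon takes."""
    doublings_needed = math.log2(target_hours / current_hours)
    return doublings_needed * doubling_time_years

# Hypothetical: horizon grows from 4 hours to 160 hours of work.
baseline = years_to_milestone(4.0, 160.0, doubling_time_years=0.5)
slowed = years_to_milestone(4.0, 160.0, doubling_time_years=0.8)  # slower doublings
print(f"Baseline: {baseline:.1f} yrs, slowed: {slowed:.1f} yrs, "
      f"delay: {slowed - baseline:.1f} yrs")
```

A modest stretch in the doubling time compounds across every remaining doubling, which is how one- and two-year slowdowns in the inputs turn into a multi-year shift in the headline date.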

If you’re leading a company right now, don’t build your strategy off best-case AI scenarios. These models are highly sensitive to their own assumptions. In operational terms, build with realism. AI is improving, yes, but not every breakthrough is scalable, not every experiment yields impact, and not every forecast deserves blind trust. That’s how winning companies maintain momentum in uncertain terrain: they keep a steady hand on both technology and execution.

AI development toward AGI is now framed as a series of incremental stages

The path to artificial general intelligence (AGI) has been redefined. It’s not a binary leap from today’s AI to full cognitive parity with humans. LessWrong’s latest model lays out a progression, measured and structured, through distinct capability milestones. First comes the “superhuman coder,” followed by the “superhuman AI researcher,” then systems that outperform top human specialists across almost every cognitive task. The final destination is artificial superintelligence (ASI), where capability no longer mirrors the human ceiling but surpasses it by orders of magnitude.

This phased roadmap shifts how we should think about enterprise readiness. For executives steering transformation programs, expecting AGI to show up all at once is unproductive. The rollout will be tiered, and these stages bring very different challenges and opportunities. A “superhuman coder,” for example, doesn’t imply the collapse of engineering teams, but it changes how efficiency, oversight, and architectural control need to be structured.

Daniel Kokotajlo, a researcher who publishes on LessWrong, reinforces the urgency: “AGI arriving in the next decade seems a very serious possibility.” His team found that many AI researchers already rely on AI tools to accelerate their work, even if the full scope of their impact remains unclear. What’s emerging isn’t speculative: there’s real movement, but it’s stratified. Capability doesn’t need to be absolute to be transformational.

If you’re preparing for the next five to ten years, build your AI strategy on this incremental structure. Each level of AI ability offers leverage, if you’re ready to integrate it properly. That starts with understanding what each stage means for your talent structure, your R&D processes, and the way you frame long-term business investment. Large system overhauls won’t come from one sudden shift, but from mastering each capability increment as it arrives.

Benchmark-based predictions are useful but have serious limitations

There’s no shortage of benchmarks in the AI world. But betting everything on what test scores say about future capability is a mistake. The LessWrong team built its forecast primarily on METR-HRS, a leading benchmark suite designed to track how much compute is needed to reach AGI-level performance. These metrics serve a purpose, but the researchers are clear: benchmarks are proxies, not guarantees.

As AI models evolve, there’s a consistent risk that performance on standardized tests won’t translate to sustained effectiveness in real-world systems. Systems that pass benchmarks may still fall short in complex, unpredictable multi-repo software environments. LessWrong’s forecast builds that uncertainty in. They acknowledge that while current trends help shape expectations, they can, and often do, break. That variability makes accurate prediction a moving target.

For C-suite leaders, especially CTOs and CIOs, the working principle should be this: use benchmarks to track momentum, not to set hard expectations. If you’re planning deployments or product shifts based purely on when an AI model might technically perform at a certain level, you’re exposing your roadmap to avoidable fragility.

AI forecasting is volatile by nature. Benchmarks like METR-HRS are valuable tools, but they are not final indicators of system-wide reliability. Factor in trial projects, edge-case behavior, and real-world integration friction. Run AI efforts with tight feedback loops and course-correction options. That’s how you stay adaptable in a field where what’s “near-future” rarely arrives in a straight line.

AI’s future role in enterprises will focus on augmentation rather than replacing human workers

AI isn’t making humans obsolete anytime soon, and that’s not a shortcoming. It’s the reality executives need to operate in. Right now, the immediate opportunity for C-suite leaders isn’t full automation. It’s acceleration. What’s already clear is that AI can compress workflows, shorten iteration cycles, and support complex decision-making under human oversight. LessWrong’s model and broader industry activity both suggest the long-term shift could eventually reduce certain categories of knowledge work, but we’re not there yet.

Enterprises should be focused on structured integration, not disruption. That means bounded pilots, internal tooling sharpened by AI, and well-managed guardrails for autonomy. Most importantly, auditability and accountability systems need to evolve in parallel. That’s where differentiation happens: companies that scale AI as a strategic layer of capability inside disciplined systems pull ahead.
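As one concrete illustration of the auditability point, here’s a minimal sketch of a human approval gate for AI-generated changes, with every decision written to an append-only log. The function, log format, and field names are assumptions for illustration, not any specific vendor’s API.

```python
import json, time

AUDIT_LOG = "ai_change_audit.jsonl"  # hypothetical append-only audit trail

def review_ai_change(change_id: str, diff: str, reviewer: str,
                     approved: bool) -> bool:
    """Record a human decision on an AI-generated change; only approved
    changes are allowed to proceed toward deployment."""
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "change_id": change_id,
            "reviewer": reviewer,        # a named human stays accountable
            "approved": approved,
            "diff_preview": diff[:200],  # enough context to reconstruct intent
            "timestamp": time.time(),
        }) + "\n")
    return approved

if review_ai_change("chg-001", "<patch body>", reviewer="alice", approved=True):
    pass  # deployment logic would run here, never before the human decision
```

The design choice worth noting: accountability lives with a named reviewer at the moment of approval, so the audit trail answers “who decided” as readily as “what changed.”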

Sanchit Vir Gogia, Chief Analyst at Greyhound Research, made the call clearly: enterprises shouldn’t debate whether AI can code, it already can. The focus now is on using AI to compress cycle times while keeping humans responsible for outcomes. Gogia also warned executives not to misread the timeline: the near future isn’t autonomous codebases, it’s augmented teams.

So if you’re running digital transformation across an enterprise, recalibrate the mental model. Strip out hype and focus on process redesign. Use AI to amplify your people, not displace them. That’s where real value shows up: operational lift, tighter loops, fewer bottlenecks. Systems built with AI integrated intelligently, with clear ownership from human teams, will push ahead faster and more cleanly than those trying to swap out labor prematurely.

AI forecasting is complex, subjective, and prone to change

Predictions in AI come with caveats: every model is only as stable as its assumptions. LessWrong’s latest forecast is a perfect case. Their team adjusted their outlook based on emerging constraints like compute supply, diminishing returns on research, and shifting feedback mechanisms within AI development itself. The result? An updated, more cautious timeline for key milestones like autonomous coding and AGI.

What’s important here for C-suite leaders isn’t the exact forecast, it’s an understanding of how fragile forecasting can be in this space. Models can look mathematically sound and still miss the impact of breakthrough discoveries, regulatory shifts, or geopolitical pressures on semiconductor supply chains. LessWrong stated it directly: no AI model, including theirs, should be relied upon fully. Their own framework includes subjective interpretations layered on top of benchmark data.
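One way to internalize that fragility: run the forecast’s logic as a toy Monte Carlo, sampling plausible ranges for the assumptions instead of fixing them, and watch how widely the arrival year spreads. The ranges below are invented for illustration, not drawn from LessWrong’s model.

```python
import math, random

random.seed(0)

def sample_arrival_year(current: float = 4.0, target: float = 160.0,
                        base_year: float = 2026.0) -> float:
    """One draw of a milestone year under randomly sampled assumptions."""
    doubling_time = random.uniform(0.4, 1.0)  # assumed years per doubling
    extra_delay = random.uniform(0.0, 2.0)    # assumed structural slowdown, years
    return base_year + math.log2(target / current) * doubling_time + extra_delay

draws = sorted(sample_arrival_year() for _ in range(10_000))
p10, p50, p90 = draws[1_000], draws[5_000], draws[9_000]
print(f"10th/50th/90th percentile arrival: {p10:.0f} / {p50:.0f} / {p90:.0f}")
```

Even with only two uncertain inputs, the spread between the optimistic and pessimistic tails covers several years, which is the practical reason not to hard-wire a single date into an investment cycle.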

The implications for enterprise strategy are clear. Don’t base critical planning on static timelines or promises from vendors offering certainty. Anchor your approach in adaptable strategy, built to absorb disruption. Build for optionality. If you’re aligning your investment cycles purely on hard AGI arrival dates, you’re leaving yourself exposed to blind spots that can cost real time and capital.

This space is moving. But it isn’t moving in a straight line. The models fluctuating today are a signal: it’s OK to be optimistic, but your planning needs to account for structural variability. That’s where companies maintain control. They don’t just forecast; they prepare.

Key takeaways for leaders

  • Full AI coding automation is at least 5 years out: Forecasts now place fully autonomous coding AI around 2032, giving companies time to strengthen human-AI collaboration rather than plan for developer replacement. Leaders should invest in systems that enhance rather than eliminate engineering teams.
  • Slower progress reflects new technical and resource limits: Updated models account for slowed growth in compute, R&D pace, and diminishing software returns. Decision-makers should build AI strategies on realistic timelines grounded in infrastructure and capability constraints.
  • AGI will emerge in structured stages: AI progression will move through distinct performance milestones like superhuman coders and researchers well before achieving AGI. Leaders should assess each stage’s implications for team structure, software design, and resourcing.
  • AI benchmarks are directional: Benchmark trends like METR-HRS offer helpful signals but often fail under real-world complexity. Use them as progress indicators, not planning anchors, and incorporate flexibility into AI integration efforts.
  • AI integration should accelerate work: Near-term enterprise advantage comes from using AI to compress timelines and redesign workflows under human oversight. Prioritize augmentation and accountability to drive performance without undermining talent.
  • AI forecasts are fragile and shouldn’t drive rigid plans: LessWrong’s own model relies on subjective inputs and evolving variables, underscoring that no timeline is absolute. Build operating frameworks that adapt to changes in AI development speed rather than assuming firm arrival dates.

Alexander Procter

February 9, 2026