The rapid proliferation of LLMs is environmentally and economically unsustainable
We’re seeing too many large language models show up, and the rate of new releases is accelerating. LLMs are powerful, but training them burns serious amounts of energy. Training just one high-end proprietary model can cost up to $5 million. Then you’ve got to keep the thing running: inference, the process of generating responses, costs millions per year once usage scales. And it will.
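To put “millions per year” in concrete terms, here’s a hedged back-of-envelope in Python. Every input below (request volume, tokens per request, per-token price) is an illustrative assumption, not a vendor quote; swap in your own numbers.

```python
# Back-of-envelope inference cost estimate.
# All inputs are illustrative assumptions, not quoted prices.

requests_per_day = 1_000_000        # assumed traffic once usage scales
tokens_per_request = 1_000          # assumed prompt + completion length
cost_per_million_tokens = 10.00     # assumed blended $ per 1M tokens

daily_tokens = requests_per_day * tokens_per_request
daily_cost = daily_tokens / 1_000_000 * cost_per_million_tokens
annual_cost = daily_cost * 365

print(f"Daily inference cost:  ${daily_cost:,.0f}")   # $10,000
print(f"Annual inference cost: ${annual_cost:,.0f}")  # $3,650,000
```

Even with these modest assumptions, a high-traffic deployment clears a million dollars a year on inference alone.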
That kind of outlay might be justified if models produced unique benefits. But when most of them end up doing the same tasks (text generation, summarization, reasoning), it’s clear we have a saturation problem. We’re generating cost and emissions far faster than we’re creating new capabilities.
This comes with a carbon cost, too. Training one LLM can emit about 200 tons of CO₂, roughly the same as running 40 cars for a year. That figure doesn’t account for emissions from the operations side. These models deploy across massive GPU clusters running 24/7, most powered by energy grids still dominated by fossil fuels. That multiplies the impact.
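The car comparison is simple arithmetic, shown below as a quick sanity check. The 200-ton figure comes from this article; the roughly 5 metric tons of CO₂ per passenger car per year is a commonly cited average and should be treated as an assumption.

```python
# Sanity check on the "40 cars for a year" equivalence.

training_emissions_tons = 200   # one large training run (figure from the text)
car_tons_per_year = 5.0         # assumed average passenger car, per year

car_years = training_emissions_tons / car_tons_per_year
print(f"Roughly {car_years:.0f} cars driven for a year")  # ~40
```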
For companies betting on AI, the real question is one of efficiency: Are you building something meaningfully different, or just adding your name to a crowded scoreboard? Too many organizations seem to fall into the second category. This leads to billions in sunk cost and environmental damage, without significant differentiation in output or performance.
If your organization claims to prioritize ESG or carbon reductions while also launching duplicative LLM projects, you’ve got a conflict to resolve. You can’t run both at scale without being transparent about the tradeoff. The smarter play is strategy: fewer models, higher impact, real differentiation.
New LLMs typically offer only incremental improvements over existing technology
Most new LLMs are minor upgrades: slight tweaks in architecture, modest improvements in reasoning, marginal performance gains on benchmarks. If you’re honest about it, most models do the same things with slightly different parameters.
Here’s the thing: huge models like GPT-3 (175 billion parameters), BLOOM (176 billion), and Google’s PaLM (540 billion) already cover most use cases. Training data overlaps heavily. We’re all pulling from the same pool: Wikipedia, Common Crawl, Reddit, news sites, books. So it’s not surprising that outputs often look nearly identical: same information, same style, same blind spots.
Companies release a new model, call it “state-of-the-art,” and claim it beats competitors by a few percentage points. That’s fine, but the difference isn’t changing the world.
This matters at the executive level. Training and deploying new models requires real budget. But the return on investment stalls when the differences between models are measured in decimals rather than outcomes. In short: you’re paying a premium for almost the same output.
If your goal is to deploy AI that drives new product lines, expands capabilities, or reshapes customer experience, ask yourself: does the LLM you’re backing actually do anything new? Most don’t. Which means you’re spending to end up exactly where you started, just with more infrastructure sitting on your balance sheet.
The real opportunity now is optimization, not duplication. Incremental gains aren’t bad, but they shouldn’t demand massive new investments without clear returns. Choose models that deliver real value, not just more of the same with a shinier label.
AI development practices conflict with corporate sustainability pledges
There’s a disconnect happening inside many organizations, and it’s not hard to spot. On one side, we see strong ESG messaging, public sustainability targets, and polished environmental reports. On the other, that same organization is training large-scale LLMs in fossil-fueled data centers with zero transparency into the environmental costs. That’s not a sustainable position, and leaders can’t ignore the contradiction.
If your company is pushing green initiatives while also funding resource-heavy AI development, those priorities are going to clash. Training a single LLM with current infrastructure can generate up to 200 tons of CO₂. That number increases quickly if you’re training multiple variants or doing ongoing fine-tuning. And if your power source isn’t clean, the emissions multiply.
AI labs and vendors often keep the carbon impact vague. That won’t hold much longer. Regulators are increasing scrutiny, and stakeholders are connecting AI practices with sustainability frameworks. So, if you’re building high-emission software while claiming to prioritize environmental responsibility, that gap will become a liability.
Institutional investors, customers, and even employees are more aware of greenwashing risks. If the story you’re telling externally doesn’t match your internal AI roadmap, the credibility gap gets wider. That affects brand value, talent retention, and long-term positioning.
Executives don’t have to choose between innovation and sustainability. But they must lead with transparency. Own the environmental cost of these systems and put in place measurable practices to reduce it. If your AI strategy generates emissions, then show data on how you’re working to offset or minimize them. This is about aligned decision-making, not marketing.
A coordinated, efficiency-driven approach could mitigate environmental impact
There’s a better way to build. Right now, too many companies are independently training models that solve the same core problems: language processing, reasoning, summarization, chat applications. The compute demands are massive, but the capabilities are often already available through existing open-source or commercial models.
Instead of each organization starting from scratch, collaboration can reduce waste and accelerate progress. Use what’s already built. Adopt standardized LLM architectures. Plug into shared infrastructure powered by renewable energy. When AI efforts are distributed but aligned, both the economic and environmental costs drop.
Some models trained on fossil-fueled grids produce up to 50 times more emissions than those trained on renewables. So even where your power comes from matters: if your AI operations run out of a coal-dependent region, that decision has significant sustainability implications. Forward-looking companies are already optimizing where and how they train models to cut those emissions.
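A rough sketch of why grid choice dominates: emissions scale linearly with the grid’s carbon intensity, so the same training run can differ by an order of magnitude or more. The energy figure and intensities below are illustrative assumptions, in the range of commonly published lifecycle estimates.

```python
# Same hypothetical training run on two different grids.
# Energy use and grid intensities are illustrative assumptions.

training_energy_mwh = 1_000  # assumed energy for one large training run

grid_intensity_g_per_kwh = {
    "coal-heavy grid": 820,  # assumed, typical coal lifecycle estimate
    "hydro/wind grid": 15,   # assumed, typical renewables estimate
}

for grid, intensity in grid_intensity_g_per_kwh.items():
    kwh = training_energy_mwh * 1_000
    tons_co2 = kwh * intensity / 1_000_000  # grams -> metric tons
    print(f"{grid}: ~{tons_co2:,.0f} t CO2")

# 820 / 15 is roughly a 55x gap, consistent with the "up to 50 times" claim.
```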
Shared projects and platforms don’t limit innovation. They free up teams to focus on product differentiation, user experience, and deployment: the areas where real impact happens. Platform-level collaboration also reduces duplication and allows more investment in areas like inference efficiency, multilingual performance, and edge computing integration.
If you’re in a leadership seat, ask your team hard questions: Do we need to train this from scratch? Are we duplicating what already works? Can we scale through integration instead of reinvention? Those questions tie directly to capital efficiency, environmental risk, and time-to-market.
The era of training huge models in isolation is breaking down. What comes next is more efficient, more collaborative, and frankly, smarter. High-value AI doesn’t have to come at a high environmental cost if you’re building strategically.
The democratization of LLMs has led to excessive redundancy in model development
Open access to powerful AI models has changed the game. Open-source LLMs like LLaMA, Falcon, and Mistral are now widely available, and anyone with the hardware can download, fine-tune, and deploy them. This is good for accessibility, but it comes with a growing problem: redundancy. The space is filling up with models that largely do the same thing, trained on near-identical datasets and differing only slightly in architecture or tuning strategy.
This is a resource-intensive pattern. Training each model requires significant compute, millions of dollars, and serious power consumption. The problem is, very few of these new builds are meaningfully differentiated. In most cases, they produce outputs nearly identical to existing models. That means we’re burning billions’ worth of energy and development time on systems that replicate work already done.
This trend is expanding fast because the barriers to entry have dropped. You don’t need a massive AI lab anymore; you just need access to commodity GPUs and readily available codebases. As a result, companies feel pressure to release their “own” model, even if it offers little functional advantage. That’s duplication scaling fast, consuming more power than it returns in value.
For executives tracking ROI, this matters. The investment required to develop and deploy a new LLM is high: training, infrastructure, compliance, inference operations, and updates all compound. If the resulting system isn’t delivering uniquely valuable results or contributing to IP or competitive advantage, your capital is misallocated and your emissions footprint grows without justification.
The better path is selectivity. Use what already works unless building something better can be clearly defined and justified. Integration with proven open-source models, combined with performance tuning for real business needs, often delivers better outcomes at a fraction of the cost. Avoid projects that aim to “own a model” for its own sake.
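As a concrete illustration of that integration path, here’s a minimal sketch using the Hugging Face transformers and peft libraries to adapt an existing open model with LoRA rather than training from scratch. The base model and hyperparameters are placeholders, not recommendations.

```python
# Minimal LoRA fine-tuning setup: adapt a proven open model instead of
# training a new one. Model name and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # any proven open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices and freezes the base weights, so the
# compute (and emissions) bill is a fraction of a full training run.
lora = LoraConfig(
    r=8,                                  # adapter rank (assumed; tune per task)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of base weights
```

From here, standard training on your domain data applies; the point is that the marginal cost is a tuning run, not a $5 million training run.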
C-suite leaders need to step in and enforce clarity: What’s the goal? What problem does this model solve? How is it tangibly better than existing tools? If those questions don’t get concrete answers, resource allocation should stop there.
Key takeaways for decision-makers
- Too many models, not enough differentiation: Organizations are rapidly deploying large language models (LLMs) with high environmental and financial cost, despite most offering similar capabilities. Leaders should ensure AI investments deliver unique value before allocating significant resources.
- Diminishing returns from new LLMs: Successive LLMs built on the same datasets offer only minor performance improvements. Executives should question whether new models materially outperform existing ones before greenlighting development.
- Sustainability gaps are eroding credibility: Building LLMs conflicts with many companies’ public sustainability goals, creating reputational and compliance risks. Leaders must align AI roadmaps with ESG commitments and disclose environmental tradeoffs transparently.
- Shared infrastructure lowers cost and impact: Centralizing LLM development using open-source frameworks, renewable energy, and standardized architectures reduces duplication and emissions. Companies should prioritize collaboration over isolated model training to optimize ROI and sustainability.
- Open-source sprawl drives inefficient duplication: While AI access has improved, the ease of building models has led to redundant deployments that waste compute and budget. Executives should evaluate whether new in-house models provide strategic differentiation or simply replicate freely available tools.