AI investment surges despite industry fatigue
Artificial intelligence is still attracting serious money, so don’t be fooled by the noise. While some people in tech say they’re tired of the hype, the numbers show that investment interest is not just alive; it’s accelerating. AI isn’t a trend anymore. It’s an economic force, and the stakeholders backing it aren’t slowing down.
In 2024 alone, $72 billion was invested globally in AI companies. Of that, $31 billion went directly into generative AI, covering everything from language models to creative tools. Total AI investment, according to Crunchbase, has now crossed the $100 billion threshold. These figures matter because they tell us this: investors aren’t betting small. They’re all in.
C-suite leaders need to understand what’s happening here. Generative AI isn’t just some flashy tech chasing headlines. It’s being applied to real-world challenges: healthcare optimization, enterprise API automation, and next-generation security testing. These aren’t proofs of concept anymore. They are practical, scalable, and already being deployed by businesses across sectors.
The next few years will reward decision-makers who are proactive. Companies that wait for AI to “settle down” will find themselves outpaced. The signal here is clear: the money isn’t flowing in because AI is trendy; it’s flowing in because AI is shifting how industries operate and create value.
Rapid consolidation and acquisition of AI startups
We’re watching the fastest transformation of a new sector in recent memory. AI startups are appearing, and exiting, at record speed. If you’re leading a large company, you need to recognize this for what it is: a maturity phase.
This is what happens when big companies need technology faster than they can build it. Corporate development teams are actively tracking, analyzing, and acquiring early-stage AI companies. With internal roadmaps stretching out years, they’re using their balance sheets to buy time and talent. Crunchbase and HumanX predict that nearly 30% of AI startups showcased at the HumanX conference, about 45 companies, will be acquired within the next 12 months. That’s not hypothetical. It’s already happening: Nvidia has picked up Run:ai and OctoAI, Databricks moved on MosaicML, and ServiceNow acquired Moveworks. Deals are closing every week.
That’s not a red flag. It’s a signal of real value. Strong teams are getting acquired because they’ve built something useful, early, and scalable. This is common in growing tech sectors, and C-level leaders should understand that a startup exit is no longer viewed as a failure to reach unicorn status. It’s often a calculated and successful outcome.
The point is this: staying on top of innovation doesn’t mean doing everything in-house anymore. It’s about identifying the right AI capabilities, early on, and moving quickly enough to either integrate or acquire. Speed matters. So does vision. If you’re too slow, someone else will buy what you need, before you even realize you need it.
Historical patterns of exuberance and cycle maturation in tech
There’s nothing abnormal about what’s happening in AI markets right now. If you’ve been in tech for a while, you’ve seen this cycle before: early enthusiasm, massive inflows of capital, fast growth, and plenty of risk-taking. It’s part of how new industries take shape.
Investor Tomasz Tunguz, referencing innovation cycle economist Carlota Perez, laid this out clearly at the HumanX conference. Every wave of technological disruption, whether it was railroads, telecommunications, or the internet, has followed a pattern. First comes overspending to build something big and unknown. Then comes a correction. What remains afterward are companies with real product-market fit and sustainable business models.
That’s where AI is heading. Right now, we’re still early in what Perez called the “installation” phase, where infrastructure, platforms, and proofs-of-concept are being funded aggressively. This includes LLMs, custom chips, AI-native APIs, data labeling pipelines, and other foundational stacks. Some of these bets will fail. Others will dominate entire industries.
For executives, it’s important to act based on pattern recognition. When capital moves this quickly, the question is not “Is this a bubble?” The better question is, “What stage are we in, and where is durable value being created?” Looking at AI through the lens of previous innovation cycles helps filter out noise. The correction will come, but what survives that is what really matters.
AI innovation faces self-disruption due to rapid technological improvements
The speed of progress in AI is breaking records. Performance keeps rising, and costs keep dropping. That’s creating a second layer of disruption: the disruptors themselves are now vulnerable to being replaced quickly. If you’re building or investing in AI, assume that what’s cutting-edge today might be outdated in months, not years.
Hardware is improving fast. New chips are delivering major boosts in inference performance. According to Tomasz Tunguz, we’ve already seen about a 1,000x improvement in the price-performance ratio for AI execution, and another 1,000x could still be ahead. If you’re a company optimizing for cost, that kind of efficiency jump gives you significant room to scale AI usage rapidly.
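To make the scale of that shift concrete, here is a minimal back-of-the-envelope sketch. The dollar figures and token volumes below are assumptions chosen for illustration, not quoted prices; only the 1,000x factor comes from the point above.

```python
# Illustrative arithmetic only: baseline cost and monthly volume are assumed values.
baseline_cost_per_million_tokens = 10.00   # assumed current cost, in USD
improvement_factor = 1_000                 # the ~1,000x price-performance gain cited above

projected_cost = baseline_cost_per_million_tokens / improvement_factor
print(f"Projected cost per million tokens: ${projected_cost:.4f}")   # $0.0100

# A hypothetical workload of 1 billion tokens per month drops from $10,000 to about $10.
monthly_tokens = 1_000_000_000
monthly_millions = monthly_tokens / 1_000_000
print(f"Monthly spend before: ${monthly_millions * baseline_cost_per_million_tokens:,.2f}")
print(f"Monthly spend after:  ${monthly_millions * projected_cost:,.2f}")
```

Even if the real numbers differ, it’s the order of magnitude that changes the planning conversation.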
The other factor is open source, and it’s already reshaping the game. When DeepSeek released their reasoning model in January 2025, it matched OpenAI’s o1 benchmark performance while costing 96% less in production. That wasn’t on cutting-edge hardware either; export restrictions forced them to use lower-tier GPUs. Despite that, it worked, and Nvidia’s stock dropped 17% as the market reacted.
Figures like these should reshape how C-level teams think about technology bets. High investment in closed systems isn’t always safe, because the next open model may outperform it before ROI even matures. This doesn’t mean abandoning closed models or paid infrastructure. It does mean you need optionality in your AI strategy. Keep an open lane for lower-cost, performant tools that don’t lock you in.
AI’s rate of self-disruption makes long-term vendor decisions more complicated. Flexibility isn’t optional anymore; it’s operationally required.
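One practical way to keep that optionality is to put a thin abstraction between your business logic and whichever model provider you use today. The sketch below is a minimal illustration of the idea; the class and function names are hypothetical, and the adapters are stubs standing in for real provider SDKs.

```python
from typing import Protocol


class TextModel(Protocol):
    """The minimal interface every provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...


class HostedModelAdapter:
    """Stub standing in for a closed, hosted model behind a paid API."""
    def complete(self, prompt: str) -> str:
        return f"[hosted model] response to: {prompt}"


class OpenWeightsAdapter:
    """Stub standing in for a self-hosted open-weights model."""
    def complete(self, prompt: str) -> str:
        return f"[open model] response to: {prompt}"


def summarize_ticket(ticket: str, model: TextModel) -> str:
    # Business logic depends only on the interface, so swapping providers
    # is a configuration change rather than a rewrite.
    return model.complete(f"Summarize this support ticket: {ticket}")


if __name__ == "__main__":
    ticket = "Login fails after password reset."
    print(summarize_ticket(ticket, HostedModelAdapter()))
    print(summarize_ticket(ticket, OpenWeightsAdapter()))
```

The point isn’t this specific pattern; it’s that an exit path to a cheaper or better model should exist before you need it.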
Open source models as a disruptive force in AI
Open source is rewriting the rulebook on accessibility and speed in AI development. It’s moving faster than most expected and increasingly closing the performance gap with proprietary models. What once took months in 2023 now takes weeks. That rate of improvement isn’t theoretical; it’s documented. In 2024, the average time it took for open models to match or exceed proprietary performance was just 41 days. In 2023, it was 140 days.
The DeepSeek release in January 2025 made a strong point. Their reasoning model matched OpenAI’s o1 model on key benchmarks while costing 96% less to run. It didn’t rely on high-end GPUs, yet it still delivered. That forced a market reaction: Nvidia’s stock dropped 17%, showing just how seriously investors take credible shifts in open technology.
Still, cost and speed aren’t everything. Especially in enterprise, trust drives adoption. That includes trust in the model’s security, reliability, licensing, and long-term viability. Stefan Weitz, CEO of HumanX, emphasized that enterprises remain cautious. Many want service agreements, support, and ecosystem guarantees, not just free model downloads. Jager McConnell, CEO of Crunchbase, echoed this sentiment. When new tools emerge, customers ask “Do I trust it?” before “Can I afford it?”
For C-suite leaders, the takeaway is to treat open source as a competitive force rather than a fringe idea. It’s not just for small players. It’s producing enterprise-grade results, fast. Smart companies will integrate the best of open source with stable support models. That’s how you keep flexibility high and risk low, without overspending.
Vulnerability of SaaS and software models amid AI-driven workflow changes
Software workflows aren’t just improving; they’re fundamentally changing. Generative AI is eliminating many traditional interface layers. The value isn’t in the UI anymore; it’s in what sits behind it. That shift is pushing SaaS companies toward becoming back-end services, accessible via API, while front-end interactions are generated or handled dynamically by AI systems.
Jager McConnell, CEO of Crunchbase, summarized this transition simply: “What if an LLM just creates the UI that I need for the thing that I’m trying to do?” That’s where things are headed. Right now, AI agents are actively interfacing with software systems on behalf of users. With standardized protocols like Model Context Protocol emerging, AI can query systems and generate what users need without opening an app or touching a screen.
This shift challenges how SaaS companies define value. If your company relies heavily on UI as the main engagement channel, then differentiation is at risk. In this new model, your API becomes your product. Execution, uptime, and feature accessibility quickly become the key metrics, not onboarding flows or visual polish.
Executives need to start thinking in API-first terms if they aren’t already. The AI layer is reducing the need for people to ‘click through’ software. Instead, users are beginning to tell systems what they want, and the AI layer takes it from there. Companies that want to survive must align their products and pricing models with that future. Those that resist will find themselves outpaced by tools that are simpler to use, faster to integrate, and cheaper to run.
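As a rough illustration of what “your API becomes your product” looks like in practice, here is a minimal sketch of a single capability exposed as a callable function plus a machine-readable description an AI agent could use to invoke it. The function, field names, and schema layout are hypothetical and follow the generic JSON-schema style most agent frameworks accept; this is not the exact wire format of Model Context Protocol or any specific vendor.

```python
import json


# Hypothetical capability: the function itself is the product; no UI is assumed.
def create_invoice(customer_id: str, amount_usd: float, due_days: int = 30) -> dict:
    """Create a draft invoice and return its identifiers for the caller to present."""
    return {
        "invoice_id": "inv_0001",
        "customer_id": customer_id,
        "amount_usd": amount_usd,
        "due_days": due_days,
        "status": "draft",
    }


# A machine-readable description of that capability, in a generic JSON-schema style.
CREATE_INVOICE_TOOL = {
    "name": "create_invoice",
    "description": "Create a draft invoice for a customer.",
    "parameters": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "amount_usd": {"type": "number"},
            "due_days": {"type": "integer", "default": 30},
        },
        "required": ["customer_id", "amount_usd"],
    },
}

if __name__ == "__main__":
    # An agent would read the schema, fill in the arguments, and call the function;
    # the "interface" the user sees is whatever the agent renders from the result.
    print(json.dumps(CREATE_INVOICE_TOOL, indent=2))
    print(create_invoice("cust_42", 1250.0))
```

Once capabilities are described this way, the interface the user sees can be generated on demand by the agent, which is exactly the scenario McConnell describes.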
Changing dynamics in startup teams and software development
AI is shifting the structure of how startups operate. Teams are becoming leaner, and individual productivity is rising fast. One skilled developer with access to the right AI tools can now outperform what used to require an entire engineering team. This isn’t a future scenario; it’s already happening in early-stage product development circles.
Tomasz Tunguz, Partner at Theory Ventures, highlighted this at HumanX, pointing out how traditional team ratios, like engineers to sales or product, no longer hold. This change has implications for how startups build, scale, and fundraise. With fewer people required to get to a working prototype or MVP, burn rates drop and timelines shorten. Founders can test more ideas, faster, and with smaller up-front investments.
For C-suite executives and venture leads, this evolution demands a reassessment of hiring models and internal productivity metrics. Large teams are no longer the signal of velocity or capability. The focus should shift toward high-leverage players who understand how to use AI effectively. Resource allocation needs to reflect software velocity, not headcount.
The downstream effects will hit the labor market, particularly contractors and mid-tier development roles. Demand will remain for experienced coders who understand system design, scalability, and integration. But low-complexity work will increasingly be handled or accelerated by generative AI. Leaders should prepare for a labor market where differentiation comes from adaptability and machine fluency, not headcount volume or manual output.
Competitive edge lies in proprietary data and unique insights
In an environment where models can be cloned or open-sourced in weeks, proprietary data is what sets companies apart. The real value in deploying AI at scale isn’t in building the model; it’s in how well the model adapts to and learns from data that only your company can access. If someone else can’t use your data, they can’t replicate your results.
Jager McConnell, CEO of Crunchbase, put it clearly: “If you’ve got proprietary data that no one else has access to, it’s very hard to beat me at the game.” That’s where long-term defensibility comes from. AI models generalize knowledge, but data personalizes outcomes. When a company applies domain-specific, hard-to-source data to a model, it creates real differentiation. Competitors can’t copy that without replicating dataset depth, structure, and context, something few can do quickly.
For executives, the immediate takeaway is this: audit your data assets. Understand what you have that competitors don’t. Invest in structuring and cleaning that data so that it’s model-ready. Build protective layers around data pipelines, compliance, and governance. And don’t underestimate enterprise customers’ sensitivity to trustworthy data usage; clear policies around privacy and model training can become differentiators in their own right.
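What “model-ready” means in practice starts with basic profiling: how complete, how duplicated, and how well-typed the data actually is. The sketch below is a minimal starting point under stated assumptions; the table and column names are invented for illustration, and pandas is assumed to be available.

```python
import pandas as pd


def readiness_report(df: pd.DataFrame) -> pd.DataFrame:
    """Quick profile of a dataset before it feeds any model:
    column types, missingness, and cardinality."""
    return pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_pct": (df.isna().mean() * 100).round(1),
        "unique_values": df.nunique(),
    })


if __name__ == "__main__":
    # Hypothetical extract of a proprietary customer-interaction table.
    df = pd.DataFrame({
        "account_id": ["a1", "a2", "a2", None],
        "segment": ["smb", "enterprise", "enterprise", "smb"],
        "annual_spend": [12_000, 250_000, 250_000, None],
    })
    print(readiness_report(df))
    print("duplicate rows:", int(df.duplicated().sum()))
```

Numbers like these are what turn “we have unique data” from a slogan into something a model team can plan around.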
In this coming wave, the companies that will sustain AI advantages are the ones that can combine strong models with exclusive, well-managed, and high-quality data. Everything else, from model architecture to inference systems to talent, can be matched. Data cannot.
Enterprises are actively experimenting with AI but lack coherent strategies
Most large companies experimenting with AI today are doing so through fragmented efforts: dozens or even hundreds of pilot projects running in parallel across isolated teams. That approach reflects excitement, but not direction. Execution at that scale without strategy leads to disconnected results and poor return on time and capital.
Stefan Weitz, CEO of HumanX, noted this firsthand, pointing out that he spoke with one company actively running over 230 AI pilots at the same time, without a unified roadmap. This kind of scattershot deployment suggests companies want to be seen as participating in AI innovation without clearly defining what success looks like or how these pilots connect to broader business outcomes.
The problem isn’t experimentation; it’s the lack of convergence. Without aligning pilots to strategic goals, whether customer acquisition, operational efficiency, cost reduction, or new product development, organizations risk internal exhaustion and external irrelevance. Teams get stuck in iteration mode, never deploying anything beyond internal demos.
For C-suite leaders, the fix is critical but straightforward: consolidate. Treat AI like a capability, not a campaign. Define where in the organization AI will make real impact, and build a central framework to prioritize, scale, and measure those initiatives. Whether through an internal center of excellence or an embedded AI strategy team, integration is what separates noise from competitive advantage. AI should be a shared engine, not a list of disconnected tools.
Interdisciplinary AI applications are yielding tangible real-world benefits
Not all of the best use cases for AI are happening inside large enterprises or software platforms. Some of the most compelling developments are coming from interdisciplinary work, where domain experts in fields like healthcare, environmental sciences, and public infrastructure are applying AI in highly targeted, impactful ways.
Stefan Weitz highlighted one example from TED, where Caltech researchers partnered with AI systems to address hospital-acquired infections related to catheters. The result wasn’t just theoretical; it was a tangible product: a new catheter design that actively prevents bacterial travel using a structural innovation developed in part through AI-generated proposals. That kind of outcome matters: it’s fast, applied, and practical.
These types of projects often rely on collaboration between engineers, researchers, and end-users rather than traditional tech product teams. When tightly scoped problems meet precise datasets and focused AI models, the results aren’t just promising, they’re transformational.
For business leaders, this speaks to a broader opportunity: funding projects and partnerships that bring together deep technical talent with industry-specific expertise. That’s where new capabilities are being built, the kind that won’t just optimize existing processes but redefine them entirely. Support these initiatives not just for innovation, but for the real-world leverage they provide across sectors that still depend on incremental development timelines.
AI that helps reduce hospitalization, mitigate wildfires, or eliminate high-cost inefficiencies won’t just deliver a moral win; it will open up new markets faster than internal R&D alone ever could. Stay close to that edge.
Trust remains the crucial factor in enterprise AI adoption
In a market where AI capabilities are growing and evolving rapidly, trust is becoming the most important variable in enterprise decision-making. Speed, price, and functionality matter, but nothing gets adopted at scale unless key stakeholders also trust the solution.
Jager McConnell, CEO of Crunchbase, made this clear: “What’s going to drive at least the next five years of customer action is going to be, who do I trust?” It’s a sharp and accurate signal. Trust in AI isn’t just about product execution. It includes vendor reliability, security assurances, regulatory alignment, model transparency, and long-term support. When cheaper, faster options emerge, as they often do, the first question customers ask isn’t whether the performance is better. It’s whether the system is safe, robust, and well-governed.
DeepSeek’s open-source release in January 2025 was a high-performing, low-cost answer to proprietary models. But skepticism still followed. Even with tangible performance advantages, many enterprises stayed with existing providers due to a lack of trust in the new model’s origins, governance, or reliability. This is a pattern you can expect to repeat.
For C-level leaders, trust isn’t a vague ideal. It’s a business differentiator. Providers that consistently demonstrate model safety, clear usage rights, ethical training pipelines, and predictable performance are the ones winning long-term enterprise deals. That also means leaders building with or selling AI need to embed accountability into every aspect of the solution stack, from training data to inference operations.
The enterprise buyer today may experiment widely, but they commit carefully. If your AI strategy involves external vendors, customers, or integrations, focus on building tangible trust signals: documentation, certifications, audits, SLAs, and human support. Because in this environment, trust doesn’t follow success; it precedes it.
Recap
AI isn’t slowing down, and neither are the people building with it. The speed, scale, and unpredictability of what’s coming next aren’t problems to solve; they’re simply the environment we operate in now. Market cycles will swing, models will improve, and the tools we use today will be obsolete faster than anyone is comfortable with.
For executives, the real challenge isn’t choosing between open or closed, SaaS or API, startup bets or M&A deals. The challenge is clarity: knowing what to invest in, what to ignore, and where your organization can create long-term leverage. That starts with trust, with owning your data, and with building optionality into every major tech decision.
You don’t need to chase every signal. But you do need to build a system that can adapt when the next one arrives. The companies that succeed in this cycle won’t just be fast; they’ll be deliberate, prepared, and hard to replicate. Choose what’s defensible. Deploy what works. And when the next wave hits, be ready to move.