Cybersecurity teams must brace for a historic surge in reported CVEs in 2026

In 2026, cybersecurity isn’t just on your CTO’s to-do list. It should be at the center of every executive conversation. The Forum of Incident Response and Security Teams (FIRST) is projecting a major shift in how we see vulnerabilities. For the first time, we’re likely to cross the 50,000 mark in published Common Vulnerabilities and Exposures (CVEs). Median forecasts sit at around 59,000. The potential upper bound? Nearly 118,000. That’s not noise. That’s a signal, and one your security team can’t ignore.

The volume alone changes how you need to think about your security operations. This isn’t a linear increase; it’s systemic. It means more triaging, more decision points, and more load on systems that were never built for these numbers. If you’re still running a manual-heavy vulnerability management process, now’s the time to rethink scope, staffing, automation, and internal thresholds before you fall behind.

You don’t need to treat every CVE as mission-critical. But you do need your team calibrated to respond with speed and clarity on the ones that matter. Not every vulnerability is a real-world exploit. The challenge is finding the signal in the volume, and doing it consistently.

This change marks a new era in operational planning. It’s not the kind of awareness you can afford to push down the chain. Executives should be leading this, structuring budgets, aligning priorities, and asking the fundamental question: Are our people, tools, and processes ready to scale against this rising wave?

Elevated CVE volumes are expected to persist and potentially grow over the next three years

This isn’t a one-year spike; it’s a trend. FIRST is forecasting more than 51,000 CVEs in 2027, and over 53,000 in 2028. But what should really get your attention is the projected upper range: for 2028, that forecast caps out near 193,000 reported vulnerabilities. These aren’t theoretical numbers. They’re based on real structural changes in how the digital ecosystem works today.

The implication is clear. As the software stack deepens and the dependency chain widens across cloud infrastructure, third-party APIs, and open-source codebases, we’ll keep unearthing more risks. More surface area means more exposure. And for organizations that haven’t retooled their security operations for scale, that lag is going to be costly.

The bigger issue isn’t just the numbers. It’s that many companies still make year-over-year decisions based on past operating models. That thinking no longer holds. If the top end of the forecast materializes, the gap between organizations that planned ahead and those that didn’t will widen quickly.

This is a moment for strategic recalibration. That means investing in systems that learn, teams that adapt, and leadership that understands security is now a continuous line item. Not a reactive fix. Not a special project. Budget and operational planning should reflect that reality.

These disclosures are not going away. There’s no return path to lower volumes. We’re standing in a future where security isn’t a back-office function; it defines digital execution at every level. So, act accordingly.

Industry-wide shifts in development and disclosure practices are fueling higher CVE counts

The growth in vulnerability disclosure isn’t accidental; it’s systemic. Security researchers, vendors, and development teams are broadening their lens. They’re testing and reporting across more products, platforms, and codebases than at any point in the past. Combine that with the widespread use of open-source software and greater supply chain visibility, and you’re going to see more issues surface.

These CVEs aren’t appearing because security is worsening. They’re appearing because visibility is improving. Technologies are more connected, more modular, and more third-party reliant than ever. As that happens, the tools for discovery evolve. So does the scrutiny.

Factor in the role of CVE Numbering Authorities. The way vulnerabilities are tracked and processed has changed: more authorities are contributing data, and standards are tightening. That increases accuracy but also raises the volume. You can’t manage that away with policy alone; it demands a scalable response.

For executives, this means understanding how internal and external software gets built, tested, and secured. Your exposure isn’t just an IT matter. It ties directly to operational continuity, legal risk, and customer trust. If your systems depend heavily on open-source components or SaaS integrations, your risk profile isn’t just about how good your internal developers are; it’s about everything you connect to.

Planning must reflect the reality that the attack surface is dynamic. As infrastructure scales, vulnerability discovery scales with it. If you ignore that, you’re leaving blind spots in key business systems.

Organizations must shift from a reactive approach to strategic vulnerability management

Reacting to every vulnerability based on its severity score alone doesn’t scale, and it doesn’t work. In a high-volume environment, you need to prioritize smartly, focusing on business relevance. The real question is whether a specific CVE puts your assets, data, or critical processes at risk. That’s the lens security teams must use moving forward.

As Éireann Leverett from FIRST said, the goal is to stop chasing every CVE and start making strategic choices about where your attention and time should go. Triage now means separating what’s exploitable and operationally significant from what’s just noise.

That requires full visibility into where your assets are and how they’re interconnected. Without that, your teams are guessing, and they’ll burn time on things that don’t move the needle. Risk isn’t just about a number; it’s about context. The same CVE means something different on an internet-facing server than it does in an isolated lab environment.

For leadership, the shift isn’t optional. It’s essential to move from reactive workflows to a layered prioritization strategy that combines internal asset criticality with external threat context. That means integrating real-time threat intelligence, vulnerability scanning, and asset inventory systems in a way that produces actionable insights, not abstract reports.
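To make the layered-prioritization idea concrete, here is a minimal sketch of a scoring function that blends external threat context with internal asset criticality. The weights, field names, and `Finding` structure are all illustrative assumptions for this article, not a standard or FIRST's methodology; the point is only that context can outrank raw severity.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # base severity score, 0.0-10.0
    known_exploited: bool  # e.g., appears in a known-exploited catalog
    internet_facing: bool  # asset exposure, from your inventory
    asset_criticality: int # 1 (isolated lab) to 5 (revenue-critical)

def priority_score(f: Finding) -> float:
    """Blend threat context with asset context. Weights are illustrative."""
    score = f.cvss
    if f.known_exploited:
        score *= 2.0       # active exploitation outweighs raw severity
    if f.internet_facing:
        score *= 1.5       # reachable assets come first
    return score * f.asset_criticality

findings = [
    Finding("CVE-2026-0001", cvss=9.8, known_exploited=False,
            internet_facing=False, asset_criticality=1),
    Finding("CVE-2026-0002", cvss=7.5, known_exploited=True,
            internet_facing=True, asset_criticality=5),
]

ranked = sorted(findings, key=priority_score, reverse=True)
```

Note that the lower-CVSS issue ranks first here: a 7.5 that is exploited in the wild on an internet-facing, business-critical asset matters more than a 9.8 sitting in an isolated lab.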

This is where smarter tools and sharper processes create leverage. Without them, your teams fall behind. With them, you’re in control: identifying real problems early, assigning resources efficiently, and shrinking exploit windows before they spiral.

Enhanced cross-organizational collaboration is critical for effective vulnerability management

If you’re managing security in isolation, you’re increasing your risk. The organizations that recover faster from cyber incidents are the ones already connected to others through established trust networks. That’s where threat intelligence is exchanged in real time, and coordinated responses happen before things get out of control.

This isn’t about standing up a few Slack channels or external mailing lists. It’s about investing in long-term, operational relationships that span vendors, partners, and industry consortia. When a serious vulnerability surfaces, having pre-established communication links means your team isn’t scrambling for contacts, they’re executing.

Chris Gibson, CEO of FIRST, put this clearly: “No company can solve vulnerabilities and cybersecurity in isolation. The organizations that recover fastest are the ones with trusted networks already in place, sharing threat intelligence and coordinating response before a crisis hits.”

Executives should be asking whether those relationships exist within their companies right now. If not, they’re behind. Participating in shared intel platforms, contributing to coordinated disclosure programs, and engaging with industry-specific response teams should be part of the strategic agenda, not just security team hygiene.

This level of collaboration increases operational agility. It also enhances visibility into threats affecting others before they hit your systems. The more connected your security posture is, the more proactive your response becomes.

FIRST’s forecasting model uses a flexible, range-based methodology to account for uncertainty

Planning security around fixed assumptions is risky. That’s why FIRST’s approach makes sense: it doesn’t give you a single-point prediction. It delivers a statistically modeled range, built on credible data and refined by structural trend analysis. It’s designed for decision-makers who need flexibility, not guesswork.

Their model uses asymmetric confidence intervals to account for upside volatility: the upper bound sits much further from the median than the lower bound does, because surprises in CVE volume skew high. It also reflects structural changes that began in 2017 and 2018, such as shifts in CVE publication practices and increased asset visibility in digital supply chains.

From a leadership standpoint, that translates into better-informed scenario planning. Planning around a 59,000-case median prepares your operational engine, but having procedures for 100,000 ensures resilience when volume suddenly spikes. You don’t need perfect prediction to be effective, you just need to prepare with a range mindset.
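The range mindset can be turned into a simple capacity exercise. The sketch below projects analyst triage load across the forecast's median and upper scenarios; the actionable rate, minutes per triage, and working-hours figures are purely assumed inputs you would replace with your own operational data.

```python
# Rough triage-capacity planning across the 2026 forecast range.
# All rates and time figures below are illustrative assumptions.
scenarios = {"median": 59_000, "stress": 100_000, "upper": 118_000}

actionable_rate = 0.05        # assumed share of CVEs relevant to your estate
minutes_per_triage = 15       # assumed average analyst time per relevant CVE
analyst_minutes_per_year = 220 * 6 * 60  # 220 workdays, 6 focused hours/day

for name, cves in scenarios.items():
    workload = cves * actionable_rate * minutes_per_triage
    analysts = workload / analyst_minutes_per_year
    print(f"{name}: ~{analysts:.1f} analyst-years of triage capacity")
```

The value of the exercise is the spread, not the point estimates: if the upper scenario needs roughly double the capacity of the median, that gap is what your surge procedures and automation have to absorb.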

Accuracy also matters. In 2025, FIRST’s forecasts had a mean absolute percentage error of 7.48% across the year, and 4.96% for the fourth quarter. That gives it a solid track record compared to other security forecasting tools. It’s not theory; it’s data-driven decision support.
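For readers unfamiliar with the metric, mean absolute percentage error is straightforward to compute. The quarterly figures below are hypothetical examples, not FIRST's actual published counts; only the formula is the point.

```python
def mape(actual: list[float], forecast: list[float]) -> float:
    """Mean absolute percentage error: average of |actual - forecast| / actual,
    expressed as a percentage. Lower is better."""
    errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
    return 100 * sum(errors) / len(errors)

# Hypothetical quarterly CVE counts, for illustration only.
actual   = [11_200, 11_800, 12_500, 13_100]
forecast = [10_500, 12_300, 12_900, 12_600]
print(f"MAPE: {mape(actual, forecast):.2f}%")
```

A MAPE of 7.48% on a number as volatile as annual CVE volume means the forecast landed, on average, within a few thousand disclosures of reality.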

If you’re making long-term investments in automation, people, or threat response capabilities, this kind of forecasting framework gives you a better position to act, not just react. And when you prepare around that range, you can scale up or down without compromising core operations.

Operational resilience depends on scalable automation and revised process strategies

The cybersecurity workload is increasing, and not slightly but significantly. Preparing for tens of thousands of vulnerabilities is one thing. Operating in a world with over 100,000 is another. At that volume, manual workflows collapse. What you need is scale, delivered by automation, streamlined remediation logic, and accurate prioritization.

This isn’t optional. If your systems require human involvement at every step (triage, classification, assignment), you’re already behind the curve. When volume spikes, your team burns out and risks rise fast. Instead, invest in automation that sorts, routes, and flags issues based on actual context, not just severity scores.
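Context-aware routing can be as simple as a rules function in front of your ticketing system. The sketch below is a minimal illustration; the queue names, field names, and thresholds are invented for this example, and real deployments would draw these fields from scanner output and an SBOM or asset inventory.

```python
# Minimal sketch of context-aware triage routing. Queue names, fields,
# and thresholds are illustrative assumptions, not a recommended policy.
def route(finding: dict) -> str:
    exploited = finding.get("known_exploited", False)
    exposed = finding.get("internet_facing", False)
    in_use = finding.get("package_in_use", True)  # e.g., from an SBOM lookup
    if not in_use:
        return "auto-close"        # vulnerable code path is never loaded
    if exploited and exposed:
        return "emergency-patch"   # page the on-call system owner
    if exploited or finding.get("cvss", 0.0) >= 9.0:
        return "priority-queue"    # remediate within the SLA window
    return "scheduled-backlog"     # fold into the normal patch cycle

print(route({"known_exploited": True, "internet_facing": True}))  # emergency-patch
```

Even a crude filter like this keeps humans focused on the top two queues, which is exactly where judgment still matters when volume spikes.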

FIRST emphasized this difference clearly. Scaling from managing 30,000 issues to potentially 100,000 isn’t just an operations problem. It’s strategic. It affects how you schedule remediation, how much downtime your systems can realistically tolerate, and which vendors slow your patching cycles.

To stay resilient, you need to adapt across people, process, and technology. Ask whether your patch management timelines are compressed enough. Ask if your dependencies are tracked tightly enough across infrastructure and software packages. Most importantly, ask if your teams can respond to a spike in actionable CVEs without cutting corners.

If you don’t build for scale now, you’ll be forced to retrofit during an active crisis. That introduces time delays and quality risks when you can least afford them.

Continuous forecasting and quarterly updates will aid in adaptive security strategies

Static planning is obsolete. The threat landscape can shift within months, and organizations that can’t adapt quickly lose visibility and momentum. This is why FIRST’s quarterly updates, and planned expansion into detailed CVSS v3 vector analyses, make a difference. They’re not providing reports for recordkeeping. They’re delivering real-time input for strategic recalibration.

The forecasting model leans on evolving data streams from the US National Vulnerability Database and MITRE’s CVE system. As the year unfolds, these inputs refine the probability distributions across scenarios, letting teams adjust focus areas, budgets, and staffing as conditions change, not after.
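The recalibration loop this enables can be sketched with simple arithmetic. The example below is a naive linear run-rate, not FIRST's statistical model, and the quarterly figures are invented; it only illustrates how a mid-year actual can trigger a planning adjustment.

```python
# Naive annual re-projection from quarterly actuals. Figures are invented
# for illustration; this is not FIRST's forecasting methodology.
initial_forecast = 59_000
quarterly_actuals = [16_800, 17_500]   # hypothetical Q1 and Q2 publications

run_rate = sum(quarterly_actuals) / len(quarterly_actuals)
revised = run_rate * 4                 # annualize the observed pace

print(f"Initial: {initial_forecast:,}, revised run-rate: {revised:,.0f}")
if revised > initial_forecast * 1.1:
    print("Trigger: revisit staffing and automation thresholds for H2")
```

The useful part is the trigger, not the number: a pre-agreed threshold turns a quarterly data release into an automatic planning conversation rather than a surprise.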

Quarterly updates align with how you should approach cybersecurity now: dynamically. Not every CVE carries the same operational impact. Understanding how attack techniques, exploit sophistication, and system exposure coalesce over time is what determines high-value response.

Executives should treat this data as an operational asset. Apply it in vendor management. Use it in procurement strategy. Shape staffing and contract priorities with it. Most vulnerabilities don’t arrive at your doorstep with clear escalation paths, so build a system that understands patterns, adapts quickly, and keeps leadership informed.

With forward-looking forecasting baked into your security program, you’re not just reacting better. You’re positioning ahead of the curve. That’s where the edge is.

Final thoughts

The numbers heading into 2026 aren’t a blip; they’re a shift. Vulnerability volumes are scaling faster than most internal processes can handle. If your cybersecurity response still relies on reactive workflows, narrow prioritization models, or manual-heavy triage, you’re already behind.

What this demands now is leadership. Not from the security team alone, but from the top. Strategic investment in automation, smarter patch management, and cross-functional coordination needs to happen before volume pressure turns into operational debt. Risk doesn’t wait for alignment.

This isn’t about doing more; it’s about doing the right things faster. That starts with better visibility, tighter vendor controls, and a framework that lets your teams focus where it actually matters. FIRST’s forecasts don’t just predict workload; they make clear what preparedness actually looks like.

Resilience isn’t built in crisis. It’s built now.

Alexander Procter

February 16, 2026
