Traditional vulnerability management is overwhelmed by volume

Let’s talk straight. The way most companies handle vulnerabilities today is broken. You’re reacting, not planning. And in a world generating millions of security findings across tens of thousands of systems, reacting is a losing game.

Recent data from a Vulnerability Operations Center (VOC) analysis shows just how overwhelming this has become. Their team pulled over 1.3 million security findings, actual weak points, across about 68,500 customer assets. From that pile, more than 32,500 unique software vulnerabilities were identified. About 10,000 of those were ranked high-risk, scoring over 8 on the CVSS scale (the severity scale cybersecurity people use). If you’re trying to fix everything, you’re going to fall behind. You can’t control the flow. You’ve got limited engineers, limited time, and systems that depend on consistency. So let’s be blunt: it doesn’t scale, and it never did.

Actually solving this means flipping the problem. Build systems that manage risk by design, not by reaction. We’ll get to that, but the first step? Stop chasing everything. Organizations need to accept that not all vulnerabilities matter equally. Prioritize what’s worth fixing, and engineer environments tough enough that they don’t fail when something slips through.

The CVE and CVSS systems are hampered by bureaucratic delays and biases

The current foundation for assessing software vulnerabilities comes from MITRE and NIST, institutions operating under U.S. federal oversight. They run the CVE (Common Vulnerabilities and Exposures) program and the CVSS (Common Vulnerability Scoring System). Conceptually, it works. Vulnerabilities are tracked, assigned identifiers, scored for severity, and made public so companies can react. The problem? The system is slow, and lately, it’s getting worse.

As of April 2025, MITRE reported there were nearly 290,000 published CVEs. But over 24,000 of them were unenriched, meaning there was no reliable analysis attached. That backlog was due to a policy bottleneck at NIST in March 2024, where they temporarily paused processing without stopping the inflow of vulnerability reports. Critical information got stuck in limbo. When you combine that delay with debates between researchers and vendors, arguing over severity, exploitability, and impact, the result is a system that doesn’t support real-time action.

To make matters worse, the U.S. Department of Homeland Security initially decided not to renew MITRE’s funding, threatening the entire CVE system. Only strong pushback from the security community reversed that decision and kept the lights on.

All of this creates uncertainty. As an executive, you need security processes you can trust. Systems built on unstable information pipelines aren’t reliable. The takeaway here is not that CVE and CVSS are useless, they’re essential, but that you cannot depend on them alone. Supplement with internal intelligence, external datasets, and predictive models that don’t rely solely on federal workflows. Build redundancy into how you consume threat data. Trust, but diversify.

Many reported vulnerabilities are unlikely to be exploited

The number of reported vulnerabilities has exploded. It’s tempting to treat every CVE as a potential breach. But that’s inefficient and not grounded in reality.

Let’s focus on the facts. According to Google’s Threat Analysis Group and Mandiant, 97 zero-day exploits, the most urgent type, were identified in 2023. Now compare that with the 290,000 CVEs published as of April 2025. Less than 6% of those have ever been exploited in the wild. That tells you most vulnerabilities in circulation won’t be used by attackers. Most of them aren’t worth your team’s immediate attention.

Also consider how organizations respond in practice. A 2022 study pointed out that half of businesses patch 15.5% or fewer of their vulnerabilities each month. That’s not just a resource issue, that’s a prioritization issue. Trying to patch everything means you end up doing none of it well. It’s not scalable, and it’s not aligned with threat behavior.

If you’re a leader responsible for operational continuity and security, the signal is clear. You need to overhaul your strategy based on risk relevance. Focus your remediation plans on threats that have a proven or likely exploitation path, not theoretical low-impact issues. This isn’t about doing less, it’s about doing all of it smarter.

The exploit prediction scoring system (EPSS)

If you’re spending money, time, and talent on cybersecurity, the work must be lined up with real threats. That’s where the Exploit Prediction Scoring System, or EPSS, becomes a major advantage.

Created by the Forum of Incident Response and Security Teams (FIRST), EPSS gives you a probability score that tells you how likely it is that a given vulnerability will be exploited in the wild. It’s not perfect. It doesn’t predict attacks on your specific systems. But it gives you something the traditional CVSS doesn’t: a measurable, data-backed sense of where to focus.

From EPSS alone, we’ve seen that the probability of exploitation climbs fast when multiple CVEs are in the mix, even if each scores low individually. In one example, 397 vulnerabilities from a public sector client were analyzed. The data showed that by the 265th vulnerability considered, the probability that at least one would be exploited had exceeded 99%, despite none of the individual EPSS values exceeding 11%. This isn’t about isolated flaws anymore, volume changes risk.
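The compounding effect follows from the complement rule: if you treat each EPSS score as an independent exploitation probability, the chance that at least one of n vulnerabilities is exploited is 1 minus the product of each one not being exploited. A minimal sketch (the scores below are illustrative, not the client’s actual data):

```python
import math

def prob_at_least_one(epss_scores):
    """Probability that at least one vulnerability is exploited,
    assuming independent per-vulnerability EPSS probabilities."""
    # P(at least one) = 1 - product of (1 - p_i)
    p_none = math.prod(1.0 - p for p in epss_scores)
    return 1.0 - p_none

# Even modest scores compound quickly: 265 vulnerabilities at an
# average EPSS of just 2% push the combined risk past 99%.
scores = [0.02] * 265
print(f"{prob_at_least_one(scores):.4f}")  # prints 0.9953
```

The independence assumption is a simplification, since related CVEs on the same asset are often correlated, but it illustrates why volume alone drives aggregate risk toward certainty.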

That insight gives decision-makers leverage. You’re no longer flying blind with a long patch list. You know which items pose a real-world risk, and at what scale. This changes how you plan remediation timelines, allocate engineering capacity, and communicate urgency to your senior teams. EPSS backs your priorities with probability, not guesswork.

Use it where it counts most, especially on internet-facing systems where risk exposure is much higher. Combine EPSS with contextual intelligence from your own environment and global threat feeds. You’re not chasing software bugs anymore. You’re targeting the ones that actually matter.

Vulnerabilities with low individual exploitation risk collectively pose substantial threats

A single low-risk vulnerability may seem harmless. But what happens when you have hundreds, or thousands, of them running across your infrastructure? This is where probability changes the game.

EPSS gives you a percentage likelihood that a given vulnerability will be exploited in the wild. By itself, that number might seem too low to be a concern, maybe 1%, maybe 5%. But when your environment includes hundreds of those, the odds stack fast. Analysis of real client data shows that once you consider the first 265 vulnerabilities, the probability that at least one will be exploited exceeds 99%. These were not high-scoring vulnerabilities on their own. But the volume alone made the total risk unavoidable.

For large enterprises, this introduces a major operational challenge. Ignoring lower-risk vulnerabilities simply because of their individual scores leaves you blind to the cumulative exposure. You can’t assume safety based on low individual probability when you have broad distributed systems.

As an executive, what you need to take away from this is simple: your exposure isn’t defined by a single component but by the total risk across everything you operate. Security decisions must account for scale. The planning must reflect the growing likelihood of exploitation as more vulnerabilities go unaddressed.

Risk must be calculated across the environment, not just from within individual systems. Patch planning, system hardening, and architectural design should all be shaped with this in mind.

Understanding attacker behavior

Attackers don’t care how you classify vulnerabilities. They care about access. They don’t scan for specific CVEs out of curiosity, they look for points of entry that let them in or help them move deeper.

This requires reframing vulnerability management. It’s not about chasing flaws; it’s about preventing compromise. Our data and internal research show this clearly. For example, attackers with a 5% success rate per system need only 180 targets to almost guarantee one successful breach. A more skilled attacker, one with a 20% rate, would only need 42. That’s baseline math. And when the target is a company with thousands of devices, those odds start leaning heavily in favor of the attacker.
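The targets-needed figures above follow from the same complement rule in reverse: to reach a desired overall breach probability, an attacker needs roughly ln(1 − target) / ln(1 − p) attempts. A short sketch that reproduces the numbers (the 99.99% threshold is my inference about how those figures were derived, not stated in the source):

```python
import math

def targets_needed(per_system_success, target_prob=0.9999):
    """Number of independent targets an attacker must try so that
    the overall probability of at least one breach reaches target_prob."""
    # Solve 1 - (1 - p)^n >= target_prob for n, rounding up
    return math.ceil(math.log(1.0 - target_prob) / math.log(1.0 - per_system_success))

print(targets_needed(0.05))  # 180 targets at a 5% per-system rate
print(targets_needed(0.20))  # 42 targets at a 20% per-system rate
```

The asymmetry is the point: the defender must hold every system, while the attacker only needs the math to work once.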

We spoke with senior red team penetration testers to ground this understanding in reality. They reported success rates of about 30% when targeting arbitrary internet-facing systems. That’s not hypothetical, it’s lived experience. These are the professionals testing systems that are supposedly well-secured.

Your internal systems might be patched and hardened. But it only takes one weak point, one overlooked system, to give skilled attackers the access they need to then move deeper. That’s why traditional vulnerability management, focusing on system-by-system patching, won’t keep pace. Instead, you need to evaluate how systems expose each other, and what happens post-compromise.

If your focus stays on managing individual vulnerabilities without accounting for attacker behavior patterns and systemic exposure, you’re not solving the right problem. Leading organizations look at attack surfaces as networks, interconnected systems with shared exposures and shared consequences.

The shift is clear. From software flaws to systemic design. From passive response to active anticipation. Fix what attackers care about. Deploy defenses that hold up across compromise attempts, not just individual bugs.

Dual strategy of threat mitigation and risk reduction

Organizations are still treating vulnerability management as a checklist: scan, detect, patch. That approach doesn’t carry its weight anymore. Threats have evolved. The environment has scaled. What’s missing is a split focus: one stream handling immediate threats, the other focused on long-term risk architecture.

Threat mitigation addresses what’s active, what’s targeting your systems right now. It requires prioritization based on threat intelligence, exposure, and likelihood of exploitation. This is where tools like EPSS, along with threat feeds and red team data, come in. The goal is to prevent known, high-likelihood attacks from gaining ground.

Risk reduction is a slower, strategic process. It’s about changing the system to reduce future exposure, shrinking the attack surface, eliminating outdated systems, improving default configurations, and increasing architectural resilience. It deals less with what is being attacked and more with what could become a target.

Trying to do all of this using a single process leads to inefficiency and burnout. Security teams bounce between firefighting urgent issues and attempting long-term improvements, and often get stuck doing neither well. By separating mitigation and risk workstreams explicitly, you allow each team to operate with better purpose and intent.

For executives, this is about alignment. Give your teams strategic clarity. Define where urgent effort goes, and where sustainable improvements should focus. It frees up your organization from running endless cycles of reactive patching, while also moving the baseline forward. The outcome is tighter security posture with smarter resource use.

Threat mitigation strategies

Not all systems are equally exposed. That matters. Internet-facing systems are under constant scanning, testing, and attack. These are the entry points attackers prioritize, and they demand directed defensive attention.

EPSS is highly effective on these systems. It was designed to predict whether a vulnerability will be exploited in the wild. Public-facing infrastructure matches that model closely, it’s where exploitation tends to happen first. Using EPSS to guide patching on internal-only systems is far less actionable. Those systems aren’t subject to the same attacker behavior or exposure levels.

Organizations that apply the same prioritization framework across all systems, internal or external, waste effort. EPSS should direct mitigation where it delivers clear value: internet-exposed assets. Patch those first. Monitor them closely. Apply additional controls like firewalls, access restrictions, and rate-limiting there first.
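One way to operationalize this split is a triage step that puts internet-facing, high-EPSS findings at the front of the patch queue. A hypothetical sketch, assuming your asset inventory can tag exposure (the field names and threshold here are illustrative, not from any specific scanner):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    epss: float            # EPSS probability, 0.0 to 1.0
    internet_facing: bool  # exposure flag from asset inventory

def triage(findings, epss_threshold=0.1):
    """Split findings: internet-facing, high-EPSS items first,
    everything else deferred to the slower risk-reduction track."""
    urgent = [f for f in findings
              if f.internet_facing and f.epss >= epss_threshold]
    deferred = [f for f in findings
                if not (f.internet_facing and f.epss >= epss_threshold)]
    urgent.sort(key=lambda f: f.epss, reverse=True)  # highest risk first
    return urgent, deferred

findings = [
    Finding("CVE-2024-0001", 0.92, True),
    Finding("CVE-2024-0002", 0.40, False),   # internal: deferred
    Finding("CVE-2024-0003", 0.05, True),    # low EPSS: deferred
]
urgent, deferred = triage(findings)
print([f.cve_id for f in urgent])  # ['CVE-2024-0001']
```

The deferred list isn’t ignored; it feeds the risk-reduction workstream described later, where systemic hardening addresses it in bulk.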

This lets your security team focus its energy. You’re not spreading defenses thin across assets with vastly different risk profiles. Instead, you secure what faces the outside world, the actual front lines, using data that’s designed for that purpose.

Executives don’t need to micromanage this process, but they need to enable it. Ensure your team is given the mandate and resources to differentiate between asset types. Let your security organization prioritize external systems aggressively without getting slowed down by internal, low-risk distractions.

Prioritized efforts based on exposure level and exploitation probability will deliver far more protection per hour of work. That’s how you maintain operational efficiency without compromising resilience.

Systemic risk reduction

Security isn’t just about fixing what’s broken. It’s about designing systems that are resilient by default. This removes dependency on reaction speed and shifts strength into how systems are built and operate daily. When organizations focus exclusively on patch cycles and vulnerability backlogs, they ignore what shapes exposure in the first place, architecture.

Reducing the attack surface must become a primary objective. That means removing unnecessary externally facing systems, shutting down misconfigured assets, and phasing out unsupported infrastructure. Defensive layering, across networks, identity, applications, and data, limits how far an attacker can go, even after the initial compromise. Segmentation is vital here, especially in environments with flat networks or legacy designs.

Security baselines should be elevated systematically. Instead of reacting to individual issues, focus efforts on reducing the volume and severity of vulnerabilities across your environment. The fewer opportunities you leave available, the less chance an attacker has to get traction, even when new CVEs are published. This approach is more efficient and resource-conscious over time.

Executives need to treat these architectural improvements as core to organizational resilience. Budget allocation for systemic upgrades must be viewed the same as any other critical infrastructure investment. When the foundation is strong, less effort is spent reacting to constant threats. The strategy becomes proactive, and results compound with every cycle.

Resilient systems are built on secure-by-default principles

The legacy model for security, bolting solutions onto existing infrastructure after deployment, is outdated. Future-ready organizations are embedding security into their technology ecosystems from the start. That shift changes outcomes at scale.

To get ahead, companies should adopt secure-by-default principles. This includes setting hardening standards, automating patch baselines, and ensuring new systems are validated against predefined configurations. Risk scenarios must be factored in during the design phase, which means threat modeling and real-world adversarial testing should become standard before production rollout.
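Validating new systems against a predefined configuration can be as simple as diffing declared settings against a baseline before rollout. A hedged sketch, where the baseline keys are illustrative placeholders rather than a published hardening standard:

```python
# Hypothetical hardening baseline: these keys are illustrative,
# not drawn from any specific benchmark.
BASELINE = {
    "ssh_password_auth": False,  # key-based auth only
    "tls_min_version": "1.2",
    "auto_patching": True,
}

def validate_config(system_config):
    """Return the settings that deviate from (or are missing from)
    the secure-by-default baseline."""
    violations = []
    for key, required in BASELINE.items():
        if system_config.get(key) != required:
            violations.append(key)
    return violations

new_host = {"ssh_password_auth": True, "tls_min_version": "1.2"}
print(validate_config(new_host))  # ['ssh_password_auth', 'auto_patching']
```

Gating deployment on an empty violations list is what turns a hardening document into an enforced default rather than a suggestion.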

Teams should adopt security policies that apply across vendors and suppliers. A supply chain with unclear security obligations is a constant source of unpredictable risk. You can’t secure your own environment if third-party services don’t meet the same minimums. Define expectations contractually and enforce improvements programmatically.

Long-term strategy also demands that security teams operate as enablers across the business, not blockers. When product engineering, infrastructure, and security align on proactive improvement, progress accelerates without compromise. That kind of collaboration needs executive sponsorship and active engagement.

Strategic security isn’t reactive, and it isn’t automatic. It’s built through defined plans and enforced execution. The systems you design today dictate the level of risk you face tomorrow. Prioritize resilience now, at the architectural and cultural level, and the rest becomes easier to scale.

Recap

If you’re still measuring security by how fast your teams patch vulnerabilities, you’re not aligned with how modern threats operate, or how successful organizations respond. The volume’s too high, the signals are too scattered, and the CVE system isn’t built to keep pace with the scale you’re managing.

It’s not about fixing everything. It’s about fixing what matters and building environments where failure in one system doesn’t expose the rest. That means separating threat mitigation from long-term risk reduction. It means deploying tools like EPSS where they work best, on exposed systems, and investing in secure architectures internally that limit impact, not just entry.

Security isn’t a checklist. It’s an operating model tied directly to business continuity and competitive advantage. When resilience is designed into your systems instead of layered on top, your teams move faster, your exposure shrinks, and your outcomes improve.

The defenders who win aren’t the ones who scramble best. They’re the ones who plan better, design with purpose, and scale security as a default, not an afterthought. Make that shift now. The next wave of threats isn’t going to wait for you to catch up.

Alexander Procter

June 6, 2025
