TL;DR
CVSS scores measure theoretical severity, not real-world risk. Only about 2% of published CVEs are exploited in the wild, but most organizations still prioritize patching by score alone, which leads to wasted effort and missed threats. A risk-based approach that factors in exploitability data (EPSS, CISA KEV), asset context, and business impact helps your team patch the right things first, not just the highest-rated things.
In 2024, more than 40,000 new CVEs were published. Of those, only 768 were confirmed as exploited in the wild, roughly 2% of the total. That gap is the core problem with CVSS-only vulnerability prioritization: most organizations sort by severity score and work down the list, spending time on critical-rated vulnerabilities that no attacker is actually using while lower-scored vulnerabilities with active exploitation sit further down the queue.
CVSS tells you how damaging a vulnerability is if it is successfully exploited. It does not tell you whether anyone is actually exploiting it. Severity and risk are not the same thing, and treating them interchangeably leaves gaps in your defenses.
In this article, we’ll explain why CVSS-only prioritization puts effort in the wrong places, what exploitability-aware alternatives exist, and how to build a vulnerability prioritization framework that accounts for real-world risk, asset context, and business impact.
How CVSS Scores Work And What They Actually Measure
The Common Vulnerability Scoring System (CVSS) is a standardized framework that rates the theoretical severity of a vulnerability on a scale of 0.0 to 10.0. The score is calculated based on factors like attack vector (network, adjacent, local, physical), attack complexity, privileges required, user interaction, and the impact on confidentiality, integrity, and availability.
What CVSS does well is provide a consistent, vendor-neutral measure of how damaging a vulnerability is if it is successfully exploited. What it does not do is account for whether the vulnerability is being exploited in the wild, whether exploit code is publicly available, or whether the affected system is exposed to the internet or sitting behind multiple layers of network segmentation.
Two vulnerabilities with the same CVSS score can carry very different levels of real-world risk depending on these factors, and that gap is where CVSS-only prioritization breaks down.
The Problem With CVSS-Only Prioritization
High CVSS, Low Real-World Risk
Not every critical-rated vulnerability is an immediate threat. Some require local access, specific configurations, or unusual preconditions that would make exploitation unlikely in most environments. A CVSS 9.8 vulnerability that requires physical access to a server in a locked data center is not the same risk as a CVSS 9.8 on an internet-facing VPN appliance, but CVSS treats them identically. Teams that patch strictly by score will spend time on the former, while the latter sits in the queue.
Low CVSS, High Real-World Risk
The reverse is equally dangerous. A medium-severity vulnerability (CVSS 6.5) on an internet-facing system that is being actively exploited in the wild is a bigger problem than a critical vulnerability on an air-gapped internal system with no known exploit. This distinction between CVSS severity and exploitability is where many organizations fall short: the score says one thing, the threat landscape says another.
Volume Overload
When everything above a 7.0 is treated as critical, the backlog grows faster than teams can clear it. In 2024, roughly 11% of all published CVEs were rated high or critical. Applied across a large environment with thousands of assets, that translates to a patching queue that is permanently overloaded. The result is patch fatigue: teams lose the ability to distinguish between what is actually urgent and what can wait, and prioritization becomes inconsistent.
What the Exploitability Data Shows
The gap between CVSS severity and real-world exploitation is wider than most teams realize, and it is getting more dangerous at the same time. VulnCheck found that in the first half of 2025, 32% of known exploited vulnerabilities had exploitation evidence on or before the day the CVE was publicly issued, up from 24% in 2024. For the small percentage of vulnerabilities that are being exploited, the window between disclosure and attack is shrinking fast.
This is where metrics like the Exploit Prediction Scoring System (EPSS) come in. EPSS estimates the probability that a vulnerability will be exploited in the next 30 days, based on real-world threat data, rather than theoretical severity. It is not a replacement for CVSS, but it does add a critical layer of context. A CVSS 7.0 with a high EPSS score is a more immediate concern than a CVSS 9.8 with a near-zero EPSS score, and that distinction is the foundation of any effective risk-based patching strategy.
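To make that distinction concrete, here is a minimal sketch of ranking findings by exploitation likelihood first and severity second. The field names and sample values are illustrative, not from any specific scanner.

```python
# Hypothetical triage sketch: sort by EPSS probability first, using CVSS only
# as a tiebreaker. Field names ("cvss", "epss") are illustrative.

def rank_vulns(vulns):
    """Return vulnerabilities sorted highest-priority first."""
    return sorted(vulns, key=lambda v: (v["epss"], v["cvss"]), reverse=True)

findings = [
    {"cve": "CVE-A", "cvss": 9.8, "epss": 0.01},  # critical on paper, rarely exploited
    {"cve": "CVE-B", "cvss": 7.0, "epss": 0.92},  # lower severity, likely exploitation
]

for v in rank_vulns(findings):
    print(v["cve"], v["cvss"], v["epss"])
```

Under this ordering, CVE-B jumps ahead of CVE-A despite its lower CVSS score, which is exactly the behavior a CVSS-only sort cannot produce.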
Building a Risk-Based Patching Strategy
Moving beyond CVSS-only prioritization means incorporating multiple factors into your decision-making. Four inputs matter most:
Factor 1: Exploitability
Is the vulnerability being actively exploited in the wild? Does public exploit code exist?
CISA’s Known Exploited Vulnerabilities (KEV) catalog is the clearest signal here: if a CVE is on the KEV list, it needs to be patched immediately. EPSS scores and threat intelligence feeds from your security vendors add further context on exploitation likelihood.
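As a sketch of wiring EPSS into triage, the function below parses a response shaped like the public FIRST EPSS API (`api.first.org/data/v1/epss`). The response shape shown is an assumption based on that API's documented format; verify it against the live API before depending on it.

```python
# Sketch of extracting an EPSS score for triage. The JSON shape below is an
# assumption modeled on the FIRST EPSS API; confirm against the live service.

EPSS_URL = "https://api.first.org/data/v1/epss?cve={cve}"  # query endpoint

def parse_epss(response_json, cve):
    """Return the EPSS probability (0..1) for one CVE, or None if unscored."""
    for row in response_json.get("data", []):
        if row.get("cve") == cve:
            return float(row["epss"])  # the API returns scores as strings
    return None

# Sample response in the assumed shape (values illustrative):
sample = {"status": "OK",
          "data": [{"cve": "CVE-2021-44228", "epss": "0.97", "percentile": "0.999"}]}
print(parse_epss(sample, "CVE-2021-44228"))  # 0.97
```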
Factor 2: Asset Context
Where does the affected system sit in your environment?
An internet-facing VPN appliance, a system with access to sensitive data, or anything with elevated privileges should be weighted significantly higher than an internal development server behind multiple layers of segmentation. The same vulnerability carries different risk on different assets, and your prioritization should reflect that.
Factor 3: Business Impact
What happens to the business if this system is compromised?
A vulnerability on a revenue-generating customer-facing platform carries more urgency than the same vulnerability on a seasonal reporting tool. Tying vulnerability prioritization to business impact ensures that patching effort aligns with what the organization actually needs to protect.
Factor 4: Compensating Controls
Is there a WAF, IPS rule, network segmentation, or other control already mitigating the exposure?
If a vulnerability on an internet-facing system is shielded by a virtual patch at the network level, the effective risk is lower even if the CVSS score is high. Compensating controls do not eliminate the need to patch, but they can influence the order in which you do it.
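The four factors above can be combined into a simple composite score. The weights and field names below are purely illustrative assumptions, not a standard; the point is only that exploitation evidence, exposure, and business impact raise priority while compensating controls lower it.

```python
# Minimal composite-risk sketch combining the four factors in this section.
# All weights are illustrative assumptions; tune them to your environment.

def risk_score(vuln):
    score = 0.0
    if vuln.get("in_kev"):                score += 4.0   # Factor 1: confirmed exploitation
    score += 3.0 * vuln.get("epss", 0.0)                 # Factor 1: exploitation probability
    if vuln.get("internet_facing"):       score += 2.0   # Factor 2: asset exposure
    if vuln.get("business_critical"):     score += 2.0   # Factor 3: business impact
    if vuln.get("compensating_control"):  score -= 1.5   # Factor 4: e.g. WAF virtual patch
    return max(score, 0.0)

kev_exposed = {"in_kev": True, "internet_facing": True, "epss": 0.9}
internal_critical = {"epss": 0.01, "business_critical": False}
```

With these weights, a KEV-listed vulnerability on an internet-facing asset outranks an internal vulnerability with no exploitation evidence, regardless of their CVSS scores.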
Frameworks and Tools That Support Risk-Based Prioritization
Several frameworks exist to help organizations move beyond CVSS-only triage. These include:
- SSVC (Stakeholder-Specific Vulnerability Categorization) is a decision-tree model developed by CISA and Carnegie Mellon’s SEI that replaces numeric scores with four actionable decisions: Track (no action needed now), Track* (monitor closely), Attend (remediate sooner than your normal cycle), and Act (remediate immediately). Each decision is based on five inputs: exploitation status, technical impact, automatability, mission prevalence, and public well-being impact. This forces prioritization to rest on context rather than severity alone.
- EPSS complements CVSS by estimating the probability of exploitation in the next 30 days. Used alongside CVSS, it helps teams separate high-severity vulnerabilities that are likely to be exploited from those that are not.
- CISA KEV functions as a “patch now” list. If a vulnerability appears in the KEV catalog, it has been confirmed as exploited in the wild, and CISA recommends remediation within a defined timeline. For any vulnerability prioritization framework, the KEV catalog is a mandatory input.
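The SSVC idea of mapping context to an action rather than a number can be caricatured in a few lines. This is a drastically simplified sketch, not the official CISA/SEI decision tree, which uses more inputs and branches.

```python
# Drastically simplified SSVC-style sketch: context in, action out.
# The real SSVC trees from CISA/SEI have more inputs and branches.

def ssvc_decision(exploited: bool, automatable: bool, mission_critical: bool) -> str:
    if exploited:
        # Active exploitation escalates to the top two decisions.
        return "Act" if (automatable or mission_critical) else "Attend"
    if automatable and mission_critical:
        return "Track*"   # no exploitation yet, but monitor closely
    return "Track"        # no action needed now
```

The output is a decision a patching team can act on directly, which is the core difference between SSVC and a numeric score.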
Putting It Into Practice
Shifting from CVSS-only to risk-based prioritization does not require replacing your existing tools; it requires adding context to the data they already provide. A good starting point is defining patch SLAs by risk tier, rather than CVSS score alone. For example, patch CISA KEV entries on internet-facing systems within 24 to 48 hours, high-EPSS vulnerabilities on exposed assets within a week, and everything else on a standard 30-day cycle.
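The SLA tiers described above reduce to a small lookup. The thresholds below mirror the example in the text (KEV on internet-facing systems within 24 to 48 hours, high-EPSS on exposed assets within a week, a 30-day default), with the EPSS cutoff of 0.5 as an illustrative assumption.

```python
# Sketch of the risk-tier SLAs from the text. The 0.5 EPSS cutoff is an
# illustrative assumption; set your own thresholds per risk appetite.

def patch_sla_hours(in_kev: bool, internet_facing: bool, epss: float) -> int:
    if in_kev and internet_facing:
        return 48            # KEV entry on an exposed system: 24-48 hours
    if internet_facing and epss >= 0.5:
        return 7 * 24        # high-EPSS on an exposed asset: one week
    return 30 * 24           # everything else: standard 30-day cycle
```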
Getting leadership buy-in is easier when you frame the shift as better resource allocation, rather than lower standards. You are not patching less, you are patching smarter by directing limited resources toward the vulnerabilities that represent the greatest actual risk to the business.
There are a few common pitfalls to watch for: over-automating without human review (automated scoring is a starting point, not a final answer), ignoring asset context (treating every instance of a CVE identically regardless of where it sits), and treating EPSS as a replacement for CVSS rather than a complement. The strongest approach uses all these inputs together, not any one of them in isolation.
Conclusion
CVSS is a useful metric, but it does not tell the whole story. It measures theoretical severity without accounting for context: whether the vulnerability is being exploited, where the affected system sits in your environment, or what the business consequence of a compromise would be.
A risk-based patching strategy that factors in exploitability, asset exposure, business context, and compensating controls will always outperform one that sorts by score and works down the list. The organizations that manage their vulnerability backlogs effectively are not the ones that patch the most; they are the ones that patch the right things first.