“New-School” Vulnerability Management vs. Old-School Vulnerability Management: A 7-Round Smackdown
I’ve been talking about the benefits of adopting a risk-based approach to vulnerability management (VM) for some time now. Since Jeff Heuer and I founded Kenna Security, in fact. For those of you who’ve already heard it and are sold, I hope this post rings true about the benefits of risk-based (“New-School”) VM over plain old (“Old-School”) VM. For those of you who haven’t, well, that’s okay; data-driven folks are nothing if not welcoming.
While I understand the 5 Whys are a widely accepted interrogation technique, being the father of four kids has me used to answering “why” a couple more times than that. So let’s go ahead and do that.
The 7 Whys
1. You can’t remediate everything (nor do you necessarily need to), but you likely can remediate the risky stuff
Looking at our database here at Kenna Security, the average enterprise has 39 million vulnerabilities. Yes, that’s right: 39 million vulnerabilities. At the same time, our research also finds that any organization, regardless of size, can only address about one out of every 10 vulnerabilities; see the chart below for an illustration of what this looks like. For you data heads in the audience, admire the strength of the R² statistic.
Those may seem like pretty discouraging numbers, suggesting that organizations can’t possibly keep up with the volume of vulnerabilities. But don’t be too disheartened; there is good news. Only 5% of all vulnerabilities are both observed within organizations and known to be exploited, so that small slice is where your focus should be. After all, likelihood matters. A risk-based approach lets you quickly and easily identify these high-risk vulns so your teams can prioritize fixing them first. Our latest research with the Cyentia Institute found that while organizations can’t remediate everything, they can, and do, remediate all of their high-risk vulnerabilities.
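To make the math concrete, here is a back-of-the-envelope sketch in Python. The 39 million, one-in-ten, and 5% figures come from the research above; applying the 5% share directly to a single organization’s backlog is a simplifying assumption for illustration only.

```python
# Back-of-the-envelope math using the figures cited above.
total_vulns = 39_000_000      # average open vulns in an enterprise
remediation_capacity = 0.10   # ~1 in 10 vulns can realistically be fixed
high_risk_share = 0.05        # observed AND known-exploited (simplifying assumption
                              # that this share applies to one org's backlog)

fixable = int(total_vulns * remediation_capacity)   # ~3.9M
high_risk = int(total_vulns * high_risk_share)      # ~1.95M

# The high-risk population fits inside remediation capacity, which is why a
# risk-based approach is tractable where "fix everything" isn't.
print(f"Can fix per period:       {fixable:,}")
print(f"High-risk to prioritize:  {high_risk:,}")
print(f"Capacity left over:       {fixable - high_risk:,}")
```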
2. Not all assets or applications are created equal
Let’s be honest: not every endpoint, server, or application in your organization has the same risk profile. Some don’t host sensitive information, some don’t present an attractive target, and some may be more difficult to access. These differences are hard to quantify under Old-School VM, which relies on manpower to make largely subjective assessments. While a smart security analyst can take asset criticality into account even in the Old-School approach, it doesn’t scale.
New-School VM technologies let you automatically factor in the importance of an asset, system, or application, and the importance of the information on or within it, and then use that to determine whether fixing a vulnerability on that system should be prioritized ahead of a vulnerability on another system.
Keep in mind, the importance of an application or asset isn’t just about the data stored or processed. Threat modeling is an important part of New-School VM. Take the example of a popular public web site that is primarily “brochure ware” for the consumer to learn more about a given topic. This site may not require a login or house any sensitive data at all. In fact, it could be entirely made up of public data. But let’s say this popular site is compromised and used to serve up malware. Your very public web site is now attacking millions of your users in order to compromise their machines.
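As a minimal sketch of the idea (not Kenna’s actual scoring model; the field names, scales, and weights here are invented for illustration), prioritization boils down to weighting severity by exploit likelihood and asset criticality:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float            # 0-10, e.g. a CVSS base score
    exploit_likelihood: float  # 0-1, from threat intel (exploits in the wild, etc.)
    asset_criticality: float   # 0-1, business importance of the affected asset

def risk_score(f: Finding) -> float:
    # Likelihood-weighted severity, scaled by how much the asset matters.
    # A 9.8 on a throwaway lab box can rank below a 7.5 on a public web server.
    return f.severity * f.exploit_likelihood * f.asset_criticality

findings = [
    Finding("CVE-2021-0001", severity=9.8, exploit_likelihood=0.02, asset_criticality=0.2),
    Finding("CVE-2021-0002", severity=7.5, exploit_likelihood=0.90, asset_criticality=1.0),
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve_id, round(risk_score(f), 2))
```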
3. Get the full picture from multiple feeds and tools
Merely scanning for vulnerabilities, or bringing in data from only one or two scanning vendors, is just one piece of the VM puzzle. To truly reduce your risk profile you need the big picture. What is going on outside of your four walls? What are the real-world consequences of any given vulnerability? What are all of the vulnerabilities reported by all of your tools (including AppSec)?
Under Old-School VM methods, organizations may have had threat intelligence feeds (if they were advanced), but using them was a manual process that required mad skillz and a lot of hours: researching each and every vulnerability by visiting 5 to 10 sites, reading about that vulnerability, looking through threat intelligence reports, looking for signatures or exploits in whatever data was available (open or closed source), considering the technical details of the vulnerability, and then making an assessment. Fortunately, New-School VM uses data science to correlate threat intel at scale, normalizing hundreds of different data sources and giving teams back valuable time for other, more strategic projects.
In addition, it’s no longer enough to have just one vendor’s set of feeds from which to determine VM strategy. With truly risk-based vulnerability management you can pull in data from all of the vulnerability tools you have and correlate that information to determine where to start your remediation efforts, in light of the full picture across your entire environment. This is especially relevant in application security, where there is such a wide range of scanning tools, from SAST, DAST, and SCA to bug bounties.
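Here is a hedged sketch of what that correlation step might look like in practice. The record layouts and tool names below are invented, since every vendor’s export format differs; the point is normalizing to a common shape and de-duplicating on (asset, CVE) while preserving provenance:

```python
from collections import defaultdict

# Hypothetical raw findings, as two different tools might report them.
scanner_a = [{"host": "web-01", "cve": "CVE-2021-44228", "score": 10.0}]
scanner_b = [{"asset": "web-01", "vuln_id": "CVE-2021-44228", "risk": "critical"},
             {"asset": "db-02", "vuln_id": "CVE-2019-0708", "risk": "high"}]

def normalize(record, host_key, cve_key, source):
    # Map each vendor's field names onto one common shape.
    return {"asset": record[host_key], "cve": record[cve_key], "source": source}

merged = defaultdict(set)  # (asset, cve) -> set of tools that reported it
for r in scanner_a:
    n = normalize(r, "host", "cve", "scanner_a")
    merged[(n["asset"], n["cve"])].add(n["source"])
for r in scanner_b:
    n = normalize(r, "asset", "vuln_id", "scanner_b")
    merged[(n["asset"], n["cve"])].add(n["source"])

# One deduplicated row per (asset, CVE); multiple sources raise confidence.
for (asset, cve), sources in merged.items():
    print(asset, cve, sorted(sources))
```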
4. Ain’t No One Got Time For This – Automate ALL THE THINGS
I touched on the importance of automation in today’s data-heavy environments above, but it’s important enough that I feel like it deserves its own call-out. The Old-School method of manually tracking vulnerabilities through a giant Excel spreadsheet and then having the security teams make assessments just can’t keep up with the increasing volume of data and vulnerabilities.
Today there is more data to pull from, but also more data than humans can, or should be expected to, sift through by themselves. Fortunately, an abundance of data is one of the things machines handle very well.
Automating routine tasks frees security teams to act on data rather than spend valuable time cleaning and correlating it. For example, the cleaning, correlation, de-duplication, and mapping of vulnerability data to the organization’s assets should all be automated. Automation should also be leveraged to weed out false positives and catch the false negatives that naturally occur between scanning technologies, preventing teams from wasting valuable time fixing something that isn’t broken or, potentially worse, ignoring an issue that’s critical.
Automation can also take the data about the vulns, the data about the risk of those vulns, and the data on how to remediate those vulns, and send it on to the teams fixing the vulns, whether IT or dev, through the tools they already use to manage workflow for incidents, remediation, bugs, and features. All of this automation can streamline your prevention and remediation efforts significantly.
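For instance, here is a minimal sketch of handing a prioritized finding off to a ticketing system. The endpoint URL, payload fields, response shape, and priority threshold are all hypothetical, since the details depend entirely on your tracker’s API:

```python
import requests  # third-party: pip install requests

TICKET_API = "https://tickets.example.com/api/issues"  # hypothetical endpoint

def open_remediation_ticket(finding: dict, token: str) -> str:
    """Turn a prioritized finding into a work item in the fixing team's own tracker."""
    payload = {
        "title": f"Remediate {finding['cve']} on {finding['asset']}",
        "description": finding.get("fix", "See vendor advisory for remediation steps."),
        "labels": ["security", "vuln-remediation"],
        "priority": "high" if finding["risk_score"] >= 8 else "normal",  # illustrative cutoff
    }
    resp = requests.post(TICKET_API, json=payload,
                         headers={"Authorization": f"Bearer {token}"},
                         timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]  # assumed response shape; ticket ID to track remediation
```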
5. Report to management based on something they actually care about
It’s a brave new world of board-level scrutiny on cybersecurity programs. This is an exciting opportunity for CISOs, but only if they can report to the board in a meaningful way. Focusing on risk lets you communicate clearly, in terms the board actually understands.
It used to be that you’d take a spreadsheet of vulns to the board (or, if you were fancier, some 3-by-3 matrix heatmap) and then try to explain how remediating a subset based on seemingly arbitrary criteria was enough to ameliorate the risk of a breach. The spreadsheet approach is slow, unscalable, and impractical, and it doesn’t help anyone understand risk.
Under this approach, board members were still missing the context needed to understand the organization’s risk posture, which is what they really care about. They want to know where the organization stands today in terms of risk, how that compares to last month, what progress has been made to reduce the organization’s risk exposure, and what is planned to reduce risk further, and at what cost. By taking a risk-based approach to vulnerability management, security teams can deliver meaningful reports to the board and rest assured that their efforts are actually making a difference in the organization’s risk posture.
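As a toy illustration of the kind of trend a board wants to see (the snapshot values and the 0-to-100 scale here are made up), the report reduces to a score and its month-over-month movement:

```python
# Hypothetical monthly snapshots of an aggregate risk score (0-100, lower is better).
snapshots = {"2020-01": 74, "2020-02": 68, "2020-03": 61}

months = sorted(snapshots)
for prev, curr in zip(months, months[1:]):
    delta = snapshots[curr] - snapshots[prev]
    direction = "down" if delta < 0 else "up"
    print(f"{curr}: risk {snapshots[curr]} ({direction} {abs(delta)} vs {prev})")
```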
6. Avoid getting distracted by high-profile breaches and vulnerability logos
Although hype around vulnerabilities and breaches has drawn much needed attention to the importance of security, not all vulnerabilities are worthy of “celebrity treatment.” As the news and hype around security vulnerabilities escalate, it becomes increasingly difficult for security teams to stay current with the threat landscape and determine how to prioritize their efforts.
In some instances the hype is warranted, but in other instances, hype can result in staff hunting down and remediating a vulnerability that doesn’t pose any real danger. At the same time, the increased noise can overshadow critical risks that require attention.
Security teams need a way to be alerted to threats and prioritize their efforts based on factors that really matter. Accomplishing this goal requires security teams to embrace risk as an objective, consistent way to prioritize remediation.
7. Increased Collaboration
We all recognize that IT and dev have very limited time to fix vulns, and patching often takes them away from getting mission-critical work out the door. They often see security fixes as an unrewarding task, so anything that helps collaboration across teams is welcome.
By using the common language of risk, security can not only streamline the list of vulnerabilities they ask IT and dev to fix, but also clearly delineate why they are asking. And an automated risk-based vulnerability management system can automate the tracking and sharing of that information (as mentioned above), bringing yet another level of cooperation and collaboration between teams.
That’s it. The long and the short, but mainly long, of the main reasons why data-driven New-School vulnerability management beats out Old-School VM any day of the week.