# Quantifying Risk Reduction with an Unknown Denominator

I get so much inspiration from my Twitter conversations it's not even funny.  This time it was Jeremiah Grossman, who posted the Facebook statistic on mid-year bug bounties, and the subsequent conversation with Martin Fisher, that got me thinking.

The problem I see (and no, I'm confident I'm not alone in this thought process) with all these risk-reduction measurements is that they're impossible to quantify.  There is simply no way to say that by doing X you've reduced risk by Y%, at least not when you don't know the total number of issues that exist.  And therein lies the problem.

This is sort of a Catch-22.  If we knew all the risks in a piece of software, we wouldn't need software security people around to point them out to us, but let's assume we live in reality land where that's not possible.  So now we're left with a need to define how much risk we've reduced by identifying and remediating some number of security-related defects, preferably as a percentage for ease of consumption, except we have one major problem: we don't have the denominator in that fraction.  The denominator is the "total number of security defects that exist"... so we're stuck.
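The arithmetic makes the stuck-ness concrete. Here's a minimal sketch (all numbers are made up for illustration; in reality the total is exactly what we don't know):

```python
# Hypothetical numbers to illustrate the unknown-denominator problem.
fixed = 10  # security defects we found and remediated (the numerator)

# If we somehow knew the total, the percentage would be trivial:
total_defects = 40  # hypothetical; in practice this denominator is unknown
reduction = fixed / total_defects
print(f"Risk reduced: {reduction:.0%}")  # 25%, but only if the total is real

# Without the denominator, the same numerator supports any conclusion:
for hypothetical_total in (20, 100, 1000):
    print(f"If the total were {hypothetical_total}: {fixed / hypothetical_total:.1%}")
```

Ten fixed defects is a 50% reduction or a 1% reduction depending entirely on a number nobody has.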

Let's take a more concrete example outside the computer world: a used car.  If you're going to purchase a used car, you want to be certain you don't buy a lemon (something full of defects).  Sound familiar?  So you either find the defects yourself before you buy, or you hire a professional to find them for you before the purchase is made.  Now, if that professional finds five major issues, such as dry rot in the brake lines, a misfiring spark plug, and so on, how much risk has that removed from your purchase of that used car?  The honest answer is that you don't know, because you aren't sure of the entirety of what's wrong with the car.  So how much was it worth to you to find those things out before buying?  That's probably easier to answer, because we can say with relative certainty that brake problems are big-dollar fixes after the purchase, so that much is quantifiable, especially since you can't drive around a car with failed brakes.  Sadly, this doesn't translate cleanly to security issues in software, because quite frankly, companies are sometimes willing to live with security defects that we as security professionals aren't comfortable with.

Bringing it back down to my core issue here: any time security professionals start to talk about "risk reduction," we have to acknowledge that if the denominator is unknown (meaning we don't know the total number of risks present in a system), we can't adequately tell anyone how much we've accomplished by removing 1, 10, or 100 of them.  So we need alternative ways to quantify risk reduction, or else a different approach to demonstrating the business value of security.  This would be a great topic to hear what you think on...

Rohit Sethi (anon) | 02-22-2012 06:32 PM

How about this: there are two classes of security weaknesses: 1) common, known vulnerabilities and weaknesses, such as those in the CVE and CWE, and 2) vulnerabilities and weaknesses that are unknown (to security practitioners).

Defense-in-depth notwithstanding, we can only reasonably expect to protect ourselves against known weaknesses/vulnerabilities. If we encounter a 0-day exploit in the wild and learn about its root cause, it goes from unknown to known, and we can reasonably expect ourselves to defend against it.

Thus, the denominator is the number of known classes of vulnerabilities/weaknesses that our system could be vulnerable to. Even if we can't prove it definitively, we can have reasonable assurance that an application is not vulnerable to Cross-Site Scripting, and therefore that we've reduced risk by removing one class of vulnerability from our system. It's even conceivable (although perhaps not likely) that we could mitigate *every* known vulnerability/weakness in our system. Yes, that would still leave us open to other sorts of attacks that we don't know about yet, but I suspect mitigating known weaknesses is a high enough bar for just about everyone. The key is that the denominator is ever increasing as we discover, and come to know about, new kinds of weaknesses and vulnerabilities.
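Rohit's proposal can be sketched in a few lines: treat the denominator as the set of *known* weakness classes relevant to the system (the class names below are just familiar examples, not a real CWE mapping), and note how a newly discovered class enlarges the denominator:

```python
# Denominator = known weakness classes relevant to this system.
# Class names are illustrative stand-ins, not a real CWE inventory.
known_classes = {"XSS", "SQLi", "CSRF", "Path Traversal", "XXE"}
mitigated = {"XSS", "SQLi", "CSRF"}

coverage = len(mitigated & known_classes) / len(known_classes)
print(f"Known-class coverage: {coverage:.0%}")  # 60%

# Discovering a new class grows the denominator and shrinks coverage:
known_classes.add("SSRF")
coverage = len(mitigated & known_classes) / len(known_classes)
print(f"After SSRF becomes known: {coverage:.0%}")  # 50%
```

The metric is honest about its own scope: it measures coverage of what we know, and it can go *down* without any code changing, which is exactly the "ever increasing denominator" Rohit describes.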

Jay Jacobs (anon) | 03-09-2012 10:38 AM

Hey Raf - the conclusion that risk reduction is "impossible to quantify" because of an unknown denominator is not exactly valid, but your other conclusion, that we need alternative ways to quantify risk reduction (rather than counting the entire population), is spot on.

There are statistical methods for estimating a population from a sample, and methods for accounting for the uncertainty in the data collection (e.g., polls during presidential elections, measuring bacterial growth). While the outcome will still contain uncertainty, it will have less uncertainty, more reliability, and more repeatability than unaided intuition, and it gives us a foundation on which to improve (it then acts as an aid to intuition).
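One classic technique in exactly this family is capture-recapture (the Lincoln-Petersen estimator), originally from wildlife biology and also applied in software inspection research: have two reviewers independently examine the same code, then estimate the total defect population from how much their findings overlap. A minimal sketch with made-up review counts:

```python
# Capture-recapture (Lincoln-Petersen) estimate of an unknown population
# from two independent samples. Applied to defects: two independent
# reviews of the same code, counting overlapping findings.
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate total population from two sample sizes and their overlap."""
    if overlap == 0:
        raise ValueError("no overlap between samples: estimate is unbounded")
    return n1 * n2 / overlap

review_a = 30   # defects found by reviewer A (hypothetical)
review_b = 25   # defects found by reviewer B (hypothetical)
both = 10       # defects found by both reviewers

estimated_total = lincoln_petersen(review_a, review_b, both)
print(f"Estimated total defects: {estimated_total:.0f}")  # ~75
```

The intuition: if the two reviewers barely overlap, each is sampling a small fraction of a large population; heavy overlap suggests the population is mostly found. The estimate carries real uncertainty (and assumes the reviews are independent), but it turns the "unknown denominator" into an estimated one.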

This is not an information security problem; it is a data analysis problem that occurs in most fields, information security included.
