Will the ‘real’ IT security researcher please stand up?

In contrast to 10 years ago, security news and flaw reports are now common in the mainstream media. It will not be long before we see a permanent section in technology news reporting security flaws from so-called security researchers. But is this ‘real research’, and is this information helping the situation in the long run?

 

Fundamentally, research involves a scientific and methodical approach to improving the state of the art. This involves:

(1) surveying the current strengths, limitations, and schools of thought;

(2) proposing and implementing new approaches; and

(3) testing these revolutionary approaches to create a game-changing or disruptive innovation which not only solves the problem but makes all previous solutions (and problems) obsolete.

 

Let’s take a step back and look across the security news headlines again, and you will soon realize that most of the articles are still at point (1); rarely do you come across any research at (2) or (3). As an IT security researcher, this is the main concern I have with my industry.

 

Most security researchers are still comfortable identifying flaws or racing to be the first to discover zero-day vulnerabilities. But wait a minute: is this productive? To err is human, so why is it surprising to find flaws in new software or applications? Yes, one can point out that mobile phones or even modern automobile systems have security flaws, but is this newsworthy? Is it revolutionary, and does it make the situation better or worse?

If a fire breaks out, which kind of people would you prefer? The ones who incessantly scream, “Look, there is a fire!”, or the ones who actually put out the fire and then gather together to redesign the place to be more fire-safe in the future?

 

Being a critic is easier than being an innovator, or being the engineer who labors for hours or even years to create something beautiful and useful for society. So, are most IT security researchers really helping the situation, or simply pointing fingers? Granted, they report the flaws to the software companies in the hope that the companies will fix them, but how many actually follow through to create the quantum leap that prevents similar events from happening? Apart from fear mongering, what else can they achieve?

 

We already have enough digital garbage, and generating more ‘research’ which reveals nothing but flaws and offers no solutions keeps the cycle reactive and unsustainable. This eventually makes the race more and more difficult for the ‘good guys’.

 

There needs to be research which genuinely addresses the reactive nature of security solutions and works to outsmart impending security threats. That’s what we are striving to do at HP Labs’ Cloud and Security Lab, based in Singapore, Bristol and Princeton, where a number of researchers are working on long-term and impending cloud security issues. For example, our TrustCloud project addresses key issues and challenges in achieving a trusted and accountable cloud through detective controls, via technical and policy-based approaches. Our G-Cloud project is a program to develop a cloud infrastructure with government-grade security, maintaining flexibility and efficiency while making sure that services are protected against future cyber attacks.

 

An encouraging trend is the recent rise in interest from both academia and government-linked research institutions in addressing security issues via fundamental research methodologies. For example, just a few years ago, expensive biometrics-based security research was all the rage, as there was an urgent need to solve serious authentication breaches. Interestingly, it took the simple proposal of two-factor authentication (2FA), e.g. one-time passwords, to remove the need for elaborate biometric equipment, since the approach simply leverages existing tools such as our mobile devices and platforms.
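To make the 2FA example concrete, here is a minimal sketch of a time-based one-time password (TOTP) generator along the lines of RFC 6238, using OpenSSL’s HMAC. The 30-second time step, six digits, and the hard-coded test secret are illustrative assumptions, not a production implementation (compile with something like cc totp.c -lcrypto):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* One HOTP/TOTP step: HMAC-SHA1 over a big-endian 64-bit counter,
 * then "dynamic truncation" down to a short decimal code (RFC 4226/6238). */
static uint32_t totp(const uint8_t *key, size_t keylen, uint64_t counter, int digits)
{
    uint8_t msg[8];
    for (int i = 7; i >= 0; i--) {       /* counter as 8 big-endian bytes */
        msg[i] = counter & 0xff;
        counter >>= 8;
    }

    uint8_t mac[EVP_MAX_MD_SIZE];
    unsigned int maclen = 0;
    HMAC(EVP_sha1(), key, (int)keylen, msg, sizeof msg, mac, &maclen);

    int offset = mac[maclen - 1] & 0x0f; /* dynamic truncation offset */
    uint32_t bin = ((uint32_t)(mac[offset] & 0x7f) << 24)
                 | ((uint32_t)mac[offset + 1] << 16)
                 | ((uint32_t)mac[offset + 2] << 8)
                 |  (uint32_t)mac[offset + 3];

    uint32_t mod = 1;
    for (int i = 0; i < digits; i++)
        mod *= 10;
    return bin % mod;
}

int main(void)
{
    /* The ASCII test secret from the RFCs; a real deployment would share
     * a per-user random secret with the phone app instead. */
    const uint8_t key[] = "12345678901234567890";
    uint64_t step = (uint64_t)time(NULL) / 30;   /* 30-second time window */
    printf("%06u\n", (unsigned)totp(key, sizeof key - 1, step, 6));
    return 0;
}
```

The design point is that the server and the user’s phone each compute the same short-lived code from a shared secret and the current time, so the “something you have” factor needs no special hardware.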

 

This kind of disruptive innovation is what the security industry needs to see, so that it changes the playing field for the “bad guys” and lengthens the time they need to outsmart our systems. Oh yes, 20-year-old problems such as buffer overflows still exist. I wonder why…
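Since the post mentions it, here is a minimal sketch of what that 20-year-old buffer overflow problem looks like in C, together with the equally old fix (the function names are made up for illustration):

```c
#include <stdio.h>
#include <string.h>

/* Classic flaw: strcpy() keeps writing past buf if the input is
 * longer than 15 characters, corrupting adjacent memory. */
static void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);                      /* no bounds check */
    printf("%s\n", buf);
}

/* The long-known fix: bound every copy to the destination's size. */
static void safer(const char *input)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);  /* truncates, never overflows */
    printf("%s\n", buf);
}

int main(void)
{
    vulnerable("short");                     /* fine only while input is short */
    safer("this string is longer than sixteen bytes");
    return 0;
}
```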

 

Comments
Wh1t3Rabbit | 02-01-2012 09:15 AM

Well now, that certainly is a stance against much of the established security community, which prides itself on the discovery and exploitation of 0-day security bugs. An interesting perspective, but I have to tell you I do agree with a significant amount of what you're saying - all break and no fix makes for a very poor culture of destruction.

 

I'm not sure that information security takes itself as seriously as other fields of legitimate research, so many of the rules simply don't appear to apply, as you've stated. This actually has been something I've been writing and speaking about for a long, long time on my blog (hp.com/go/white-rabbit). If we're just a bunch of 'breakers', we're really not solving any problems, as you've pointed out. Security researchers are infamous (this is not a good thing) for "dumping off a vulnerability" on the doorstep of a company or organization, then threatening to expose them for having it in their code or architecture. Yes, it's the company's responsibility to remediate or ensure it doesn't happen again, and yes, many organizations simply act as if they don't care and drag their knuckles ... but this falls back to the security research community.

 

Go to any conference - security conference, that is - and look around. The talks that get the big crowds are the "How I hacked ... and you can too" ones, while the ones where people are offering real solutions to serious problems are sparsely attended ... why is that? It's part human nature - we all can't turn away from a train wreck - and part a need to be in the spotlight and 'cool', I guess. Or maybe we just need to demonstrate our mental superiority? If that's the case, I suggest we do it the way your labs are doing it - by solving problems that plague organizations globally. Solving real security issues, on massive scales, is where real security research should more keenly focus today. Just my $0.02 ...

 

/Wh1t3 Rabbit.

Adrian Sanabria(anon) | 02-01-2012 06:19 PM

The "Fire" analogy doesn't really work. The "breakers" are not at all analogous to someone shouting fire. That is the staff you (hopefully) have watching for alerts and incidents. The reason is that, when the security researcher finds an issue, there is no fire yet, because they are not the bad guys.

 

If we are to use the "Fire" analogy, the security researcher would be a building or fire inspector. They point out and say, "hey, this is an issue, and it could start a fire". We don't expect the fire inspector to help fix the issue beyond making a few suggestions. Similarly, why is it that we expect the "fixer" to be a security specialist and not an IT generalist?

 

In my opinion, the "security fixer" skillset instead should belong with our everyday admins, developers and engineers. They know the environment, and they'll know what fix fits best without breaking/compromising productivity, usability and budgets.

RyanKo | 02-01-2012 09:54 PM

Hi Wh1t3Rabbit and Adrian, 

 

Thanks for your comments, and thanks for tweeting/sharing.

 

I am glad the post gained some traction and made readers think about the issue, and perhaps take a stand like you did. Honestly, I struggled for a while before posting this draft, as it may prove divisive, but I thought I'd do it, as all I see in the news nowadays are what you would call breakers. While they are undoubtedly important, I have a real concern about the future of the internet for our children's generation. We need more fixers, and I hope that we can encourage more with that mindset.

 

At the same time, there is another important area - secure software engineering. I believe that there is a lot of room in schools and higher learning institutions to teach security-grounded software engineering, and at least inculcate the culture of secure application development into the minds of fledgling software engineers. 

 

I welcome more comments and perspectives, and am looking forward to them. :)

 

 

RyanKo

Nadhan | 02-02-2012 12:14 PM

Ryan, Wh1t3Rabbit and Adrian,

 

Notwithstanding the excellent work being done by HP Labs, the onus today still lies on the consumer of cloud services to ensure the overall security of the solution, as I outline in my post here. Also, the hype and perceived financial benefits of the Cloud are likely to eclipse concerns about security. Enterprises deploying solutions in the Cloud must understand and appreciate the disastrous consequences of security being compromised. I have outlined some analogies to reinforce this point.

Argo Pollis(anon) | 02-04-2012 09:22 AM

This was a very interesting post and its subject is spot on in today's environment. I agree with much of what you've said, but I want to go a step further. I think computer security research seems (at least to me) to have come to a dead end. I think the handwriting was "on the wall" as far back as the 1980s, when researchers began trying to fix security problems by break-and-patch methods, until they realized that it was a never-ending chase. Applying abstraction to software allowed us to see how to structure and write good code, and we could even sometimes prove its security properties too. But flaws in coding and logic always seemed to creep back in. In the late 1980s there was also great hope of beating viruses. I recently had occasion to talk to an exec at a large AV company who admits "they" (the virus creators) have won. I could go down the list of exciting technologies (firewalls, strict typing, trusted systems, PKI, etc.) that got their start back then and have made no real difference. As a researcher and designer I am still faced with roughly the same kind of security challenges. Electronic commerce on the Internet of today and tomorrow is at best statistically "safe" (it is never secure), and telling customers otherwise is just not honest. As researchers we ought not kid ourselves about the nature of what we do.

Sheikh Habib(anon) | 02-04-2012 12:37 PM

Ryan, an excellent and timely post, I would say. I have had the same observations for several years, and finally someone has really stood up to ask the question on an open podium. Keep it up!!!

RyanKo | 02-05-2012 06:25 AM

Hi Argo and Sheikh,

 

Thanks for your comments. Argo, you are right. According to a Sophos report, a new (unique) piece of malware is created every half second around the world. The way we are currently approaching this is definitely a game we cannot win.

 

We need to find a new way to protect our precious IT resources. Let's continue to strive hard and keep working on this cause. Do feel free to retweet or share this blog and invite your friends to discuss their thoughts. I welcome a lively, open discussion, and I hope it raises the security community's awareness of a problem that may otherwise grow to an uncontrollable scale.

 

Ryan  

Honored Contributor | 02-06-2012 12:50 AM

Hello everybody,

 

Trying to answer some of the points of the original post and some of the other responses:

 

First, as others already wrote, the analogy used does not fit.

To bring a (hopefully) better one: it seems the work of the Watergate journalists is still appreciated (except by the Nixons) - but nobody expected them to "fix democracy".

All they did was shout "fire", but that was what was needed!

So IMHO this is still necessary (to what degree is another question entirely).

 

One of the reasons (as I see it, after over 25 years in IT, including development, troubleshooting, administration, and teaching) is the very simple - and very stupid! - fact that all the student books, trainings, and such still do not care about the need to teach programming in a stable/defensive/reliable way!

Take the original "hello world" (or any of today's newer versions):

it does not care about the return value of "printf()"; it is not even mentioned that there is a return value, and nobody seems to wonder what it might be good for (AFAIK).

This might go back to the principle of "partial correctness", where the theory of operation/programming only cares about "valid input" and never about anything else (leading to a whole industry delivering "fuzzing tools", and countless hacks).
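To illustrate the point, a "hello world" that actually looks at that return value might be sketched like this (even writing to stdout can fail, e.g. on a closed pipe or a full disk):

```c
#include <stdio.h>

int main(void)
{
    /* printf() returns the number of characters written,
       or a negative value on failure - which almost no
       beginner's book ever mentions. */
    if (printf("Hello, world!\n") < 0) {
        perror("printf");
        return 1;
    }
    return 0;
}
```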

 

Going on from that point of view I do agree that there is a lot lacking in today's "research": we do need

- research about "why does nobody care"

- research on "how to teach it better"

- research on "ROI on doing it right"

- research on "how to communicate security issues to the developers"

 

To use just another example: for hundreds of years, people learned "how to use a sword" with blunt weapons first, and while wearing protective armor - nobody would teach youngsters with razor-sharp Japanese katanas, wearing only t-shirts, in the very first lesson.

But that is what we do in IT:

- "C" is NOT the candidate for the "first programming language" and it never was!

- learning to configure tools like "HP Operations Manager" or "HP BSM" on-the-job when installing a productive system was never a good idea!

- designing a distributed system by "letting it grow" (think about SCADA systems) has proven to be disastrous!

 

Maybe we need even bigger disasters, before we can start with a safer mindset?

 

But can we blame all of that on just that small group called "security researchers" (them being the bad guys, then)?

 

Why don't we read about the people making the (very bad) decisions NOT to include any kind of security in the design, the development, the roll-out, and/or the maintenance of these systems?

Certainly, we have all read about people like @0xcharlie or @taviso being blamed for their research, but what are the names of the people who caused the bad designs/systems/implementations which allowed them to find something?

 

Regards,

Wodisch

About the Author
Dr. Ryan K L Ko is a researcher with the Cloud and Security Lab, HP Labs Singapore. He currently leads HP Labs' TrustCloud project and Cloud...
The opinions expressed above are the personal opinions of the authors, not of HP.