Following the Wh1t3 Rabbit - Practical Enterprise Security

Enterprise Security organizations often find themselves caught between the ever-changing needs of the agile business and the ever-present, ever-evolving threats to that business. At the same time, all too often we security professionals get caught up in "shiny object syndrome," which leads us to spend poorly, allocate resources unwisely, and generally decouple from the organization we're chartered to defend. Knowing how to defend begins with knowing what you'll be defending, why it is worth defending, and who you'll be defending against... and therein lies the trick. This blog takes the issue of enterprise security head-on, challenging outdated thinking and bringing a pragmatic, business-aligned, beyond-the-tools perspective... so follow the Wh1t3 Rabbit, and remember that tools alone don't solve problems; strategic thinkers are the key.

Rafal (Principal, Strategic Security Services)

Displaying articles for: October 2009

Automated Security Testing - Can't I Just Point-n-Click? (Part 3)

So now that you've read my other 2 posts in this series, you know the options and you have the background. Let's talk about the limitations of technology, and why your brain is still required to do your job. Many folks continue to try to push the boundaries of technology, and while I applaud this effort greatly, I for one can't see us security analysts ever being replaced entirely by technology, as some would have you believe. The analytical mind still trumps technology... although I think there are some limitations based on levels of experience, etc. Read on for more...



  • Technology & Automation's Limitations


Let's face it, there are some very serious limitations to technology today, even in the product-filled world of web app security. I think you will probably agree that there are many products that solve non-existent problems... or what I would refer to as "brilliant solutions without purpose"... but we'll save that conversation for another time. Right now, if we look at automation logically, we can simply state that automation, more specifically software, is for the most part limited to pattern-matching (more in a minute). Immediately we can say that pattern-matching is a severe limitation for any technology; just look at the failed anti-virus installation on your computer. Does it protect you from every strain of every virus? What about new malware? Of course not... that's why pretty much everyone agrees anti-virus in its present form is a dead concept. Moving this into the web app sec world, we can easily say that pattern-matching is next to impossible when you look at static analysis (analysis of source code), because thanks to the brilliance of the human mind we all do things just a little bit differently. To prove the point, ask 10 developers to write the same piece of code, even a simple function; you may find 2 that are the same... maybe.


Static analysis is particularly difficult from the perspective of automation (although there are great attempts out there, I will acknowledge) because you're dealing with code. As I've written previously, static code analysis has a hard enough time dealing with theoretical vulnerabilities, much less trying to understand every developer's code. This is why there is no such thing as an "out of the box" tool that works for static code analysis. Let's be logical about it... your "sanitization" function can't possibly be anticipated by the tool you just bought to analyze your code... and while the tools available can make attempts to "learn" the way your developers code, and which functions are safe, which are scrubbers, etc., in the end it's just hours and hours of "tuning" that require... ta-da... human intervention! Consider the contrived sketch below.
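To make that concrete, here's a contrived sketch (my own illustration, not taken from any particular tool or codebase) of two developers writing "the same" sanitizer. Out of the box, a generic rules engine has no reliable way to know that either function counts as a scrubber, or that one is weaker than the other:

import re

def sanitize_v1(value: str) -> str:
    """Developer A: entity-encode the dangerous characters."""
    return (value.replace("&", "&amp;")
                 .replace("<", "&lt;")
                 .replace(">", "&gt;")
                 .replace('"', "&quot;"))

def sanitize_v2(value: str) -> str:
    """Developer B: strip anything that looks like a tag instead."""
    return re.sub(r"<[^>]*>", "", value)

# Same intent, different mechanics... and v2 is arguably weaker (it
# does nothing about attribute-context injection), which is exactly
# the judgment call a human reviewer makes during "tuning".
print(sanitize_v1('<script>alert(1)</script>'))   # encoded output
print(sanitize_v2('<script>alert(1)</script>'))   # prints: alert(1)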


Taking the case to dynamic analysis doesn't make it any prettier. Again, here we're pattern-matching expected outcomes against "negative testing". We push JavaScript (such as the ever-popular pop-up) into a form field and expect it to come back to the browser in the same way it was sent and execute a pop-up. Then we can determine that it's a vulnerability... right? It's not that simple, though, because when you look at code coming into the browser you have to analyze the context it's being piped into! If you're pushing code you have to make sure it will actually execute first... which is the challenge. Next, for your consideration, think about how we test for database manipulation (SQL Injection). We send database command syntax, appended or injected into the regular application fields, to try to elicit a database response. Of course... if the developers suppress database responses to the end-user, this makes it very difficult to detect injection when it's "incorrect"... Concepts like Blind SQL Injection are even trickier because you're injecting database commands and expecting not a direct response but a change in page-state, or a positive/negative response, which is also extremely difficult to script and contextualize. You'll notice that a lot of this comes back to context, and while software can do categorization pretty efficiently a la pattern-matching, it's impossible to account for all possible states, responses and configurations. Yikes! This is all enough to make your head spin!
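To illustrate the blind-injection problem, here's a minimal sketch of boolean-based probing, the "change in page-state" detection I just described. Everything here is illustrative: the endpoint and parameter name are hypothetical, and it assumes a Python environment with the requests package available.

import requests

TARGET = "http://testsite.example/item"   # hypothetical endpoint
PARAM = "id"                              # hypothetical parameter

def fetch(value: str) -> str:
    return requests.get(TARGET, params={PARAM: value}, timeout=10).text

baseline   = fetch("42")
true_case  = fetch("42' AND '1'='1")   # should behave like the baseline
false_case = fetch("42' AND '1'='2")   # should change the page-state

# If the always-true probe matches the baseline but the always-false
# probe does not, the parameter is *probably* injectable... but page
# content that shifts on every request (timestamps, ads, CSRF tokens)
# makes naive string comparison unreliable. This is where the context
# problem bites, and where the human has to step in.
if true_case == baseline and false_case != baseline:
    print("Possible blind SQL injection... needs human validation")
else:
    print("No obvious page-state divergence")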


I certainly don't envy the developers who are writing security analysis tools, and I can tell you first-hand that the folks who work in our HP Web Security Research Group are absolute geniuses. Scripting and pattern-matching gray-area responses is like walking a tightrope between false-positives and false-negatives... and remember that no matter what you do, people will attempt to discount the tools you build because they're either too noisy or miss too much. This is precisely why I am so big on human interaction in the process!



  • Your Brain Required


Now we get to it... your brain will continue to be required for the foreseeable future in security, more specifically in the analytical part of web app security. While technology and innovation will continue to drive better and "smarter" engines for analyzing and attacking web applications, my crystal ball tells me that people will always be necessary. Actually, not just people. People with a clue will always be necessary; there is a huge distinction! Let's venture into why humans are necessary, and why anyone selling a "point-n-click" security tool should be laughed out of the building. You see, people build software. Even the smartest people make mistakes. Therefore, even the best software will have mistakes, which often manifest themselves as security vulnerabilities. Given that, why would you trust the analysis of this potentially vulnerable software to more potentially buggy software? Make sense yet?


Software-based testing, even software-driven testing, is fine as long as there is someone schooled and reasonably accomplished in the art and science of interpreting and analyzing results. What is required here is a 2-step method we like to refer to as "validation" of findings. You see, automated tools continue to get better at finding more and more complex defects, yet the analysis of findings will always be the trickiest part of a security testing strategy. Looking at what a program/script/tool has uncovered and being able to critically deduce whether it is a true vulnerability, a false-positive, or something that simply requires more attention is critical to a security analyst's position and job description. The power of the human mind often kicks in where software leaves off, and can trigger a multitude of findings that would otherwise go undiscovered.
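As a rough illustration of what that validation pass can look like (my own sketch, not how any particular product does it; the URL and parameter are hypothetical, and it assumes the Python requests package), here is a re-test of a scanner-reported reflected-XSS finding:

import requests

PROBE = '"><wh1t3rabbit-probe>'   # unlikely-to-occur marker string

def classify_finding(url: str, param: str) -> str:
    body = requests.get(url, params={param: PROBE}, timeout=10).text
    if PROBE in body:
        # Reflected verbatim: a strong signal, but a human still has
        # to confirm the *context* (HTML body vs. attribute vs. script).
        return "likely true positive - confirm context by hand"
    if "wh1t3rabbit-probe" in body:
        # Reflected, but the angle brackets were encoded or stripped.
        return "probably a false positive - verify the encoding"
    return "not reflected here - needs more attention (stored? DOM?)"

print(classify_finding("http://testsite.example/search", "q"))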


A great example of the need for a human analyst comes from a penetration test I was a part of a while back. The automated tool uncovered a treasure trove of low-hanging vulnerabilities, including some cool SQL Injection and Cross-Site Scripting issues, as well as a crossdomain.xml issue that was pertinent to our attack. On their own these attacks could do some damage, but it wasn't until the analyst actually dug into them and noticed that they could be chained together to produce an incredible attack vector that there was (at the time) no solution for! You see, we could test for XSS, and SQLi, and even the crossdomain.xml vulnerability... but the software couldn't string those together and notice a gaping flaw in the design of the application that allowed for a complete compromise of the online application.
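For the curious, the crossdomain.xml piece of that chain is the one part a short script can sensibly check on its own. Here's a minimal sketch (hypothetical host name; assumes the Python requests package) that flags the wildcard Flash cross-domain policy that typically makes this kind of chaining possible:

import requests
import xml.etree.ElementTree as ET

def wildcard_policy(host: str) -> bool:
    """Fetch /crossdomain.xml and flag an any-domain policy."""
    resp = requests.get(f"http://{host}/crossdomain.xml", timeout=10)
    if resp.status_code != 200:
        return False   # no policy file published
    root = ET.fromstring(resp.text)
    domains = [e.get("domain", "") for e in root.iter("allow-access-from")]
    return "*" in domains   # any site may make cross-domain requests

if wildcard_policy("testsite.example"):
    print("Wildcard crossdomain.xml - a chaining candidate")

Finding the wildcard is the easy part; recognizing that it turns an ordinary XSS plus SQLi into a full compromise is the part that still takes an analyst.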


So the bottom line here is that I want you to walk away from this series able not only to understand but to intelligently speak about why a "point-n-click" security testing tool will never suffice, and why you have to have human intellect to back it. That being said, there are a number of offerings, such as HP's Web App SaaS offering, which mix the automation-and-tools approach with the human factor for when you find yourself in a situation where you just don't have the in-house expertise! What I'm saying is: don't trust your web application security to a tool, or even a collection of them, because alone they aren't telling you the whole picture. Throw away the notion that you can just point and click your way to being secure... it's never going to happen that way.


The answer, then? Education, first and foremost, is key. Make sure you either educate or hire smart, capable security analysts. Make sure that you have people who understand how attacks work, why they work, and how to detect them manually. Your analysts should be able to spot basic attacks like SQLi and XSS in a site by hand, and execute (or know where to get cheat-sheets for) the more complex attacks; a few of the classics are sketched below. You don't have to hire the uber-hax0r, just know enough to call one when you need one. The next thing is to ensure you've got the best tools in your toolbox... often this means mixing open-source and closed-source apps together into something that works best for you. Know your applications and which attacks apply... PHP-style attacks certainly won't work against IIS-based ASP.Net apps... usually. Be ready to raise your hand when you're in over your head. There's no shame in asking or acknowledging when you don't know... I do it all the time and it's quite liberating.
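For reference, here are a handful of the classic by-hand probes I have in mind, the sort of thing an analyst should recognize on sight. This is purely illustrative; real cheat-sheets (OWASP's, for example) go much deeper:

PROBES = {
    # SQL injection: break out of a quoted string
    "sqli_error": "'",                      # does a lone quote throw a DB error?
    "sqli_true":  "' OR '1'='1",            # always-true clause
    # Cross-site scripting: the "ever-popular pop-up"
    "xss_basic":  "<script>alert(1)</script>",
    "xss_attr":   '" onmouseover="alert(1)',  # attribute-context variant
}

for name, payload in PROBES.items():
    print(f"{name:12} -> {payload}")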


I hope I've managed to convince you that point-n-click security is a failed prospect. What do you think? If you're interested in a further conversation, please feel free to email me (via this blog) or get a hold of me through your HP sales rep (you probably already have one!).

Automated Security Testing - Can't I Just Point-n-Click? (Part 2)



In the previous post, I tackled the question of automation, full automation, in web application security testing. We discussed the problem in detail and underlined some of the issues we will need to address and understand. In this post, I'm going to talk through the options and the technological limitations that we face today and will continue to face deep into the future.




  • The Options


If you're going to attempt to test web applications with some measure of automation, there are a few options available to you: full automation, partial automation, and tool-assisted manual testing. Addressing them in order here...


Full automation is what most people still think of when it comes to security testing their web applications. Full automation involves simply putting a URL into a field, clicking GO, and standing back to watch the action. There are times when this is practical, but unfortunately there aren't many of those times. I've spoken with many folks recently who feel that web application security testing should be done like vulnerability scanning was when it first kicked off: point, click, and receive results. This isn't practical, because there are many possible ways this option can fail. Sadly, the less people understand, the more they want to push into full automation. Let's think about full automation for a minute. In order for a tool to perform a fully automated scan, you have to assume that the tool can analyze site structure and compute an attack strategy for the site without human intervention. Forget that you're asking a whole lot from a computer program... think about what that actually means. You'll be asking the tool you're using to understand every part of the application... fully. Can you say you understand every part of the applications you test fully? Remember, software is only as good as the people who write it, and unfortunately the people who write testing software can only make it as good as the examples they have to work with. Here a problem we'll address later starts to peek through... mounting complexity. Full-on automation requires that the tool analyze every AJAX call, every Flash object, every piece of JavaScript, every nook and cranny, and every workflow through the application. If you've heard me talk about the failure of automation on the frontier of workflows, you already know why this is such a losing proposition without human intervention, but it gets more complex than that. You're hoping that the automation component can do all the work in a pre-defined amount of time, right? Let's be realistic: most automated tools, if not properly tuned, will run for hours, days or weeks before running themselves out of memory or stack space, hopefully completing the scan first. The reasons this happens I will address later on in the technical limitations, but you're asking an awful lot of software that's testing software. Say you do get a complete scan. Say, for the sake of argument, that the tool you're using manages to completely cover the web application attack surface and finds a whole mother-lode of vulnerabilities. What you're saying now is that you want that same piece of automation (or software) to be able to validate its own findings. Fail. You already know that automation isn't perfect at finding vulnerabilities... and now you want validation for the same price? Consider that ask...


Partially manual testing is the next logical choice. Involving the human being as little as possible, but still allowing for some intervention to do the set-up and validation, makes logical sense. The problem here is that the human being has to understand what he or she is doing, otherwise this process fails. Integrating a human being into web application security testing is a scary thing... because now you're asking a human being to complement the software you're using... but it certainly has its advantages. In fact, I would argue that it's better to have a human involved than to attempt to do everything with automation, as you'll get better results 4 out of every 5 times. The problem is in the human part of this equation. Knowing what you're doing ("I'm testing a web application's security") and actually knowing what you're doing are drastically different. You also have to be trained in the tool you're using, otherwise you'll fail with even more vigor. But here's the deal: partial automation involves the human being (tester) interfacing with the tool in order to provide it not only analytical insight but also guidance on what to test, what variables to use, what to tweak and what to avoid... then analyzing the results. This is what most knowledgeable penetration testers and web application security experts do today, with varying degrees of success. Don't let anyone fool you, it's a lot tougher than you'd think to get results, particularly when they have to be consistent! Tweaking a piece of software and using it like a sledgehammer to find the low-hanging fruit is fairly easy... getting deeper and better results than the tool could achieve on its own is a little more tricky. Lots of testers simply never master this craft and either end up blaming the tool or simply giving up. Partially automating a testing tool, particularly one that's built to do evil, is an art-form and must be well understood, or the results could be not only catastrophic but also inconsistent and more dangerous than when the tool is run fully automated.


Your other option, of course, is to flip the relationship and let the human drive. What you probably don't know is that tools like WebInspect can function in this capacity brilliantly. "Penetration tester assistance mode" is what the folks who do this all the time call it. As the penetration tester looks at different areas of the site, a black-box scanning tool is used surgically, with a large amount of human guidance. In this use-case it isn't so much a human being assisting an automated tool as an automated tool being used to supplement the human being's abilities, handling the mundane and simple tasks. Furthermore, more advanced tasks, such as advanced XSS or SQLi testing, can be performed within the framework of the tool so the tester doesn't have to do it all by hand. Using the tools as an extension of the tester is a great way for someone advanced in the art and science of breakage to function... but that expertise has to be there first. You can't just jump feet-first into this type of usage model and expect to succeed.


So there we have it: 3 possible ways to engage "automated" testing tools, a la black-box security testing. The thing you must think about is which one is right for the situation you find yourself in, your knowledge level and experience, and your specific use-case. What works for one may not work for others; your mileage may vary, batteries not included, and some assembly is required.


 





Automated Security Testing - Can't I Just Point-n-Click? (Part 1)

I've been witness to an interesting phenomenon. Several otherwise rational folks, customers, prospective customers, and pundits alike, have posed the same question to me over the last several months. I've been thinking a lot about the topic and have some thoughts I think it's time to share.


The question for discussion is this: "Shouldn't a security testing tool (Web App security, black-box specifically) be able to just accept a URL and credentials and test my site, providing results without me having to intervene?"


The answer, quite simply, is an unabashed "No"... but I think it needs more of an explanation than that. It's all too easy to provide an answer without explanation, or worse, with an explanation that not everyone can understand, so I'll answer the question, explain it in detail, and give some real-life examples of why I'm answering this way. Grab a cup of coffee, get comfortable, and let's think this through rationally together. I'm going to do this as a multi-part blog entry... I can already see this taking a few hours to write, much less to read and fully comprehend...


 



  • The Main Issue


The main issue in question here is not whether computers can replace humans entirely for security testing (which I hope we can all agree is a solid no) but whether computers and automation have come far enough that a human can provide minimal input and have a test complete. The problem with this request is that we're asking automation to make decisions within the process of testing. Decision-making, so far in evolution, is best left to the human analytical brain rather than to automation, and the primary rationale here is that humans possess the ability to reason rationally whereas computers... cannot. At the core of the question is the ability to make decisions, or reason, which then either makes or breaks an automated test. Let's think about this in a different light... let's look at it from the viewpoint of a mechanic. What we're really asking is for a computer to hook up to the vehicle, diagnose the entire system without human input, and then provide a solution, testing its effectiveness without a human in the loop. Rationally we can already see where this would break down. A computer can hypothesize a problem and apply a solution "successfully" without actually solving the problem the driver had in the first place. Diagnosing a problem in a vehicle, as mechanics will tell you, is more than just something you can do from a text-book, or by taking a course. It takes years of experience to understand vehicular cause and effect, and why a rattle in the front of the car may actually be a bad bearing in your rear wheel... computers can't tell you these things, yet. The other issue in the mechanical world is that not everything can be connected to a computer system for diagnostics yet; there are still limitations. The problem extends easily to the digital world of web applications. Not everything can be analyzed properly, and we'll go into more detail in a minute on why that is.


Bringing this back to the question at hand, and whether automation can simply "do the job" of assessing a web application's security viability... we have to break the issue down into its bare components to analyze further. First, there's the identification and site functional analysis... typically we call this the "crawler phase" or "discovery phase", depending on which tool you're using. Crawling the site (or application) means clicking buttons, inputting data, and traversing the site, all while building a virtual map of what the site looks like, what the option trees are, and how traversal through the site is done legally, without attempts to subvert the site. The next major step is the pre-attack analysis, whereby the tool attempts to build the attack sequences and the tree for how the site will be attacked. This phase generally involves a lot of heavy memory and processor usage, building incredibly large and complex data structures (generally in machine memory). Once that is done and the tool is confident that all attack patterns and plans have been laid out, the attacks are launched and the tool starts to do the heavy lifting it was built for. Inevitably, during the attack process something new is discovered. Whether an attack pattern triggers some new function, or something breaks in a beautiful way... the system has to put that newly found functionality back into the control-stack of the application for re-analysis and another pass. The tool will continue making the start -> discover -> attack-build -> attack -> repeat loop over and over as long as new things are discovered... until there is nothing new left on the discovery stack. Once the tool reaches that state, it can be understood that the attack and discovery phases are complete, and the tool moves to a final attack-analysis phase. At this point it has to correlate, verify and validate the findings from throughout the process to make sure there aren't any issues with them. The last step is to present it all to the requester via a report. Whether the report is a dashboard, a PDF, or exported XML or CSV, the reporting piece is usually pretty standard and well understood. (The whole loop is sketched in code below.) Having this process completely self-contained and automated is what some people seem to want, and I'm here to tell you that's a dangerous thing to ask for.
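Here is that start -> discover -> attack-build -> attack -> repeat loop reduced to a worklist sketch in Python. The discover/attack/validate functions are stubs standing in for the real (and hard) parts; this is my own illustration of the control flow, not any vendor's implementation:

def discover(page):            # placeholder crawler
    return []                  # would return newly mapped links/forms/params

def attack(page):              # placeholder attack-build + attack engine
    return []                  # would return raw findings for this page

def validate(findings):        # placeholder final attack-analysis phase
    return findings            # would correlate, verify, and de-duplicate

def scan(start_url: str):
    discovered = set()         # everything we have ever seen
    worklist = [start_url]     # the "discovery stack"
    findings = []

    while worklist:            # loop until nothing new is discovered
        page = worklist.pop()
        if page in discovered:
            continue
        discovered.add(page)

        # Discovery phase: map the page's surface legally, no subversion.
        new_surface = discover(page)
        # Attack phase: launch the attack sequences built for this surface.
        findings.extend(attack(page))

        # Anything newly exposed (a broken page, a new function) goes
        # back on the stack for re-analysis and another pass.
        worklist.extend(s for s in new_surface if s not in discovered)

    # Final attack-analysis phase, then reporting.
    return validate(findings)

print(scan("http://testsite.example/"))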


So now that we have the problem identified... let's go talk about what options we have, why people are required, and why doing this completely in an automated fashion is a bad, bad idea.


...



There you have it... the problem is now identified, unmasked, and ready to be discussed in detail. The upcoming post will detail some of the options we have for solving this issue and the technological limitations we are faced with today and into the future. The last post in this series will go deep into the reasoning for why I continue to say that your brain will always be required. Until next time!

Is Anybody Listening?

Greetings! I am finally back home after an exhausting trip which had me speaking at 2 conferences back-to-back, in separate countries and on opposite sides of the continent! I did learn some valuable lessons from speaking at these two wildly different conferences though, so I thought I would share them here for your benefit too.


First off, the Information Security conference I attended on Tuesday in Toronto, called "SecTor", was brilliantly run and targeted at Canadian information security professionals and wanna-be security professionals. It's OK to say it: there are plenty of people who attend these conferences looking to break into the business, wanting to learn enough about information security to get a grounding in what the industry is about. My talk, "When Web 2.0 Attacks", was well-attended and I even had some big names in my audience (thanks to RSnake, Hoff and a few others that wandered in and out), and I think the overall impression was that the stuff I presented was relevant to people's daily lives in Information Security. That's kind of the problem though...


You see, while I ordinarily wouldn't think twice about educating those in my field... someone who's been doing this a while longer than I have reminded me a while back that this is what we'd call "preaching to the choir". Sure, I tend to agree that even within Information Security not enough people understand Web App Sec well enough to build a program and actually reduce any real risks, but those folks have been hearing this talk for years upon years, right? At some point I'm bound to hit the law of diminishing returns; and furthermore, people who didn't agree with me 6 months ago aren't likely to agree with me today. Great conference, great mind-share, but it's definitely time to reach a broader audience.


That's where the next conference I spoke at comes in. Wednesday morning at 4:00am Central time (yea, AM), while some of my colleagues were stumbling into their hotel rooms in downtown Toronto, I was hopping into a car to be driven to the airport and head out west. My destination was Anaheim, CA, where I would speak at StarWest later that day. I'm still not sure how, through the delayed flight, sickness, and an almost-missed connection, I made it out to the West Coast by 2pm, but I did... and StarWest was awesome.


StarWest (run by the SQE folks, www.SQE.com) is nicely put together and serves an entirely new audience. Here at StarWest (although I did find it strange that we were in the heart of Disneyland!) the audience was almost entirely composed of software test engineers, managers, and others related to the field. This was a completely different set of ears than I'm used to... and this was a good thing.


The first thing I heard when I put my welcome slide up was "Hey, isn't security supposed to be done by the security people?" Love it. This is exactly the mentality, and exactly the walls, I was there to break down. I think as we went through the hour-long session on "Detective Work for Testers..." I managed to convince a few people in the audience that their jobs were closely tied to mine in Information Security. Maybe, maybe not. The bottom line is that many great folks came up and talked to me afterwards, and through the end of the conference, about the absolutely missing component in their SDL: security. One lady in the audience (although she fled before I could get more out of her, and I had to track her down myself later on the show floor) told me that her security team is the developers, and that because they tell the bosses they don't have security issues, no one ever tests the code. I wish I could recall where she worked; hopefully no place important, like a bank or anything...


The point is: this was the right audience. If you were there and came to my talk, awesome! If you missed it, the slides are posted and we can talk about it whenever you have some time.


Do you believe that Information Security and Software Quality testing are one and the same? Do you believe that a quality defect may as well be a security defect? Can you successfully explain the difference between a security bug and a quality bug?


... I'm fairly sure I have my target audience for the foreseeable future. Listen up, quality testers: I'm coming to a conference near you!


 
