Following the Wh1t3 Rabbit - Practical Enterprise Security

Enterprise security organizations often find themselves caught between the ever-changing needs of an agile business and the ever-present, ever-evolving threats to that business. At the same time, all too often we security professionals get caught up in "shiny object syndrome," which leads us to spend poorly, allocate resources unwisely, and generally decouple from the organization we're chartered to defend. Knowing how to defend begins with knowing what you'll be defending, why it is worth defending, and whom you'll be defending against... and therein lies the trick. This blog takes the issue of enterprise security head-on, challenging outdated thinking and bringing a pragmatic, business-aligned, beyond-the-tools perspective... so follow the Wh1t3 Rabbit, and remember that tools alone don't solve problems; strategic thinkers are the key.

Rafal (Principal, Strategic Security Services)

Harsh Reality - Life in InfoSec

  It's Monday again, and it's absolutely brain-numbingly cold here in Chicago... but I wanted to get these thoughts down before they fell out of my brain to make room for new stuff.

  Last week I had the pleasure of meeting with a group of guys who run the Information Security practice within one of the largest and most respected retailers to the "hip" crowd... these folks live on sales volume and press, good or bad.  I think they've got some unique challenges, so I wanted to present the angle I proposed in case it's useful to anyone else.

  First off, they have a very small "security" team, mainly consisting of compliance activities and common "operational security" tasks such as identity provisioning, anti-virus, and firewalls - you get the picture.  They also have a relatively well-established QA team, which is critical to the success of their online retail component - so the established value of that team is there.  This is unlike the value of the security team - which unfortunately doesn't have a good foothold... not for lack of trying, from what I heard.  Their problem?  No one cares about security.  (Sound familiar yet?)

  To overcome some of these challenges we focused on what was important to the business from an IT perspective - software quality.  More specifically, the quality of the online application(s) was important to this customer.  Having their eCommerce site(s) up and available for business is top priority.  Given that information, we can quickly re-tool our approach and make *security* a component of the overall quality cycle.  I know, some of you security purists are probably mad at me right now, but this is the harsh reality of life in a downturn.  Why not, though, use the business-critical areas to get the job done?  The security guys know they need tighter security, but maybe the business doesn't care so much - except to check the box of compliance (PCI-DSS) - so I think taking a modified approach is the only way to fly in cases like this.

  Making security a sub-component of overall software quality works like this.  Security, among other things, aims to keep a site/application "up and running" and resistant to hacking.  Hacking oftentimes causes Denial-of-Service conditions, so there we have link #1 to quality and uptime.  The second link is a little more vague.  Hacking an application potentially means loss of data, and that can lead to downtime and disrupt the consumer's ability to purchase - basically data corruption.  I know these aren't ideal links, and you'll like the PCI "compliance" link even less, I'm sure - but there you have it.

  Those 3 links into application quality may be the difference between *zero* security budget and getting *some* security budget.  Now, the question of TTH (from Jeremiah Grossman: Time-To-Hack) may come into play again... we have to ask ourselves whether what we're doing makes any difference in the time it takes to take the app down and steal the data.  Maybe yes, maybe no, right?  The main point here for these guys is to demonstrate due diligence for PCI compliance.  While this is a bit of a sad commentary on the way of the world and how much security *really* matters... at least they're doing something.

   Keep pushing guys, you're on the right track!

PCI Compliance Madness - See! I'm not insane!

 Rich Mogull over at Securosis totally nailed it.  The article he put up on the Web Application Firewall (although it's still a misnamed product - see my rant here) vs. secure coding is brilliant.  I've been saying this for as long as I can remember hearing about "WAFs"... and it's nice to see someone people actually recognize (Rich is an industry heavyweight) echo this sentiment... although the analogy involving Cajuns and gumbo is probably beyond my abilities :)

I'm still thinking about this as I sit here re-reading the current PCI DSS standard (and supporting documentation):

6.6 For public-facing web applications, address new threats and vulnerabilities on an ongoing basis and ensure these applications are protected against known attacks by either of the following methods:

  • Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes
  • Installing a web-application firewall in front of public-facing web applications



A few things immediately hit me that I felt the need to comment on, because my mind now thinks in terms of "if I'm a business leader, how do I find loopholes in this...".  Here are my thoughts:

  1. I have an issue with the term public-facing being there.  I'd be OK with business-critical, or something else that indicates the application/site hosts critical data (such as user information or credit card numbers).  What if I'm a card processor with 100 "public-facing" sites, but they all happen to be brochure-ware?  Does it make sense to put non-mission-critical sites (containing no critical data) through this review process?
  2. "... after any changes" - so if I change the background, or add new legal verbiage I have to re-submit my site to inspection?  That makes no sense from a business perspective... does it?
  3. Notice that it says "Review" and not "Review and mitigate any critical issues found within x time-frame"; does this bother anyone else?
  4. The word "either" implies an OR clause here... why does the PCI DSS council see Security Review and added protection as an OR?
As you can guess, I can come up with no fewer than 5 scenarios in which I'm horribly security-deficient while still being PCI Compliant.  So once again, I'm going to return to this question, and I want everyone to think about it carefully.  Would you rather be PCI Compliant, or secure?  Further, does compliance equal security?

Navigating the PCI DSS Standards...

For those of you who keep up with the PCI DSS standard, the council today issued an update titled: Information Supplement: Requirement 6.6 Code Reviews and Application Firewalls Clarified.

  Item 6.6 of the standard has been further clarified; as before, there are two options - either Application Code Reviews or an Application Firewall.  I'll address the first option, since that is the more logical one, but will briefly talk about the Application Firewall as well, just to clear the air a bit.  As the Standards Council continues to add clarification, the standard becomes more usable, and more opportunities for compliance surface with less cost and effort.  No doubt the IT manager feels this is a good thing, because now the cost of compliance won't necessarily be astronomical - thereby making it viable.  As we all know, compliance with a non-government regulation is always a balancing act.  Compliance, like most security components, is an equation balancing risk against spending and business value.  If the equation balances just right, the business benefits from the added security and sees value without spending more money than the risk is worth - and security feels worthwhile because risk has been decreased by some factor that affects the business positively.  Granted, it doesn't always work out quite so rosily, but the PCI DSS standard is going a long way toward making sure that these equations, which happen every day in businesses throughout the world, balance.
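That balancing act can be sketched with the classic annualized loss expectancy (ALE) model - the figures below are invented purely for illustration, not drawn from any real assessment:

```python
# Back-of-the-envelope risk-vs-spend balance: a control "balances" when the
# risk reduction it buys exceeds what it costs. All numbers are hypothetical.
def annualized_loss(single_loss: float, annual_rate: float) -> float:
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return single_loss * annual_rate

ale_before = annualized_loss(500_000, 0.10)  # expected yearly loss, no control
ale_after = annualized_loss(500_000, 0.02)   # expected yearly loss with control
control_cost = 30_000                        # yearly cost of the control

net_benefit = (ale_before - ale_after) - control_cost
print(net_benefit)  # 10000.0 -> risk reduced by more than we spent
```

If the net benefit goes negative, the business is paying more for the control than the risk is worth - which is exactly the equation the Council is trying to help businesses balance.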

  Now - on to the meat of the standards update.  First off, let me address the Web Application Firewall issue.  While this is a topic that deserves a whole article unto itself, the short version is this - web app firewalls are very expensive, complex band-aids.  That's the reality.  While many of them work phenomenally well - and in fact I can name a few that do - they are difficult to implement into an existing production environment, they are primarily signature-based (remember how well we stop "unknown" viruses?), or they have some other architectural quirk that makes them an impossibility in your enterprise.  Personally, at my previous company I started to implement a particular WAF... but it took over a year and a half of research, testing, approvals, and more testing to get it into a newly built environment... not even into the legacy production environment where it would have provided the most value.  Anyway... the point is - a WAF is a tool you use when you don't have the resources to "do it right"... fix the code itself.
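To see why signature-based filtering is a band-aid, here's the core idea of a signature WAF boiled down to a few lines.  The patterns are my own toy examples, nothing like any vendor's rule set - and notice how a trivial obfuscation walks right past them:

```python
import re

# A toy signature-based request filter - the essence of many WAFs.
# These two signatures are illustrative only.
SIGNATURES = [
    re.compile(r"(?i)<script\b"),           # naive reflected-XSS signature
    re.compile(r"(?i)\bunion\s+select\b"),  # naive SQL-injection signature
]

def waf_allows(request_body: str) -> bool:
    """Return False if any known-bad signature matches the request."""
    return not any(sig.search(request_body) for sig in SIGNATURES)

print(waf_allows("name=Alice"))                    # True: benign input passes
print(waf_allows("q=1 UNION SELECT password"))     # False: matches a signature
print(waf_allows("q=1 UNIUN SELECT password"))     # True: obfuscation slips through
```

The last case is the "unknown virus" problem in miniature: a vulnerability in the code is still there, the filter just stopped recognizing the attack string.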

  Section 6.6 of the PCI DSS standards, option 1 (the Application Code Reviews) now has 4 basic alternatives.  Candidates are urged to implement at least one of the following...

  • Manual review of application source code
  • Proper use of automated application source code analyzer tools (static code scanners)
  • Manual web application security vulnerability assessment
  • Proper use of automated web application security vulnerability assessment tools

  First, a few things of note.  The above does not necessarily call for a "penetration test," in which an ethical hacker exploits vulnerabilities... only an "assessment" (which identifies, but does not exploit, vulnerabilities) is required.  The distinction is important because it means you can now do this on production code, or in a production environment, without risking damage to your applications just to prove they're vulnerable.
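For the curious, the "automated source code analyzer" option, in its simplest possible form, just pattern-matches source text for risky constructs.  A toy sketch follows - the flagged patterns are my own assumptions, and real static analyzers parse the code and model data flow rather than string-matching:

```python
# Toy static analysis: flag risky constructs in source text, line by line.
# Pattern list is illustrative; commercial tools use parsing and data-flow analysis.
RISKY = {
    "eval(": "dynamic code execution",
    "os.system(": "shell command injection risk",
    "pickle.loads(": "unsafe deserialization",
}

def scan_source(source: str) -> list:
    """Return (line_number, pattern, issue) for each risky construct found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, issue in RISKY.items():
            if pattern in line:
                findings.append((lineno, pattern, issue))
    return findings

sample = "import os\nos.system(user_input)\nprint('done')\n"
for finding in scan_source(sample):
    print(finding)  # -> (2, 'os.system(', 'shell command injection risk')
```

Even this crude version shows why the update stresses assessor competence: the tool can point at line 2, but a human still has to understand whether the finding is real and what to do about it.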

  I find it interesting that the update directly says "In all cases, the individual(s) must have the proper skills and experience to understand the source code and/or web application, know how to evaluate each for vulnerabilities, and understand the findings".  The fact that DSS requirement 6.6 now specifically addresses the competence of the assessor suggests either that there was some... question... over the competence of assessors, or a need to spell out that only qualified people should be doing assessments.  Interesting, from either angle.  The same statement applies to assessors using automated tools - but now we have an interesting proposition.  Do you (a) hire an extremely qualified application vulnerability tester, or (b) hire a knowledgeable user and give him a software testing/scanning tool and some training... and are those even the same?  Obviously the dollar amounts for the two are different... or are they?  There is also the point about the testers having to be (the authors use the words "should be") organizationally separate from those writing the code... well, that makes sense.  No one wants the fox guarding the hen-house, right?  You don't want the same developers who are potentially churning out insecure code to then review it and give themselves gold stars.  So far so good.

  So now we have 2 options for doing this internally - which will help our bottom line (external 3rd parties are typically very, very expensive)...

  1. The first option is an SDLC-integrated code review... actually reviewing the application code before it gets compiled and leaves the development group's control.  We have the option to do it manually, or with some tools - using only highly trained and knowledgeable people.
  2. The second option is a post-development analysis of the code.  Once the code is written, built, and tested for usability issues, it's time to security-test it with, again, either a human being or some black-box testing tool(s) - but you must use trained and knowledgeable people here as well.
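The second option's black-box tools largely boil down to submitting marker payloads and checking whether the application reflects them unencoded.  Here's a minimal offline sketch of that one check - the probe string and function names are mine for illustration, and real scanners do vastly more:

```python
import html

PROBE = "<zz-probe-7721>"  # unlikely-to-collide marker payload

def reflects_unencoded(probe: str, response_body: str) -> bool:
    """A response echoing the raw probe suggests missing output encoding."""
    return probe in response_body

# Simulated responses a scanner might receive after submitting PROBE in a form:
vulnerable = f"<p>You searched for {PROBE}</p>"               # raw reflection
safe = f"<p>You searched for {html.escape(PROBE)}</p>"        # encoded reflection

print(reflects_unencoded(PROBE, vulnerable))  # True: flag for manual review
print(reflects_unencoded(PROBE, safe))        # False
```

Note this "assessment" only identifies a suspicious reflection; it never exploits anything - which is exactly the distinction that makes it safe to run against production.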

  Well, if I'm a security manager... this is great!  Thinking logically, you always want the security expertise in-house, and why in the world wouldn't you want to do application security throughout the application lifecycle?  The DSS update also reminds us of requirement 6.3 and the need for an effective change-control process, such that the security reviews are not bypassed at any level.  While the final sign-off must be done when the code is ready for production, it's imperative that application security be pushed as far back into the development lifecycle (pre-development planning?  requirements gathering?) as possible (more on that in a separate article).

  As a final note - the update talks about the need to stay current and abreast of new developments in application security testing.  It's essential that whatever tools are purchased (whether the full SDLC suite from HP/ASC, or some other vendor's) - that these tools and their users be continually updated by the brightest minds in the field.  This is unfortunately an uphill battle against the "bad guys"... if you're behind, you're sunk.

So what have we learned from the new Section 6.6 update?

  • There are at least 4 ways to interpret the "Application Code Review" guideline
  • You can have your code "reviewed" internally, as long as your assessor is trained and competent (but to whose qualifications?)
  • You can use automated tools, either static code analysis or black-box testing software, if you have your people trained in those tools, and application security
  • Your testers/assessors have to be organizationally separate from the development organization (but at what level?)
  • Your organization should absolutely integrate application security as early into the SDLC as possible, using "tools and rules" in combination
  • Your testers should always be up-to-date on the latest developments, techniques, and methods... otherwise you're bringing a knife to a gunfight

Thanks for your attention!

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation