Enterprise Security organizations often find themselves caught between the ever-changing needs of an agile business and the ever-present, ever-evolving threats to that business. At the same time, all too often we security professionals get caught up in "shiny object syndrome," which leads us to spend poorly, allocate resources unwisely, and generally decouple from the organization we're chartered to defend. Knowing how to defend begins with knowing what you'll be defending, why it is worth defending, and who you'll be defending against... and therein lies the trick. This blog takes the issue of enterprise security head-on, challenging outdated thinking and bringing a pragmatic, business-aligned, beyond-the-tools perspective... so follow the Wh1t3 Rabbit, and remember that tools alone don't solve problems; strategic thinkers are the key.
Rafal (Principal, Strategic Security Services)
"What do you say to organizations considering software security, but struggling with adoption due to the inevitable, additional drag on release cycles?" -- I say read this, because there is a discussion to be had, still...
Lots going on in the enterprise space right now, including the rush to push out mobile apps. They're springing up like weeds, replacing websites, and are gaining multi-factor authentication for security... but wait, does any of this added security make sense, especially on the mobile platform?
Poor software quality creates fragile software ecosystems.
A great piece I read recently made me think about the ripple effect software quality can have downstream and how waves get bigger the further they are from the source...
DtR Podcast Episode 26 is with the man many of you love to hate on - but he's doing a phenomenal job ... hear his story as Adobe's Brad Arkin tells you about "Software Security Under Pressure"...
Software security is a key piece of your enterprise security strategy ... what do we think about that, and how do we help thousands of global customers get a handle on this very difficult problem? Check out this video from iconic Times Square, NYC...
Ask yourself something: who pays for the security bugs in the enterprise software you buy to be fixed?
What about all of the hidden, often forgotten costs? I address that in this post, and tell you a little bit about what we're doing to alleviate customer pains in this area.
Today I'm taking a break from the daily news of information security to reflect on the nature of the security of 'code'. I will readily tell you I'm a terrible programmer, and I learned that way back when I wrote my first BASIC programs, then moved on to Turbo Pascal (that was sure a dead end, wasn't it?) in early high school, and then C, C++, and so on. I'm not very good at writing optimized, intelligent code - but what I've learned to do over the years is read an incredible array of languages for the subtleties that make them 'break' in the security sense. I've studied Ada95, FORTRAN, MIPS-RISC, and other weird and now long-dead languages and formats over the years, and it all comes down to the same thing every time.
I've always found sandboxes interesting, particularly from a cost-benefit analysis perspective.
As a developer you should be writing good code, period. But when the pace of developing new functionality outpaces the ability to do complete software security analysis we see security organizations turning to sandboxing as a method of limiting the amount of damage an exploited piece of code can do. Just ask Adobe if you want a good example.
Does it make sense to spend time designing, coding, testing and deploying a sandbox, when the real issue is in the underlying application you're trying to protect the operating system from? I'll let you answer that for yourself.
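To make the "limiting the damage" idea concrete, here is a minimal, Unix-only sketch of the crudest form of containment: running untrusted work in a child process with hard resource ceilings. The function name and limits are my own illustration, not any vendor's implementation; a real sandbox (seccomp, AppArmor, or a broker-process design like Adobe's Reader sandbox) restricts syscalls and filesystem access too, not just CPU and memory.

```python
import resource
import subprocess
import sys

def run_sandboxed(cmd, cpu_seconds=2, mem_bytes=512 * 1024 * 1024):
    """Run a command in a child process under hard resource ceilings.

    Only caps CPU time and address space; exploited code in the child
    can still do anything those limits don't cover.
    """
    def apply_limits():
        # Runs in the child just before exec(), so the limits
        # apply only to the untrusted process, not the parent.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(cmd, preexec_fn=apply_limits,
                          capture_output=True, text=True)

# Even if this child is exploited, it cannot spin the CPU forever
# or allocate unbounded memory.
result = run_sandboxed([sys.executable, "-c", "print('untrusted code ran')"])
print(result.stdout.strip())
```

Note the asymmetry the post points at: the sandbox took design, code, and testing effort, yet the vulnerability in the sandboxed application is still there - you've only narrowed what an exploit can reach.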
How do you, the owner of a compromised web site, reset your users' accounts after a full compromise like the one Care2 or any number of recently hacked sites experienced? It's a lot more complex than you think, and sites that haven't planned for this type of compromise in their design are finding their password-reset controls wholly inadequate and unable to keep attackers at bay post-compromise.
I've been thinking a lot lately about where the Internet as we know it is evolving, given the technology space I work in and the type of research going on around here at HP... but one really interesting recent theme has been the heralding of the "Death of the Web"... or, put more accurately, the "death of the document-based web". This article on GigaOM by Dominiek ter Heide caught my attention because it was a really good, rational explanation of something I completely agree is already happening.
This post is a follow-up to the previous one on QA: Defect vs. Vulnerability. All the highly-intelligent responses I received got me thinking further, and so here I present my additional thoughts.
This may not be revolutionary - but given the response I received regarding the terminology difference between defect and vulnerability I think the only logical conclusion we can reach is that if security is not a foundational business requirement, we're sunk.
To expand on this point a little more, it's important to follow some non-technical critical thinking here. Anything that does not make it into the functional specification of an application is an afterthought, and experience shows again and again that anything not "baked in" as a requirement is nearly impossible to fix later on. So we're presented with a puzzler: security must be a business-level requirement, but how does one sanely translate vulnerabilities into a business requirement? Simply stating "... the application shall be free of unintended design flaws and security vulnerabilities" is like asking an architect to build a structure that will withstand every known (and unknown) possible attack - it's simply illogical.
Strangely, the program leads who manage the large-scale web applications at the heart of nearly every major breach want a concise, enumerated list of things not to put into the code... but since that list is a moving target, the security team gets penalized for the very nature of security itself. This is why black-listing input is a losing proposition: you're always in an arms race with the bad guys... and you'll never win.
I've heard some recent conversations hit the wire around using the CWE Top 25 or some other list as a definitive list of coding errors to avoid in web applications, but I'm not sure that will actually solve the problem. The trouble with this approach is that these lists are exclusionary measures: they illustrate what we must exclude to be secure. Turning it around and making statements like "validate all input" makes little more sense, especially since input validation must be defined in the context of the situation, and there is never a one-size-fits-all answer. To illustrate the point further: input validation may mean excluding certain character sets/patterns and pre-defining acceptable input options... but this does not account for things like free-form input or other use-case-specific examples.
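To make the contrast concrete, here is a minimal sketch of allow-list validation for a structured field (the field name and rules are hypothetical examples of mine). It only works because the field has a tidy definition of "acceptable"; a free-form comment box has no such pattern, which is exactly the limitation described above.

```python
import re

# Allow-list validation: define what IS acceptable for this field,
# rather than chasing an ever-growing black-list of bad patterns.
# Hypothetical rule: a username is 3-20 letters, digits, or underscores.
USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,20}")

def valid_username(value: str) -> bool:
    """Accept input matching the allow-list pattern; reject everything else."""
    return USERNAME_PATTERN.fullmatch(value) is not None

print(valid_username("wh1t3_rabbit"))              # True
print(valid_username("robert'); DROP TABLE--"))    # False: rejected without
                                                   # ever naming SQL injection
```

Notice the rejection logic never enumerates attacks - SQL injection, script tags, and whatever comes next all fail the same allow-list check, which is why this beats the black-list arms race where the context permits it.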
In the end, the crux of the problem lies in the nature of security vulnerabilities. Security vulnerabilities are a moving target, and although they can be loosely defined and lumped into Top 7/10/25 lists, it is not logical to consider these lists complete or even sufficient for designing software. Will a web application be secure if it follows the CWE Top 25 and addresses those issues? What about the OWASP Top 10? I don't think anyone has that answer, or at least is willing to stake their reputation on it.
So back to defining security as a business-level requirement... can it be done? Can one clearly articulate requirements to secure data/transactions/processes/whatever *before* the technologists get involved; meaning, before the means to execution are defined? I will leave that up for debate.
"Is Open-Source software more secure than Closed-Source software?"
Of all the questions I get asked regularly about web application security, this is perhaps one of the toughest. The answer, quite simply, is no. Arguments can be made both for and against the security of either - but I think it's most prudent to lay out the pros and cons of each approach so that each case can be examined more objectively.
- Open-Source: A case can be made that with open-source software (OSS) there is complete transparency, or at least the appearance of such. The software's code is freely available, and one can modify it as he or she chooses. In theory, vulnerabilities of the intentional variety would be more difficult to hide in this scenario (albeit not impossible, obviously). With complete transparency comes the natural ability for many, many people (including security-conscious folks) to put eyeballs on the code and find and disclose bugs more readily. In this case the good guys get an even crack at defects.
- Closed-Source: With closed source, the source code is unavailable, so the security-by-obscurity model is in play. Closed-source is also typically for-profit software, which gets a great deal of scrutiny because security defects can have a devastating effect on the product. Even so, security defects in closed-source software often go unnoticed for long periods of time, and some are never discovered. Without the source code, finding security defects in closed-source software requires extensive negative testing, using techniques such as fuzzing, and without direct knowledge of the code the process of writing exploits is often trial-and-error.
- Open-Source: Since open-source is more transparent, security defects are found more rapidly - often by the bad guys. Open-source software is also often built by many different people, more often than not on a shoestring budget. That low-budget aspect (free software doesn't generate a lot of revenue), it can be argued, deprives developers of the resources needed to produce less defective code. And because OSS is often a collaborative effort, it's relatively easy to make mistakes in the way the different pieces interact with each other, which can lead to security defects. Open-source software quite simply doesn't typically have the monetary backing to produce good-quality, more-secure code.
- Closed-Source: Since closed-source software doesn't make the source code available, and it's typically illegal to reverse engineer the code, it is up to the developer to produce verifiably secure code. This also puts the good guys at a distinct disadvantage, because while the hackers care naught for the DMCA and other laws, the white hats have to follow the rules. Not getting a chance to pore over the code makes it difficult to find or expose critical security defects, often until it's way too late. Closed-source software also suffers from the arrogance problem - it goes something like this: "We're a big professional software developer, not some open-source fly-by-night group; our code is by far superior because we have money and highly paid developers. Trust us, and don't reverse-engineer our code or we'll have you arrested and charged." This truly becomes a problem for researchers and penetration testers trying to do good, legitimate work.
So, what's the verdict? Each has its merits and problems, much like the debate over MS Windows vs. Linux... the answer to which is: "Each, in the hands of a poorly trained chimp, can be exploited rather quickly." So should you be making the switch to open-source and abandoning closed-source? Probably not - but by the same token, don't discount one or the other without fully understanding the ramifications of each.
Merry Christmas, Happy Holidays...