How Much Responsibility Should Developers Have For Security?

 

One debate that remains incandescent in the security world is how much developers should be held accountable for security. Dinis Cruz recently gave a presentation at OWASP arguing that security should be invisible to developers.

 

His basic argument is that security is for security people and building things is for people who build things. He says that security people should stop rubbing developers’ noses in their problems and make security transparent so developers don’t need to think about it.

 

This is mostly a horrible idea.

 

The easiest way to see this is to take the concept of “building” to any other domain. Quite simply, anyone who “builds” something needs to be responsible for its security. Whether it’s a skyscraper or an automobile, the excuse “You didn’t give me secure stuff to build with, so I made a death trap” isn’t a strong defense.

 

It’s true that there are different types of people who “build” buildings. There are those who design them and then there are those who put drywall in and nail up plywood. Perhaps the argument is that people who do basic construction shouldn’t have to know how to build a structurally sound skyscraper.

 

I could grant that, but it doesn't mean that all builders are unaccountable. Someone on the team creating that structure has to conform to the earthquake codes, the fire codes, and so on. There is a person whose reputation is on the line if they erect a structure that has safety issues.

 

So, if we’re saying hammer-and-nails construction people are like entry-level developers who don’t need to know the ins and outs of security, then I ask you: who is the architect? Remember that you can’t just send a bunch of hammer-and-nails guys in to build a skyscraper; you need an architect to lay out an approved plan.

 

That architect has his license and reputation at risk, and that’s the piece we’re missing in software. Saying that "developers" don't need to understand security is just wrong. Coders need to be identified as one of two types: hammer-and-nails types or design/architecture types. If they’re hammer-and-nails guys, then they shouldn’t be allowed to code without the supervision and review of someone who is able to put her name on the line.

 

The one thing that’s completely out of the question is the notion of separating "building" from "security" altogether. That separation doesn’t exist anywhere else, and it shouldn’t exist in software. You cannot claim to be a "good" developer if you create things you don't understand, especially when the elements that are nebulous to you have security/safety implications.


If the earthquake certification engineers ask an architect how his building will withstand a 7.0 earthquake on the 19th floor, his answer had better not be, "Yeah, I just deal with stacking the floors on top of each other, not so much with making sure they don't fall down."

 

Security is now part of the process, and it will only become more so as time goes on. If Dinis's only argument were that we as an industry should make it *easier* for developers to understand the security of their applications, then I would agree wholeheartedly. But he didn't make that argument. Instead he essentially said that developers shouldn't be troubled with the issue at all because they're doing the privileged work of building. He wants a clear distinction there, and that's where the mistake was made.

 

Building something is inextricably tied to securing it. This is true whether we're talking about castles, baby strollers, automobiles, or software applications. Developers don’t get a pass. Building things is hard precisely because there are so many considerations. If a developer doesn't understand how to build securely, there's only one proper name for him: a junior developer.

Comments
Dinis Cruz | 10-22-2011 02:56 PM
Hi Daniel, thanks for your comments. I think you give a good representation of the security camp that holds that "security is EVERY developer's business," which, although well intended, unfortunately doesn't scale and, in fact, doesn't work.

We will never achieve secure applications at a large scale if we require ALL developers (or even most) to be experts in security domains like Crypto, Authentication, Authorization, Input validation/sanitization, etc.

Note that I didn't say that NOBODY should be responsible for an Application's security. Of course there needs to be a small subset of the players involved who really care about and understand the security implications of what is being created.

The core idea is that developers should be using Frameworks, APIs and Languages that allow them to create secure applications by design (where security is there but is invisible to developers). And when they (the developers or architects) create a security vulnerability, at that moment (and only then), they should have visibility into what they created (i.e. the side effects) and be shown alternative ways to do the same thing in a secure way.
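To make the "secure by design" point concrete, here is a minimal Java sketch (the users table and the findUser method names are hypothetical, not from any particular framework): the first version forces the developer to think about SQL injection, while the parameterized version makes the protection invisible because the driver separates code from data on the developer's behalf.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class InvisibleSecurity {

        // Vulnerable by default: the API happily concatenates
        // attacker-controlled input straight into the query string,
        // so the developer has to remember injection is possible.
        static ResultSet findUserUnsafe(Connection db, String name) throws SQLException {
            String sql = "SELECT * FROM users WHERE name = '" + name + "'";
            return db.createStatement().executeQuery(sql);
        }

        // Secure by design: the parameterized API keeps the query and
        // the data separate, so injection protection requires no
        // security expertise from the developer at all.
        static ResultSet findUserSafe(Connection db, String name) throws SQLException {
            PreparedStatement stmt =
                db.prepareStatement("SELECT * FROM users WHERE name = ?");
            stmt.setString(1, name);
            return stmt.executeQuery();
        }
    }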

The other idea that I'm trying to push our (the application security) industry to adopt is this concept: "One can't protect/analyze what is not understood, so application security teams create models (and tools) that help them visualize and understand how the apps work. And since this 'application visualization metadata' is also VERY valuable to developers, let's work together (devs+qa+appsec) so that we can embed application security knowledge and workflows into the SDL."

For example, a very good and successful example of making security 'invisible' to developers was the removal of 'buffer overflows' in the move from C/C++ to .NET/Java (i.e. from unmanaged to managed code). THAT is how we make security (in this case Buffer Overflow protection) invisible to developers.
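A minimal sketch of what that protection looks like from the developer's side (the buffer size here is arbitrary): the same out-of-bounds write that silently corrupts memory in C becomes a loud, safe runtime exception in managed code, with zero effort from the developer.

    public class BoundsCheck {
        public static void main(String[] args) {
            byte[] buffer = new byte[8];
            // In unmanaged C/C++ this write would land past the end of
            // the allocation and silently corrupt memory: a classic
            // buffer overflow. The managed runtime checks every array
            // access, so the same bug surfaces as an
            // ArrayIndexOutOfBoundsException instead of an exploitable
            // memory corruption.
            buffer[8] = 42;
        }
    }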

If you are looking for an analogy, "a chef cooking food" is probably the better one. Think of software developers as chefs cooking with a number of ingredients (i.e. APIs). Do you really expect that chef to be an expert on how ALL of those ingredients (and the tools he is using) were created and how they behave? It is impossible; the chef is focused on creating a meal. Fortunately the chef can be confident that some/all of his ingredients and tools will behave in a consistent and well-documented way (which is something we don't have in the software world). I like the food analogy because, as with software, one bad ingredient is all it takes to ruin it.
 
 