The security industry should hold itself to higher standards

At a previous job I worked on the application testing side of web security, breaking into in-house and contract-built applications, commercial off-the-shelf (COTS) applications, appliances, and partners' sites (which were built with all of the above). While most of these weren't security products, more than a few of them were.

Time after time, web applications and appliances built by "security companies" turned up a ridiculous number of vulnerabilities. I won't go so far as to claim that every developer at every security company should be an expert (though it would be nice), but these companies should certainly have rigorous testing methodologies at every stage before product release… right? (See Rafal's blog for many more posts on that.) Security products should have fewer vulnerabilities… right?

One of the under-used features over at OSVDB.org is the ability to search for products identified as "security software." This search reveals fourteen pages (over 400 issues) of flaws in security products, though not all of them are web related. Sadly, if my experience is any indication of how this works throughout the industry, there are many more that were never publicly disclosed due to contractual and political restrictions.

Take, for example, a certain network traffic collection, storage, and security analysis appliance I tested three or four years ago. It had a "hidden" web directory of administration scripts, and about half of them lacked an authentication call. This let you do "minor" things like running shell commands or viewing all the captured network data. After I reported the issue, the vendor fixed it about six months later, but it was never publicly disclosed.
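To make the flaw class concrete, here is a minimal sketch of what "half of the admin scripts lacked an authentication call" looks like in practice. The framework, handler names, and token check are all invented for illustration; the point is simply that each endpoint must opt in to the auth check, so one forgotten decorator exposes the handler to anonymous requests.

```python
# Hypothetical sketch of the flaw class described above: two admin
# "scripts" (request handlers), one of which forgets the auth check.
# All names here are invented, not taken from the actual appliance.

def require_auth(handler):
    """Reject any request that lacks a valid session token."""
    def wrapped(request):
        if request.get("session_token") != "valid":
            return {"status": 401, "body": "authentication required"}
        return handler(request)
    return wrapped

@require_auth
def view_captures(request):   # protected: auth is enforced
    return {"status": 200, "body": "captured network data"}

def run_shell(request):       # the bug: no @require_auth
    return {"status": 200, "body": "command executed"}

# An unauthenticated request is stopped by the protected handler
# but sails straight through the unprotected one.
anonymous = {}
print(view_captures(anonymous)["status"])  # 401
print(run_shell(anonymous)["status"])      # 200 -- the vulnerability
```

Because the safe behavior depends on every script author remembering the decorator, an audit of each route (or a default-deny middleware in front of the whole directory) is the usual fix.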

So how can you trust the security products you may need to rely on in court when they themselves are not secure? Two years ago, in a decision upheld last month by an appeals court, a judge ruled that the manufacturer's failure to release the source code of a breathalyzer machine was a violation of due process that compromised the defendant's right to a fair trial, so the test results were inadmissible as evidence. It was a bold defense strategy and, in my opinion, the right decision by the judges involved.

So how far off are we from a defendant questioning forensic evidence or logs when the software or tool that collected it had a critical security flaw? Or has this already happened and I missed it?

Maybe it doesn't matter in the long run. Paul Ohm writes in his blog post Being Acquitted Versus Being Searched (YANAL) that by the time the trial comes and you try to make a compelling argument about tainted evidence, investigators have likely gathered so much more via surveillance and warrants that discrediting some portion of it won't matter.

Despite what Ohm says, I still like the idea that a real-life Alan Shore will raise the bar for security software makers. Only when the bottom line is in jeopardy will most companies decide that investing in getting their own security house in order is worthwhile.

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.