WireHarbor Security, Inc. recently published a thought-provoking post titled "The high cost of poor software quality," and it really got me thinking. Poor software quality (thinking about quality holistically) creates fragile ecosystems that are painful and costly both to our customers and to ourselves. The issue is that software that isn't built properly doesn't just have the obvious security implication of being hackable; it breeds an entire ecosystem around itself with the potential to create more brittle, one-off, and vulnerable components than the enterprise bargained for.
When you think of software security, you need to be thinking of the complete software quality triad - functionality, performance, and security. Every software developer needs to be aware of all three components, otherwise the three-legged stool falls over. When deciding whether a piece of software should be released, we need to ask ourselves three critical questions:
- Does it function in a manner that satisfies requirements? (and do we have ample evidence?)
- Does it perform to the specified requirements? (and do we have ample evidence?)
- Is the application reasonably secure? (and what evidence do we have of that?)
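The three questions above amount to a release gate: each pillar has to be satisfied *and* backed by concrete evidence before the build ships. Here is a minimal sketch of that idea - the class names and fields are my own illustration, not any particular organization's process:

```python
from dataclasses import dataclass


@dataclass
class PillarCheck:
    """One leg of the quality triad, plus the evidence backing the answer."""
    name: str
    satisfied: bool
    evidence: list[str]  # e.g. acceptance-test reports, load-test runs, scan results


def release_gate(checks: list[PillarCheck]) -> bool:
    """Ship only if every pillar is satisfied AND has concrete evidence behind it."""
    return all(c.satisfied and c.evidence for c in checks)


gate = [
    PillarCheck("functionality", True, ["acceptance-test report"]),
    PillarCheck("performance", True, ["load-test results at spec"]),
    PillarCheck("security", False, []),  # no concrete answer yet -> hold the release
]
print(release_gate(gate))  # False: one leg of the stool is missing
```

The point of the `evidence` list is that a bare "yes" doesn't pass the gate - an unsupported claim of quality counts the same as no claim at all.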
If we cannot provide concrete answers to all three of those questions, as basic requirements for release, then we're playing a dangerous game. Passing technical debt downstream has been discussed many times before and shouldn't be forgotten: it creates a serious situation not only for the provider but for the customer, who is left holding the maintenance and upkeep.
The Obvious Security Issues
Software that doesn't get proper attention throughout the software development lifecycle - from business analysts, developers, designers, testers, operations teams, and ultimately security analysts - will likely have security issues. This isn't rocket science, and it has been proven over and over: organizations that bypass the security component release software that generates advisories and public disclosures everywhere... or worse, a breach.
It's widely accepted, or it should be by now, that "testing your software secure" - that is, leaving security until the end as a validation step - is a certain recipe for failure. What is less widely accepted is that forcing security entirely into the hands of the developers can be equally fruitless. Security isn't a checkbox or a task in the software development lifecycle; it must be woven into the development, release, and maintenance methodology.
Some organizations simply get lucky, and having good developers coincidentally produces code with low levels of security risk. By and large, most organizations aren't so lucky. The bottom line is there are millions of security testers out there ready to perform billions of iterations of pattern-based and logic-based security testing on your applications. The question really is, will you do it before they do?
Building Vulnerable Scaffolding
Think of releasing a poor-quality piece of software this way: the less you do to fortify your software against functional, performance, and security defects, the more your customers have to do to compensate. If your software performs poorly, your customers may need massive server farms (real or virtual) to compensate, wasting licenses, space, power, and so on. If your software functions poorly, your customer will likely have to apply their own fixes, staff up their help desk for the inevitable calls, or even buy another software package and link the two together in new ways to get the functionality they expected.
Now, if your software has poor security, your customer may have to compensate heavily for your lack of fortification - say, with a Web Application Firewall (WAF) for web applications... but we know how effective WAFs are against non-patterned attacks, right? So we'll build up WAFs, add extra monitoring at that UTM firewall (shudder), and do tons of extra logging and manual verification... and perhaps penetration testing and additional analysis (which is costly!). I think the picture is clear.
All these systems, whether you're compensating for poor functionality, performance, or security, create a scaffolding of sorts that itself introduces additional opportunities for failure. Every system added in-line creates an opportunity to break legitimate functionality in the application. Every support system creates additional opportunity for a process break-down, corruption of data, and other operations nightmares. This is not conducive to rev'ing code faster or achieving better stability.
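That compounding is easy to quantify: if every in-line component must work for a request to succeed, overall availability is the product of the component availabilities. The figures below are invented for illustration, but the multiplicative shape is the point - each piece of scaffolding chips away at the whole.

```python
def chain_availability(component_availabilities: list[float]) -> float:
    """Availability of a serial chain: every in-line component must be up."""
    result = 1.0
    for availability in component_availabilities:
        result *= availability
    return result


app_alone = chain_availability([0.999])
# Hypothetical scaffolding: app plus WAF, load balancer, logging proxy, monitoring tap.
with_scaffolding = chain_availability([0.999] * 5)
print(f"{app_alone:.4%} vs {with_scaffolding:.4%}")
```

Five "three nines" components in series land noticeably below three nines overall - and that is before counting the operational risk of each component's own patches, configs, and failure modes.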
The worst possible outcome is the one we run into regularly as third-party advisors and consultants. The 'dinosaur system' is the bane of many IT departments and causes untold numbers of outages and emergency expenditures. The dinosaur system is one that pre-dates the current crop of employees, was delivered at poor quality by the vendor, and has been propped up by years of scaffolding and undocumented fixes, patches, and upkeep. It is now in an extremely unsteady state: the years of adding on have finally created a Frankenstein's monster that cannot continue forward... and it starts to unravel. When you go back to the vendor, they probably don't support that old version or your 'customizations' and don't have a clear upgrade path to a modern version. Fail sandwiches all around.
Poor quality applications invite the customer (and sometimes even the vendor if the product is being implemented by the vendor) to build their own support structure (aka scaffolding) which leads to the inevitable - an unstable, unsupportable product with no path forward.
Cost is an Issue
Every time poor-quality software goes out the door, both customer and vendor are forced to build custom support structures to compensate for its shortcomings, and the costs skyrocket.
Fixing software quality issues after the fact is expensive... and it gets worse the longer you wait. I'm sure by now you've heard that the further away in time you are from the point at which code was written, the more expensive (in time and effort) it becomes to fix. My hypothesis is that this cost is non-linear: two years out isn't twice the cost of one year out, but something more like a hockey-stick curve. The more time passes, the more expensive fixes get - and the faster that expense grows.
There is also the issue of the human capital it takes to keep a poorly performing, poorly functioning or poorly secured piece of software running and available. As a rule the lower the quality of the software, the more people it will take to keep it running at some acceptable level... and again this raises costs.
Yet another issue is the scaffolding itself. Software that performs poorly often requires drastic measures to keep it rolling at some acceptable pace. If a Web application was supposed to handle 10,000 concurrent users but falls over at anything above 1,000, you're likely going to have to provision 10x the equipment, plus some way to distribute load (which will probably have to be hacked in, creating more operational risk), just to keep it performing to the original specification. How many projects can simply request 10x the hardware or virtualized infrastructure? Not many, I fear. Once the Frankenstein's monster is hobbling along, you need a small army to keep it going and to reattach the bits that fall off during operations - no small task, and one that can burn out high-quality employees rather quickly.
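The over-provisioning arithmetic behind that example is trivially simple, which is exactly why it stings on the invoice - a sketch with the assumed numbers from above:

```python
import math


def nodes_required(target_concurrent_users: int, users_per_node: int) -> int:
    """Nodes needed to hit the target load, rounding up (no fractional servers)."""
    return math.ceil(target_concurrent_users / users_per_node)


spec_nodes = nodes_required(10_000, 10_000)    # app performing to spec: 1 node
actual_nodes = nodes_required(10_000, 1_000)   # app falling over at 1,000 users: 10 nodes
print(spec_nodes, actual_nodes)  # the 10x equipment bill, before the load balancer
```

And that count is only the hardware: the load distribution layer, its configuration, and its monitoring are all additional scaffolding that the spec-compliant deployment never needed.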
Costs add up fast and can grow exponentially all because of some poor holistic quality issues.
The Lesson Here
The lesson here is this: the true cost of poor holistic software quality can be immense - much more than you or your customer ever accounted for. Selling technical debt downstream makes matters worse in the long run, and even if you don't plan on being around when the bill comes due, it is a dangerous situation all around.
Software quality must be thought of holistically, with all three pillars having proportional stake in the release process. Sometimes conscious decisions are made to move forward with a release in spite of a few lingering defects, but these are calculations not to be taken lightly. An hour saved by cutting corners during the release cycle can lead to hundreds of thousands of dollars in lost capital downstream. That money has to come from somewhere...