Technical Debt vs. Time-to-Market (using the Evernote breach as an example)

The Information Security community has spent a lot of energy talking about technical debt, and largely determined that it's a bad thing because the further away in time we get from the point at which a bug is introduced into the code, the more expensive and difficult it is to remediate.

 

So the easy example: a developer writes a piece of code that makes a database call but doesn't properly handle its inputs, leaving a bug such as SQL injection. Every day that passes increases the time and resource cost of fixing that bug. If the code is written today and the fix is made in the next few days, it's still fresh in the developer's mind and they can make the fix quickly. If the bug is identified in a month and the developer is asked to fix it, odds are they'll have to re-acquaint themselves with the code and spend more time fixing the same bug. If the bug is identified and a fix requested in a year, we don't even know whether the same developer will still be around, much less know the code well enough to go straight to the offending line or method. Furthermore, the code around it may have changed, requiring much more testing and additional resources... so the fix is far more expensive in both money and time.
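To make that SQL injection example concrete, here's a minimal sketch (hypothetical table and function names, using Python's built-in sqlite3) of the kind of one-hour bug we're talking about, alongside the parameterized fix:

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def find_user_vulnerable(name):
    # The bug: string concatenation lets attacker-controlled input become SQL.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # The fix: a parameterized query binds the input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload returns every row from the vulnerable version...
print(find_user_vulnerable("' OR '1'='1"))   # -> [('alice',), ('bob',)]
# ...and nothing from the parameterized one.
print(find_user_safe("' OR '1'='1"))         # -> []
```

The fix is a one-line change today; a year from now it's an archaeology project.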

 

No disagreement from me.

 

Then a few weeks ago I had a client meeting where my statement of that premise was challenged - not in the standard enterprise context, but in the context of a start-up, or an organization rushing to market. My claim that technical debt leads to increased costs down the road was accepted, but it was challenged as necessarily being something negative that program or project owners would not want. An interesting discussion ensued which basically found me defending the position against a VP in a new-products group who had a slight twist on the matter.

 

This person, who was a security professional by trade, was arguing that sometimes it's actually OK to accrue technical debt in the name of innovation and speed-to-market even though it costs more later. It's a bit difficult for us as security professionals to grapple with letting security bugs go to production just so we can get to market faster... what if that bug is exploited?!

 

Turns out the answer may be really simple, and not really comfortable. While it may not be the 'secure' thing to do, sometimes products just need to make it to market, otherwise security may not matter. Let's use a concrete example here, to illustrate the point.

 

Pretend you work for the ACME Widget Corp. ACME Widget Corp manufactures a lot of stuff, but they've recently entered the mobile market and are rushing to bring to market a revolutionary new mobile app. Competition is fierce, and there are several competitors who are rushing to beat them to market... the first to market with the coolest features will likely be the winner. In "choose your own adventure" style, we [the security organization] can proceed one of two ways. We can enforce security practice, testing, and fixing of bugs while limiting features to those that can be done securely ... or we can just let the developers be creative, quick, and only look for obvious security issues that are simple to fix. (I'm over-simplifying for brevity)

 

If we let security bugs/defects go through, we're accruing technical debt, which we know we'll likely end up paying for later. In fact, we know that a simple bug which may take an hour to fix now will likely cost us at least that much time later, plus additional testing and re-deployment, and we suspect it may be far worse if the issue is exploited. The fact is, the people pushing this imaginary app through simply don't care. Why? It's simple - if they succeed and make it to market, they're happy to pay down the technical debt from the piles of money they're making. If they were to slow or stop their rush to deploy over some security issues, they may end up with no product and no user base... and then it wouldn't matter whether the product was secure or not. Interesting perspective, and a very valid one.

 

Now let's turn the lens to Evernote. An application like this was likely developed quickly, with features, functions, and time front-of-mind rather than security. Now that Evernote is immensely popular, they can go back and start fixing the bugs they let through the various development cycles. The major dependency here is the customer. How many of the 50 million end users do you suppose have deleted or quit their Evernote accounts over the breach they just experienced? Any guesses? I know I didn't... did you? I don't have any particular insight into Evernote, so I can't say with certainty whether this is in fact what happened, or whether they simply didn't care about security... or some other reason existed, but it fits. In my opinion (and Chris Wysopal did his AppSec USA 2011 preso on this topic) technical debt may actually be OK for start-ups, or even non-start-ups where the priority is speed-to-market.

 

So as it turns out, technical debt is a valid concept, and it most definitely costs you more to fix bugs in your code later... but there are times when you simply don't care, because you have no money now, the goal is to get the product to market, and you'll gladly pay down the technical debt later. Just another example of security in the real world... and how even sound theory doesn't always fly in the business world.

Comments
Benson_1 | ‎03-05-2013 01:09 AM

Wow, is this something we're talking about?  In dev circles, tech debt is thoroughly understood, and it's extremely similar to any other kind of debt -- you're borrowing against your code's future, and the cost to repay the debt compounds over time.  You're absolutely spot on, but I really didn't think this was news to anyone these days.  

 

For what it's worth, enterprises sometimes need to rush things to market too. If you think of tech debt just like you think of monetary debt, it ends up working out pretty well: Do I have the resources to do this right, or do I need to borrow from somewhere?  If I can't borrow from a friend, I'll have to borrow against my own future. 

 

It's also worth noting that using the right tool for the job can massively mitigate tech debt -- if your code is DRY and you had to duct-tape some garbage together, you can fix it in one place instead of many. If you use an ORM correctly (i.e. without sending it raw strings for WHERE clauses), chances are somebody else's fix will protect you from SQLi attacks. Using the right rapid development framework can save you a massive amount of debt, and that's why tools like Django and Rails are so popular. Not only do they simplify the work you have to do, they help you compartmentalize your tech debt so it's easier to fix when you have the engineer-hours to spare.
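[Editor's note: the DRY point above can be sketched in a few lines of Python (hypothetical helper and table names): when every lookup routes through one shared query helper, a security fix made in that helper protects all call sites at once.]

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (name TEXT)")
conn.execute("INSERT INTO widgets VALUES ('anvil'), ('rocket')")

# One shared query helper: every caller goes through it, so switching to
# bound parameters here fixes SQLi in exactly one place instead of many.
def fetch_by_name(table, name):
    # Table names can't be bound as parameters, so this sketch checks them
    # against a whitelist and binds only the user-supplied value.
    if table not in {"widgets"}:
        raise ValueError("unexpected table")
    return conn.execute(f"SELECT name FROM {table} WHERE name = ?", (name,)).fetchall()

print(fetch_by_name("widgets", "' OR '1'='1"))  # -> [] : the payload is inert
print(fetch_by_name("widgets", "anvil"))        # -> [('anvil',)]
```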

Adrian Sanabria(anon) | ‎03-05-2013 08:08 AM

Benson,

 

Unfortunately, many people in the infosec community have never worked outside of it, and many actually think that 100% of known issues in a product should be fixed prior to production. Of course, in that scenario there would be no products, or the company unfortunate enough to attempt it would both run out of money and get trounced on time-to-market by competitors.

LeonF(anon) | ‎03-06-2013 12:20 PM

There are some valid points here, but I don't really understand why the author compares security and technical debt. Security is not a feature and it's not an option, so people who forego "security" (enterprise or not) should not be compared to people who accrue technical debt, whether out of incompetence or necessity.

 

Full disclosure: I actually argue in favor of measuring technical debt vs actual goals when it's required (http://omniti.com/seeds/your-code-may-be-elegant)

Robert David Graham(anon) | ‎03-06-2013 03:43 PM

“Technical debt” is a communist idea, thinking debt is a bad thing. To a capitalist, debt means capital. Sure, it incurs an obligation in the future, but it means capital now that can be used to build a factory, manufacture widgets, and then sell them at such huge profit that it becomes easy to pay back the debt.

 

So we have the funny situation where cybersec people go to the CxO in charge and try to use “technical debt” to explain why more needs to be invested now in the code in order to avoid the later technical debt. The CxO hears the opposite thing, and thinks technical debt is great, and wants to know how to get more, so fires half the coders in order to get even more technical debt.

 

What’s funny is that while “technical debt” is wrong the way cybersec people use it, it’s right the way capitalists interpret it. Startups want to go into steep technical debt. If the startup fails, it lessens their loss. If the startup succeeds, then they’ve got money to pay back the debt, to fix all the code or rewrite it completely.

 

The thing is that in the end, though, the “debt” analogy falls down. Corporations never pay back all their debt. Neither do governments. Instead, they “grow” out of their obligations. As long as government debt grows slower than GDP, then debt becomes smaller and smaller as a percent of GDP. The same applies to people: as long as your borrowing grows slower than your wage raises, you never really need to pay it back. Cybersec is different, though. You do have to fix all your SQLi vulns that allow hackers to break in. You can’t let them go indefinitely like you can with debt.

 

Nick Owen(anon) | ‎03-07-2013 08:54 AM

A couple of things to add to Rob's comments: note that equity carries an obligation to be repaid in some form in the future, just with less specification than debt, and is therefore more risky and demands a higher return. A VC wants 50% per year or so; an Angel investor wants significantly more. If you make it through your proof-of-concept phase, start signing up users and move toward a VC round, your cost of capital drops from, say, 200-500% to 50% in a short period of time. In this scenario, with starting capital so precious, it makes a great deal of sense to incur 'technical debt'.

 

One issue with this type of terminology is that it's not really accounting. If you borrow $800,000 to buy a $1,000,000 building, you have an equity entry of $200,000. As you pay down the loan, you increase your equity. What is the corresponding double entry for technical debt?

 

Software assets are written down via depreciation, but rarely is that translated into re-investment.  If software is depreciated by the accountants over 3 years, does the dev team get the equivalent $$ in budget?  I would argue this is why you see large companies with 'mature' technologies get hammered by security issues.  Accountants like to see low book values for assets generating cash.

 

You could argue that 'rugged', built-to-last software would have a longer depreciation schedule, or that 'rugged' software is designed to outlast its depreciation schedule.

 

 

Richard_Bishop | ‎03-11-2013 03:52 PM

Hi Raf,

I think that this article is "spot on". You always need to make a judgement call and balance the desperate need to get a product "out of the door" with the cost of trying to get it 100% right.

The recent banking-related technical failures in the UK and the Evernote breach that you mention will help to give the term "technical debt" more prominence, and rightly so. Outside the "geekiverse" that we inhabit, I was surprised to hear the BBC (usually very non-technical) describing "technical debt" in a recent article that I referred to on my blog. People need to understand the risks that they take with a new IT system in the same way that they calculate the risk when crossing the road. Ultimately this should lead to better software and better decisions from people who are cognisant of all the facts, and not just those that the project team choose to give them :-)

Thanks for the article, keep blogging.

Bish

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation