The Information Security community has spent a lot of energy talking about technical debt, and has largely concluded that it's a bad thing: the further we get in time from the point at which a bug is introduced into code, the more expensive and difficult it is to remediate.
The easy example is a developer who writes a piece of code that makes a database call without properly handling its input, introducing a bug such as SQL injection. Every day that passes increases the time and resource cost to fix that bug. If the code is written today and the fix is made in the next few days, it's still fresh in the developer's mind and they can make the fix quickly. If the bug is identified in a month and the developer is asked to fix it, odds are they'll have to re-acquaint themselves with the code and spend more time fixing the same bug. If the bug is identified and a fix requested in a year, we don't even know whether the same developer will still be around, much less know the code well enough to go straight to the offending line or method. Furthermore, the code around it may have changed, requiring much more testing and additional resources... so the fix becomes far more expensive in both money and time.
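To make the SQL injection example concrete, here's a minimal sketch in Python (not from the original discussion; the `get_user_*` functions and the `users` table are hypothetical, purely for illustration) showing the bug and the one-line parameterized-query fix that's cheap today and expensive a year from now:

```python
import sqlite3

def get_user_vulnerable(conn, username):
    # BAD: user input is concatenated directly into the SQL string.
    # An attacker can supply "x' OR '1'='1" and bypass the filter.
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def get_user_fixed(conn, username):
    # GOOD: a parameterized query keeps user input out of the SQL itself.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # The injected input returns every row from the vulnerable version...
    print(get_user_vulnerable(conn, "x' OR '1'='1"))
    # ...and no rows from the parameterized version.
    print(get_user_fixed(conn, "x' OR '1'='1"))
```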
No disagreement from me.
Then a few weeks ago I had a client meeting where my statement of that premise was challenged - not in the standard enterprise context, but in the context of a start-up, or an organization rushing to market. My claim that technical debt leads to increased costs down the road was accepted, but the idea that it's something negative, something program or project owners would never want, was challenged. An interesting discussion ensued which basically found me defending the position against a VP in a new-products group who had a slight twist on the matter.
This person, a security professional by trade, argued that sometimes it's actually OK to accrue technical debt in the name of innovation and speed-to-market, even though it costs more later. It's difficult for us as security professionals to grapple with letting security bugs go to production just so we can get to market faster... what if that bug is exploited?!
It turns out the answer may be really simple, and not really comfortable. While it may not be the 'secure' thing to do, sometimes products just need to make it to market; otherwise security may not matter at all. Let's use a concrete example to illustrate the point.
Pretend you work for the ACME Widget Corp. ACME Widget Corp manufactures a lot of stuff, but it has recently entered the mobile market and is rushing to bring a revolutionary new mobile app to market. Competition is fierce, and several competitors are racing to get there first... the first to market with the coolest features will likely be the winner. In "choose your own adventure" style, we [the security organization] can proceed in one of two ways. We can enforce secure development practices, testing, and bug fixing while limiting features to those that can be built securely... or we can let the developers be creative and quick, and look only for obvious security issues that are simple to fix. (I'm over-simplifying for brevity.)
If we let security bugs/defects go through, we're accruing technical debt, which we know we'll likely end up paying for later. A simple bug that takes an hour to fix now will likely cost at least that much time later, plus additional testing and re-deployment... and potentially far more if the issue is exploited in the meantime. The fact is, the people pushing this imaginary app through simply don't care. Why? It's simple: if they succeed and make it to market, they're happy to pay down the technical debt from the piles of money they're making. If they were to slow or stop their rush to deploy over some security issues, they might not have a product or a user base... and then it wouldn't matter whether the product was secure or not. Interesting perspective, and a very valid one.
Now let's turn the lens on Evernote. An application like this was likely developed quickly, with features, functions, and time-to-market front of mind rather than security. Now that Evernote is immensely popular, they can go back and start fixing the bugs they let through the various development cycles. The major dependency here is the customer. How many of the 50 million end users do you suppose deleted or quit their Evernote accounts over the breach they just experienced? Any guesses? I know I didn't... did you? I don't have any particular insight into Evernote, so I can't say with certainty whether this is in fact what happened, or whether they simply didn't care about security... or some other reason existed, but it fits. In my opinion (and Chris Wysopal did his AppSec USA 2011 preso on this topic), technical debt may actually be OK for start-ups, or even non-start-ups, where the priority is speed-to-market.
So as it turns out, technical debt is a valid concern, and it most definitely costs you more to fix bugs in your code later... but there are times when you simply don't care, because while you have no money now, the goal is to get the product to market, and you'll gladly pay the technical debt later. Just another example of security in the real world... and how even sound theory doesn't always fly in the business world.