The Ultra-Legacy Problem - Systems so old...

"That app is so old, they've found caves with the source code scrawled on them!" ... sounds funny, right? It may be funny, unless you're the person heading up the organization that needs to "do something about" that ultra-legacy code or system in your environment.


Say you're a sizeable institution here in the United States. Say also that over the last two decades you've amassed lots of interesting technologies and platforms that run your business, from a time before the Information Security organization did much more than install anti-virus on your desktops... and now that technical debt has come back to haunt you.


If you're not familiar with the notion of technical debt as applied to Information Security, it's something worth reading up on. The problem with the masses of technical debt we've accumulated across industries is that it isn't going away through "natural obsolescence". It also seems to concentrate heavily in finance, particularly in the "old financials": companies that have been around for 30+ years as banks or other forms of financial services. These organizations have archaic systems that very few of us are even qualified to address anymore... remember FoxPro? Very few people can easily decipher decades-old software and translate it into more modern systems without severe pain, or maybe a seance!


Let's take a system I was introduced to recently, where the problem isn't the desire to move off a 30-year-old platform; it's the ability to do so that's the showstopper. When a friend of mine took over at this organization as CISO, he immediately noticed that there were systems out on his network that were, how should I put this... antiquated. How in the world would you migrate that FoxPro system over to your current web-based platform? There just aren't any simple roadmaps or recipes for this, so the ultra-legacy stuff keeps churning.


It makes sense that the biggest risks in any organization are the unknowns, and systems that are more than 10, 15 or even 20 years old definitely fall into that category. Odds are pretty good that not only are the original architects, developers and implementers not around anymore, but that documentation and knowledge have left the building with Elvis. When you're not entirely sure what a system does, how it behaves, or what it depends on, you have no choice but to first understand it and the risks it may pose before you do anything else. I've tackled one of these projects before, and it wasn't even this difficult. One day I'll have to write up a post about porting a Phion firewall to Check Point, but I digress.


Let's take a minute to reflect on why those ultra-legacy systems exist in the first place. It's generally one of two things that cause them to stick around, cost or knowledge, and sometimes both are the problem. Cost is often the biggest reason: migrating some of these ages-old systems to modern technology is enormously expensive. For example, migrating one simple timecard-tracking system on a shop floor (which was running Windows for Workgroups) was projected in 2003 to cost north of $300,000.00 USD to bring it up to 'modern' code standards. The result was that the application was kept around until "it could no longer be reasonably maintained or used". The idea behind technical debt is that the longer a system sticks around, the higher the cost of modernizing its internals and technology, or of fixing bugs. So after, say, 15 years you face financial ruin trying to migrate a system that should be simple in the first place.

The other big problem is knowledge. Yes, that timecard system wasn't overly complex, but it was written in C and the original source is long gone, so to effectively duplicate that system we just find the original business process documentation, right? Right... good luck. If only it were that easy. In places where you've got both problems, the costs tend to be higher and the life-support requirements elongated.
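To make the compounding nature of this concrete, here's a toy sketch of how deferred modernization cost grows year over year. The 15% annual growth rate and the $50,000 starting estimate are purely illustrative assumptions for this post, not figures from any real project:

```python
# Toy model of technical-debt growth: the cost of modernizing a system
# compounds the longer migration is deferred. The 15% rate and $50,000
# starting estimate are illustrative assumptions, not real data.

def modernization_cost(initial_estimate: float, years_deferred: int,
                       annual_growth: float = 0.15) -> float:
    """Projected migration cost after deferring for years_deferred years."""
    return initial_estimate * (1 + annual_growth) ** years_deferred

if __name__ == "__main__":
    for years in (0, 5, 10, 15):
        print(f"Deferred {years:2d} years: "
              f"${modernization_cost(50_000, years):,.0f}")
```

Under those made-up numbers, the same migration that costs $50,000 today costs roughly eight times that after 15 years of deferral, which is the whole argument for not letting these systems linger.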


What do you do with a system where simply 'securing it' isn't feasible with the modern technology you have at your disposal? When you can't patch a system because it relies on technology that has been out of support since the Clinton presidency, you may have a problem. It's interesting to note that there are plenty of solutions out there claiming to solve many of your legacy problems with a new box on your network, or a proxy-based approach, or whatever... be careful when you turn to these types of 'solutions'.


What do you do, then, if you've got some of these ultra-legacy systems sitting around causing you nightmares and cold sweats? Stay tuned, that's coming in my next post in this series, because this issue isn't going to just 'go away' or solve itself.


Note: If you maintain, work with, or know first-hand of these types of ultra-legacy systems I want to talk to you. I want to hear from you what you're doing to mitigate the risks, migrate the platform, or whatever else your strategy may involve. Heck, if there's a wrecking ball in the future then even better! Either leave a comment or email me directly (Wh1t3Rabbit at hp) or find me on Twitter (@Wh1t3Rabbit) ... I look forward to hearing from you!

katzmandu (anon) | 11-27-2012 02:04 PM

I have a few good stories for this one, and thanks to some harassment on twitter, I've been invited to post here.


Circa 2004 we had an ancient IBM Power system (probably a first-generation, Power-based AIX machine) at a former employer. It had an old database on it that was occasionally used for forensic work (info in the database would be pertinent to investigating a history of fraud, for instance). However, we knew it wasn't Y2K compliant, and we were responsible for making sure it stayed "up" continually. Although sometimes the system wouldn't be accessed for several months, it needed to be up and running for when it would be accessed (and it was used maybe twice a year). Because it wasn't Y2K compliant, we had to roll back the clock on the system every year or so in order for it to stay "up." It didn't matter if the clock was five years out of date; the data on the system just needed to be accessible.
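For readers who never fought the Y2K fires: the clock-rollback trick above works because of how such systems did date arithmetic. This is a minimal, hypothetical illustration (not the actual AIX software) of the classic two-digit-year bug and why winding the clock back hides it:

```python
# Minimal illustration of the classic Y2K bug: many pre-Y2K programs
# stored years as two digits, so date arithmetic breaks once the
# century rolls over. This is a hypothetical sketch, not the real code.

def age_in_years(start_yy: int, current_yy: int) -> int:
    """Two-digit-year subtraction, the way many pre-Y2K programs did it."""
    return current_yy - start_yy

# Record opened in 1998 ('98'), checked in 2004 ('04'):
print(age_in_years(98, 4))    # -94, a nonsense negative "age"
# Roll the system clock back so 'now' reads as 1994 ('94'):
print(age_in_years(88, 94))   # 6, and the arithmetic "works" again
```

The data stays accessible because every comparison inside the old software is relative, so as long as the fake clock stays on the right side of '99, nothing breaks.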


A colleague of mine discussed his time in the Air Force being forward-deployed during a conflict in the 1990s. He lamented that one of the fire-control systems (not fire-alarm sprinkler, but directional control for guns/missiles) still made use of paper tape, and that they had to use it sparingly, as finding replacement paper tape was difficult.


Finally, in a former life when I was selling Sun gear (in 2001), I was on-site at a US defense contractor who still had a VAX 11/780 up and running in full glory. It was a beautiful thing to behold, because even though it was older than I am, and was running at a whopping 5 MHz, it was working flawlessly 23 years later. And it was pristine. Unlike some legacy systems in datacenters that remain plugged in but ignored, this system was as sparkling and clean as it must have been when it was first taken out of its crate. I believe it performed specific industrial control functions under VMS.

Tero (anon) | 11-28-2012 07:01 AM

Very interesting topic - apologies in advance if the following is bordering on the too-long-didn't-read threshold :-)


I work in a large automotive enterprise and the "ultra-legacy" problem is definitely part of everyday life for us.


Manufacturing lines have evolved over the years from manual to mechanical to digital, and as you can imagine the supporting (IT) systems have very much followed the same chain. Requirements and architectural challenges were very different a decade or two ago compared to today's always-online, just-in-time manufacturing and logistics.


Moving a piece of old infrastructure is one thing; it's complicated, but in most cases your hardware supplier will be able to assist you. Trying to migrate an entire ultra-legacy system, however, is a challenge that shouldn't be tackled lightly.


I've been working on moving an ultra-legacy b2b-integration system to a new platform for the past couple of years. The project has been running for a total of 6 years, which almost makes the project itself "ultra-legacy" (the people who started it are not around anymore; the current team is the fourth iteration). Reflecting on that for too long will put you somewhere in the vicinity of Inception, so be careful - it's a multi-layered concept :-)


Enterprises, and especially integration within enterprises, grow organically and always along the principle of least resistance. A typical ultra-legacy system migration involves everything from infrastructure to applications to routines to business logic. The way your business works now isn't anywhere close to the way it worked when your ultra-legacy system was part of the bleeding edge.


In practice this means your system is most likely submerged in the middle of a huge boiling spaghetti bowl: work-arounds have been made to accommodate the ever-expanding and changing business, and there might even be work-arounds to work-arounds. If you're lucky, they might be documented, but most likely they're not. Your system might be enclosed in a wrap-around system (or two) in order to make it compliant with current policies. And suddenly your stakeholder list starts growing exponentially.


For example, in my case the system mostly uses FTP for external communication. While this might have been OK 18 years ago, it certainly isn't going to pass any policy scrutiny today. So it chains communication to another system, which connects to a third, all with different protocols. Also, since the network topology around it has changed, a bunch of VPN tunnels connect it to external systems (it originally spoke over X.25 and ISDN connections). The backend ERP systems have changed, so there's a never-ending flora of different message formats to take care of. There are thousands of sending and receiving external partners who might or might not be dead.


There is no out-of-the-box magic solution to help you. The ultra-legacy system is ultra-legacy because, like tree-ring dating, it reflects the changes in your business throughout the years.


And before you go and start your new product-selection process, anchor the business decision as high up in your organization as you possibly can. This will help enormously later on, when you're forced to pull in stakeholders who don't necessarily even know they are stakeholders in your system migration.


Make sure you spend some time with your crystal ball as well. Your project might run for multiple years - you don't want to deliver your new shiny platform straight to legacy status.


I mentioned organic growth above. If your new system can accommodate it, and you can offer your enterprise a low migration threshold, the enterprise will "migrate itself" off the old system. Basically: don't make it unnecessarily hard for the business to migrate. Love your spaghetti incident, don't fear it :-)


Selecting something that is easily extendable and modifiable is preferable in these cases, even though your IT policies might disagree and push you to buy something "standardized". Forcing your business back into the "standard box" requires an army of magicians, so my advice is to follow your business. It ensures success, even though you might have to sacrifice some standardization.


Rethink your security. Your security landscape is guaranteed to have changed completely. You need to look at your processes, business logic and components. You might have to change or tweak all of them, and you probably have a lot of catching up to do with your compliance and policies. You need backing from your security org (hope you anchored your project high enough).


Assemble a small team of dedicated people. You don't necessarily need a huge amount of specialist resources as long as you can pull in specialists from other teams when needed (again, anchor things high up in your org before you start). Try to keep your core team's work agile, lean, or whatever methodology it is you use. The simpler the better, because in a multi-year project people are going to get worn out if you don't.


I've blogged a bit about spaghetti integration and security over here, if anyone is interested in reading more about how to cook your enterprise al dente :-)


Matt Schofield (anon) | 11-28-2012 12:37 PM

I do a lot of post-merger management for retailers, banks and insurers.


The reason we don't replace legacy isn't CODING, or integrating, or whatever. It is TESTING: the new code has to black-box reproduce something that has had millions and millions of live customers go through it, with constant tinkering all along. You just can't come up with the test conditions or the database.
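One common way to approach this testing problem (not suggested in the comment itself, but widely used) is characterization or "golden master" testing: record the legacy system's actual outputs for recorded production inputs, then require the replacement to match exactly. The `legacy_fee`/`new_fee` functions below are hypothetical stand-ins for whatever the real calculation is:

```python
# Sketch of characterization ("golden master") testing: capture the
# legacy system's behavior as data, then diff the rewrite against it.
# legacy_fee and new_fee are hypothetical stand-ins for illustration.

import json

def legacy_fee(amount: float) -> float:   # stands in for the 30-year-old logic
    return round(amount * 0.0125, 2)

def new_fee(amount: float) -> float:      # stands in for the rewrite under test
    return round(amount * 0.0125, 2)

def record_golden_master(inputs, path="golden.json"):
    """Run recorded production inputs through the legacy code, once."""
    with open(path, "w") as f:
        json.dump({str(a): legacy_fee(a) for a in inputs}, f)

def verify_against_master(path="golden.json"):
    """Replay every recorded input through the new code; return divergences."""
    with open(path) as f:
        master = json.load(f)
    return [a for a, expected in master.items()
            if new_fee(float(a)) != expected]

record_golden_master([100.0, 2500.50, 9.99])
print(verify_against_master())   # an empty list means no divergences
```

This sidesteps "coming up with the test conditions" by harvesting them from production traffic instead, though it only covers the inputs you actually recorded, which is exactly Matt's point about why full replacement stays so risky.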


So unless there is some compelling new business case that is going to pay for a NEW set of functionality, it makes no sense at all to take legacy out. And so we manage this heterogeneous set of loosely coupled systems, not too visible to the customer, just reliably chugging along on hardware that is ever cheaper.





The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation