Just to be clear, I love open source software. But, it also drives me nuts and I'll think twice before supporting my own systems again.
You've probably all heard the arguments (you might have even been representing the case for the affirmative), "it's free (as in speech), widely supported by a community and we even have the source code, so we can debug our own problems! Why would we waste money buying packaged software/services from a vendor?!"
Heading to your local "app store" might only further embolden you. Look at all the great stuff there that's "free."
But if my experience of the last few months is anything to go by, it might be free, but "free as in tears," not "speech," with real consequences for enterprise IT leaders planning the future of their services, staffing, technology and vendors.
Do as I say, not as I do
As any of my friends will tell you, I never left my love of tinkering with technology behind. Whether it's consumer or enterprise grade, I get a thrill out of getting my hands dirty. Sometimes it's just for the sheer intellectual stimulation, sometimes it's to solve a real-world problem for friends and family.
Any technically-minded readers who've helped out friends and family before are also aware that our recommendations come with an implied contract of ongoing technical support. So if you're anything like me, you tend to choose carefully, knowing that it's the gift that keeps on giving if you get it wrong.
However, I'm prepared to bend the rules when it comes to my own working tools. I was practising BYOx before anyone had thought to name it consumerisation, and especially when it comes to my personal and professional workflow, anything I can do to find an efficiency or an edge is always interesting, as well as potentially beneficial for my team if it can be made to scale.
For a variety of reasons, my need was for storage. Fast, lots of it, network attached, broad protocol support and most importantly reliable. With a bit of arm twisting I am sure I could have found something from HP that would have fit the bill, but I had heard so much from my enterprise customers about the lure of "free software" that I thought I would take the plunge and try to build my own and see what it's really like for a semi-technical person to use open-source in a "production" environment.
My criteria for the experiment were:
- all software used should be free (as in speech)
- ship as pre-compiled binaries (for ease of installation)
- be open-sourced (for bug fixes)
In keeping with the "good enough IT" mentality I would assemble the hardware from white-box gear just to see what it's really like on the other side.
I'll skip the harrowing eight weeks that I tried in vain to build a workable solution other than to say that I am now running a commercially supported platform and have dispensed with being my own engineering department.
Here's what I learned.
0). Free as in not.
My first task was to choose the software stack. This turned out to be harder than I expected. A lot of time went into not only trying to determine which stacks would support my functional requirements, but more importantly, I had to deal with several false starts where I had selected and implemented a pilot system only to find out that the features I really wanted were only available under traditional (and in this case prohibitively expensive) commercial terms.
Lesson: read ALL the fine print and test your full requirements before taking the plunge and committing to a software stack
1). You're the integrator
Having settled on my choice of software, I then had to find the hardware. Despite having chosen what I thought was a widely used NAS software stack, it turns out that no one maintains an up-to-date list of supported hardware it will run on. The speed of obsolescence in the hardware space means that any published configuration guides were out-of-date even if only recently published (ironically except for the major hardware vendor’s equipment). This resulted in some costly mistakes, three project restarts and more than a few precious hours back and forth between the local tech parts supplier.
Lesson: there's a reason your vendor spends a lot of time and money testing and qualifying hardware and software stacks - it's so that you don't have to. Don't underestimate the effort that goes into it, or the reason it's not updated as quickly as you might expect
2). Other people's code is hard to debug
Having managed to get it all up and running, it was time to start using it in anger: slowly ramping up the load and the number of users, migrating over my data and putting the system through its real-world paces. That's when the wheels started to fall off. I encountered a number of instability issues that only hit after the system had been in constant use for two weeks - spurious errors and crashes that at first I thought might have been due to my own mistakes.
One of the great things you hear about open-source is that there's often a large community of users out there who can help, and it's true. What I discovered after weeks of ploughing through the forums was that I had encountered the same bugs as others, and could either wait for the next release or plunge in and try to find a bug that the authors themselves were struggling to locate.
As an aside, search engines are GREAT, but you quickly realise that finding the solution to your problem through Google consumes more time and money than paying someone to solve it for you.
Lesson: community is great, the wait for a fix isn't
3). Just because you fixed it, doesn't mean it gets fixed
In a few cases, members of the community had found a solution themselves, patched their own systems and submitted the fix to the core team (the people who manage the formal release process). However, the fix had not found its way into production: the core team (many of them volunteers) were so busy with the sheer volume of work around the next release, not to mention their day jobs, that the fix was sitting on the shelf, waiting to be rolled out to the masses.
Lesson: it's not the size of the community that matters; it's the pace at which the core team moves that counts.
4). 80% right and working is better than 100% right and late
Supporting my NAS was not my full-time job, producing and communicating my work is. In the end, my need to be able to trust my systems trumped my desire to have exactly what I wanted. I needed to know that someone was behind me, with sufficient engineering discipline to keep me out of trouble and sufficient resources to fix my problems should I encounter any.
Lesson: Focus on where the real value is created versus where money might be saved. Don't lose sight of core vs. context.
5). And then there were none
During the course of this work, a major wave of operating system upgrades hit us late last year. As luck would have it, two of the drivers for components I needed were produced by volunteers who had decided to hang up their spurs and focus on other work. The result: no upgrade path and little pressure to follow through.
Lesson: it's YOUR system, including managing obsolescence
6). Free as in tears
I was talking to my friend and author Gene Kim about my project and he said it reminded him of a similar, long abandoned project he had sitting in his attic. He's a talented engineer who managed to get his system working, but like mine, he mothballed it. He called it a "snowflake" system: one so unique to him that no one else could touch or support it without risking irreparable damage, so it was better off switched off. Spending his, or your, valuable time supporting a "free" system is foolish economics given the sheer opportunity cost, at least based on my experience.
As I said at the opening, I'm ALL for open source, but as a business leader myself, I feel much better educated about the pros and cons of "free" in my production environment.
Lesson: the software's free, if you can afford the tears
I really enjoyed my detour off the tracks of supported software and systems. It helped ground me in the true cost of modern free software, gave me some insight into how that cost would scale in an enterprise setting, and showed why vendors like HP, Red Hat and even Apple invest so much time wrapping enterprise-level hardening and support around even highly professional open source projects like Linux and OpenStack. It also explains why CIOs who are serious about open source invest heavily and resource their teams accordingly, often working with the vendor community.
I have a newfound appreciation for the hard work that goes into engineering "simple" solutions and a desperate need to offload a bunch of crappy motherboards, comically thin sheet metal cases and a bunch of drives...