12 IT Trends for 2012: #5 Scale-out datacenter architectures

"If you were plowing a field, which would you rather use: Two strong oxen or 1024 chickens?"  Seymour Cray

 

This was the question posed to me by friend and genius developer, Chris Cogdon. Scale-up (aka the oxen) won that round, but in 2012 I believe the chickens might come home to roost (pun very much intended), with real-world planning implications for IT leaders not only in terms of improved price/performance and energy efficiency, but also in application development and team skills.

 

First, let me shed some light on Seymour's analogy in order to set some context for the next couple of posts, all of which trace their roots back to this idea.

 

Back when I was a lad (please, no abacus jokes), the doomsday crowd were predicting that we'd very soon be bumping up against the upper limits of physics, and that the progress described by Moore's "law" would soon grind to a halt (sound familiar?). While we now know that didn't happen, at the time computer scientists and vendors were scurrying around trying to work out how they could harness microprocessors to create mainframe-style "grunt" by scaling workloads across multiple CPUs.

 

Chris' point, as a developer, was that each architecture represented a choice, one bounded by the type of work you wanted to accomplish (for those inclined to get their geek on, see the Wikipedia article on SIMD vs MIMD for a treatise). That choice then drove the selection of operating systems, programming languages and I/O architectures.
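
To make the distinction concrete, here is a toy sketch (mine, not Chris', and in Python purely for brevity): NumPy's vectorised operations approximate the SIMD style, where one instruction marches across a whole array in lockstep, while multiprocessing approximates MIMD, where independent workers each run their own instruction stream.

# Toy illustration of SIMD-style vs MIMD-style parallelism (illustrative only).
import numpy as np
from multiprocessing import Pool

# SIMD-style: a single operation applied to every element of the data at once.
data = np.arange(1_000_000, dtype=np.float64)
squared = data * data  # NumPy applies the multiply across the array in lockstep

# MIMD-style: independent processes, each free to follow different logic.
def classify(x):
    # Each worker runs its own instruction stream on its own datum.
    return "big" if x > 500_000 else "small"

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        labels = pool.map(classify, range(0, 1_000_000, 250_000))
    print(labels)  # ['small', 'small', 'small', 'big']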

 

At the time there were two competing camps: the "two strong oxen" model championed by the mainframe/midrange system types (a few powerful RISC and CISC CPUs well suited to a handful of big batch tasks) versus the "1024 chickens" camp (thousands of lower-powered CPUs performing a large number of simple tasks repeatedly). Enterprises largely chose the former and the academic/scientific community the latter, the most famous example being the 65,536-processor Connection Machine from Danny Hillis' now-defunct Thinking Machines, a beast beloved of biochemists and nuclear weapons researchers everywhere.

 

What makes all this relevant to CIOs today is that engineers at ".com" companies such as Facebook, Google and their progenitors recognized that their use case (lots of users performing the same basic task, such as searching or updating friends on what they're eating) was architecturally similar to that of a biochemist using 64K CPUs to simulate protein folding, and they borrowed liberally from that world's architectural and programming-language toolkits. Toolkits which are now migrating into the enterprise.
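
The best-known such borrowing is the map-and-reduce shape: one simple function applied independently to millions of inputs, then the partial results merged. As a rough sketch only (the word-count task and the names here are my own, not from the companies mentioned above), in Python:

# Toy map/reduce word count: many independent "map" tasks, one merge step.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_count(line):
    # "Map": each worker counts the words in its own slice of the input.
    return Counter(line.split())

def merge(left, right):
    # "Reduce": fold the partial counts into a single result.
    left.update(right)
    return left

if __name__ == "__main__":
    lines = ["the quick brown fox", "the lazy dog", "the fox again"]
    with Pool(processes=3) as pool:
        partials = pool.map(map_count, lines)
    totals = reduce(merge, partials, Counter())
    print(totals.most_common(2))  # [('the', 3), ('fox', 2)]

Because each map task is independent, adding CPUs (or whole machines) raises throughput almost linearly, which is precisely why the same pattern suits both protein folding and serving millions of users.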

 

In the past, the biggest obstacle was finding a computer scientist who could program these beasts to maximise their potential. The great news is the arrival of modern development languages and frameworks, in particular Erlang, along with Ruby (including Ruby on Rails), Struts, PHP and Python. Developed in, or derived from, the days of massively parallel systems, these languages are behind the rapid growth of scale-out applications capable of tapping the power of the 1024 chickens without requiring an army of resident computer scientists. As with COBOL and Fortran before them, this doesn't spell the end of Visual BASIC, C and Java, but it does mean that CIOs should be assessing their readiness with HR and their leadership team, and preparing plans to hire or develop these skills among both developers and operations staff.
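
Erlang in particular made the "many small, isolated workers exchanging messages" model a first-class language feature. A very rough approximation of that shape (in Python rather than Erlang, and without Erlang's supervision or fault isolation) looks like this:

# Rough sketch of the Erlang-style worker/message-passing model.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Each worker owns its own state and communicates only via messages.
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: time to shut down
            break
        outbox.put(msg.upper())  # stand-in for the real per-message work

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    workers = [Process(target=worker, args=(inbox, outbox)) for _ in range(4)]
    for w in workers:
        w.start()
    for word in ["scale", "out", "beats", "scale", "up"]:
        inbox.put(word)
    for _ in range(5):
        print(outbox.get())  # results arrive in whatever order workers finish
    for _ in workers:
        inbox.put(None)  # one sentinel per worker
    for w in workers:
        w.join()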

 

What's true of software is also true of hardware. With Web 2.0 applications and cloud architectures now designed to exploit distributed, scale-out arrays of cheaper compute, network and storage, I believe CIOs should be using early 2012 to re-evaluate their datacenter plans and standards to ensure that they're building on a modernized, converged architecture capable of accommodating both scale-up and scale-out models without requiring a rip-and-replace between the two (not one to plug products, I'll break with policy here and recommend you look at HP's 3PAR storage as a great proof point of what "2.0" hardware architectures can bring to the table).

 

Given that most of these skills are in high demand by “.com” startups everywhere, I’m wondering if anyone’s having trouble ramping their skills fast enough or if it’s just business-as-usual?

 

P.S. Both of these topics naturally lead us to the big-data and ARM vs Intel discussions, each of which I'll cover in a separate post. Stay tuned!

 

Comments
JudyRedman | 01-18-2012 09:45 AM

Paul, I would like to see chickens plowing a field! Enjoyed this post and looking forward to reading the entire series on what you think are the top trends in IT.
