Infrastructure Management Software Blog

Capacity planning in virtualized environments

In our joint webinar with VMware last week, “Reduce Costs and Gain Control of Your Virtualized Infrastructure with Consolidated Management”, a question arose about capacity planning in virtualized environments. I asked Rob Carruthers of Hyperformix to share his expertise on the topic. Readers of this blog may recall his podcasts on “virtualization and IT transparency” and “virtualization and capacity management”. Rob's response follows.
-Peter


Capacity planning is becoming more important because virtualization offers a safe and logical way to get more out of IT infrastructure at lower cost. For example, many companies are stacking between 10 and 15 virtual machines on a single server. Traditionally, companies have relied on rules-of-thumb estimates and spreadsheets for capacity planning. Over the past few years, a host of automated solutions has come on the market to help IT professionals get the most out of virtualized environments. These tools fall into two broad categories – placement tools and capacity planning tools.
 
Placement tools – Help answer the question: How do I correctly place applications within a virtual cluster for optimal performance? These tools analyze current resource consumption and behavior, and assist with placing complementary workloads together within a virtual cluster. Some of these tools perform linear forecasting to estimate future resource consumption based on past growth.
 
Limitations: Growth is often not linear. Demand for some applications grows faster than others, which can result in capacity mismatches: some clusters may have excess capacity while others have shortages. Most placement tools do not factor in end-user response-time SLAs, which can result in a “green” status for capacity but a “red” status when users experience application slowdowns.
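To make that concrete, here is a minimal sketch of the straight-line forecasting approach described above – illustrative only, not any vendor's actual algorithm, with hypothetical monthly CPU-utilization samples. If demand accelerates rather than growing linearly, this kind of projection understates future needs.

# Illustrative only: straight-line forecasting of CPU utilization.
# The sample data and six-month horizon are hypothetical.
import numpy as np

monthly_cpu_pct = [22.0, 24.5, 26.0, 29.5, 31.0, 34.0]  # past six months, one VM

def forecast_linear(samples, months_ahead):
    """Fit a least-squares line to past samples and extrapolate it forward."""
    x = np.arange(len(samples))
    slope, intercept = np.polyfit(x, samples, 1)
    return slope * (len(samples) - 1 + months_ahead) + intercept

print(forecast_linear(monthly_cpu_pct, 6))   # naive six-month projection
# If growth is non-linear (demand accelerates), this straight-line estimate
# will understate the capacity actually needed - the limitation noted above.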
 
Capacity planning tools – Examine both the migration of applications to the virtual environment and the post-migration capacity requirements. These tools generally allow IT to plan growth based on the number and types of users, take into account the non-linear aspects of business growth, and translate that into an actionable capacity plan. More mature tools also consider the end-user experience and can forecast the response times users will actually experience for the business transactions they are running. Tools that provide ongoing planning (not just one-shot plans) are referred to as capacity management tools.
 
Limitations: These tools generally require more granular data and automated collection to produce accurate and timely results. Hypervisor and system management vendors have recently been adding support for key virtualization metrics, which addresses this issue and makes capacity planning easier and faster.
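As an illustration of the kind of response-time forecasting described above – a toy queueing model, not what any of these products actually implement – a simple M/M/1 estimate shows how response time degrades non-linearly as transaction volume grows. All figures below are hypothetical.

# Toy illustration only: an M/M/1 queueing estimate of transaction
# response time as load grows. Service time and load are hypothetical.
import math

def mm1_response_time(arrival_rate, service_time):
    """R = S / (1 - U), where utilization U = arrival_rate * S."""
    utilization = arrival_rate * service_time
    if utilization >= 1.0:
        return math.inf  # the server saturates; response time is unbounded
    return service_time / (1.0 - utilization)

service_time = 0.05   # seconds of service per transaction (assumed)
current_tps = 12.0    # transactions per second today (assumed)

for growth in (1.0, 1.25, 1.5, 1.75):
    r = mm1_response_time(current_tps * growth, service_time)
    if math.isinf(r):
        print(f"{growth:.2f}x load -> saturated; add capacity before this point")
    else:
        print(f"{growth:.2f}x load -> about {r * 1000:.0f} ms per transaction")
# Response time climbs steeply as utilization nears 100% - the gap between
# "green" capacity and "red" user experience that a user-centric plan catches.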
 
The market need for both placement tools and capacity planning tools has encouraged several vendors to create solutions. Some key players in this space include: VMware, Vizioncore, Platespin, Cirba, Neptuny, and Hyperformix.


When evaluating which method and product to use, customers should first ask several big-picture questions:



  • How can I get the most cost savings from virtualization without sacrificing my company's ability to transact business?

  • What tools will improve my agility when line-of-business owners change the plan, or our end customers do it for us?

  • Is my capacity planning tool integrated with my Hypervisor and System Management tools to provide “one source of the truth”?

  • Are the answers provided by my capacity planning tool accurate and reliable?


The full potential of virtualization depends on delivering financial returns back to the business. Capacity planning helps ensure those economic returns are delivered safely, without negatively impacting performance.


Rob Carruthers 
Director of Product Marketing 
Hyperformix


 

Capacity Planning: A long-dead mystical art

.... well, maybe not.


Back in the mid-'80s and early '90s, I made my living doing performance analysis and capacity planning for HP customers.


The roots of capacity planning as a discipline go back to the mainframe days, when enormously expensive hardware hosting multiple applications, all competing for resources, made it essential to be able to plan for new workloads or hardware changes.


Even with the HP minicomputers I worked with, when someone was considering spending hundreds of thousands of dollars on an upgrade, it was realistic to invest in a few days of consulting. We would build a model of the applications and the hardware and do some serious "what if" analysis to determine what configuration was actually required to do the job.


But times change. Hardware prices dropped significantly - particularly with the introduction of Intel-based "industry standard servers". Increasingly, applications were deployed in distributed configurations with one application per server.


The capacity planning problem became much 'simpler'. With one application per server - so no issues with applications interacting with each other - and cheap hardware, capacity planning took a back seat to the "throw hardware at it" approach. It became cheaper to add a CPU or more memory and see what happened than to conduct a capacity planning exercise.


What goes around comes around. I'm seeing a couple of things happening which are changing attitudes towards performance management and capacity planning.


First, most organizations are trying to "do more with what they have". There is an awareness that there is likely to be spare capacity somewhere in the environment - it's just a matter of finding it. So we're seeing a lot more demand for enterprise-wide performance data collection and reporting. It's worth spending a little money and some time understanding what server and network resources you have available. It's also the type of diligence that CxOs are expecting in the current economic climate - they expect staff to have exhausted all reasonable options before asking to purchase additional assets.
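As a rough sketch of that "find the spare capacity" exercise - with hypothetical data and an assumed threshold, not a real collector - you might rank servers by average CPU utilization and flag those with obvious headroom:

# Rough sketch with hypothetical data: rank servers by average CPU
# utilization and flag those with headroom as consolidation candidates.

servers = {                      # hourly CPU-utilization samples (%) per server
    "web01":  [15, 18, 22, 20, 17],
    "app03":  [72, 80, 85, 78, 74],
    "db02":   [35, 40, 38, 42, 37],
    "file01": [ 5,  7,  6,  8,  5],
}

HEADROOM_THRESHOLD = 30          # assumed policy: below this average is "spare"

for name, samples in sorted(servers.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    avg = sum(samples) / len(samples)
    verdict = "consolidation candidate" if avg < HEADROOM_THRESHOLD else "busy"
    print(f"{name}: {avg:.1f}% average CPU -> {verdict}")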


The second item is the return of the mainframe. I mean that figuratively, of course, but large virtual server hosts are "the new mainframe". The hardware costs can be substantial as organizations provision powerful, resilient platforms to host multiple virtual machines. And the challenge of multiple workloads competing for resources is back - in this case, each workload is a VM. Organizations want to make optimum use of these expensive VM hosts' resources, but they also want to ensure that service levels are maintained when combining VMs. That requires good performance data collectors that can gather capacity-planning data from virtualized platforms.
 
I have seen a number of customer requests recently where tools to support capacity planning activities - across enterprises and within virtualized environments - have been front and center.
 
Looks like the mystical art has risen from its grave.


For Operations Center, Jon Haworth
