Organizations across the globe are adopting the cloud at a staggering rate, consuming its various flavors: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). As a result, both traditional applications and a new breed of applications are moving from in-house datacenters to the cloud. Performance testing is typically considered a “luxury item,” a “nice to have” before an application’s go-live, for various reasons, e.g., project deadlines and costs. But when applications move into the cloud, there is an additional layer of unknowns that stakeholders should test before go-live, the performance of the application being one of them.
Wouldn’t it be easier if you kept all of your lights on in your house all of the time? Think how cool that would be. No more getting out of bed to turn lights on. No more fumbling around aimlessly for a light switch. No more banging your shins on furniture. By keeping every light on, you’d be assured that whenever you need light, you’ll have light.
Sure, you’d be over-provisioned 99% of the time, but hey, who isn’t tired of bruised shins?
Well, if home lighting worked like traditional IT, this actually wouldn’t be a bad model. You could keep all the lights on to avoid long light procurement cycles when demand increased. You’d pay for the lighting in a large, one-time capital investment, so budgeting would be predictable. And you’d have the peace of mind of knowing that no matter the circumstance, each of your house guests would have light when they need it, ensuring lifestyle continuity and house guest satisfaction.
Of course, the reason why we don’t use this capacity planning model is that home lighting is provided in an on-demand, pay-as-you-use service model.
Elastic resources are an old concept
Lighting is very similar to cloud compute resources – it’s elastic. The reason is obvious – the demand for lighting fluctuates to extremes and in condensed timeframes. Consider the following demand drivers:
- Time of day
- Occupancy of the rooms
- Need for lighting, e.g., sleeping or reading
- Time of year (Christmas lights vs. daylight savings)
Fortunately for us, our homes come with power switches so that we can regulate our lighting consumption and manage the utility-based cost of electricity. In essence, we do our best to optimize our electricity bills by using lights only when needed and using energy efficient light bulbs to further minimize the cost. Also, if you’ve owned your home for a few years, you instinctively understand when your budget needs to increase depending on the situation, e.g., higher electricity bills during certain times of the year. After a while, you really aren’t surprised by the electricity bill as it becomes very predictable.
Transforming Capacity Planning to Elasticity Planning
So, when it comes to home lighting, you’ve instinctively used an ‘elasticity planning’ model in lieu of a capacity planning model.
Pretty cool, huh?
Interesting side note on elasticity planning in action…
My sister’s family just came back from a week long vacation and their home power bill was 25% less than normal due to lower power consumption.
Back to blog…
If only optimizing the cost of cloud compute resources was that easy. Hmmm, well maybe it is that easy. After all, it seems like doing our best to minimize the cost of the cloud would be paramount, since lower cost is one of the cloud’s promises:
- How can we optimize the resources that are already in use?
- How can we optimize the amount of resources depending on the fluctuating demand?
- How can we make the variable pricing model of the cloud predictable?
… and the BIG question is…
- How can we change from traditional on-premise capacity planning to cloud-based elasticity planning?
The irony of the cloud
What makes elasticity planning even more important to cloud is that elastically expanding more cloud compute resources doesn’t necessarily result in meeting more business demand. For example, if your application in the cloud is slow due to inefficient methods, expanding compute resources will not allow you to meet greater business demand.
These types of performance problems will impact both low and peak usage. The cloud creates what I refer to as a ‘business value trap’ – it beckons you with promises of lower cost, but may actually result in higher costs… oh the irony.
Making the cloud deliver on its cost promise
The first step in elasticity planning is to tune your application, thereby optimizing the required compute resources. This is equivalent to using an energy efficient light bulb – higher efficiency leads to less electricity, which results in lower costs.
Tuning the application means that method call chains and SQL statements are efficient and optimized. It also means that there are no memory leaks, so that all required CPU and memory resources are minimized to support maximum business demand.
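To make the “efficient SQL statements” point concrete, here is a minimal sketch of one of the most common tuning wins: replacing an N+1 query pattern with a single joined query. The schema and data are entirely hypothetical, chosen only to illustrate the shape of the fix.

```python
import sqlite3

# Hypothetical schema for illustration: orders and their line items.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY);
    CREATE TABLE items  (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1), (2), (3);
    INSERT INTO items  VALUES (1, 'A'), (1, 'B'), (2, 'C'), (3, 'D');
""")

def items_n_plus_one():
    # Untuned: one query per order -- N+1 round trips to the database.
    result = {}
    for (order_id,) in conn.execute("SELECT id FROM orders"):
        rows = conn.execute(
            "SELECT sku FROM items WHERE order_id = ?", (order_id,)
        ).fetchall()
        result[order_id] = [sku for (sku,) in rows]
    return result

def items_single_query():
    # Tuned: one JOIN fetches everything in a single round trip.
    result = {}
    for order_id, sku in conn.execute(
        "SELECT o.id, i.sku FROM orders o JOIN items i ON i.order_id = o.id"
    ):
        result.setdefault(order_id, []).append(sku)
    return result

# Same answer, far fewer round trips as the order count grows.
assert items_n_plus_one() == items_single_query()
```

At three orders the difference is negligible; at cloud scale, the N+1 version burns compute (and money) linearly with data volume for no added business value.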
Once you’ve tuned the application in the cloud, you need to right-size the application’s compute resource footprint. In essence, you need to know the optimal compute resource footprints to support fluctuating business demands.
Keeping a huge compute resource footprint deployed in the cloud to service low business demand makes about as much sense as keeping all of your lights on in your house during day time.
So, if you benchmark properly through performance testing, you’ll know the various compute resource footprints needed to support low usage (off-season), medium usage (mid-season) and peak usage (holiday season). This results in two valuable outcomes – one, you’ll validate your application’s global class scale; and two, you’ll make your variable costs in the cloud extremely predictable.
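The arithmetic behind those predictable variable costs is simple once you have benchmark numbers. The sketch below assumes hypothetical figures – per-instance throughput from a performance test and a pay-as-you-go hourly rate – and derives the footprint and monthly cost for each seasonal demand level.

```python
import math

# Illustrative numbers only -- real values come from your own benchmarks
# and your provider's price list.
PER_INSTANCE_RPS = 250   # requests/sec one instance sustains (benchmarked)
HOURLY_RATE = 0.50       # assumed pay-as-you-go price per instance-hour

def footprint(demand_rps: float) -> int:
    """Instances needed to serve a given demand level."""
    return max(1, math.ceil(demand_rps / PER_INSTANCE_RPS))

def monthly_cost(demand_rps: float, hours: float = 730) -> float:
    """Projected monthly spend for that footprint."""
    return footprint(demand_rps) * HOURLY_RATE * hours

for season, rps in [("off-season", 400), ("mid-season", 1500),
                    ("holiday peak", 6000)]:
    print(f"{season}: {footprint(rps)} instances, "
          f"~${monthly_cost(rps):,.2f}/month")
```

With three benchmarked footprints like these, the “variable” cloud bill becomes a lookup table rather than a surprise.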
HP Cloud Assure for cost control
Performing true elasticity planning in the cloud requires the proper toolset and expertise. HP Cloud Assure for cost control is a service provided by HP SaaS, meant to help you with your elasticity planning transformation. It is intended to assure you the right-sized cloud compute footprint, at the right cost.
Avoid the business-value-trap of the cloud. Perform proper elasticity planning!
"It scared me to death. It just doesn't make sense. You're still on your motorcycle at the height of the jump going 'this thing's not going to rotate around.' I knew it was possible. It just doesn't seem logical."
- Travis Pastrana, motocross rider quoted after trying his first back flip
If an IT executive or QA manager were asked whether a member of their load testing team could do a midair back flip on a motorcycle, judging solely by their views on load testing, I’m pretty sure their response would be something like:
“Sure. They can start out on a leisure ride, and then they can gas it up the ramp and when they hit the apex, they’ll execute the back flip, then land safely, and then be on their way.”
Why do IT execs and QA managers believe that they can execute a successful spike load test by simply starting a traditional load test, and then ramping up the number of virtual users until they have a large spike load test? Very often, much to their disappointment, it’s not that simple. Spike load puts the extreme in load testing.
Like extreme sports, spike load testing raises the stakes of successful outcomes. The great news though is that when those outcomes are met, the results are amazing. Let’s review the attributes of a spike load test:
- Uses tens of thousands, and maybe even hundreds of thousands, of virtual users (puts the rapid acceleration into the leisure ride)
- Requires the orchestration of an extra-large, on-demand test bed with the compute power to generate the spike load (puts the ramp into the leisure ride)
- Requires a robust data planning and data refresh strategy (puts the airborne into the leisure ride)
- Is bounded by a non-negotiable deadline, because large load testing prepares for the peak load of a specific event (puts the back flip into the leisure ride)
- May involve Web 2.0 / RIA front ends, which invalidates previous benchmarking and adds complexity to the technical preparation (puts a flaming hoop into the leisure ride)
- May involve load testing during a maintenance window, which means you must have a successful spike load test without the possibility of a second chance (puts the Grand Canyon into the leisure ride)
The stakes rise with each added challenge that spike load testing brings to the table. Because a company’s business and reputation are directly tied to the scale of its websites, the extreme stakes must be dealt with for extreme business results.
There are three things that you absolutely need in order to successfully perform spike load testing and protect your business and reputation:
- An elastic test framework which can expand in an on-demand fashion to generate large loads
- An easy way to create virtual users for both traditional websites and for today’s rich-internet-application technology such as AJAX, Flex and Flash
- The experience, knowledge and best practices to streamline the large load testing processes to ensure your outcomes are met
Note that all three focus on not just ensuring outcomes, but also expediting the time-to-value.
Now, you may be thinking that I’m overusing the terms ‘best practices’ and ‘experience’. Quite frankly, I feel they are often overused, especially in the IT world. But when it comes to spike load testing, experience cannot be over-valued. Here is a list of questions that an experienced spike load tester should be able to answer with confidence:
- How are virtual user scripts created so that they are ultra-scalable?
- How are virtual users ramped up during a large load test?
- What run-time settings should be set during a large load test?
- What are the special data handling considerations for large load testing?
If you don’t know the answers to those questions, then your chances of successfully pulling off a large load test are greatly diminished.
Check out the new solution from HP SaaS called, HP Elastic Test. It’s architected and priced in a cloud compute, elastic fashion:
A common expression used to describe the ability to expand and contract compute resources in an on-demand fashion. The purchasing of elastic compute power is utility-based, otherwise known as ‘pay-as-you-go’.
Example: Validating the performance of internet, global-class applications requires an elastic load testing solution.
It’s also backed by 9 years of spike load testing experience. HP SaaS performs the scripting and spike load test orchestration, using all of their experience and best practices.
Validating the scale of your website represents business stakes at extreme levels. If you think about it, load testing is all about risk mitigation and protecting your business and reputation. Why not extend your risk-mitigation strategy by going with a proven vendor with industry leading technology?
Or, put another way…
This is load testing…
This is load testing with HP Elastic Test….