With written contributions from Andrew Wahl and Sathya Sastry
Imagine what you could do with the power of HP Cloud Service Automation (CSA). You could evolve your existing infrastructure and virtualized environments into scalable cloud services. You could respond to changing business demands quickly and economically while realizing faster time-to-value.
Now your dreams can be a reality with CSA. To take your first steps as an enterprise-grade cloud service broker to IT and line-of-business users, read on to learn how to harness the power of CSA. In the coming weeks, we will provide a practitioner’s view of the key factors you need to consider as you plan your journey to becoming a cloud service broker.
For now, learn how you can use HP CSA to improve the administration of your cloud services.
Over the past couple of weeks, HP has announced several HP CloudSystem enhancements that focus on working with partners - particularly VMware and Microsoft. I also had the opportunity to spend time speaking with a number of customers on the show floor at VMworld last week.
One of the key questions I got was about the difference between VMware's cloud management products and HP's CloudSystem. The same question, of course, applies to comparing Microsoft's cloud management products with HP's CloudSystem.
It’s exciting to see the momentum of HP CloudSystem, with over 700 customers to date around the world, spanning industries from banking to healthcare to telecom. Customers are building and managing private, public and hybrid environments to deliver cloud services powered by HP CloudSystem.
At HP, we believe that every customer will take a unique journey to their cloud environment. The first steps often include standardization, consolidation, virtualization and automation, building up to an internal IT Infrastructure-as-a-Service offering.
One of the other key things we have learned from working with thousands of customers is that cloud implementations often have a bigger impact on the people and processes of IT than on the technology of IT. Therefore, it is often good to start with a small project - often internal-IT focused - to understand how cloud will impact the unique processes and people in that specific organization before growing to a larger implementation. In other words, many organizations will want to start small and grow tall.
Written by: Richard Arthur
In previous blogs I discussed factors in market segmentation and feedback from a recent summit concerning public cloud services being launched today. Those can be found here. In this blog I assume you have some idea of which services you want to launch; now it is time to think about your launch platform.
Many service providers have tools and portals in place for customer services that they will want to reuse. However, do not underestimate the flexibility requirements of a cloud services platform. As we have seen time and time again on the Internet, knowing which services you want to launch today does not mean you know which ones you will need to launch (and retire!) in six months. Flexibility at all levels of a cloud services platform is critical. Consistency for the end customer is equally critical: end customers will view you as a “one-stop shop” for cloud services, and the services need to be presented that way.
Finally, an “all-in-one” cloud services platform will reduce risk and investment for each service being launched.
Define or identify the roles in your organization that will be responsible, accountable, consulted, informed (RACI) for each part of the cloud services platform.
Written by: Ken Won, Director of Product Marketing, Cloud Service Automation
This week at Discover 2012, HP announced a new version of HP Cloud Service Automation, available as a core element of CloudSystem Enterprise or as a standalone software product, for managing private and hybrid clouds. As a key element in HP’s Converged Cloud strategy, HP Cloud Service Automation is the industry’s most comprehensive, unified solution for brokering and managing application and infrastructure services in private and hybrid clouds, helping IT organizations increase agility and reduce cost, via a self-service portal and highly-automated lifecycle management.
HP, my dedicated co-workers, and I are proud to announce HP CloudSystem Matrix and HP CloudSystem Matrix Software version 7.1 today. This release has a combination of new features as well as improvements to existing functionality. A lot of this work is based on ideas and feedback from our customers - so thank you for providing your input!
Some of the new features include:
My 5 year old asked me today, “Why do I have to go to school? Can’t I work and make money without going to school?”
How do you answer that? Is it as simple as saying “YES!”, or does it need further explanation?
If you think of it simplistically, the end result is the same – to make money and support yourself. However, the route to get there, and the quality of life along the way, would be different.
I think of cloud in the same way. You can think of it one step at a time with piecemeal cloud solutions. Or you can look at the bigger picture and have the end-game in sight so that each decision you make gets you one-step closer on your cloud journey to hybrid delivery.
I read this article written by Richard Arthur yesterday and thought it would be worth promoting on the forum. It's a good read.
The cloud opportunity that service providers have today goes far beyond the Infrastructure as a Service (IaaS) offers that have already been launched globally. Gartner estimates that business process services represent $71.1 billion vs. only $31 billion for IaaS. According to marketsandmarkets.com, SaaS is 73% of the cloud market – including players such as Google Mail, Yahoo Mail, Adobe Web Connect and, presumably, large enterprise players such as Salesforce. Read the entire article...
For more information on HP CloudSystem, visit www.hp.com/go/cloudsystem
Follow us on Twitter @HPCloudSystem
Wouldn’t it be easier if you kept all of your lights on in your house all of the time? Think how cool that would be. No more getting out of bed to turn lights on. No more fumbling around aimlessly for a light switch. No more banging your shins on furniture. By keeping every light on, you’d be assured that whenever you need light, you’ll have light.
Sure, you’ll be over-provisioned 99% of the time, but hey, who wants bruised shins?
Well, if home lighting worked like traditional IT, this actually wouldn’t be a bad model. You could keep all the lights on to avoid extensively long light procurement cycles when demand increased. You’d pay for the lighting in a large, one-time capital investment so budgeting would be predictable. And, you’d have the peace of mind knowing that no matter the circumstance, each of your house guests would have light when they need it, ensuring lifestyle continuity and house guest satisfaction.
Of course, the reason why we don’t use this capacity planning model is that home lighting is provided in an on-demand, pay-as-you-use service model.
Elastic resources are an old concept
Lighting is very similar to cloud compute resources – it’s elastic. The reason is obvious: the demand for lighting fluctuates to extremes and in condensed timeframes. Consider the following demand drivers:
- Time of day
- Occupancy of the rooms
- Need for lighting, e.g., sleeping or reading
- Time of year (Christmas lights vs. daylight savings)
Fortunately for us, our homes come with power switches so that we can regulate our lighting consumption and manage the utility-based cost of electricity. In essence, we do our best to optimize our electricity bills by using lights only when needed and using energy efficient light bulbs to further minimize the cost. Also, if you’ve owned your home for a few years, you instinctively understand when your budget needs to increase depending on the situation, e.g., higher electricity bills during certain times of the year. After a while, you really aren’t surprised by the electricity bill as it becomes very predictable.
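To put some numbers behind the always-on vs. pay-as-you-use comparison, here is a toy Python sketch. Every figure in it (the hourly rate, the peak of 10 units, the 20% average utilization) is a made-up assumption for illustration, not a real tariff or benchmark:

```python
# Toy comparison of "always-on" vs. on-demand (elastic) provisioning cost.
# All rates and utilization figures are illustrative assumptions.

HOURS_PER_MONTH = 730
RATE_PER_UNIT_HOUR = 0.125  # assumed cost of one compute unit (or light) per hour


def always_on_cost(units: int) -> float:
    """Provision for peak and leave everything running, all month."""
    return units * HOURS_PER_MONTH * RATE_PER_UNIT_HOUR


def elastic_cost(hourly_demand: list) -> float:
    """Pay only for the units actually in use each hour."""
    return sum(units * RATE_PER_UNIT_HOUR for units in hourly_demand)


# Assume a peak demand of 10 units, but an average utilization of only 20%.
peak = 10
demand = [peak * 0.2] * HOURS_PER_MONTH  # flat 20% average, for simplicity

print(always_on_cost(peak))   # 912.5
print(elastic_cost(demand))   # 182.5
```

Under these assumed numbers the elastic model costs a fifth of the always-on model, which is exactly the "turn the lights off when you leave the room" intuition.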
Transforming Capacity Planning to Elasticity Planning
So, when it comes to home lighting, you’ve instinctively used an ‘elasticity planning’ model in lieu of a capacity planning model.
Pretty cool, huh?
Interesting side note on elasticity planning in action…
My sister’s family just came back from a week long vacation and their home power bill was 25% less than normal due to lower power consumption.
Back to blog…
If only optimizing the cost of cloud compute resources were that easy. Hmmm, well, maybe it is. After all, doing our best to minimize the cost of the cloud would seem paramount, since lower cost is one of the cloud’s promises:
- How can we optimize the resources that are already in use?
- How can we optimize the amount of resources depending on the fluctuating demand?
- How can we make the variable pricing model of the cloud predictable?
… and The BIG question is…
- How can we change from traditional on-premise capacity planning to cloud-based elasticity planning?
The irony of the cloud
What makes elasticity planning even more important to cloud is that elastically expanding more cloud compute resources doesn’t necessarily result in meeting more business demand. For example, if your application in the cloud is slow due to inefficient methods, expanding compute resources will not allow you to meet greater business demand.
These types of performance problems will impact both low and peak usage. The cloud creates what I refer to as a ‘business value trap’ – it beckons you with promises of lower cost, but may actually result in higher costs… oh the irony.
Making the cloud deliver on its cost promise
The first step in elasticity planning is to tune your application, thereby optimizing the required compute resources. This is equivalent to using an energy efficient light bulb – higher efficiency leads to less electricity, which results in lower costs.
Tuning the application means that method call chains and SQL statements are efficient and optimized. It also means that there are no memory leaks, so that all required CPU and memory resources are minimized to support maximum business demand.
Once you’ve tuned the application in the cloud, you need to right-size the application’s compute resource footprint. In essence, you need to know the optimal compute resource footprints to support fluctuating business demands.
Keeping a huge compute resource footprint deployed in the cloud to service low business demand makes about as much sense as keeping all of your lights on in your house during day time.
So, if you benchmark properly through performance testing, you’ll know the various compute resource footprints needed to support low usage (off-season), medium usage (mid-season) and peak usage (holiday season). This yields two valuable outcomes: one, you’ll validate your application’s global-class scale; and two, you’ll make your variable costs in the cloud extremely predictable.
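One way to turn those benchmark results into right-sized footprints is a simple capacity calculation. This Python sketch uses hypothetical numbers throughout (the per-instance capacity, the seasonal demand figures and the 20% headroom are all assumptions you would replace with your own test data):

```python
# Sketch: derive right-sized compute footprints from benchmark results.
# All capacity and demand numbers are hypothetical placeholders.

import math

# Benchmarked capacity: requests/sec one instance sustains at the target
# response time (an assumed result from a performance test).
CAPACITY_PER_INSTANCE = 250

# Expected demand (requests/sec) per season, assumed from historical traffic.
demand_tiers = {
    "off-season": 400,
    "mid-season": 1500,
    "holiday-peak": 6000,
}


def footprint(demand_rps: float, headroom: float = 0.2) -> int:
    """Instances needed for a demand level, with a safety headroom."""
    return math.ceil(demand_rps * (1 + headroom) / CAPACITY_PER_INSTANCE)


for tier, rps in demand_tiers.items():
    print(f"{tier}: {footprint(rps)} instances")
```

With footprints like these pinned to each season, your cloud bill becomes as predictable as the electricity bill in the lighting analogy.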
HP Cloud Assure for cost control
Performing true elasticity planning in the cloud requires the proper toolset and expertise. HP Cloud Assure for cost control is a service provided by HP SaaS, meant to help you with your elasticity planning transformation. It is intended to ensure you have the right-sized cloud compute footprint, at the right cost.
Avoid the business-value-trap of the cloud. Perform proper elasticity planning!
One of my favorite shows on television is MythBusters on the Discovery Channel. The entertaining hosts recreate scenarios and then either prove or debunk common myths. For example, the show has scientifically proven that wearing metal jewelry does not increase your chances of being struck by lightning. And, believe it or not, you do not suffer a worse hangover when overindulging in both “hard” and “soft” liquor.
While cloud computing seems to be getting most of the buzz these days, a slew of myths regarding Software as a Service (SaaS) have come to the forefront. Let’s do a little myth busting of our own regarding the most common SaaS misconceptions.
Myth #1: SaaS and cloud computing are exactly the same thing. While most Software-as-a-Service solutions fall under the larger cloud computing definition as “massively scalable and deployed via the Internet,” they also have a number of other defining factors. First, SaaS is software that is owned, developed, hosted and managed remotely by one or more vendors for use by customers over the Internet. Also known as “software on demand,” SaaS customers subscribe to a service on a “pay-as-you-go” basis or for a set time period, rather than having to own the infrastructure, hardware and software to make the application available to the business. Think of cloud as the larger concept, and SaaS as one type of cloud computing—renting application functionality over the Internet.
Myth #2: SaaS is not for enterprise business. In today’s economy, SaaS is a sensible way for businesses to reduce costs and expand capabilities. It is ideal for complex, global implementations required by today’s largest companies. Leading analysts predict that large companies will deploy at least one-fourth of their business applications via SaaS in the next few years.
Myth #3: SaaS is a business purchase that circumvents IT. Again, this myth needs debunking. SaaS is considered one of many tools that IT uses to deliver value to the business. Because SaaS is a cost-effective delivery model for many applications, IT is one of the main purchasers of SaaS solutions.
Myth #4: SaaS adds risk to your security. HP SaaS is ISO 27001 Information Security Management Systems certified and audited by KPMG. The ISO 27001 standard is designed to ensure the protection of critical information and is required by many industries, such as in the finance, health, public and IT sectors.
Myth #5: SaaS is merely ASP repackaged. Application Service Providers (ASPs) and SaaS are related, but definitely not the same. Back in the late 1990s, ASPs offered an early form of application hosting. ASPs essentially moved a customer’s applications into a huge, hosted data center where the customer could access the data. While many ASPs exist today and are viable for certain industries and applications, the model does not lend itself to widespread use. HP SaaS applications are vendor built, pre-deployed, supported 24/7 and do not require moving customer applications into humongous data centers.
Myth #6: SaaS will not last. SaaS is a proven, mature model providing long-term benefits for both user and provider. You may not know this, but HP has been in the SaaS business since 2000. In fact, LoudCloud, founded in 1999 by Marc Andreessen, was one of the pioneers of SaaS, with one of the first commercially viable Infrastructure as a Service models. LoudCloud became Opsware, a company HP acquired in 2007. And HP successfully deployed utility computing (a predecessor to SaaS), benefiting customers back in the 1990s. For many applications, SaaS will make the most sense for a long time to come.
Do you have a myth busting idea that you would like the HP Software & Solutions myth busters to confront? If so, we would like to hear from you.
"It scared me to death. It just doesn't make sense. You're still on your motorcycle at the height of the jump going 'this thing's not going to rotate around.' I knew it was possible. It just doesn't seem logical."
- Travis Pastrana, motocross rider quoted after trying his first back flip
If an IT executive or QA manager were asked whether a member of their load testing team could do a midair back flip on a motorcycle, judging solely by their views on load testing, I’m pretty sure their response would be something like:
“Sure. They can start out on a leisure ride, and then they can gas it up the ramp and when they hit the apex, they’ll execute the back flip, then land safely, and then be on their way.”
Why do IT execs and QA managers believe that they can execute a successful spike load test by simply starting a traditional load test, and then ramping up the number of virtual users until they have a large spike load test? Very often, much to their disappointment, it’s not that simple. Spike load puts the extreme in load testing.
Like extreme sports, spike load testing raises the stakes of successful outcomes. The great news though is that when those outcomes are met, the results are amazing. Let’s review the attributes of a spike load test:
- Uses tens and maybe even hundreds of thousands of virtual users (puts the rapid acceleration into the leisure ride)
- Requires the orchestration of an extra-large, on-demand test bed with the compute power to generate the spike load (puts the ramp into the leisure ride)
- Requires robust data planning and data refresh strategy (puts the airborne into the leisure ride)
- Is bounded by a non-negotiable deadline because large load testing prepares for the peak load of a specific event (puts the back flip into the leisure ride)
- May involve Web 2.0 / RIA front ends, which invalidates previous benchmarking and adds complexity to the technical preparation (puts a flaming hoop into the leisure ride)
- May involve load testing during a maintenance window, which means you must have a successful spike load test without the possibility of a second chance (puts the Grand Canyon into the leisure ride)
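The first attribute above, ramping tens of thousands of virtual users in a condensed window, can be sketched as a simple schedule generator. The user counts and step counts here are illustrative assumptions, not tuning advice for any particular tool:

```python
# Sketch of a spike ramp-up schedule: a steady baseline of virtual users,
# a rapid climb to the spike peak, then a hold at peak.
# All figures are illustrative, not a recommendation.

def spike_schedule(baseline: int, peak: int, ramp_steps: int, hold_steps: int) -> list:
    """Return virtual-user counts per interval: baseline, fast ramp, hold at peak."""
    step = (peak - baseline) // ramp_steps
    ramp = [baseline + step * i for i in range(1, ramp_steps + 1)]
    ramp[-1] = peak  # make sure the final ramp step lands exactly on the peak
    return [baseline] + ramp + [peak] * hold_steps


# e.g. idle at 1,000 users, spike to 50,000 over 5 intervals, hold for 3.
print(spike_schedule(1_000, 50_000, ramp_steps=5, hold_steps=3))
```

The hard part, of course, is not generating the numbers but orchestrating the test bed, scripts and data to actually sustain them, which is what the rest of this list is about.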
The stakes rise with each added challenge that spike load testing brings to the table. Because the scale of a company’s business and reputation are directly tied to the scale of their websites, the extreme stakes must be dealt with for extreme business results.
There are three things that you absolutely need in order to successfully perform spike load testing and protect your business and reputation:
- An elastic test framework which can expand in an on-demand fashion to generate large loads
- An easy way to create virtual users for both traditional websites and for today’s rich-internet-application technology such as AJAX, Flex and Flash
- The experience, knowledge and best practices to streamline the large load testing processes to ensure your outcomes are met
Note that all three focus on not just ensuring outcomes, but also expediting the time-to-value.
Now, you may be thinking that I’m overusing the terms ‘best practices’ and ‘experience’. Quite frankly, I feel they are often overused, especially in the IT world. But when it comes to spike load testing, experience cannot be overvalued. Here is a list of questions that an experienced spike load tester should be able to answer with confidence:
How are virtual user scripts created so that they are ultra-scalable?
How are virtual users ramped up during a large load test?
What run-time settings should be set during a large load test?
What are the special data handling considerations for large load testing?
If you don’t know the answers to those questions, then your chances of successfully pulling off a large load test are greatly diminished.
Check out the new solution from HP SaaS called HP Elastic Test. It’s architected and priced in a cloud-compute, elastic fashion:
Elastic: a common expression used to describe the ability to expand and contract compute resources in an on-demand fashion. The purchasing of elastic compute power is utility-based, otherwise known as ‘pay-as-you-go’.
Example: Validating the performance of internet, global-class applications requires an elastic load testing solution.
It’s also backed by 9 years of spike load testing experience. HP SaaS performs the scripting and spike load test orchestration, using all of their experience and best practices.
Validating the scale of your website represents business stakes at extreme levels. If you think about it, load testing is all about risk mitigation and protecting your business and reputation. Why not extend your risk-mitigation strategy by going with a proven vendor with industry leading technology?
Or, to put it another way…
This is load testing…
This is load testing with HP Elastic Test….
Introductions are in order
Q: What do John Grisham, Tom Clancy and Patricia Cornwell have in common?
A: They all had careers in fields that were a heck of a lot more interesting than IT Management.
This is why they all have what seems to be an endless reservoir of material to draw upon to create incredible reading experiences for their fans. It’s also the main reason why I haven’t become a multi-millionaire by writing a book called ‘The Firm’, about some piece of firmware that is the front for a back-dating stock option scheme.
Alas, I’ll do my best to use my career experience in IT Management as motivation to provide you, dear readers, with interesting, insightful and, yes, even controversial reading experiences. Of the numerous blogs written about the IT Management sphere, I don’t expect you to blindly select and read mine out of the many – I hope to earn your interest and your repeat business.
Combustion engines: Game-changers that shake up the world
Of course, I’ll be the first to admit that IT Management may actually be exciting, especially when a game-changer comes along and redefines all that we knew before. The game-changer I’ll be writing about is cloud computing, which has the potential to redefine the rules of how IT Management functions and supports business outcomes. In short, cloud computing may very well be the game-changer that injects some excitement into our world, and it is the impetus for this new HP SaaS blogging initiative. Being part of the cloud has given us some strong opinions and made us curious about what others think about the cloud.
There are numerous ways to define cloud computing. I actually once heard somebody define cloud computing as a set of best practices used for datacenter management. I’m not saying this opaque definition is incorrect; however, I’d certainly never define it this way.
So, for purposes of discussion, allow me to level-set on how HP SaaS views cloud computing. From my perspective, we fall pretty much in line with how Gartner defines cloud computing in its press release, ‘Gartner Highlights Five Attributes of Cloud Computing’, June 29, 2009.
The five key attributes are:
- Service-Based
- Scalable and Elastic
- Shared (resources)
- Metered by Use
- Uses Internet Technologies
I believe that these five attributes, when used wisely, provide tremendous business benefits to both cloud consumer and cloud provider, especially with respect to Total Cost of Ownership (TCO) and Return On Investment (ROI).
Okay, okay, I had to go there and use two of the most overused three-letter acronyms in marketese. However, in this case I truly believe the previous sentence is valid and should be taken seriously. Please don’t let the appearance of ‘TCO’ and ‘ROI’ act like an aftermarket wing glued to the back of a nice BMW 330ci, turning a solid cloud computing benefit into a cheesy, throwaway line – the prose equivalent of a ‘Fast and the Furious’ 3-series with neon-green racing stripes.
Okay, no more digression (although I can’t promise).
Is cloud computing the next electric car?
The potential business benefits of the cloud are very clear, yet enterprises have yet to really adopt cloud computing as a major portion of their sourcing strategies. Here is an excerpt from the TechWorld article, ‘Enterprises say no to cloud computing’:
Forrester recently found that 25 percent of enterprises with at least 1,000 employees are using or plan to use hosted virtual server offerings such as Amazon EC2, and that fewer than 20 percent of smaller companies plan to do so. Earlier this year, Gartner said that cloud application infrastructure technologies are not yet mature and that adoption right now is limited mostly to "pioneers and trailblazers."
In this same article, the main inhibitor to enterprises’ adoption of cloud computing is cited as the lack of security in the cloud.
Frank Gens, a chief analyst at IDC, also published a report called ‘Clouds and Beyond: Positioning for the Next 20 Years of Enterprise IT’, which adds application performance and availability to security, thus rounding out the top 3 inhibitors to cloud adoption.
The way I see it, these inhibitors roll up into a broader one: enterprises must give up some level of control of their IT environment to reap the benefits of cloud computing. It goes without saying that we in IT management tend to be control freaks, to the point of obsession/compulsion. And it’s this diminished ability to control that leads to uncertainties such as the lack of security in the cloud.
Of course, what’s obvious is that control must be relinquished as a byproduct of deploying an application off-premise. What may not be so obvious is that the amount of control that must be relinquished varies depending on how enterprises consume the cloud. To put it another way, the amount of responsibility the cloud consumer has with respect to security, performance and availability, is dynamic depending on how the cloud is being consumed.
60,000 mile check-up – whose responsibility?
Cloud computing is commonly broken down into three varieties:
- Infrastructure-as-a-service (IaaS)
- Platform-as-a-service (PaaS)
- Software-as-a-service (SaaS)
Click here to learn more about each variety.
Understanding the dynamic nature of responsibility between cloud consumer and cloud provider may be easier if you view the cloud as an actual cloud with varying levels of visibility.
Try to stay with me on this.
Visibility in a white, misty cloud is much clearer than visibility in a heavy, dark cloud. Think of IaaS as a white, misty cloud where visibility is, for the most part, clear. In this cloud, you can see the host machines, the operating systems and the IP addresses. View PaaS as a grey cloud where visibility is somewhat obscured, yet some remains. In this cloud, you can see the platform, the development environment and some web services. View SaaS as a dark cloud where visibility is next to nil. In this cloud, all you can see is the web application that provides the service. What you can’t see is the type of host machines, the operating systems that are running or the private IP addresses.
It’s this dynamic change in visibility within the cloud that creates the dynamic change of responsibility between cloud consumer and cloud provider. Simply put, the more visible the cloud is, the more responsibility the consumer has. The less visible the cloud is, the more responsibility the cloud provider has.
To reinforce my point, let’s take security in IaaS as an example. Because IaaS has clear visibility, it’s the consumer’s responsibility to ensure:
- Network ports are secure
- Operating systems are hardened
- Middleware is protected
- Applications are secure
It’s the provider’s responsibility to ensure:
- Web application firewalls (WAFs) are in place and configured
Let’s take the same example but change IaaS to SaaS. SaaS is a much darker portion of the cloud where visibility is obscured. In this case the provider takes on most, if not all responsibility to ensure security of the application in the cloud.
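The visibility argument above can be sketched as a small responsibility matrix. The layer names and the exact consumer/provider split here are illustrative, following the reasoning in this post; real contracts vary by provider:

```python
# Sketch: who owns each security layer under IaaS, PaaS, and SaaS.
# Layer names and the ownership split are illustrative assumptions that
# follow the visibility argument; real provider contracts will differ.

RESPONSIBILITY = {
    # layer:            (IaaS owner,  PaaS owner,  SaaS owner)
    "physical/network": ("provider", "provider", "provider"),
    "operating system": ("consumer", "provider", "provider"),
    "middleware":       ("consumer", "provider", "provider"),
    "application":      ("consumer", "consumer", "provider"),
    "data & access":    ("consumer", "consumer", "consumer"),
}


def consumer_layers(model: str) -> list:
    """Layers the cloud consumer must secure under a given service model."""
    idx = {"iaas": 0, "paas": 1, "saas": 2}[model.lower()]
    return [layer for layer, owners in RESPONSIBILITY.items() if owners[idx] == "consumer"]


print(consumer_layers("IaaS"))  # the most layers: the cloud is most "visible"
print(consumer_layers("SaaS"))  # the fewest: the provider owns almost everything
```

Reading down the columns makes the point of this section concrete: as the cloud darkens from IaaS to SaaS, responsibility drains from the consumer side of the table to the provider side.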
Fixing a flat tire without an air compressor
For the most part, cars are still a very convenient way to get from point A to point B. However, would we still drive cars if mechanics and subject matter experts didn’t exist? What if we had to fix our own cars? What if tools didn’t exist and we had to not only fix our own cars, but also had to create our own tools to fix our cars?
There was another layer of innovation created as a result of the invention of the car. This innovation took form both from a solution standpoint – e.g., the oil filter wrench – and from a services standpoint – e.g., smog checks. It was this layer of innovation that enabled the car to eventually thrive and become the dominant mode of transportation in the world.
I believe that HP SaaS is providing services and solutions that transform the unpaved, single-track path to the cloud into a smooth multi-lane highway that would give the autobahn an inferiority complex. As evidence, our first innovation to market was Cloud Assure, which enables enterprises to regain some measure of control with respect to security, performance and availability of applications in the cloud. It addresses enterprises’ top three inhibitors to cloud adoption, with the ultimate goal of enabling the enterprise to benefit from the cloud while mitigating risk.
The triple-crown and grand trifecta of automobiles
The three things I want from my car:
- better be fast
- better be reliable
- better be economically friendly
Sadly, I can only have two out of the three. Whichever two I get, they will negate the third.
However, through our goal to make life easier for enterprises in the cloud, we at HP SaaS try our best to provide all three:
- Our services and solutions are fast – they are ready to use immediately so that you have a very fast time to value
- Our services are reliable – we have a 99.9% availability service level as well as 9 years of performing SaaS services for over 700 customers
- Our services are economically friendly – our services have proven to lower TCO by up to 30% and are term-based so that you may leverage operational expense budgets to achieve your business outcomes
Innovation begets innovation. Not only do I think we provide some measure of a supporting layer of innovation to assist enterprises on their way to the cloud, but I believe we do it in a way that makes economic and value sense to the consumer. After all, who can resist the holy trinity of services and solutions for cloud adoption?
What are your thoughts on the division of responsibility in the cloud? From the enterprise standpoint, will the cloud be the next Ford, or will it be the next electric car? What other supporting innovations should be brought to market to ease enterprises’ transition to the cloud? And is there any way to achieve the triple crown of services and solutions for cloud adoption besides going with a SaaS model?
I look forward to your thoughts.