To virtualise or not — that is the (rhetorical) question

Today we face more and more complexity in our IT landscapes. We are forever bolting on newer, faster, cheaper solutions to deliver business services. And, as if that weren't enough of a recipe for night terrors, we keep layering on top of existing systems, whether back-end transactional systems, data repositories or utility services, so that we can reuse as much as possible.

The challenge is that when the next major project comes along, we face a dilemma over how to test the new solution in preproduction. What's worse, we don't always have enough control over those dependent systems to test against them. More worrisome still, we don't want to fire test transactions at live systems for fear of losing data integrity, or of hearing the inevitable "There is no one left who knows how it works, don't touch it!"

We must take a simple, scientific approach: isolate the variable under test and keep everything else constant. Easier said than done, right? Further, we need to repeat tests, run them earlier in the cycle and clean out environments so that every run starts from a known state with no data corruption. All of this adds cost and makes maintaining a real replica environment less and less feasible.

The thing is, we need to test. It is not optional; just look at the recent headlines and count the outages that might have been mitigated by more thorough testing. We need to be able to develop and test end-to-end, early and often. The earlier we test, the sooner we identify defects and the more money we save on fixing them. The benefits are clear: some studies quote a 1:10:100 multiplier for the cost of fixing a defect in design, in coding and in production respectively (http://www.riceconsulting.com/public_pdf/STBC-WM.pdf).

Going back to the system under test, we have options for how we handle dependent systems. We can stub out external integrations so that we always get a predicted response, but this doesn't really test the integration code. And, let's face it, we have all had to stand in front of the boss and explain how a stubbed routine made it into production, right? No? Oh, just me then. I'll get my coat on the way out.
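
To make the risk concrete, here is a minimal sketch in Python of the kind of hand-rolled stub I mean (the service name and payload are invented for illustration). Because it always returns the same canned answer, none of the real integration code, such as connection handling, timeouts or error mapping, is ever exercised:

class CreditCheckStub:
    """A hypothetical stand-in for a downstream credit-check service."""

    def check(self, customer_id: str) -> dict:
        # Every caller gets the same happy-path answer regardless of input, so the
        # real transport, authentication and error handling are never exercised.
        return {"customer_id": customer_id, "status": "APPROVED", "score": 750}


if __name__ == "__main__":
    stub = CreditCheckStub()
    print(stub.check("CUST-001"))  # always "APPROVED", whatever the customer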


So if we need to test, but don’t want to risk stubbing something and looking daft, what can we do? We virtualise.

Service virtualisation is a software solution: a virtual service created to emulate a system we interface with, without the need to maintain that system or clean it out after each test run. It removes the cost of additional (sometimes archaic, always expensive) hardware, the risks of stubbing and the delays of cleaning and preparation before reuse.
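
As a rough illustration of the idea, here is a minimal sketch of a home-made virtual service, assuming the dependent system is a simple HTTP API (the endpoint paths and payloads are invented). Unlike a stub compiled into the application, it runs as a separate process that the system under test talks to over the network, so the real integration code is exercised; commercial service-virtualisation tools build on this basic idea with recording, data modelling and management:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses keyed by request path. In practice these would be recorded
# from the real system or modelled from its interface contract.
CANNED_RESPONSES = {
    "/accounts/1001": {"accountId": "1001", "balance": 2500.00, "currency": "GBP"},
    "/accounts/1002": {"accountId": "1002", "balance": -120.50, "currency": "GBP"},
}


class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown account"}).encode())


if __name__ == "__main__":
    # Point the system under test at http://localhost:8080 instead of the real back end.
    HTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()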


Increasing the frequency of testing and doing it earlier has been shown to reduce the number of major bugs significantly. One HP customer reports a 30 percent decrease in issues relating to code quality. More broadly, studies suggest that service virtualisation can deliver benefits in these areas too:

  • elimination of performance-test delays caused by missing or unstable application components
  • reduction in hours spent coding, configuring and maintaining custom stubs or homegrown virtualisation
  • reduction in overtime and after-hours work for late-night performance testing

Service virtualisation brings great benefits, but it comes with some overheads. Test scripts driven at the front end need to be synchronised with test responses at the back end, and orchestrating this can get complex, though no more complex than maintaining a mirror environment (a rough sketch of one way to keep the two in step follows below). My question amongst all of this is a simple one: do you virtualise or not? And can you use virtualisation in other areas? One idle pondering moment had me wondering whether we could use it as a honeypot in security testing, giving an "eIntruder" or "eBurglar" the impression that they have breached our network when really we are just containing them in a secure area of the network, letting them exhaust themselves trying to breach something that is pretending to be something it is not.
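
Keeping the two ends in step could be as simple as having each test seed the virtual service before driving the front end. The sketch below assumes a hypothetical seeding endpoint on the virtual service (the /_admin/seed path, the order ID and the test name are all invented for illustration); a commercial tool would typically manage this correlation for you:

import json
import urllib.request

VIRTUAL_SERVICE = "http://localhost:8080"  # hypothetical admin URL of the virtual service


def seed_response(order_id: str, payload: dict) -> None:
    """Tell the virtual service what to return when the application asks about order_id."""
    request = urllib.request.Request(
        f"{VIRTUAL_SERVICE}/_admin/seed/{order_id}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(request)


def test_dashboard_shows_dispatched_order():
    # Arrange: the virtual back end will report this order as dispatched.
    seed_response("ORD-42", {"orderId": "ORD-42", "status": "DISPATCHED"})
    # Act: drive the front end (a UI automation step would go here).
    # Assert: the dashboard shows order ORD-42 as "Dispatched".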


To learn more, download the free HP/Forrester whitepaper "Service Virtualization And Testing (SVT): How Application Development, Testing, And Delivery Leaders Can Speed Up Delivery And Improve Quality Of Applications" [reg. req'd.].


What are your feelings toward service virtualisation? Is it something you use or see a need for, or not? Are there other uses for it that we are not tapping into yet? I would be interested in your viewpoints, so please share them in the comments below.

Ken O'Hagan is director of software presales for UK&I at Hewlett-Packard. Before coming to HP, Ken amassed close to 10 years of technical experience, working for companies such as Perot Systems and the Bank of Ireland. During his time at the latter, he was responsible for architecture definition and validation, hardware specification, technical design and implementation, and was a key part of the team that successfully implemented the five largest programmes ever delivered for Bank of Ireland.
