Looking through the lens of the Software Testing Professionals conference…

Now, twist my arm: it was in New Orleans, and during the best time of year to visit NOLA (pre-sweltering heat and before the NCAA playoffs and fans arrived), so I was happy to fly in and take my seat among the gathering testing gurus.

 

But, specifically, I am hired to care about finding the pain points customers are experiencing where HP Service Virtualization can help. Here are some of my key takeaways from the show. Please chime in with your own perspective, especially if you disagree or if you want to add your thoughts on what hurdles need virtualizing…

 

Performance testing with scalability: A large-scale website provider, who knows a LOT about scale, made the repeated and emphatic point that performance is more than speed. Performance is part of every functional aspect of an application, from stability to availability to scalability. And while testing for all of these attributes is imperative, economy is a factor too: high-quality performance testing cannot come at too high a cost to the organization, or it may be one of the things on the chopping block.

→ In my opinion, that is the red-flag warning of a vicious cycle: if you cut performance testing, it may impact the stability of an application, and impact the availability of that application during the two seconds a customer is willing to wait for it to load. If the ad comes on TV but you can't get on the site, you go back to your regularly scheduled programming. Less customer revenue means more cuts to IT budgets, and round and round we go.

→ HP Service Virtualization and its integration with HP performance testing tools can give you virtualized services to test and stress to the breaking point without the need to access production systems; a rough sketch of the idea follows below.
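If it helps to picture what a "virtualized service" means in practice, here is a minimal sketch. This is not HP Service Virtualization itself, just a hypothetical Python stub of a back-end /account endpoint with canned data and an assumed response delay, so a load generator has something to hammer that is not production.

```python
# Minimal stand-in for a virtualized back-end service (illustrative only,
# not HP Service Virtualization). It returns a canned response for a
# hypothetical /account endpoint so performance tests never touch production.
import json
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CANNED_RESPONSE = json.dumps({"accountId": "12345", "status": "ACTIVE"}).encode()
SIMULATED_LATENCY_SECONDS = 0.05  # assumed latency of the real back end

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_SECONDS)  # mimic the real service's response time
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    def log_message(self, *args):
        pass  # keep the console quiet while the load test runs

if __name__ == "__main__":
    # Serve on localhost:8080; point the load generator here instead of at production.
    ThreadingHTTPServer(("localhost", 8080), VirtualServiceHandler).serve_forever()
```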

 

Performance Testing should FAIL: You *want* the system you are testing to fail. Stressing the system to its breaking point lets you gauge where and when your systems fail, and whether that point is acceptable for your business. Is your performance so good that you are wasting assets or people on incremental levels of performance that do not impact the bottom line? Too much of a good thing? Probably a problem that not too many testers have. But look at other reasons a test could fail: "not getting a response from a third party service causes testers to give up testing that piece of functionality and that gets us in trouble."

→ You want to fail, but you want to fail for the right reasons: you are testing to the breaking point because you have found it, not because you gave up looking.

→ Virtualized services allow you to hit virtual systems to the point of breaking without breaking the production system or accruing the cost of hitting the actual service or application; the step-load sketch below shows the shape of such a test.
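To make "finding the breaking point" concrete, here is a minimal step-load sketch, assuming the stub service above is running on localhost:8080. The concurrency steps, latency budget, and error threshold are assumptions for illustration, not settings from HP's performance tools: the driver ramps load until the 95th-percentile latency or the error rate crosses a limit, and that crossing is the failure you were looking for.

```python
# Minimal step-load driver: ramp concurrency against the virtual service
# until latency or errors exceed assumed thresholds (illustrative only).
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/account"  # hypothetical virtualized endpoint
LATENCY_BUDGET_SECONDS = 2.0  # the two seconds a customer is willing to wait
MAX_ERROR_RATE = 0.05         # assumed acceptable failure rate

def one_request():
    """Issue one request; return (elapsed seconds, success flag)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            resp.read()
        return time.perf_counter() - start, True
    except Exception:
        return time.perf_counter() - start, False

for concurrency in (10, 50, 100, 250, 500):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: one_request(), range(concurrency * 10)))
    latencies = sorted(elapsed for elapsed, _ in results)
    p95 = latencies[int(len(latencies) * 0.95)]
    error_rate = sum(1 for _, ok in results if not ok) / len(results)
    print(f"{concurrency} concurrent users: p95={p95:.2f}s errors={error_rate:.1%}")
    if p95 > LATENCY_BUDGET_SECONDS or error_rate > MAX_ERROR_RATE:
        print("Breaking point found -- failing for the right reasons.")
        break
```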

 

These were two key points from an excellent presentation, and they prompted some “a-ha’s” and some lively debate. What do you think?

 

And if testing virtualized services is giving you a painful headache, check back with us soon; we might have found the "aspirin" that will cure your development pains, along with more learning opportunities and dialogue around HP Service Virtualization: www.hp.com/go/sv

 

Comments
04-11-2012 12:33 PM

Here's something to consider regarding your point: "And while testing for all of these attributes is imperative, economy is a factor too: high-quality performance testing cannot come at too high a cost to the organization, or it may be one of the things on the chopping block."

 

True, you can waste a lot of money perfecting a performance test environment.  But the costs you mitigate are things like:

 

A major Wireless Service Provider rolling out a new version of their CRM software and spending 2 weeks unable to activate a phone. Their stock price plummeted low enough for a competitor to pick them up for next to nothing. Is the dissolution/acquisition of your company a risk worth taking? This is not a hypothetical.

 

A vendor keeps building new versions of an application it's selling to a Fortune 500 company (let's call them MegaCo). To properly review the performance of the application, MegaCo has to get the code from the vendor and roll it into production, a process that can take 4-8 weeks due to scheduling and IT prod deployment constraints. Every time, the performance is sub-par, but the vendor keeps claiming that the issue is resolved in the next patch. How long is it worth it to MegaCo to keep getting bamboozled by their vendor? In this case (also a true story), MegaCo hired a Performance Test consultant to test the vendor app and reduced the turnaround time on these new versions from 2 months to 1 week, saving MegaCo millions at a cost of a few thousand.

 

By the same token, there are apps out there that have 20 users for non-critical functionality. These are getting performance tested every day.

 

My point is this:  If you're rolling out an application where a few days of downtime can cost you millions of dollars, you would be a fool not to buy some relatively inexpensive insurance.  By the same token, if you're rolling out some software where the risk of it being down for 2 months is a loss of a few hundred dollars, it's probably not worth it to spend $50,000 performance testing the application.

 

The problem is not that "Performance Testing is too expensive", it's that project management has a habit of following a cookie-cutter development process without going through appropriate cost-benefit analysis.

 

A co-worker just gave us a report on the STP conference (he attended).  His takeaways were:

 

  • Companies are in favor of cutting out Performance testing since it doesn't fit into "Agile" style testing.  Instead, they just release and monitor/fix issues in prod to save money. 

In my opinion, this is because they don't understand how to performance test their applications. You don't run through weekly iterations with performance testing. Heck, it might take you a week just to fix/re-record all your scripts for the test again. Instead, you wait until the app is in its final iterations of functional test and run it through performance testing (typically coincident with UAT).

 

If you have some specific concerns with how something will work, you can run component-level performance tests, and then run end-to-end performance tests coincident with UAT to verify the entire system; a rough sketch of such a component-level check follows below. This would give you a way to get ahead of any expected performance issues. Sidelining performance testing entirely is negligent if you're dealing with an application that can cost you millions of dollars per day in downtime.
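A component-level check like that does not need heavyweight tooling. The sketch below is only an illustration under assumptions (a hypothetical local endpoint and an arbitrary latency budget), not anything from the conference or a specific HP tool: it times one component under repeated calls and fails if the median response time drifts past the budget.

```python
# Sketch of a component-level performance check that can run alongside
# functional tests (endpoint and budget are assumptions, not real values).
import statistics
import time
import urllib.request

COMPONENT_URL = "http://localhost:8080/account"  # hypothetical component under test
SAMPLES = 20
BUDGET_SECONDS = 0.5  # assumed per-call budget for this component

def test_component_latency_budget():
    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        urllib.request.urlopen(COMPONENT_URL, timeout=5).read()
        timings.append(time.perf_counter() - start)
    median = statistics.median(timings)
    # Catch regressions early, well before the end-to-end test at UAT time.
    assert median < BUDGET_SECONDS, f"median latency {median:.3f}s exceeds budget"
```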

 

  • Other companies are in favor of releasing into "the cloud" and trying to foist off responsibility for application performance onto the cloud provider.

Genius. I guess they haven't heard about Amazon's EC2 fiasco last year? The whole point of any form of testing is to mitigate your risk. Shifting your risk to a 3rd-party provider doesn't do this; it just complicates the process of determining root cause. How many times have departments within the same company finger-pointed for months while trying to avoid blame? Now these bozos think that looping in another company is the solution? This is a failure on so many levels it's criminal.

 

I didn't attend the STP conference, but so far, I'm not getting warm fuzzies from the trends I'm hearing in IT Management. It sounds like things are going from bad to abysmal.
