HP LoadRunner and Performance Center Blog

Video: HP LoadRunner in the Amazon EC2 Cloud

Join Mark Tomlinson as he provides a screen-by-screen video walkthrough of HP LoadRunner running on the Amazon EC2 Cloud.

Prevailing winds bring announcement of HP LoadRunner in the Cloud

It's a given that performance testing is an integral part of your application lifecycle. However, it is often done in an unplanned, ad-hoc manner. Why? Because at most companies, time and money are limited resources, and when resources are scarce, something has to give. Unfortunately, testing usually gets the short end of the stick in such situations. And even when you have plenty of hardware and tools - plenty of resources - it can take so much time to provision the test bed that you have little time left over for much testing at all. Either way, your organization is put at great risk.


And let's not forget: it takes money to make money. When you have almost no budget to invest in testing tools, you might be tempted to rely on inexpensive or open source testing solutions that provide inadequate testing capabilities. As my good friend and colleague Steve Feloney says, "Ya know, you get what you pay for." With those naïve tools, a mistaken or miscalculated test result could cost you more than you can imagine.


Cloud Computing changes this equation – especially for testing – because it is fast
*and* cheap.  It’s fast, simultaneously provisioning two dozen servers in about 5 minutes, about as fast as you can swipe your credit card.  It’s also cheap.  The basic cloud machine instances cost less than $1 per hour.  This extremely low-cost pricing of Cloud resources is why many of the adolescent cloud-testing startups are finding some wind in their sails.  And you would expect that those startups would take advantage of their fair profits and convert the investment into building better testing solutions.  But they haven’t.  And the winds are about to change. 


Announcing:  HP LoadRunner in the Cloud


We combine the strengths and credibility of LoadRunner with the efficient and cost-effective power of Amazon EC2 to deliver an extremely valuable testing solution that enables performance validation across your application lifecycle.  HP LoadRunner in the Cloud makes performance testing and load testing ubiquitous to organizations and projects of all sizes. 


Key features of HP LoadRunner on Amazon EC2:


  • Test web applications properly by leveraging market-leading technology

  • Ensure application quality using an affordable performance testing solution

  • Gain immediate access to pre-installed software that enables on-demand, unplanned testing

  • Obtain self-service access to a flexible, scalable testing infrastructure


Right now we are accepting requests for an extended Beta of HP LoadRunner in the Cloud.


Click here for more details on HP LoadRunner in the Cloud - we have lots more materials to share...


Sticky ToolLook Interview: Agile Performance Testing

I love my StickyMind!...actually, it's Sticky ToolLook that contacted me to ask a few questions about our observations of customers who are adopting Agile Performance Testing methods. As you know, in testing, asking good questions leads to finding good answers - and that's exactly what Joey McAllister sent along... excellent questions!




So, check out the Sticky ToolLook newsletter here to learn more!

Set up an Oracle Monitor in LoadRunner Using a Baseball Bat [reblogged]

Scott Moore is one of our good friends and partners over at Loadtester, Incorporated - they have some very helpful articles on their website.

This recent article on setting up Oracle monitoring for LoadRunner is really great - Scott writes:

"Whenever I go into a project where previous LoadRunner testing did not include proper monitoring (by some other jive-talking turkey who had no clue), there is always a concern from the database administrator when we start asking for permissions to monitor the database server within LoadRunner. After an offline conversation that lasts about five minutes, they usually understand what we are doing and have no problem giving us access. However, I always keep a baseball bat handy in case that doesn't work."

Read the entire article here...http://www.loadtester.com/set-oracle-monitor-loadrunner-using-baseball-bat



Roll up your sleeves: what to do when the Performance Test Plan is missing

We've all been there. You're working with a performance testing tool and suddenly find that you don't have all the information you need to accurately configure LoadRunner to simulate real usage. You get the call that you're going to be thrown into the lab next week for performance testing. This email came to me today, and it was a perfect opportunity to explain how to quickly put a simple test plan together, even when the stakeholders for the project are pressuring you.


From: Narayanan, Kathiresan 
Sent: Thursday, February 18, 2010 3:46 AM
To: loadrunner @ hp.com
Subject: (HP LoadRunner and Performance Center Blog) : Multi threading in java vuser loadrunner protocol




I have to design a scenario where I am using just one user in the Controller. The scenario is like this: first create a company, then under that company create multiple users. This entire process is in one script. What I did was first create a company and then create the users one by one using a for loop, but what the stakeholders want is to create a company and then create the users under it in parallel. They should be created simultaneously so we can expect a high load.


Could anyone please give me a suggestion/program that creates the users simultaneously?



From: Tomlinson, Mark
Sent: Thursday, February 18, 2010 12:45 PM
To: Narayanan, Kathiresan
Subject: RE: (HP LoadRunner and Performance Center Blog) : Multi threading in java vuser loadrunner protocol


Hi Kathiresan,


There are 2 things you should work on before modifying any scripts or scenarios in LoadRunner.


First, you need to figure out whether you can have a single script for both adding Companies and Users. Your goal is to align your test/script design with the way your stakeholders are thinking and talking about the business. Your test design should easily make sense to the stakeholders.


  •  If they are thinking more like "We have someone adding a Company and 50 Users in one session," then you could probably do well to combine the Company and User actions into a single script.


  •  If they are thinking more like "We need to simulate 100 New Companies per hour, and also 5000 Users per hour across those new Companies," then separate scripts are probably better.


Next, create a table showing the activities and quantities of each activity for 1 hour.  For instance, how many companies need to be created in a single hour?  How many users need to be created for each company in 1 hour?  Also – how many SIMULTANEOUS Real Users will be doing each of these activities?  This will help you to figure out the maximum quantity of transactions for your test run.  This table will help describe to the stakeholders exactly what the 1 hour test run will do.


Activity                      Trans per User per Hour    Total # of Real Users    Total Trans/hour
Adding a Company              1                          150                      150
Adding Users to a Company     50                         150                      7500
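The arithmetic behind a table like this is worth spelling out: total transactions per hour is simply transactions per real user per hour multiplied by the number of real users. A minimal sketch in Java (the class and method names are my own; the numbers mirror the 150-real-user example):

```java
// Workload-model arithmetic: total transactions per hour for an activity
// is (transactions per real user per hour) x (number of real users).
public class WorkloadModel {

    static long totalTransPerHour(long transPerUserPerHour, long realUsers) {
        return transPerUserPerHour * realUsers;
    }

    public static void main(String[] args) {
        // 150 real users, each adding 1 company per hour
        System.out.println(totalTransPerHour(1, 150));   // prints 150
        // 150 real users, each adding 50 users per hour (50 users per new company)
        System.out.println(totalTransPerHour(50, 150));  // prints 7500
    }
}
```

Working these numbers out before touching the tool is the whole point: the table, not the script, is what you review with the stakeholders.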


And now you are ready to go back to LoadRunner and implement the test…and let’s assume you are going to have the combined activities in one virtual user script:


Script Name:  Add Company and Users
Actions:  Init, Action 1, Action 2, End
                Init: Login to the System

                Action 1: Add Company
                Description:  adds a single company to the system and writes the company name, duration 5 minutes
                Test Data:  requires no staged data, but will create a single User login and Company record

                Action 2: Add User to Company
                Description:  adds 50 Users (in a loop) to a single Company…with 55 second delay between each User
                Test Data:  requires a Company record, from Action 1 (above) to which the User will be added

                End:  Logout of the System

Scenario Configuration: 
Iteration Pacing:       No iterations configured – script will start and run until completed
            Real World Comparison:  1 virtual user simulates 1 real user in 1 hour (Concurrency Factor = 1)


Here are a couple other questions for you to consider about this suggested scenario:      

  • What will be the maximum # of running virtual users?
  • During the test run, when will the Company records be updated?
  • How would you make the test run twice as many transactions in the same 1 hour?

Good luck!!  Smile
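As a footnote to Kathiresan's original question about creating users in parallel: setting LoadRunner protocol specifics aside, the idea can be sketched with plain Java threads. Here createUser() is a hypothetical placeholder for whatever call (a web service request, for example) actually creates one user:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java sketch of creating many users under one company in parallel.
// createUser() is a hypothetical placeholder for the real call that creates
// a single user.
public class ParallelUserCreation {

    static void createUser(String company, int userId) {
        // placeholder: a real script would issue the create-user request here
    }

    static int createAll(String company, int userCount) throws InterruptedException {
        AtomicInteger created = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 1; i <= userCount; i++) {
            final int id = i;
            Thread t = new Thread(() -> {
                createUser(company, id);   // all creations run simultaneously
                created.incrementAndGet();
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            t.join();                      // wait for every creation to finish
        }
        return created.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 50 users per company, matching the Action 2 design in the reply above
        System.out.println(createAll("ExampleCo", 50) + " users created in parallel");
    }
}
```

Raw thread fan-out is only the mechanism, though; the workload model in the reply above is still what should govern how much parallel load the test actually generates.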



Mark's technology predictions for the new year: 2010

Although this is the first time I have publicly published my technology predictions for the new year, I have always been a person who likes to imagine that I have some magical intuition about the future of technology. At the same time, my experience as a performance tester has taught me that with technology, all that glitters (happy-shiny-new-toys) isn't gold. Here are my almost-always-100%-accurate, completely intuitive top ten predictions for technology in the year 2010.

1.  Rich Clients (AJAX, Flex, Silverlight, HTML5) hit the BIG TIME! It's not just a developer's preference anymore, since more and more customers are making corporate policies to standardize on at least a few (if not a single) web technology platforms. Standardization will accelerate adoption.

2.  Cloud Computing will be so ubiquitous, we will start calling it Blue Sky Computing. Everyone will catch some "nice rays" from Google, Amazon and Microsoft. More than predicted, the cost-benefit of distributed computing off-premise will surprise us.

3.  Web 3.0 will emerge from the ambiguous depths of technocracy - at least, someone will have to actually build something cool for a thesis to graduate this year, eh?  

4.  Application teams will start to learn that all the time savings and complexity reduction for developers have really just passed the costs on to testing teams, the business, the infrastructure, more hardware and other people's budgets.

5.  Open Source will continue to creep like ivy, fertilized by naivety. Several major companies will learn that the total cost of ownership means the vines finally cover the windows, doors and eventually the whole house.   

6.  Clifford Stoll will again NOT be proven right about the collapse of the internet, because engineers are inventing the next generation of switches, routers, storage, cpu and wasteless architectures that don't clog the pipes.   

7.  The only jobs for Data Center Operations will be somewhere near a massive hydro-electric or geo-thermal power source. Which means fishing, white-water rafting and geology fanatics have a new career path...in dead server recycling.  

8.  Larry will stop buying companies. Maybe.  

9.  In all types of testing, Resource Visibility and Performance Awareness will become the predictors for determining the likelihood of success. If you know it's gonna be slow...it will not go!

10.  (just a prediction)...I have a hunch some really awesome new features are coming in LoadRunner and Performance Center this year.

Good luck and Happy New Year!

Application Tuning? or Adding More Servers? Hyperformix and HP LoadRunner help you to decide!

You might have heard the phrase "don't just throw hardware at a performance problem," which means you should be smart about your investments in solving performance issues and capacity planning. As you might imagine, many customers have asked me how the LoadRunner teams get along internally with the hardware divisions inside HP. They ask, "Are the sales people competing against each other, or what?" They are curious about the inherent incongruity between our software products, which optimize applications to reduce hardware requirements, and the obvious objectives of the server sales teams at HP, whose goals depend on selling lots more servers. It's a good question - conceptually. In reality, it's not such a big deal, because the customer's situation and needs always take precedence.

In my experience, sometimes you have plenty of time to work on the application tuning and optimization and using HP Software solutions like LoadRunner and Diagnostics are a perfect fit for those projects.  Other times, we just can't spend any more time and have to look at physical scalability options for making a go-live date.  No matter what option you choose, you will still have similar questions about capacity:

  • If we are going to optimize the application performance, how do I adjust calculations for the differences between my test environment and the production environment?  What is the right hardware config for production?

  • If we are going to add hardware resources to the system supporting the application, which hardware resources do we need and how much?  Which architecture should we have?  Scale up?  or Scale out?

You can see that either way it is essential that you have the best insight into the performance data that you can get.  This is where our partner Hyperformix delivers really cool solutions for capacity planning, predicting and modelling application performance.  They have integrations with LoadRunner and Performance Center and by combining our test data results with Hyperformix algorithms you can compare costs and benefits across various topologies and configurations to determine the optimal choice for production performance. This ensures the right level of functionality and performance with the right level of investment.  If you are going to "throw hardware at a performance problem," you should use LoadRunner and Hyperformix to make sure your aim is right on target.

Here's a great introduction video from Bruce Milne at Hyperformix to help you learn more:

Check out this very funny cartoon video about a fictitious Bradley James Olsen who battles Dr. Chaos, when Brad's IT staff was hypnotized into ignorantly believing "The only answer is MORE SERVERS!!!!" 

And here are 2 older podcasts on the HP website:

»  Hyperformix releases new product: Capacity Manager for virtual servers
Rob Carruthers of Hyperformix discusses with Peter Spielvogel how Capacity Manager 5.0 maximizes a company's virtualization investments.

»  Hyperformix on capacity management: How virtualization affects performance
Rob Carruthers of Hyperformix discusses with Peter Spielvogel how virtualization makes managing capacity and performance more difficult.

* (podcast audio is live from VMworld)

Happier Jetting with HP LoadRunner: at JetBlue Quality and Performance are not just IT issues…

Just last December I spent a tremendous amount of time flying all over the world, almost completely without any issues. (Except in Heathrow Airport, where I was stopped by security because I was trying to take a picture of the sunrise in Terminal 5 and accidentally included a few snapshots of the X-ray machines. Oops, sorry!) But while I was cruising along at 450 miles per hour, 34,000 feet above sea level, I thought a lot about all the systems supporting me and whether they had all been properly tested. In my analysis, I made the assumption that the hardware (landing gear, wings, engines, etc.) didn't undergo much change or disruption and, due to industry regulations, would be required to follow a serious maintenance schedule to keep everything operating as designed, within tolerances. However, the software systems that control the planes and that surround and bind all the end-to-end business processes - I was not so confident about all that! Software can change on-the-fly (no pun intended) and is exposed to much more risk of human error, even with the best professional software testers you can find. Then I recalled my start in software testing on life-critical transportation systems. I thought to myself, "I hope this airline uses a robust, exhaustive automated regression test to make sure all the dependent software systems are certified for life-critical use."

When I landed here back in California, I learned about a case study from one of our customers - JetBlue Airways.  

JetBlue's software applications power all the essential business functions: everything from flights, air traffic, and airport operations to financial management and online reservations. That's testing much more than just the website! To optimize application uptime and performance, JetBlue turned to HP Quality Center software, HP QuickTest Professional software, and HP LoadRunner software. They were so excited about the results that they agreed to share with us exactly which improvements they made and how they did it. Since implementing the solutions, the company has reduced post-production application failures by 80%, accelerated testing cycles by 40%, cut testing costs by 73%, and optimized availability of its online reservations system, thus avoiding many thousands of dollars in lost revenue per minute of downtime. Those are pretty great results!

A few key things I found interesting about this case study:

  • Testing in a Virtualized Environment - with several HP products including the LoadRunner Load Generators (see other video here)

  • 150 Different applications to be tested - with several integrations and dependencies, across new and old heterogeneous systems

  • Combined this change with re-engineering the best practices for IT and Quality - reducing 'departure delays' for IT pipeline considerably


Click here to learn more about JetBlue's impressive results and how they achieved them.


Video: Real Stories of Load Testing Web 2.0 - Part 4

The fourth and final video of our Load Testing Web 2.0 series covers a very common difficulty in testing nearly any system, even older architectures: dependent calls external to the system under test. The concept of "stubbing" isn't anything new; to be honest, I've been doing this for nearly 15 years, and it's very common when there is a back-end mainframe required for the test. But now, with Web 2.0 architectures, it seems that stray calls to a web service are eluding many testers, and this is resulting in some nasty surprises for externally impacted vendors. Here is Part 4 of the series, "Real Stories of Load Testing Web 2.0: Load Testing with Web 2.0 External Calls (please try not to test stuff that's not yours)."

(if your browser doesn't show the video in-line, just click here)

At the end of this video we mention another partner that built a professional "stubbing" solution.  Visit the iTKO LISA Virtualize page for more details.
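To make the stubbing idea concrete, here is a minimal sketch in Java of a local stub that stands in for an external web service during a load test. The endpoint path and JSON payload are invented for illustration; they are not from any real vendor API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal local "stub" standing in for an external web service: it returns a
// canned response so the load test never sends traffic to the real third party.
public class ExternalServiceStub {

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/external/quote", exchange -> {
            byte[] body = "{\"status\":\"ok\",\"price\":42}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);           // canned payload, every time, instantly
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        // port 0 asks the OS for any free port
        HttpServer server = start(0);
        System.out.println("Stub listening on port " + server.getAddress().getPort());
    }
}
```

The system under test is then configured to call the stub's address instead of the third party's, so the load test never generates traffic against infrastructure you don't own.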

Video: Real Stories of Load Testing Web 2.0 - Part 3

The third part in our Load Testing Web 2.0 series covers the not-so-new concept of server-side data processing. Don't be fooled into thinking you already know about server performance, because these new architectures are using client-like JavaScript on the server, which is sometimes called reverse JavaScript. This video describes how performance issues can sneak into this type of architecture and how even a simple component can result in serious latency and server-side resource overhead. Here is Part 3 of the series, "Real Stories of Load Testing Web 2.0: Server-side Javascript Impacts (opposable thumbs not required)."

(if your browser doesn't show the video in-line, just click here)

FREE Trial of Shunra's WAN emulation within HP LoadRunner

Who said good things don't come for free?  Recently I've spent much more time with our partners at Shunra Software...and I've learned more about networking and performance than I've ever imagined.  In celebration of HP Software Universe 2009 in Hamburg, Germany this week, they have posted a special FREE trial of the VE Desktop for HP Software.  WOW!  This is the integration that we introduced in version 9.5 of LoadRunner and Performance Center which has become extremely popular.  Here's a capture from the Shunra blog entry:

"In celebration of HP Software Universe in Hamburg, Germany Shunra is offering a free trial of WAN emulation within HP LoadRunner, VE Desktop for HP Software. You can use VE Desktop for HP Software to measure your application performance through a variety of emulated WAN conditions, including replication of production, mobile or theoretical networks. Click here to register for more information, receive download instructions, and get started making your application performance testing network-aware!"

  I guess this means: "Happy Holidays from Shunra"!  :smileyhappy:

Video: Real Stories of Load Testing Web 2.0 - Part 2

In this second part of the series, we highlight another common challenge we hear from customers testing new web applications: client-side performance. As you add these exciting new components into the once-upon-a-time-very-thin browser, you'll find increased CPU and memory utilization on the client. Perhaps as no surprise, this can result in slower response times from page rendering or JavaScript processing overhead. Here is Part 2 of the series, "Real Stories of Load Testing Web 2.0: Web 2.0 Components - Client-side Performance (Fat clients, you make the rockin' world go round)."

(if your browser doesn't show the video in-line, just click here)

Video: Test Mobile Performance with Shunra (...can you hear me now?...now?)

Imagine yourself with three colleagues riding an evening train from Berlin to Hamburg, where each of you has the same mobile service provider and a similar phone. For 90 minutes, each of you connects through different roaming providers, with varying connection stability, sometimes with 3G and sometimes NOT! (It boggles the mind, doesn't it!?)

Well, if you're lucky enough to be sitting next to a guru like David Berg from Shunra, you'll get the whole story on why this is happening. In this brief but highly valuable video interview, we explain how the Shunra VE Desktop and HP LoadRunner can be used together to produce a robust load testing solution for mobile applications, complete with mobile WAN emulation. Watch...and learn!

(if your browser doesn't show the video in-line, just click here)

Also, David has an even more detailed explanation on his blog, which goes much deeper into the challenges of developing mobile applications that REALLY work well.

LoadRunner Blog-o-rama: LIVE from HP Software Universe 2009 Hamburg

I'm now on the second day of this week, getting ready for an all-day customer advisory board meeting here in Hamburg, Germany. This week we are holding the 2009 HP Software Universe conference, and I will try to be blogging and tweeting all day long - until I just can't type with my thumbs into my phone any longer.

If you are a blogger and you are attending HP Software Universe Hamburg, it would be great to meet you in person. Seven other HP bloggers will be updating live from the event. Please look them up:

     Genefa Murphy - covering Agile Development and Requirements Management

     Amy Feldman - @HPITOps, covering Application Performance Management

     Aruna Ravichandran - covering Application Performance Management

     Mark Tomlinson - covering LoadRunner and Performance Center

     Michael Procopio - @MichaelAtHP, covering Application Performance Management and Business Service Management

     Mike Shaw - interested in meeting IT Operations management for something new in 2010

     Peter Spielvogel - @HPITOps, covering Operations Management


You can also follow the entire HPSU event on Twitter (@HPSWU, hashtag #HPSWU) or become a fan on Facebook.

More on HP Software Universe 2009 Hamburg

· Sneak Preview – Application Performance Management Blog

· Sneak Preview – Infrastructure Management Software Blog

· Optimizing cost by automating your ITIL V3 processes

· Event Information


Video: Real Stories of Load Testing Web 2.0 - Part 1

We know many of you are finding great challenges in testing new web applications built with modern architectures and components like Silverlight, Flex, JavaFX and AJAX. So, Steve and I decided to put together a multi-segment video series covering some of the top issues that make LoadRunner web testing much harder and more complex. Here is Part 1 of the series, "Real Stories of Load Testing Web 2.0: Impacts of WAN latency on Web 2.0 apps (Mark Googles himself and Steve offers to go to Paris)."

(if your browser doesn't show the video in-line, just click here)


In this video we also mention one of our new partners, delivering WAN emulation that integrates with HP LoadRunner and Performance Center. Visit the Shunra VE Desktop page for more details and a free trial download.


Video: How LoadRunner Works

Sure, I know that we already posted a screen-capture walkthrough video of LoadRunner, but it might also be nice to share another whiteboard video session to introduce HP LoadRunner and show how the individual components of HP LoadRunner and HP Diagnostics are installed.

(if your browser doesn't show the video in-line, just click here)




Understanding the language of hosted load testing

I recently got an email from a colleague: an invitation from a testing service vendor that does load testing in "The Cloud" (an internet-hosted testing service). The invitation included some misleading language about load testing that I think can be confusing to engineers who are new to performance testing. So, here's the hook question they used in their invitation email:

     "Interested in learning how to load test your applications from an outside-in customer perspective, so you can find and resolve problems undetectable by traditional behind-the-firewall tools?"

Aside from the overly casual, marketing-savvy tone of the sentence, there are actually so many hidden assumptions packed into it that it might be helpful for us to break it down:

     "Interested in learning how to load test your applications..."

Well, that's obvious...of course we are interested in learning about load testing our applications. When the term "load test" is used generically like this, I always point out that there is an assumption about the definition of load testing. Don't be fooled by this over-simplified language, because you might be led to think that doing performance testing is simple and easy. Like anything in IT...it's usually not simple, and often much less easy. Also, there are many different forms of performance testing, depending on the objectives: capacity planning, scalability, configuration optimization, query tuning, migration, disaster recovery & failover, and stress testing...just to mention a few.

     "from an outside-in customer perspective..."

We know this vendor was offering testing services from OUTSIDE the firewall, generating load IN to the application under test. This is usually a phase of testing that comes in addition to your normal "in-house" performance tests, right before the system goes live, by running the load from the external network and infrastructure outside the company firewall or at a hosted facility. But it is more important to understand that the concept of "outside-in" is actually the normal definition of most kinds of testing, and especially of black box test design. To understand this, just ask yourself the inverse question: how would you conduct "inside-out" testing? My point here is that they mention "customer perspective," which is inherently an "outside-in" perspective, because end-user customers see almost every application through some type of external interface (GUI or CLI). Essentially every good test is inherently designed from a customer perspective. Customer requirements exist even if you do not document them, or even think about them. There is a customer (or end-user) somewhere who will be impacted by the system's behavior. In your tests, those requirements should be directly applied in the test scripts and test cases themselves.

     "find and resolve problems"

Well, it would be a shame to find a problem and not resolve it.  Wouldn't you agree?  For many years now there have been complementary solutions to performance testing tools that enable profiling and diagnostics on the system under test.  It's very common now to have a performance team that includes not only testers, but developers and architects that can repair the application code and queries on-the-spot, right when a bottleneck is found.  We hear from many developers using LoadRunner for unit performance testing, and they find & fix bottlenecks so quickly...it is perceptibly a single task to them.

     "undetectable by traditional behind-the-firewall tools"

Undetectable?  Really?  There's an implication here that your performance testing environments do not include enough of the production infrastructure to find and resolve bottlenecks that would usually only exist in production.  That's only true if you don't replicate 100% of the production environment into your test lab - which is a common limitation for some companies.  But let me be very clear - that is not a limitation of the testing tool.  It is only a limitation of your resources or imagination.  To be totally honest, LoadRunner already fully supports testing and monitoring and diagnostics of nearly 100% of your production environment.  You can even run LoadRunner against your ACTUAL production systems, if you want to (although we don't recommend overloading production...in fact, please don't do that to yourself).  And don't forget, a good replacement for the actual production infrastructure is a virtualized or emulated infrastructure - using solutions like Shunra or iTKO LISA Virtualize.

The word "traditional" is just a derogatory connotation which seeks to discredit any technology that existed before today.  This usually means that there is also very little respect for the existing discipline of performance testing as it is commonly defined and conducted today.  The truth is there is nothing traditional about LoadRunner or load testing.  And to be very honest there's nothing modern about this "outside-the-firewall" testing service provider.  HP SaaS (formerly Mercury's ActiveTest) has been doing this type of testing for nearly 10 years...and they've been doing it successfully with LoadRunner, year-over-year with every new technology innovation that's come along the way.

Don't get me wrong - I do agree there are some physical bottlenecks that cannot be detected "behind-the-firewall." Those are bottlenecks you might find with your ISP or Telco provider, in their systems or switch configurations. Maybe the routing tables for global internet traffic aren't ideal for your application end-users in New Guinea. Or maybe the CDN systems are having difficulty with performance throughput and simultaneous cache replication. But if you find a bug with those OTHER COMPANIES...how do you get those bugs fixed? Can you force them to optimize or fix the issue? Is your only option to switch to another external provider with better performance, but perhaps other risks?

So, if we were to re-write this sentence with something more accurate, transparent and honest:

     "Are you interested in learning how to conduct effective seasonal spike testing of your production systems from outside-the-firewall, so you can enhance your existing internal performance testing efforts by diagnosing additional problems that you'll find with the external production infrastructure that you probably don't have in your own testing lab?"

(I guess it doesn't sound as catchy, eh?)


Testing is Back on the Charts

...and Quality sounds better than ever!  The latest release entitled Here Comes Science from the Grammy-winning duo They Might Be Giants (John Linnell and John Flansburgh) includes a track called "Put It to the Test" which celebrates the enthusiasm of testing a hypothesis for the purposes of ratifying our understanding of the truth.   In short - testing is actually COOL!  This is not a song just for kids - no, no, NO!   If you've been a veteran software tester or worked in any capacity for quality assurance, I think you'll find the sincere advocacy for testing very refreshing.  I'll admit it had me singing along in the car:

"If there's a question bothering your brain - That you think you know how to explain
You need a test - Yeah, think up a test

If it's possible to prove it wrong - You're going to want to know before too long
You'll need a test

If somebody says they figured it out - And they're leaving any room for doubt
Come up with a test  - Yeah, you need a test

Are you sure that that thing is true? Or did someone just tell it to you?
...Find a way to show what would happen - If you were incorrect 
...A fact is just a fantasy - Unless it can be checked 
...If you want to know if it's the truth - Then, my friend, you are going to need proof 

Don't believe it 'cause they say it's so - If it's not true, you have a right to know"

These words are literally music to my ears, both as a die-hard software tester and as a person who respects the processes and disciplines of the scientific community.  You should know that in our world of computer science and software there is an active resurgence of quality initiatives - a testing renaissance: integrating QA and testing into agile development, testing in production environments, testing for a green carbon footprint, and even testing requirements before we build anything.  That's right - just thinking about your own thinking is a form of testing...that is, if you're willing to question your thoughts honestly!

The new Music CD & Video DVD combo also includes a video for the song, which can be seen on YouTube:

At your next SCRUM or Team Meeting - please add an agenda item to listen to the song, watch the video, discuss the lyrics and try to relate the ideas to how you are doing testing in your projects.  Testing is back and on the rise - and now we have a cool song to go with it!

But don't just take my word for it...take my recommendation and PUT IT TO THE TEST!  Geeked

Update: 15-hours into the Non-stop HP Virtual Conference

*original post was at 11:30pm on 9/29/2009*

Okay - so I've been working the Worldwide HP Virtual Conference: Functional, Performance and Security Testing for about 15 hours now - nearly non-stop - and I've had some great chat sessions with customers from all over the world.  I've been on since 8:30am PST from Mountain View, California, where I live.  I've had 3 medium-sized cups of coffee - actually called Depth Charges, from the coffee shop by my house.  These are 16 ounces of regular coffee with 2 shots of espresso added for an extra charge.  I'm posting live updates on Twitter and the feedback has been great.  Also, I've been broadcasting live from my home studio on ustream.tv the entire time (almost).

The HP Virtual Conference materials are still available through November 2009

The LoadRunner Non-stop HP Virtual Conference

THE TIME HAS COME!! I really don't want you to miss the Worldwide HP Virtual Conference: Functional, Performance and Security Testing in Today’s Application Reality.  And just to show you how committed I am to you and to the conference, I'm going to attend the conference non-stop for the entire duration, just as we did in the promotional video!  I will be online in chats, at the booth, in the lounge and presenting the entire time, and I will also be documenting the experience live from my home office, with video streaming and chatting.  Click here to register for free, now!!!

Follow Mark on twitter and ustream.tv for the duration of the entire conference!
Follow the Entire HP Conference on twitter

This conference is going to be the most awesome!


  • 40 sessions covering Agile, Cloud, Web 2.0, Security, customer best practices and more

  • Localized booths and content for Benelux, Nordics, Austria & Switzerland, France, Germany, Japan, Korea, Iberia and Israel

  • Live worldwide hacking contest




Here are just the Top 5 reasons you should attend:


#5 - No need to spend days out of the office
#4 - No travel or registration fees
#3 - No long registration lines
#2 - No need to choose one session over another - see them all!
#1 - No need to choose one representative from your department to attend.  Everyone can attend and learn!

(Of course, I think the best reason is it's a great excuse to work from home for the entire day...which is exactly what I'll be doing!)

Conference Dates and Schedule

Americas – 29 September: 
11am – 7pm EDT/ 8am – 4pm PDT

APJ – 30 September: 
11:00am – 5:00 pm (AEST) / 10:00am – 4:00pm (Tokyo) 
 9:00am – 3:00pm (SG time) / 6:30am – 12:30pm (Bangalore)

EMEA – 30 September:
8:00am – 4:00 pm (GMT+1) / 8:00am - 4:00pm (UK) 
9:00am to 5:00pm (Amsterdam, Berlin, Paris, Rome)



See you online, at the SHOW!

HP Performance Engineering Best Practices Series

Just to let you know that we've been putting together some published practices for LoadRunner and Performance Testing...and the electronic version of the book(s) are available free of charge!

This one is "Performance Monitoring Best Practices" authored by Leo Borisov from our LoadRunner R&D team.  Leo sent along this description and instructions for downloading the book:


"We have always recognized the fact that having best practices and methodology would greatly simplify the life of performance engineers and managers, and now we are beginning to fill this need. The book is available with SP1 - access it from the product documentation library, or from the Help menu.

To download a copy from the HP Software Support site:

  1. Go to http://h20230.www2.hp.com/selfsolve/manuals

  2. Log in using an HP Passport (or register first, then log in)

  3. In the list of HP manuals, choose either LoadRunner or Performance Center – the book is listed under both

  4. Select product version 9.51 and your operating system

  5. Click Search

Since this is the first book in the series covering various aspects of methodology, we would really appreciate your feedback. Please send your feedback directly to me or lt_cust_feedback @ hp.com."

Congratulations Leo - thanks for your efforts!


Video: Ten Virtual Users Just Aren't Enough

The Rolling Stones once sang, “You can’t always get what you want” – and that will all make sense once you watch this video about stress testing.

(if your browser doesn't show the video in-line, just click here)

Taking visibility for granted?

Testers are naturally curious people. We enjoy creating questions for the purpose of finding the truth, or better still, for the purpose of creating new truths. For centuries, the true professional testers have been scientists, who more often than not were the determiners of new truths, and thus we remember them as inventors. Back in the mid-1700s the theologian and scientist Joseph Priestley conducted several experiments to determine what types of gases were generated by plants. It started with a curiosity about the observed behavior of a wax candle burning within a glass jar. And it was truly a curious thing, because the wax candle burned out long before it exhausted its fuel supply of wax or wick. The flame had consumed the entire supply of oxygen in the sealed environment - proven by the fact that when he tried to re-ignite the candle inside the jar, using a simple magnifying glass to converge intense rays of sunlight on the candle's wick, it failed.

When Priestley attempted the same experiment, this time adding a sprig of mint into the glass jar with the candle, the result was similar at first: the candle burns, oxygen is consumed, the candle goes out, and it can't be re-ignited with sunbeams. But after nearly a month with the candle left isolated in the jar with the sprig of mint, Priestley re-attempted to ignite the candle with the magnifying glass and rays of sunshine. And of course it worked. The candle was lit and once again consumed the oxygen until it burned out. Priestley deduced that the plant was somehow producing a gas that allowed the candle's flame to burn once again. What he could not have predicted is that he would be the first to discover a new truth about the role of oxygen in photosynthesis.

Priestley's methods really got me thinking, especially about his test tools and techniques. All he used was rudimentary equipment and simple deductive reasoning for analysis. What if we were to attempt this same experiment today, using contemporary scientific inventions? We would probably use an oxygen sensor to measure the amount of oxygen in the jar. This would make all the other implements of the experiment obsolete: the wax candle, the magnifying glass, the need for bright sunshine. The creation of the lambda-probe oxygen sensor in the late 1960s was a response to the demand for measurement of oxygen in an experiment, machine or system. The design of the sensor allowed for visible measurement of an invisible gas. Even Priestley himself would have appreciated the sensor, not simply for the new visibility it provided, but for its ease of use and accuracy in measuring his experiment.

For Priestley in the mid-1700s, photosynthesis was unknown and oxygen was both invisible and immeasurable. Boris Beizer, more than 200 years later, was challenged to measure the known workings of the computer, which were, for all intents and purposes, inside an invisible black box. The discoveries and solutions that resulted from their work show us how inappropriate it is to take any prior science for granted, and also provide a new baseline for how to test. Simply put, it is easier to correlate a visible test measurement to the test's objectives or pass/fail criteria. As a result, testing tools today make test measurements visible and actionable, and automatically correlate* the results back to pass/fail criteria. Today we take for granted that nearly every testing tool comes with mechanisms for "making visible" the performance metrics from the system under test - just as we take for granted that every modern automobile uses oxygen sensors and an onboard computer as essential components for improving fuel efficiency.

But just making something visible isn't enough. Consider LoadRunner's monitoring and diagnostics capabilities. Could you imagine today having to monitor CPU utilization without the test tool automatically making the measurement visible for you? In the 1980s Boris Beizer shared stories about counting CPU ticks with an AM radio next to the machine. That sounds like such an old solution - almost like having to measure oxygen with a wax candle in a jar. My point is that visibility should be understood as a means to improving measurability, and measurability is what truly accelerates the testing process. Innovation in performance testing should improve and extend the visibility and measurability we have today. What more can we make visible? What new methods of measuring, arranging and correlating test data can we create? Can we automate the capabilities we have today, or build intelligence to aggregate or parse this new data?

And we don't have to start with a wax candle in a jar or an AM radio.

*- see lr_end_transaction("login", LR_FAIL);

Email Questions about Think Times


From: Prasant 
Sent: Monday, August 04, 2008 7:55 AM
To: Tomlinson, Mark
Subject: Some questions about LR 

Hi Mark,

I am Prasant. I got your mail id from the Yahoo LR group. I have just started my career in performance testing and got a chance to work on LR. Currently I am working with LR 8.1. I have one doubt regarding think time. While recording one script, think time automatically got recorded in the script. While executing the script I am ignoring the think time. Is it required to ignore the think time, or do we have to consider the think time while executing the script?

I have questions in mind like: think time is considered the time the user takes before giving input to the server. In that case, while recording a script for a particular transaction I may take 50 seconds of think time, and my friend who is recording the same script may take less (let's say 20 seconds). So the think time will vary between his script and mine for the same transaction. If I execute both scripts considering the think time, the transaction response times will vary. It may create confusion for the result analysis. Can you please give some of your viewpoints about this?


From: Tomlinson, Mark 
Sent: Thursday, August 07, 2008 2:59 AM
To: Prasant
Subject: RE: Some questions about LR 

Hi Prasant,

Yes – it is good that think time gets recorded, so the script will be able to replay exactly like the real application – with delays for when you are messing around in the UI. But you must be careful, if you are recording your script and you get interrupted…or perhaps you go to the bathroom, or take a phone call…you will see VERY LONG THINK TIMES getting recorded. You should keep track of this, and then manually go edit those long delays – make them shorter in the script. Make them more realistic, like a real end user.

Also, as a general rule of thumb you should try *not* to include think time statements in between your start and end transactions. You are right that it will skew the response time measurements. But for longer business processes where you have a wrapper transaction around many statements…it might be impossible to clean every transaction.

Here are 3 other tips for you:

First – in the run time settings, you have options to limit or adjust the think time settings for replay…you can set a maximum limit, or multiply the amount. The combinations are very flexible. You can also choose to ignore think times and run a stress test, although I typically will include even 1 second iteration pacing for most stress tests I run.

Second – you can write some advanced functions in the script to randomize the think times programmatically. This could be used to dynamically adjust the think time from a parameter value, in the middle of the test.

Third – even if you do have think times inside your start and end transactions, there is an option in the Analysis tool to include or exclude the think time overhead in the measurements displayed in the Analysis graphs and summary.

I hope you’ll find that with those 3 tips, you can get all the flexibility you need to adjust think times in your scripts – try to make the most realistic load scenario you can.

Best wishes,

Flash compared to Silverlight

Today I read an article from my colleague Brandon which compares Adobe Flash and Microsoft Silverlight. He makes some excellent points about the strengths of Flash's market penetration compared to Silverlight's latest enhancements. For rich internet applications, I think we still see Flash as the primary UX platform out there…and it is a challenge for any testing tool to keep up with the fast pace of Adobe's innovations.

Brandon points out one of the main advantages that Silverlight has is the "Speed to Production" - getting the app delivered, out to production quickly. The advantage is better responsiveness and agility for the business. Unfortunately this usually equates to less time for proper testing, and especially performance testing.


It's also interesting how he points out the performance comparison at the presentation layer - which I think could be described as the "perceived performance" for the entire application system. In an enterprise RIA or mashed-up application, users might not perceive on-par performance from either Flash or Silverlight, depending on the backend systems.

In a big RIA, you have multiple points of exposure to latency risk introduced in the data services calls behind the application - so even if the UI is responsive to the user, the data retrieval might be slow. Check out James Ward's blog on "Flex data loading benchmarks", which shows AMF combined with BlazeDS proving to be a very scalable and responsive pairing.

Tags: Silverlight

Welcome to LoadRunner at hp.com

Greetings fellow LoadRunner Gurus! This is the introductory entry to the new LoadRunner @ hp.com blog, written by yours truly - Mark Tomlinson, the Product Manager for LoadRunner here at HP Software.

As you might be expecting a lengthy personal introduction for the first blog entry, I've decided to not deliver on that foolishly stereotypical initiation. Instead I'd like to start off here with a few opportunities to engage with you directly for the betterment of LoadRunner's future.

First, we are looking for a few good and challenging applications for LoadRunner to go after, as our New Horizon research team is developing some exciting new solutions for advanced client record and replay. If you've got an application with any extreme architecture, including:

  1. Web v2.0 or Rich Internet Applications on Java, .NET or Adobe
  2. Multi-multi-multi-protocol…like a mashed-up app with several backend systems
  3. Encoded/Serialized or Secured communication protocols
  4. Asynchronous, multi-threaded client(s) or data-push technologies
  5. Any combination or all of the above.

If you have a challenge for LoadRunner, we'd love to hear from you.

Second, we have a new release of LoadRunner coming soon and we are just getting our plans for the early-access beta program put together. If you're an existing customer and you're interested in getting into our formal beta program for LoadRunner drop us an email. We have an early-access program that does require your feedback, production usage and reference for the new release. We'd love to have your support for all that - but I certainly understand some folks just want to share some feedback on the new stuff. We need that also, if that's all you can do.

Lastly, I'd love to hear from you - so drop me an email (loadrunner at hp . com). What do you love about the product, and what do you not like so much? What kinds of testing are you doing? What new applications are you being asked to test? How do you get your testing done? How do you generate meaning from the load test results? What is your favorite BBQ restaurant? Let me know your thoughts and feedback - the good, the bad, the ugly. I have been using LoadRunner for nearly 15 years, so I plan to include your input in our strategy for moving forward with innovating our solutions. I will post back weekly with some Q&A, if you'd like to share the conversation with our community.

Again - all of these initiatives are really important to the future of LoadRunner. Your participation is encouraged and greatly appreciated!

Thanks - and again, welcome to the blog!

About the Author(s)
  • Lending 20 years of IT market expertise across 5 continents, for defining moments as an innovation adoption change agent.
  • I have been working in the computer software industry since 1989. I started out in customer support, then software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • GTM Marketing for HP Software's ADM team. I am passionate about design, digital marketing, and emerging tech.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • Rick Barron is a Program Manager for various aspects of the PM team and HPSW UX/UI team; and working on UX projects associated with HP.com. He has worked in high tech for 20+ years working in roles involving web design, usability studies, and mobile marketing. Rick has held roles at Motorola Mobility, Symantec and Sun Microsystems.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.