HP LoadRunner and Performance Center Blog

Webinar: Move Performance Testing to the Next Level with HP Performance Center

Are you satisfied with your performance testing processes and standards? Do disparate teams in IT and in your LOB organizations share performance testing resources and results? Can you trace all defects and see how overall quality is trending across your projects and releases? If not, it’s time to think about standardization, sharing, and even a Center of Excellence.

This webinar will focus on these best practices and highlight them by demonstrating the capabilities of HP Performance Center – a solution expressly designed for end-to-end application performance testing.


Register here

Get connected with HP Live Network, your performance testing community

Connect, learn and share in the new performance testing community created on the HP Live Network (HPLN) platform. This new community, called “HP LoadRunner and HP Performance Center”, gives you the opportunity to connect with other organizations, partners and peers. You’ll be able to share your ideas, scripts, processes and best practices while learning from others.


Continue reading to find out what this means for you.

4 Tips for Replaying Remote Desktop Protocol (RDP) scripts in LoadRunner

LoadRunner scripts that use the Remote Desktop Protocol (RDP) don’t always replay smoothly.  My colleague Kert (Yang Luo) from the LoadRunner R&D team has collected some tips that will save you time and effort, and help you ensure that your RDP scripts run correctly.   Continue reading to learn more.

Start your mobile application testing with the discounted HP Performance Validation Suite

Last week I attended a class about mobile application testing and learned a great deal about what needs to be tested and how, specifically from a mobile application point of view. In a matter of minutes, we were able to find all sorts of bugs in the applications. Overall, it was a really fun and empowering experience.


Is testing mobile applications the same as testing web applications? It might seem that way, but it is most definitely NOT!


Today HP is launching a time-limited, bundled mobile performance testing promotion to help you get started with this important initiative.


Read the blog to learn more about mobile application testing and the discounted HP Performance Validation Suite.



Performance testing from a woman’s perspective

To all you dear women: Happy International Women’s Day!


Today is a special day for all women! It is International Women’s Day, celebrated across the globe! It is YOUR day!



Guys, please take the time today to acknowledge all the women who are part of your life! This includes your wife, girlfriend, co-workers, sisters and friends. I am sure they will appreciate the acknowledgement.


To celebrate the day, I want you to “meet” a great performance engineer who inspires other women who are currently working in this area or are planning to become performance engineers.



I would like to introduce Megan Shelton. She has a deep passion for performance testing and performance management, and has been working as a Performance Engineer for 10 years.  

Unlock the Power of Performance Center with the My PC User Interface

Want to learn about a great new way to connect to Performance Center? A way to be more productive, efficient and effective?


You do?   Great.  


Then check out this quick tip about how to get the most out of "My PC".

Results Positive, an HP partner, is now offering HP LoadRunner on Demand in the cloud!

I am pleased to announce that Results Positive is now offering a performance testing solution on demand in the cloud, powered by HP LoadRunner. The new solution, LoadRunner on Demand, provides options to test from an hour to a day, as well as managed services, scripting and consulting services.


Read the blog to learn more about LoadRunner on Demand.



One more thing to be excited about in HP TruClient – a beta opportunity!

I sure hope that you’ve heard about the great innovation in HP TruClient, which is one of the protocols supported in HP LoadRunner 11 and HP Performance Center 11.  We realized the challenges that many of you face when creating load testing scripts for modern web applications with significant client-side scripting.  The feedback and response to TruClient has been remarkable, and we’ve continued to innovate and extend the TruClient technology.  Now we’re at the point where we are ready to start beta testing the next release of TruClient, which will support IE 9.  Many of you have asked us about MS Internet Explorer support, and we’re very close to done.


If you don’t know what TruClient is, you can watch this short video: http://www.youtube.com/watch?v=cR8-3DRE2zo . Up until now, it has only supported the Firefox browser.

Webinar: Testing Smarter and Faster with Virtualization

Wednesday, April 21
10:00 am PT / 1:00 pm ET
Testing Smarter and Faster with Virtualization

"It's hard to hit a moving target" is something we often say about application development and technology projects. New virtualization technologies have brought increased flexibility and immediacy into our efforts to build applications right and ensure minimal risks. This webinar will help you learn how to harness the power of virtualization technologies to improve your testing efficiencies and quality outcomes, and hit the "virtual" target with increased confidence.

Please join me as we discuss:

  • Functional testing automation, performance testing and engineering, and
    security concerns for virtualized applications.

  • How virtualized test and development solutions can help you move to agile
    processes and adopt more efficient collaboration with a new virtualized form of
    testing.
Virtualized testing can be your ticket to testing smarter and faster.


Application Tuning? Or Adding More Servers? Hyperformix and HP LoadRunner help you decide!

You might have heard the phrase "don't just throw hardware at a performance problem", which means you should be smart about your investments in solving performance issues and capacity plans.  As you might imagine, many customers have asked me how the LoadRunner teams get along internally with the hardware divisions inside HP.  They ask, "Are the sales people competing against each other, or what?"  They are curious about the inherent incongruity between our software products, which optimize applications to reduce hardware requirements, and the obvious objectives of the server sales teams at HP, who are goaled on selling lots more servers.  It's a good question – conceptually.  In reality, it's not such a big deal, because the customer's situation and needs always take precedence.

In my experience, sometimes you have plenty of time to work on the application tuning and optimization and using HP Software solutions like LoadRunner and Diagnostics are a perfect fit for those projects.  Other times, we just can't spend any more time and have to look at physical scalability options for making a go-live date.  No matter what option you choose, you will still have similar questions about capacity:

  • If we are going to optimize the application performance, how do I adjust calculations for the differences between my test environment and the production environment?  What is the right hardware config for production?

  • If we are going to add hardware resources to the system supporting the application, which hardware resources do we need and how much?  Which architecture should we have?  Scale up?  or Scale out?

You can see that either way it is essential that you have the best insight into the performance data that you can get.  This is where our partner Hyperformix delivers really cool solutions for capacity planning, and for predicting and modelling application performance.  They have integrations with LoadRunner and Performance Center, and by combining our test data results with Hyperformix algorithms you can compare costs and benefits across various topologies and configurations to determine the optimal choice for production performance. This ensures the right level of functionality and performance with the right level of investment.  If you are going to "throw hardware at a performance problem," you should use LoadRunner and Hyperformix to make sure your aim is right on target.
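To make the scale-up vs. scale-out comparison concrete, here is a toy sketch – not Hyperformix's actual model, and every throughput and cost number is invented for the example – of how per-server throughput measured in a load test can be turned into a rough cost comparison between topologies:

```python
# Toy capacity comparison: given throughput measured under load, how many
# servers of each option meet peak demand, and what does each option cost?
# The 70% headroom target and all figures below are hypothetical.
import math

def servers_needed(peak_tps, tps_per_server, headroom=0.7):
    # Keep each box at or below 70% of its measured capacity.
    return math.ceil(peak_tps / (tps_per_server * headroom))

def compare(peak_tps, options):
    """options: {name: (measured_tps_per_server, cost_per_server)}.
    Returns (name, total_cost) pairs, cheapest first."""
    priced = {
        name: servers_needed(peak_tps, tps) * cost
        for name, (tps, cost) in options.items()
    }
    return sorted(priced.items(), key=lambda kv: kv[1])
```

For example, `compare(1000, {"scale_out_small": (120, 4000), "scale_up_big": (600, 25000)})` prices twelve small boxes against three big ones; a real tool would also model licensing, contention and virtualization overhead, which this sketch deliberately ignores.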

Here's a great introduction video from Bruce Milne at Hyperformix to help you learn more:

Check out this very funny cartoon video about the fictitious Bradley James Olsen, who battles Dr. Chaos after Brad's IT staff is hypnotized into blindly believing "The only answer is MORE SERVERS!!!!"

And here are 2 older podcasts on the HP website:

»  Hyperformix releases new product: Capacity Manager for virtual servers
Rob Carruthers of Hyperformix discusses with Peter Spielvogel how Capacity Manager 5.0 maximizes companies' virtualization investments.

»  Hyperformix on capacity management: How virtualization affects performance
Rob Carruthers of Hyperformix discusses with Peter Spielvogel how virtualization makes managing capacity and performance more difficult.

* (podcast audio is live from VMworld)

Video: Real Stories of Load Testing Web 2.0 - Part 4

The fourth and final video of our Load Testing Web 2.0 series covers a very common difficulty in testing nearly any system, even older architectures: dependent calls external to the system under test. The concept of "stubbing" isn't anything new – to be honest, I've been doing this for nearly 15 years, and it's very common when a back-end mainframe is required for the test. But now, with Web 2.0 architectures, it seems that stray calls to a web service are eluding many testers, and this is resulting in some nasty surprises for the external vendors affected. Here is Part 4 of the series, "Real Stories of Load Testing Web 2.0: Load Testing with Web 2.0 External Calls (please try not to test stuff that's not yours)."
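As a concrete illustration of the stubbing idea – this is a minimal hypothetical sketch, not anything from the video, and the port and canned payload are made up – here is a tiny canned-response HTTP server that can stand in for an external web service during a load test, so stray calls never reach the real vendor:

```python
# Minimal "stub" for an external web service, so load tests never hit the
# real third-party endpoint. Point the system under test (via config, a
# hosts-file entry or DNS) at this server instead of the vendor's URL.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSE = json.dumps({"status": "OK", "quote": 42.17}).encode()

class StubHandler(BaseHTTPRequestHandler):
    def _reply(self):
        # Same canned payload for every call; add a time.sleep() here if
        # you want to mimic the vendor's typical latency.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(CANNED_RESPONSE)))
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    do_GET = _reply
    do_POST = _reply

    def log_message(self, fmt, *args):
        pass  # keep the console quiet under load

def start_stub(port=8080):
    """Start the stub in a background thread; return the server handle."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A real stubbing product adds recorded responses, data variation and latency modeling on top of this, but even a sketch like this keeps your load off systems that aren't yours.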

(if your browser doesn't show the video in-line, just click here)

At the end of this video we mention another partner that built a professional "stubbing" solution.  Visit the iTKO LISA Virtualize page for more details.

Video: Real Stories of Load Testing Web 2.0 - Part 3

The third part in our Load Testing Web 2.0 series covers the not-so-new concept of server-side data processing.  Don't be fooled into thinking you already know about server performance, because these new architectures are using client-like JavaScript on the server, which is sometimes called reverse JavaScript.  This video describes how performance issues can sneak into this type of architecture and how even a simple component can result in serious latency and server-side resource overhead.  Here is Part 3 of the series, "Real Stories of Load Testing Web 2.0: Server-side Javascript Impacts (opposable thumbs not required)."

(if your browser doesn't show the video in-line, just click here)

FREE Trial of Shunra's WAN emulation within HP LoadRunner

Who said good things don't come for free?  Recently I've spent much more time with our partners at Shunra Software... and I've learned more about networking and performance than I ever imagined.  In celebration of HP Software Universe 2009 in Hamburg, Germany this week, they have posted a special FREE trial of the VE Desktop for HP Software.  WOW!  This is the integration that we introduced in version 9.5 of LoadRunner and Performance Center, and it has become extremely popular.  Here's a capture from the Shunra blog entry:

"In celebration of HP Software Universe in Hamburg, Germany Shunra is offering a free trial of WAN emulation within HP LoadRunner, VE Desktop for HP Software. You can use VE Desktop for HP Software to measure your application performance through a variety of emulated WAN conditions, including replication of production, mobile or theoretical networks. Click here to register for more information, receive download instructions, and get started making your application performance testing network-aware!"

I guess this means: "Happy Holidays from Shunra"! :)

Video: Real Stories of Load Testing Web 2.0 - Part 2

In the second part of this series, we highlight another common challenge we hear from customers testing new web applications: client-side performance. As you add these exciting new components into the once-upon-a-time-very-thin browser, you'll find increased CPU and memory resource utilization on the client.  Perhaps unsurprisingly, this can result in slower response times from page rendering or JavaScript processing overhead.  Here is Part 2 of the series, "Real Stories of Load Testing Web 2.0: Web 2.0 Components - Client-side Performance (Fat clients, you make the rockin' world go round)."

(if your browser doesn't show the video in-line, just click here)

Video: Real Stories of Load Testing Web 2.0 - Part 1

We know many of you are facing great challenges when testing new web applications built with modern architectures and components like Silverlight, Flex, JavaFX and AJAX. So, Steve and I decided to put together a multi-segment video covering some of the top issues that make LoadRunner web testing much harder and more complex. Here is Part 1 of the series, "Real Stories of Load Testing Web 2.0: Impacts of WAN latency on Web 2.0 apps. (Mark Googles himself and Steve offers to go to Paris)."

(if your browser doesn't show the video in-line, just click here)


In this video we also mention one of our new partners, who delivers a WAN emulation integration for HP LoadRunner and Performance Center.  Visit the Shunra VE Desktop page for more details and a free trial download.


Where does Performance Testing fit in Agile SDLC?

Is the agile software development lifecycle (SDLC) all about sprinting, i.e. moving stories from the product backlog to the sprint backlog and then executing iterative cycles of development and testing? IMHO, not really! We all know that certain changes in an application can be complex, critical or have a larger impact, and therefore require more planning before they are included in development iterations. Agile methodologies (particularly Scrum) accommodate application planning and long-term complex changes to the application in a release planning sprint called Sprint 0 (zero), which is primarily driven by business stakeholders, application owners, architects, UX designers, performance testers, etc.

Sprint 0 brings a bit of the waterfall process into agile, with two major differences – sprint 0 is shorter in duration (2-4 weeks), and there is less emphasis on documentation than in the waterfall method. In my experience, sprint 0 is more efficient when it overlaps: while the development team and testers are working on sprints of the current release, stakeholders, architects, application owners, business analysts, leads (development, QA, performance testing, user interface design) and other personas get together to scope, discuss and design the next release. Sprint 0 is executed like any other sprint, with contributors (pigs) and stakeholders (chickens) who meet daily to discuss their progress and blockages. Moreover, sprint 0 need not be as long as a development iteration.

I have seen organizations further divide sprint 0 into two sprints, i.e. sprint -1 (minus one) and sprint 0. Sprint -1 is a discovery sprint: going over the user stories to be included in the release and discovering potential problems and challenges in the application, processes, infrastructure, etc. The output of sprint -1 is an updated release backlog, acceptance criteria updated for more clarity, high-level architectural designs, high-level component designs, user interface storyboards and high-level process layouts. Sprint 0 then becomes the design sprint that goes a level deeper to further update the release backlog and acceptance criteria, and delivers user interface wireframes, detailed architectural and component designs, and updated process flows.

The big question is, where do performance testing requirements fit in the agile SDLC described above? While “good” application performance is an expected outcome of any release, its foundation is really laid during the release planning stage, i.e. in sprints -1 and 0. User stories that describe the performance requirements of an application can impact various decisions taken on the application's design and implementation. In addition, functional user stories that can potentially affect the performance of the application are examined in detail during the release planning stage. Questions like these are asked and, hopefully, addressed:

  • Does the application architecture need to be modified to meet the performance guidelines?

  • Does the IT infrastructure of the testing and production sites need to be upgraded?

  • Can newer technologies, such as AJAX, that are being introduced in the planned release degrade the performance of the application?

  • Can the user interface designs being applied in the planned release degrade the performance of the application?

  • Can making the application available to new geographies impact its performance?

  • Is the expected increase in application usage going to impact performance?

At the end of sprint -1, the team may choose to drop or modify some performance-related stories, or take on a performance debt for the application.

Going into sprint 0, the team has an updated release backlog and acceptance criteria for the accepted user stories. During this sprint, the team weighs the application’s performance requirements against the functional and other non-functional requirements to further update the release backlog. At the end of sprint 0, some requirements (functional and non-functional) are either dropped or modified, and detailed designs are delivered for the rest of the stories. Sprint 0 user stories then transition into the sprint planning sessions for sprints 1-N of the development and testing phase. Throughout these 1-N sprints, the application is tested for functionality, performance and other non-functional requirements, so that at the end of every sprint, completed stories can potentially be released.

Agile methodologies also allow for a hardening sprint at the end of sprints 1-N, for end-to-end functional, integration, security and performance testing. The hardening sprint need not be as long as the development sprints (2-4 weeks) and is an optional step in an agile SDLC. This is the last stage where performance testers can catch performance issues before the application gets deployed to production. But we all know that performance issues found at this stage are more expensive to fix and can have bigger business implications (delayed releases, dissatisfied end-users, delayed revenue, etc.). If the planning in sprints -1 and 0 and the subsequent execution in sprints 1-N were done the right way, chances are that the hardening sprint is more of a final feel-good step before releasing the application.

Testing is Back on the Charts

...and Quality sounds better than ever!  The latest release, entitled Here Comes Science, from the Grammy-winning duo They Might Be Giants (John Linnell and John Flansburgh) includes a track called "Put It to the Test", which celebrates the enthusiasm of testing a hypothesis to ratify our understanding of the truth.  In short – testing is actually COOL!  This is not a song just for kids – no, no, NO!  If you've been a veteran software tester or worked in any capacity in quality assurance, I think you'll find the sincere advocacy for testing very refreshing.  I'll admit it had me singing along in the car:

"If there's a question bothering your brain - That you think you know how to explain
You need a test - Yeah, think up a test

If it's possible to prove it wrong - You're going to want to know before too long
You'll need a test

If somebody says they figured it out - And they're leaving any room for doubt
Come up with a test  - Yeah, you need a test

Are you sure that that thing is true? Or did someone just tell it to you?
...Find a way to show what would happen - If you were incorrect 
...A fact is just a fantasy - Unless it can be checked 
...If you want to know if it's the truth - Then, my friend, you are going to need proof 

Don't believe it 'cause they say it's so - If it's not true, you have a right to know"

These words are literally music to my ears, as a die-hard software tester and as a person who respects the processes and disciplines of the scientific community.  You should know that in our world of computer science and software there is an active resurgence of quality initiatives – a testing renaissance: integrating QA and testing into agile development, testing in production environments, testing for a green carbon footprint, and even testing requirements before we build anything.  That's right: just thinking about your own thinking is a form of testing – that is, if you're willing to question your thoughts honestly!

The new music CD & video DVD combo also includes a video for the song, which can be seen on YouTube:

At your next SCRUM or team meeting, please add an agenda item to listen to the song, watch the video, discuss the lyrics, and try to relate the ideas to how you are doing testing in your projects.  Testing is back and on the rise – and now we have a cool song to go with it!

But don't just take my word for it... take my recommendation and PUT IT TO THE TEST!

Update: 15-hours into the Non-stop HP Virtual Conference

*original post was at 11:30pm on 9/29/2009*

Okay - so I've been working the Worldwide HP Virtual Conference: Functional, Performance and Security Testing for about 15 hours now - nearly non-stop - and I've had some great chat sessions with customers from all over the world.  I've been on since 8:30am PST from Mountain View, California, where I live.  I've had 3 medium-sized cups of coffee - actually called Depth Charges from the coffee shop by my house.  These are 16 ounces of regular coffee with 2 shots of espresso added for an extra charge.  I'm posting live updates on Twitter, and the feedback has been great.  Also, I've been broadcasting live from my home studio on ustream.tv the entire time (almost).

The HP Virtual Conference materials are still available through November 2009

The LoadRunner Non-stop HP Virtual Conference

THE TIME HAS COME!! I really don't want you to miss the Worldwide HP Virtual Conference: Functional, Performance and Security Testing in Today’s Application Reality.  And just to show you how committed I am to you and to the conference, I'm going to attend the conference non-stop for the entire duration, just as we did in the promotional video!  I will be online in chats, at the booth, in the lounge and presenting the entire time and I will also be documenting the experience live from my home office, with video streaming and chatting.  Click here to register for free, now!!!

Follow Mark on twitter and ustream.tv for the duration of the entire conference!
Follow the Entire HP Conference on twitter

This conference is going to be awesome!


  • 40 sessions covering Agile, Cloud, Web 2.0, Security, customer best practices and more

  • Localized booths and content for Benelux, Nordics, Austria & Switzerland, France, Germany, Japan, Korea, Iberia and Israel

  • Live worldwide hacking contest




Here are just the Top 5 reasons you should attend:


#5 - No need to spend days out of the office
#4 - No travel or registration fees
#3 - No long registration lines
#2 - No need to choose one session over another - see them all!
#1 - No need to choose one representative from your department to attend.  Everyone can attend and learn!

(Of course, I think the best reason is it's a great excuse to work from home for the entire day...which is exactly what I'll be doing!)

Conference Dates and Schedule

Americas – 29 September: 
11am – 7pm EDT/ 8am – 4pm PDT

APJ – 30 September: 
11:00am – 5:00pm (AEST) / 10:00am – 4:00pm (Tokyo)
9:00am – 3:00pm (SG time) / 6:30am – 12:30pm (Bangalore)

EMEA – 30 September:
8:00am – 4:00pm (GMT+1) / 8:00am – 4:00pm (UK)
9:00am – 5:00pm (Amsterdam, Berlin, Paris, Rome)



See you online, at the SHOW!

Performance Testing Needs a Seat at the Table

It is time Performance Testing gets a seat at the table. Architects and developers like to make all the decisions about products without ever consulting the testing organization. Why should they? All testers have to do is test what's created. If testers can't handle that simple task, then maybe they can't handle their job.

I know this thought process well. I used to be one of those developers :). But I have seen the light. I have been reborn. I know that it is important to get testers' input on products upfront. And it is actually becoming more important now than ever before.

With RIA (Web 2.0) technologies there are many different choices that developers can make. Should they use Flex, Silverlight, AJAX, etc.? If they use AJAX, which frameworks should they use? If Silverlight, what type of back-end communication are they going to use?

Let's just take AJAX as an example. There are hundreds of frameworks out there. Some are popular and common frameworks, but most are obscure or one-off frameworks. Developers like to make decisions based on what will make their life easier and what is cool. But what happens if their choices can't be performance tested? Maybe the performance team doesn't have the expertise in-house, or maybe their testing tools don't support the chosen framework. What happens then?

I can tell you that many times the apps get released without being tested properly, and they just hope for the best. It's a great solution. I like the fingers-crossed method of releasing apps.

How could this situation be avoided? Simply include the performance testing group upfront. Testing is a major cog in the application life cycle. They should be included at the beginning. I'm not talking about testing earlier in the cycle (although that is important and it should be done). I'm talking about getting testing involved in architecture and development discussions before development takes place.

If developers and architects knew up front that certain coding decisions would make it hard or impossible to performance test, then maybe they would choose other options for the application development. Many businesses would not want to risk releasing an application if they knew that it could not be tested properly. But when they find out too late, they don't have a choice except to release it (the finger-crossing method).

If the performance team knew upfront that they couldn't test something because of skills or tools, then at least they would have a heads-up and could begin planning early for the inevitable testing. Wouldn't it be nice to know months in advance what you are going to need to performance test? No more scrambling at the 11th or 12th hour.

Think about this: if testing were involved or informed upfront, then maybe they, along with development, could help push standards across development. For example, standardizing on one or two AJAX frameworks would help out both testing and development. It would make the code more maintainable, because more developers would be able to update it, and it would help ensure that the application is testable.

We need to get more groups involved up front. The more you know, the better the decisions, the better off the application is going to be.

What are the Goals of Performance Testing?

So what is the point of performance testing?  I get this question often.  And depending on who you talk to, you get different answers.

First let me begin by telling you what the goals of performance testing / validation are NOT.

  • Writing a great script

  • Creating a fantastic scenario

  • Knowing which protocols to use

  • Correlating script data

  • Managing data

  • Running a load test

This is not to say that all of these are not important. They are very important, but they are not the goals. They are the means to the end.

So why DO people performance test? What are the goals?

  • Validating that the application performs properly

  • Validating that the application conforms to the performance needs of the business

  • Finding, analysing and helping fix performance problems

  • Validating that the hardware for the application is adequate

  • Doing capacity planning for future demand of the application

The outcomes of the performance test are the goals of testing. It seems basic. Of course these are the goals. But...

  • How many people really analyse the data from a performance test?

  • How many people use diagnostic tools to help pinpoint the problems?

  • How many people really know that the application performs to the business requirements?

  • How many people just test to make sure that the application doesn't crash under load?

Even though they seem obvious, many testers/engineers are not focusing on them correctly, or are not focused on them at all.

  • Analysing the data is too hard.

  • If the application stays up, isn't that good enough?

  • So what if it's a little slow?

These are the reasons I hear. Yes, you want to make sure that the application doesn't crash and burn. But who wants to go to a slow website? Time is money. That is not just a cliché, it's the truth. Customers will not put up with a slow app/website. They will go elsewhere – and they do go elsewhere. Even with an internal application, if a task is slow, then it takes longer to get the job done, and that means it costs more to get the job done.

Performance engineering is needed to ensure that applications perform properly and perform to the needs of the business. These engineers do not just write performance scripts. Just because someone knows Java does not mean that they are a developer. And just because a person knows how to write a performance script does not mean that they are a performance engineer.

Performance engineering requires skills that not all testers have. Engineers need to understand the application under test (AUT), databases, web servers, load balancers, SSO, etc. They also have to understand the impact of CPU, memory, caching, I/O, bandwidth, etc. These are not skills that are learned overnight, but skills that are acquired over time.

I wrote a previous blog entry on "you get what you pay for". If you pay for a scripter, you get a scripter. If you pay for a performance engineer, you get a performance engineer (well, not always :). Sometimes people exaggerate their skills :) ).

Companies can always divide and conquer. They can have automators/scripters create the scripts and the tests, then have performance engineers look at the tests and analyse the results. In any case, the performance engineer is a needed position if you want to properly performance test/validate.

It needs to be mandatory to know what metrics to monitor and what those metrics mean. Knowing how to use diagnostic tools also needs to be mandatory. Again, in a previous blog I mentioned that if you are not using diagnostics, you are doing an injustice to your performance testing. Without this analysis knowledge you are not truly performance testing, you are just running a script with load. Performance testing is both running scripts and analysing the runs.

By looking at the monitoring metrics and diagnostic data, one can begin to correlate data and pinpoint problems. One can also notice trends that may become problems over time. Just running a load test without analysis will not give you that insight. It will just let you know that the test appeared to run OK for that run. Many times just running the test will give you a false positive. People wonder why an application in production is running slow when it already passed performance validation. Sometimes this is the reason (you never want this to be the reason). Proper analysis will ensure a higher quality application.
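To make "analysing the runs" concrete, here is a minimal sketch of the kind of post-run check argued for above. The transaction names, the 3-second SLA and the 90th-percentile target are invented for the example; a real analysis would read timings exported from your load testing tool:

```python
# Given raw (transaction, response_time) pairs from a load test, compute
# the average and 90th-percentile response time per transaction and flag
# SLA breaches - instead of only checking that nothing crashed.
from collections import defaultdict

def analyse(timings, sla_seconds=3.0, percentile=0.90):
    """timings: iterable of (transaction_name, response_seconds) pairs."""
    by_txn = defaultdict(list)
    for name, seconds in timings:
        by_txn[name].append(seconds)

    report = {}
    for name, values in by_txn.items():
        values.sort()
        # Nearest-rank percentile, so small samples still get a value.
        idx = min(len(values) - 1, int(percentile * len(values)))
        p90 = values[idx]
        report[name] = {
            "avg": sum(values) / len(values),
            "p90": p90,
            "sla_ok": p90 <= sla_seconds,
        }
    return report
```

The point of the percentile check is exactly the false-positive problem described above: an average can look fine while the slowest 10% of users are suffering, and only looking at the distribution catches that.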

As I said, these are not skills that are created overnight. Performance engineers learn on the job. How do you make sure that this knowledge stays with a company as employees come and go? That is where a Center of Excellence (CoE) comes into play (you knew I was going to have to pitch this :smileyhappy: ). If you centralize your testing efforts, then the knowledge becomes centralized, as opposed to dispersed through a company only to get lost when the employees who hold it leave. You can read yet another one of my blogs for more information on the CoE. Wow! I've just been pitching my blog entries today :smileyhappy:. But I digress.

Let's stop thinking that proper performance testing is writing a good script, and agree that performance engineering is not an option but a must. Let's start to focus on the real goals of performance testing, and then all of the important "means to the end" will just fall into place.

HP Performance Engineering Best Practices Series

Just to let you know that we've been putting together some published practices for LoadRunner and performance testing...and the electronic version of the book(s) is available free of charge!

This one is "Performance Monitoring Best Practices" authored by Leo Borisov from our LoadRunner R&D team.  Leo sent along this description and instructions for downloading the book:


"We have always recognized the fact that having best practices and methodology would greatly simplify the life of performance engineers and managers, and now we are beginning to fill this need. The book is available with SP1; access it from the product documentation library, or from the Help menu.

You can also download a copy from the HP Software Support site:

  1. Go to http://h20230.www2.hp.com/selfsolve/manuals

  2. Log in using an HP Passport, or register first and then log in

  3. In the list of HP manuals, choose either LoadRunner or Performance Center – the book is listed under both

  4. Select product version 9.51 and your operating system

  5. Click Search

Since this is the first book in the series covering various aspects of methodology, we would really appreciate your feedback. Please send your feedback directly to me or lt_cust_feedback @ hp.com."

Congratulations Leo - thanks for your efforts!


ROI: You Get What You Pay For

We've all heard that saying. But how many times do we really follow it? We have bought, ok I have bought, cheap drills, exercise machines, and furniture, only to be sorry when they break prematurely. Or you find a great deal on shoes, only to have them fall apart on you while you are in a meeting with a customer. I'm not saying that happened to me, but I know how that feels.

Cheaper always seems like it's a better deal. Of course it's not always true. I can tell you that now I pay more for my shoes and I'm much happier for it :smileyhappy:. No more embarrassing shoe problems in front of customers (not saying that it happened to me). In fact when my latest pair of shoes had an issue, I sent them back to the dealer and they mailed me a new pair in less than a week! That's service. You get what you pay for.

The same holds true for cars, clothes, hardware, repairs, and of course software testing tools. You knew I was going to have to go there.

I hear from some people that Performance Center is too expensive. I'm always amazed when I hear that. I'm not saying Performance Center is for everyone. If you don't need PC's features and functionality, it may appear pricey. If you are only looking for a simple cell phone, then a phone that connects to the internet and to your email and also has a touch screen may seem a little pricey. But if you need those features then you are able to see the value in those features.

I can sit here, go through each unique feature in Performance Center, and explain to you the value (not saying that it will not come in a future blog :smileyhappy: ). But why would you listen to me? I'm the product manager. Of course I'm going to tell you that there is a lot of value to PC. Well, IDC, a premier global provider of market intelligence and advisory services, has just released an ROI case study around HP Performance Center.

A global finance company specializing in real estate finance, automotive finance, commercial finance, insurance, and online banking was able to achieve total ROI in 5.6 months. Yes! Only 5.6 months, not 1 or 2 years, but a total return on investment in 5.6 months. If anything, I think we should be charging more for Performance Center :smileyhappy:. This company did not just begin using PC; they have been using PC for the last 4 years. And during that time they have found a cumulative benefit of $24M. I'd say they got a lot more than what they were paying for. Not only did they see a 5.6 month return on the investment, but they are seeing a 44% reduction in errors and a 33% reduction in downtime in production.

What gave them these fantastic numbers?

Increased Flexibility
By moving to Performance Center they were able to schedule their tests. Before PC they had some controllers sitting idle while other controllers were in high demand, based on the scenarios that were on them. But once they were able to start scheduling their tests, they began performing impromptu load tests and concurrent load tests. They started to see that they were able to run more tests with fewer controllers.

Increased Efficiency
While they were able to increase their testing output, they didn't increase their testing resources. Their testers/engineers were able to do more through PC than they could with any other tool.

Central Team
With the central team they were able to increase their knowledge and best practices around performance testing. By doing this, along with performing more tests, they were able to reduce their error rate by 44% and their production downtime by 33%.

So you get what you pay for. Put in the time and money. Get a good enterprise performance testing product. Invest in a strong central testing team. You will get more out than what you put in. In the case of this customer, they got $21M out over 4 years.

Also invest in good shoes. Cheap shoes will only cause you headaches and embarrassment (Not saying that it happened to me).

Managing Changes – Web 2.0, where's your shame

Oh, this strange fascination started in 2004, when they coined this new generation of ‘web development’ called Web 2.0.  I witnessed this evolution of technology from my seat in steerage at Microsoft as customers switched from the old Active architecture (remember Windows DNA?) to the warm impermanence of .NET and J2EE architectures for web applications.  Out with the old and in with the new, but the performance problems were generally the same – memory management, caching, compression, heap fragmentation, connection pooling, and so on.  It might have had a new name, but it was the same people making the same mistakes.  Back then we dismissed some of these new architectures as unproven or non-standard.  But that didn’t last long.  Now, almost 5 years later with Web 2.0, any major player in the software industry that hasn’t adopted the latest web architectures is spit on as plainly outdated, or stuck with the label of being traditional.

When it comes to testing tools and Web 2.0, I think that “traditional” does not equate to obsolete – no matter what some of the “youngsters” in the testing tool market may like to imply.  The software industry is competitive, certainly, and I think new companies and software should just evangelize the positive innovations they have and then let the facts speak for themselves.  If the ‘old guys’ can’t support new Web 2.0 stuff…then it will be obvious soon enough.  For instance, if a new testing tool company doesn’t fully support AJAX clients, that’s just unacceptable at this point.

However, I do believe it is fair game to evaluate existing software solutions (pre-Web 2.0) on how well they can be adapted to support newer innovations in technology.  As for LoadRunner, I think we have a long history of adapting and embracing every new technology that has come along.  I started using LoadRunner with X-Motif systems running on Solaris.  That era and generation of technology has long since died (no offense intended to Motif or Sun).  Today, the same concepts for record, replay, execution, scripting, and analysis are still innovative and very relevant.  As long as the idea for the product is still valid, you can still deliver a valid product.

When adapting to changes here in LoadRunner, we usually start with overcoming the technical hurdles for creating a new virtual user, or updating an existing one.  And as I stated above, we have a long and rich history of doing this – probably more than any other testing tool.  As an example, in versions 9.0, 9.1 and 9.5 we have continued to improve our support for AJAX, Flex and asynchronous applications.  We respond to change quite well, even if we take some extra time to evaluate every aspect of what this ‘new web’ change means to our customers.  It’s worth getting right and not being swayed by the hype of the ‘Web 2.0’ label.

Let me finish by stating that these new web technologies are a challenge to testing tools, but they’re even more of a challenge to testers.  I’ve heard that many a tester gets surprised by the next version of the AUT, which has quietly implemented a new Web 2.0 architecture or even started using web services calls to a SOA.  Change is a surprise only if you’re unaware or unconscious.  Surely it would be a failure not to communicate to QA that some significant technology changes were coming, right?  To some, this will sound too familiar – like an institutionalized version of “throw it over the wall” behavior – but honestly these new technologies (like AJAX) have been around for nearly 5 years now.

As for most testers, here’s a thanks to Web 2.0 – “You've left us up to our necks in it!”

Email Questions about Think Times


From: Prasant 
Sent: Monday, August 04, 2008 7:55 AM
To: Tomlinson, Mark
Subject: Some questions about LR 

Hi Mark,

I am Prasant. I got your mail id from the Yahoo LR group. I have just started my career in performance testing and got a chance to work on LR. Currently I am working with LR 8.1. I have one doubt regarding think time. While recording one script, think time automatically got recorded in the script. While executing the script I am ignoring the think time. Is it required to ignore the think time, or do we have to consider the think time while executing the script?

I have questions in mind like: think time is considered the time the user takes before giving input to the server. In that case, while recording any script for a particular transaction I may take 50 seconds of think time, and my friend who is recording the same script may take less than 50 seconds (let's say 20 seconds). So his script and my script will have different think times for the same transaction. If I execute both scripts considering the think time, the transaction response times will vary. It may create confusion for the result analysis. Can you please give some of your viewpoints about this?


From: Tomlinson, Mark 
Sent: Thursday, August 07, 2008 2:59 AM
To: Prasant
Subject: RE: Some questions about LR 

Hi Prasant,

Yes – it is good that think time gets recorded, so the script will be able to replay exactly like the real application – with delays for when you are messing around in the UI. But you must be careful, if you are recording your script and you get interrupted…or perhaps you go to the bathroom, or take a phone call…you will see VERY LONG THINK TIMES getting recorded. You should keep track of this, and then manually go edit those long delays – make them shorter in the script. Make them more realistic, like a real end user.

Also, as a general rule of thumb you should try *not* to include think time statements in between your start and end transactions. You are right that it will skew the response time measurements. But for longer business processes where you have a wrapper transaction around many statements…it might be impossible to clean every transaction.

Here are 3 other tips for you:

First – in the run time settings, you have options to limit or adjust the think time settings for replay…you can set a maximum limit, or multiply the amount. The combinations are very flexible. You can also choose to ignore think times and run a stress test, although I typically will include even 1 second iteration pacing for most stress tests I run.

Second – you can write some advanced functions in the script to randomize the think times programmatically. This could be used to dynamically adjust the think time from a parameter value, in the middle of the test.

Third – even if you do have think times inside your start and end transactions, there is an option in the Analysis tool to include or exclude the think time overhead in the measurements displayed in the Analysis graphs and summary.

I hope you’ll find that with those 3 tips, you can get all the flexibility you need to adjust think times in your scripts – try to make the most realistic load scenario you can.

Best wishes,

Flash compared to Silverlight

Today I read an article from my colleague Brandon which compares Adobe Flash and Microsoft Silverlight. He makes some excellent points about the strengths of Flash's market penetration compared to Silverlight's latest enhancements. For rich internet applications, I think we still see Flash as the primary UX platform out there…and it is a challenge for any testing tool to keep up with the fast pace of Adobe's innovations.

Brandon points out one of the main advantages that Silverlight has is the "Speed to Production" - getting the app delivered, out to production quickly. The advantage is better responsiveness and agility for the business. Unfortunately this usually equates to less time for proper testing, and especially performance testing.


It's also interesting how he points out the performance comparison at the presentation layer - which I think could be described as the "perceived performance" of the entire application system. In an enterprise RIA or mashed-up application, users might not perceive on-par performance from either Flash or Silverlight, depending on the backend systems.

In a big RIA, you have multiple points of exposure to latency risk introduced in the data services calls behind the application - so even if the UI is responsive to the user, the data retrieval might be slow. Check out James Ward's blog on "Flex data loading benchmarks" - showing the combination of AMF and BlazeDS, which is proving to be a very scalable and responsive combination.

Tags: Silverlight

Welcome to LoadRunner at hp.com

Greetings fellow LoadRunner Gurus! This is the introductory entry to the new LoadRunner @ hp.com blog, written by yours truly - Mark Tomlinson, the Product Manager for LoadRunner here at HP Software.

As you might be expecting a lengthy personal introduction for the first blog entry, I've decided to not deliver on that foolishly stereotypical initiation. Instead I'd like to start off here with a few opportunities to engage with you directly for the betterment of LoadRunner's future.

First, we are looking for a few good and challenging applications for LoadRunner to go after, as our New Horizon research team is developing some exciting new solutions for advanced client record and replay. If you've got an application with any extreme architecture, including:

  1. Web v2.0 or Rich Internet Applications on Java, .NET or Adobe
  2. Multi-multi-multi-protocol…like a mashed-up app with several backend systems
  3. Encoded/Serialized or Secured communication protocols
  4. Asynchronous, multi-threaded client(s) or data-push technologies
  5. Any combination or all of the above.

If you have a challenge for LoadRunner, we'd love to hear from you.

Second, we have a new release of LoadRunner coming soon, and we are just getting our plans for the early-access beta program put together. If you're an existing customer and you're interested in getting into our formal beta program for LoadRunner, drop us an email. We have an early-access program that does require your feedback, production usage and a reference for the new release. We'd love to have your support for all that - but I certainly understand some folks just want to share some feedback on the new stuff. We need that also, if that's all you can do.

Lastly, I'd love to hear from you - so drop me an email (loadrunner at hp . com). What do you love about the product, and what do you not like so much? What kinds of testing are you doing? What new applications are you being asked to test? How do you get your testing done? How do you generate meaning from the load test results? What is your favorite BBQ restaurant? Let me know your thoughts and feedback - the good, the bad, the ugly. I have been using LoadRunner for nearly 15 years - so I plan to include your input in our strategy for moving forward with innovating our solutions. I will post back weekly with some Q&A, if you'd like to share the conversation with our community.

Again - all of these initiatives are really important to the future of LoadRunner. Your participation is encouraged and greatly appreciated!

Thanks - and again, welcome to the blog!

About the Author(s)
  • Lending 20 years of IT market expertise across 5 continents, for defining moments as an innovation adoption change agent.
  • I have been working in the computer software industry since 1989. I started out in customer support, then software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • GTM Marketing for HP Software's ADM team. I am passionate about design, digital marketing, and emerging tech.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • Rick Barron is a Program Manager for various aspects of the PM team and HPSW UX/UI team; and working on UX projects associated with HP.com. He has worked in high tech for 20+ years working in roles involving web design, usability studies, and mobile marketing. Rick has held roles at Motorola Mobility, Symantec and Sun Microsystems.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.