HP LoadRunner and Performance Center Blog

Let's talk about: The Concurrency Factor

Recently I had this question come from some internal peers of mine:

“We have a customer who is being pushed to emulate a load test with larger number of Vusers ... by manipulating think time in the script. This customer is trying to run some 3000 Vusers for 300 vuser license. The argument is like this: 1 vuser takes 5 seconds of think time in between process. If they reduce the think time to 0, they are emulating to run 5 times more Vusers for the same license. If you have any input, let me know. Thanks!”

  So, yes - there is an issue here, but only with regard to the risk of getting a false positive test result. The scenario he describes refers to what I call the "concurrency factor" of a load test design.

  Basically, if you have a script that runs 1 real-world user taking 10 minutes per iteration, and you instead run it at 5-minute pacing, that's a concurrency factor of 2x. Given the modest change in pacing (lowering from 10 minutes to 5 minutes), that would be reasonable, and extrapolating the results mathematically is quite credible. But there is a major caveat to taking this approach for load testing.

  The higher the concurrency factor, the more difficult it is to make a credible extrapolation to real-world conditions. In short, a test with a high concurrency factor is less of a real-world test - because it runs each virtual user thread at super-human speed. It is the weakest point in the load test design, and here's why:

  If the think time or iteration pacing approaches zero, the test may show dramatically different resource consumption on the server. This is easy to see when we describe the transaction throughput (or rate) for the load test. Take, for example, a 1000 trans/hour load test:

That could be 1000 virtual users each running 1 transaction in 60 minutes:

  • each user logs on to its own session, which consumes much more memory
  • each user holds its session open for a very long time, which tests the configuration for timeouts, connections, etc.
  • each user consumes very little CPU per session, but across all those sessions the context switching, paging and swapping may be much higher

Or it could be 1 virtual user running 1000 transactions in 60 minutes:

  • the user logs on to only 1 session, which consumes almost no memory and is very different from the real world
  • the user holds each session open for a very, very short time, which unrealistically hammers session creation and session-id handling
  • the user pushes each session through at the highest transaction rate, which hammers the CPU differently

Basically, this is a misunderstanding of the difference between real-world load testing and stress testing.

If a customer were to conduct testing with such a high concurrency factor, they may painfully discover that the test produced a false positive. In our experience, this situation is exactly what leads so many engineers down the path of being overly confident about the performance of their applications. And it leads directly (and blindly, I might add) to massive performance issues in scaled-out production systems, which is the pervasive architecture for the web. They have greatly increased the risk of failure.

Labels: LoadRunner

LoadRunner is Grand Prize Winner: 2008 Rockstar of Testing

Okay – after a very long couple of weeks crawling through the jungle of HP's back-office systems, and days of constantly writing up new sales kick-off materials for next month, I got some news to truly improve my mood.


You will find in the November (2008) edition of Software Test & Performance a summary of the 2008 Tester Choice Awards—which they also called the "2008 Rockstars of Testing." You might notice the Grand Prize Winner: HP LoadRunner. We won a total of 7 awards – more than any other vendor this year. What a really great award!


From the very generous introduction in Edward J. Correia's writeup: "I mean like last year; the players were the same, dude, but the awards went to Hewlett-Packard, which acquired Mercury in November 2006. Bonus! These tools have received our standing ovations for four years running. They are like truly amazing and excellent." Muchas gracias to all who voted at STP, and to Edward for the enthusiastic review! I would also like to extend very kind recognition to all the other testing vendors nominated in these awards – our products wouldn't be nearly as strong without each other. And we might all be lost without the brilliant persistence of our testing practitioners and performance engineers.


But truly, this award should be credited to the HP and Mercury individuals who didn't take their foot off the accelerator during a potentially disruptive acquisition last year. My hat is off to them – each one. Nice work!


  • Erez Barak…former LoadRunner Product Manager
  • Stephen Feloney…Performance Center Product Manager
  • Priya Kothari…Product Manager
  • Ravit Danino…Functional Architect in R&D
  • Alexey Demidov…Tester of testers in LoadRunner QA
  • Lior Manor…former manager of LoadRunner R&D
    …just to name a few!


Oh by the way – LoadRunner Beta program is open now, so send me an email if you want to join up and give feedback. (loadrunner@hp.com)

Labels: LoadRunner

Jerry Douglas is a Performance Tester?

Okay - so I admit it is completely unconfirmed, but in the liner notes of Jerry's latest album entitled "Glide" you will find a thanks and shout-out to Jim N' Nick's BBQ (http://www.jimnnicks.com/).


Now, I don't imagine that there is a direct connection between Jerry and software performance testing - but definitely Jerry understands both performance and speed (in dobro playing). He also understands excellence - perhaps, both in musical aesthetics and BBQ.


So - I'll add Jim N' Nick's to the list of famous BBQ places.

Labels: LoadRunner

Using SiteScope to monitor Oracle RAC

Our partner at Loadtester Incorporated, Anthony Lyski, just released a new whitepaper on how to configure SiteScope to monitor Oracle RAC, step by step, along with explanations of the most common errors you might encounter. Thanks, Anthony, for sharing your experience and publishing a great technote!


Labels: LoadRunner

Performance Testing Guidance

One of the last things I worked on before leaving Microsoft was a book on Performance Testing Guidance, specifically with some of the best performance engineers that I'd known for years at Microsoft. These guys were intense about the subject and really worked the authors and reviewers like crazy to get it done.


This guide shows you an end-to-end approach for implementing performance testing. Whether you are new to performance testing or looking for ways to improve your current approach, you will find insights that you can tailor to your specific scenarios. The guide covers Microsoft's recommended approach for implementing performance testing for Web applications, with steps for managing and conducting the testing. For simplification and tangible results, these are broken down into activities with inputs, outputs, and steps. You can use the steps as a baseline or to help you evolve your own process.


Written by:  J.D. Meier, Scott Barber, Carlos Farre, Prashant Bansode, and Dennis Rea


Reviewed by: Alberto Savoia, Ben Simo, Cem Kaner, Chris Loosley, Corey Goldberg, Dawn Haynes, Derek Mead, Karen N. Johnson, Mike Bonar, Pradeep Soundararajan, Richard Leeke, Roland Stens, Ross Collard, Steven Woody, Alan Ridlehoover, Clint Huffman, Edmund Wong, Ken Perilman, Larry Brader, Mark Tomlinson, Paul Williams, Pete Coupland, and Rico Mariani.





The book is an excellent starting point for learning about performance testing and performance engineering practices and I can tell you it is loaded with good advice about how to "think" about load testing. It covers the concepts and perspectives that some of the best engineers in our industry apply every day, to some of the toughest performance problems.

One of those guys is Scott Barber (note: an unabashed plug for Scott's work), who was a major contributor to this book. According to Scott's own blog entry about the book: "Even though this as a Microsoft patterns&practices book, it is a tool, technology, & process agnostic book...the book should apply equally well to a LoadRunner/Eclipse/Agile project it applies to a VSTS/.NET/CMMI project." You can read more of Scott's blog here. In retrospect, I think I failed to truly appreciate Scott's experience and contributions to the writing - in fact, I know I did.


Check out the book (buy it, get the PDF, or view it online)...it's a great way to get started with the performance testing discipline.


Labels: LoadRunner

Email Questions about Think Times


From: Prasant 
Sent: Monday, August 04, 2008 7:55 AM
To: Tomlinson, Mark
Subject: Some questions about LR 

Hi Mark,

I am Prasant. I got your mail id from the Yahoo LR group. I have just started my career in performance testing and got a chance to work on LR. Currently I am working with LR 8.1. I have one doubt regarding think time. While recording one script, think time automatically got recorded in the script. While executing the script I am ignoring the think time. Is it required to ignore the think time, or do we have to consider the think time while executing the script?

I have a question in mind: think time is considered the time the user takes before giving input to the server. In that case, while recording a script for a particular transaction, I may take 50 seconds of think time, and my friend who is recording the same script will take less than 50 seconds (let's say 20 seconds). So the think time in his script and in my script will vary for the same transaction. If I execute both scripts considering the think time, the transaction response times will vary. It may create confusion for the result analysis. Can you please give some of your viewpoints about this?


From: Tomlinson, Mark 
Sent: Thursday, August 07, 2008 2:59 AM
To: Prasant
Subject: RE: Some questions about LR 

Hi Prasant,

Yes – it is good that think time gets recorded, so the script will be able to replay exactly like the real application – with delays for when you are messing around in the UI. But you must be careful, if you are recording your script and you get interrupted…or perhaps you go to the bathroom, or take a phone call…you will see VERY LONG THINK TIMES getting recorded. You should keep track of this, and then manually go edit those long delays – make them shorter in the script. Make them more realistic, like a real end user.

Also, as a general rule of thumb you should try *not* to include think time statements in between your start and end transactions. You are right that it will skew the response time measurements. But for longer business processes where you have a wrapper transaction around many statements…it might be impossible to clean every transaction.

Here are 3 other tips for you:

First – in the run-time settings, you have options to limit or adjust the think time for replay…you can set a maximum limit, or multiply the amount. The combinations are very flexible. You can also choose to ignore think times entirely and run a stress test, although I typically include at least 1 second of iteration pacing for most stress tests I run.

Second – you can write some advanced functions in the script to randomize the think times programmatically. This could be used to dynamically adjust the think time from a parameter value, in the middle of the test.

Third – even if you do have think times inside your start and end transactions, there is an option in the Analysis tool to include or exclude the think time overhead in the measurements displayed in the Analysis graphs and summary.

I hope you’ll find that with those 3 tips, you can get all the flexibility you need to adjust think times in your scripts – try to make the most realistic load scenario you can.

Best wishes,

Flash compared to Silverlight

Today I read an article from my colleague Brandon comparing Adobe Flash and Microsoft Silverlight. He makes some excellent points about the strength of Flash's market penetration compared to Silverlight's latest enhancements. For rich internet applications, I think we still see Flash as the primary UX platform out there…and it is a challenge for any testing tool to keep up with the fast pace of Adobe's innovations.

Brandon points out that one of the main advantages Silverlight has is "Speed to Production" - getting the app delivered out to production quickly. The advantage is better responsiveness and agility for the business. Unfortunately, this usually equates to less time for proper testing, and especially performance testing.


It's also interesting how he points out the performance comparison at the presentation layer - which I think could be described as the "perceived performance" of the entire application system. In an enterprise RIA or mashed-up application, users might not perceive on-par performance from either Flash or Silverlight, depending on the backend systems.

In a big RIA, you have multiple points of exposure to latency risk introduced in the data services calls behind the application - so even if the UI is responsive to the user, the data retrieval might be slow. Check out James Ward's blog on "Flex data loading benchmarks" - showing the combination of AMF and BlazeDS, which is proving to be a very scalable and responsive combination.

Tags: Silverlight

Welcome to LoadRunner at hp.com

Greetings fellow LoadRunner Gurus! This is the introductory entry to the new LoadRunner @ hp.com blog, written by yours truly - Mark Tomlinson, the Product Manager for LoadRunner here at HP Software.

As you might be expecting a lengthy personal introduction for the first blog entry, I've decided to not deliver on that foolishly stereotypical initiation. Instead I'd like to start off here with a few opportunities to engage with you directly for the betterment of LoadRunner's future.

First, we are looking for a few good and challenging applications for LoadRunner to go after, as our New Horizon research team is developing some exciting new solutions for advanced client record and replay. If you've got an application with any extreme architecture, including:

  1. Web v2.0 or Rich Internet Applications on Java, .NET or Adobe
  2. Multi-multi-multi-protocol…like a mashed-up app with several backend systems
  3. Encoded/Serialized or Secured communication protocols
  4. Asynchronous, multi-threaded client(s) or data-push technologies
  5. Any combination or all of the above.

If you have a challenge for LoadRunner, we'd love to hear from you.

Second, we have a new release of LoadRunner coming soon and we are just getting our plans for the early-access beta program put together. If you're an existing customer and you're interested in getting into our formal beta program for LoadRunner drop us an email. We have an early-access program that does require your feedback, production usage and reference for the new release. We'd love to have your support for all that - but I certainly understand some folks just want to share some feedback on the new stuff. We need that also, if that's all you can do.

Lastly, I'd love to hear from you - so drop me an email (loadrunner at hp . com). What do you love about the product, and what do you not like so much? What kinds of testing are you doing? What new applications are you being asked to test? How do you get your testing done? How do you generate meaning from the load test results? What is your favorite BBQ restaurant? Let me know your thoughts and feedback - the good, the bad, the ugly. I have been using LoadRunner for nearly 15 years - so I plan to include your input in our strategy for moving forward with innovating our solutions. I will post back weekly with some Q&A, if you'd like to share the conversation with our community.

Again - all of these initiatives are really important to the future of LoadRunner. Your participation is encouraged and greatly appreciated!

Thanks - and again, welcome to the blog!

About the Author(s)
  • I have been working in the computer software industry since 1989. I started out in customer support, then moved into software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner