HP LoadRunner and Performance Center Blog

Displaying articles for: November 2008

Let's talk about: The Concurrency Factor

Recently I had this question come from some internal peers of mine:

“We have a customer who is being pushed to emulate a load test with larger number of Vusers ... by manipulating think time in the script. This customer is trying to run some 3000 Vusers for 300 vuser license. The argument is like this: 1 vuser takes 5 seconds of think time in between process. If they reduce the think time to 0, they are emulating to run 5 times more Vusers for the same license. If you have any input, let me know. Thanks!”

  So, yes - there is an issue here, but it concerns the risk they are taking of getting a false positive test result. The scenario he described refers to what I call a "concurrency factor" in the load test design.

  Basically, if you have a script that emulates 1 real-world user taking 10 minutes per iteration and you instead run it at 5-minute pacing, that's a concurrency factor of 2x. Given the modest compression (lowering from 10 minutes to 5 minutes), that is reasonable, and extrapolating the results mathematically is quite credible. But there is a major caveat to taking this approach for load testing.
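The arithmetic is straightforward, and worth making explicit. Here is a minimal sketch (the function names are my own illustration, not anything from LoadRunner):

```python
# Concurrency factor: how much faster each Vuser iterates than a real user would.
# Helper names are illustrative, not LoadRunner APIs.

def concurrency_factor(real_world_pacing_min, test_pacing_min):
    """Ratio of real-world iteration pacing to the test's pacing."""
    return real_world_pacing_min / test_pacing_min

def emulated_users(licensed_vusers, real_world_pacing_min, test_pacing_min):
    """How many real-world users the compressed test claims to represent."""
    return licensed_vusers * concurrency_factor(real_world_pacing_min,
                                                test_pacing_min)

# The example from the post: 10-minute real-world pacing run at 5-minute pacing.
print(concurrency_factor(10, 5))    # 2.0 - a modest, credible compression

# The customer's proposal: 300 licensed Vusers standing in for 3000 real users
# by compressing each iteration 10x (e.g. stripping the think time).
print(emulated_users(300, 10, 1))   # 3000.0 - a 10x factor, much riskier
```

The point is that the factor itself is trivial to compute; whether the extrapolation is credible at that factor is the hard question the rest of this post addresses.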

  As the concurrency factor increases, it becomes more difficult to extrapolate credibly to real-world conditions. In short, a higher concurrency factor makes for a less realistic test, because each virtual user thread runs at super-human speed. It is the weakest point in the load test design, and here's why:


  If the think time or iteration pacing approaches zero, the test may show dramatically different resource consumption on the server. This is easy to see when we describe the transaction throughput (or rate) of the load test. Take, for example, a 1000 transactions/hour load test:

That could be 1000 virtual users, each running 1 transaction in 60 minutes:

  • each user logs on to its own session, which consumes much more memory
  • each user holds its session open for a very long time, which exercises the configuration for timeouts, connections, etc.
  • each user consumes hardly any CPU per session, so context switching, paging and swapping may be much higher

Or it could be 1 virtual user running 1000 transactions in 60 minutes:

  • the user logs on to only 1 session, which consumes almost no memory and is very different from the real world
  • the user holds each session open for a very, very short time, which is an unrealistic hammering of session ID creation
  • the user pushes each session through at the highest possible transaction rate, which hammers the CPU differently
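To make the contrast concrete, here is a small sketch of the two scenarios above. The per-session memory figure is a made-up assumption purely for illustration; the point is that identical throughput can hide wildly different server-side profiles:

```python
# Two ways to generate the same 1000 transactions/hour, with very different
# server-side profiles. The 10 MB per-session figure is an assumed example,
# not a measurement.

def scenario(vusers, trans_per_user_per_hour, mem_per_session_mb=10):
    return {
        "trans_per_hour": vusers * trans_per_user_per_hour,
        "concurrent_sessions": vusers,          # one logged-on session per Vuser
        "session_memory_mb": vusers * mem_per_session_mb,
    }

realistic  = scenario(vusers=1000, trans_per_user_per_hour=1)
compressed = scenario(vusers=1,    trans_per_user_per_hour=1000)

# Identical transaction throughput...
assert realistic["trans_per_hour"] == compressed["trans_per_hour"] == 1000

# ...but a 1000x difference in concurrent sessions and session memory.
print(realistic["session_memory_mb"])    # 10000 MB held across 1000 sessions
print(compressed["session_memory_mb"])   # 10 MB in a single hammered session
```

A monitoring graph that only shows transactions per hour would report these two tests as equivalent, which is exactly how the false positive sneaks in.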

Basically, this is a misunderstanding of the difference between real-world load testing and stress testing.

  
If a customer were to conduct testing with such a high concurrency factor, they may painfully discover that the test produced a false positive. In our experience, this situation is exactly what leads so many engineers to become overly confident about the performance of their applications. And it leads directly (and blindly, I might add) to massive performance issues in scaled-out production systems, a pervasive architecture for the web. They have greatly increased the risk of failure.

Labels: LoadRunner
About the Author(s)
  • I have been working in the computer software industry since 1989. I started out in customer support, then software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP LoadRunner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation