HP LoadRunner and Performance Center Blog

Video: Real Stories of Load Testing Web 2.0 - Part 3

The third part in our Load Testing Web 2.0 series covers a not-so-new concept: server-side data processing.  Don't be fooled into thinking you already know about server performance, because these new architectures are using client-like JavaScript on the server, which is sometimes called reverse JavaScript.  This video describes how performance issues can sneak into this type of architecture and how even a simple component can result in serious latency and server-side resource overhead.  Here is Part 3 of the series, "Real Stories of Load Testing Web 2.0: Server-side Javascript Impacts (opposable thumbs not required)."



(if your browser doesn't show the video in-line, just click here)

FREE Trial of Shunra's WAN emulation within HP LoadRunner

Who said good things don't come for free?  Recently I've spent much more time with our partners at Shunra Software...and I've learned more about networking and performance than I ever imagined.  In celebration of HP Software Universe 2009 in Hamburg, Germany this week, they have posted a special FREE trial of VE Desktop for HP Software.  WOW!  This is the integration that we introduced in version 9.5 of LoadRunner and Performance Center, and it has become extremely popular.  Here's a capture from the Shunra blog entry:

"In celebration of HP Software Universe in Hamburg, Germany Shunra is offering a free trial of WAN emulation within HP LoadRunner, VE Desktop for HP Software. You can use VE Desktop for HP Software to measure your application performance through a variety of emulated WAN conditions, including replication of production, mobile or theoretical networks. Click here to register for more information, receive download instructions, and get started making your application performance testing network-aware!"

  I guess this means: "Happy Holidays from Shunra"!  :smileyhappy:

Video: Real Stories of Load Testing Web 2.0 - Part 2

As the second part in this series, we highlight another common challenge we hear from customers testing new web applications: client-side performance. As you add these exciting new components into the once-upon-a-time-very-thin browser, you'll find increased CPU and memory utilization on the client.  Perhaps as no surprise, this can result in slower response times from page rendering or JavaScript processing overhead.  Here is Part 2 of the series, "Real Stories of Load Testing Web 2.0: Web 2.0 Components - Client-side Performance (Fat clients, you make the rockin' world go round)."



(if your browser doesn't show the video in-line, just click here)

Video: Test Mobile Performance with Shunra (...can you hear me now?...now?)

Imagine yourself with 3 colleagues riding an evening train from Berlin to Hamburg, where each of you has the same mobile service provider and similar phones. For 90 minutes, each of you gets connected through different roaming providers, with different connection stability, sometimes with 3G and sometimes NOT!  (It boggles the mind, doesn't it!?)


Well, if you're lucky enough to be sitting next to a guru like David Berg from Shunra, you'll get the whole story on why this is happening.  In this brief but highly valuable video interview, we explain how the Shunra VE Desktop and HP LoadRunner can be used together to produce a robust load testing solution for mobile applications, complete with mobile WAN emulation.  Watch...and learn!



(if your browser doesn't show the video in-line, just click here)


Also, David has an even more detailed explanation on his blog, which goes much deeper into the challenges of developing mobile applications that REALLY work well.

LoadRunner Blog-o-rama: LIVE from HP Software Universe 2009 Hamburg

I'm now on the second day of this week in Hamburg, Germany, getting ready for an all-day customer advisory board meeting.  This week we are holding the 2009 HP Software Universe conference, and I will try to be blogging and twittering all day long - until I just can't type with my thumbs into my phone any longer.


If you are a blogger and you are attending HP Software Universe Hamburg...it would be great to meet you in person.  Seven other HP bloggers will be updating live from the event. Please look them up:


     Genefa Murphy - covering Agile Development and Requirements Management

     Amy Feldman - @HPITOps, covering Application Performance Management

     Aruna Ravichandran - covering Application Performance Management

     Mark Tomlinson - covering LoadRunner and Performance Center

     Michael Procopio - @MichaelAtHP, covering Application Performance Management and Business Service Management

     Mike Shaw - interested in meeting IT Operations management for something new in 2010

     Peter Spielvogel - @HPITOps, covering Operations Management


You can also follow the entire HPSU event on Twitter – @HPSWU (Twitter.com/HPSWU), hashtag #HPSWU – or become a Facebook fan.


More on HP Software Universe 2009 Hamburg


· Sneak Preview – Application Performance Management Blog

· Sneak Preview – Infrastructure Management Software Blog

· Optimizing cost by automating your ITIL V3 processes

· Event Information Page


 

Video: Real Stories of Load Testing Web 2.0 - Part 1

We know many of you are finding great challenges with testing new web applications built with modern architectures and components like Silverlight, Flex, JavaFX and AJAX. So, Steve and I decided to put together a multi-segment video covering some of the top issues that make LoadRunner web testing much harder and more complex. Here is Part 1 of the series, "Real Stories of Load Testing Web 2.0: Impacts of WAN latency on Web 2.0 apps. (Mark Googles himself and Steve offers to go to Paris)."



(if your browser doesn't show the video in-line, just click here)


 


In this video we also mention one of our new partners, Shunra, who delivers a WAN emulation integration for HP LoadRunner and Performance Center.  Visit the Shunra VE Desktop page for more details and a free trial download.


 

Video: How LoadRunner Works

Sure, I know that we already posted a screen-capture video walkthrough of LoadRunner, but it might also be nice to share another whiteboard video session that introduces HP LoadRunner and shows how the individual components of HP LoadRunner and HP Diagnostics are installed.



(if your browser doesn't show the video in-line, just click here)

Understanding the language of hosted load testing

I recently got an email from a colleague: an invitation from a testing service vendor that does load testing in "The Cloud" (an internet-hosted testing service).  The invitation included some misleading language about load testing that I think can be confusing to engineers who are new to performance testing.  So, here's the hook question they used in their invitation email:


     " Interested in learning how to load test your
applications from an outside-in customer perspective, so you can find and
resolve problems undetectable by traditional behind-the-firewall tools?"


Aside from the overly-casual, marketing-savvy tone of the sentence, there are actually so many hidden assumptions packed into it that it might be helpful for us to break it down:


     "Interested in learning how to load test your applications..."


Well, that's obvious...of course we are interested in learning about load testing our applications.  When the term 'load test' is used generically like this, I always point out that there is a hidden assumption about the definition of load testing.  Don't be fooled by the over-simplified language, because you might be led to think that performance testing is simple and easy.  Like anything in IT...it's usually not simple, and often much less easy.  Also, there are many different forms of performance testing, depending on the objectives for the testing: capacity planning, scalability, configuration optimization, query tuning, migration, disaster recovery & failover and stress testing...just to mention a few.


     "from an outside-in customer perspective..."


We know this vendor was offering testing services from OUTSIDE the firewall, generating load IN to the application under test.  This is usually a phase of testing that comes in addition to your normal "in-house" performance tests, right before the system goes live, running the load from external network infrastructure outside the company firewall or at a hosted facility.  But it is more important to understand that the concept of "outside-in" is actually the normal definition of most kinds of testing, especially black box test design.  To understand this, just ask yourself the inverse question: how would you conduct "inside-out" testing?  My point here is that they mention "customer perspective", which is inherently an "outside-in" perspective...because end-user customers see almost every application through some type of external interface (GUI or CLI).  Essentially every good test is inherently designed from a customer perspective. Customer requirements exist even if you do not document them, or even think about them.  There is a customer (or end-user) somewhere who will be impacted by the system's behavior.  In your tests, those requirements should be directly applied in the test scripts and test cases themselves.


     "find and resolve problems"


Well, it would be a shame to find a problem and not resolve it.  Wouldn't you agree?  For many years now there have been complementary solutions to performance testing tools that enable profiling and diagnostics on the system under test.  It's very common now to have a performance team that includes not only testers, but developers and architects that can repair the application code and queries on-the-spot, right when a bottleneck is found.  We hear from many developers using LoadRunner for unit performance testing, and they find & fix bottlenecks so quickly...it is perceptibly a single task to them.


     "undetectable by traditional behind-the-firewall tools"


Undetectable?  Really?  There's an implication here that your performance testing environments do not include enough of the production infrastructure to find and resolve bottlenecks that would usually only exist in production.  That's only true if you don't replicate 100% of the production environment in your test lab - which is a common limitation for some companies.  But let me be very clear - that is not a limitation of the testing tool.  It is only a limitation of your resources or imagination.  To be totally honest, LoadRunner already fully supports testing, monitoring and diagnostics across nearly 100% of your production environment.  You can even run LoadRunner against your ACTUAL production systems, if you want to (although we don't recommend overloading production...in fact, please don't do that to yourself).  And don't forget, a good replacement for the actual production infrastructure is a virtualized or emulated infrastructure - using solutions like Shunra or iTKO LISA Virtualize.


The word "traditional" carries a derogatory connotation which seeks to discredit any technology that existed before today.  It usually also signals very little respect for the existing discipline of performance testing as it is commonly defined and conducted today.  The truth is there is nothing "traditional" about LoadRunner or load testing.  And to be very honest, there's nothing modern about this "outside-the-firewall" testing service provider.  HP SaaS (formerly Mercury's ActiveTest) has been doing this type of testing for nearly 10 years...and they've been doing it successfully with LoadRunner, year over year, with every new technology innovation that's come along the way.


Don't get me wrong - I do agree there are some physical bottlenecks that cannot be detected "behind-the-firewall".  Those are bottlenecks you might find with your ISP or Telco provider, in their systems or switch configurations.  Maybe the routing tables for global internet traffic aren't ideal for your application's end-users in New Guinea.  Or maybe the CDN systems are having difficulty with performance throughput and simultaneous cache replication.  But if you find a bug with those OTHER COMPANIES...how do you get those bugs fixed?  Can you force them to optimize or fix the issue?  Is your only option to switch to another external provider with better performance, but perhaps other risks?


So, here is how we might re-write that sentence to be more accurate, transparent and honest:


     "Are you interested in learning how to conduct effective seasonal spike testing of your production systems from outside-the-firewall, so you can enhance your existing internal performance testing efforts by diagnosing additional problems that you'll find with the external production infrastructure that you probably don't have in your own testing lab?"


(I guess it doesn't sound as catchy, eh?)


 

Testing is Back on the Charts

...and Quality sounds better than ever!  The latest release, entitled Here Comes Science, from the Grammy-winning duo They Might Be Giants (John Linnell and John Flansburgh) includes a track called "Put It to the Test", which celebrates the enthusiasm of testing a hypothesis for the purpose of ratifying our understanding of the truth.  In short - testing is actually COOL!  This is not a song just for kids - no, no, NO!  If you're a veteran software tester or have worked in any capacity in quality assurance, I think you'll find its sincere advocacy for testing very refreshing.  I'll admit it had me singing along in the car:


"If there's a question bothering your brain - That you think you know how to explain
You need a test - Yeah, think up a test

If it's possible to prove it wrong - You're going to want to know before too long
You'll need a test

If somebody says they figured it out - And they're leaving any room for doubt
Come up with a test  - Yeah, you need a test

Are you sure that that thing is true? Or did someone just tell it to you?
...Find a way to show what would happen - If you were incorrect 
...A fact is just a fantasy - Unless it can be checked 
...If you want to know if it's the truth - Then, my friend, you are going to need proof 


Don't believe it 'cause they say it's so - If it's not true, you have a right to know"


These words are literally music to my ears, not just as a die-hard software tester but also as a person who respects the processes and disciplines of the scientific community.  You should know that in our world of computer science and software there is an active resurgence of quality initiatives - a testing renaissance: integrating QA and testing into agile development, testing in production environments, testing for a green carbon footprint and even testing requirements before we build anything.  That's right: just thinking about your own thinking is a form of testing - that is, if you're willing to question your thoughts honestly!


The new Music CD & Video DVD combo also includes a video for the song, which can be seen on YouTube:



At your next SCRUM or Team Meeting - please add an agenda item to listen to the song, watch the video, discuss the lyrics and try to relate the ideas to how you are doing testing in your projects.  Testing is back and on the rise - and now we have a cool song to go with it!


But don't just take my word for it...take my recommendation and PUT IT TO THE TEST!  Geeked

Update: 15-hours into the Non-stop HP Virtual Conference

*original post was at 11:30pm on 9/29/2009*


Okay - so I've been working the Worldwide HP Virtual Conference: Functional, Performance and Security Testing for about 15 hours now - nearly non-stop - and I've had some great chat sessions with customers from all over the world.  I've been on since 8:30am PST from Mountain View, California, where I live.  I've had 3 medium-sized cups of coffee - actually called Depth Charges at the coffee shop by my house: 16 ounces of regular coffee with 2 shots of espresso added for an extra charge.  I'm posting live updates on Twitter and the feedback has been great.  Also, I've been broadcasting live from my home studio on ustream.tv the entire time (almost).



The HP Virtual Conference materials are still available through November 2009

The LoadRunner Non-stop HP Virtual Conference

THE TIME HAS COME!! I really don't want you to miss the Worldwide HP Virtual Conference: Functional, Performance and Security Testing in Today’s Application Reality.  And just to show you how committed I am to you and to the conference, I'm going to attend the conference non-stop for the entire duration, just as we did in the promotional video!  I will be online in chats, at the booth, in the lounge and presenting the entire time and I will also be documenting the experience live from my home office, with video streaming and chatting.  Click here to register for free, now!!!

Follow Mark on twitter and ustream.tv for the duration of the entire conference!
Follow the Entire HP Conference on twitter

This conference is going to be the most awesome!


 



  • 40 sessions covering Agile, Cloud, Web 2.0, Security, customer best practices and more

  • Localized booths and content for Benelux, Nordics, Austria & Switzerland, France, Germany, Japan, Korea, Iberia and Israel

  • Live worldwide hacking contest


 


 


 


Here are just the Top 5 reasons you should attend:


 


#5 - No need to spend days out of the office
#4 - No travel or registration fees
#3 - No long registration lines
#2 - No need to choose one session over another - see them all!
#1 - No need to choose one representative from your department to attend.  Everyone can attend and learn!


(Of course, I think the best reason is it's a great excuse to work from home for the entire day...which is exactly what I'll be doing!)


Conference Dates and Schedule


Americas – 29 September: 
11am – 7pm EDT/ 8am – 4pm PDT


APJ – 30 September: 
11:00am – 5:00 pm (AEST) / 10:00am – 4:00pm (Tokyo) 
 9:00am – 3:00pm (SG time) / 6:30am – 12:30pm (Bangalore)


EMEA – 30 September:
8:00am – 4:00 pm (GMT+1) / 8:00am - 4:00pm (UK) 
9:00am to 5:00pm (Amsterdam, Berlin, Paris, Rome)


REGISTER NOW!!!


 


See you online, at the SHOW!

What are the Goals of Performance Testing?

So what is the point of performance testing?  I get this question often.  And depending on who you talk to, you get different answers.

First, let me begin by telling you what the goals of performance testing / validation are NOT.



  • Writing a great script

  • Creating a fantastic scenario

  • Knowing which protocols to use

  • Correlating script data

  • Data Management

  • Running a load test


This is not to say that all of these are not important. They are very important, but they are not the goals. They are the means to the end.


So why DO people performance test? What are the goals?



  • Validating that the application performs properly

  • Validating that the application conforms to the performance needs of the business

  • Finding, Analysing, and helping fix performance problems

  • Validating the hardware for the application is adequate

  • Doing capacity planning for future demand of the application


The outcomes of the performance test are the goals of testing. It seems basic. Of course these are the goals. But...



  • How many people really analyse the data from a performance test?

  • How many people use diagnostic tools to help pinpoint the problems?

  • How many people really know that the application performs to the business requirements?

  • How many people just test to make sure that the application doesn't crash under load?


Even though they seem obvious, many testers/engineers are not focusing on them correctly, or are not focused on them at all.



  • Analysing the data is too hard.

  • If the application stays up, isn't that good enough?

  • So what if it's a little slow?


These are the reasons that I hear. Yes, you want to make sure that the application doesn't crash and burn. But who wants to go to a slow website? Time is money. That is not just a cliche, it's the truth. Customers will not put up with a slow app/website. They will go elsewhere, and they do go elsewhere. Even if it is an internal application, if it is slow performing a task, then it takes longer to get the job done, and that means it costs more to get that job done.


Performance engineering is needed to ensure that applications perform properly and perform to the needs of the business. These engineers do not just write performance scripts. Just because someone knows Java does not mean that they are a developer. And just because a person knows how to write a performance script does not mean that they are a performance engineer.


Performance engineering requires skills that not all testers have. They need to understand the application under test (AUT), databases, web servers, load balancers, SSO, and so on. They also have to understand the impact of CPU, memory, caching, I/O, bandwidth, and more. These are not skills that are learned overnight, but skills that are acquired over time.


I wrote a previous blog entry on "you get what you pay for". If you pay for a scripter, you get a scripter. If you pay for a performance engineer, you get a performance engineer (well, not always :smileyhappy:. Sometimes people exaggerate their skills :smileyhappy: ).


Companies can always divide and conquer. They can have automators/scripters create the scripts and the tests, then have performance engineers look at the tests and analyse the results. In any case, the performance engineer is a needed position if you want to properly performance test/validate.


It needs to be mandatory to know what metrics to monitor and what those metrics mean. Knowing how to use diagnostic tools also needs to be mandatory. Again, in a previous blog I mentioned that if you are not using diagnostics you are doing an injustice to your performance testing. Without this analysis knowledge you are not truly performance testing; you are just running a script with load. Performance testing is both running scripts and analysing the runs.
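
As one tiny, concrete illustration of feeding "metrics that mean something" into your analysis, here is a hedged VuGen-style sketch. The metric name and the hard-coded value are invented for illustration, but lr_user_data_point() is the standard API for recording custom data points:

    /* Hypothetical example: record an application-level metric so it
     * shows up alongside response times in LoadRunner Analysis.  The
     * metric name and the way the value is obtained are invented. */
    Action()
    {
        double backend_queue_depth;

        /* In a real script this value might be parsed from a response
           body or an admin/status page of the application under test;
           here it is hard-coded just to show the mechanism. */
        backend_queue_depth = 42.0;

        /* lr_user_data_point() adds a custom data point to the test
           results, graphable in Analysis like any monitor metric. */
        lr_user_data_point("backend_queue_depth", backend_queue_depth);

        return 0;
    }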


By looking at the monitoring metrics and diagnostic data, you can begin to correlate data and pinpoint problems. You can also notice trends that may become problems over time. Just running a load test without analysis will not give you that insight. It will just let you know that the test appeared to run OK for that test run. Many times just running the test will give you a false positive. People wonder why an application in production is running slow if it already passed performance validation. Sometimes this is the reason (you never want this to be the reason). Proper analysis will ensure a higher quality application.


As I said, these are not skills that are created overnight. Performance engineers learn on the job. How do you make sure that this knowledge stays with a company as employees come and go? That is where a Center of Excellence (CoE) comes into play (you knew I was going to have to pitch this :smileyhappy: ). If you centralize your testing efforts, then the knowledge becomes centralized, as opposed to dispersed through a company only to get lost when the employees with the knowledge leave. You can read yet another one of my blogs for more information on the CoE. Wow! I've just been pitching my blog entries all day today :smileyhappy:. But I digress.


Let's stop thinking that proper performance testing is just writing a good script, and agree that performance engineering is not an option but a must. Let's start to focus on the real goals of performance testing, and then all of the important "means to the end" will just fall into place.

HP Performance Engineering Best Practices Series

Just to let you know that we've been putting together some published best practices for LoadRunner and performance testing...and the electronic versions of the book(s) are available free of charge!


This one is "Performance Monitoring Best Practices" authored by Leo Borisov from our LoadRunner R&D team.  Leo sent along this description and instructions for downloading the book:


 


"We have always
recognized the fact that having best practices and methodology would greatly
simplify the life of performance engineers and managers, and now we are
beginning to fill this need. The book is available with the SP1 
installation.
Access it from the product documentation library, or from the help menu.



To
download a copy from the hp software support site:


  1. go to http://h20230.www2.hp.com/selfsolve/manuals 

  2. log in
    using an hp passport or register first at and then log in

  3. in the list of hp
    software
    product
    manuals choose either LoadRunner or Performance Center – the book
    is listed under both

  4. select product version: 9.51 and operating system:
    windows

  5. click search


Since this is the first book in the series covering
various aspects of methodology, we would really appreciate your feedback. Please
send your feedback directly to me or lt_cust_feedback @ hp.com."


Congratulations Leo - thanks for your efforts!


 

Video: Running LoadRunner Virtualized

If you've ever needed to understand how LoadRunner should be implemented in a virtual environment, you should enjoy this video walkthrough explaining the best practices to do just that.  Make a specific note of how your Iteration Pacing and Think Time settings really affect the health, scalability and accuracy of your load test.
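
One reason the pacing setting matters so much: with a fixed pacing interval, pacing (not think time) sets your offered load. A back-of-envelope sketch in plain C, with invented numbers:

    #include <stdio.h>

    /* With a fixed iteration pacing interval, each vuser starts one
     * iteration per interval, so the offered load is roughly
     * vusers / pacing_seconds - independent of think time, as long
     * as each iteration finishes within the interval. */
    int main(void)
    {
        int    vusers         = 100;   /* invented example values */
        double pacing_seconds = 60.0;

        printf("Offered load: %.2f iterations/sec\n",
               vusers / pacing_seconds);
        return 0;
    }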



(if your browser doesn't show the video in-line, just click here)

Video: Ten Virtual Users Just Aren't Enough

The Rolling Stones once sang, “You can’t always get what you want” – and that should all make sense once you watch this video about stress testing.




(if your browser doesn't show the video in-line, just click here)

HP Software Universe Online

If you missed our awesome gathering at HP Software Universe 2009, here’s your chance to get the session information from the private and quiet comfort of your own computer.


HP Software Universe Online offers:


· Session PDFs and audiocast recordings for more than 180 presentations delivered by HP experts, customers, and partners. Topics included implementation best practices, new releases, and product roadmaps.


· Partner literature—white papers and product briefs—from dozens of our strategic partners.


· Live, on-demand webcasts featuring product-migration information, tips and tricks, best practices, and product launches


Click on the link here to go to the registration page.

Video: LoadRunner Walk-through

 


Here's a great introductory video that explains how LoadRunner's different applications work together.  It will give new users a look at the Virtual User Generator, Controller and Analysis tools - and how they generally work.



 


 

Labels: LoadRunner

Learn: what can you really get out of a Performance CoE

I've been working in centralized performance testing organizations for more than 6 years, giving presentations and consulting on Perf CoE - to the point where I didn't think there was much more to learn.  Then I read the preliminary research from Theresa Lanowitz who digs deeper into the true value of doing centralized performance testing.  There is compelling new evidence to show the real value that our customers are finding, the clear benefits and some of the hidden value.   


Please consider attending this session from Theresa Lanowitz, founder of voke Inc., as she presents the findings of a recent Market Snapshot on Performance Centers of Excellence. In this presentation, she will discuss:



  • Building a performance CoE

  • Achieving performance CoE ROI – qualitative and quantitative

  • Gaining organizational maturity through a performance CoE

  • Realizing the benefits, results, and strategic value of a performance CoE

Attendees will receive a copy of the voke "Market Snapshot™ Report: Performance Center of Excellence."


Click here to Register

Labels: LoadRunner

Managing Changes – Web 2.0, where's your shame

Oh, this strange fascination started in 2004 when they coined this new generation of ‘web development’ called Web 2.0.   I witnessed this evolution of technology from my seat in steerage at Microsoft, as customers switched from the old Active Architecture (remember Windows DNA?) to the warm impermanence of .NET and J2EE architectures for web applications.  Out with the old and in with the new, but the performance problems were generally the same – memory management, caching, compression, heap fragmentation, connection pooling, and so on.  It might have had a new name, but it was the same people making the same mistakes.  Back then we dismissed some of these new architectures as unproven or non-standard.  But that didn’t last long.  Now, almost 5 years later with Web 2.0, any major player in the software industry that hasn’t adopted the latest web architectures is being spit on as plainly outdated, or stuck with the label of being traditional.


When it comes to testing tools and Web 2.0, I think that “traditional” does not equate to obsolete – no matter what some of the “youngsters” in the testing tool market may like to imply.  The software industry is certainly competitive, and I think new companies and software should just evangelize the positive innovations they have and then let the facts speak for themselves.  If the ‘old guys’ can’t support new Web 2.0 stuff…then it will be obvious soon enough.  For instance, if a new testing tool company doesn’t fully support AJAX clients, that’s just unacceptable at this point.


However, I do believe it is fair game to evaluate existing software solutions (pre-Web 2.0) on how well they can be adapted to support newer innovations in technology.  As for LoadRunner, I think we have a long history of adapting to and embracing every new technology that has come along.  I started using LoadRunner with X-Motif systems running on Solaris.  That era and generation of technology has long since died (no offense intended to Motif or Sun).  Today, the same concepts for record, replay, execution, scripting, and analysis are still innovative and very relevant.  As long as the idea for the product is still valid, you can still deliver a valid product.


When adapting to changes here in LoadRunner, we usually start by overcoming the technical hurdles of creating a new virtual user, or updating an existing one.  And as I stated above, we have a long and rich history of doing this – probably more than any other testing tool.  As an example, in versions 9.0, 9.1 and 9.5 we have continued to improve our support for AJAX, Flex and asynchronous applications.  We respond to change quite well, even if we take some extra time to evaluate every aspect of what this ‘new web’ change means to our customers.  It’s worth getting right and not being swayed by the hype of the ‘Web 2.0’ label.


Let me finish by stating that these new web technologies are a challenge to testing tools, but they’re even more of a challenge to testers.  I’ve heard that many a tester gets surprised by the next version of the AUT, which has quietly implemented a new Web 2.0 architecture or even started using web services calls to a SOA.  Change is a surprise only if you’re unaware or unconscious.  Sure, it would be a failure not to communicate to QA that significant technology changes were coming, right?  To some, this will sound too familiar - like an institutionalized version of “throw it over the wall” behavior - but honestly, these new technologies (like AJAX) have been around for nearly 5 years now.


As for most testers, here’s a thanks to Web 2.0 – “You've left us up to our necks in it!”

Video: Understanding Iteration Pacing

I recently got a question about LoadRunner’s Iteration Pacing configuration which is part of the Virtual User Run-time settings. The best way to illustrate the difference between think times and iteration pacing is to show you in a video whiteboard session.
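
If you want a code-level picture before you watch: here is a minimal sketch of a VuGen Action (the URL and transaction name are invented) showing where think time lives, and where pacing applies:

    /* Minimal sketch (invented URL and transaction name): think time
     * is a pause INSIDE the business process; iteration pacing is the
     * interval BETWEEN iterations, set in Run-time Settings, not code. */
    Action()
    {
        lr_start_transaction("browse_home");

        web_url("home", "URL=http://example.com/", LAST);

        lr_think_time(8);   /* the user "reads the page" for 8 seconds */

        lr_end_transaction("browse_home", LR_AUTO);

        return 0;
        /* Pacing then holds this vuser until the configured interval
           (e.g. "start new iteration every 60 seconds") elapses before
           Action() runs again - pacing shapes the arrival rate, think
           time shapes realism inside each iteration. */
    }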




(if your browser doesn't show the video in-line, just click here)

Labels: LoadRunner

Hey EMC – how’s it feel to run a 100,000 user benchmark?

Back in November 2008 we got a phone call from our long-time customers at EMC Documentum. They were going to attempt a very large load test pushing their solution scalability to 10-times their normal volume. This would be an absolutely huge test for the Documentum system and they believed that it would be the world’s largest Enterprise Content Management (ECM) benchmark ever to succeed.


Of course they wanted LoadRunner to run 100,000 virtual users. And they needed lots and lots of HP hardware and storage.


So, we’ve got some good friends up in the Microsoft Enterprise Engineering Center and we all got together to deliver this massive test. And recently we published all the results and a cool video that explains how the testing was accomplished.


Check out the EMC benchmark results here!


The surprising thing to us was the LoadRunner Controller performance on the latest HP commodity hardware. It did really well running 50k vusers, and could probably have run more than that if we’d had more memory. Also, when you consider the whole system they were testing, it didn’t take much hardware at all to deliver sub-second performance under a 100k user load. And don’t forget – the database was SQL Server 2008, the latest release from Microsoft. You can see a picture of the 64 CPUs of the SQL database server on Craig’s blog.


Congrats EMC and SQL Server…it’s nice to see super scalable systems in action!

Labels: LoadRunner

LoadRunner Cloud: Get out your Umbrellas and Rubber Boots

So - today I read about the new HP announcement for "HP Cloud Assure" and how we are going to help customers gain more confidence in cloud-based applications. "HP Cloud Assure delivers industry-leading IT management solutions as a managed service to help companies gain the benefits of cloud services without sacrificing control and increasing business risk" (just to quote the website). What this really means, for quality's sake, is that we must not become too casual about the responsibility for application performance in the cloud. "Don't look at me, man - performance is something the cloud guys do" should not be a common excuse for cutting performance corners from your test plan.


As a performance tester, I'm reminded of a very large load test I worked on years ago, where we were running almost 1,000,000 browser connections from load generators all over the world. It was nearly impossible back then to get our quite simple website to scale - the infrastructure had issues, the firewalls had issues, the server-to-server clustering had issues, and the database honestly was the last of our bottlenecks to be resolved. The scale of that test is dwarfed by today's cloud infrastructures, especially when you consider the almost exponential increases in density and computing power. I think most engineers should be very concerned about the challenges of testing cloud-based applications with LoadRunner.





  • the topology for running an application in the cloud has transformed dramatically...it's not the same ole' web architectures you already know

  • monitoring the system-under-test will be a challenge if you don't have local access to the servers and/or proper permissions

  • your capability to conduct root-cause analysis and hunt down a bottleneck in the servers will be limited, sometimes dramatically



So, if you have applications that are hosted out in the cloud, or that have dependencies on web services running remotely across the internet, you would probably realize a lot of benefit from working with HP SaaS. They have more than 9 years' experience conducting performance testing "in the cloud" and, by the way, they have some of the best LoadRunner guys I've ever known working there.



Check out the live webcast


Labels: LoadRunner

Taking visibility for granted?

Testers are naturally curious people. We enjoy the creation of questions for the purpose of finding the truth, or better still, for the purpose of creating new truths. For centuries, the true professional testers have been scientists, who more often than not were the determiners of new truths, and thus we recall them as inventors. Back in the mid-1700s the theologian and scientist Joseph Priestley conducted several experiments to determine what types of gases were generated by plants. It started with a curiosity about the observed behavior of a wax candle burning within a glass jar. And it was truly a curious thing, because the wax candle burned out long before it exhausted its fuel supply of wax or wick. The flame had consumed the entire supply of oxygen in the sealed environment, as proven by the fact that when he tried to re-ignite the candle inside the jar, using a simple magnifying glass to converge intense rays of sunlight on the candle's wick, it failed.


When Priestley attempted the same experiment, this time adding a sprig of mint into the glass jar with the candle, the result was similar at first: the candle burns, oxygen is consumed, the candle goes out, and it can't be re-ignited with sunbeams. But after nearly a month with the candle left isolated in the jar with the sprig of mint, Priestley re-attempted to ignite the candle with the magnifying glass and rays of sunshine. And of course it worked. The candle was lit and once again consumed the oxygen until it burned out. Priestley deduced that the plant was somehow producing a gas that allowed the candle's flame to burn once again. What he could not have predicted is that he would be the first to discover a new truth about the role of oxygen in photosynthesis.


Priestley's methods really got me thinking, especially about his test tools and techniques. All he used was rudimentary equipment and simple deductive reasoning for analysis. What if we were to attempt this same experiment today using contemporary scientific inventions? We would probably use an oxygen sensor to measure the amount of oxygen in the jar. This would make all the other implements of the experiment obsolete: the wax candle, the magnifying glass, the need for bright sunshine. The creation of the lambda-probe oxygen sensor in the late 1960s was a response to the demand for measurement of oxygen in an experiment, machine or system. The design of the sensor allowed for visible measurement of an invisible gas. Even Priestley himself would have appreciated the sensor, not simply for the new visibility it provided, but for the ease of use and accuracy in measuring his experiment.


For Priestley in the mid-1700's photosynthesis was unknown and oxygen was both invisible and immeasurable. Boris Beizer more than 200 years later was challenged to measure the known workings of the computer which were, for all intents and purposes, inside an invisible black box. The discoveries and solutions that resulted from their work show us how inappropriate it is to take any prior science for granted and also provide for us a new baseline for how to test. Simply, it is easier to correlate a visible test measurement to the tests objectives or pass/fail criteria. As a result, testing tools today already make test measurements visible, actionable and automatically correlate* the results back to pass/fail criteria. Today we take for granted that nearly every testing tool comes with mechanisms for "making visible" the performance metrics from the system under test. Just as we take for granted that every modern automobile now uses oxygen sensors and an onboard computer as essential components to improve fuel efficiency.


But just making something visible isn't enough. Consider LoadRunner's monitoring and diagnostics capabilities. Could you imagine today having to monitor CPU resource utilization without the test tool automatically making the measurement visible for you? In the 1980s Boris Beizer shared stories about counting CPU ticks with an AM radio next to the machine. That sounds like such an old solution - almost like having to measure oxygen with a wax candle in a jar. My point is that visibility should be understood as a means to improving measurability. Measurability is what truly accelerates the testing process. Innovation in performance testing should improve and extend the visibility and measurability we have today. What more can we make visible? What new methods of measuring, arranging and correlating test data can we create? Can we automate the capabilities we have today, or build intelligence to aggregate or parse this new data?


And we don't have to start with a wax candle in a jar or an AM radio.


*- see lr_end_transaction("login", LR_FAIL);
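
To unpack that footnote, here is a hedged sketch of the common VuGen pattern for tying a visible measurement (a registered text check) back to a transaction's pass/fail status; the URL and check text are invented:

    /* Invented URL and check text - a sketch of correlating a visible
     * measurement to pass/fail, not a script from any real project. */
    Action()
    {
        int welcome_hits;

        /* Register the text check BEFORE the request it applies to. */
        web_reg_find("Text=Welcome", "SaveCount=welcome_count", LAST);

        lr_start_transaction("login");

        web_url("login", "URL=http://example.com/login", LAST);

        welcome_hits = atoi(lr_eval_string("{welcome_count}"));

        /* The transaction status is decided by the measurement. */
        if (welcome_hits > 0)
            lr_end_transaction("login", LR_PASS);
        else
            lr_end_transaction("login", LR_FAIL);

        return 0;
    }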

ALM webcast with Brad and Paul next week

Next week Paul Ashwood and Brad Hipps will be giving a webcast over on theserverside.com, presenting a whitepaper and case study on Application Lifecycle Management (ALM). There will be some interesting details on how performance testing can be integrated into the lifecycle for application development and delivery to production. If your company or testing organization includes performance as part of the SDLC or lifecycle processes, this might be very useful information, and I'm sure Brad would love to hear some questions from you.


Here's the official abstract:


"Effective companies are riding the latest waves in application modernization. These waves touch nearly all of IT from technology and staffing to application architectures and release strategies.


HP Application Lifecycle Management (ALM) solutions help your IT organization make the most of these trends and avoid being swamped in the process.  ALM from HP is an integrated suite of leading solutions that enables your IT leaders to answer comprehensively the key questions business stakeholders have regarding application modernization.


You are cordially invited to view HP's unique perspective on what ALM is and how our solution renders better business outcomes. All attendees will receive our new white paper, “Redefining the application lifecycle: Looking beyond the application to align with business goals.”


Click here to register

Labels: LoadRunner

Tomorrow: a webinar with Nationwide

HP Webinar with Nationwide

HP Software recently released the 9.5 versions of HP LoadRunner software and HP Performance Center software. With these new releases, HP is addressing today's top of mind application challenges around rapid technology change and adoption of new processes. In this challenging economic environment, companies have to be as lean and agile as possible. Please join HP and Nationwide for an informative Webinar where you will hear the highlights of the latest release as well as get a preview of an actual implementation of HP Performance Center 9.5 from John Seling of Nationwide. You will hear:



  • What new capabilities are included in the 9.5 release

  • How Nationwide leveraged the new features in 9.5 to shorten their test timeframes and make their tests more realistic

  • More about the new integration with Shunra Software

Join us to find out how HP's performance validation solutions can enable you to achieve legendary QA projects. All attendees will receive our new white paper, "Innovations in enterprise-scale requirements management, quality planning, and performance testing".

Register Now »

Take control of application performance and scalability challenges

DATE: March 17, 2008

TIME: 10:00 a.m. PT / 1:00 p.m. ET

SPEAKERS: John Seling, Performance & Data Engineering Manager, Nationwide; Priya Kothari, Product Marketing Manager (LoadRunner & Performance Center), HP Software

DURATION: 60 minutes with Q&A

REGISTER: Click here to register for the webinar

Labels: LoadRunner

Explained: Virtual User Days, ("a day in the life of a virtual user")

In the complex world of LoadRunner licensing, we have a mechanism that allows you to have a special pool of virtual user licenses called Virtual User Days.  The textbook description of a Virtual User Day (a.k.a. VUD) is:

Virtual User Days are the licensing mechanism that allows Virtual Users to be executed in an unlimited number of runs against a single AUT within a single 24-hour period of time.


 


As you can see, that description isn’t very clear about how the VUD licenses actually get implemented in the LoadRunner Controller.  I’ve answered a few emails over the last few months about this, so I thought it might be good to share the answers and explanations here.  Virtual User Days are sold as a quantity of Virtual Users that can be executed within a 24-hour period: once a vuser license is consumed in the first test run, it can continue to be used for multiple test runs for the next 24 hours.



For instance, the same 1000 VUDs can be used for executing tests as follows:

- unlimited test runs that run a maximum of 100 virtual users…for 10 days

- unlimited test runs that run a maximum of 500 virtual users…for 2 days

- unlimited test runs that run a maximum of 1000 virtual users…for 1 day




Keep in mind that the 24-hour timer starts from the first time you run a test for that day.  You can request that HP Support give you a specific time-of-day to start your testing, like 9 AM.  The proper understanding here is that VUDs are only decremented from the pool when the vuser thread first runs, and more vusers will only be decremented from the pool if:




A. It is 24 hours later.

        -or-


B. Another test run needs more vusers than have already been decremented in the last 24 hours.




Example:




  • Customer runs test #1 for 1000 virtual users for 4 hours  (1000 VUDs are decremented from the pool)


  • Customer runs test #2 for 1500 virtual users for 2 hours (500 more VUDs are decremented from the pool)


  • Customer runs test #3 for 400 virtual users for 6 hours (0 VUDs are decremented from the pool)


  • Customer goes to bed after a 12 hour day (while they are dreaming about advanced LoadRunner correlation…VUDs expire at 12:00am)


  • Customer wakes up and runs test #4 for 300 virtual users for 2 hours (300 VUDs are decremented from the pool)
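
The decrement rule at work in this example boils down to a few lines. A minimal sketch in C (my own illustration of the rule described above, not HP's actual licensing code):

    /* Sketch of the VUD decrement rule: within one 24-hour window,
     * the pool is only charged for the high-water mark of
     * concurrently running virtual users. */
    int vuds_to_decrement(int requested_vusers, int highwater_this_window)
    {
        if (requested_vusers <= highwater_this_window)
            return 0;   /* already paid for this many in this window */

        return requested_vusers - highwater_this_window;
    }

    /* Walking the example: test #1 asks for 1000 (high water 0) ->
     * 1000 decremented; test #2 asks for 1500 (high water 1000) ->
     * 500 more; test #3 asks for 400 (high water 1500) -> 0. */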



For a real customer there was some confusion about how the VUDs get used, or "activated", as they put it.  The hypothetical situation is that they had 500 Virtual User Day licenses and proposed 3 scenarios:




Scenario #1:  We activate 100 VUDs at the start of the day. We run a load test for 100 VUDs, then follow this with another load test for 100 VUDs on the same day. Our understanding is that we have exhausted all 100 VUDs by the end of the day, and are left with 400 unused VUDs.



Answer:  that is correct…if the second run goes to 150 virtual users, they will only have 350 VUDs remaining for the next 24-hour cycle.



Scenario #2: We activate 100 VUDs at the start of the day. We run a load test for 50 VUDs, then follow this with another load test for 50 VUDs on the same day. Our understanding is that we have used up 50 VUDs at the end of the day and that there are about 50 unused VUDs. We are now left with 450 VUDs, in spite of activating 100 VUDs at the start of that day.



Answer:  that is incorrect – there is no way to “activate” a virtual user manually.  The only way to “activate” a VUD is to run a virtual user and decrement a license from the VUD pool.  So, the VUDs are decremented from the pool when they first enter the running status.  The customer’s first run in 24 hours sets the bar for VUD consumption for the 24-hour period.  They will still have 450 VUDs remaining at the end of the day.



Scenario #3: We activate 100 VUDs at the start of the day. Due to some unforeseen issue, no test runs could be initiated. Our understanding is that at the end of the day, we are left with 500 VUDs.



Answer:  that is only partially correct – again, there is no way to manually “activate” the vusers, so if they never got any vusers running then it wouldn’t decrement any VUDs from the license pool.  They would still have 500 VUDs remaining at the end of the day.


Labels: LoadRunner

Steve's blogging about Performance Center

My colleague Steve Feloney is the Product Manager for HP Performance Center, and he has recently started a new blog on PC over here:

Performance Center has been around for a few years, but I think you'll find that the new version has some significant improvements that are really cool for a performance center of excellence (CoE) and for centralized management of performance testing projects.


 



Steve and I will bring you information about LoadRunner and Performance Center, and hopefully we can describe the very cooperative vision for how these two products evolve together.


 



Please welcome Steve to the blogosphere and be a pal - subscribe! Thanks!!

Labels: LoadRunner

Everything you wanted to find about LoadRunner, but didn't know the URL

Recently I've had some inquiries about finding all the information available in the entire universe about HP LoadRunner, specifically stating that it's very common to get "not found" in search results. So, I thought I'd take a few minutes of my generous free time to list the main links to the official (and some unofficial) LoadRunner information on the web.

Let's start with the Hewlett-Packard website - which can just seem like the deepest jungle in the Amazon, where you will find that Google does a better job searching and indexing our site than we do.

The main concern about the URLs on the HP website is that they appear to be very long and cryptic and often look to be dynamic strings.

So, with special thanks to our really good friends at Google, here are some very handy links to find LoadRunner information on the www.hp.com website:

Of course, there are all kinds of external sites - ranging from interactive forums, to FAQs, to answers for every possible interview question you might get asked (I remind you it is unethical to cheat on interviews) - and there are some quite large communities of LoadRunner gurus out there. Here are a few forums and blogs that I know, love and use daily:

And a posting like this should also include some of the key individual contributors to the LoadRunner performance testing and performance engineering discipline - our leaders and true gurus:

Although it may seem obvious that I'm just posting a huge bunch of links to get click-throughs on this blog, I honestly must tell you that was not my intention. I have heard from numerous customers that there just isn't the same community of LoadRunner users and engineers out there like we used to have in the Mercury knowledge base. Hopefully you'll find these links helpful and lasting - people and places you can trust to always have great information about LoadRunner and performance testing.





(note:  updated on 3/9/2009…to accurately indicate affiliation for the individuals listed.)




Labels: LoadRunner

How To: Understand and calculate Virtual User "footprint"

One of the most common questions we get about LoadRunner Virtual Users relates to the resources required to execute the scripts on the Load Generator.  One advantage of the maturity of LoadRunner is that we have supported so many different drivers, protocols and environments over the past 2 decades.  We've learned a great deal about how to advise users on how many Load Generator resources will be required to be successful.  You might imagine that the answer isn't black & white, or even close to a one-sentence answer.  Here are some simple ideas that can help you determine how to configure your Load Generators.
 
For Memory: each protocol has different parts that affect how much memory is required, so there is no single answer across all virtual users - Web is different from RDP, which is different from SAP Click & Script, which is different from RMI-Java.  Some vuser types have external drivers (like ICA, RDP or SAP), so the guidelines don't include the footprint for the external executable driver. The Click & Script vuser types can really confuse you, because they seem like new versions of old protocols...but that's not actually true - the C&S protocols are a completely new architecture. More than anything, though, every vuser's memory footprint is GREATLY impacted by the following factors:

  • the size of the driver library (this is fairly static)
  • the # of lines of code included in the recording (varies greatly by customer)
  • the # and size of parameters included in the recording (varies greatly by customer and script type)

 
For CPU: of course, each driver has slight differences in CPU overhead, but for the most part they are all very efficient (and - yes, we will continue to improve Click & Script to be better!!).  The amount of CPU used on a LoadRunner load generator will vary by the following factors:

  • iteration and pacing, which controls how “active” the vuser is (varies greatly by customer)
  • stress testing scenarios usually use more CPU, as opposed to real-world testing which has slower vusers (but more of them)
  • customized code or extra script processing (like string manipulation, or calculations) will chew up more CPU

 

For Disk: the main thing here is logging:

  • the more you increase the detail and amount of logging, the more disk will be consumed
  • external parameter files (written to or read from individual vuser threads) will really hammer the local disk
  • some vusers with external driver executables will have additional logging of their own, or caching of content

 

For Network: the result of multiple virtual users running on a single load generator is a concentration of all those vusers' network traffic on a single NIC:

  • the payload of network API calls varies greatly for each and every application
  • stress testing (e.g. fast iteration pacing, no think-times) could easily result in over-utilization of NIC bandwidth

 

When it comes to calculating your virtual user footprint, it's actually quite easy.  But first, let me tell you that not everyone needs to do extensive calculations of virtual user resource utilization.  This is important *only* when you have a very high number of virtual users or a very limited number of Load Generators.  The basic approach is to run a preliminary test with just 1 script while you measure the resource utilization on the Load Generator directly.  You are specifically interested in the mmdrv.exe process on the Load Generator, which is LoadRunner's primary multi-threaded driver.  Measuring the private bytes reserved by this process for 1, 2, 3, 4, then 5, then 10 virtual users will give you some clue as to how much memory is required by each additional virtual user.  Simultaneously, you should monitor CPU, Network and Disk just to determine if there is any excessive utilization.
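
To make the arithmetic concrete, here is a toy estimate in plain C; the baseline and per-vuser numbers are invented, and you would substitute the slope you actually measure from your 1-to-10 vuser runs:

    #include <stdio.h>

    /* Toy estimate built from measured data points (numbers invented):
     * fit mmdrv.exe private bytes as baseline + slope * vusers, then
     * extrapolate to the target scenario and add OS headroom. */
    int main(void)
    {
        double baseline_mb   = 60.0;  /* mmdrv.exe with the first vuser   */
        double per_vuser_mb  = 5.0;   /* slope measured from 1..10 vusers */
        int    target_vusers = 500;

        double est_mb = baseline_mb + per_vuser_mb * (target_vusers - 1);

        printf("Estimated mmdrv.exe memory for %d vusers: %.0f MB\n",
               target_vusers, est_mb);
        printf("Leave headroom for the OS and any external driver EXEs.\n");
        return 0;
    }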

 

[Screenshot: measuring the mmdrv.exe process on the Load Generator]


It is important to note that you should gather this information about your script's performance on the Load Generator using the same run-time settings that you will use during the full scenario run.  If you are stress testing with very little think time or delay, then you'll want to use those same settings.

 

Labels: LoadRunner
About the Author(s)
  • I have been working in the computer software industry since 1989. I started out in customer support, then software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr. Product Marketing Manager for HP ITPS, VP of Apps & HP LoadRunner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.