HP LoadRunner and Performance Center Blog

Displaying articles for: December 2009

Video: Real Stories of Load Testing Web 2.0 - Part 4

The fourth and final video of our Load Testing Web 2.0 series covers a very common difficulty in testing nearly any system, even older architectures: dependent calls external to the system under test. The concept of "stubbing" isn't anything new; to be honest, I've been doing this for nearly 15 years, and it's very common when a back-end mainframe is required for the test. But now, with Web 2.0 architectures, it seems that stray calls to a web service are eluding many testers, and this is resulting in some nasty surprises from externally impacted vendors. Here is Part 4 of the series, "Real Stories of Load Testing Web 2.0: Load Testing with Web 2.0 External Calls (please try not to test stuff that's not yours)."
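To make the "stubbing" idea concrete, here is a minimal, hypothetical sketch using only the Python standard library (this is not the iTKO product, and the endpoint paths and payloads are invented): a tiny local HTTP stub that impersonates the external vendor's service, so your load test exercises your own system while the vendor's real service never sees a single virtual user.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the external vendor's web service.
# The paths and payloads here are made up for illustration.
CANNED = {
    "/vendor/quote": {"price": 19.99, "currency": "USD"},
    "/vendor/status": {"status": "OK"},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        # Keep the load test console quiet.
        pass

def start_stub(port=0):
    """Start the stub on a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

During the test you would point the application's external-service endpoint at the stub's local address, so the third party is never load tested by accident.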



(if your browser doesn't show the video in-line, just click here)


At the end of this video we mention another partner that built a professional "stubbing" solution.  Visit the iTKO LISA Virtualize page for more details.

Video: Real Stories of Load Testing Web 2.0 - Part 3

The third part in our Load Testing Web 2.0 series covers the not-so-new concept of server-side data processing.  Don't be fooled into thinking you already know about server performance, because these new architectures are using client-like JavaScript on the server, sometimes called reverse JavaScript.  This video describes how performance issues can sneak into this type of architecture and how even a simple component can result in serious latency and server-side resource overhead.  Here is Part 3 of the series, "Real Stories of Load Testing Web 2.0: Server-side Javascript Impacts (opposable thumbs not required)."



(if your browser doesn't show the video in-line, just click here)

FREE Trial of Shunra's WAN emulation within HP LoadRunner

Who said good things don't come for free?  Recently I've spent much more time with our partners at Shunra Software...and I've learned more about networking and performance than I ever imagined.  In celebration of HP Software Universe 2009 in Hamburg, Germany this week, they have posted a special FREE trial of VE Desktop for HP Software.  WOW!  This is the integration we introduced in version 9.5 of LoadRunner and Performance Center, and it has become extremely popular.  Here's a capture from the Shunra blog entry:

"In celebration of HP Software Universe in Hamburg, Germany Shunra is offering a free trial of WAN emulation within HP LoadRunner, VE Desktop for HP Software. You can use VE Desktop for HP Software to measure your application performance through a variety of emulated WAN conditions, including replication of production, mobile or theoretical networks. Click here to register for more information, receive download instructions, and get started making your application performance testing network-aware!"

I guess this means: "Happy Holidays from Shunra"!  :-)

Video: Real Stories of Load Testing Web 2.0 - Part 2

In this second part of the series, we highlight another common challenge we hear from customers testing new web applications: client-side performance. As you add these exciting new components into the once-upon-a-time-very-thin browser, you'll find increased CPU and memory utilization on the client.  Perhaps unsurprisingly, this can result in slower response times from page rendering or JavaScript processing overhead.  Here is Part 2 of the series, "Real Stories of Load Testing Web 2.0: Web 2.0 Components - Client-side Performance (Fat clients, you make the rockin' world go round)."



(if your browser doesn't show the video in-line, just click here)

Video: Test Mobile Performance with Shunra (...can you hear me now?...now?)

Imagine yourself with 3 colleagues riding an evening train from Berlin to Hamburg, where each of you has the same mobile service provider and a similar phone. For 90 minutes, each of you gets connected through different roaming providers, with varying connection stability, sometimes with 3G and sometimes NOT!  (It boggles the mind, doesn't it!?)


Well, if you're lucky enough to be sitting next to a guru like David Berg from Shunra, you'll get the whole story on why this happens.  In this brief but highly valuable video interview, we explain how the Shunra VE Desktop and HP LoadRunner can be used together to produce a robust load testing solution for mobile applications, complete with mobile WAN emulation.  Watch...and learn!



(if your browser doesn't show the video in-line, just click here)


Also, David has an even more detailed explanation on his blog, which goes much deeper into the challenges of developing mobile applications that REALLY work well.

LoadRunner Blog-o-rama: LIVE from HP Software Universe 2009 Hamburg

I'm now into the second day of the week, getting ready for an all-day customer advisory board meeting here in Hamburg, Germany.  This week we are holding the 2009 HP Software Universe conference, and I will try to keep blogging and tweeting all day long, or at least until I just can't type with my thumbs into my phone any longer.


If you are a blogger and you are attending HP Software Universe Hamburg...it would be great to meet you in person.  Seven other HP bloggers will be updating live from the event. Please look them up:


     Genefa Murphy - covering Agile Development and Requirements Management

     Amy Feldman - @HPITOps, covering Application Performance Management

     Aruna Ravichandran - covering Application Performance Management

     Mark Tomlinson - covering LoadRunner and Performance Center

     Michael Procopio - @MichaelAtHP, covering Application Performance Management and Business Service Management

     Mike Shaw - interested in meeting IT Operations management for something new in 2010

     Peter Spielvogel - @HPITOps, covering Operations Management


You can also follow the entire HPSU event on Twitter – Twitter.com/HPSWU (@HPSWU), hashtag #HPSWU – or become a Facebook fan.


More on HP Software Universe 2009 Hamburg


· Sneak Preview – Application Performance Management Blog

· Sneak Preview – Infrastructure Management Software Blog

· Optimizing cost by automating your ITIL V3 processes

· Event Information Page

Video: Real Stories of Load Testing Web 2.0 - Part 1

We know many of you are finding great challenges with testing new web applications built with modern architectures and components like Silverlight, Flex, JavaFX and AJAX. So, Steve and I decided to put together a multi-segment video series covering some of the top issues that make LoadRunner web testing much harder and more complex. Here is Part 1 of the series, "Real Stories of Load Testing Web 2.0: Impacts of WAN latency on Web 2.0 apps. (Mark Googles himself and Steve offers to go to Paris)."
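As a rough illustration of why WAN latency hits chatty Web 2.0 pages so hard, here is a back-of-envelope model (a sketch with made-up numbers, ignoring bandwidth, parallel connections and caching): each sequential request/response pair pays the network round-trip time once, so a page that makes many small AJAX calls multiplies the latency.

```python
def estimated_page_time(server_time_s, round_trips, rtt_s):
    """Crude model: server processing time plus sequential round
    trips over the network. Ignores bandwidth, parallelism, caching."""
    return server_time_s + round_trips * rtt_s

# Hypothetical page: 0.8 s of server work plus 40 small AJAX calls.
# On the LAN (0.5 ms RTT) the network cost is negligible...
lan = estimated_page_time(0.8, 40, 0.0005)
# ...but over a 100 ms WAN link the same page takes several seconds.
wan = estimated_page_time(0.8, 40, 0.100)
```

The same application, tested only on the LAN, can look fast and then fall apart for remote users, which is exactly what WAN emulation during the load test is meant to expose.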



(if your browser doesn't show the video in-line, just click here)


 


In this video we also mention one of our new partners delivering WAN emulation integrations for HP LoadRunner and Performance Center.  Visit the Shunra VE Desktop page for more details and a free trial download.


 

Autobiography of a Performance User Story

I am a performance requirement and this is my story. I just got built and accepted in the latest version of a Web-based SaaS (software as a service) application (my home!) that allows salespersons to search for businesses (about 14 million in number) and individuals (about 200 million in number) based on user-defined criteria, and then view the details of contacts from the search results. The application also allows subscribers to download the contact details for further follow-up.


I’m going to walk through my life in an agile environment—how I was conceived as an idea, grew up to become an acknowledged entity, and achieved my life’s purpose (getting nurtured in an application ever after). First, a disclaimer: the steps described below do not exhaustively cover all the decisions made around my life.



It all started about three months ago. The previous version of the application was in production with about 30,000 North American subscribers. The agile team was looking to develop its newer version.


One of the strategic ideas that had been discussed in quite some detail was to upgrade the application’s user interface to a modern Web 2.0 implementation, using more interactive and engaging on-screen process flows and notifications. The proposed changes were primarily driven by business conditions, market trends and customer feedback. Management had a vision of capturing a bigger slice of the market. The expectation was to add 100,000 new subscribers within twelve months of release, all from North America. A big revenue opportunity! Because the changes were confined to the user interface, no one thought of the potential impact on application performance. I was nowhere in the picture yet!


Due to the potential revenue impact of the user interface upgrade, the idea was moved high up in the application roadmap for immediate consideration. The idea became a user story that moved from the application roadmap to the release backlog. Application owners, architects and other stakeholders started discussing the upgrade in more detail. During one such meeting, someone asked the P-question—what about performance? How will this change impact the performance of the application? It was agreed that the performance expectations of the user-interface changes should be clearly captured in the release backlog. That’s when I was conceived. I was vaguely defined as: “As the application owner of the sales leads application, I want the application to scale and perform well for as many as 150,000 users so that new and existing subscribers are able to interact with the application with no perceived delays.”


During sprint -1 (the discovery phase of the release planning sprint), I was picked up for further investigation and clearer definition. Different team members investigated my implications. The application owner considered application usage growth for the next 3 years and came back with a revised peak number of users (300,000). The user interface designer built a prototype of the recommended user-interface changes, focusing on the most intensive transaction of the application: when a search criterion is changed, the number-of-contacts-available counter on the screen needs to be updated immediately. The architect tried to isolate possible bottlenecks in the network, database server, application server and Web server due to the addition of more chatty Web components such as AJAX and JavaScript. The IT person looked at the current utilization of the hardware in the data center to identify any possible bottlenecks and came back with a recommendation to cater to the expected increase in usage. The lead performance tester identified the possible scenarios for performance testing the application. At the end of sprint -1, I was redefined as: “As the application owner of the sales lead application, I want the application to scale and perform well for as many as 300,000 simultaneous users so that when subscribers change their search criteria, an updated count of leads available is refreshed within 2 seconds on the screen.” I was defined with more specificity now. But was I realistic and achievable?
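One way a team can sanity-check a figure like 300,000 "simultaneous users" is Little's Law (L = λ × W): the number of concurrent users equals the arrival rate multiplied by the average session length. The sketch below is a hypothetical worked example with invented numbers, not figures from this story:

```python
def concurrent_users(arrivals_per_second, avg_session_seconds):
    """Little's Law: L = lambda * W.
    Concurrent users = arrival rate * average time each user stays."""
    return arrivals_per_second * avg_session_seconds

# Hypothetical: 13,000 subscribers log in over a one-hour peak,
# and the average session lasts 15 minutes (900 seconds).
peak = concurrent_users(13000 / 3600, 900)
```

A quick calculation like this often shows that "total subscribers" and "simultaneous users" differ by an order of magnitude or more, which is the kind of scrutiny a vague peak-user number deserves before it becomes a requirement.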


During sprint 0 (the design phase of the release planning sprint), I was picked up again to see the impact I would have on the application design. The IT person realized that to support the revised number of simultaneous users, additional servers would need to be purchased. Since that process would take a long time, his recommendation was to scale the number of expected users back to 150,000. Due to the short timeframe, the user interface designer decided to limit the Web 2.0 translation to the search area of the application and put the remaining functional stories in the product backlog. The architect made recommendations to modify the way some of the Web services were being invoked and to fine-tune some of the database queries. A detailed design diagram was presented to the team leads along with compliance guidelines. The lead performance tester focused on getting the staging area ready for me. I was reshaped to: “As the application owner of the sales lead application, I want the application to scale and perform well for as many as 150,000 simultaneous users so that when subscribers change their search criteria, an updated count of leads available is refreshed within 2 seconds on the screen.” I was now an INVESTed agile story, where INVEST stands for independent, negotiable, valuable, estimable, small (right-sized) and testable.
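A story like this is testable precisely because it names a concrete threshold. Here is a sketch of how a performance tester might check measured response times against it (the 2-second figure comes from the story; the 90th-percentile choice is my assumption, since the story doesn't specify one):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    # Multiply before dividing to keep the rank arithmetic exact.
    rank = max(1, math.ceil(pct * len(ordered) / 100))
    return ordered[rank - 1]

def meets_sla(samples, threshold_s=2.0, pct=90):
    """Pass if the pct-th percentile response time is within the threshold."""
    return percentile(samples, pct) <= threshold_s
```

A percentile criterion is usually a better gate than an average, since a handful of very slow search refreshes can hide inside a healthy-looking mean.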


During the agile sprint planning and execution phase, developers, QA testers and performance testers were all handed the requirements (including mine) for the sprint. While developers started making changes to the code for the search screen, QA testers got busy writing test cases and performance testers finalized their testing scripts and scenarios. Builds were prepared every night, and incremental changes were tested as soon as new code was available for testing. Both QA testers and performance testers worked closely with the developers to ensure functionality and performance were not compromised during the sprint. Daily scrums provided the much-needed feedback to the team so that everyone knew what was working and what was not. A lot of time was spent on me to ensure my 2-second requirement did not slip to 3 seconds, as that would have a direct impact on customer satisfaction. I felt quite important, sometimes even more than my cousin story of the search screen user interface upgrade! At the end of a couple of 4-week sprints, the application was completely revamped with Web 2.0 enhancements, with functionality and performance fully tested – ready to be released. I was ready!


Today, I will be deployed to the production environment. No major hiccups are expected, as during the last two weeks I was beta tested by some of our chosen customers on the staging environment. The customers were happy with the outcome, and so were the internal stakeholders. During these two weeks, I hardened myself and got ready to perform continuously and consistently. Even though my story ends today, my elders have told me that I will always be a role model (baseline) for future performance stories to come. I will live forever in some shape or form!


About the Author(s)
  • I have been working in the computer software industry since 1989. I started out in customer support, then moved into software testing, where I was a very early adopter of automation: first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the enterprise performance testing COE market. She has 14 years of experience in IT consulting, software development, architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices, and serves as the primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr. Product Marketing Manager for HP ITPS, VP of Apps & HP LoadRunner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.