HP LoadRunner and Performance Center Blog

Displaying articles for: September 2009

Update: 15-hours into the Non-stop HP Virtual Conference

*original post was at 11:30pm on 9/29/2009*


Okay - so I've been working the Worldwide HP Virtual Conference: Functional, Performance and Security Testing for about 15 hours now - nearly non-stop - and I've had some great chat sessions with customers from all over the world.  I've been on since 8:30am PST from Mountain View, California, where I live.  I've had 3 medium-sized cups of coffee - actually called Depth Charges from the coffee shop by my house.  These are 16 ounces of regular coffee with 2 shots of espresso added for an extra charge.  I'm posting live updates on Twitter and the feedback has been great.  Also, I've been broadcasting live from my home studio on ustream.tv the entire time (almost).



The HP Virtual Conference materials are still available through November 2009

The LoadRunner Non-stop HP Virtual Conference

THE TIME HAS COME!! I really don't want you to miss the Worldwide HP Virtual Conference: Functional, Performance and Security Testing in Today’s Application Reality.  And just to show you how committed I am to you and to the conference, I'm going to attend the conference non-stop for the entire duration, just as we did in the promotional video!  I will be online in chats, at the booth, in the lounge and presenting the entire time and I will also be documenting the experience live from my home office, with video streaming and chatting.  Click here to register for free, now!!!

Follow Mark on twitter and ustream.tv for the duration of the entire conference!
Follow the Entire HP Conference on twitter

This conference is going to be the most awesome!


 



  • 40 sessions covering Agile, Cloud, Web 2.0, Security, customer best practices and more

  • Localized booths and content for Benelux, Nordics, Austria & Switzerland, France, Germany, Japan, Korea, Iberia and Israel

  • Live worldwide hacking contest

Here are just the Top 5 reasons you should attend:


 


#5 - No need to spend days out of the office
#4 - No travel or registration fees
#3 - No long registration lines
#2 - No need to choose one session over another - see them all!
#1 - No need to choose one representative from your department to attend.  Everyone can attend and learn!


(Of course, I think the best reason is it's a great excuse to work from home for the entire day...which is exactly what I'll be doing!)


Conference Dates and Schedule


Americas – 29 September:
11:00am – 7:00pm (EDT) / 8:00am – 4:00pm (PDT)


APJ – 30 September:
11:00am – 5:00pm (AEST) / 10:00am – 4:00pm (Tokyo)
9:00am – 3:00pm (Singapore) / 6:30am – 12:30pm (Bangalore)


EMEA – 30 September:
8:00am – 4:00pm (GMT+1) / 8:00am – 4:00pm (UK)
9:00am – 5:00pm (Amsterdam, Berlin, Paris, Rome)


REGISTER NOW!!!


 


See you online, at the SHOW!

Performance Testing Needs a Seat at the Table

It is time Performance Testing gets a seat at the table. Architects and developers like to make all the decisions about products without ever consulting the testing organization. Why should they? All testers have to do is test what's created. If testers can't handle that simple task, then maybe they can't handle their job.

I know this thought process well. I used to be one of those developers :smileyhappy:. But I have seen the light. I have been reborn. I know that it is important to get testers input on products upfront. And actually it is becoming more important now than ever before.



With RIA (Web 2.0) technologies there are many different choices that developers can make. Should they use Flex, Silverlight, AJAX, etc.? If they use AJAX, which frameworks should they use? If they use Silverlight, what type of back-end communication are they going to use?




Let's just take AJAX as an example. There are hundreds of frameworks out there. Some are popular and common, but most are obscure or one-off frameworks. Developers like to make decisions based on what will make their lives easier and what is cool. But what happens if their choices can't be performance tested? Maybe the performance team doesn't have the expertise in-house, or maybe their testing tools don't support the chosen framework. What happens then?
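
To make this concrete, here is a minimal VuGen-style sketch (the endpoint, JSON field names, and boundary strings are hypothetical) of what hand-correlating a framework-specific JSON exchange looks like when the tool can't do it for you:

Action()
{
    /* Capture the token this (hypothetical) AJAX framework embeds in its JSON envelope. */
    web_reg_save_param("ViewState",
                       "LB=\"viewState\":\"",
                       "RB=\"",
                       "NotFound=warning",
                       LAST);

    web_url("home",
            "URL=http://app.example.com/home",
            LAST);

    /* Replay the framework's POST, substituting the captured value. */
    web_custom_request("updateGrid",
                       "URL=http://app.example.com/grid/update",
                       "Method=POST",
                       "EncType=application/json",
                       "Body={\"viewState\":\"{ViewState}\",\"page\":2}",
                       LAST);

    return 0;
}

Multiply that by every dynamic value the framework passes around, and an "obscure" framework choice quickly turns into a real testing cost.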




I can tell you that many times the apps get released without being tested properly, and everyone just hopes for the best. It's a great solution. I like the fingers-crossed method of releasing apps.



How could this situation be avoided? Simply include the performance testing group upfront. Testing is a major cog in the application lifecycle and should be included at the beginning. I'm not talking about testing earlier in the cycle (although that is important and it should be done). I'm talking about getting testing involved in architecture and development discussions before development takes place.


If developers and architects knew up front that certain coding decisions would make it hard or impossible to performance test, then maybe they would choose other options for the application development. Many businesses would not want to risk releasing an application if they knew that it could not be tested properly. But when they find out too late, they don't have a choice except to release it (the fingers-crossed method).




If the performance team knew upfront that they couldn't test something because of skills or tools, then at least they would have a heads-up and they could begin planning early for the inevitable testing. Wouldn't it be nice to know what you are going to need to performance test months in advance? No more scrambling at the 11th or 12th hour.



Think about this. If testing was involved or informed upfront, then maybe they, along with development, could help push standards across development. For example, standardizing on 1 or 2 AJAX frameworks would help out both testing and development. It would make the code more maintainable, because more developers would be able to update it, and it would help ensure that the application is testable.





We need to get more groups involved up front. The more you know, the better the decisions, the better off the application is going to be.

What are the Goals of Performance Testing?

So what is the point of performance testing?  I get this question often.  And depending on who you talk to, you get different answers.

First let me begin by telling you what are NOT the goals of performance testing / validation.



  • Writing a great script

  • Creating a fantastic scenario

  • Knowing which protocols to use

  • Correlating script data

  • Data Management

  • Running a load test


This is not to say that all of these are not important. They are very important, but they are not the goals. They are the means to the end.


So why DO people performance test? What are the goals?



  • Validating that the application performs properly

  • Validating that the application conforms to the performance needs of the business

  • Finding, analysing, and helping fix performance problems

  • Validating that the hardware for the application is adequate

  • Doing capacity planning for future demand of the application


The outcomes of the performance test are the goals of testing. It seems basic. Of course these are the goals. But...



  • How many people really analyse the data from a performance test?

  • How many people use diagnostic tools to help pinpoint the problems?

  • How many people really know that the application performs to the business requirements?

  • How many people just test to make sure that the application doesn't crash under load?


Even though they seem obvious, many testers/engineers are not focusing on them correctly, or are not focused on them at all.



  • Analysing the data is too hard.

  • If the application stays up, isn't that good enough?

  • So what if it's a little slow?


These are the reasons that I hear. Yes, you want to make sure that the application doesn't crash and burn. But who wants to go to a slow website? Time is money. That is not just a cliché, it's the truth. Customers will not put up with a slow app/website. They will go elsewhere, and they do go elsewhere. Even if it is an internal application, if it is slow performing a task, then it takes longer to get the job done, and that means it costs more to get that job done.


Performance engineering is needed to ensure that applications perform properly and perform to the needs of the business. These engineers do not just write performance scripts. Just because someone knows Java does not mean that they are a developer. And just because a person knows how to write a performance script does not mean that they are a performance engineer.


Performance engineering requires skills that not all testers have. They need to understand the application under test (AUT), databases, web servers, load balancers, SSO, etc. They also have to understand the impact of CPU, memory, caching, I/O, bandwidth, etc. These are not skills that are learned overnight; they are acquired over time.


I wrote a previous blog entry on "you get what you pay for". If you pay for a scripter, you get a scripter. If you pay for a performance engineer, you get a performance engineer (well not always :smileyhappy:. Sometimes people exaggerate their skills :smileyhappy: ).


Companies can always divide and conquer. They can have automators/scripters create the scripts and the tests, then have performance engineers look at the tests and analyse the results. In any case, the performance engineer is a needed position if you want to properly performance test and validate.


Knowing what metrics to monitor, and what those metrics mean, needs to be mandatory. So does knowing how to use diagnostic tools. Again, in a previous blog I mentioned that if you are not using diagnostics you are doing an injustice to your performance testing. Without this analysis knowledge you are not truly performance testing, you are just running a script with load. Performance testing is both running scripts and analysing the runs.
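
Meaningful analysis also depends on clean measurements in the script itself. Here is a minimal VuGen-style sketch (the transaction, step, and field names are hypothetical) of wrapping a single business step in a transaction and keeping think time out of the measured interval, so the response-time metric you analyse later means what you think it means:

Action()
{
    lr_start_transaction("search_catalog");

    web_submit_data("search",
                    "Action=http://app.example.com/search",
                    "Method=POST",
                    ITEMDATA,
                    "Name=q", "Value=laptops", ENDITEM,
                    LAST);

    lr_end_transaction("search_catalog", LR_AUTO);

    /* User "reading" time: deliberately kept outside the measured transaction. */
    lr_think_time(15);

    return 0;
}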


By looking at the monitoring metrics and diagnostic data, one can begin to correlate data and help pinpoint problems. One can also notice trends that may become problems over time. Just running a load test without analysis will not give you that insight. It will just let you know that the test appeared to run OK for that run. Many times just running the test will give you a false positive. People wonder why an application in production is running slow if it already passed performance validation. Sometimes this is the reason (you never want this to be the reason). Proper analysis will ensure a higher quality application.


As I said, these are not skills that are created overnight. Performance engineers learn on the job. How do you make sure that this knowledge stays with a company as employees come and go? That is where a Center of Excellence (CoE) comes into play (you knew I was going to have to pitch this :smileyhappy: ). If you centralize your testing efforts, then the knowledge becomes centralized, as opposed to dispersed through a company only to get lost when the employees with the knowledge leave. You can read yet another one of my blog entries for more information on the CoE. Wow! I've just been pitching my blog entries today :smileyhappy:. But I digress.


Let's stop thinking that proper performance testing is writing a good script and agree that performance engineering is not an option but a must. Let's start to focus on the real goals of performance testing, and then all of the important "means to the end" will just fall into place.

Performance Management for Agile Projects

Performance management is an integral part of every software development project. When I think of agile projects, I think about collaboration, time to market, flexibility, etc. But to me the most important aspect of agile processes is its promise of delivering a “potentially shippable product/application increment”. What this promise means for application owners and stakeholders is that, if desired, the work done in iteration (sprint) has gone through enough checks and balances (including meeting performance objectives) that the application can be deployed or shipped. Of course, the decision of deploying or shipping the application is also driven by many other factors such as the incremental value added to the application in one sprint, the effect of an update on company’s operations, and the effect of frequent updates on customers or end-users of the application.


Often application owners fail to provide an objective assessment of application performance in the first few sprints, or until the hardening sprint—just before the application is ready to be deployed or shipped. That is an "Agile Waterfall" approach, where performance and load testing are kept aside until the end. What if the architecture or design of the application needs to change to meet the performance guidelines? There is also a notion that performance instrumentation, analysis, and improvement are highly specialized tasks, which results in resources not being available at the start of a project. This happens when the business and stakeholders are not driving the service level measurements (SLMs) for the application.


Application owners and stakeholders should be interested in the performance aspects of the application right from the start. Performance should not be an afterthought. The application’s backlog in agile contains not only the functional requirements of the application but also the performance expectations from the application. For example, “As a user, I want the application site to be available 99.999% of the time I try to access it so that I don’t get frustrated and find another application site to use”.  Performance is an inherent expectation behind every user story. Another example may be, “As an application owner, I want the application to support as many as 100,000 users at a time without degrading the performance of the application so that I can make the application available globally to all employees of my company”. These stories are setting the SLMs or business-driven requirements for the application, which in turn will define the acceptance criteria and drive the test scripts.
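
As a rough illustration of how such a story can drive the test scripts, here is a hedged VuGen-style sketch (the transaction name, form fields, and the 3-second threshold are all illustrative, not taken from the stories above) that turns a performance acceptance criterion into a pass/fail check:

Action()
{
    double elapsed;

    lr_start_transaction("login");

    web_submit_data("login",
                    "Action=http://app.example.com/login",
                    "Method=POST",
                    ITEMDATA,
                    "Name=user", "Value=demo", ENDITEM,
                    "Name=pass", "Value=demo", ENDITEM,
                    LAST);

    /* Acceptance criterion from the user story: login completes within 3 seconds. */
    elapsed = lr_get_transaction_duration("login");
    if (elapsed > 3.0)
        lr_end_transaction("login", LR_FAIL);
    else
        lr_end_transaction("login", LR_PASS);

    return 0;
}

When the criterion lives in the script, "done" for the sprint includes meeting it, not just passing functional tests.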


It is important that, if a sprint backlog has performance-related user stories (and I'll bet you nearly all of them do), its team has IT infrastructure and performance testers as contributors ("pigs" in Scrum terminology). During release planning (preferably) or sprint planning sessions, these contributors must spend time analyzing what testing must be performed to ensure that these user stories are considered "done" by the end of the sprint. Whether they need to procure additional hardware, modify the IT infrastructure for load testing, or work on the automation of performance testing, these contributors are active members of the sprint team, participating in daily scrums.  They must keep constant pressure on developers and functional testers to deliver the functionality for performance testing. After all, the success of the sprint is measured by whether or not every member delivered, on time, a final product that fully met the acceptance criteria.




To me, performance testing is an integral part of the agile process, and it can save an organization money. The longer you wait to conduct performance tests, the more expensive it becomes to incorporate changes. So don't just test early and often – test functionality and performance in the same sprint!


Offshoring / Outsourcing Performance Testing

There are 2 main reasons why companies utilize offshoring (outsourcing) for performance testing.

The main reason is cost savings. Obviously companies will try to choose locations where there is a lower cost of doing business. The second reason is the ability to ramp up new testers quickly. If there is greater demand for testing than the current set of testers can handle, then offshoring or outsourcing can be utilized to quickly gain more testers to help with the excess demand.

In an ideal world all performance testers are the same. If you can find cheaper testers elsewhere, then you will get immediate cost savings. But as we know, we do not live in an ideal world. There are different levels of knowledge, skill, and motivation. We have seen time and time again that offshoring fails because companies do not have the correct expectations, they do not set up the proper training, and they do not have the correct set of tools.

You cannot assume (we all know what that does) that just contracting with a secondary company to provide all or partial performance testing will automatically start showing benefits.

There is no reason why offshoring cannot be a successful venture for companies. They must research the offshoring options and find ones that have a good fit with skill sets, low turnover, and a proven record.

Once an outsourcing company has been chosen, there has to be training. They must understand how your company expects the testing to be performed. They must know what types of tests you want them to do (stress, load, failover, etc.), the kinds of reports that you want, and the SLAs that you expect them to achieve.

After you have chosen the team, provided the appropriate training and expectations, what is left? What tools are they going to be using? The same set of tools that you used when the entire team was internal? At first this seems like the correct response. If it worked internally, why wouldn't it work for an outsourcer? Let's explore this for moment.

First let's just talk licenses. How is the outsourcing group going to get licenses? Do they have their own licenses that they can use? Most do not, and they rely on the company to provide them. So do you transfer the licenses that you have internally to the outsourcer? Do you want to keep some of the licenses in house so that you can perform tests internally when needed? More than likely you will be keeping at least some of your performance testing licenses in-house. So that means that you will have to buy more licenses for the outsourced team. Can your current testing tool help with this?

What about testing machines? Do you need to get more controllers and load generators? Can the outsourced team utilize the machines that you currently have? Can your current testing tool help with this?

What about management? How do you know that the outsourced team is doing what they are supposed to do? How do you know if the tests that they are creating are correct? How do you know if they are testing enough? In short how do you know that they are doing a good job? Lack of proper management and oversight is one of the biggest reasons why offshoring fails. Can your current testing tool help with this?

What if you would like "follow the sun" testing, or better collaboration on testing? Let's say that you have an important project that needs to get tested quickly, and the only way to get this done is to keep handing off the test to different testers around the world. So when one location is done for the day, a new tester can pick up where the last left off and continue with the testing. This becomes a real possibility with offshoring. A test can begin in-house and then shift to an outsourcer off-hours, thus decreasing the time it takes to get the results to the line of business. Can your current testing tool help with this?

 




HP Performance Center (PC) is the enterprise performance testing platform that can help you with your offshoring/outsourcing needs. Let's start from the top. PC has shared licenses. Anyone around the world who is given the proper permission can access the PC web interface and run tests. There is no need for more licenses unless there is a demand for more concurrent testing. And if your demand for more simultaneous tests is growing, then you are doing something right.

Now let's move on to machines. With Performance Center all the machines (controllers and load generators) are centrally managed. There is no need to have LGs placed throughout the world. Testers, worldwide, have access to tests and machines through PC. Again the only time that more machines are needed is if the demand increases. No need to buy more machines just because you have outsourced the job.

Performance Center was created for performance testing management. From a single location you can see what projects and tests have been created, how many tests have been executed and who ran them. There is no need to have scripts and reports emailed or copied. All testing assets are stored centrally and accessible through PC's web interface.

Not only can you view the projects, scripts, and results, you can also manage the testing environment itself. You can run reports to see what the demand on your testing environment is and then plan for increases accordingly.

How about "follow the sun" testing? With HP Performance Center anyone with proper access can take over testing. Since all scripts, scenarios, and profiles are completely separated from the controllers and stored centrally, it is easy for a new tester to pick up where a previous tester left off. There is no need to upload scripts and scenarios to a separate location, or remember to email them to the next tester. It is all available 24x7 through Performance Center.

Collaboration on testing becomes much easier in PC than with almost any other tool. If you need different people at different locations to all watch a test as it is running, PC can accommodate that. Just log on to the running test and choose the graphs that you are interested in. Now all viewers are watching the test with the information that they are interested in, all through one tool.

HP Performance Center is your best performance testing platform choice when it comes to offshoring and outsourcing.

So after you pick the correct outsourcing company, and properly train them, make sure that you use HP Performance Center to ensure the highest cost savings and highest quality.

Performance Center of Excellence

What is a Center of Excellence (CoE)?
Definition:


  • A centralized entity that drives standardization and processes across an organization in order to improve quality, consistency, and efficiency

  • A central group of experts providing shared services and leadership to ensure high quality


So what does this mean for performance testing? It means that a Performance CoE can improve the quality, consistency, and efficiency of performance testing and validation across an entire company.


Performance testing is a specialized skill set. It requires knowledge of the applications, the hardware and third party systems. Not all testers can have this knowledge, and it takes years to fully develop the proper skill set.


If performance testers stay in individual project testing groups, it is hard to ensure that all applications are being properly performance tested to the same standard. Also when new technologies appear, these disparate groups will not all have the same expertise of those new technologies.


A CoE centralizes the testing expertise. As the central team develops more knowledge, all the applications will benefit. It is also easier to ensure the same standard or consistency of testing across all projects. It facilitates higher quality of tests and improves the efficiency of running and analyzing these tests.


With a CoE it becomes easier to reuse testing assets. Instead of keeping all the scripts, monitor profiles and scenarios on individual testing machines or with separate testing groups, you can centralize all of that data to make it easier to share and reuse. This cuts down the time it takes to create tests and makes the testing more efficient.


Don't just take my word on this. Voke, an analyst firm, conducted a study on performance Centers of Excellence and found that they...



  • Increased productivity

  • Increased quality

  • Decreased costs


Hang on! Did I just say that I can increase quality and efficiency and at the same time reduce my costs? Yes.


You can reduce the number of testing machines and testing tool licenses through a central organization. Every project testing group would no longer need to have its own performance testing tools. And when the testing systems become centralized it becomes easier to utilize the systems more efficiently by having less down time per machine and by allowing 24x7 testing.


If you need to standardize performance testing, increase the efficiency and quality of the performance testing process, or just reduce the overall cost of performance testing, then a Performance Center of Excellence is worth looking into.


 


 


HP Performance Engineering Best Practices Series

Just to let you know that we've been putting together some published best practices for LoadRunner and performance testing...and the electronic versions of the books are available free of charge!


This one is "Performance Monitoring Best Practices" authored by Leo Borisov from our LoadRunner R&D team.  Leo sent along this description and instructions for downloading the book:


"We have always recognized the fact that having best practices and methodology would greatly simplify the life of performance engineers and managers, and now we are beginning to fill this need. The book is available with the SP1 installation. Access it from the product documentation library, or from the help menu.

To download a copy from the HP Software Support site:

  1. Go to http://h20230.www2.hp.com/selfsolve/manuals

  2. Log in using an HP Passport (or register first and then log in)

  3. In the list of HP software product manuals, choose either LoadRunner or Performance Center – the book is listed under both

  4. Select product version 9.51 and operating system Windows

  5. Click Search

Since this is the first book in the series covering various aspects of methodology, we would really appreciate your feedback. Please send your feedback directly to me or to lt_cust_feedback @ hp.com."


Congratulations Leo - thanks for your efforts!


 

Video: Understanding Concurrency and Concurrent Users

So, just what is the proper definition of concurrency?  This is a hot topic, especially when it comes to arguing about the validity and accuracy of stress testing analysis.  Of course there is no ONE simple answer here, so it's up to you to establish a common definition on your teams or for the Performance CoE.  These videos will give you some tips on what concurrency means for performance testing.  You will also learn about a common set of definitions for concurrent users: concurrent, active, and named users.
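
One back-of-the-envelope way to relate named, active, and concurrent users (the numbers below are purely illustrative, not taken from the videos) is Little's Law: average concurrency equals the session arrival rate multiplied by the average session duration. A tiny C sketch of the arithmetic:

/* Little's Law sketch: concurrency = arrival rate x average session time. */
#include <stdio.h>

int main(void)
{
    double sessions_per_hour = 3000.0;  /* active sessions started per hour (assumed) */
    double avg_session_min   = 10.0;    /* average session length in minutes (assumed) */

    double arrivals_per_min  = sessions_per_hour / 60.0;            /* 50 per minute */
    double concurrent_users  = arrivals_per_min * avg_session_min;  /* about 500     */

    printf("Average concurrent users: %.0f\n", concurrent_users);
    return 0;
}

The named-user population can be far larger than either of those numbers, which is exactly why agreeing on definitions up front matters.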


LoadRunner Concurrency Part 1:



(if your browser doesn't show the video in-line, just click here)


 


 


Of course, I thought of several other items for this...and so there is also LoadRunner Concurrency Part 2:



(if your browser doesn't show the video in-line, just click here)

Video: Running LoadRunner Virtualized

If you've ever needed to understand how LoadRunner should be implemented in a virtual environment, you should enjoy this video walkthrough explaining the best practices for doing just that.  Make a specific note of how your Iteration Pacing and Think Time settings really affect the health, scalability, and accuracy of your load test.
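
As a rough back-of-the-envelope illustration (the numbers are made up, not taken from the video) of why pacing and think time matter so much, compare the load you generate with fixed iteration pacing against letting think time plus response time drive the rate:

/* Illustrative throughput arithmetic for a load test scenario. */
#include <stdio.h>

int main(void)
{
    double vusers       = 100.0;  /* virtual users in the scenario (assumed)      */
    double pacing_sec   = 60.0;   /* iteration pacing: start one iteration / 60 s */
    double think_sec    = 20.0;   /* think time inside each iteration (assumed)   */
    double response_sec = 5.0;    /* server response time per iteration (assumed) */

    /* With fixed pacing, the offered load stays constant even if the server slows down. */
    double paced_per_hour   = vusers * 3600.0 / pacing_sec;

    /* Without pacing, the rate is driven by think + response time, so a slower
       server quietly reduces the load you apply - and skews your results.      */
    double unpaced_per_hour = vusers * 3600.0 / (think_sec + response_sec);

    printf("Paced:   %.0f iterations/hour\n", paced_per_hour);    /*  6000 */
    printf("Unpaced: %.0f iterations/hour\n", unpaced_per_hour);  /* 14400 */
    return 0;
}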



(if your browser doesn't show the video in-line, just click here)

About the Author(s)
  • I have been working in the computer software industry since 1989. I started out in customer support, then software testing, where I was a very early adopter of automation - first functional test automation and then performance test automation. I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years. I have been using HP LoadRunner since 1998 and HP Performance Center since 2004. I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.