HP LoadRunner and Performance Center Blog

Prevailing winds bring announcement of HP LoadRunner in the Cloud

It’s a given that performance testing is an integral part of your application
lifecycle.  However, it is often done in an unplanned, ad-hoc manner.  Why?  Because at most companies, time and money are limited resources – and when resources are scarce, something’s got to give.  Unfortunately, testing usually ends up getting the short end of the stick in such situations. And even when you have plenty of hardware and tools – plenty of resources –
it can take so much time to provision the test bed that you have little time
left over for much testing at all.  Either way, your organization is left at great risk.


And let’s not forget: it takes money to make money.  When you have almost no budget to invest in testing tools, you might be tempted to rely on inexpensive or open source testing
solutions that provide inadequate testing capabilities.  As my good friend and colleague
Steve Feloney says, “Ya know, you get what you pay for.”  With those naïve tools, a mistaken or miscalculated test result could cost you more than you can imagine.


Cloud Computing changes this equation – especially for testing – because it is fast
*and* cheap.  It’s fast, simultaneously provisioning two dozen servers in about 5 minutes, about as fast as you can swipe your credit card.  It’s also cheap.  The basic cloud machine instances cost less than $1 per hour.  This extremely low-cost pricing of Cloud resources is why many of the adolescent cloud-testing startups are finding some wind in their sails.  And you would expect that those startups would take advantage of their fair profits and convert the investment into building better testing solutions.  But they haven’t.  And the winds are about to change. 


Announcing:  HP LoadRunner in the Cloud


We combine the strengths and credibility of LoadRunner with the efficient, cost-effective power of Amazon EC2 to deliver an extremely valuable testing solution that enables performance validation across your application lifecycle.  HP LoadRunner in the Cloud makes performance testing and load testing accessible to organizations and projects of all sizes.


Key features of HP LoadRunner on Amazon EC2:


• Test web applications properly by leveraging market-leading technology

• Ensure application quality using an affordable performance testing solution

• Gain immediate access to pre-installed software that enables on-demand, unplanned testing

• Obtain self-service access to a flexible, scalable testing infrastructure


Right now we are accepting requests for an extended Beta of HP LoadRunner in the Cloud.


Click here for more details on HP LoadRunner in the Cloud - we have lots more materials to share...


Video: Real Stories of Load Testing Web 2.0 - Part 4

The fourth and final video of our Load Testing Web 2.0 series covers a very common difficulty in testing nearly any system, even older architectures: dependent calls external to the system under test. The concept of "stubbing" isn't anything new; to be honest, I've been doing it for nearly 15 years, and it's very common when a back-end mainframe is required for the test. But now, with Web 2.0 architectures, stray calls to a web service seem to be eluding many testers, and this is resulting in some nasty surprises from impacted external vendors. Here is Part 4 of the series, "Real Stories of Load Testing Web 2.0: Load Testing with Web 2.0 External Calls (please try not to test stuff that's not yours)."

(if your browser doesn't show the video in-line, just click here)

At the end of this video we mention another partner that built a professional "stubbing" solution.  Visit the iTKO LISA Virtualize page for more details.
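For readers who want to see the idea in miniature: a stub for an external call can be as small as a canned responder that the system under test is pointed at instead of the real vendor. The sketch below uses only the Python standard library; the payload and port handling are invented for illustration, and a professional solution like the one mentioned above adds recording, request matching, and management on top of this basic idea.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Answers any GET with a canned payload, so load tests never hit the real vendor."""
    def do_GET(self):
        body = json.dumps({"status": "ok", "source": "stub"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the load-test console quiet

def start_stub(port=0):
    """Start the stub on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # server.server_address[1] holds the actual port
```

In use, you would reconfigure the application under test so its "external vendor" URL points at the stub's address for the duration of the load test.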

Video: Real Stories of Load Testing Web 2.0 - Part 3

The third part in our Load Testing Web 2.0 series covers the not-so-new concept of server-side data processing.  Don't be fooled into thinking you already know about server performance, because these new architectures are using client-like JavaScript on the server, which is sometimes called reverse JavaScript.  This video describes how performance issues can sneak into this type of architecture and how even a simple component can result in serious latency and server-side resource overhead.  Here is Part 3 of the series, "Real Stories of Load Testing Web 2.0: Server-side Javascript Impacts (opposable thumbs not required)."

(if your browser doesn't show the video in-line, just click here)
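To see why even a simple server-side component matters under load, here is a toy Python sketch (the 20 ms "render" step is hypothetical): with a fixed pool of workers, a blocking component caps throughput at roughly workers divided by service time, no matter how much load you throw at it.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def render_component():
    """Stand-in for a blocking server-side script component (hypothetical 20 ms)."""
    time.sleep(0.02)

def measure_throughput(workers, requests):
    """Push a batch of requests through a fixed worker pool and return req/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(requests):
            pool.submit(render_component)
        # leaving the with-block waits for all submitted work to finish
    elapsed = time.perf_counter() - start
    return requests / elapsed

# By Little's law, throughput is bounded by workers / service_time:
# 4 workers / 0.02 s per call caps out near 200 req/s, however long the queue.
```

Doubling the offered load does not help; only shrinking the component's service time or adding workers moves the ceiling, which is exactly the kind of latency and resource overhead the video discusses.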

Video: Real Stories of Load Testing Web 2.0 - Part 2

In this second part of the series, we highlight another common challenge we hear from customers testing new web applications: client-side performance. As you add these exciting new components into the once-upon-a-time-very-thin browser, you'll find increased CPU and memory utilization on the client.  Perhaps unsurprisingly, this can result in slower response times from page rendering or JavaScript processing overhead.  Here is Part 2 of the series, "Real Stories of Load Testing Web 2.0: Web 2.0 Components - Client-side Performance (Fat clients, you make the rockin' world go round)."

(if your browser doesn't show the video in-line, just click here)

Video: Real Stories of Load Testing Web 2.0 - Part 1

We know many of you are finding great challenges with testing new web applications built with modern architectures and components like Silverlight, Flex, JavaFX and AJAX. So, Steve and I decided to put together a multi-segment video series covering some of the top issues that make Web 2.0 load testing much harder and more complex. Here is Part 1 of the series, "Real Stories of Load Testing Web 2.0: Impacts of WAN latency on Web 2.0 apps (Mark Googles himself and Steve offers to go to Paris)."

(if your browser doesn't show the video in-line, just click here)


In this video we also mention one of our new partners, which delivers WAN emulation that integrates with HP LoadRunner and Performance Center.  Visit the Shunra VE Desktop page for more details and a free trial download.


Headless Performance Testing

What is headless testing?  It is any testing done without a GUI.  We are all used to creating a performance script by recording a business process, which requires a GUI through which that business process can be recorded.  But do you always have a GUI to record? No, but many performance testers simply refuse to test anything that doesn't have one.  They say that without a GUI an app is not ready to be tested.  I know most developers would disagree with that statement, since they often have to unit test their code without having a GUI.

So what happens?  Performance testers say they won't test without a GUI, and developers generally don't have a testable GUI until the end of their release; so that means that performance testing will be done at the end.  Hang on!  I know that one of the biggest complaints by performance testing teams is that they are always at the end of the cycle and they never have enough time to test.  Even if they do get the testing completed, if they find problems, the app gets released anyway because there was no time to fix the issues.

One of the biggest, if not the biggest, problem that performance testers have is that they are being relegated to the end of a release; yet they are perpetuating the problem by not testing earlier drops that do not have a GUI.  This seems quite strange to me. 

So what can performance testers do?  Start doing headless performance testing.  Start testing components before a GUI exists.  The earlier you are testing, the quicker the problems are found, the more likely it is that the problems will be fixed and that means higher quality releases.

How do you do it?  For SOA, there are products that will pull methods from WSDLs and help you manually create scripts for the service.  If it is not a service, or a WSDL is not available, you can work with the developers to create a test harness that can be used to create performance scripts.  Many times the developers do not even have to write a new test harness, because they may have already written one to unit test their code or component.
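As a sketch of what such a harness can look like: the Python below drives a component function directly under concurrent load and reports latency percentiles. The `price_quote` component is hypothetical; in practice you would wire in the developers' existing unit-test harness or the method pulled from the WSDL.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def price_quote(sku):
    """Hypothetical component under test; stands in for the real service method."""
    time.sleep(0.001)  # pretend work
    return {"sku": sku, "price": 9.99}

def headless_load(component, payloads, concurrency=10):
    """Call the component under concurrent load and report latency percentiles."""
    timings = []
    def timed_call(arg):
        start = time.perf_counter()
        component(arg)
        timings.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed_call, payloads))  # drain to surface any exceptions
    timings.sort()
    return {
        "calls": len(timings),
        "median_s": statistics.median(timings),
        "p95_s": timings[int(len(timings) * 0.95) - 1],
    }
```

The point is not the tool; it is that nothing in this loop needs a GUI, so the component can be load tested as soon as it compiles.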

Is a test harness enough?  In some cases, yes.  But in most cases you will also need to employ stubs or virtual/simulated services.  Stubs are pieces of code that stand in for entities that do not yet exist.  When you are trying to test a service, you might not have a front end (GUI), and you probably do not have a backend to the service.  The service may talk to other services, servers or databases.  If those backend entities do not exist, then you have to put something in place to simulate them so that the service you are attempting to test will function and react properly.
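A stub does not have to be a separate server; for early drops it can be an in-process object with canned data. Here is a minimal, hypothetical Python illustration: `BackendStub` stands in for a database or downstream service that does not exist yet, so `account_service` can be exercised (and load tested) on its own.

```python
class BackendStub:
    """Stands in for a backend (database, downstream service) that doesn't exist yet.
    Returns canned, deterministic data so the service under test can function."""
    def __init__(self, canned):
        self.canned = canned
        self.calls = 0  # lets the test verify the service actually hit the backend

    def lookup(self, key):
        self.calls += 1
        return self.canned.get(key, {})

def account_service(backend, account_id):
    """Hypothetical service under test: normally it would call a real backend."""
    record = backend.lookup(account_id)
    return {"account": account_id, "balance": record.get("balance", 0)}
```

Because the stub's answers are deterministic, a failing test points at the service itself rather than at a flaky or absent backend.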

I've mentioned services quite a bit.  These seemingly innocent components are exacerbating the need for headless testing, but they are also making it more feasible.  Before, with siloed applications, it was a problem to test only at the end; but hey, that's what happened.  Now, with SOA, services are being deployed and many, many applications are utilizing them.  It is no longer OK just to test one app to ensure that it will work as designed.  Services need to be performance tested by themselves, to the full extent of their anticipated load.  Just because a service seems to work with one application or another, the combined load of all the applications using that single service may overly stress it and cause it to fail.

The good news is that since these services should be well defined and encapsulated, it becomes possible to properly test them without a GUI--again, either by utilizing a WSDL or a test harness from the developers.  Headless testing will not only help ensure proper performance of an app in production, it also enables testing earlier in the lifecycle.  Before well-defined components, testing earlier was doable but so hard that most just refused.  SOA allows early testing to become a reality.

Performance testers need to understand this new reality.  It is time that we embrace the services and not just rely on developers to test them.  We have been complaining for years that we need to start testing earlier, and now that it is possible, we need to jump at the opportunity.

What will happen if performance testers don't perform headless testing?  Performance testers will take on a smaller and smaller role.  Developers will have to fill the void and "performance test" the services.  We know that QA exists because developers can't test their own code, but someone will need to test the services; and if the performance testers will not, the developers will have to.  The developers will then claim that, since they have tested the parts, testing the whole is not that important.  I have already witnessed this happening.

Are Business Process testing and End-to-End testing going away?  Of course not.  They are important, and they always will be.  Being able to test earlier will just allow many more performance issues to be found and corrected before applications get released.  Testing individual services is needed because, many times, services are released before applications begin using them.  I don't think anyone wants any component released into production without it being properly tested.

What have I been trying to say?  Performance testers need to step up and start testing individual components.  It may be difficult at first because it is a new skill that needs to be learned; however, once a tester gains that expertise, they will make themselves more relevant in the entire application lifecycle, gain more credibility with developers, and assist in releasing higher quality applications.  Leaving component testing to developers will eventually lead to poorer quality applications being delivered or a longer delay in releasing a quality application.  I can easily say this because QA was created to prevent the low quality apps that were being delivered.  If developers were great testers then QA would not exist.  That being said, QA can't abdicate their responsibility and rely on developers to test.

Headless testing: learn it, love it, performance test it.


Performance Testing Needs a Seat at the Table

It is time Performance Testing gets a seat at the table. Architects and developers like to make all the decisions about products without ever consulting the testing organization. Why should they? All testers have to do is test what's created. If testers can't handle that simple task, then maybe they can't handle their job.

I know this thought process well. I used to be one of those developers :smileyhappy:. But I have seen the light. I have been reborn. I know that it is important to get testers input on products upfront. And actually it is becoming more important now than ever before.

With RIA (Web 2.0) technologies, there are many choices that developers can make. Should they use Flex, Silverlight, AJAX, or something else? If they use AJAX, which frameworks should they use? If Silverlight, what type of back-end communication are they going to use?

Let's just take AJAX as an example. There are hundreds of frameworks out there. Some are popular and common, but most are obscure or one-off frameworks. Developers like to make decisions based on what will make their lives easier and what is cool. But what happens if their choices can't be performance tested? Maybe the performance team doesn't have the expertise in-house, or maybe their testing tools don't support the chosen framework. What happens then?

I can tell you that many times the apps get released without being tested properly, and everyone just hopes for the best. It's a great solution. I like the fingers-crossed method of releasing apps.

How can this situation be avoided? Simply include the performance testing group up front. Testing is a major cog in the application lifecycle, and it should be included at the beginning. I'm not talking about testing earlier in the cycle (although that is important and should be done). I'm talking about getting testing involved in architecture and development discussions before development takes place.

If developers and architects knew up front that certain coding decisions would make it hard or impossible to performance test, then maybe they would choose other options for the application development. Many businesses would not want to risk releasing an application if they knew that it could not be tested properly. But when they find out too late, they don't have a choice except to release it (the fingers-crossed method).

If the performance team knew up front that they couldn't test something because of skills or tools, then at least they would have a heads-up and could begin planning early for the inevitable testing. Wouldn't it be nice to know what you are going to need to performance test months in advance? No more scrambling at the 11th or 12th hour.

Think about this. If testing was involved or informed up front, then maybe they, along with development, could help push standards across development. For example, standardizing on one or two AJAX frameworks would help out both testing and development. It would make the code more maintainable, because more developers would be able to update it, and it would help ensure that the application is testable.

We need to get more groups involved up front. The more you know, the better the decisions, the better off the application is going to be.

What are the Goals of Performance Testing?

So what is the point of performance testing?  I get this question often.  And depending on who you talk to, you get different answers.

First, let me begin by telling you what the goals of performance testing / validation are NOT.

  • Writing a great script

  • Creating a fantastic scenario

  • Knowing which protocols to use

  • Correlating script data

  • Data Management

  • Running a load test

This is not to say that all of these are not important. They are very important, but they are not the goals. They are the means to the end.

So why DO people performance test? What are the goals?

  • Validating that the application performs properly

  • Validating that the application conforms to the performance needs of the business

  • Finding, analysing, and helping fix performance problems

  • Validating that the hardware for the application is adequate

  • Doing capacity planning for future demand of the application

The outcomes of the performance test are the goals of testing. It seems basic. Of course these are the goals. But...

  • How many people really analyse the data from a performance test?

  • How many people use diagnostic tools to help pinpoint the problems?

  • How many people really know that the application performs to the business requirements?

  • How many people just test to make sure that the application doesn't crash under load?

Even though they seem obvious, many testers/engineers are not focusing on them correctly, or are not focused on them at all.

  • Analysing the data is too hard.

  • If the application stays up, isn't that good enough?

  • So what if it's a little slow?

These are the reasons that I hear. Yes, you want to make sure that the application doesn't crash and burn. But who wants to go to a slow website? Time is money. That is not just a cliché, it's the truth. Customers will not put up with a slow app/website. They will go elsewhere, and they do go elsewhere. Even if it is an internal application, if it is slow performing a task, then it takes longer to get the job done, and that means it costs more to get that job done.

Performance engineering is needed to ensure that applications perform properly and perform to the needs of the business. These engineers do not just write performance scripts. Just because someone knows Java does not mean that they are a developer. And just because a person knows how to write a performance script does not mean that they are a performance engineer.

Performance engineering requires skills that not all testers have. They need to understand the application under test (AUT), databases, web servers, load balancers, SSO, and so on. They also have to understand the impact of CPU, memory, caching, I/O, bandwidth, and more. These are not skills that are learned overnight; they are acquired over time.

I wrote a previous blog entry on "you get what you pay for". If you pay for a scripter, you get a scripter. If you pay for a performance engineer, you get a performance engineer (well not always :smileyhappy:. Sometimes people exaggerate their skills :smileyhappy: ).

Companies can always divide and conquer. They can have automators/scripters create the scripts and the tests, then have performance engineers look at the tests and analyse the results. In any case, the performance engineer is a needed position if you want to properly performance test and validate.

It needs to be mandatory to know what metrics to monitor and what those metrics mean. Knowing how to use diagnostic tools also needs to be mandatory. Again, in a previous blog I mentioned that if you are not using diagnostics, you are doing an injustice to your performance testing. Without this analysis knowledge you are not truly performance testing; you are just running a script with load. Performance testing is both running scripts and analysing the runs.

By looking at the monitoring metrics and diagnostic data, you can begin to correlate data and help pinpoint problems. You can also notice trends that may become problems over time. Just running a load test without analysis will not give you that insight. It will just let you know that the test appeared to run OK for that test run. Many times, just running the test will give you a false positive. People wonder why an application in production is running slow if it already passed performance validation. Sometimes this is the reason (you never want this to be the reason). Proper analysis will ensure a higher quality application.
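Correlating two monitor series is mostly arithmetic. As an illustration, the Python below computes a Pearson coefficient between hypothetical per-interval samples of transaction response time and database CPU; a coefficient near 1.0 is the kind of signal that points an investigation at the database tier rather than, say, the web server.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equally sampled monitor series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-interval samples collected during a load test:
response_time_s = [0.8, 0.9, 1.4, 2.1, 3.0, 4.2]
db_cpu_pct      = [22,  25,  41,  60,  78,  95]
# A coefficient near 1.0 suggests the database tier is driving the slowdown.
```

Correlation is a lead, not a verdict; it tells you which tier to point the deep-dive diagnostics at, not what the root cause is.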

As I said, these are not skills that are created overnight. Performance engineers learn on the job. How do you make sure that this knowledge stays with a company as employees come and go? That is where a Center of Excellence (CoE) comes into play (You knew I was going to have to pitch this :smileyhappy: ). If you centralize your testing efforts, then the knowledge becomes centralized as opposed to dispersed through a company only to get lost if those employees with the knowledge leave. You can read yet another one of my blogs for more information on the CoE. Wow! I've just been pitching my blogs entries today :smileyhappy:. But I digress.

Let's stop thinking that proper performance testing is just writing a good script, and agree that performance engineering is not an option but a must. Let's start to focus on the real goals of performance testing, and then all of the important "means to the end" will just fall into place.

Offshoring / Outsourcing Performance Testing

There are two main reasons why companies utilize offshoring (outsourcing) for performance testing.

The main reason is cost savings. Obviously, companies will try to choose locations where there is a lower cost of doing business. The second reason is to be able to ramp up new testers quickly. If there is a greater demand for testing than the current set of testers can handle, then offshoring or outsourcing can be used to quickly gain more testers to help with the excess demand.

In an ideal world, all performance testers are the same. If you can find cheaper testers elsewhere, then you will get immediate cost savings. But as we know, we do not live in an ideal world. There are different levels of knowledge, skill and motivation. We have seen time and time again that offshoring fails because companies do not have the correct expectations, do not set up the proper training, and do not have the correct set of tools.

You cannot assume (we all know what that does) that just contracting with a secondary company to provide all or partial performance testing will automatically start showing benefits.

There is no reason why offshoring cannot be a successful venture for companies. They must research the offshoring options and find ones that are a good fit for the needed skill sets, with low turnover and a proven track record.

Once an outsourcing company has been chosen, there has to be training. They must understand how your company expects the testing to be performed. They must know what types of tests you want them to run (stress, load, failover, etc.), the kind of reports that you want, and the SLAs that you expect them to achieve.

After you have chosen the team and provided the appropriate training and expectations, what is left? What tools are they going to be using? The same set of tools that you used when the entire team was internal? At first this seems like the correct response. If it worked internally, why wouldn't it work for an outsourcer? Let's explore this for a moment.

First, let's just talk licenses. How is the outsourcing group going to obtain licenses? Do they have their own licenses that they can use? Most do not, and they rely on the company to provide them. So do you transfer the licenses that you have internally to the outsourcer? Do you want to keep some of the licenses in-house so that you can perform tests internally when needed? More than likely you will be keeping at least some of your performance testing licenses in-house. So that means you will have to buy more licenses for the outsourced team. Can your current testing tool help with this?

What about testing machines? Do you need to get more controllers and load generators? Can the outsourced team utilize the machines that you currently have? Can your current testing tool help with this?

What about management? How do you know that the outsourced team is doing what they are supposed to do? How do you know if the tests they are creating are correct? How do you know if they are testing enough? In short, how do you know that they are doing a good job? Lack of proper management and oversight is one of the biggest reasons why offshoring fails. Can your current testing tool help with this?

What if you would like "follow the sun" testing, or better collaboration? Let's say that you have an important project that needs to get tested quickly, and the only way to get this done is to keep handing off the test to different testers around the world. So when one location is done for the day, a new tester can pick up where the last left off and continue with the testing. This becomes a real possibility with offshoring. A test can begin in-house and then shift to an outsourcer during off-hours, decreasing the time it takes to get results to the line of business. Can your current testing tool help with this?


HP Performance Center (PC) is the enterprise performance testing platform that can help you with your offshoring/outsourcing needs. Let's start from the top. PC has shared licenses. Anyone around the world who is given the proper permission can access the PC web interface and run tests. There is no need for more licenses unless there is a demand for more concurrent testing. And if your demand for more simultaneous tests is growing, then you are doing something right.

Now let's move on to machines. With Performance Center all the machines (controllers and load generators) are centrally managed. There is no need to have LGs placed throughout the world. Testers, worldwide, have access to tests and machines through PC. Again the only time that more machines are needed is if the demand increases. No need to buy more machines just because you have outsourced the job.

Performance Center was created for performance testing management. From a single location you can see what projects and tests have been created, how many tests have been executed and who ran them. There is no need to have scripts and reports emailed or copied. All testing assets are stored centrally and accessible through PC's web interface.

Not only can you view the projects, scripts, and results, you can also manage the testing environment itself. You can run reports to see what the demand on your testing environment is and then plan for increases accordingly.

How about "follow the sun" testing? With HP Performance Center, anyone with proper access can take over testing. Since all scripts, scenarios, and profiles are completely separated from the controllers and stored centrally, it is easy for a new tester to pick up where a previous tester left off. There is no need to upload scripts and scenarios to a separate location, or to remember to email them to the next tester. Everything is available 24x7 through Performance Center.

Collaboration on testing becomes much easier in PC than with almost any other tool. If you need different people at different locations to all watch a test as it is running, PC can accommodate that. Just log on to the running test and choose the graphs that you are interested in. Now all viewers are watching the test with the information that they are interested in, all through one tool.

HP Performance Center is your best performance testing platform choice when it comes to offshoring and outsourcing.

So after you pick the correct outsourcing company, and properly train them, make sure that you use HP Performance Center to ensure the highest cost savings and highest quality.

ROI: You Get What You Pay For

We've all heard that saying. But how many times do we really follow it? We have bought (OK, I have bought) cheap drills, exercise machines, and furniture, only to be sorry when they break prematurely. Or you find a great deal on shoes, only to have them fall apart on you while you are in a meeting with a customer. I'm not saying that happened to me, but I know how that feels.

Cheaper always seems like it's a better deal. Of course it's not always true. I can tell you that now I pay more for my shoes and I'm much happier for it :smileyhappy:. No more embarrassing shoe problems in front of customers (not saying that it happened to me). In fact when my latest pair of shoes had an issue, I sent them back to the dealer and they mailed me a new pair in less than a week! That's service. You get what you pay for.

The same holds true for cars, clothes, hardware, repairs, and of course software testing tools. You knew I was going to have to go there.

I hear from some people that Performance Center is too expensive. I'm always amazed when I hear that. I'm not saying Performance Center is for everyone. If you don't need PC's features and functionality, it may appear pricey. If you are only looking for a simple cell phone, then a phone that connects to the internet and to your email and also has a touch screen may seem a little pricey. But if you need those features then you are able to see the value in those features.

I could sit here, go through each unique feature in Performance Center, and explain its value to you (not saying that won't come in a future blog :smileyhappy: ). But why would you listen to me? I'm the product manager; of course I'm going to tell you that there is a lot of value in PC. Well, IDC, a premier global provider of market intelligence and advisory services, has just released an ROI case study around HP Performance Center.

A global finance company specializing in real estate finance, automotive finance, commercial finance, insurance, and online banking was able to achieve total ROI in 5.6 months. Yes, only 5.6 months, not one or two years: a total return on investment in 5.6 months. If anything, I think we should be charging more for Performance Center :smileyhappy:. This company did not just begin using PC; they have been using it for the last four years, and during that time they have found a cumulative benefit of $24M. I'd say they got a lot more than what they were paying for. Not only did they see a 5.6-month return on the investment, they are also seeing a 44% reduction in errors and a 33% reduction in downtime in production.

What gave them these fantastic numbers?

Increased Flexibility
By moving to Performance Center, they were able to schedule their tests. Before PC, they had controllers sitting idle while other controllers were in high demand based on the scenarios that were on them. Once they were able to schedule their tests, they began performing impromptu load tests and concurrent load tests. They found that they were able to run more tests with fewer controllers.

Increased Efficiency
While they were able to increase their testing output, they didn't have to increase their testing resources. Their testers/engineers were able to do more through PC than they could with any other tool.

Central Team
With the central team, they were able to build up their knowledge and best practices around performance testing. By doing this, along with performing more tests, they were able to reduce their error rate by 44% and their production downtime by 33%.

So you get what you pay for. Put in the time and money. Get a good enterprise performance testing product. Invest in a strong central testing team. You will get more out than you put in. In the case of this customer, they got $21M out over four years.

Also invest in good shoes. Cheap shoes will only cause you headaches and embarrassment (Not saying that it happened to me).

Finding a Needle in a Haystack

The job of finding the proverbial needle in a haystack has always been a challenge. Digging through a haystack and struggling to find a needle is hard, if not nearly impossible. Plus, you don't even know if there is a needle in there! After searching for a while, I know (or at least I hope) that you'll be asking for some tools to help you out.

The first tool that I can think of is a metal detector. This makes sense: use the metal detector to discover if there is a needle in the stack at all. If there isn't, then you don't have to waste your time looking for it. Yay!!! At least now I know that I can quickly check stacks and only search through stacks that actually have a needle.

Is that the best or only tool you can use? If you could cut the search time down even more, wouldn't that be good? Sure, unless you really like digging through that hay. What if you had a strong magnet? That would rock!!! First, use the metal detector to make sure that there is a needle; then bring in the strong magnet and voila! Out comes the needle, and you're done. No more searching required, and your job just got a whole lot easier.

As you may have already guessed, I'm not here to discuss haystacks. They are fun and all but not really the point. Let's tie this back to performance testing. That's more fun than haystacks anyway.

In the beginning, people tried to performance test manually, with lots of people attacking an application at the same time. This turned out to be time-consuming, not repeatable, and not that beneficial, like digging through that haystack by hand.

Then came automated load testing with monitoring. Now there's a repeatable process, with monitors to help discover problems. This is a tremendous time saver and helps ensure the quality of apps going to production--the metal detector, if you will.

Most of you know and love the metal detector, as well you should. But it is about time you were introduced to the strong magnet of performance testing. Say hello to Diagnostics (Diag). Some call it deep-dive; others call it code-level profiling. I call it the time saver. Diagnostics will pinpoint (or should I say needle-point :-) ) problems down to the method or SQL statement level. That is huge! Now you have much more information to take back to developers. No longer just transaction-level information: you can now show them the exact method or SQL statement that is causing the problem within the transaction. This slashes the time it takes to fix problems. Diagnostics places agents on the application under test (AUT) machines and hooks the J2EE and/or .NET code; that is how it can show where the bottlenecks are. Highlighting slow methods and SQL statements is good, but being able to tie them back to a transaction is even better.
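To make the method-level idea concrete, here is a minimal, hand-rolled Java sketch of what such a profiler does conceptually: attributing elapsed time to individual methods within a named transaction. Real diagnostics products do this automatically via bytecode instrumentation rather than hand-written wrappers, and the class, method, and transaction names below are purely illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Illustrative sketch only: a real diagnostics agent injects timing hooks into
// application bytecode. This toy profiler just shows the core idea of
// recording elapsed time per method inside one "transaction".
public class MethodProfiler {
    private static final Map<String, Long> elapsedNanos = new LinkedHashMap<>();

    // Wrap a method body, accumulating its elapsed time under its name.
    static <T> T timed(String method, Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            elapsedNanos.merge(method, System.nanoTime() - start, Long::sum);
        }
    }

    // Hypothetical slow data-access call inside a "checkout" transaction.
    static int findOrders() {
        return timed("OrderDao.findOrders", () -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
            return 3;
        });
    }

    public static void main(String[] args) {
        timed("checkout", MethodProfiler::findOrders);
        // The breakdown shows which method inside the transaction is slow.
        elapsedNanos.forEach((method, nanos) ->
            System.out.printf("%-22s %7.1f ms%n", method, nanos / 1e6));
    }
}
```

Run it and the report attributes most of the "checkout" transaction's time to the slow `OrderDao.findOrders` call, which is exactly the kind of breakdown that makes a conversation with developers productive.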

From personal experience, I can tell you that Diagnostics works extremely well. Back at good old Mercury Interactive, we had a product that was having some performance issues. Our R&D team had been spending a few weeks looking for the problem(s). Finally, I approached them and asked if they had tried our Diagnostics on it. They of course said no; otherwise this would be a counterproductive story. After I explained to them, in detail, the importance and beauty of Diagnostics, they set it up. Within the first hour of using Diagnostics, they found multiple problems. The next day, all of them were fixed. R&D went from spending weeks looking for needles they couldn't find to finding them within an hour with the magnet I gave them. And now they always use Diagnostics as part of their testing process.

I've heard people complain that it takes too much time to learn and set up. First off, it isn't that complicated to set up. Secondly, yes, it is something new to learn, but it's not too difficult. Once you learn it, however, it is knowledge you can use over and over again. It took time to learn how to performance test in the first place: record, parameterize, correlate, and analyze all took time to learn. This is just one more tool (or magnet) in your tool belt, and it will save you and the developers tremendous amounts of time.

Isn't Diagnostics something that IT uses in production? Yes, it is! Both sides (testing and production) can gain valuable information from using Diag, and for the same reason: find the problem and fix it fast. Testers can always run a test again to see if they can reproduce a problem. But in production, once an issue occurs, they want to grab as much information as possible so that they can prevent it from happening again. After production gets this information, they can pass it back to testing to reproduce the issue. If the performance group is using the same Diag as production, then it is easier to compare results. A dream of production and testing working together in harmony, but I digress.

I have said for years that if performance testers are not utilizing diagnostics, then they are doing a disservice to themselves and to the task. Stop digging through the haystack. Pick up that magnet and start pulling out those needles!
