HP LoadRunner and Performance Center Blog

Displaying articles for: October 2009

Headless Performance Testing

What is headless testing?  It is any testing performed without a GUI.  We are all used to creating a performance script by recording a business process, which requires a GUI for that business process to record against.  But do you always have a GUI to record?  No, but many performance testers simply refuse to test anything that doesn't have one.  They say that without a GUI an app is not ready to be tested.  I know most developers would disagree with that statement, since they often have to unit test their code without having a GUI.


So what happens?  Performance testers say they won't test without a GUI, and developers generally don't have a testable GUI until the end of their release; so performance testing gets done at the end.  Hang on!  I know that one of the biggest complaints from performance testing teams is that they are always at the end of the cycle and never have enough time to test.  Even if they do get the testing completed and find problems, the app gets released anyway because there was no time to fix the issues.


One of the biggest problems performance testers have, if not the biggest, is that they are being relegated to the end of a release; yet they are perpetuating the problem by not testing earlier drops that do not have a GUI.  This seems quite strange to me.


So what can performance testers do?  Start doing headless performance testing.  Start testing components before a GUI exists.  The earlier you test, the sooner problems are found, the more likely it is that they will be fixed, and that means higher quality releases.


How to do it?  For SOA there are products that will pull methods from WSDLs and help you manually create scripts for the service.  If it is not a service, or a WSDL is not available, you can work with the developers to create a test harness that can be used to create performance scripts.  Many times the developers do not even have to write a new test harness, because they have already written one to unit test their code or component.
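To make this concrete, here is a minimal sketch in Java of what such a harness might look like.  The OrderComponentHarness, the OrderComponent class and its placeOrder method are placeholders I made up for illustration; the real component (and very likely the harness itself) would come from your developers.

import java.util.Arrays;

// A bare-bones harness: drive the component's API directly (no GUI) and
// report simple latency numbers.  OrderComponent is a made-up placeholder
// standing in for whatever class the developers hand over.
public class OrderComponentHarness {

    // Placeholder so the sketch compiles; the real component comes from development.
    static class OrderComponent {
        String placeOrder(String sku, int quantity) {
            return "OK:" + sku + ":" + quantity;
        }
    }

    public static void main(String[] args) {
        OrderComponent component = new OrderComponent();
        int iterations = 500;
        long[] elapsedNanos = new long[iterations];

        // Call the component repeatedly and record how long each call takes.
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            component.placeOrder("SKU-1234", 1);
            elapsedNanos[i] = System.nanoTime() - start;
        }

        // Sort the timings and print the median and 95th percentile.
        Arrays.sort(elapsedNanos);
        System.out.printf("median: %.3f ms, 95th percentile: %.3f ms%n",
                elapsedNanos[iterations / 2] / 1e6,
                elapsedNanos[(int) (iterations * 0.95)] / 1e6);
    }
}

The point is simply that the harness drives the component's API directly, with no GUI in sight, and produces timings you can track from drop to drop.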


Is a test harness enough?  In some cases, yes.  But in most cases you will also need to employ stubs or virtual/simulated services.  Stubs are pieces of code that stand in for entities which do not exist yet.  When you are trying to test a service, you might not have a front end (GUI), and you probably do not have a backend for the service either.  The service may talk to other services, servers or databases.  If those backend entities do not exist, then you have to put something in place to simulate that backend so that the service you are attempting to test will function and react properly.
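As an example, here is a minimal sketch of a stub in Java, using the JDK's built-in HttpServer.  The /inventory/check path, the port and the canned XML payload are assumptions I invented for illustration; a real stub would mimic whatever backend your service actually calls.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A bare-bones stub: pretends to be an inventory backend that the service
// under test depends on, so the service can complete its flow during a test.
public class InventoryBackendStub {

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8089), 0);

        server.createContext("/inventory/check", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                // Always answer "in stock" with a canned response.
                byte[] body = "<inventory><inStock>true</inStock></inventory>".getBytes("UTF-8");
                exchange.getResponseHeaders().set("Content-Type", "text/xml");
                exchange.sendResponseHeaders(200, body.length);
                OutputStream out = exchange.getResponseBody();
                out.write(body);
                out.close();
            }
        });

        server.start();
        System.out.println("Inventory stub listening on port 8089");
    }
}

Point the service under test at a stub like this instead of the missing backend, and the service can complete its flow while you load it.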


I've mentioned services quite a bit.  These seemingly innocent components are increasing the need for headless testing, but they are also making it more feasible.  Before, with siloed applications, it was a problem to only test at the end; but hey, that's what happened.  Now with SOA, services are being deployed and many, many applications are utilizing them.  It is no longer OK just to test one app to ensure that it will work as designed.  Services need to be performance tested by themselves, under their full anticipated load.  A service may seem to work with one application or another, but all the applications combined that are using a single service may overly stress that service and cause it to fail.


The good news is that since these services should be well defined and encapsulated, it becomes possible to properly test them without a GUI--again, either by utilizing a WSDL or a test harness from developers.  Headless testing will not only help ensure proper performance of an app in production, it also enables testing earlier in the lifecycle.  Before well-defined components, testing earlier was doable, but so hard to do that most just refused.  SOA allows early testing to become a reality.


Performance testers need to understand this new reality.  It is time that we embrace the services and not just rely on developers to test them.  We have been complaining for years that we need to start testing earlier; now that it is a real possibility, we need to jump at the opportunity.


What will happen if performance testers don't perform headless testing?  Well, performance testers will start to take on a smaller and smaller role.  Developers will have to fill the void and "performance test" the services.  We know that QA exists because developers can't test their own code, but someone will need to test the services; and if the performance testers will not, the developers will have to.  The developers will then claim that, since they have tested the parts, testing the whole is not that important.  I have already witnessed this happening.


Are Business Process testing and End-2-End Testing going away?  Of course not.  They are important, and they always will be.  Being able to test earlier will simply allow many more performance issues to be found and corrected before applications get released.  Testing individual services is needed because, many times, services are released before applications begin using them.  I don't think anyone wants any component released into production without it being properly tested.


What have I been trying to say?  Performance testers need to step up and start testing individual components.  It may be difficult at first because it is a new skill that needs to be learned; however, once testers gain that expertise, they will make themselves more relevant across the entire application lifecycle, gain more credibility with developers, and help deliver higher quality applications.  Leaving component testing to developers will eventually lead to poorer quality applications being delivered, or a longer delay in releasing a quality application.  I can easily say this because QA was created to prevent the low-quality apps that were being delivered.  If developers were great testers, QA would not exist.  That being said, QA can't abdicate its responsibility and rely on developers to test.


Headless testing: learn it, love it, performance test it.



Video: How LoadRunner Works

Sure, I know that we already posted a screen-capture walkthrough video of LoadRunner, but it might also be nice to share another whiteboard video session that introduces HP LoadRunner and shows how the individual components of HP LoadRunner and HP Diagnostics are installed.



(if your browser doesn't show the video in-line, just click here)

Understanding the language of hosted load testing

I recently got an email from a colleague: an invitation from a testing service vendor that does load testing in "the cloud" (an internet-hosted testing service).  The invitation included some misleading language about load testing that I think can be confusing to engineers who are new to performance testing.  So, here's the hook question they used in their invitation email:


     " Interested in learning how to load test your
applications from an outside-in customer perspective, so you can find and
resolve problems undetectable by traditional behind-the-firewall tools?"


Aside from the overly-casual, marketing-savvy tone, there are actually so many hidden assumptions packed into this sentence that it might be helpful to break it down:


     "Interested in learning how to load test your applications..."


Well, that's obvious...of course we are interested in learning about load testing our applications.  When the term 'load test' is used generically like this, I always point out that it hides an assumption about the definition of load testing.  Don't be fooled by this over-simplified language, because you might be led to think that performance testing is simple and easy.  Like anything in IT...it's usually not simple, and often even less easy.  Also, there are many different forms of performance testing, depending on the objectives for the testing: capacity planning, scalability, configuration optimization, query tuning, migration, disaster recovery & failover, and stress testing...just to mention a few.


     "from an outside-in customer perspective..."


We know this vendor was offering testing services from OUTSIDE the firewall, generating load IN to the application under test.  This is usually a phase of testing that comes in addition to your normal "in-house" performance tests, right before the system goes live, by running the load from the external network and infrastructure outside the company firewall or from a hosted facility.  But it is more important to understand that the concept of "outside-in" is actually the normal definition of most kinds of testing, especially black box test design.  To understand this, just ask yourself the inverse question: how would you conduct "inside-out" testing?  My point here is that they mention "customer perspective", which is inherently an "outside-in" perspective, because end-user customers see almost every application through some type of external interface (GUI or CLI).  Essentially, every good test is inherently designed from a customer perspective.  Customer requirements exist even if you do not document them, or even think about them.  There is a customer (or end-user) somewhere who will be impacted by the system's behavior.  In your tests, those requirements should be applied directly in the test scripts and test cases themselves.


     "find and resolve problems"


Well, it would be a shame to find a problem and not resolve it.  Wouldn't you agree?  For many years now there have been solutions that complement performance testing tools by enabling profiling and diagnostics on the system under test.  It's very common now to have a performance team that includes not only testers, but also developers and architects who can repair the application code and queries on the spot, right when a bottleneck is found.  We hear from many developers using LoadRunner for unit performance testing, and they find and fix bottlenecks so quickly that it is perceptibly a single task to them.


     "undetectable by traditional behind-the-firewall tools"


Undetectable?  Really?  There's an implication here that your performance testing environments do not include enough of the production infrastructure to find and resolve bottlenecks that would usually only exist in production.  That's only true if you don't replicate 100% of the production environment in your test lab, which is a common limitation for some companies.  But let me be very clear: that is not a limitation of the testing tool.  It is only a limitation of your resources or imagination.  To be totally honest, LoadRunner already fully supports testing, monitoring and diagnostics across nearly 100% of your production environment.  You can even run LoadRunner against your ACTUAL production systems, if you want to (although we don't recommend overloading production...in fact, please don't do that to yourself).  And don't forget, a good replacement for the actual production infrastructure is a virtualized or emulated infrastructure, using solutions like Shunra or iTKO LISA Virtualize.


The word "traditional" is just a derogatory connotation which seeks to discredit any technology that existed before today.  This usually means that there is also very little respect for the existing discipline of performance testing as it is commonly defined and conducted today.  The truth is there is nothing traditional about LoadRunner or load testing.  And to be very honest there's nothing modern about this "outside-the-firewall" testing service provider.  HP SaaS (formerly Mercury's ActiveTest) has been doing this type of testing for nearly 10 years...and they've been doing it successfully with LoadRunner, year-over-year with every new technology innovation that's come along the way.


Don't get me wrong - I do agree there are some physical bottlenecks that cannot be detected "behind-the-firewall".  Those are bottlenecks you might find with your ISP or telco provider, in their systems or switch configurations.  Maybe the routing tables for global internet traffic aren't ideal for your application end-users in New Guinea.  Or maybe the CDN systems are having difficulty with performance throughput and simultaneous cache replication.  But if you find a bug with those OTHER COMPANIES...how do you get those bugs fixed?  Can you force them to optimize or fix the issue?  Is your only option to switch to another external provider with better performance, but perhaps other risks?


So, if we were to re-write this sentence to be more accurate, transparent and honest, it might read:


     "Are you interested in learning how to conduct effective seasonal spike testing of your production systems from outside-the-firewall, so you can enhance your existing internal performance testing efforts by diagnosing additional problems that you'll find with the external production infrastructure that you probably don't have in your own testing lab?"


(I guess it doesn't sound as catchy, eh?)



Are We Done Yet?

When is a user story considered done in an agile project?  Depending on whom in the project I ask, the response will be different.  A developer might consider a story done when it has been unit tested and its defects have been addressed.  A QA person might consider a story done when its functionality has been successfully tested against its acceptance criteria.  An application owner or a stakeholder might consider a story done when the story has been architected, designed, coded, functionally tested, performance tested, integration tested, accepted by the end-user, beta tested, and successfully deployed.


Clearly, a standard is needed to properly define the term “done” in agile projects.  The good news is that you can have your own definition of “done” for your agile projects.  However, it is important that everyone on the team collaboratively agrees to this definition of done.  The definition of done might also vary by the organization's stage of agile adoption (see figure below).  During the early adoption days of agile methodologies, a team might agree that the definition of done is limited to Analysis, Design, Coding, and Functional and Regression Testing (the innermost circle).  This means that the team is taking on a performance testing debt from each sprint and moving it to the hardening sprint.  This is a common mistake, as most performance issues are design issues and are hard to fix at a later stage.




As the team becomes more comfortable and mature with agile methodologies, they expand the definition-of-done circle to include first Performance Testing and then User Acceptance Testing – all within a sprint.


Here are some tips for including performance testing in the definition of done:


• Gather all performance-related requirements and address them during system architecture discussions and planning
• Ensure that the team works closely with the end-users/stakeholders to define acceptance criteria for each performance story
• Involve performance testers early in the project, even in the Planning and Infrastructure stages
• Make performance testers part of the development (sprint) team
• Ensure that the performance testers work on test cases and test data preparation while developers are coding those user stories
• Get performance testers to create stubs for external Web services that are being utilized
• Deliver each relevant user story to performance testers as soon as it is signed off by the functional testers
• Ensure that performance testers provide continuous feedback to developers, architects and system analysts
• Share performance test assets across projects and versions
• Schedule performance tests for off-hours to maximize the utilization of time within the sprint


It is important to remember that performance tests are code too, and should be planned just like the application code, so that they become part of sprint planning and execution.
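For instance, here is a minimal sketch in Java of the kind of performance smoke check a sprint's build could run.  The /orders/search endpoint, the test-environment hostname and the 500 ms budget are all assumptions I made up for illustration, not a prescription; agree on real numbers with your own team.

import java.net.HttpURLConnection;
import java.net.URL;

// A bare-bones performance smoke check that a sprint's build could run.
public class OrderSearchPerfCheck {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://test-env.example.com/orders/search?q=widget");
        long worstMillis = 0;

        // Hit the endpoint a handful of times and remember the slowest response.
        for (int i = 0; i < 20; i++) {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.getResponseCode();   // send the request and wait for the response
            conn.disconnect();
            long millis = (System.nanoTime() - start) / 1000000;
            if (millis > worstMillis) {
                worstMillis = millis;
            }
        }

        System.out.println("worst response time: " + worstMillis + " ms");
        if (worstMillis > 500) {
            // Fail the build when the agreed budget is blown.
            throw new AssertionError("Performance budget exceeded: " + worstMillis + " ms > 500 ms");
        }
    }
}

Even something this small keeps performance visible inside the sprint instead of deferring it to a hardening phase.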


To me, including performance testing in the definition of done is a very important step in confidently delivering a successful application to its end-users. Only the paranoid survive – don’t carry a performance debt for your application!

About the Author(s)
  • I have been working in the computer software industry since 1989.  I started out in customer support, then moved into software testing, where I was a very early adopter of automation, first functional test automation and then performance test automation.  I worked in professional services for 8 years before returning to my roots in customer support, where I have been a Technical Account Manager for HP's Premier Support department for the past 4 years.  I have been using HP LoadRunner since 1998 and HP Performance Center since 2004.  I also have a strong technical understanding of HP Application Lifecycle Management (Quality Center) and HP SiteScope.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the enterprise performance testing COE market.  She has 14 years of experience in IT consulting, software development, architecture definition and SaaS.  She is responsible for driving future strategy, roadmap, optimal solution usage and best practices, and serves as primary liaison for customers and the worldwide field community.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner