HP LoadRunner and Performance Center Blog

Load testing and continuous integration for Agile and non-agile environments

(This post was written by Gal Shadeck and Yuriy Pavlovsky from the LoadRunner R&D team)

 

As more and more software companies and departments switch to Continuous Integration and Continuous Delivery practices, it becomes crucially important to integrate load and performance tests into the process. Developers and DevOps engineers must make sure that each new build which goes directly to QA, or even directly to the end user, doesn’t introduce performance regressions. Additionally, it’s sometimes important to include existing unit tests in load-testing scenarios, or perform additional analysis on unit test execution.

 

Continue reading to find out about the new features in LoadRunner 11.52 that are aimed at continuous execution of performance tests and integration with unit testing frameworks.
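The idea of including existing unit tests in load-testing scenarios can be sketched conceptually. The snippet below is not LoadRunner's API — the test function, user count and latency budget are all hypothetical — but it shows the shape of a CI gate that runs an existing unit-test-style function under concurrency and fails the build when a latency percentile exceeds an agreed budget:

```python
# Conceptual sketch (NOT LoadRunner's API): run an existing unit-test-style
# function under concurrency and gate the build on a latency percentile.
import time
from concurrent.futures import ThreadPoolExecutor

def checkout_smoke_test():
    """Stand-in for an existing unit test; the name and work are hypothetical."""
    time.sleep(0.01)  # simulate the work the real test exercises

def run_under_load(test_fn, users=20, iterations=5):
    """Run test_fn from `users` concurrent workers, `iterations` times each."""
    latencies = []
    def worker():
        for _ in range(iterations):
            start = time.perf_counter()
            test_fn()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)
    return sorted(latencies)

latencies = run_under_load(checkout_smoke_test)
p90 = latencies[int(len(latencies) * 0.9) - 1]
# A CI job would fail the build here if p90 exceeds the agreed budget.
assert p90 < 2.0, f"performance regression: p90={p90:.3f}s"
```

A dedicated load-testing tool adds pacing, ramp-up, protocol-level recording and analysis on top of this basic pattern, but the pass/fail gate wired into the build is the essential CI ingredient.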

Is the “Buddy system” the key to having better quality with Agile?

It is an interesting analogy. I scuba dive. And in diving, we believe in the buddy system, i.e. you never dive alone. But what does that have to do with Agile? Read on...

Sticky ToolLook Interview: Agile Performance Testing

I love my StickyMind!...actually, it's Sticky ToolLook that contacted me to ask a few questions about our observations of customers who are adopting Agile Performance Testing methods.  As you know in testing, asking good questions leads to finding good answers - that's exactly what Joey McAllister sent along...Excellent questions! 

 


 

So, check out the Sticky ToolLook newsletter here to learn more!

Autobiography of a Performance User Story

I am a performance requirement and this is my story. I just got built and accepted in the latest version of a Web-based SaaS (software as a service) application (my home!) that allows salespersons to search for businesses (about 14 million in number) and individuals (about 200 million in number) based on user-defined criteria, and then view the details of contacts from the search results. The application also allows subscribers to download the contact details for further follow-up.


I’m going to walk through my life in an agile environment—how I was conceived as an idea, grew up to become an acknowledged entity, and achieved my life’s purpose (getting nurtured in an application ever after). First a disclaimer – the steps described below do not exhaustively describe all the decisions taken around my life.



It all started about three months back. The previous version of the application was in production with about 30,000 North American subscribers. The agile team was looking to develop its newer version.


One of the strategic ideas that had been discussed in quite some detail was upgrading the application’s user interface to a modern Web 2.0 implementation, using more interactive and engaging on-screen process flows and notifications. The proposed changes were primarily driven by business conditions, market trends and customer feedback. Management had a vision of capturing a bigger slice of the market: the expectation was to add 100,000 new subscribers within twelve months of release, all from North America. A big revenue opportunity! Because the changes were confined to the user interface, no one thought of the potential impact on application performance. I was nowhere in the picture yet!


Due to the potential revenue impact of the user interface upgrade, the idea got moved high up in the application roadmap for immediate consideration. The idea became a user story that moved from the application roadmap to the release backlog. Application owners, architects and other stakeholders started discussing the upgrade in more detail. During one such meeting, someone asked the P-question—what about performance? How will this change impact the performance of the application? It was agreed that the performance expectations of the user-interface changes should be clearly captured in the release backlog. That’s when I was conceived. I was vaguely defined as – “As an application owner of the sales leads application, I want the application to scale and perform well for as many as 150,000 users so that new and existing subscribers are able to interact with the application with no perceived delays.”


During sprint -1 (the discovery phase of the release planning sprint), I was picked up for further investigation and clearer definition. Different team members investigated my implications. The application owner considered application usage growth for the next 3 years and came back with a revised peak number of users (300,000). The user interface designer built a prototype of the recommended user-interface changes, focusing on the most intensive transaction of the application – when a search criterion is changed, the number-of-contacts-available counter on the screen needs to be updated immediately. The architect tried to isolate possible bottlenecks in the network, database server, application server and Web server due to the addition of more chatty Web technologies such as AJAX, JavaScript, etc. The IT person looked at the current utilization of the hardware in the data center to identify any possible bottlenecks and came back with a recommendation to cater to the expected increase in usage. The lead performance tester identified the possible scenarios for performance testing the application. At the end of sprint -1, I was re-defined as – “As an application owner of the sales lead application, I want the application to scale and perform well for as many as 300,000 simultaneous users so that when subscribers change their search criteria, an updated count of leads available is refreshed within 2 seconds on the screen.” I was defined with more specificity now. But was I realistic and achievable?
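A figure like “300,000 simultaneous users” only translates into a load-test workload once you assume how often each user acts. Below is a back-of-the-envelope sketch using Little’s Law; the 2-second response time comes from the story, but the 30-second think time is purely an assumed figure for illustration:

```python
# Little's Law sizing: throughput = concurrency / (response time + think time)
concurrent_users = 300_000   # peak simultaneous users from the revised story
response_time_s = 2.0        # target refresh time from the story
think_time_s = 30.0          # ASSUMED pause between searches (illustrative only)

searches_per_sec = concurrent_users / (response_time_s + think_time_s)
print(f"{searches_per_sec:.0f} searches/sec to simulate")  # 9375 with these numbers
```

Halving the assumed think time roughly doubles the arrival rate the servers must sustain, which is why the think-time assumption deserves as much scrutiny during planning as the headline user count.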


During sprint 0 (the design phase of the release planning sprint), I was picked up again to see the impact I would have on the application design. The IT person realized that to support the revised number of simultaneous users, additional servers would need to be purchased. Since that procurement would take a long time, his recommendation was to scale the number of expected users back to 150,000. Due to the short time available, the user interface designer decided to limit the Web 2.0 translation to the search area of the application and put the remaining functional stories in the product backlog. The architect made recommendations to modify the way some of the Web services were being invoked and to fine-tune some of the database queries. A detailed design diagram was presented to the team leads along with compliance guidelines. The lead performance tester focused on getting the staging area ready for me. I was re-shaped to – “As an application owner of the sales lead application, I want the application to scale and perform well for as many as 150,000 simultaneous users so that when subscribers change their search criteria, an updated count of leads available is refreshed within 2 seconds on the screen.” I was now an INVESTed agile story, where INVEST stands for independent, negotiable, valuable, estimable, small (right-sized) and testable.


During the agile sprint planning and execution phase, developers, QA testers and performance testers were all handed the requirements (including mine) for the sprint. While developers started making changes to the code for the search screen, QA testers got busy writing test cases, and performance testers finalized their testing scripts and scenarios. Builds were prepared every night and incremental changes were tested as soon as new code was available. Both QA testers and performance testers worked closely with the developers to ensure functionality and performance were not compromised during the sprint. Daily scrums provided the much-needed feedback to the team so that everyone knew what was working and what was not. A lot of time was spent on me to ensure my 2-second requirement did not slip to 3 seconds, as that would have a direct impact on customer satisfaction. I felt quite important, sometimes even more than my cousin story of the search screen user interface upgrade! At the end of a couple of 4-week sprints, the application was completely revamped with Web 2.0 enhancements, with functionality and performance fully tested – ready to be released. I was ready!


Today, I will be deployed to the production environment. No major hiccups are expected, as during the last two weeks I was beta tested by some of our chosen customers on the staging environment. The customers were happy and so were internal stakeholders with the outcome. During these two weeks, I hardened myself and got ready to perform continuously and consistently. Even though my story is ending today, my elders have told me that I will always be a role model (baseline) for future performance stories to come. I will live forever in some shape or form!


Where does Performance Testing fit in Agile SDLC?

Is the agile software development lifecycle (SDLC) all about sprinting, i.e. moving stories from the product backlog to the sprint backlog and then executing iterative cycles of development and testing? IMHO, not really! We all know that certain changes in an application can be complex, critical or have a larger impact, and therefore require more planning before they are included in development iterations. Agile methodologies (particularly Scrum) accommodate application planning and long-term, complex changes in a release planning sprint called sprint 0 (zero), which is primarily driven by business stakeholders, application owners, architects, UX designers, performance testers, etc.


Sprint 0 brings a bit of the waterfall process into agile, with two major differences – sprint 0 is shorter in duration (2-4 weeks) and the emphasis on documentation is lighter than in the waterfall method. In my experience, sprint 0 is more efficient when it overlaps: while the development team and testers are working on sprints of the current release, the stakeholders, architects, application owners, business analysts, leads (development, QA, performance testing, user interface design) and other personas get together to scope, discuss and design the next release. Sprint 0 is executed like any other sprint, with contributors (pigs) and stakeholders (chickens) who meet daily to discuss their progress and blockages. Moreover, sprint 0 need not be as long as a development iteration.


I have seen organizations further divide sprint 0 into two sprints, i.e. sprint -1 (minus one) and sprint 0. Sprint -1 is a discovery sprint: the team goes over the user stories to be included in the release and discovers potential problems/challenges in the application, processes, infrastructure, etc. Sprint -1 produces an updated release backlog, updated acceptance criteria for more clarity, high-level architectural designs, high-level component designs, user interface storyboards and high-level process layouts. Sprint 0 then becomes the design sprint that goes a level deeper to further update the release backlog and acceptance criteria, and delivers user interface wireframes, detailed architectural & component designs, and updated process flows.


The big question is, where do performance testing requirements fit in the agile SDLC described above? While “good” application performance is an expected outcome of any release, its foundation is really laid during the release planning stage, i.e. in sprints -1 and 0. We know that user stories describing the performance requirements of an application can impact various decisions taken on its design and/or implementation. In addition, functional user stories that can potentially affect the performance of an application are also looked at in detail during the release planning stage. Questions like these are asked and, hopefully, addressed: whether or not the application architecture needs to be modified to meet the performance guidelines; whether or not the IT infrastructure of the testing and production sites needs to be upgraded; whether or not newer technologies such as AJAX being introduced in the planned release can degrade the performance of the application; whether or not the user interface designs being applied in the planned release can degrade performance; whether or not making the application available to new geographies can impact performance; whether or not the expected increase in application usage is going to impact performance; etc. At the end of sprint -1, the team may choose to drop or modify some performance-related stories, or take on a performance debt for the application.


Going into sprint 0, the team will have an updated release backlog and acceptance criteria for the accepted user stories. During this sprint, the team weighs the application’s performance requirements against the functional and other non-functional requirements to further update the release backlog. At the end of sprint 0, some requirements (functional and non-functional) are either dropped or modified, and detailed designs are delivered for the rest of the stories. Sprint 0 user stories then transition into the sprint planning sessions for sprints 1-N of the development and testing phase. Throughout these sprints, the application is tested for functionality, performance and other non-functional requirements so that at the end of every sprint, completed stories can potentially be released.


Agile methodologies also allow for a hardening sprint at the end of sprints 1-N, for end-to-end functional, integration, security and performance testing. The hardening sprint need not be as long as the development sprints (2-4 weeks) and is an optional step in an agile SDLC. This is the last stage where performance testers can catch performance issues before the application gets deployed to production. But we all know that performance issues found at this stage are more expensive to fix and can have bigger business implications (delayed releases, dissatisfied end-users, delayed revenue, etc.). If the planning in sprints -1 and 0 and the subsequent execution in sprints 1-N were done the right way, chances are that the hardening sprint is more of a final feel-good step before releasing the application.

Are We Done Yet?

When is a user story considered done in an agile project? Depending on whom in the project I ask, the response will be different. A developer might consider a story done when it has been unit tested and its defects have been addressed. A QA person might consider a story done when its functionality has been successfully tested as per its acceptance criteria. An application owner or a stakeholder might consider a story done when the story has been architected, designed, coded, functionally tested, performance tested, integration tested, accepted by the end-user, beta tested, and successfully deployed.


Clearly, a standard is needed to properly define the term “done” in agile projects. The good news is that you can have your own definition of “done” for your agile projects. However, it is important that everyone on the team collaboratively agrees to this definition. The definition of done might vary by the adoption stage of agile methodologies in an organization (see figure below). During the early days of agile adoption, a team might agree that the definition of done is limited to Analysis, Design, Coding, and Functional and Regression Testing (the innermost circle). This means that the team is taking on a performance testing debt from each sprint and moving it to the hardening sprint. This is a common mistake, as most performance issues are design issues and are hard to fix at a later stage.




As the team becomes more comfortable and mature with agile methodologies, they expand the definition of done circle to first include Performance Testing and then include User Acceptance Testing – all within a sprint.


I have some tips for you on including performance testing in the definition of done:


·         Gather all performance-related requirements and address them during system architecture discussions and planning


·         Ensure that the team is working closely with the end-users/stakeholders to define acceptance criteria for each performance story


·         Involve performance testers early in the project, even in the Planning and Infrastructure stages


·         Make performance testers part of the development (sprint) team


·         Ensure that the performance testers work on test cases and test data preparation while developers are coding for those user stories


·         Get performance testers to create stubs for external Web services that are being utilized


·         Deliver each relevant user story to performance testers as soon as it is signed off by the functional testers


·         Ensure that performance testers are providing continuous feedback to developers, architects and system analysts


·         Share performance test assets across projects and versions


·         Schedule performance tests for off-hours to maximize the utilization of time within the sprint
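One of the tips above suggests stubbing external Web services. A minimal sketch of such a stub is below; the endpoint shape and the canned lead-count payload are hypothetical (chosen to echo the story’s example), and a real stub would mirror whatever contract the production dependency actually exposes:

```python
# Minimal sketch of a Web-service stub for load testing. The payload and
# behavior are hypothetical; a real stub mirrors the production contract.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class LeadCountStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return a canned response instantly, regardless of path.
        body = json.dumps({"leads_available": 14000000}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep load-test console output quiet

server = HTTPServer(("127.0.0.1", 0), LeadCountStub)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"stub listening on port {server.server_address[1]}")
```

The load-test scripts would then point their base URL at the stub’s port, so sprint-time performance runs neither depend on nor hammer the real external dependency.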


It is important to remember that performance tests are code too, and should be planned just like the application code, so that they become part of sprint planning and execution.


To me, including performance testing in the definition of done is a very important step in confidently delivering a successful application to its end-users. Only the paranoid survive – don’t carry a performance debt for your application!

Performance Management for Agile Projects

Performance management is an integral part of every software development project. When I think of agile projects, I think about collaboration, time to market, flexibility, etc. But to me the most important aspect of agile processes is the promise of delivering a “potentially shippable product/application increment”. What this promise means for application owners and stakeholders is that, if desired, the work done in an iteration (sprint) has gone through enough checks and balances (including meeting performance objectives) that the application can be deployed or shipped. Of course, the decision to deploy or ship the application is also driven by many other factors, such as the incremental value added to the application in one sprint, the effect of an update on the company’s operations, and the effect of frequent updates on customers or end-users of the application.


Often, application owners fail to provide an objective assessment of application performance in the first few sprints, or until the hardening sprint—just before the application is ready to be deployed or shipped. That is an “agile waterfall” approach, where performance and load testing are set aside until the end. What if the architecture or design of the application needs to change to meet the performance guidelines? There is also a notion that performance instrumentation, analysis and improvement are highly specialized tasks, which results in resources not being available at the start of a project. This happens when the business and stakeholders are not driving the service level measurements (SLMs) for the application.


Application owners and stakeholders should be interested in the performance aspects of the application right from the start. Performance should not be an afterthought. The application’s backlog in agile contains not only the functional requirements of the application but also the performance expectations from the application. For example, “As a user, I want the application site to be available 99.999% of the time I try to access it so that I don’t get frustrated and find another application site to use”.  Performance is an inherent expectation behind every user story. Another example may be, “As an application owner, I want the application to support as many as 100,000 users at a time without degrading the performance of the application so that I can make the application available globally to all employees of my company”. These stories are setting the SLMs or business-driven requirements for the application, which in turn will define the acceptance criteria and drive the test scripts.
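It is worth making a story like “available 99.999% of the time” concrete for stakeholders, because five nines is a far stricter budget than it sounds. A quick back-of-the-envelope calculation (ignoring planned maintenance windows, which a real SLM would spell out):

```python
# Translate an availability percentage into an annual downtime budget.
availability = 0.99999                    # "five nines" from the user story
minutes_per_year = 365 * 24 * 60          # 525,600 minutes in a non-leap year
downtime_minutes = minutes_per_year * (1 - availability)
print(f"{downtime_minutes:.2f} minutes of downtime allowed per year")  # about 5.26
```

Roughly five minutes per year leaves no room for a single leisurely restart, which is exactly the kind of conversation these SLM-setting stories are meant to trigger.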


It is important that, if a sprint backlog has performance-related user stories (and I’ll bet you nearly all of them do), its team includes IT infrastructure and performance testers as contributors (“pigs” in Scrum terminology). During release planning (preferably) or sprint planning sessions, these contributors must spend time analyzing what testing must be performed to ensure that these user stories are considered “done” by the end of the sprint. Whether they need to procure additional hardware, modify the IT infrastructure for load testing, or work on the automation of performance testing, these contributors are active members of the sprint team, participating in daily scrums. They must keep constant pressure on developers and functional testers to deliver the functionality for performance testing. After all, the success of the sprint is measured by whether or not every member delivered a final product that fully met the acceptance criteria, on time.




To me, performance testing is an integral part of the agile process, and it can save an organization money. The longer you wait to conduct performance tests, the more expensive it becomes to incorporate changes. So don’t just test early and often – test functionality and performance in the same sprint!


About the Author(s)
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • Mukulika is Product Manager for HP Performance Center, a core part of the HP Software Performance Validation Suite, addressing the Enterprise performance testing COE market. She has 14 years experience in IT Consulting, Software development, Architecture definition and SaaS. She is responsible for driving future strategy, roadmap, optimal solution usage and best practices and serves as primary liaison for customers and the worldwide field community.
  • HP IT solution architect. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner