4 steps for developers and testers to create composite applications that are "Agile" not "fragile"

“Test this application”—a straightforward task, right? A tested app delivers the functionality that users expect, responds in a reasonable amount of time and doesn’t time out or open a security gap where user data gets compromised. And what an application does seems simple as well: it provides user or machine access, performs the designated action, delivers a result and closes gracefully. Seems easy enough…

 

Until recently, it seemed application testing was for the most part, well… manageable. Not necessarily easy, but manageable. Applications lived on a mainframe or large server with a single way in and out—we used to call it “the front end”. The tester identified the application under test and then either manually stepped through its user interface using an authored test case, or created an automated GUI test using something like HP QTP (HP QuickTest Professional, now called HP Unified Functional Testing). Then, as distributed networking became prevalent, people adopted n-tiered architectures. This allowed applications to scale across purpose-built systems and to distribute load—we all (well, maybe not all of us, as I’m dating myself) remember the OSI stack, enterprise app integration (EAI) and middleware.

 

Fast Forward to Composite Applications

 

These architectural evolutions brought integration, or systems, testing to the forefront. Tools and techniques started to propagate widely to support testing integrations. This encouraged test management solutions, like HP Quality Center, to be enhanced with frameworks, like HP Business Process Testing, that ease the complexity of architecting integrated testing scenarios.

 

However, even with n-tier applications and multi-step processes, testing was still manageable. You could identify the application your development team was working on or changing, identify its tiers and how they would be implemented in your data center, and structure your testing accordingly. You could also set up the test cases during your waterfall project lifecycle—which had the luxury of progressing over several months.

 

Let's reset to 2013. Now we live in a virtual, mobile, elastic, impatient and componentized world. Our applications still require the quality users expect, in terms of functionality, performance and security. These applications still represent multi-step business processes, but the way applications are created has changed significantly. Today’s applications are assemblies of modular, self-describing components that can be rapidly orchestrated to deliver functionality in very short cycles. The components are often made up of shared services that are re-used, built by Agile development teams in short sprints, or sourced from the cloud via managed APIs.

 

Re-use—testing friend or foe?

 

While this architectural approach promises incredible speed and flexibility in delivery, it also creates big-time challenges for Agile development and testing teams. Teams now have to determine what to test, and how to manage on-going quality, in a world of increasing dependencies. Re-useable components get orchestrated together, and the result brings unforeseen impacts—for example, re-using a component can often change load and response time in ways that cannot be forecast.

 

An integration may introduce behavior or functional changes due to the types of data being exchanged, or it may cause changes in response order or session management. If a cloud-sourced component is introduced, it may be impossible to get access for testing. And without testing, the team can’t predict if the overall behavior of the business process may be modified by logic in the cloud component. Composite applications, IMO, are the purest example of “Murphy’s law” —if anything can go wrong it will.  If anything can’t go wrong, it will anyway.

 

 

Getting ahead of composite applications is like getting ahead of urban sprawl… we need help from technology and practices; it can’t be managed with brute force. Agile development and testing teams need to plan, and here are four key steps to get started:

 

 

1.  Work on getting dev and test better interconnected with architecture teams—the people defining the shared service architecture and the patterns for re-use and integration. And leverage ALM and requirements tools to gain and share visibility into architecture and re-use dependencies.

 

2.  Set up a testing environment that supports multiple stages of testing—fully included in the Agile project lifecycle. In composite application environments, teams need to conduct early unit, API and exploratory testing to know that the individual component they are building works. Then they need an efficient, automated environment for on-going build verification once the components start coming together. This environment is necessary to check integrity before committing the sprint, and it sets you up for on-going regression testing, which you also want to automate whenever possible. The one-two punch of build verification and regression testing helps ensure that changes to one part of the app don’t let hidden integration defects slip through.
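To make step 2 concrete, here is a minimal sketch of build-verification tests in Python (pytest style). The `apply_discount` component and its tier rates are hypothetical, invented for illustration—the point is that every build runs fast, deterministic checks on a re-used component before the sprint commits.

```python
# Hypothetical shared component that several teams re-use
# inside the composite application (illustration only).
def apply_discount(price: float, tier: str) -> float:
    """Return the price after the customer tier's discount."""
    rates = {"gold": 0.20, "silver": 0.10, "standard": 0.0}
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 - rates.get(tier, 0.0)), 2)

# Build-verification checks: a runner such as pytest picks up
# the test_* functions and runs them on every build.
def test_known_tiers():
    assert apply_discount(100.0, "gold") == 80.0
    assert apply_discount(100.0, "silver") == 90.0

def test_unknown_tier_gets_no_discount():
    # A re-used component must behave predictably for callers
    # passing values it has never seen before.
    assert apply_discount(50.0, "platinum") == 50.0

def test_rejects_bad_input():
    try:
        apply_discount(-1.0, "gold")
    except ValueError:
        pass
    else:
        raise AssertionError("negative price should be rejected")
```

The same suite doubles as the seed of the on-going regression pack: as integrations are added, each new failure mode gets its own automated check.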

 

3.  Ensure that performance and security are also tested anytime there is a change to an in-house built or cloud-sourced component—include security and performance testing as part of the sprint process and at staging to catch issues at the last mile. Also, whenever possible, bring secure development forward in the lifecycle so that developers can test the security integrity of their code even before committing it into the build process. Learn more about static code analysis for secure development here.
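As a small illustration of putting a performance check inside the sprint, the sketch below times a component call against a latency budget. The `fetch_order_status` call and the 250 ms budget are made-up stand-ins, not a real API:

```python
import time

# Hypothetical component call to put under a latency budget
# (here it just simulates ~10 ms of work).
def fetch_order_status(order_id: str) -> str:
    time.sleep(0.01)
    return "SHIPPED"

def within_budget(fn, budget_seconds: float, *args) -> bool:
    """Run fn once and report whether it met the latency budget."""
    start = time.perf_counter()
    fn(*args)
    return (time.perf_counter() - start) <= budget_seconds

# Gate the sprint build: fail fast if the call blows a 250 ms budget.
assert within_budget(fetch_order_status, 0.25, "ORD-42")
```

In practice you would run many samples under load and check percentiles with a proper load-testing tool; this only shows where such a gate sits in the pipeline.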

 

4.  Automate, automate, automate—teams need laser focus on what they are building and changing, and on the impact of change on the composite application fabric. Focus must be on mitigating risk in this fabric and having real-time insight into the impact of change. If a change introduces risk, the dev and test teams need to organize quickly to run functional, performance and/or security test cases. They shouldn’t be slowed down by unavailable components or clumsy infrastructure provisioning. This is where virtualizing services and automating dev and test lab management come into play. Consider Service Virtualization and Lab Management Automation required infrastructure for any team doing composite application architecture.
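To show the idea behind service virtualization in step 4, here is a toy stub written with Python's standard library: an in-process HTTP server that plays the part of an unavailable cloud component by serving canned responses. The endpoint path and payload are invented for the example; commercial tools do this with recorded behavior, data models and performance characteristics rather than hand-written handlers.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the real (unavailable) cloud API.
# Path and payload are hypothetical, for illustration only.
CANNED = {"/rates/USD-EUR": {"rate": 0.92, "source": "virtualized"}}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "unknown"}).encode())

    def log_message(self, *args):  # keep test output quiet
        pass

def start_virtual_service(port: int = 0) -> HTTPServer:
    """Start the stub on a free port, serving in a background thread."""
    server = HTTPServer(("127.0.0.1", port), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# The component under test is pointed at the stub instead of
# the real cloud dependency.
server = start_virtual_service()
url = f"http://127.0.0.1:{server.server_port}/rates/USD-EUR"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
```

Because the stub is just infrastructure code, it can be provisioned automatically alongside the test lab—no waiting on the third-party component's availability.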

 

 

[Dilbert cartoon—courtesy of dilbert.com]

 

Anyone tasked with managing the complex fabric of composite apps needs tools to help—virtualization and automation are key. Automation wrings out latency and allows teams to focus on exploratory testing, secure code analysis, integration and load testing, so your team can work where it matters. Providing a test environment that supports testing early and often in sprints will help stymie Murphy’s law—and catch hidden defects, especially those that arise from re-use.

 

Want to learn more about ALM and testing solutions for not just surviving but thriving in the world of composite apps? Do you have experience with testing scenarios involving the re-use of components? I would love to hear from you.   

 

Check out these additional learning sources:

 

Composite application lifecycle management—watch this webinar

Continuous functional testing with HP Unified Functional Testing software—watch this webinar

Learn how service virtualization is key for today’s modern applications—download this white paper

 

Follow us on Twitter at #HPSoftwareALM

About the Author
Kelly has over 20 years experience with enterprise systems and software in individual contributor and manager roles across product manageme...

