3 new HP technical white papers on automated testing now available

Three new HP technical white papers on various aspects of automated testing are now available. They are:

  • Building an Efficient Component-Based Test Automation Framework
  • Automated Testing and Continuous Integration
  • Getting Started with API Testing

Click the links to read the documents.  Let me know what you think by leaving a comment in the box below.

 

Here's a summary of each of the white papers:

 

Building an Efficient Component-Based Test Automation Framework

 

Enterprises that achieve consistently high levels of quality have a number of characteristics in common: they design their tests in a way that encourages reuse, their test maintenance costs are low, and there is a high level of communication and transparency between different roles, such as testers, developers, architects, and managers. These enterprises typically have a testing framework in place, which helps them achieve these characteristics by:

 

  • Allowing testers to perform manual or automated testing, while supporting migration toward automated testing
  • Providing a way for testers and other personas to collaborate and maximize reuse of testing assets, in order to minimize test development time and to streamline test maintenance whenever a part of the application changes
  • Managing the relationships and history of each of the parts of the tests, including the results of the tests’ execution, and allowing multiple personas in the organization to have access to that information

These testing frameworks are typically founded on small, reusable components, which may be manual or automated. This is the first in a series of papers that presents best practices for setting up a component-based testing framework to help your organization maximize testing efficiency. This paper introduces the concept of component-based frameworks and describes how to identify the components in your application.
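
To make the idea of small, reusable components more concrete, here is a minimal sketch in Python. The Component class and the Login and Search steps are hypothetical illustrations of the general approach, not code from the white paper. Each component is defined once and shared across tests, so when a part of the application changes, only the affected component needs to be updated rather than every test that touches it.

class Component:
    """A small, reusable test step shared across many tests."""

    def __init__(self, name, action):
        self.name = name      # human-readable name shown in reports
        self.action = action  # callable that performs the step

    def run(self, context):
        print("Running component: " + self.name)
        self.action(context)


def login(context):
    # In a real framework this step would drive the UI or call an API.
    context["user"] = "test_user"


def search(context):
    context["results"] = ["item 1", "item 2"]


# Components are defined once and reused by any test that needs them.
LOGIN = Component("Login", login)
SEARCH = Component("Search", search)


def run_test(components):
    """Compose reusable components into a single test flow."""
    context = {}
    for component in components:
        component.run(context)
    return context


if __name__ == "__main__":
    # Two different tests reuse the same Login component; if the login
    # screen changes, only that one component needs updating.
    run_test([LOGIN, SEARCH])
    run_test([LOGIN])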

 

Automated Testing and Continuous Integration

 

Many development teams are turning to continuous integration (CI) as a means of improving the quality of their software and reducing the amount of time it takes to deliver it. CI provides feedback about the quality of a build as soon as possible. It reduces the risk of integrating code back into the source code repository by encouraging developers to commit their code frequently, building the code as soon as it’s checked in, and running unit tests on the resulting modules. The developer receives immediate feedback about whether the change prevents the code from compiling or introduces unintended consequences in the modified module or in related modules.
CI systems can be further configured to deploy the newly built modules to a production or production-like environment and then perform further testing. This brings the following additional benefits:

 

  • Know immediately whether everything necessary to deploy the modules has been checked in
  • Rapidly test a number of real-life scenarios with data-driven automated testing
  • Quickly discover whether the change created any adverse effects in the production system

This paper will explore how developers can take their CI to the next level by introducing automated testing to their
processes.
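
As a rough illustration of the kind of data-driven automated test a CI server could run after each build, here is a hedged sketch using pytest. The discount_price function and the scenario data are hypothetical examples, not taken from the white paper.

import pytest


def discount_price(price, discount_percent):
    """Toy business-logic function standing in for a freshly built module."""
    return round(price * (1 - discount_percent / 100.0), 2)


# Each tuple is one real-life scenario; the build fails fast if any breaks.
SCENARIOS = [
    (100.0, 0, 100.0),   # no discount
    (100.0, 10, 90.0),   # simple percentage discount
    (20.0, 25, 15.0),    # quarter off
]


@pytest.mark.parametrize("price,discount,expected", SCENARIOS)
def test_discount_price(price, discount, expected):
    assert discount_price(price, discount) == expected

Wired into a CI job that runs immediately after the build step, a commit that breaks any of these scenarios fails the build within minutes rather than surfacing weeks later.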

 

Getting Started with API Testing

 

Composite applications and service-oriented architectures (SOA) call services and components through their APIs. The API layer contains any number of services, message queues, database abstraction layers, and other mechanisms that supply important business logic and data to applications. Without proper functional testing of the APIs, poorly designed or poorly functioning services and components can negatively impact the quality of the overall system.


Despite the inherent risk of depending on functionality provided through the API layer, the testing of services and components is often limited to unit tests created by developers, if it’s done at all. There are several reasons for this: there might not be a GUI; the service might come from the cloud; testing requires an understanding of the API; and tools and practices tend to focus on testing after the GUI (if there is one) becomes stable, late in the application lifecycle.


It is simply incorrect to assume that business logic can be adequately validated through the application’s GUI. This approach has the obvious disadvantage of waiting until late in the application lifecycle to discover bugs when they’re more expensive to fix, but has other shortcomings as well. The GUI might exercise only a subset of the service’s functionality. What works for one application may fail later for another that exposes different paths in the business logic. And some aspects of the functionality are not exposed through a GUI, such as performance and security.


This document presents some best practices that you can use to introduce automated API testing into your organization.
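
For a sense of what automated API testing can look like in practice, here is a hedged sketch of a functional test written directly against a REST endpoint using Python’s requests library. The URL, resource, and field names are hypothetical placeholders, not an actual HP service or example from the paper.

import requests

BASE_URL = "https://example.com/api/v1"  # assumed test environment


def test_get_order_returns_expected_fields():
    # Exercise the business logic directly through the API layer, with no GUI involved.
    response = requests.get(BASE_URL + "/orders/12345", timeout=10)

    assert response.status_code == 200
    order = response.json()
    assert order["id"] == 12345
    assert order["status"] in {"open", "shipped", "cancelled"}
    assert float(order["total"]) >= 0


def test_unknown_order_returns_404():
    # A negative path that GUI-driven tests rarely exercise.
    response = requests.get(BASE_URL + "/orders/does-not-exist", timeout=10)
    assert response.status_code == 404

Because tests like these talk to the service itself, they can run as soon as the API is available, well before any GUI stabilizes late in the application lifecycle.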

Comments
Andy Jones (anon) | 10-30-2013 07:11 AM

The links no longer work. They result in a 404 error page.

10-31-2013 01:19 AM - edited 10-31-2013 01:19 AM

Thanks, Andy.  I've updated the links in the original post.  For convenience, here are the updated links:

 

About the Author
Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.

