Automation Testing: Why are we making it so complex? Part 3: Just the automation facts ma’am

A new page has fallen off the calendar for 2013 (I know, it happened a few months back) and here we sit again discussing automation. The calendar flip is true, unless you are not using paper calendars; then you have to tab left to see last month. But I digress (I tend to do that).

 

In my previous two posts (part one and part two), we talked about simplifying the process of testing by having guidelines instead of rigid frameworks. This allows our automation to be flexible and not burdened by the structure that a framework brings with it. We talked about modules and how to make them work for us rather than against us. We have looked at how to make our automation streamlined and reusable. Now we can really begin to perform massive amounts of automation because of the lower overhead and increased speed to test that the reusable modules give us. But should we really focus on automating everything, and how do we find out what the level of effort is going to be?
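To make the reusable-module idea from those earlier posts concrete, here is a minimal sketch in plain Python (the post's own world is QTP/Quality Center, so treat this as an illustration only; the application, fields, and data are all hypothetical):

```python
# Hypothetical reusable modules: small, single-purpose building blocks
# that many test cases can share instead of re-scripting the same steps.

def login(session, user, password):
    """Reusable module: sign in and return the resulting page name."""
    session["user"] = user
    session["authenticated"] = password == session.get("expected_password")
    return "home" if session["authenticated"] else "login"

def create_order(session, item):
    """Reusable module: depends on a successful login, not on any one test."""
    if not session.get("authenticated"):
        raise RuntimeError("must log in first")
    session.setdefault("orders", []).append(item)
    return len(session["orders"])

# Two different test cases reuse the same modules; neither one owns them.
session = {"expected_password": "s3cret"}
assert login(session, "tester1", "s3cret") == "home"
assert create_order(session, "widget") == 1
```

The point is the shape, not the code: each module does one thing, makes no assumptions about which test is calling it, and so lowers the overhead of every new test that needs it.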

 

When I talk with testers and Quality Assurance Organizations (QAO) at client sites, I always ask the same question.

 

Q: “How much automation do you think you need to do for a project?”

A: “We need to automate everything.”

 

I ask the question of mature and immature organizations alike. It doesn't matter where they are in the process; the answer is always the same—automate everything. It sounds like a great idea, and I think we can all understand why we would want to reach a goal like that. But can it really be achieved?

 

Is “automate everything” achievable or simply a pipe dream?

 

With the speed of application development, especially with Agile and Independent Change-based development, is it possible to have complete levels of automation across the board? Even with very modular and reusable assets in your automation arsenal, you still have:

  • Maintenance
  • Data cleanup and mining
  • Introduction of new functionality
  • Retirement of existing functionality

Can your testing organization keep pace with your development in the time period that you have allotted for testing?

 

I think we can achieve these goals, but we will need to increase time for testing, increase personnel for testing, or bring testing cycles up earlier in the development phase. All of these can impact your development timeline as well. They all cost more time and money, an albatross that we can all agree QAOs have been saddled with from day one. So, we need to automate what really counts instead of focusing on automating areas where we are going to waste time. This is how we can become more efficient and make sure we are providing the largest automation bang for the buck.

 

We know what testing is. The table below tells us what we should be looking for from our tests when we approach any testing solution.

 

Table 1  Guiding principles for automation design

Automation must be…

What does this mean?

Accurate

  • Is it based on actual requirements?
  • Is its goal unambiguous?
  • Does its manual test case make no assumptions? (Make requests for more information from the BA or Dev teams.)

Economical

  • Do what you need to do, not what you think you need to
  • Create the least number of modules that provide coverage for the given requirement
  • Focus first on high priority/high risk requirements

Consistent

  • Develop naming conventions and apply them consistently
  • Develop a glossary of terms used within your automation

Appropriate

  • Does the test achieve the goal of verifying the requirement as it is specified, no more and no less?
  • Is the goal achievable by using valid data?

Traceable

  • Link automation to the appropriate requirements to show coverage
  • Link defects to the appropriate test executions to show defect flow
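As a quick illustration of the "Consistent" and "Traceable" rows above, here is one way to link automation to requirements and defects to executions. This is a hedged sketch in Python, not how Quality Center does it, and the requirement IDs and test names are invented:

```python
# Hypothetical traceability ledger: every automated test carries the
# requirement it covers, so coverage and defect flow fall out for free.

REQUIREMENTS = {"REQ-101", "REQ-102", "REQ-103"}

tests = [
    {"name": "test_login_valid",   "requirement": "REQ-101", "passed": True},
    {"name": "test_login_invalid", "requirement": "REQ-101", "passed": True},
    {"name": "test_checkout",      "requirement": "REQ-102", "passed": False},
]

# Coverage: which requirements have automation, and which do not.
covered = {t["requirement"] for t in tests}
uncovered = REQUIREMENTS - covered
assert uncovered == {"REQ-103"}   # REQ-103 has no automation yet

# Defect flow: failures trace back to the executions that produced them.
defects = [t["name"] for t in tests if not t["passed"]]
assert defects == ["test_checkout"]
```

Notice that the consistent naming convention (test_ prefix, one requirement per record) is what makes the reporting a two-line query instead of a spreadsheet exercise.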

 

Those are great, and I am sure anyone who has taken the QTP class in the past remembers these principles, but there are some additional areas that we can look at to make our automation even more specific and appropriate. First, we need to understand the level of effort in automation so we can have concrete numbers to show how this automation effort will impact or fit within project schedules. Here are some examples of areas where you should be asking questions during the information-gathering phase to give the automation team more data to work with.

 

Environmental Conditions: Knowing the environment the application is developed in can give us insight into the level of effort for automation and the stability of the platform:

  • What is the interface method? 
  • Is this a Development, SQA, Stage or some special AUT environment? 
  • Who are the owners of the AUT environment? 
  • Is there specialized client side setup that needs to take place for the testers to access this system?

 

Data Constraints: Knowing where the data is coming from and what the usage of this data is can help us tailor our strategy to how you automate those solutions:

  • Are we going to be responsible for the data we are using?  If not, who is giving us this data?
  • What is the nature of this data? 
  • Is it 'use once' data, or can we cycle over and over again on the same data?
  • Are there linkages between data from one test case to the next?
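The 'use once' versus 'cycle over and over' distinction drives a lot of the data-mining effort, so here is a small sketch of why it matters. The data values and the two-pool design are hypothetical, just to show the difference in behavior:

```python
from collections import deque

# Hypothetical data pools. Single-use data (e.g. one-time account numbers)
# is consumed as tests run; reusable data (e.g. static roles) never is.
single_use = deque(["ACCT-001", "ACCT-002"])
reusable = ["guest", "admin"]

def next_single_use():
    """Hand out a value exactly once; running dry means mining more data."""
    if not single_use:
        raise RuntimeError("out of test data -- mine or generate more")
    return single_use.popleft()

assert next_single_use() == "ACCT-001"
assert next_single_use() == "ACCT-002"
# The single-use pool is now empty, but the reusable pool can cycle forever.
for _ in range(3):
    assert reusable[0] == "guest"
```

A test suite built on single-use data has an ongoing data-supply cost that a suite on cyclable data does not, and that cost belongs in your level-of-effort numbers.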

Access Controls: Knowing how the testers will have access to the environment can impact timelines:

  • What level of access to the application do we need to sign in with? 
  • Are all the test cases for this project utilizing the same permission levels? 
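When the test cases do not all share the same permission level, each level multiplies the work, because the same flow has to be exercised under every role. A minimal sketch (the roles and permissions here are made up):

```python
# Hypothetical permission model: the same check runs once per role.
PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def can_delete(role):
    """The 'test case' we must repeat at every access level."""
    return "delete" in PERMISSIONS[role]

# One logical test becomes len(PERMISSIONS) executions.
results = {role: can_delete(role) for role in PERMISSIONS}
assert results == {"viewer": False, "editor": False, "admin": True}
```

Three roles means three sign-ins, three data setups, and three sets of expected results for every workflow you automate, which is exactly why the access-control question belongs in the estimate.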

Workflow Conditions: The more complex the flow and the more new functionality there is, the larger the impact on testing will be:

  • Do we have use case documentation for this project? 
  • Has the process been validated manually?
  • If it’s an existing system, do we have existing test cases documented (either in HP Quality Center or elsewhere)? 
  • Is there training material for your end users that we can leverage now to get more familiar with this system?

 

All of this data allows my organization to provide numbers on the effort that automation will take. Once we look at the data and calculate those numbers, we can see what the actual cost of 'automate everything' will be. More often than not, it's far more effort to "automate everything" than we have time for.
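The calculation itself can be back-of-the-envelope. Every number below is hypothetical; the point is only that build cost plus recurring maintenance usually outgrows the testing window long before you reach "everything":

```python
# Hypothetical level-of-effort math for an 'automate everything' request.
test_cases = 400
hours_to_build_each = 2.0        # scripting + data setup per test
maintenance_per_cycle = 0.25     # hours per test per regression cycle
cycles = 6                       # regression cycles in the release

build_hours = test_cases * hours_to_build_each            # 800.0
maint_hours = test_cases * maintenance_per_cycle * cycles # 600.0
total_hours = build_hours + maint_hours
assert total_hours == 1400.0     # versus, say, a 600-hour testing budget

# Automating only the high-priority half changes the picture considerably.
assert total_hours / 2 == 700.0
```

Swap in your own environment, data, access, and workflow answers from the questions above and the gap between "everything" and "what fits" becomes a number you can put in front of a project schedule.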

 

Instead, we have to focus and look only at automating the things that make sense. The next question is, "How do you make sure you are automating the right thing at the right time?" I think that sounds like an amazing topic for another month, don't you? If you have any questions about the practicality of "automating everything," feel free to reach out to me in the comments section below.

 

Thanks for reading and I will catch you all on the flip side.

 


Comments
cbueche | ‎04-30-2013 07:15 AM

The first focus should be the risk to the business that is supported. Then you should prioritize the test cases and select the riskiest TCs for regression. Only for a release change should everything be tested. So the time to the next release has to be phased into the testing process, along with the maintenance for code changes on both sides.

About the Author
I have more than 12 years in IT, with over 10 years working with the HP Quality Management suite of tools—seven as a Professional Services c...