Part 3 – Testing Evolution Series:
I would like to pose a question to the reader: is automated testing truly effective if someone has to sit and watch the scripts run? I would suggest that this was not the first thought that popped into the minds of the designers of automation testing tools; in fact, I doubt it made the top 100 reasons for automating tests. Think of it like this: developing an automated test script can cost anywhere from 1½ to 3 times as much as a single execution of the equivalent manual test, so to recoup that cost the script needs to run unattended at least two or three times before the automation becomes cost-effective. For example, if one manual execution costs roughly $100 of tester time and the automated version costs $250 to build, the script needs about three unattended runs before it starts paying for itself. However, if the script developer is watching the script run, you will theoretically never recoup that cost; in fact, I would suggest you are losing money every time that script executes, even if the developer is multitasking at the time.
One argument that could hold water is that the sheer speed of an automated script makes even a monitored run cost-effective. I would counter by pointing out that a developer who creates automated test scripts earns 1½ times or more what a typical manual tester earns. The only other argument is that automation gives the testing group consistent results; to that I would respond that if the automator has to watch the script run, the stability of the script itself is called into question. In defense of my colleagues, in most cases this happens when an automator hasn't been given enough time to maintain the current library of automated scripts.
The only way to make the expense of an automated script developer cost-effective is to have him or her developing or debugging test scripts; if they are monitoring the execution of automated tests, that is effectively the same as having a manual tester execute those same tests, only slightly slower. I believe the cornerstone of automation is the ability to test unattended, 24×7. If you can achieve that, you have effectively tripled the capacity of your testing group, moved the most tedious types of testing out of manual testers' hands, and freed them up for high-yield testing such as security and exploratory testing.
I know by now you may have come to the conclusion that I'm anti-automation, and that couldn't be further from the truth; in fact, I consider myself an automation advocate. But to be good at my job, I need to know both sides of the argument and how to get the biggest bang for my clients' buck.
If you are currently using automation and find yourself unable to develop any new scripts, either because you are spending all your time maintaining old scripts or because you are wasting time monitoring test runs, I would like to help you get back on track.
First, don't give up on automation; it is still the most cost-effective way to increase the quality of applications under development. Second, I'm about to give you some of my best-kept tips and tricks, with a side order of best practices.
- If you're having to monitor your automated testing, you may be using the wrong tool for the job. For example, if I have to intervene while a QTP script is running, I should probably be using QTP API calls or another tool that lets me insert the data behind the user interface. If an object or group of objects is taking more than a couple of hours to troubleshoot, try using descriptive programming statements in your script; if that doesn't work, focus on inserting or checking the data behind the presentation layer (there is a short sketch of this after the list). Often when this problem arises it involves animated objects or some unsupported shareware component a developer found on the web, which should raise a whole new set of security questions.
- Think modular when creating scripts and focus on reuse, which will greatly lower your script maintenance costs (see the reusable-function example after this list). The quickest way to implement this kind of structured methodology is the add-on for Application Lifecycle Management (ALM) called Business Process Testing (BPT), which I talked about in my last blog, titled “OOPs is the best way to describe Business Process Testing”, and will cover in greater detail in later blogs. We need to use the same methods and processes the developers use in order to lower the cost of automation.
- Place redundant code and/or error traps within a function inside a function library. The reason is that functional test tools typically load the function library into memory before executing the test script, whereas the test script itself is not a compiled program and is not loaded into memory the same way, which leaves it more susceptible to external influences. In addition, the process running your automated scripts does not, by default, get a high priority from the operating system's scheduler. You have probably noticed that I wrote “function library” in the singular rather than the plural, which may provoke some readers of this blog; however, while loading a couple of function libraries may sometimes be unavoidable, whenever possible you should consolidate your functions into a single function library, which is much easier to support (a sketch of such a library function follows this list).
- Use the built-in error-trapping functionality inside the tool to trap the bulk of the errors that can occur during an automated test run. You can't predict every pop-up or error that may appear during an unattended run, so let the tool document the errors for you and move on. In fact, many testing tools will even email you when an error occurs if you set that up. While this may make your script run slower by seconds or minutes, it is definitely worth the trade-off compared with watching the script run instead of running 24×7. Commonly, after using the built-in error trapping for a while, you will narrow down the trouble spots within an application and start writing your own cleanup functions (see the error-trapping sketch after this list).
- When possible, and depending on your skill level, use descriptive programming (DP) instead of GUI maps. How far you take this will vary with the maturity of your team and the scripts you support. Only attempt DP on scripts that are stable, and only after two or three runs without errors. Don't attempt DP on a first-run script; it is too time-consuming when developing new scripts and can add a layer of complexity to your troubleshooting. Descriptive programming, done correctly, can make your scripts more dynamic and therefore more forgiving when issues arise. Please remember to document your work when using DP; it can make things really hard on the next person if you don't (a short DP example follows this list).
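To illustrate the first tip, about inserting or checking data behind the presentation layer, here is a minimal sketch in VBScript (the language QTP scripts are written in). The connection string, table, and column names are hypothetical; the point is simply that seeding data straight into the application's database can replace a fight with an unstable UI control.

```vbscript
' Hedged sketch: seed test data directly into the database instead of driving
' a flaky UI control. The connection string, table, and columns are invented
' for illustration -- substitute whatever your application actually uses.
Dim conn, sql
Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=TESTDBSRV;Initial Catalog=OrdersDB;Integrated Security=SSPI;"

sql = "INSERT INTO Customers (Name, Region) VALUES ('Automation Test User', 'West')"
conn.Execute sql

conn.Close
Set conn = Nothing

' The UI script can now skip the troublesome data-entry screen and go straight
' to verifying how the application displays the record it already has.
```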
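Here is the reusable-function example promised in the second tip. The application, object names, and credentials are assumptions; the shape is what matters: one business-level function that every script calls instead of repeating the same steps.

```vbscript
' A reusable login routine kept in one place. Every test that needs to log in
' calls this function, so a change to the login screen is fixed once, in one file.
Public Function LoginToApp(userName, password)
    LoginToApp = False
    Browser("OrdersApp").Page("Login").WebEdit("txtUser").Set userName
    Browser("OrdersApp").Page("Login").WebEdit("txtPassword").Set password
    Browser("OrdersApp").Page("Login").WebButton("btnSignIn").Click

    ' Report the outcome so an unattended run documents itself.
    If Browser("OrdersApp").Page("Home").Exist(10) Then
        LoginToApp = True
        Reporter.ReportEvent micPass, "LoginToApp", "Logged in as " & userName
    Else
        Reporter.ReportEvent micFail, "LoginToApp", "Home page did not appear for " & userName
    End If
End Function
```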
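For the third tip, an error trap that lives inside a function in the consolidated function library might look like the sketch below. The object class and names are placeholders; RegisterUserFunc is the standard QTP mechanism for making such a wrapper the default behavior.

```vbscript
' Lives in the single consolidated function library that QTP loads before the run.
' The error trap travels with the function, so every script that clicks through
' this wrapper gets the same protection without duplicating code.
Public Function SafeClick(testObject)
    On Error Resume Next
    testObject.Click
    If Err.Number <> 0 Then
        Reporter.ReportEvent micWarning, "SafeClick", "Click failed: " & Err.Description
        Err.Clear
    End If
    On Error GoTo 0
End Function

' Optionally register SafeClick as the default Click for WebButton objects,
' so existing scripts pick it up without being edited.
RegisterUserFunc "WebButton", "Click", "SafeClick", True
```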
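On the error-trapping tip: QTP's Recovery Scenarios (configured in the test settings rather than in code) handle unexpected pop-ups, and VBScript's own error handling can document what slips through. The sketch below is only an illustration; the dialog title and object names are assumptions.

```vbscript
' A cleanup routine for unattended runs: dismiss a stray dialog if it appears
' and log the fact, so the run keeps moving instead of hanging.
Public Sub CleanupStrayDialogs(stepName)
    If Window("text:=Unexpected Error").Exist(0) Then
        Window("text:=Unexpected Error").Close
        Reporter.ReportEvent micWarning, stepName, "Closed an unexpected error dialog"
    End If
End Sub

' In the script itself, trap the error, let the tool document it, and move on.
On Error Resume Next
Browser("OrdersApp").Page("Orders").WebButton("Submit").Click
If Err.Number <> 0 Then
    Reporter.ReportEvent micWarning, "Submit order", "Trapped error: " & Err.Description
    Err.Clear
    CleanupStrayDialogs "Submit order"
End If
On Error GoTo 0
```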
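And here is the descriptive-programming example from the last tip. The property values are hypothetical; the contrast is between a step that depends on an entry learned into the object repository (GUI map) and the same step written descriptively.

```vbscript
' Object-repository style: "SearchBox" must already exist in the learned repository.
Browser("OrdersApp").Page("Search").WebEdit("SearchBox").Set "widgets"

' Descriptive-programming style: the object is described inline by its properties,
' so no repository entry is needed, and regular expressions absorb minor UI changes.
Browser("title:=Orders.*").Page("title:=Search.*").WebEdit("name:=q", "html tag:=INPUT").Set "widgets"

' As the tip says: document what those properties mean, or the next person
' maintaining the script will have to rediscover them the hard way.
```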
I have a lot more hints for making automation more effective, but I would like to save some of them for my next blog and hopefully entice you, the reader, to return and send me feedback and hints of your own.
As stated above, the biggest time saver is to modularize your scripts or use a tool like BPT to aid in the process of developing test scripts. One of the biggest trade-offs of using BPT is a loss of speed, because Business Components and several other items are constantly being loaded and unloaded from the local computer. For that reason BPT can run slower than a regular automated script, which has led some users to argue that it is too slow to be feasible for them. Much of the time, though, BPT's slowness is self-induced by QTP's defaults or some other local cause. Before giving you hints for avoiding these delays, I would like to get to the heart of the matter: functional testing is not about performance or how fast your script runs; it is about mimicking a real user's interaction with an application or system and making sure that no new functionality has negatively impacted the original application. If your script or test is running unattended, does it really matter whether it takes two minutes or five? In my experience, stable scripts beat fast scripts every time. Besides, it pays to use a tool like LoadRunner for performance testing. As promised, here are some tips for improving BPT's script execution:
- Remember that when you run a normal script through QTP, the script runs locally, so we are inclined to judge time in milliseconds; when you run through BPT you are going through ALM, which has to download everything from a centralized repository, so think in seconds instead. For that reason, please remember to optimize your ALM repository.
- Limit the number of Application Areas to a bare minimum. Each time you load or unload an Application Area, you also load or unload all of the object repositories, function libraries, and everything else associated with that specific Application Area.
- When creating an object repository, only learn the objects that will actually be executed or checked. Remember: the smaller the object repository, the smaller the load time. Using descriptive programming will also reduce the size of your object repository and therefore its load time.
- Turn off Smart Identification when learning objects to be stored in the object repository; it can cause lengthy delays when running through BPT. For any function call with a timeout or checkpoint embedded in it (for example “wait(10)” or “win_exist(10)”), take the number inside the brackets and multiply it by 100 milliseconds if you are using BPT; also remember that an empty set of brackets (“()”) inside a checkpoint is equivalent to 10 × 100 milliseconds. If you have to use a checkpoint, move it into a function in the function library; that seems to remove much of the delay (see the sketch after this list).
- Limit the number of Object Repositories assigned to a single Application Area. When the Object Repositories are loaded, they are assigned a sequential priority, or place in a sequential search order. Imagine it this way: if you have ten Object Repositories and the object you're looking for is in the last one, the system will effectively open and close each repository before finding the item you want. To sum up, the number of object repositories assigned to an Application Area can delay the load time as well as prolong the execution of your scripts.
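To make the Smart Identification and checkpoint tip more concrete, here is a hedged sketch. The Setting key shown is the one I know of for disabling Smart Identification at run time; confirm it against your QTP version, and treat the object names and timeouts as placeholders.

```vbscript
' Switch Smart Identification off for this run so BPT is not paying for its
' retry algorithm on every object (verify this Setting key in your QTP version).
Setting("DisableReplayUsingAlgorithm") = 1

' Instead of scattering timed checks such as Exist(10) through the components,
' keep the wait in one library function so the delay is controlled in one place.
Public Function WaitForObject(testObject, timeoutSeconds)
    WaitForObject = testObject.Exist(timeoutSeconds)
    If Not WaitForObject Then
        Reporter.ReportEvent micWarning, "WaitForObject", _
            "Object was not found within " & timeoutSeconds & " seconds"
    End If
End Function

' Example call from a component (object names are hypothetical):
If WaitForObject(Browser("OrdersApp").Page("Orders").WebButton("Submit"), 10) Then
    Browser("OrdersApp").Page("Orders").WebButton("Submit").Click
End If
```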
Not a lot of people realize this about BPT: if it is set up correctly, a business component can be run either automated or manually, which gives the user the ability to quickly validate errors found by an automated run.
I hope you found this information informative and educational. If you have a tip or trick that I've not listed above, please comment on this blog and list other items that may help fellow readers. If you find any of the hints helpful, please share the link with others. If you would like to know more about any of the products or processes mentioned in this blog, please contact me or send an email to your sales person and reference this editorial.