02-06-2014 09:51 AM
I want to run all my test cases against different browsers. I can create a separate test set for each browser and pull the same test instances into each of these test sets. But the requirements to which these test cases are linked show as Passed (in Coverage Analysis) as soon as I pass a single instance in one test set, even though I haven't run the rest yet.
I was wondering whether creating a separate Cycle in the Releases module for each browser would help, if I then assign each browser-specific test set to its corresponding cycle. Any feedback is appreciated.
I don't want to create test configurations for each test case for the different browsers, since I already have configurations for user roles and don't want to make things too complicated (at least I assume it will become too hard to maintain if I have too many configurations, especially the way the feature is designed).
Please let me know if this is a reasonable method or if there is a better way to track coverage.
Note: I'm using Test Plan and Test Lab, not BPT.
02-07-2014 01:04 PM
In your test case(s), add a test configuration for each browser.
Then, from your Requirement > Test Coverage, link to the appropriate Test Configuration for the Test Case. The Test Configurations appear in this view in the bottom-right of the GUI.
HP Technical Solutions Consultant V