The Future of Testing Blog
Business innovation thrives on software innovation, and quality matters now more than ever. Application teams are racing to keep up with business demand by embracing the technologies and methodologies that help them build faster and with confidence, so they can engage consumers, go mobile and release continuously. How does a modern testing team keep up? This blog focuses on the importance and role of the tester, innovations in testing processes, test tools, test automation solutions and best practices, and discusses how to balance speed with quality across Agile, mobile, web, composite and cloud application testing.

UFT 11.50 is Ready to Download!

The wait is over! Just in time for the holidays, HP UFT 11.50 is ready for downloading. Here's how you can get it. As an end-of-year bonus, I've also provided some facts and figures about this release.

My dream functionality is now a reality with HP UFT 11.5

Testers can embrace new technology and scoff at automation testing tools that still rely on object identification, events and properties. Image-state technology isn't a new concept. But in the past, it's been very unstable, unreliable and, most of all, unmanageable. That's why I've renamed this feature of Unified Functional Testing (UFT) "Image-based Intelligence".

Technical: How QTP identifies Objects

How QuickTest Professional Identifies Objects

-- A post by Motti Lanzkron from QTP R&D

 

Note: The following describes the inner workings of QuickTest; it is intended to help people understand how things work but may change in future versions and shouldn’t be depended on.

 

QuickTest Professional's bread and butter is identifying controls in the application being tested, and here it faces a tough trade-off, since there are two directions in which one can optimize:

  • Optimize for speed
  • Optimize for robustness

Unfortunately one usually comes at the price of the other. To alleviate this problem, QuickTest has two concepts called description and runtime ID which are used together to achieve both speed and robustness:

 

Description

You should be familiar with the method QuickTest uses to identify Test Objects. This is a set of property/value pairs which (given the Test Object’s parent) describe an object uniquely.

 

Figure 1: Object Identification Dialog

 

This description can be used to describe a control robustly so that it will still match the correct control even when run in a different environment or in different builds of the application. However, finding the object that matches this description may be slow.
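For readers who know the Object Identification dialog but not the scripting side, here is a minimal sketch of how such a description can be expressed directly in a test, either inline (Programmatic Description) or through a Description object. The browser, page and button property values are hypothetical and for illustration only:

' A description given inline as property:=value pairs (values are illustrative)
Browser("title:=My Store.*").Page("title:=My Store.*").WebButton("name:=Add to Cart").Click

' The same idea using a Description object
Set btnDesc = Description.Create()
btnDesc("micclass").Value = "WebButton"
btnDesc("name").Value = "Add to Cart"
Browser("title:=My Store.*").Page("title:=My Store.*").WebButton(btnDesc).Click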

 

Runtime ID

A runtime ID is a way to quickly identify a control in the application being tested; the actual implementation changes according to the technology in question. But first, a bit of background…

 

In Microsoft Windows, each control has a Window which is identified by a handle known as HWND for “Handle to WiNDow”. All of the Windows APIs use HWNDs to access controls. This method is very fast; however, the HWND is like a “black box”. You can’t assume anything about its value since Windows assigns each control an HWND from an internal pool and the value will change every time the application is run. This makes an HWND the perfect match to be the runtime ID of Win32 controls. For example, a WinButton Test Object’s runtime ID may be its HWND.
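As a small, hedged illustration (assuming the Windows-based add-in and an illustrative Notepad window), the HWND is exposed to scripts as the runtime property "hwnd", and its value will normally differ between runs of the application:

' Read the control's HWND - the value is assigned by Windows and changes per run
hwndValue = Window("title:=Untitled - Notepad").GetROProperty("hwnd")
MsgBox "Current HWND: " & hwndValue
' Restarting Notepad and re-running this step would typically show a different number.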

 

Combining the Two

Here is a summary of the characteristics of runtime IDs and descriptions:

 

Figure 2: Runtime ID vs. Description

 

Every Test Object may contain a runtime ID and/or a description. If both are missing, there is no way to identify the control in the application being tested and the Test Object is useless (with a few exceptions).

 

When a Test Object is created from the application being tested, it contains a runtime ID. If the Test Object is to be stored in the Object Repository (during learn or record) a description is created for this Test Object.

 

When running a test, the Test Object has its description filled in (either from the Object Repository or from a Programmatic Description) and then its runtime ID is found. Since finding the runtime ID is potentially an expensive operation, the value of the runtime ID is cached to avoid fetching it multiple times. However, since the runtime ID may change during the test run, QuickTest is careful to clear the stored value whenever a new step is run. This means that if you store the Test Object in a variable or use a With statement, this automatic clearing will not take place and the runtime ID may become invalid. If this happens you can call the RefreshObject method (available in QuickTest version 10 and later), which manually clears the cached runtime ID.

 

Figure 3: Invalidating a Runtime ID
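A minimal sketch of the situation Figure 3 illustrates, assuming a hypothetical "MyApp" window and "Save" button stored in the Object Repository:

Set btnSave = Window("MyApp").WinButton("Save")   ' runtime ID found and cached in the variable
btnSave.Click
' ... the application is restarted here, so the cached HWND no longer exists ...
btnSave.RefreshObject     ' clears the stale runtime ID (QuickTest 10 and later)
btnSave.Click             ' the description is used to find the control again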

 

An unexpected pitfall came as a support call when a customer tried to call Exist on a Test Object returned from ChildObjects. Let’s follow the chain of events with our new knowledge of the inner workings of QuickTest.

1. ChildObjects creates Test Objects with their runtime IDs set. (The Test Objects are to be used immediately so there is no point in creating their descriptions.)

2. Exist wants to check if the object still corresponds to a control in the application being tested. To do this it clears the runtime ID and tries to re-create it.

        a. The description is empty so Exist fails.

3. Further down the line the customer tries to use the Test Object again and the test fails since the Test Object doesn’t have a description or a runtime ID.

 

The problem is that QuickTest creates a layer of abstraction above the application being tested and, like all sufficiently powerful abstractions, it's leaky. We know what ChildObjects does and we know what Exist does, but put them together and BOOM - the things you don't know (or weren't supposed to know, like the runtime ID) come leaking through and punch you in the face.

 

Our solution was to have Exist always return true when run on a Test Object that has a runtime ID but no description (this is part of the QuickTest 10 Release Update). We feel that this is correct behavior since you just got the object from the application using ChildObjects; obviously it exists, otherwise it wouldn't have been returned.

 

An alternative solution could have been to make the objects returned from ChildObjects create their descriptions; however, that would slow down all tests that use ChildObjects for the benefit of the vanishingly small number of tests that check for existence.

 

The best way to avoid the problem in the first place is to use objects created by ChildObjects as soon as they are created and before anything changes in the application being tested.
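To make the recommendation concrete, here is a hedged sketch (the page and property values are illustrative) of using the objects returned by ChildObjects right away, without relying on Exist or on a description they do not have:

Set linkDesc = Description.Create()
linkDesc("micclass").Value = "Link"

' Each returned Test Object has a runtime ID but no description
Set links = Browser("title:=My Store.*").Page("title:=My Store.*").ChildObjects(linkDesc)

For i = 0 To links.Count - 1
    ' Use the objects immediately, before the page changes
    Reporter.ReportEvent micDone, "Link " & i, links(i).GetROProperty("innertext")
Next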

Labels: QTP

Multi-Layered Testing with 'Unified Functional Testing'

During the last few years, SOA-based / distributed applications have become more and more common. Applications that were built as silo entities are today interconnected, integrated, and sharing components and services. A modern business process can begin with a transaction request on a web application, connect to the billing system, register a new transaction in an ERP system, send an email notification through the exchange system and, once all steps are verified, go back to the web application to finish the process with a confirmation message. Modern end-to-end business processes not only span multiple applications but also have rich and complex steps that happen below the Graphical User Interface (GUI), within what is sometimes called the 'business layer', through API calls and database interfaces. The business layer can consist of web services, DLL files and other GUI-less entities.

This means a few potential changes for modern QA teams.

The good news:


  • The testable interfaces have expanded and allow for more elaborate testing and root cause analysis.


  • While traditional functional testing is mostly done on the GUI layer, modern functional testing can leverage the business layer to test earlier in the process (while the GUI is not ready or stable yet) and find bugs earlier than was possible before.


However this also brings new challenges:


  • Sharable components and services create dependencies between applications and projects that impact how QA reacts to change.


  • Testing on the business layer requires new skill sets that the QA engineer needs to acquire, such as understanding WSDL files and web services.



Although the majority of functional testing is still done on the GUI layer, I see more and more of our customers complementing it with 'head-less' testing on the business layer. This is done either by the QA teams acquiring new skill sets or by bringing the SOA testers into the same group as the GUI test automation team. Whichever the choice, the trend is clear – the two worlds of GUI and headless testing are slowly merging.

This merge impacts the testing methodologies. Many of our customers (including HP IT itself) today have test scenarios where they interact with both the GUI and the business layer within the same test scenario. For example: the first step of a test fetches some data via a web service or an API interface to a database, and this data is then used during the rest of the test scenario on the GUI layer. Sometimes there are multiple shifts between the GUI layer and the business layer within the same test scenario. We call this 'Multi-Layered Testing'. All of these techniques open new frontiers to the QA and automation teams, allowing them to test more, test earlier, enhance coverage and find bugs that were very hard to find before. However, today's enterprise test tools (manual and automated) address either the GUI layer or the business layer. This does not allow QA to perform multi-layered test scenarios and creates a separation between the testing methodologies of GUI and headless testing.
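As a hedged illustration only (the URL, object names and property values below are hypothetical, and a dedicated service testing tool would normally own the API step), a multi-layered scenario in a QTP script might fetch a value from the business layer over HTTP and then use it in the GUI steps of the same test:

' Business layer: fetch the next invoice ID from a (hypothetical) billing service
Set http = CreateObject("MSXML2.XMLHTTP.6.0")
http.Open "GET", "http://billing.example.com/api/nextInvoiceId", False
http.Send
invoiceId = http.responseText

' GUI layer: use the fetched value in the web application under test
Browser("title:=Billing.*").Page("title:=Billing.*").WebEdit("name:=invoiceId").Set invoiceId
Browser("title:=Billing.*").Page("title:=Billing.*").WebButton("name:=Search").Click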

In order to address this challenge, we at HP decided that an integrated test automation solution is needed, one that can address both GUI and headless testing. Our current plan, and what we are looking into, is leveraging QuickTest Pro (the leading test automation tool in the world today) and integrating it with the innovative HP Service Test (HP's tool dedicated to testing services). The integrated package will unify the testing capabilities of both stand-alone functional testing tools, enabling multi-layered testing with both tools within the same test scenario like never before. We will call the package Unified Functional Testing (UFT).

Stay tuned for more about UFT in future posts. The more we speak with our customers about the UFT direction, the more use cases for it we see.

If you experience some of the multi-layered testing challenges described above, I'd love to hear from you – feel free to comment below and I will follow up on anything interesting.

Roi Carmel

Testing Web 2.0 Applications

It's been a while since my last post. I have been traveling quite a bit, meeting customers and partners and, together with QTP R&D, looking at what the market is telling us, how we should respond in some areas and be proactive in others. One of the top requests we have received during the last few months is the ability to test Web 2.0 applications better.


Web 2.0 includes various technologies. When I did the market research I narrowed it down to Flex/Flash, Silverlight and Ajax. Since we already have Flex support in QTP through our partnership and close relationship with Adobe, it was mostly Silverlight and Ajax that were 'green fields' for us. We sat down a few months ago and decided this is a major trend that we need to address, and even though no vendor out there is supporting these technologies well today, it justifies substantial resources in order to become THE leader in testing Web 2.0 applications. And so, we allocated the resources and I am pleased to see that we have made great progress relatively quickly - we already support Silverlight 2.0 with a new add-in that is available, and we will soon come out with even stronger support in other areas. I am certain that with the upcoming Web 2.0 pack that will be released on top of QTP 10.0, and the next release of Functional Testing (QTP), it will be clear we are leading the automated functional testing market with regard to testing Web 2.0 applications.


I look at the challenge of supporting these RIA (Rich Internet Applications) and Ajax technologies as one that consists of the following:



  1. What I call tactical support - having support out of the box for the most commonly used Web 2.0 technologies (and latest versions) - Flex, Silverlight and a few Ajax tool-kits.

  2. Strategic directions for testing web 2.0:

    1. Making our add-in extensibility (mostly web extensibility in this case) easier to use, faster to create assets with and separate from the actual QTP to allow as many users/partners as possible to create their own extensibility assets and extend QTP to support whatever is not supported out of the box. With the hundreds of Ajax tool-kits out there and new ones coming out every month or so, this is extremely important for our users as well as our partner ecosystem.
      We are addressing this with a new framework for creating extensibility assets called the Extensibility Accelerator (EA), which will also be part of our upcoming Web 2.0 pack on top of QTP 10.0.
      Together with R&D we are also working on a new white paper that will describe this new EA and best practices on how to use it.

    2. Improving our record/replay and object recognition capabilities to cope better with these new types of controls and events.



We will keep following this area closely and make sure we treat it as a strategic one. Any feedback or comments about this is greatly appreciated.


Till next time,


Roi.

Who really does the testing?

Today I bring you a post from Yoav Eilat - a colleague and friend who is focused on the business applications quality space. Thanks, Yoav, for contributing this post:

Hi,
Lately we’ve been talking to our customers’ IT organizations to figure out who actually does software testing.

Historically, we've always seen a broad spectrum of testers, from completely non-technical business users to expert QA engineers. In a "typical" organization, if there is such a thing, it looks like most testing is done by business analysts or users, since the size of the QA staff (even with outsourcing help) is quite small compared to the size of the organization. Business users are better informed about ongoing changes to the application, especially for applications that implement corporate business processes like ERP/CRM. So business users are responsible for testing the processes they know, while the QA team focuses on testing infrastructure, test automation and quality processes. This means we need to make sure the tools that are used fit the skills of the target audience.

I am interested to hear how close this description is to what you see in your company. How is the division of labor between business analysts/users and QA affected by the type of application under test, the testing methodology (automated vs. manual), and the industry you're in?

Looking forward to your replies on this one.

Yoav.


 

Testing Definitions

Hey,
I have lately met with a few of our customers to discuss different testing issues and saw that there is quite a lot of confusion around testing terms, which makes it hard to communicate. I wanted to write a quick post on how I see these testing terms and use them in my blog, articles and communication. This is not a full or formal description, but rather what I mean when I talk about them.

Unit Testing: White-box developer testing built within a framework such as NUnit or JUnit to cover all code paths and interfaces of a class or function.

Component Testing: Testing of a compiled piece of software (not a white-box approach). Testing is done separately from the full system it is part of. This can be either a component, such as a service, or an application that is part of a business process spanning several applications.

Integration Testing: Testing interfaces between components (or applications that are part of a single business process). This can be tested as part of an end-to-end (E2E) test, but the focus is on the interfaces between the components – testing edge cases, negative tests of invalid input from one component to another, etc.

System Level Testing: Testing of a full application as the end user uses it. If the application is not part of a larger business process involving other applications, this can also be called End-to-End Testing.

End-to-End Testing: Usually tests a business process that spans many applications, testing the full process from beginning to end.

With Agile becoming more common and testing organizations maturing, it is important to make sure that, whatever terms you are using, the scope of each test suite is well defined and placed correctly in the quality process.

Roi

 

Funding the test automation team

First I'd like to thank all of you who read and responded to my last post. As it was the first post in this blog, it was great to follow up on the great replies and read your points of view. I specifically enjoyed reading your replies, George and LaBron – I think it is interesting to see how many flavors there are of the model I wrote about, and the trick is finding the right one for your organization while keeping your eye on the ball and making sure the critical things that apply across the board are still kept. In a way, my post this week will touch upon some of what you wrote, LaBron, regarding passing on responsibility to SMEs from the business units.


 

This week, I want to continue along the same lines of discussion but focus on a problem that every CoE manager, or manager of a centralized functional testing team, struggles with constantly – how do I scale up and support more and more projects and business units while working within a given budget (or, let's face it, in this economy – maybe even a shrinking one)? Basically, I see three main funding models among our customers and users:

  1. A full service-based model, where every activity of the centralized FT (functional testing) team is funded by the project it supports; the FT team needs to convince the project to fund and invest in automation rather than keep the existing manual test suites as they are.

  2. A fully self-funded automation team, where all activities of the FT team are funded by the team itself (which manages its own budget) and the team can scale only up to its available budget.

  3. Somewhere in the middle – the FT team has some budget to work with but not enough to carry a full service to a business unit. At a certain point the testing project needs to be convinced to fund the rest of the investment in automation in order to get the ROI.
Using quotes is always a bad sign for someone who thinks he has something to say, but it's still fun… André Gide said, "Believe those who are seeking the truth; doubt those who find it."
As always, each of the three models above has its pros and cons, and it is up to us to understand them, consider the differences and choose which is best for our organization. Here is how I see it:

Full Service Based Model


Pros:

  • Projects need to be proven worth the effort, with positive ROI, before investing. This gives the organization as a whole a process that makes sure evangelists are not running wild building their empires, but are investing their time where it is most beneficial to the organization.

  • Quantification of ROI usually improves and becomes much better when it's the team's only way to receive budget. This gives management better visibility.

  • The centralized automation team and its automation projects are fully scalable – a new project drives new budget to the team to expand its activities.

  • Once a project is on its way, the business unit getting the service is fully committed.
Cons:

  • Making change is hard. A project that might benefit a whole lot from automation might miss out on it if the project manager has a hard time taking risks or creating change. This does happen.

  • There is very little innovation from the centralized FT team since projects want to pay only for clear deliverables.

  • It is very hard to maintain a growing centralized infrastructure owned by the FT team – an activity which is usually hidden from the day-to-day life of the projects.
Fully Self-Funded Centralized Automation Team

Pros:



  • The team has enough budget to innovate, maintain its infrastructure and invest a lot in convincing a project it wants to move to automation (sometimes automating large parts of the project as a proof of concept, or POC).

  • The team can grow its knowledge and constantly try out new supporting tools which allow it to improve.
Cons:

  • The centralized automation team has a very hard time scaling – a fixed budget and a growing number of projects will create a problem, and the FT team will become a bottleneck in the organization's move to automation.

  • Automation might grow where things are ‘cool’ to work on but not the most critical to the business.

  • There isn't a clear ROI business case unless someone chooses, or is instructed, to calculate it and report on it.

  • Even if a POC was positive and the project is proceeding successfully, the business unit might never actually be totally bought in and might too easily decide to let the effort die in the future.

Somewhere in the Middle

Pros:

  • The FT team can decide where the best place is to invest its own budget in convincing a project to invest in automation (best ROI for the POC), but once the POC is done the ongoing automation work is funded by the projects, which allows the FT team to scale and redirect its resources to the next new project.

  • Innovation within the FT team is possible but needs to be managed closely.

  • There is a clear point where the business unit that receives the service commits and takes the lead in terms of funding and interest in the automation effort.
Cons:

  • It is very difficult to find the middle ground without hurting the centralized automation team – too small a budget for a large organization will create failures and negative momentum for automation in the organization, which can be hard to fix.

  • This approach can sometimes create frustration among managers of centralized automation teams – they are expected to push automation since they do have a budget for it, but because it might not be enough, they might feel the organization expects deliverables from them that they cannot actually deliver on.
Choosing is hard and it really depends on the size of the organization you are in, its nature, its management style and more… 
There might be more than these three flavors, and probably more pros and cons than the ones I listed. I am eager to hear from you which ones you think I missed and which you find best for you…and why. 
Till next time… 

Roi.

 


 

Centralizing the test automation team

OK, so before I start babbling on about all the aspects of testing that I think are really interesting, and before you jump in and decide whether this blog is worth spending time on, I thought I'd give a short intro about why I decided to start this blog and what I hope for it. Later on, starting with this post and probably continuing with later ones, I'll write about a change that has been happening for a while now in the quality assurance industry, and in testing in particular, in mid-to-large IT shops – a change in the way testing is being done, managed, planned and funded: centralization, or the formation of a 'Center of Excellence' (CoE).

Why does this blog exist?

Well, over the years of my professional life I have experienced endless conversations, debates, arguments, failures, successes, epiphanies and unanswered enigmas. All of those shaped my view of what I do professionally and how I choose to do it. Although I read blogs, blogging myself was about as far from my plans as being able to remember where I put my cell phone. After many conversations with customers of mine, as well as the solution engineers who work with them, I understood how much such a blog could help: not only me, in getting my thoughts and understandings well defined and communicated, but also in allowing knowledge to be shared across the huge customer base and beyond, and in letting me continuously learn from you – the readers – through your replies, rather than waiting for the next meeting or customer visit. Of course this can happen only if you take the time to read and reply. Any feedback is welcome. In this blog, I will focus on thoughts and problems related to both manual and automated functional testing (FT).


OK, enough wasting electronic web pages…let’s get to it.

Centralization of Automated Functional Testing

With this post, I'd like to start describing a change that has been happening in the industry around centralization of quality assurance activities, and of automated testing in particular. For a few years now, IT shops have been gradually centralizing many QA-related activities, starting with shared administration of QA tools such as quality management, source control, defect management, requirement management and others. This was followed by centralization of infrastructure and whatever else made operational and business sense. The centralized group is often referred to as the 'Center of Excellence' or CoE. During this whole time, many shops continued to keep FT silos, manual or automated, in their business units, either due to the domain expertise needed for each business unit, or due to their relatively small size (in the case of automation teams) and, in some cases, low exposure to upper management.


Over the last two to four years, I have been seeing more and more organizations making the automation team part of their CoE or centralizing it in one way or another. This means these companies have a single team (sometimes a few teams, each supporting a few business lines) that owns the automated FT tool and is responsible for managing and administering licenses, as well as distributing the tool around the organization when needed. These are the very basic functions that can cut down on cost and license consumption and maintain consistency in deployment. However, where I see the most value is when these teams also own the best practices that are then shared and implemented in the different testing teams, and the development of automation skill sets is actually centralized. Having these capabilities in a specialized team allows the organization to learn and grow its test automation initiatives faster. It also means there needs to be a well-defined process for allocating the shared resources and sharing the knowledge or output throughout the organization.


Setting up these processes and embedding them into the organization is not an easy task; it usually requires effective documentation, hours of guidance for the relevant people in the business units and continued support from the automation CoE to the different business units. This also means there needs to be a strong enough owner on the recipient side (the line of business) who can observe these best practices and scale them further. Absorbing and growing the knowledge is best done by what I call a 'Local Champion'. This key person seems to be one of the most important people for the success of an automation initiative within the line of business. Their technical capabilities, their partnership with the automation CoE, and the level of guidance and support the CoE gives are make-or-break factors in the success of such an effort.


Due to the difficulty of this partnership, I have seen models where the automation CoE decided not to hand over ownership to the business unit, but rather to automate the test suite in full and hand it over ready to run. This is a valid approach in my opinion, as I have seen organizations use it with great success, but I have to say it does have the downside of slow scaling and can position the automation CoE as a bottleneck. The advantage of this approach is, of course, the maximization of knowledge re-use. It also makes reusability of test assets (reusable libraries, functions, etc.) much easier, as a single team creates and manages everything. With the other approach, where the CoE hands ownership over to the local champion at some point, there needs to be ongoing contact to keep circulating the knowledge gathered in the different projects, and a well-defined process to document the reusable assets that are made available to everyone. Nevertheless, even if reusability is not perfect in this approach, when the local champions are strong it has great success and scales up fast.
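As a hedged sketch of what such a reusable asset might look like (the file path, window names and function are hypothetical), a CoE-owned function library could expose common actions that every project then calls instead of re-recording them:

' --- Shared library maintained by the CoE, e.g. \\coe-share\qtp\libs\Login.vbs ---
Public Function LoginToApp(userName, encryptedPassword)
    Dialog("text:=Login").WinEdit("attached text:=User:").Set userName
    ' SetSecure expects a value encrypted with the QTP Password Encoder
    Dialog("text:=Login").WinEdit("attached text:=Password:").SetSecure encryptedPassword
    Dialog("text:=Login").WinButton("text:=OK").Click
End Function

' --- In an individual test (or associate the library through the test settings) ---
ExecuteFile "\\coe-share\qtp\libs\Login.vbs"
LoginToApp "demo_user", "3a9f1c"   ' hypothetical encrypted password value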


With all of the above in mind, the CoE needs to carefully choose tools that allow it to effectively create, manage, share and standardize processes, best practices and test assets.


To recap shortly, the key points, in my opinion, to a successful automation CoE are:



  1. Centralized ownership of the FT tool

  2. Standardization of the test development process and best practices, and the ability to share those across the organization

  3. Strong Local Champions in the lines of business (if ownership is passed to the project).

  4. A strong and ongoing relationship, with the automation CoE supporting the local champions and facilitating knowledge sharing between them.

  5. Well defined processes that allow sharing of reusable assets between tests in the same test suite and between projects when possible.

  6. Automated functional testing tools and test management tools that support the needs above.

That’s it for this time…


Roi

 

 

About the Author(s)
  • This account is for guest bloggers. The blog post will identify the blogger.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Cooper is a leader in the fields of quality assurance (QA), software testing, and process improvement. In November 2012, Michael joined the HP Software Applications team. Michael brings more than 15 years of hands-on QA and testing leadership experience.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation