The Future of Testing Blog
Business innovation thrives on software innovation, and quality matters now more than ever. Application teams are racing to keep up with business demand by embracing the technologies and methodologies that help them build faster and with confidence to engage consumers, go mobile and release continuously. How does a modern testing team keep up? This blog will focus on the importance and role of the tester, innovations in testing processes, test tools, test automation solutions and best practices, and discuss how to balance speed with quality through Agile, mobile, web, composite and cloud application testing.

Who really does the testing?

Today I bring you a post from Yoav Eilat - a colleague and friend who focuses on the business applications quality space. Thanks Yoav for contributing this post:

Lately we’ve been talking to our customers’ IT organizations to figure out who actually does software testing.

Historically, we’ve always seen a broad spectrum of testers, from completely non-technical business users to expert QA engineers. In a “typical” organization, if there is such a thing, most testing appears to be done by business analysts or users, since the size of the QA staff (even with outsourcing help) is quite small compared to the size of the organization. Business users are better informed about ongoing changes to the application, especially for applications that implement corporate business processes like ERP/CRM. So business users are responsible for testing the processes they know, while the QA team focuses on testing infrastructure, test automation and quality processes. This means we need to make sure the tools we use fit the skills of the target audience.

I am interested to hear how close this description is to what you see in your company. How is the division of labor between business analysts/users and QA affected by the type of application under test, the testing methodology (automated vs. manual), and the industry you’re in?

Looking forward to your replies on this one.



Testing Definitions

I have lately met with a few of our customers to discuss different testing issues and saw that there is quite a lot of confusion around testing terms, which makes it hard to communicate. I wanted to write a quick post on how I see these testing terms and use them in my blog, articles and communication. This is not a full or formal description, but it is what I mean when I talk about these.

Unit Testing: White-box developer testing built within a framework such as NUnit or JUnit to cover all code paths and interfaces of a class or function.
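As an illustration, here is what such a white-box unit test might look like using Python’s built-in unittest module, which expresses the same idea as JUnit or NUnit. The apply_discount function is a hypothetical example, not from any real product; the point is that the tests cover every code path, including the error path.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount; reject invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100.0

class ApplyDiscountTest(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_rejects_out_of_range_percent(self):
        # Negative path: error handling is a code path too, so it gets a test
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` from the file’s directory; JUnit and NUnit provide the equivalent runner, assertions and fixtures for Java and .NET.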

Component Testing: Testing of a compiled piece of software (not a white-box approach). Testing is done separately from the full system it is part of. This can be a component such as a service, or an application that is part of a business process spanning several applications.

Integration Testing: Testing interfaces between components (or applications that are part of a single business process). This can be tested as part of an end-to-end (E2E) test, but the focus is on the interface between the components – testing edge cases, negative tests of invalid input from one component to another, etc.
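A minimal sketch of what such a negative interface test might look like, assuming a hypothetical order component that consumes an inventory component’s interface (all names here are illustrative). The checks deliberately target invalid input crossing the boundary between the two components rather than the full system:

```python
class UnknownSkuError(Exception):
    """Raised by the inventory component for an unrecognized SKU."""

class InventoryStub:
    """Stands in for the real inventory component at the interface boundary."""
    def __init__(self, stock):
        self._stock = stock

    def stock_for(self, sku):
        if sku not in self._stock:
            raise UnknownSkuError(sku)
        return self._stock[sku]

def can_fulfill(inventory, sku, quantity):
    """Order-side logic: must cope with bad input arriving over the interface."""
    if not sku:
        return False  # negative case: an empty/None SKU never reaches the inventory
    try:
        return inventory.stock_for(sku) >= quantity
    except UnknownSkuError:
        return False  # negative case: the other component rejects the SKU

# Integration-style checks focused on the interface, not the whole system
inventory = InventoryStub({"A-100": 5})
assert can_fulfill(inventory, "A-100", 3) is True
assert can_fulfill(inventory, "", 1) is False          # invalid input edge case
assert can_fulfill(inventory, "NO-SUCH", 1) is False   # invalid data across the interface
```

In a real integration test the stub would be replaced by the actual neighboring component; the structure of the negative cases stays the same.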

System Level Testing: Testing of a full application as the end user uses it. If the application is not part of a larger business process involving other apps, this can also be called End to End Testing.

End to End Testing: Usually tests a business process that spans many applications, testing the full process from beginning to end.

With Agile becoming more common and testing organizations maturing, it is important to make sure that, whatever terms you use, the scope of each test suite is well defined and placed correctly in the quality process.



Funding the test automation team

First I’d like to thank all of you who read and responded to my last post. As it was the first post in this blog, it was great to follow up on the great replies and read your points of view. I specifically enjoyed reading your replies, George and LaBron – it is interesting to see how many flavors there are of the model I wrote about; the trick is finding the right one for your organization while keeping your eye on the ball and making sure the critical things that apply across the board are still covered. In a way, my post this week will touch upon some of what you wrote, LaBron, regarding passing on responsibility to SMEs from the business units.


This week, I want to continue along the same lines of discussion but focus on a problem that every CoE manager, or manager of a centralized functional testing team, constantly struggles with – how do I scale up and support more and more projects and business units while working within a given budget (or, let’s face it, in this economy – maybe even a shrinking one)? Basically, I see three main funding models among our customers and users:

  1. A full service-based model, where every activity of the centralized FT (functional testing) team is funded by the project it supports; the FT team needs to convince each project to fund and invest in automation rather than keep its existing manual test suites as they are.

  2. A fully self-funded automation team, where all activities of the FT team are funded by the team itself (which manages its own budget); the team can scale only up to its available budget.

  3. Somewhere in the middle – the FT team has some budget to work with, but not enough to provide full service to a business unit. At a certain point, the testing project needs to be convinced to fund the rest of the investment in automation in order to get the ROI.

Using quotes is always a bad sign for someone who thinks he has something to say, but it’s still fun… As André Gide said, “Believe those who are seeking the truth; doubt those who find it.”
As always, each of the three models above has its pros and cons, and it is up to us to understand them, consider the differences and choose which is best for our organization. Here is how I see it:

Full Service Based Model


  • Projects need to prove they are worth the effort, with positive ROI, before investing. This gives the organization as a whole a process that ensures evangelists are not running wild and building empires, but are investing their time where it is most beneficial to the organization.

  • Quantification of ROI usually improves considerably when it is the team’s only way to receive budget. This gives management better visibility.

  • The centralized automation team and its automation projects are fully scalable – a new project drives new budget to the team to expand its activities.

  • Once a project is on its way, the business unit getting the service is fully committed.

  • Making change is hard. A project that might benefit a great deal from automation may miss out on it if the project manager has a hard time taking risks or creating change. This does happen.

  • There is very little innovation from the centralized FT team since projects want to pay only for clear deliverables.

  • It is very hard to maintain a growing centralized infrastructure owned by the FT team – an activity that is usually hidden from day-to-day project life.
Fully Self-Funded Centralized Automation Team


  • The team has enough budget to innovate, maintain its infrastructure and invest heavily in convincing a project to adopt automation (sometimes automating large parts of the project as a POC).

  • The team can grow its knowledge and constantly try out new supporting tools, allowing it to improve.

  • The centralized automation team has a very hard time scaling – a fixed budget and a growing number of projects will create a problem, and the FT team will become a bottleneck in the organization’s move to automation.

  • Automation might grow where things are ‘cool’ to work on but not where they are most critical to the business.

  • There is no clear ROI business case unless someone chooses, or is instructed, to calculate and report it.

  • Even if a POC was positive and the project is proceeding successfully, the business unit might never be fully bought in and might too easily let the effort die in the future.

Somewhere in the Middle


  • The FT team can decide where best to invest its own budget in convincing a project to invest in automation (best ROI for the POC), but once the POC is done, the ongoing automation work is funded by the projects, which allows the FT team to scale and redirect its resources to the next new project.

  • Innovation within the FT team is possible but needs to be managed closely.

  • There is a clear point where the business unit that receives the service commits and takes the lead in terms of funding and interest in the automation effort.

  • It is very difficult to find the middle ground without hurting the centralized automation team – too small a budget for a large organization will create failures and negative momentum for automation, which is sometimes hard to repair.

  • This approach can sometimes create frustration among managers of centralized automation teams – they are expected to push automation since they do have a budget for it, but since it might not be enough, they may feel the organization expects deliverables they cannot actually deliver.

Choosing is hard, and it really depends on the size of the organization you are in, its nature, its management style and more…
There may be more than these three flavors, and probably more pros and cons than the ones I listed. I am eager to hear which ones you think I missed and which works best for you… and why.
Till next time… 




About the Author(s)
  • This account is for guest bloggers. The blog post will identify the blogger.
  • Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.
  • Michael Deady is a Pr. Consultant & Solution Architect for HP Professional Service and HP's ALM Evangelist for IT Experts Community. He specializes in software development, testing, and security. He also loves science fiction movies and anything to do with Texas.
  • HP IT Distinguished Technologist. Tooling HP's R&D and IT for product development processes and tools.
  • Rick Barron is a Program Manager for various aspects of the PM team and the HPSW UX/UI team. He has worked in high tech for 20+ years in roles involving web design, usability studies, and mobile marketing. Rick has held roles at Motorola Mobility, Symantec and Sun Microsystems.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.