Agile Development at HP - Part 2: Culture, quality and measuring success

This is the second in a series of five posts that describe the methodology and development processes used by the HP Agile Manager team.  In the previous post, I described the journey we embarked on to define our development processes. In this post, I want to discuss the culture and values of the team, and how we maintain a consistently high level of quality.  In the next two posts, I’ll talk about the release and sprint lifecycles, and the feature and user story lifecycles.  The last post will summarize the roles and responsibilities of each member on the team.

 

Culture


We all have baggage.  Every one of us has his or her unique background and set of experiences, and we carry this around with us wherever we go.  These experiences make us who we are.  The Agile Manager development team is no different.  Many of the people on the team came from teams that specialized in developing on-premises HP products. They had to adjust to the world of Software-as-a-Service (SaaS), where the emphasis is on the continuous delivery of content. Some people were new developers with little practical experience, proudly clutching their freshly minted university degrees, and still others joined the team from other companies.

 

But when we analyzed the development process (as we described in the previous post in this series), we found that a core set of beliefs lies at the heart of our success as a team:

  • Honesty – We believe in putting everything on the table, good or bad. We encourage managers to provide on-going feedback to their people to continuously help them improve. We also encourage people to share their mistakes as a way to help others avoid reproducing those mistakes.
  • Learning – We are a learning organization, and prefer ‘learn and improve’ over ‘interrogate and punish’. We conduct retrospectives as part of our development processes, after every sprint, push and release. We also conduct ad-hoc retrospectives when an outstanding event occurs that needs special attention and which has learning potential.
  • Ownership – We encourage developers to own a specific area or feature and to be responsible for it end-to-end.
  • Accountability – We emphasize accountability, to ensure things aren’t just ‘chucked over the wall’ or ‘fall between the cracks’. We expect every developer, QA engineer, manager and architect to be fully accountable for his or her work. To emphasize the importance of this value to us, we have established group goals for which all group members are accountable.
  • Agility – Since we are responsible for developing a tool for managing projects in the Agile world, we try to adopt the Agile state of mind in all areas. We continuously assess what’s important, and we shift resources to work on the important things, regardless of upfront planning and areas of responsibility.
  • Versatility – We understand that we are evaluated as a group, so we reinforce weak links in the chain with stronger links. If we have resource problems that create bottlenecks in definitions, testing, operations or support, we compensate by assigning resources from other places.
  • Independence – We encourage our people to be independent, to reduce dependencies that can lead to bottlenecks.

 

Definition of Done

The seventh principle of the Agile Manifesto dictates that “Working software is the primary measure of progress”.  If we don’t deliver working software, any progress is irrelevant to the customer. Our team defines working software in terms of the ‘Definition of Done’.

 

The Definition of Done (DoD) determines whether a backlog item is complete. The DoD is so important that it needs to be decided and agreed up front. Locking down a good DoD can be challenging, but it is fundamental to the success of the sprint, and ultimately, the release. We use the following DoD:

 

A user story is done if, and only if:

  ✓ Unit tests are written and green

  ✓ Acceptance tests are defined by QA, run against the build, and passed

  ✓ The main functionality of the user story is covered by automated acceptance tests

  ✓ Sanity and regression tests pass after the user story’s implementation is checked in

  ✓ The user story has no outstanding new or open defects

  ✓ The majority of Medium defects are fixed

  ✓ All fixed defects have been validated

 

A user story is only eligible to be included in a production release if it has achieved all of these criteria. I’ll explain more about how a release is rolled out to production in the next post of this series.
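As an illustration, the DoD above can be thought of as a simple gate that every user story must pass before it is eligible for release. The sketch below models that gate in Python; the field names, the data structure, and the “strictly more than half” reading of “the majority of Medium defects” are my own assumptions for illustration, not part of Agile Manager or the team’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    # Each field mirrors one item in the Definition of Done checklist above.
    unit_tests_green: bool            # unit tests written and green
    acceptance_tests_passed: bool     # QA-defined acceptance tests passed
    main_flow_automated: bool         # main functionality covered by automation
    sanity_regression_passed: bool    # sanity and regression passed post check-in
    open_defects: int                 # outstanding new or open defects
    medium_defects_total: int
    medium_defects_fixed: int
    fixed_defects_validated: bool     # all fixed defects validated

def is_done(story: UserStory) -> bool:
    """A story is eligible for a production release only if every DoD item holds."""
    majority_medium_fixed = (
        story.medium_defects_total == 0
        or story.medium_defects_fixed * 2 > story.medium_defects_total
    )
    return all([
        story.unit_tests_green,
        story.acceptance_tests_passed,
        story.main_flow_automated,
        story.sanity_regression_passed,
        story.open_defects == 0,
        majority_medium_fixed,
        story.fixed_defects_validated,
    ])
```

Note that the gate is all-or-nothing: a single open defect fails the story no matter how complete the rest of the checklist is, which is exactly what makes the DoD a meaningful release criterion.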

 

Measuring Success

Let’s take another look at the principle: “Working software is the primary measure of progress.”  I emphasized the word ‘primary’ because there are secondary measurements that have to be tracked so that we can see how close we’re getting to delivering working software. If you only measure working software you’ll only get a simple “Yes, it works” or “No, it doesn’t work”. Evaluating secondary measures allows you to understand how close you’re getting to “Yes, it works”.  In the HP Agile Manager team, we decided that the most important measurements for us are quality and development efficiency.

 

Quality

We track quality by measuring defects.  Over the course of the first six months of development after adopting the recommendations of the work streams (as described in the previous post), our measurements showed a reduction in the number of new and open defects, and that fewer defects were being detected during our regression cycles, as shown in this graph of new and open defects throughout the course of the release:

[Graph: new and open defects throughout the course of the release]

 

The QA team also managed to reach 100 percent test coverage by the end of each sprint, which allowed us to make every sprint releasable.

 

Development Efficiency

After the same six months, we found that:

  • We no longer needed extra time to stabilize the build
  • Manual regression testing had been replaced by automation
  • The average time between defects being detected and fixed was reduced

Our estimations also improved.  We calculated our availability more accurately, and we were better able to predict the number of story points we could develop.  Consequently, we no longer needed to add buffers to our estimations.
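To make the estimation idea concrete: a common way to predict story points from availability is to scale recent velocity by the fraction of the sprint the team is actually available. The formula and numbers below are a hypothetical sketch of this kind of calculation, not the team’s actual model:

```python
def predicted_story_points(recent_velocities, availability):
    """Estimate next sprint's capacity in story points.

    recent_velocities: story points completed in each of the last few
                       sprints, taken as the team's baseline velocity.
    availability: fraction of the coming sprint each team member is
                  available (e.g. 0.5 for someone splitting time with
                  support duty).
    """
    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    team_availability = sum(availability) / len(availability)
    return avg_velocity * team_availability

# Example: baseline velocity of 42 points, with two members fully
# available, one at half time, and one at three-quarters time.
capacity = predicted_story_points([40, 44, 42], [1.0, 1.0, 0.5, 0.75])
```

The point of the calculation is that velocity alone overstates capacity; folding in availability is what removes the need for an explicit buffer.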

 

 

In the next part in this series, we’ll examine how the HP Agile Manager team manages its release and sprint cycles.

 


 

Let us know what cultural values you encourage in your team and how you measure success, by leaving us a comment in the box below.

 

 

This series of articles was written based on processes and methods developed by Lihi Elazar (Agile Manager PMO) and materials provided by her.  Thanks to both Lihi and Asi Sayag (R&D Group Leader) for their help, guidance, and extensive reviews of this series.

 

 

 

Comments
MikeCarew | 10-03-2014 01:57 AM

The discussion here talks of ownership as though it is an individual thing, but XP practice leads us to the concept of collective ownership. What has your experience taught you with respect to this point?

About the Author
Malcolm is a functional architect, focusing on best practices and methodologies across the software development lifecycle.