Improving Service Quality by the Numbers - 11, 10, and 9

Over the past month (dating back to the itSMF FUSION event in Dallas), a pair of conversations has been rattling around in my mind. The first starts off with a set of numbers: 11, 10, and 9.

 

11 is the number of major incidents experienced by a Fortune 500 company in the past year. 10 is the number of those major incidents caused by change-induced problems. So, 10 of the 11 major incidents were arguably self-inflicted due to change. And 9 is the number of those changes that spanned different groups or organizations.

 

This story was told to me in relative confidence, so I won’t include the company’s name. I don’t find the cross-group 9 out of 10 (or 11) surprising, especially if the IT organization had fairly mature change management processes; that would arguably have left the trickiest potential changes to analyze with respect to risk and impact. I did find the 10 out of 11 a bit surprising, noting that this isn’t all incidents, just the most impactful major incidents.

 

I would acknowledge that effective cross-organization change impact and risk analysis requires more sophisticated approaches to configuration management, along with an accurate understanding of dependency relationships at multiple levels (service, application, and infrastructure).
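To make the dependency point concrete, here is a minimal sketch of how a CMS-style dependency graph spanning infrastructure, application, and service levels could be walked to estimate which services a planned change touches and whether the change crosses group boundaries. All configuration item names, groups, and structures below are hypothetical illustrations, not taken from any particular product.

```python
# Hypothetical sketch: cross-level change impact analysis over a small dependency graph.
from collections import deque

# Each configuration item (CI) records its level and owning group.
cis = {
    "db-node-07":  {"level": "infrastructure", "group": "DB Ops"},
    "billing-app": {"level": "application",    "group": "App Dev"},
    "billing-svc": {"level": "service",        "group": "Service Delivery"},
}

# Edges point from a CI to the CIs that depend on it (downstream impact).
depends_on_me = {
    "db-node-07":  ["billing-app"],
    "billing-app": ["billing-svc"],
    "billing-svc": [],
}

def change_impact(changed_ci):
    """Walk downstream dependencies to find impacted CIs and the groups involved."""
    impacted, groups = [], set()
    queue, seen = deque([changed_ci]), {changed_ci}
    while queue:
        ci = queue.popleft()
        impacted.append(ci)
        groups.add(cis[ci]["group"])
        for downstream in depends_on_me.get(ci, []):
            if downstream not in seen:
                seen.add(downstream)
                queue.append(downstream)
    return impacted, groups

impacted, groups = change_impact("db-node-07")
print("Impacted CIs:", impacted)
print("Cross-group change:", len(groups) > 1, "-", sorted(groups))
```

Even a toy traversal like this shows why accurate dependency data matters: if the infrastructure-to-application or application-to-service links are missing or stale, the impact and the number of groups that need to be coordinated are both underestimated.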

 

The second conversation also comes from FUSION, but has a number of parallel threads. I was speaking with a presenter who specializes in process consulting and associated education about hot topics and challenges in the industry overall. She put forth “Improving Service Quality”. The general idea was that IT organizations are increasingly looking for ways to improve service quality while better aligning with the business on what constitutes both “service” and “quality”. On the surface, this doesn’t sound particularly newsworthy, but I find it interesting with respect to other industry data points.

 

The first related point: Gartner offers its clients a self-assessment across a number of infrastructure and operations management disciplines, including IT service and support management. Gartner has published a number of papers on the findings, but the key point is that most IT organizations are not as mature as they reasonably should be. And the recommended destination is not a fully service-aligned business partner; the recommendation is to become proactive, moving past aware and committed. A second data point is from this past spring, when we had a market research firm call into a number of organizations currently using SaaS service desks. The happiest customers were the ones doing the most basic things. The most dissatisfied customers were the ones trying to become more process-oriented and needing more consulting and configuration.

 

Here is my thesis: cost has been the most significant driver in the ITSM market since 2008, and there is now a “re-awakening” underway around pragmatically improving service quality. Cost brought pressure to increase self-service and automation and to evaluate sourcing options. Arguably, these cost drivers are now driving process improvements. They all align with cloud-related initiatives for service catalogs and request management, and they require the service desk organization and processes to be more mature.

 

There are a number of paths that can be taken from here. Linking back to the 11-10-9 start, I would recommend evaluating a more structured approach to your configuration management system (CMS) and associated discovery. A number of good blog posts on these topics can be found on the HP ITSM Blog (www.hp.com/go/itsmblog), including:

 

How DOES one measure the success of a CMS

 

Introducing HP Universal Discovery 10

 

Chuck Darst

 

P.S. This isn’t the source of the 11-10-9 numbers, but it is an incredible example of a self-induced “major incident”.

 

http://en.wikipedia.org/wiki/2011_Southwest_blackout : The initiating event was a procedural error made by a technician switching out a capacitor bank on a 500 kV line at an APS substation in North Gila, AZ, causing the line to trip; it had been carrying 1,397 MW westward across California to Imperial and San Diego counties and CFE in Mexico. Electric utilities normally use advance planning and real-time computer monitoring and modeling to detect when such a single-point failure could trigger a cascading blackout, but none of the utilities detected that the system was vulnerable to the loss of this particular line.
