IT Service Management Blog
Follow news and commentary on IT Service Management via this blog.

Displaying articles for: March 2010

How Long Should a CMDB / CMS Take to Build? Part 3: Process Engineering

This is the third post in the "How Long Should a CMDB/CMS Take to Build" series.


 


Today's tagline: YOU must make the final journey to the right process.  No one else - not even a cherished, trusted vendor or analyst - can make it for you.  But they can act as spiritual advisors.


 


ITIL process engineering is the second most important part of the deployment, and the second most difficult to get right, after people and cultural change.  The only reason process engineering is slightly easier is that you at least have better measurement tools.


 


And I'm talking about ITIL processes here, for which additional complexities apply.


 


There aren't many vacant lots left in downtown ITIL.  I'm talking about process RE-engineering as well, because almost none of you are building a data center from scratch.  You already have some kind of processes, be they manual or dysfunctional.  Part of the process-building involves assimilating and demolishing parts of the earlier-generation processes.


 


So what do you start with:  Needs, goals, plans, budget, vendors, tools?  Turns out it's not so straightforward.


 


It's a paradox.  You can't easily build your processes without a tool in mind, because you will not be able to find a tool that does everything you want.  Don't believe me?  Go ahead, try.  You'll spend a ton of money on column fodder and end up picking the vendor that can just fill in the most columns - a disappointing and possibly unwise strategy.


 


However, you don't want your processes to be tool-driven either, because you will end up locking out the most important KPI of all: fulfilling your use cases exactly.


 


So, do I pick a vendor first, or define my requirements first?  My answer: it's an iterative process, and there is no prescriptive approach that ensures success.  You must have a good IDEA of your processes, then court a few vendors, then get some preliminary input to refine your idea of what config management should be, ask a few more questions, and repeat until you have a good foundation that will fulfill your use cases and is supportable by a solution you can buy and build.


 


The CMDB is a tool, maybe even a platform.  The CMS is a deployed operational solution.  You must still operate it with your own processes and people.  Good luck with ITIL - you'll need more than luck.  But I digress.


 


If you expect your vendor to supply all the processes because the tool won't work without them, you're in trouble.  You must still understand all your processes, to the point where YOU are doing the service transitions and operations.  Most vendors can't and won't care as much as you do about how well your processes work, and at best will deliver incomplete, high-level, or overly generic processes - the same cartoon version of IT that ITIL already provides.


 


As a vendor you have to work really hard to create and deliver a good process layer of best practices around your CMS and CMDB.  And while I've tried hard (that is one of my projects at HP), I cannot fool myself that we have gotten everything right, in fact or in principle.  Experience and the rigorous discipline of journal-keeping, analysis, and continual improvement are our only lights into the future of process.  Don't let anyone sell you otherwise.


 


Some final recommendations:



  • Get yourself some wild, angry beekeepers.  They'll keep you, as well as your vendor, honest, and help you identify the needed, the unneeded, and the just plain stupid.

  • Come to recognize the smell of crap factoids.  Analysts and vendors, like Alpha Geeks, CIOs, bloggers, and help desk technicians, are not immune to hubris. 

  • Not all IT organizations need to "mature" all of their processes to the maximum "maturity".  Avoid unnecessary or self-fulfilling scaffolding, even if it's your vendor's favorite.  Just because ITIL says you should be doing something doesn't mean you actually should; you must decide for yourself, and it's not always easy to determine.  Read.  Study.  Know not just IT but YOUR IT.  In the vicious world of ITIL, knowledge isn't just power, it's survival.

  • Same thing I tell all the school kids I teach astronomy to: Keep asking questions.

  • Configuration management, like education, is not about filling a bucket, it's about lighting a fire.  Think motivated, self-policing, continual service improvement.  Incent your people to seek out improvement and they will do so, to your benefit.  Too expensive?  Don't expect much help.

  • If you don't understand something but should, go ahead and ask the question.  But remember the risk.  And think about who you should ask first.


 


I hope this post touches a nerve, or gets through to someone, or even angers someone enough to post a reply.  I'd really like to hear what you think.  Thanks for your time.

Mass Customization in ITSM (and Movies) for predictable success

What a season for Oscar!  Golden statues are adorning fireplace mantels in houses owned by Sandra Bullock, Jeff Bridges and the screenwriter of the movie “Up” (well, maybe he still rents).  Watching the Academy Awards show just a few weeks ago got me all excited about seeing movies again.  Not just DVDs or downloads, but real honest-to-goodness fresh movies.  It’s a way of supporting the industry that produces big-time entertainment.  I’m not discounting the independent films that are scraped together with a meager budget and limited distribution (some are indeed great).  Sometimes I’m willing to gamble $10 and a few hours of my life on the chance of having an artistic epiphany.  But most of the time I spend my money and time on a “sure thing” – Hollywood’s promise of a fantastic entertainment experience – produced by leveraging proven elements and re-mixing them with some new elements (and pixie dust) to generate a brand new hit.  At least something about the actors or directors or approach or plotlines will already be familiar to me.  I figure that’s the way it is for most people.


Do you ever wonder how a movie can be so creative and still be a project that comes in on time and on budget?  Sure, there are always some crazy movie projects out there that get green-lighted (usually run by James Cameron or, in his day, Francis Ford Coppola) but, by and large, the movie industry has learned that mass customization works.  I just went to see the feature film “Alice in Wonderland” largely because the combination of Tim Burton and Johnny Depp doesn’t usually disappoint me.  I will probably go to the next Disney Pixar animated movie, and I don’t even know the name of it yet.  What does it take to create a production franchise (and not just a series of formulaic sequels, which I hate)?  It is the result of experience, and of trust in the results one gets with talent and a process that works.


So, following this analogy, how can an IT organization get projects done on time and on budget, yet with a predictable level of quality and a successful outcome?  Well, we know that the adoption of ITIL-based processes can help, because they are proven -- they have been collaborated on, used, tested, and refined over time by countless IT organizations.  The ability to codify experience can help.  And how does service management software codify experience? HP believes it is through not only the documentation of ITIL-based best practices, but through the actual out-of-the-box implementation of ITIL-based best practices.  Mass customization in the ITIL world is not achieved using a clean sheet of paper with infinite flexibility to invent all your own mistakes.  It is achieved by building from a foundation of best practices infused into the guts of the HP Service Manager product itself – in the workflow, forms, and pre-configured data such as pre-defined roles, sample service level agreements, service level objectives, and key performance indicators that work time and time again, just like a good production franchise in the movie industry.


 


Does the implementation of best practices limit creativity or force an IT organization to bend to its prescriptions? I don’t think so.  Using ITIL best practices, every instance of HP Service Manager can be tailored to a unique set of customer requirements while still avoiding the excessive re-work that results whenever a fully customized configuration needs to be migrated or upgraded.  This is an approach that makes sense in the real world and a lesson that less mature vendors haven’t yet learned.  Even some of the more established names in service management software take their “best practices” only so far (by providing best practices documentation without providing support for the best practices in the inner workings of the actual product itself).  HP Service Manager is the only offering out there that holds true to the concept of a production franchise by actually implementing ITIL best practices out-of-the-box. Not by prescribing, but by guiding.  Just like the best Hollywood producers.


So the next time you select a movie to see based on your expectations, recognize that it is the mass customization by Hollywood that allows a creative story to be told to an appreciative audience.  Mainstream movies may not be everyone’s cup of tea, but, as a business approach, this method can’t be beat.  Now you know the “magic” behind the success of HP Service Manager as well. 

Taming (if not slaying) one of IT’s many Medusas

My third-grade son and I have been exploring Greek mythology lately.  We’ve been reading about the gods of Olympus.  This new-found interest was triggered by my son having recently listened to “The Lightning Thief” on audiobook - the first of the "Percy Jackson and the Olympians” series.   If you aren’t familiar with Medusa, she is a monster in female form whose hair is made of dozens of horrifying snakes.   The hair-full-of-snakes idea reminded me of a very thorny problem that IT deals with - that of addressing compliance-related issues.  The more I thought about this, the more I realized that almost any problem I have ever come across in IT reminds me of Medusa, but this area in particular stands out in my mind.


 


In my last post I talked about the importance of use cases.  In this post I want to focus on a trend I’ve seen that is often the genesis of a Configuration Management System (CMS) initiative – addressing compliance-related reporting.  Over the years I have dealt off and on with the compliance problem, and it stands out in my mind because of the duality that permeates the issue.  Compliance has this quality of being everywhere and nowhere at the same time.  Let me explain.  When you think about the roles in IT, almost every group has some level of responsibility for supporting compliance, and yet responsibility for what must be done is highly diffused across the organization.  This is true even if the organization has (and most now do have) a Chief Compliance Officer.  From a product standpoint, every product seems able to position itself as a solution, but no one offering by itself really gets you very far.


 


So, having acknowledged upfront that no single product can be all things to all compliance issues: I have been working in the CMS area long enough to see a recurring trend - that of using Discovery and Dependency Mapping (DDM) to help lighten the burden of compliance reporting in highly regulated industries like financial services, health care and utilities.  In each of these industries, I know of at least one (sometimes more than one) large, complex organization with massive reporting requirements that is using DDM to meet requirements around the need to attest and verify that strong controls are in place to prevent unauthorized changes to mission-critical infrastructure.  For many organizations, addressing these kinds of compliance requirements is a hugely time-consuming and costly endeavor in terms of IT hours invested.


 


I will start with a publicly available story, that of FICO.  Known to most in the US for their credit scoring service, FICO used DDM as a key element in a solution which also included HP Service Manager.  FICO talks about their solution from the standpoint of incident, change and problem management, but addressing compliance was certainly a big motivator for them as well.  Because FICO operates in the highly regulated financial services industry, audits are a way of life for them.  Matt Dixon, Director of IT Service Management at FICO, has said that with their solution they went from taking in the neighborhood of a day to address an audit request to being able to do so in a matter of minutes.  Given that FICO deals with something like an audit a day, this is no small deal.


 


A health care company that I know provides another good example.  This company had built a compliance reporting database into which they had integrated close to 100 data sources.  They had further built their own reconciliation logic to support data normalization.   The development effort and the ongoing care and feeding associated with this system were enormous.  The company launched an initiative to rationalize data sources, implement automated discovery and dependency mapping, and replace this home-grown reconciliation database and logic with a vendor-supported solution (they chose HP).
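To give a flavor of what that home-grown "reconciliation logic" has to do - and why its care and feeding gets so expensive - here is a heavily simplified sketch.  The field names and the hostname-only matching rule are hypothetical; a real reconciliation engine matches on many more attributes (serial numbers, MAC addresses, and so on) and has to resolve conflicting values:

    # Heavily simplified reconciliation sketch: merging records about the
    # same server arriving from different feeds. Field names and the
    # hostname-only match rule are hypothetical illustrations.

    def normalize(rec):
        out = dict(rec)
        if "hostname" in out:
            # Lowercase and strip the domain so feeds agree on identity.
            out["hostname"] = out["hostname"].strip().lower().split(".")[0]
        return out

    def reconcile(feeds):
        merged = {}
        for rec in map(normalize, feeds):
            key = rec["hostname"]  # identity rule: short hostname only
            merged.setdefault(key, {}).update(rec)
        return merged

    feeds = [
        {"hostname": "PAYROLL-DB-01.corp.example.com", "serial": "AB123"},
        {"hostname": "payroll-db-01", "os": "HP-UX 11i"},
        {"hostname": "payroll-db-01", "owner": "Finance IT"},
    ]
    print(reconcile(feeds))
    # {'payroll-db-01': {'hostname': 'payroll-db-01', 'serial': 'AB123',
    #                    'os': 'HP-UX 11i', 'owner': 'Finance IT'}}

Multiply that by close to 100 sources, conflicting values, and changing source schemas, and the maintenance burden becomes clear.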


 


It turns out that in their data rationalization effort this company found that something like 80% of the data held in their source systems was redundant at some level across the organization.  This understanding helped them move forward and develop a program around retiring systems and moving to a data-leverage model using a CMS-style approach.  By the way, I suspect that what this company found in terms of redundant data would not be much different if we ran the same exercise at most of the large companies I deal with.


 


Another large company I know, in the highly regulated utility sector, went through a very similar process.  Like FICO, this company is pursuing a fairly broad agenda around incident, change, configuration and release management, but addressing compliance-related reporting requirements was their initial priority.  Like FICO, this company has been able to substantially reduce the time invested in compliance while radically shortening the time it takes to produce compliance-related reporting.


 


So while discovery and dependency mapping is by no means a panacea when it comes to compliance, it can help an organization meet its compliance reporting commitments.  At the heart of many compliance-related requirements is the need to attest and prove that you have tight controls in place around how your infrastructure is managed.  Transparency and continuous visibility into the configurations in your organization are fundamental here, and a CMS can be a key element in achieving both.


 


 

How Long Should a CMDB/CMS Take to Build? Part 2: Culture and Understanding

 


This is part two in a series on understanding CMDB and CMS deployment times.  Last post, we talked about people.  Here, we'll discuss people in an interactive, collective workplace context - in other words, corporate culture - and why corporate culture can be easy, or very, very difficult, to understand and change.


 


And before we get too far, you must realize I did not graduate from any School of Business Science.  So there's probably some theorem or corollary that describes what I'm getting at here.  Something like the Blake-Mouton grid, but with cookies.


 


So I won't be solving all your cultural change problems in a blog post.  I'm here to talk about why understanding corporate and human culture - or failing to - matters so much for TCD (total cost of deployment).


 


Solution Architecture and Service Delivery often take the fall for being late or underdelivering.  Why?  Much of the actual TCD goes unaccounted for.  A scenario: the consultant shows up, and the clock starts ticking.  Shortly, a missing or broken process is revealed to impede the project.  The tech consultant is pressed into the role of business process engineering consultant, then burns the rest of the time on one or two of these process problems - maybe obtaining approval to touch some piece of the infrastructure, or trying to fast-track a three-week change request into three hours.  The project gets behind.  The customer gets unhappy; the vendor gets blamed.  Free stuff is demanded.  What happened?  The consultant told the customer to "prepare" so this wouldn't happen.  Wasn't the problem understood or properly prioritized?  Delivery is relied on when they arrive - they have "done this before" or "should" know how to fix these kinds of problems.  Big mistake.  Organizational issues in the mirror look smaller than they actually are.


 


Why do people and cultural change remain the biggest variables of the deployment, the most difficult to get right, and the hardest to estimate?  Pragmatically, they're difficult to measure.  Scientifically, they're poorly understood.  Rhetorically, I suppose one could answer "lack of mutual understanding".  Especially that "mutual" part - you understanding them is not enough.  You must strive not merely to be heard but to be understood as well.   Understanding is not just the first step to effecting cultural change - it's the thing.  Going from an informal to a formal process for, say, change management can be really earth-shaking culturally, especially if a profound understanding of the people and culture is ignored, misunderstood, or underbudgeted - and if they don't understand where you are coming from.


 


This is serious stuff - we're messing with people's ideologies here.  And not just who-moved-my-cheese ideologies.  Ideologies like associating personal self-worth with job performance, a very strong one among many - in part because people tend to become attached to eclectic parts of their job: ironically, those parts that have no safety net, that depend on human talent to get done properly.


 


I have seen trouble even in well-planned and well-executed projects, because the project carried before it an apparent air of distrust, or an implication that humans were no longer doing a good enough job so "control" was needed.   And the project intended none of those things - the people just weren't in on what was happening.   Even good people who do a good job can take it personally, or feel misjudged, or feel that they've failed in some way if they go unassured for too long.  Ignored long enough, these good people's fears will develop into a fight, and you won't understand why.  This is the kind of stuff that can tsunami project schedules.   People who feel in on things tend to produce a lot more.


 


Address common, as well as valid, concerns:  "Alice, you do a great job, but even with 99.98% accuracy, that 0.02% is worth about a million dollars a year.  That not only justifies the cost but demands that we implement this automation to stay competitive.  You didn't do anything wrong to 'cause' this project."  If you can say this sincerely and without patronizing, you'll get very far.
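For the skeptics in the room, the back-of-the-envelope math is easy to show.  A minimal sketch; the yearly volume and cost-per-error figures are hypothetical, so plug in your own:

    # Back-of-the-envelope cost of a 0.02% error rate. The yearly volume
    # and cost-per-error figures below are hypothetical placeholders.
    accuracy = 0.9998
    error_rate = 1 - accuracy         # 0.02% of work items go wrong
    items_per_year = 500000           # hypothetical yearly work items
    cost_per_error = 10000            # hypothetical dollars per bad item

    errors = items_per_year * error_rate    # about 100 errors/year
    annual_cost = errors * cost_per_error   # about $1,000,000
    print(f"{errors:.0f} errors/year, costing ${annual_cost:,.0f}")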


 


Still, cultural change is an enigma in some organizations.  Culture?  What do you mean, "culture"?  It is what it is.  (An actual response to me from a manager of long ago.)  How does one change what one can't understand, let alone measure?  I'm sure most of you understand your organization well.  What you may want to take a fresh look at is this: is corporate culture grown, or constructed?  That matters in your approach.  What parts of your organization and culture can and cannot change?


 


It's how you think about it.  And this doesn't merely depend on the nature of the force applied and the malleability of the material being worked.   Complex dances like the Change Management Tango or the Incident Cha-cha cannot be beaten into being; they are not rough structures forged in a foundry and bolted together!  They must be grown, nurtured, filed at odd angles, sanded a lot, to produce the right result.  Think of your project as constructing a precision instrument, not raising a barn.  These are the nuances of cultural change, not the blunt strokes of an .mpp file.  Approached improperly, the vicious old Theory X minimal-effort/maximal-clarity cycle rears its ugly head.  Zombified IT - there's a good ITIL replacement:  Obey…obey…


 


And you can't buy this cultural change off with a few all-hands meetings or a prop- uh, I mean an advertising campaign.  Pep rallies thinly disguised as running interference for a no-choice change might just deserve the derision and sabotage that come their way.  Which could be a lot.


 


To effect real cultural change, your people must not just hear but believe.  You can make people do the former but not the latter.  DO bother to show them the numbers,  your assumptions, your heart - why you believe this is good for IT and the business because your people are a part of both.  Even if they look bored.  Don't assume they read all your memos or that memos will change corporate culture.  If you can't do these things, your project probably has a higher risk of failure.  And, while a business is not a democracy (unless you're an elected  government),  there are some things it behooves one to be democratic about.  And I'm not talking about voting.  I'm talking about involvement and communication!


 


These are the types of obstacles you face.  Any history you care to read teaches us that the best intentions of human engineering have often run aground on the unpredictable shoals of human behavior.  Don't skimp on the research, and don't buy someone else's.  Configuration Management Systems are, even today, still hand-made.  But the parts are getting much better.


 


Next time we'll look at process engineering, which is almost as mysterious and as difficult to estimate as cultural change.  Whether you're planning, implementing, or operational, I hope I've given you some small insight into how important and unwrangly TCD really is and what success really means organizationally.  Can you relate?  Do you agree?  Care to share what your TCD numbers were?  Please reply and let us know how you feel.  Thanks.

Meet The Experts: A series of webinars on managing a virtualized IT environment

HP recently sponsored a series of virtualization roundtables, run by CIO magazine, titled "What your team's not telling you about virtualization". Over the course of these roundtables, we heard from more than 100 IT executives (C-level, VP…) about what’s on their minds regarding the management challenges around virtualization. The discussions between the HP speakers and the customers were very interactive, and regardless of industry or city, a common set of needs was expressed by these IT executives, including the need to:


• Automate change across physical and virtual environments that make up the business service


 • Become more cost efficient


• Increase IT operations efficiency and deliver high-quality services


• Better enable business continuity and compliance


• Manage asset and software entitlements, contracts and deployments


• Learn from their peers and from HP about the best practices around virtualization


 As a result of this feedback, we scheduled an April web event series (six one-hour virtual discussions) that drills down to answer these common needs. They are called the ‘Meet the Experts’ presentations where virtualization experts discuss best practices. Some of the speakers are from HP, some are customers. The dates and topics are:


 • April 13 - Optimizing service modeling, discovery, and monitoring for VMware environments


 • April 14 - Protecting Virtualized Environments from Disaster with HP Data Protector


 • April 21 - Testing Smarter and Faster with Virtualization


• April 22 - Improve customer satisfaction and maintain service levels in virtualized environments


• April 27 - BCBS of Florida builds a foundation for virtualization with HP Asset Manager


• April 29 - Virtualization: Compliance enforcement in a virtualized world


If you are interested in listening to any of these presentations you can attend by registering at: https://h30406.www3.hp.com/campaigns/2010/events/1-8K6H1/index.php?rtc=3-3ERQKL8&jumpid=ex_r11374_us/en/large/eb/adv3_virtualization_wave_sdr_ptr/rtc_3-3ERQKL8/20100310.  I think they will be interesting and insightful if you want to learn more about how to manage a virtualized environment!


 

How Long Should a CMDB / CMS Take To Build? Part 1: People

For some time I've been exploring the value of a CMDB and CMS.  A big part of the TCO value equation is the  TCD, Total Cost of Deployment.  TCD is often underestimated - note how little Google has on the subject.  Why?


 



  • Not everything that matters gets estimated

  • TCD tends to be estimated optimistically or "carved" to fit  budgets

  • The number and complexity of problems are often underestimated

  • The straightforwardness of fulfilling the first few use cases is often overestimated


 


And isn't all this "estimating" really a euphemism for "we don't know"?  If we knew, we wouldn't have to estimate.  Estimation has an inherent connotation of uncertainty that we don't like.  We all want complex things to be simpler, more transactional, more commoditized, than they really are:


 


C: "Nice CMDB.  I'll take it."


V: "That'll be one million dollars.  Where do you want it delivered?"


C: "Dock 2."


V: "Ok.  What color?"


C: "Fast."


 


No really, what should you expect your CMDB deployment to be like?   What should we be focusing on estimating?


 


The things we like to use to estimate CMDB deployment aren't the biggest or most important variables.  We like to focus on questions like: how long do other deployments take?  How long will it take to install the software?  How long until the hardware arrives?  When can we get everyone "trained"?  All good and proper project management - can't do without it.  A deployment project is focused on consulting time and cost, hardware schedules, definable things.


 


But the two most important factors are also the two most difficult to measure and change: people and processes.  These are also the biggest variables in estimating time to full implementation of a CMDB or CMS.  In this series, this post will start with the biggest variable, people.


 


Implementing a CMS is much more than getting the solution deployed, or getting some discovery done, or even getting some providers and consumers onboarded.  It's about changing the way IT works. To that end you absolutely must start with what IT is - not a data center or even a collection of infrastructure and apps - IT's an organization, and organizations are built around people, process and culture.


 


Deploying a CMS will touch almost everyone in the IT organization, because the CMDB almost always follows the implementation of some other initiative of IT-wide scope, such as change or release management.  As ITSM initiatives go, so goes the CMDB.


 


The Kicker:  The ITSM ecosystem of applications, plus the CMDB to facilitate exchange of configuration data and the common view of IT services, forms the CMS.  Now this should sound like a much harder project than implementing a CMDB.  It is, that's my point.   Without thinking of your CMDB this way, you are likely to do some of that dangerous underestimating of the effort of getting ROI out of your CMS after your consultants have left the building.


 


In my next post I'll explore - and ask you - why cultural change is the most important part of the deployment, the most difficult to get right, and the hardest to estimate.


 


Questions, comments, complaints, please reply and let us know how you feel.  Thanks.


 

How important is Service Asset and Configuration Management (SACM)?


One of HP's enterprise customers thinks it’s important!  This large insurance company has consolidated their asset management, human resources, and configuration management system data to produce inventory data reports for several departments. They are looking at SACM from different perspectives and ensuring that data accuracy and calculations are consistent between the different views. I don’t think this company is alone.  Companies seem to be increasingly challenged by the complexity of their IT environments and are looking for better ways to manage and control their infrastructure.


 


The ITIL v3 definition of Service Asset and Configuration Management is:


- The SACM process manages the service assets in order to support the other Service Management processes.


- The SACM objective is to define and control the components of services and infrastructure, and to maintain accurate configuration information on the historical, planned and current state of the services and infrastructure.


 


It’s vital to provide integrated, accurate and current data across IT, and it requires rigorous processes to achieve this federation of data. A goal of SACM is to establish Asset Manager as the reference source for assets from the point they are procured to the time they are retired. But what’s the best method of federating data?


 


HP’s Asset Manager integrates with the UCMDB to automate the ITSM process without requiring a monolithic repository, and it ensures all hardware and software assets supporting business services are effectively managed.  It also provides a clear illustration of the dynamic enrichment of CIs: federating attributes from an external authoritative source of data.
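To make the federation idea concrete, here's a minimal sketch of the pattern.  The class names and fields are hypothetical illustrations of federation in general, not the actual UCMDB or Asset Manager APIs:

    # Federation sketch: the CMDB owns the core CI record and pulls
    # extra attributes from an authoritative external source on demand,
    # rather than copying them in. Names and fields are hypothetical.

    class AssetMDR:
        """Stands in for an external authoritative asset repository."""
        _records = {
            "srv-001": {"cost_center": "CC-1234", "warranty_end": "2011-06-30"},
        }

        def attributes_for(self, ci_id):
            # Fetched live at query time -- never copied into the CMDB.
            return self._records.get(ci_id, {})

    class FederatedCMDB:
        """Holds core CI records; federates extra attributes on demand."""
        def __init__(self, mdr):
            self._cis = {"srv-001": {"name": "payroll-db-01", "type": "unix_server"}}
            self._mdr = mdr

        def get_ci(self, ci_id):
            ci = dict(self._cis[ci_id])                  # core record
            ci.update(self._mdr.attributes_for(ci_id))   # dynamic enrichment
            return ci

    cmdb = FederatedCMDB(AssetMDR())
    print(cmdb.get_ci("srv-001"))

The asset attributes stay mastered in the asset repository; the CMDB simply knows where to get them, which is the whole point of federation.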


 


Is your company tangled up in confusing asset reports and CMDBs?  Is your data federated across your IT environment?  I want to know your thoughts...

Important week for Software Asset Management

You may have read my recent posts about Software Asset Management, where I have been promoting ISO 19770-2 software ID tags.


This is an important week for the future of Software Asset Management.  This week, the US General Services Administration (GSA) is meeting with some of the people involved in passing the ISO 19770-2 standard and with TagVault.org.  They will be discussing whether the US Government will adopt ISO 19770-2 software tags as a requirement for all future software purchases.


I, for one, hope the GSA adopts this requirement and forces software companies to include these tags with all software.  I also hope the GSA will adopt an aggressive yet realistic date for the requirement to become mandatory.  And I hope this is a “hard” requirement, because otherwise adoption rates may be low, or it may take a long time for these tags to become common.  The tags are relatively easy to create, and TagVault.org can provide assistance and, perhaps more importantly, is becoming a central tag certification and signing authority.
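To show how lightweight a tag can be, here's a sketch that generates a simplified software ID tag.  To be clear, the element names below are illustrative only, not the exact ISO 19770-2 schema; consult the standard (or TagVault.org) for the real structure and required fields:

    # Sketch: generating a simplified software ID tag. Element names are
    # illustrative, NOT the exact ISO 19770-2 schema.
    import xml.etree.ElementTree as ET

    tag = ET.Element("software_identification_tag")
    ET.SubElement(tag, "product_title").text = "Example Product"
    ET.SubElement(tag, "product_version").text = "7.1.1"
    ET.SubElement(tag, "software_creator").text = "Example Vendor, Inc."
    ET.SubElement(tag, "tag_creator").text = "regid.example.com"
    ET.SubElement(tag, "unique_id").text = "example-product-7.1.1"

    print(ET.tostring(tag, encoding="unicode"))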


In other words, I hope the outcome of the meeting will be a statement like “In order to sell software to the US Government, your software must include an ISO 19770-2 tag; the requirement is effective January 1, 2011,” as opposed to “The US Government will prefer software which includes ISO 19770-2 tags from today on.”


I will be waiting for the results of the meeting.  I hope the GSA decides to require these tags and soon.


If you are involved in Software Asset Management this could be like Christmas in March.  And if all goes really well, then maybe the requirement will come into effect in time for Christmas this year.

The “Early Majority” is marching on Washington D.C. for HP Software Universe

I’m the lucky track manager for the “Pragmatic ITSM” track at the HP Software Universe event in Washington D.C. in June 2010.  In reviewing the line-up of large, recognizable companies coming to speak for the ITSM track, it seems to be a microcosm of what I am noticing on a larger scale:  that HP Service Manager is “Crossing the Chasm”, a milestone in the Technology Adoption Lifecycle described by Geoffrey A. Moore in his well-known book of the same name.   


We appear to be moving from those “Early Adopters” who chose to upgrade to HP Service Manager in past years to the “Early Majority” who are now following suit.  Far from visionaries driven by a ‘dream’ of matching an emerging technology to a strategic opportunity, these pragmatists are driven by vendor leadership and stability, product quality and reliability, and a robust feature set enabled by an infrastructure of supporting products and system interfaces.  The speakers in this year’s line-up are talking not just about their actual migration experience, but going beyond that topic to discuss the value add that HP Service Manager is providing to their business. 


The following testimonials amply demonstrate that, while less mature point solution vendors tout limited help desk platforms requiring substantial customization to deliver value, the parade towards full-scale integration and automation of ITIL processes, with the ability to provide maximum out-of-box value to the business, has already left town.  Here’s a sampling of the must-see ITSM track line-up:   


·    Independence Blue Cross will show how the integrated HP business technology optimization (BTO) suite can deliver a holistic view of service performance.  Such a view enables you to monitor and measure the performance of business and infrastructure services, and is required to institute continuous improvement, achieve cost reductions, and establish IT as a consumable service. This capability opened the door to service-level agreements between IT and the business, real-user activity data, and executive dashboarding.


·    Core Media Technologies will demonstrate how it has brought order and efficiency to IT service delivery for 30+ global media communication services companies while staying agile to meet the needs of each of its customers. The secret was to make every process service-driven: the security model, the subscription, notification and change management processes and the reporting of results to management, to name a few.   


·    NBC Universal will show how portfolio-based demand-management practices helped their organization focus on priority projects, maintain the right mix of applications, and provide valuable services to the business. They will tell you how, by consolidating demand on one centralized platform, you can give IT and business leaders full visibility into IT portfolios, communicate and track the true cost of ownership, improve resource planning, and standardize processes to improve operational efficiency.   


·    Volkswagen Credit will share cutting edge ideas around the Service Catalog, Request Management, and end-user self-service, and describe how these helped them improve speed of service delivery and responsiveness to service requests, increase customer satisfaction, and reduce cost per end-user. 


·    Sprint and several other companies will describe the value of fully integrated ITIL process workflows supported by a cross-HP BTO software solution and will share best practices on integrating HP IT service management (ITSM) and business service management (BSM) software.  


·    Blue Cross Blue Shield (BCBS) of Tennessee will describe to you how they improved audit compliance and minimized risk by implementing an ITIL-based change management solution with tight integrations between HP Service Manager and HP Asset Manager. 


·    UnitedHealth Group will talk about their large-scale migration from HP Service Desk to HP Service Manager, including how their architecture allowed them to scale to such magnitude, key areas of customization, and how important integrations with HP Universal CMDB and HP Operations Manager increased the reach and automation potential of their ITSM solution.   


·    America First Credit Union (AFCU) will show how, by integrating HP Service Manager and HP Universal CMDB via Web services, they have developed a ‘360 degree view of IT’.   


·    A Fortune 500 company will be showcased as to how it decreased outages and managed change through a well-defined, automated release process, by synergistically integrating HP Service Manager, HP Release Control, and HP Server Automation.  


·    HPES (formerly EDS), the largest HP ServiceCenter customer in the world with a massive multi-tenant environment, will show how they have developed an efficient, effective, low-risk, repeatable process for migrating each of their customers from a legacy system to HP Service Manager 7.11.  As an example, all data for one group of their 178 customers was migrated in 2.5 hours!


·    Playtech will share how they minimized costs and risks in a hosted HP Service Manager implementation, with HP’s Software as a Service (SaaS) offering. You’ll hear how they enjoy reduced TCO and renewed IT focus on service delivery, and how they are able to re-deploy IT resources to activities and projects that provide greater value to the business, by leaving both the most tedious and the most challenging aspects of HP Service Manager support and maintenance to the experts at HP. 


Join us on June 15-18 in Washington D.C. for a state-of-the-art conference about outcomes that matter NOW—to your career, your organization and your business.  IT professionals who want to increase control, transparency and flexibility so that the IT organization delivers more value to the business should attend HP Software Universe. 



Register with the code INSIDER at www.hpsoftwareuniverse2010.com and get $100 off the conference rate.  Keep up with updates on HP Software Universe by following our Twitter and Facebook feeds.


 



 

How far will your tires take you?


When you are getting ready for a long drive, you make sure your car is in good working order.  One of the things you check is the tires.  After all, you won’t get far without tires, and you don’t want to get stuck in the middle of nowhere because you blew a tire and had no spare, or had an accident because your tires were bald and the car skidded in the rain.


Discovery is like tires for your IT solutions.  Whether you are talking about managing endpoints, implementing a CMDB, or rolling out an Asset Management solution, you need to be able to discover the environment relevant to your needs.


We tend to focus today on “higher level” solutions.  CMDB and CMS are hot!  Software Asset Management is up there as well.  Everyone spends lots of time selecting and evaluating the right products in those areas.  We make sure they can handle the size of our environment and have the functions we need to assist in our daily jobs.  That’s great - choosing the right product is paramount.  I recall working with one of the large IT industry analyst companies a few years ago.  They rated the product I was selling at the time as the best in the market.  But when we tried to get them to adopt it internally, they were very quick to point out that what is best in the market may not fit their specific needs.  Yes, they implemented our product in the end, but the point was made - choose products that meet your needs, not the ones that are marketed the most or rated the best.  But I digress…


Let me focus on Asset Management, since that is what I am most familiar with these days.  You evaluate Asset Management products and choose the best one (of course, I hope the winning product is HP Asset Manager).  You choose the right product for asset management, but how do you populate inventory data?  Many customers simply choose to use existing products for feeding data to the Asset Manager product.  Why?  Because they are already deployed and, well, data is data, right?


If you buy a car, you make sure it looks good, feels comfortable and handles well.  When you get into an IT solution, like Asset Management, you pick the right product that fits your needs.  But when it comes to data collection, many people say: I will just use whatever I have.  It’s cheaper, and data is data.  Except that in many cases data has to be transformed into information.  And that will cost time, effort and money.  It will require ongoing maintenance as the environment changes.  Do you want to maintain a custom solution?  In the majority of situations, IT does not want a “custom” implementation of any product anymore.  Do you just stick on whatever tires are cheapest?  Would you put 14-inch tires on a Hummer?  No.  And you shouldn’t pick the cheapest discovery tool either.  You should make sure it meets your needs, and the criteria must include “does it provide the data the consuming products need?” and “is the data in a format that is easily consumed?”  It is true that you will likely end up with multiple tools that collect overlapping data.  It will cost you some storage, and it will cost some resources to collect and transfer the data to its destination.  But the cost of the overlap should be quite small.  And the value of the right data in the right format is that the overall solution will work as intended and required.


If you buy a Hummer, don’t skimp on the tires.  Make sure the discovery product you use delivers the data you need with little or no customization.  It will be safer, more comfortable and cheaper in the long run.


 


"Aha!" Moments: Serendipitous Early Value Encounters in CMDB and CMS projects

My Social Media Manager Heather asked me to consider writing a customer success story for one of my blog posts.  I decided to try to raise the bar even further: this is a compilation of many customers' success stories, albeit small ones.  I'm talking about "Aha!" moments - successes they didn't know were coming.


 


Early, informal value realization - the “aha!” moments found early in a CMS or CMDB project - can act as value tiers, financing time and credibility for the more difficult-to-realize, longer-term ROI.  Especially if your funding and sponsorship depend on volatile management moods and economic fluctuation.  Look for aha moments whenever you can - they will usually reward you and your project in ways that are hard to predict.  And, very occasionally, punish you, in those environments where No Good Deed Goes Unpunished or where messengers, especially bearers of bad news, are still killed - you know who you are.


 


Informal research (my own experience and anecdotes from my customers and colleagues) indicates that some engagements are more successful because value was realized and documented early and often. This is paradoxical, because the business transaction funding the project is usually measured only on the final value realization, i.e. fulfilling the sponsored use cases. However, if no value is shown before the primary use cases are delivered, organizations typically do not fare as well. It is unclear which way the correlation runs - whether the lack of early success led to a loss of project momentum, or whether a struggling project inherently generates fewer aha moments.


 


Apparently, “aha!” moments can be almost as important as the overall drivers, even though these are rarely formalized or anticipated. However, aha moments alone cannot sustain a project. The primary use cases must be delivered.


 


Here are a few of my favorites.  Each one lists when the aha moment happened, its source, why the "aha," and what the aha was worth.


Aha moment #1 - Planning: problem awareness

Why the "aha"?  Broad projects like these illuminate the business case throughout the organization.

Take the example of the "email from corporate" - an email notifying all employees of a new project.  It goes unread by many because of the sheer number of such announcements.  Until something happens that involves them, the project is only dimly visible to much of the organization.  The "aha!" moments happen when you start interviewing people (known as "surfing the organization").  When the conversation starts with "Why are you here?" and ends with "Can I play too?", the value of the project has been successfully evangelized.  But this is not the aha.

When people begin answering the planning questions - questions like "What is your process for documenting applications?" - and the answer is "Well, we really don't have a process for that," they begin to realize that there is a broken, missing, or inefficient process that the CMDB use case can improve.  People become more aware of the problems facing their organization, at a level higher than their own function.  From the individual's perspective, the organization's "ubiquity" is reduced; it becomes a bit more personal.

The value:  Intangible value ranging from motivation, morale, and incentive to participate, to a developing interest in the company's higher functions - adding momentum not only to the CMS project but, in part, to the entire organization.


Aha moment #2 - Planning: infrastructure awareness

Why the "aha"?  During interviews, we have sometimes uncovered missing firewall rules, missing hardware redundancy, and missing security rules.  We would ask something like "What is your firewall policy for this DMZ?"  The technician would log on to the firewall or look through their spreadsheet and say "I don't see it."  They would call their buddy or their manager and discuss it, then turn to us and say "We're fixing that right now" or "We've got to open a change for this."  Cha-ching!

The value:  Risk is directly reduced by correcting redundancy and security-related issues.


Aha moment #3 - Discovery: identifying infrastructure single points of failure

Why the "aha"?  Initial baseline discovery has found the actual infrastructure to be contrary to its stated configuration.  This is due not only to outdated documentation, but to misunderstandings and to differences between planned and implemented solutions.

For example, a single point of failure was found for a mission-critical application requiring redundancy down to the network level: connectivity to the application's database was found to flow through a single router.  The Senior Geek we were working with assured us this was impossible and that the tool had to be wrong somehow.  A few phone calls to his Alpha Geek and some probing later, the single point of failure was confirmed.  The Alpha Geek and his team were later praised for uncovering a critical point of failure so early in the project.  We looked pretty good too.  Cha-ching!

The value:  Risk is directly reduced by identifying and correcting situations that fall short of the documented or expected implementation.  Depending on the significance of the differences, finding these kinds of things is often a big boost to the credibility of, and confidence in, a fledgling CMDB initiative.


Aha moment #4 - Discovery: non-standard or unauthorized hardware and software

Why the "aha"?  Often, unauthorized software or hardware configurations place production at risk - for example, non-standard software or patches installed on production servers.  Actual examples of "risky" hardware:

  1. Finding part of a production application running on a desktop

  2. Finding personal network hardware on a production network

  3. Finding part of production running on the CIO's desktop at his residence!  Cha-ching!

The value:  Reduced risk to production applications.


Aha moment #5 - Discovery: security and auditing

Why the "aha"?  SNMP was often found to be running with the default community string, even after the security staff had assured us that none of their devices were on the default value.  Likewise, insecure protocols such as telnet, disabled by policy, were found to be enabled.  Some SMEs, when approached, are skeptical of the discovery results; only after verification using another tool will action be taken.  Often, this has the net effect of increasing trust in the product.

The value:  Risk is directly reduced by identifying missing or default security credentials.  The amount of value varies widely depending on where the breach in question was located: a breach found in a DMZ is worth more than one found in an internal-only network.  Confidence in the CMDB contents usually increases with these kinds of discoveries, because they are often visible to management and other groups.


Aha moment #6 - Dependency mapping: unexpected host and application dependencies

Why the "aha"?  When we start putting the topology views together for the core service models, we sometimes discover application relationships that make a difference.  For example, during DR planning, a customer found that a mission-critical application depended on a "non-critical" application - which this discovery had just made critical.  Plans were changed to move the newly-critical application at the same time as the mission-critical one.

The value:  Outage avoidance is more than risk reduction - had the situation not been found and corrected, there would have been an outage.  This is a direct improvement in quality, both statistically for risk and cost-wise operationally.  Even if the ROI is hard to quantify, there is no doubt that ROI occurred.  Cha-ching!


Aha moment #7 - Impact analysis: new application-level dependencies

Why the "aha"?  As in the previous scenario, we sometimes uncover additional dependencies when we begin testing the impact analysis correlation rules.  Usually it comes in the form of gap identification with the application owners, e.g. "Hey, where's the ABC app, huh?"  But you should take relationship identification any way you can get it.

The value:  Outage avoidance, as described above.


Aha moment #8 - Training (formal and informal) and interaction with application SMEs, customer management, and technical staff (network, security, DBAs, etc.): new use cases

Why the "aha"?  When you have good stuff, everybody comes to you and asks you to make it do everything you ever told them it could do.  It can be quite overwhelming.  The team begins linking the concepts learned in training to solving their own problems.  Matrix teams such as those found on CMS and CMDB projects often bring valuable new and challenging use cases to the table.

So there's a risk of "scope creep" as students try to apply resources already allocated to the primary use cases to their own use cases, or as the project attempts to take on too many use cases too early, before it has sufficient momentum to succeed.  A lot of aha moments can increase project momentum and, in a way, improve the time-to-value of the primary use cases along with it.  It's worth a mention that, as a CMDB matures, it can take on those additional use cases.  So aha moments aren't exactly scope-creep repellent, but they make the smell more tolerable.  However, too many use cases early on tend to starve the project of delivery of the primary use cases.  Don't run before you walk.

The value:  With a reliable means of capturing these use cases, the CMS grows in value, further decreasing the cost of implementation and improving time-to-value through experience.  All consumers benefit from a collective body of expertise.


Ultimately, aha moments alone are insufficient for a project's success, but they do seem to play an important part.





 


Yeah, you caught me, this is part of a paper I already wrote.  So I'm a little drier here than in my previous posts.  That's ok, I've been pretty "wet" so far.  But this is serious stuff when you get down to it!  My pontification (and YOUR COMMENTS, PLEASE!) should add up to something greater than poking fun at dumb stuff and waxing philosophic about mundane topics like Configuration Data Provider Rationalization (another juicy topic coming soon!).


 


If this is interesting to you, please let us know.   Our blog isn't really a blog until we get our user community actively involved and discussing these topics, which are really little more than starting points.  Please take a quick moment to let us know if you agree, if we suck, or what.  We'd appreciate it.

Location, Location, Location – Part 2

In my last post we took a look at the lineage of today’s CMS efforts.  The two major lineages I cited were ITIL v2 CMDB initiatives and dependency mapping initiatives focused on application architecture reengineering.   A modern CMS initiative unifies these heritages from a technology standpoint.  It brings together the aspirations of an ITIL v2 CMDB initiative, but does so in a technology form factor that is much more practical given the complexity and scale of any modern enterprise.


What I mean to say is that having a federated CMDB acting as the bridge to other CMDBs and to other management data repositories (MDRs) is a much more practical approach than focusing on the creation of a single monolithic CMDB.  Consuming applications in turn leverage this federated CMDB for access to configuration item information, business service context, and any information across IT that can be tied to these two elements.


To be effective, a modern CMS must also embrace automated discovery and dependency mapping.  The huge amount of gear and the complexity of today’s multi-tier, shared-component application stacks make it totally impractical to support most IT operational functions without automated discovery and dependency mapping.  The old approach of leveraging tribal knowledge and manual processes just doesn’t scale: it results in a data layer that is far too incomplete and far too inaccurate to meet the data integrity requirements of the IT processes that need to consume it.


So where are we today?  The technology platform to effectively implement a modern CMS exists right now.  Of that I have no doubt.  It is not perfect, but it is very, very capable.  But if CMS initiatives are not to go the way of prior CMDB and dependency mapping efforts, more than technology is required.  What is required is a focus on use cases first, meaning a strong, crisp set of data requirements to support one or more critical IT processes.  Once this is well understood, you can focus on what data is needed and where that data will come from.   Sponsorship will also be stronger when initiatives start from well-defined consuming processes than when they start from the data-gathering side only.


The requirements related to data sources should be fairly fine-grained - meaning you must understand requirements down to a data attribute level.  Saying that server data will come from solution “Y” is not enough, since the server data consumed by a specific IT process may require that your understanding of what a server is encompass data from many sources.   The bottom line remains the same: “use cases, use cases, use cases”.
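To make "down to a data attribute level" concrete, here's a minimal sketch of what such a requirements map might look like.  The attribute names, source names, and freshness figures are hypothetical stand-ins, not product recommendations:

    # Hypothetical attribute-level source map for the "server" CI class.
    server_attribute_sources = {
        "hostname":       "discovery_tool",    # discovered, refreshed daily
        "serial_number":  "discovery_tool",
        "os_version":     "discovery_tool",
        "owner":          "asset_repository",  # entered at procurement
        "cost_center":    "asset_repository",
        "location":       "facilities_db",
        "change_history": "service_desk",      # federated, not copied
    }

    # A change management use case might need only a subset, each with
    # its own freshness requirement (maximum age, in hours):
    change_mgmt_requirements = {
        "hostname": 24, "os_version": 24, "owner": 168, "change_history": 1,
    }

    missing = set(change_mgmt_requirements) - set(server_attribute_sources)
    print("unsourced attributes:", missing or "none")

An exercise like this, repeated per CI class and per consuming process, is what "fine-grained" means in practice.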


Let me know what your experience has been addressing dependency mapping, CMDB, or CMS initiatives at your company.  My colleagues and I would love to hear from you, but even more important, I know others working on similar initiatives at other companies would too.

Is Your CMS "On Fire"?

How does one measure the "Quality" of something?  What does CMS "Quality" mean?   "High-Quality" is thrown around without much substantiation, especially in the world of software.


 


My friend Dennis says that for a CMS to work it must be actionable.  Of course we all agree.  But how many of us are measuring (or trying to measure) the actionability of the CMS?  What are the metrics for measuring "actionability," and what other metrics are important to its other, lower-level functions?  How is it possible to measure fuzzy, subjective, inexact things like "data quality"?  At what point does measuring the CMS become ROI analysis?  I'm full of questions today.


 


Let's start pedantically for fun:


 


On Fire: adj. 1. Positive connotation: A continual period of producing exceptional work.  On a winning or lucky streak.  "Three goals in one game, he's on fire!".  The good kind of on fire.


 


2. Negative connotation: Exceptionally behind schedule or fraught with so many problems as to seriously hinder, halt, or even reverse forward progress.  "Our waiter is so in the weeds he's on fire."  Aka "mega-backlogged" or "dead in the water".      The bad kind of on fire.


 


3. Aflame, as in, seriously hot, or producing a glow or light.   Can apply to either prior definition.


 


Fighting Fires:  Helping someone who is on fire in the bad way.  Commonly for someone important.    It is possible to catch fire  from fighting too many fires at once.  So much for my dictionary-writing skills.


 


For whatever acronym is commercially and culturally significant to you, there's a way to say you're On Fire - in both the good and the bad ways.


 


Is your CMS on fire?  How would you know?  What metrics would one look at?  Is there such a thing as a CMS "thermometer"?    Let's call it a CMS-o-Meter:


During implementation, it's easier to tell whether your CMS project is on fire.  Assuming we defined clear goals and reasonable success criteria, we can look to the early deliverables and status reports, like any other project, to determine how on fire we are, one way or the other.


 


But operationally, once you get the CMS or part of it built, how does one measure its temperature?


 


What if your pile of CIs were as important as, say, a nuclear pile - you pretty much couldn't go without a thermometer.  Kind of important to avoid catching fire.  Big, hot fire.  The kind that burns you for a long time.  How important is the CMS to your IT?  Got anything valuable in there?  Published research and personal experience both show that it is all too easy to blow up your CMDB project.


 


Two tried-and-true methods for finding and measuring quality in almost anything are 1) what's important to the consumer, and 2) what's important to whoever is responsible for the maintenance.  This is true for a car or a Service Desk or a CMDB or a CMS.


 


Research from Gartner suggests that monitoring data quality is not widespread; the decision to monitor data quality tends to be either an afterthought or chronically starved of resources, given the cost not only of doing the monitoring but of learning how.


 


Use-case and consumer-based metrics could include qualitative vectors like timeliness and accuracy of the consumed information.  These are often difficult and costly to measure, but they're the best indicators.  I believe you should invest here if you can.  Talk to the users.  Measure the value chain as far up as you can, up to and including the business.  Are your change control and closed-loop incident/problem management processes working better?  Are your MTBF and MTTR improving?  Did the business lose less money to critical availability downtime?
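For instance, here is a minimal sketch of two such consumer-facing indicators, MTTR and MTBF, computed from incident records.  The timestamps are invented; in practice this data would come from your Service Desk.

```python
# MTTR and MTBF computed from (opened, resolved) incident records.
from datetime import datetime

incidents = [  # (opened, resolved) pairs for one business service
    (datetime(2010, 3, 1, 9, 0),  datetime(2010, 3, 1, 11, 30)),
    (datetime(2010, 3, 8, 14, 0), datetime(2010, 3, 8, 15, 0)),
    (datetime(2010, 3, 20, 2, 0), datetime(2010, 3, 20, 6, 0)),
]

def mttr_hours(records):
    """Mean time to restore: average outage duration, in hours."""
    total = sum((resolved - opened).total_seconds() for opened, resolved in records)
    return total / len(records) / 3600

def mtbf_hours(records):
    """Mean time between failures: average gap between incidents, in hours."""
    gaps = [(records[i + 1][0] - records[i][1]).total_seconds()
            for i in range(len(records) - 1)]
    return sum(gaps) / len(gaps) / 3600

print(f"MTTR: {mttr_hours(incidents):.1f} h, MTBF: {mtbf_hours(incidents):.1f} h")
```

Watch the trend of these numbers over time; a single reading tells you little about whether the CMS is helping.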


 


Process-based measurements could be important too, such as the performance of the CMS itself - what is the latency when you open an RFC, or when the Service Desk creates an incident (both processes can consume or provide data to/from the CMS)?
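A simple way to capture such a process-based measurement is to time the CMS-facing step itself.  In the sketch below, cms_client and get_impacted_cis are hypothetical stand-ins for whatever API your tooling actually exposes.

```python
# Time a CMS-facing operation and report its latency.
import time

def timed(label, fn, *args, **kwargs):
    """Run fn with the given arguments, print its latency, return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.0f} ms")
    return result

# Hypothetical usage when an RFC is opened:
# cis = timed("open RFC - fetch impacted CIs",
#             cms_client.get_impacted_cis, change_id="CHG00042")
```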


 


Administration-based metrics are usually more foundational and architectural:  Does the system work?  Is it secure?    Is working  with the software more like dancing or more like wrestling?  Do you get good support from the vendor, and as importantly, is it easy to work with support?  Is R&D responsive on patching major problems?  Is the vendor forthcoming with their road map?


 


Stratifying the measurements this way will help. 


 


The takeaway here is not a comprehensive list; it's that you should be concerned about, and invest in, quality measurement of your CMS and its data as much as in the data itself.


 


Think about making the CMS actionable.


 


Build yourself the right thermometer for your CMS.


 


Calibrate your CMS-o-Meter to make sure it's reporting accurately.


 


Then monitor these metrics.  Operationally, in Production, like you mean it.  Treat the CMS as you would any other production application, according to its priority in your organization.


 


When you're on fire, what do you do?  Let us know with a reply.  We'd always like to hear if you found this post useful, offensive, or just amusing for a few minutes.  Thanks.


When Thinking CMS remember “Location, Location, Location”

The other day I presented to a customer that had purchased HP Discovery and Dependency Mapping software.  This customer was interested in understanding HP's direction relative to the concept of a Configuration Management System (CMS).  My discussion with this customer focused on how HP was addressing the data needs of IT operational domains ranging from application performance and availability management, to configuration and change management, to IT process automation for server and network elements.  From a product perspective, HP's focus in this area has been, and remains, to provide a platform that delivers configuration item data, service context, and federated data related to those two items to consuming solutions across IT.


Our conversation eventually, and rather inevitably, turned to the best strategy for achieving such a grand vision.  The answer is surprisingly simple at one level, yet remarkably difficult to do in practice.  Like the old real estate adage "location, location, location", the answer to building a comprehensive CMS that works as promised and stands the test of time is a laser focus on use cases, use cases, use cases.


I’ll return to this idea after a brief detour to look at the origin of today’s CMS Initiatives and how many of those early ancestors went wrong.


"Origins of Today's CMS Initiatives"


Modern CMS initiatives have two main lineages.  The first and best known are CMDB efforts that were launched in the wake of ITIL v2.  Many if not most of these early efforts failed (or at least fell far short of expectations).  The primary reason was a lack of a crisp focus on what problems were going to be solved and in what order.  Companies sought to create a “master” database with all configurations representing all infrastructures across the whole of the enterprise.  While the CMDB technologies used in these early efforts were immature and had some  technical limitations, most of these efforts didn’t fail because of technology.  They failed due to a lack of clarity around what the end game was.


The second major ancestor of today's CMS efforts is dependency mapping.  Many of the early adopters of dependency mapping embraced this technology for reasons having little to do with how these capabilities are primarily used today: to support ongoing IT operations.  Instead, most of the early adopters were interested in dependency mapping as a means of supporting some form of application infrastructure reengineering.


Why?  During periods of rapid business expansion, the IT infrastructure at many companies had grown substantially, and no one had a handle on what existed and how it worked together to deliver IT services.  As a result, many companies found themselves unable to effectively take on new IT initiatives focused on reducing the infrastructure footprint, reining in runaway server and network administration costs, or taking advantage of new virtualization capabilities.  These organizations lacked a clear understanding of their starting point, so many embraced dependency mapping as a means of generating that understanding.


For these companies, using this information for ongoing management - to support application performance and availability, change management, or IT process automation - was not the focus.  As a result, little emphasis was placed on the consuming IT processes and the integrations with the applications that support them.  Like the early failed CMDB efforts, many companies stumbled when they first tried to apply dependency mapping to the needs of ongoing IT operations.  And as with the early CMDB efforts, the reason these initiatives failed (or at least did not deliver as much value as expected) was that they lacked focus.


When first employing dependency mapping, many companies would attempt to discover everything before having clear use cases for what data was needed to support which IT processes.  Since there were no clear consumers for the data, many of these efforts either lacked or failed to sustain sponsorship, and consequently withered on the vine.  In my next post I'll look at how these two independent branches have come together as the foundation of the current crop of enterprise CMS initiatives, and how these initiatives face the same challenge that plagued their technology antecedents.

Wild, Angry Bees and Why Your ITSM Vendor Needs Them

For some years now, I've been building up a resistance to kool-aid.  How?  Personally, I'm a pretty calm balanced person.  But as a software professional, I'm a wild, angry beekeeper.  I'll explain.


 


Angry is good, sometimes - a wise man once said it is impossible to truly know the capabilities of a product unless it is used in anger.  This spoke volumes to me when I read it.  It explained so many problems to me, especially problems associated with product quality: UI design and friendliness, the OOBE (out-of-box experience) KPI, TTV (Time-To-Value), and why it is sometimes so hard to get a non-trivial problem reproduced and fixed.


 


Wild vs. Tame - As a software professional employed by a vendor, one tends to become familiar and even attached in a geeky sort of way to the products with which one works.    From your product's perspective, you are "tame" to it, as opposed to a user, who is "wild" to it.  An example of a "wild" user is someone who has been given a tool without direct choice, someone who hasn't been "sold" on the product.  Someone who can draw objective, unbiased  conclusions about the product's suitability.


 


"Wild" users seem to frighten software vendors.  Wild angry users are more likely to seek out and verbalize defects, and are resistant to lame workarounds (we all know the smell of a lame workaound that's a poor fit, or impractical to implement.  Especially when an obviously lame feature is "working as designed".  Yes, but it was poorly designed.  WAD is a loophole designed to take the vendor off the hook .


 


And that is a very well-designed and functional loophole - don't you fall into it.  My advice for wild, angry users:



  • Get on the vendor's advisory board.

  • Be active on forums and communities where these ideas are discussed.

  • Don't give up.  Persistence will get you very far.

  • Escalate when the "working as designed" isn't.

  • DOCUMENT your problem, don't just complain about it.

  • Have a positive attitude when you're working with Support, especially if you are a Wild user.  Even if the product clearly is broken and clearly needs fixing, don't just angrily tell the tech support person "Your product is broken, fix it."  You'll get a lot more help, more willingly, if you are forthcoming with details and are just a little bit nice.


 


Here's my hypothesis: if you are sufficiently "tame", workarounds become indistinguishable from features.  But if you are wild, workarounds anger you.  Every time you perform a workaround, you waste a little of your time and your company's money trying to make a product do something it should do (your justification for spending time on it) but doesn't (because the problem wasn't anticipated, or wasn't important enough to fix in the version you're using).


 


The lesson and challenge for software organizations is to use wild QA and product management and (were it possible) wild marketing.  It will HELP you find your own problems before the paying wild customers do.  Expensive but worth it - tell me if you agree: would you pay more for a product that wild users had QA'd first?  The famous physicist Richard Feynman said it best: you first have to work at not fooling yourself - then it is easy to not fool others.  If you've never read his cargo cult science speech, it is illuminating and priceless if you care about research or technology integrity.


 


Bees - We tend to want to go after the major themes and features going into the next release, the "big game": the relatively slow, big targets that are only dangerous if you get too close.


 


But while you're waiting for the big game to come along, you're constantly attacked by diminutive "almost-bugs" - bees and mosquitoes - all the small features, or the lack thereof, which add up to either a very positive or very negative feeling about the product's usability.  It is these that will kill a product faster than starvation if you don't deal with them promptly.


 


Tame users are used to the bee stings.  But wild users are not.  Bees can be big mistakes in the eyes of wild users - anger-generating mistakes, anger not often understood by the vendor.  "Can't you just…" is the wrong attitude no matter what words follow.  It's the death of a thousand bee stings to a vendor: high TCO, unfriendly features.  It's not being run over by the elephant - you can adjust your road map, invest in the new feature, do it right, and recover or advance.  But it's very difficult to recover from a thousand angry bees: wild users who get stung a lot tend to sabotage your product if they can, in retaliation for being forced to use what they perceive as a poor choice of product.


 


This post is really an essay on software engineering.  The tie-in to Configuration Management is this: I want this short post to be a mental runway from which you can fly to your own conclusions about your choice of vendor for your ITSM solutions.  For some really good essays on software engineering, check out Mr. Fred Brooks and his timeless anchor-to-reality classic, The Mythical Man-Month.  Mr. Brooks is the father of much of what we know as commercial software today.


 


Maybe you wild and angry beekeepers out there will understand a little better the tremendous opposing forces facing software vendors.  Maybe you competitors will realize how one falls so far behind the GOOD software companies that understand my wild-and-angry allegory.  And maybe, just maybe, you can help me keep a burr under our own R&D's saddle - they ride fastest that way.  With love to all my R&D friends, of course!  They really do a tremendous job given their constraints.


 


I'm very interested in what you have to say about my Wild Angry Bees.  Please comment and let us know.  Thanks!


 

The complex world of Software Inventory

In my previous blog, I promoted the concept of ISO 19770-2 tags.  But, I did not get deep into the reasons why I think they are so important.  Let me fill in some of the blanks.


In my many conversations with IT professionals, I have noticed that outside of Software Asset Managers, few people understand why Software Asset Management (SAM) is so difficult.  I am not surprised, and the reason is rather obvious - we each know what software is on our own machines, so by extension we assume IT should be able to find out what is installed on all IT-managed machines.


Here is why this is not so.


1.  There are no universal standards enabling reliable, complete discovery of software.  Not all applications report themselves to the OS - even on Windows.  The file header information, the registry, WMI, and Add/Remove Programs data in Windows are not consistent or reliable, although they are still miles ahead of Linux and UNIX.  I do have to give Microsoft some kudos for having the most effective standards.


2.  There is no universal, standard way to install applications.  There are many installers, and there is no universal way to extract information from an installer - again, the situation is better on Windows than on other OSs.


3.  There is no single approach that can discover all software.  Some applications can be identified using file-based recognition, others require scripts, etc.


I have seen various attempts at solving this challenge.  Just talk to different asset management vendors.  You will hear about thousands of recognition entries (or signatures, footprints, etc.).  You will hear about pulling data from OS sources and about custom modules for identifying individual applications, but I bet that not one company can say it discovers all applications (unless they mean providing a list of all files on each file system, but that is not exactly what we are interested in, is it?).
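To show why recognition entries are such laborious things, here is a toy sketch of file-based recognition: matching the files found on a machine against known (name, size) fingerprints.  The two entries and product names are invented; real libraries hold many thousands of entries and use richer evidence (version resources, checksums, and so on).

```python
# Toy file-based software recognition against a fingerprint library.
recognition_entries = [
    {"title": "ExampleDB Server 11", "publisher": "ExampleSoft",
     "files": {("exdb.exe", 4210688), ("exdbcfg.dll", 102400)}},
    {"title": "WidgetOffice 2009", "publisher": "WidgetCorp",
     "files": {("woffice.exe", 8912896)}},
]

def recognize(scanned_files):
    """Return titles whose known (name, size) fingerprints all appear in the scan."""
    scanned = set(scanned_files)
    return [entry["title"] for entry in recognition_entries
            if entry["files"] <= scanned]

scan = [("exdb.exe", 4210688), ("exdbcfg.dll", 102400), ("notes.txt", 812)]
print(recognize(scan))  # ['ExampleDB Server 11']
```

Now imagine curating fingerprints like these for every version of every title ever shipped, and you see why I'd rather the software simply declared itself.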


And here is another thing - none of us wants to spend time or money in the trenches.  We want Software Asset Management; we want it now and at minimal cost.  But how do you manage your assets without proper discovery?  It's like trying to drive a car with no wheels - it may feel great to sit in it, but it won't get you far.  Software discovery, or inventory, is the foundation - without it, you cannot do SAM.  And if we don't want to invest in it, that means we must find a common way of collecting the information.  It has to be OS independent and vendor independent.  It also has to be quick and easy to do, because everyone - customer and vendor alike - is watching expenses these days.


And that, my friends, is why I am so passionate about promoting the ISO 19770-2 standard.  It is vendor and OS independent.  It is quick and easy to adopt (relative terms, of course!).  There is even an organization that can help create and sign these tags - TagVault.org.  It is a standard that can be universally adopted.  And it is time we had an adopted standard.  Trust me, I would much rather think about how to create an innovative user experience, or look for ways to adopt some new whiz-bang technology, than spend my days creating file-based software recognition entries.


I recall a conversation I had with one of my customers about SAM.  This particular gentleman is a manager of a large IT shop that has in-sourced its asset management.  His customers don't understand how difficult it is to collect software inventory information.  He knows there is no magic bullet to solve the problem.  But until this standard, he did not see much hope.  He thought the only way to get software vendors to provide a way to track their software was through the courts.  I am not sure if you have noticed, but many software vendors are now investing resources in license compliance audits.  The reason is simple - they are not selling as much as they did before the recession (everyone is tightening budgets, and software expenditures are finally being scrutinized).  So how do you make up a revenue shortfall?  One word: audits.


His wish may yet come true - I think that if ISO 19770-2 gets adopted, it will force all vendors into compliance.  The legal system that today stands fully behind license agreements may suddenly wake up to the fact that in some cases software identification is incredibly difficult, almost as if the vendors were purposely making it so.  I am not saying that is the case by any means, but our legal system may decide that it is unfair for a particular vendor to shun a common standard and thereby put undue pressure on the customer to track their software installations.  And I have yet to meet a customer who is not bewildered by the challenges that software discovery/inventory presents in their daily lives.  It's a real-life Sisyphean task (even though they cannot tell me what they are being punished for!).


But anyway, let me get off my soapbox - I am getting long-winded (and I know those who know me aren't surprised).


But ISO 19770-2 is only part of the Software Asset Management challenge - it's a start.  Then we will need to get behind ISO 19770-3.  But that is another topic, for another time.  Hope you enjoyed this post - I promise (threaten?) to write more.


 

Avoid a Vendor-Customer Disconnect with CMS and CMDB

So after all the soul-searching and researching, you've decided whether or not you need a CMS.  Or have you?  Have you sorted out the sordid relationship between CMDB and CMS?  The CMDB was ITIL v2, but it's still around inside v3's CMS - what's up with that?  What's a CMS again?  You can't buy one.  You have some CMDBs around - why won't just one do?  What's different?  What's a/the CMDB supposed to do?  Why doesn't the vendor just tell us what to do?


 


Does realizing you need something mean you've decided exactly what it should do, what you expect out of it?  One would hope so, but it's of course not that simple.


 


As a customer, the answer is complex: fulfill and support my use case(s).  Build my processes according to the needs of my business family.  And stay within the capabilities of technology I can afford.  ITIL doesn't help much with the "T" part.  You must build or buy or fake something.  And you can't believe everything you read.


 


But as a vendor, getting it right is even dodgier - you have to decide what the technology should be, based on your knowledge, your vision of the future, and the needs of your market.  Not enough, and you lose to competitors.  Too much, and you have quality, usability, and other problems (I'm saving that for another post).  You'd better trust whoever you go with into this very uncommoditized world.


 


First and foremost, by definition, the CMS and its CMDB(s) are there to manage configuration information.  Everything else in the stack serves this purpose.  However, this isn't a very useful definition.  What does it take to "manage" configuration data?  It's not even as easy as defining it in terms of a lifecycle, or discovery, or a database, or APIs.  What's required is to understand how configuration management interacts with all the providers and consumers of CIs.


 


A CMS should understand dependencies and relationships as they exist in reality, not just contain a model created by a human - humans miss lots of things.  Tell me your all-knowing person keeps track of every .NET connection that lasts three milliseconds, yet during those milliseconds exchanges mission-critical data.


 


Dependency discovery and mapping is not just for impact analysis.  There are compliance and audit, DR planning, data center transformations, and other use cases which absolutely require visibility into the dependencies and relationships of applications and infrastructure.  A CMDB must make it easy to build and use dynamic topology visualizations (service models, application maps) for a variety of use cases, not just the ones we can envision.


 


A CMS must support states of CIs, like "actual" state and "authorized" state (to compare, control, audit, report, etc.).  There are reasons to make a CMS support an arbitrary number of user-definable states, so customers can tightly couple their lifecycle stages together.  The importance of this is described in the ITIL books, and I will expand on it in a later post.
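As a minimal sketch of why at least two states matter, the snippet below diffs a CI's discovered "actual" state against its "authorized" state to flag drift.  The attributes and values are invented for illustration.

```python
# Diff a CI's discovered "actual" state against its "authorized" state.
authorized = {"os_version": "RHEL 5.4", "memory_gb": 16, "app_port": 8080}
actual     = {"os_version": "RHEL 5.5", "memory_gb": 16, "app_port": 8081}

def drift(authorized_state, actual_state):
    """Return attributes whose actual value differs from the authorized one."""
    return {attr: (authorized_state[attr], actual_state.get(attr))
            for attr in authorized_state
            if actual_state.get(attr) != authorized_state[attr]}

for attr, (want, got) in drift(authorized, actual).items():
    print(f"DRIFT {attr}: authorized={want!r} actual={got!r}")
```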


 


To support a CMS, a CMDB must be a powerful data integration platform, complete with:



  • An ontology (class model), and the ability to manipulate and extend it easily.

  • Openness and interoperability (fully-formed discovery, federation, and APIs).  It's also nice to actually document these things so users can use them, and even better, to have a good UI that does the hard parts, like keeping names and properties right.  You want to annoy users?  Make 'em configure the entire product and build all their integrations with text files.

  • Capabilities to easily establish common identity and reconcile distributed data (see the sketch after this list).  This is colloquially called a "reconciliation engine", but I'm uncomfortable with the term; it conveys the wrong focus on what should really be going on with multi-source identity.  An "engine" is something under the hood - give it gas and it gives you power.  No, no, no.  Reconciliation is powered by business logic, not a commodity-like fuel, and you can't buy off the investment required with descriptions like an "engine", even the software kind.

  • Easy-to-use extensibility - you want to work with standard "pipe-fittings", not a welder-and-torch approach to customization.  Beware vendors whose only approach to integration flexibility is writing your own code.  It has to be genuinely easy to build and extend any kind of integration and discovery content you need.
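Here is the promised sketch: a toy illustration of common-identity reconciliation that merges server records from two sources into one CI when identifying attributes match.  Field names, precedence rules, and data are all invented; the point is that this is business logic, not an anonymous "engine".

```python
# Toy common-identity reconciliation across two record sources.
def identity_key(record):
    """Prefer a hard identifier (serial); fall back to a normalized hostname."""
    if record.get("serial"):
        return ("serial", record["serial"].upper())
    return ("host", record["hostname"].split(".")[0].lower())

discovery = [{"hostname": "web01.corp.example.com", "serial": "abc123", "os": "Linux"}]
asset_db  = [{"hostname": "WEB01", "serial": "ABC123", "owner": "e-commerce"}]

merged = {}
for source in (discovery, asset_db):
    for record in source:
        ci = merged.setdefault(identity_key(record), {})
        for attr, value in record.items():
            # First source to supply an attribute wins here; a real
            # implementation needs per-attribute precedence rules.
            ci.setdefault(attr, value)

print(merged)  # one CI carrying both "os" and "owner", not two duplicates
```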


 


Any CMDB and CMS has to be secure in all ways at all times; it has to perform and scale, and otherwise be Enterprise-ready.  You'll rarely get in the door without this.


 


A CMDB has to be deployable and supportable with practical TCO.  It can't take a room full of people to manage.  A lot of software is great at generating more work than the work it saves you from doing.


 


If you were expecting deep insight into vendor-customer relationship science, sorry to disappoint.  I believe that good technology implementing the right understanding of customers is what succeeds.  (Usually - the bookstores are full of examples that prove me wrong, where inferior technology won out because the salesperson's technique was better, or more money was spent on marketing than the competition spent.  These are exceptions rather than rules, and that's why they make it into books; when a superior product wins, it's unremarkable except to the vendor, who is understandably happy.)  Agreeing on what the technology is and does is a necessary step to avoid a disconnect between IT organizations and software vendors.


 


The "why" of all this covers lots of ground, on which I hope you all have opinions you'd like to share.  What else do you think a CMDB has to do and why?  Feel free to post a comment and let us know.  Thanks!


 

Keeping it real - Some blogs don't help!

I am an avid reader of blogs that criticize ITSM as an approach to managing IT. They fall into three main categories.


First are the blogs that criticize ITSM based on some theoretical point of contention.  These arguments are based on the blogger’s understanding of a term, and the fact that ITIL uses it differently.  These are the subject of heated online debates, and make for some amusing reading, but rarely have any real practical value.  Further examination often reveals that the original blogger has their own publication or framework and is using ITIL to establish their credentials.


Second are the blogs that see ITSM as a passing phase that will be replaced by some new technology-based approach, like Cloud.  “ITIL is too old-fashioned for these new approaches and they solve all the problems that ITSM was trying to solve – just quicker and better”, the argument goes.  Really?  So which parts of ITSM are no longer going to be relevant in the emerging technologies?  Will incidents no longer occur?  We won’t need to manage changes?  Capacity becomes limitless?  Service Levels are automatically discerned and delivered?  I don’t think so!  Here’s what I believe:  Innovation and new approaches bring a huge amount of value, but they also bring a number of new management challenges.  We shouldn’t throw out everything we’ve learned from the past every time something new comes along.  ITSM will have to evolve to deal with the challenges of every innovation, but that doesn’t mean that ITSM is not a valid approach.


Third are the blogs that focus on actual failures and successes of organizations who have used ITSM / ITIL.  These are the most important and relevant blogs to me, because this is where we can learn about how to make ITSM work.  It is only in the real world that we can make something actually work, and where we can learn from failure.  So what have these experiences taught me?


  • Best practice is not a framework.  It's not necessary to implement the whole of ITSM in order to get value from it.  A successful project should be measured by whether IT gets better at enabling the business to meet its objectives, not by whether the whole of ITIL has been implemented.  Some of the most successful ITSM projects focused on only one or two key processes or services.  Best practice is a set of guidelines based on previous experience.  The fact that it has been documented in ITIL doesn't mean it's compulsory - it just means it has been made accessible.  Each organization can choose what applies to it and how to implement it.

  • Don't take everything in ITIL literally.  Many projects have been derailed by arguments about the "correct" interpretation of something in ITIL.  In many cases somebody has taken an example or a guideline as an absolute rule, and then there is no way to deal with variations or exceptions.  In these cases, it is important to keep the overall objective of the project in mind and figure out what will work for that particular organization.  Often it can be helpful to ask other organizations how they dealt with the issue - the IT Service Management Forum (itSMF) can be very helpful here.

  • The world of service and the world of technology are not two worlds.  They're both part of the same world.  It's not possible to provide services without infrastructure and applications.  Many ITSM activities are performed by technical groups.  Successful ITSM projects focus on both the business and the technology, involve both groups in every phase of the project - and many of the project's deliverables should be owned and delivered by the technical groups.

  • Governance is key.  A successful ITSM project will change the way the organization works.  Having a good project plan and executive sponsorship is necessary, but not enough.  Good project governance will ensure that the changes needed in decision-making, reporting, management behavior, and execution are properly communicated and cascaded to the appropriate levels of the organization.  Too many projects have failed because the team went off on its own and built a great solution for one part of the organization, but never integrated or coordinated it with the other key players.


Watch this space for some practical advice on how to get ITSM to work for you no matter who you are!

Can Software Asset Management Become Easier?

It is now 2010 and computers are everywhere... so why is it so hard to track license compliance?  After all, we can all see the applications in Add/Remove Programs…


I have been managing HP DDMI (Discovery and Dependency Mapping Inventory - our asset and inventory discovery software) for a couple of years now.  Before I took on managing this product, I knew it had hardware and software inventory capabilities, and I was impressed with its software recognition.  Then, as the world entered the global recession at the end of 2008, I started hearing a lot of complaints about gaps in DDMI's software inventory.  I was a little surprised… I mean, I knew we had some limitations, but I thought most of them were because we were not exposing all of the results we were capturing, and that we could improve the level of automation.


But as it turns out (hindsight being 20/20), the issue is much bigger than I thought.  Is DDMI behind the competition?  Are we in danger of becoming irrelevant in the marketplace?  The answers I found comforted and shocked me at the same time!


First of all, I began to realize how incredibly complex the world of Software Asset Management really is.  Having gained CSAM certification from IAITAM, I validated that realization. I also learned about the many daily challenges of an IT Asset Management professional.  I realized there is a big difference between reporting what is installed and being able to track licenses.  There are also differences between tracking desktop software and server software, Windows software and Linux/UNIX software.


My conclusion?  There is no way to automatically track license compliance across the board today.  You may be able to do it for specific titles, or perhaps specific vendors.  But there is no way to do it across the board!


Is there hope for the future?  Yes!  It is a faint hope, but there is a light at the end of the tunnel (hopefully sunlight and not a train light!).  We now have the first global standard that promises to improve the situation: ISO 19770.  ISO 19770-1 describes best practices for performing effective asset management.  ISO 19770-2 describes a standard software identification tag that identifies installed software.  That means you will be able to read the tag information rather than rely on software recognition or other complex, potentially inaccurate and incomplete methods of identifying software.  Then, if and when ISO 19770-3 is approved, you will be able to use the same method to collect license entitlement information.
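To give a feel for why tags change the game, here is a sketch that reads a software identification tag.  The XML below is a simplified, hypothetical rendering of an ISO 19770-2 tag - real tags use the standard's namespace and carry more elements - but the point stands: identification becomes reading a declared file instead of guessing from file fingerprints.

```python
# Read a (simplified, hypothetical) software identification tag.
import xml.etree.ElementTree as ET

tag_xml = """
<software_identification_tag>
  <product_title>ExampleDB Server</product_title>
  <product_version>11.2</product_version>
  <software_creator>ExampleSoft, Inc.</software_creator>
  <unique_id>exdb-server-11.2</unique_id>
</software_identification_tag>
"""

root = ET.fromstring(tag_xml)
info = {child.tag: child.text for child in root}
print(f"{info['product_title']} {info['product_version']} "
      f"by {info['software_creator']} (id: {info['unique_id']})")
```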


Yes, it will take time for vendors to adopt these standards.  This is where each of you comes in - vendors listen to their customers.  So here is my call to action to all of you: start asking for ISO 19770-2 compliance on every RFI and RFP from today on!  It doesn't matter what the software is - if you buy it, you have to track it, so ISO 19770-2 compliance should be mandatory for all vendors.


Then, once you get the ball rolling, it will be easier to require ISO 19770-3 compliance.  And that will provide you with the license entitlement information – making software license compliance easier.


And don’t worry – you will not put me out of work and you will not lose your jobs either.  As much as I would like to be an optimist, I don’t think for a second that every vendor will fully or correctly implement these standards.  But if we can only solve 80% of the problem, or even 50% of the problem - that will help you deal with the other issues.


What issues?  There will be lots – have you looked at the licensing terms lately?


Stay tuned....more to come... a topic for another one of my posts...COMING SOON!
