IT Service Management Blog
Follow information regarding IT Service Management via this blog.

New HP Data Center Transformation Solution is now available!

How do you consolidate data centers, move data centers, transform your data center infrastructure or employ the most current data center technologies with the least amount of risk?  HP software and services are proud to introduce a new data center transformation solution that will help you answer these questions. 

Even if you're going off the rails, you're still a Train: Leverage your Transformation's Momentum.

A data center transformation is essentially a train wreck in which the train runs full speed into a transparent wall of time, pieces fly in all directions, and everyone on the Transformation team picks the pieces out of the air, starting with those that will hit the ground soonest, takes them back to the track, reassembles them into the running train, and, once it sets off down the track, wonders whether any stray pieces hit the ground and hopes the occupants didn't notice.  Setting aside the small miracle that it is actually possible to transform a live data center at all, is it possible to do it with little or no bloodshed to the business?  Do you want to find out?

It's Troux, CMDB and Enterprise Architecture Play Nice! Part 2, EA as a CMS Consumer

ITSM, CMDB, PPM, EA, CMS.  Not just SEO, but ITIL!  DCT!  Clooouuuuud……..

What does it all mean for the Enterprise Architect who spends a third of their time  gathering business context data?  A provider of even a small fraction of that amount would be, like, some kind of EA whisperer.  Really, what does business context mean to an enterprise architect?   What would be the value of CMS data to an EA?  What?

It's Troux, CMDB and Enterprise Architecture Play Nice! Part 1, EA as a CMS Provider

I see all kinds of IT environments, ranging from normal, to dungeonesque (full of tortured but alive people), to Taj-Mahalesque (pretty on the outside but full of death on the inside), to, let's just say, the lights are on but nobody's home.  And every now and then, you find a data center that knows what Enterprise Architecture is.  It's Troux.  Which one are you?

Reduce the complexity of change through visibility - See HP DDMA and UCMDB in action

Check out the new demo of HP DDMA. Through this scenario demonstration of HP Discovery and Dependency Mapping (DDMA), hear and see how you can:


  • discover new applications and their dependencies on other applications and the underlying IT infrastructure
  • define service models within the UCMDB  
  • perform impact analysis to better understand the potential impact of changes 
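To make the impact analysis idea concrete, here is a minimal sketch of the kind of traversal a dependency map enables. The CI names and the toy graph below are invented for illustration; DDMA and the UCMDB model real topologies far more richly than this.

```python
from collections import deque

# Toy dependency map: each CI maps to the CIs that depend on it.
# Names are hypothetical examples, not real discovered CIs.
dependents = {
    "db-server-01": ["order-app"],
    "order-app": ["web-portal", "reporting"],
    "web-portal": [],
    "reporting": [],
}

def impact_of_change(ci, graph):
    """Return every CI transitively affected by a change to `ci`."""
    affected, queue = set(), deque([ci])
    while queue:
        current = queue.popleft()
        for dep in graph.get(current, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# A change to the database server ripples up to everything built on it.
print(impact_of_change("db-server-01", dependents))
```

The point of discovery is that this graph is kept current automatically, rather than reconstructed from tribal knowledge every time a change is proposed.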

Improve IT visibility - say goodbye to change failures

A new white paper explores the business-critical IT functions that require detailed, up-to-date resource and dependency maps to be performed successfully. It describes the HP approach to automated discovery and dependency mapping, presents an overview of the complete IT change management solution, describes HP Discovery and Dependency Mapping software (DDMA), and explains how its features and functionality can improve visibility into the data center and contribute to the success of key IT and business initiatives.

Avoid CMDB, Discovery, and CMS Provider Conflict. Just Do It.

Are you suffering from provider conflict?  Is it painful for your IT professionals to resolve ownership issues?  Do all your Systems Architects dump the painful bits of that process onto Operations?  Does your vendor encourage this?  Most importantly, can you trust the data in your CMS right this minute?  I thought so.

Using CMS and Discovery in Data Center Transformations: Weird but True Stories

Stay with me for a minute.  I'm not talking about transforming your CMDB/CMS/Discovery/ITSM, I'm talking about USING a CMS or something like it to actually be The Tool for Data Center Transformations.  No way Jody!  What was all that talk last month about using the right tool for the right job?  Stay with me for a minute.

What Should Be a CI?

Life is tough. Deciding how to populate your CMS is tougher. As semantic and political battles rage among vendors, analysts, and the media, customers and practitioners are caught in the middle. Here is a bomb shelter that is small enough to carry on you but strong enough to protect you from the red glare buzzing overhead. It's constructed of high-fiber perspective.

CI = Asset?

In my presentation on the CMS value chain at Software Universe, I tried to demystify some of the confusion around the CMS, and the CMDB in the CMS (yes, you still need one in ITIL V3). For example, how does the CMS relate to an "Asset"? As it turns out, there are multiple definitions and positionings. It's sometimes unpleasant.

No, the CMDB is NOT Irrelevant in a Virtual and Cloud Based World

Olivier Pignault reacts to a blog entry posted by Bernd Harzog where it is claimed that today’s CMDB products will not be able to evolve to handle the Virtual and Cloud Based World.

Introducing HP Configuration Management System 9.0

Announcing the availability of new leading UCMDB and DDM products! Find out what's so special.

When is IT Service Management a Life and Death Issue?

Vancouver Coastal Health (VCH), one of British Columbia’s six regional health authorities, implemented HP Service Manager software and ITIL processes to support clinical and IT systems across 14 hospitals and 90 community care sites. As a result, VCH has accelerated time-to-resolution from days to minutes, shedding $4,000 per month in costs. More importantly, VCH care providers now enjoy highly available, well-performing clinical systems and are equipped to provide high standards of patient care. To learn how VCH achieved these results, check out the Vancouver Coastal Health ITSM case study in the attached document below.


HP Service Manager customer becomes more agile with consistent change management - learn more about this upcoming webcast

CMT becomes more agile with consistent change management

Wednesday, May 19, 2010 at 9:00am PT / 12:00pm ET


CMT faced the twin pressures of speed and risk - to be more responsive to business needs and more consistent on risk and impact analysis.  Register now to hear Tuilo Quinones, Manager of Enterprise Service Management, share details on their approach to creating repeatable and consistent change management, including building efficiency, leveraging the service catalog, managing non-standard changes, and simplifying the change workflow process.

Are You Dancing or Wrestling With Your ITSM Applications?

Did you ever notice how working with some software is like dancing, and other software is like wrestling?


How do you engage your CMDB, your Service Management applications? Do you dance or wrestle with them?


Do they anticipate your every need? Do they try to lead you to the next step, even if you're not sure? Or, must you force it onto the ground and win or lose based on strength vs. mass?


Must you use your CMDB in anger or is it inexpensive and easy to go query or integrate something? Does your CMDB keep your data "in jail"? Is everything always customization, or in the "next" release? These things add up!


It almost seems intentional sometimes. You can imagine a sumo developer, sitting up in his ivory tower, pondering his product's next feature. He's very good at building traps for unsuspecting or insufficiently expert users who don't measure up to his expectations of a user. He knows enough about the real world to be dangerous.


Smiling, he envisions the user face-down on the mat, begging for mercy. Yes, a newbie trap would be very good here, he reasons. Reaching for his keyboard, mentally he begins to wrestle with the user. The poor guy trying to implement it doesn't stand a chance:


[Images: sumo_mismatch.jpg, sumo.jpg]


Now of course we have to assume that our developer is very busy, that he can't overcome all the obstacles, that he's limited in his choices, that his budget is already set, and that he can't really listen to the users just now. And that bothersome product manager is a noob himself, so he's no help. Everything would just be OK if everybody did what I said, he thinks. That's it. My software will impose my will over the users. Make 'em do what I think they need to be doing. Phooey on their use cases. Mine are better.


Now this is admittedly a cynical view. But tell me that none of you have ever used software that made you wonder if something like this wasn't just a tiiiiny bit true.  Tell me I'm wrong.


What if we were able to look at our Sumo developer's bag of tricks?    How would he make his product wrestle with his users? What would he do on the odd day he feels merciful and lets the user get something done easily and efficiently?  Even a developer has to give his people and his community a little respect.


Let's do some speculation.  C'mon, conjecture is fun! Besides, it's just a fancier word for "guessing." Which is, in reality, why a lot of software seems to be made to wrestle: the developer isn't evil, they just don't know. But let's not let that spoil our little developmental circus. On to the mind of our jovial but evil Sumo programmer:




  • Dancing: UIs for all configuration and deployment.
    Wrestling: Text configuration everywhere, plus some XML editing, all in different formats that change with every version. Configuration UI is a low priority, and IDEs don't arrive for at least two or three releases.

  • Dancing: Low TCO; easy to manage.
    Wrestling: The cost of integration is greater than the cost of the products.

  • Dancing: Positive OOBE (out-of-box experience).
    Wrestling: Requires a consultant to take it out of the box.

  • Dancing: Users feel like the interface designer understood and faced the same needs, efficiencies, and structures as they do.
    Wrestling: Users wonder if the interface designer has ever talked to a customer.

  • Dancing: You have to do so few clicks to do something that the designer must have known this would be a frequently-used path and planned it that way.
    Wrestling: You have to do so many clicks to do something that you realize the designer never imagined this function would actually be used that way.

  • Dancing: The user doesn't have to remember anything between screens.
    Wrestling: The user is required to remember complex names or strings between screens.

  • Dancing: Paths to do things are well-lit and clearly defined.
    Wrestling: The path is not intuitive or is overly circuitous. Sometimes there just is no path.

  • Dancing: Data entry error handling preserves all preservable user work.
    Wrestling: Any data entry error erases properly-entered data and forces you to re-enter or start over.

  • Dancing: The UI performs well all around.
    Wrestling: The more important the function, the worse it performs.

  • Dancing: Server and UI are stable and do not crash except under extraordinary circumstances.
    Wrestling: You have to do a lot of work, and do things exactly right, to minimize the number of crashes.

  • Dancing: Does not make you change your processes or organization to fit technology specifics.
    Wrestling: Requires obtrusive engineering, or processes all its own, to be implemented in the organization.

  • Dancing: Natively understands ITSM processes and terms.
    Wrestling: The product feels like a generic platform fitted with ITSM restraints.

  • Dancing: Security is ubiquitous and, mostly, transparent.
    Wrestling: Secure usage is painful because the Sumo developer wants to develop features, not security. Security takes next-to-last place in priority, ahead only of documentation.

  • Dancing: Documentation was a priority. The developers wrote the documentation as part of their responsibility, and it's not painful to read.
    Wrestling: Documentation was an afterthought. The developers were forced to "write" documentation or face consequences, so they jotted down a few bullets and handed them to the writers. The resulting lack of quality is painful and obvious.

  • Dancing: The UI is consistent. Menus, dialogue boxes, presentation, lists, right-clicks, etc. all seem to be part of the same product.
    Wrestling: The evil developer bakes UI components based on his mood. Things are situated every which way to keep users on their toes. For fun, he switches around the "yes/no" choices and uses double-negative choices ("Yes, I don't want to undo my 'yes' answer" - huh?).

  • Dancing: Diagnostics are easy. Logs and monitoring are easy to plug in and use.
    Wrestling: You'd better hope nothing goes wrong.

  • Dancing: If you do something bad, the product gracefully tries to tell you and help you.
    Wrestling: It's easy for the user to crash the product or corrupt data.

  • Dancing: The product doesn't allow a basic user to do any damage.
    Wrestling: Even a basic user can create big problems for other users.

  • Dancing: Users smile when using the product.
    Wrestling: Users' blood pressure rises when using the product.

  • Dancing: The product has expertise scalability - it's easy for both beginner and advanced users to use.
    Wrestling: New users cannot use the product. Only experienced users can use it. There are no new users of the product.

  • Dancing: The UI was designed according to established standards.
    Wrestling: The UI was developed according to the developer's whimsy.

  • Dancing: Can install and manage its own database, but allows external configuration as well.
    Wrestling: Has either full control or no control over database installation and configuration.


I'm sure all of you have encountered at least a few of these wrestling matches. But when you find a product you can dance with, it's beautiful. My question to you is, have any of you found a dancing CMDB? A dancing Service Management tool? The Grace Kelly of Asset Managers?


Our developers aren't evil. But I'm sure they occasionally entertain naughty thoughts. They're quite talented and for the most part, they're great dancers. If you're tired of wrestling with your ITSM solutions, drop in or give us a call. We'll dance with you.



Process Governance: The Dark Side of CMS

Ditch the suit, get your boots on, and bring a good flashlight!   Today we're journeying into the center of the ITSM Universe, Configuration Management.  What will we find?


Planning or designing a CMS or CMDB?  Beware deceptively well-lit, short, simple paths to success.  They're illusions.  The actual path to substantial ITSM ROI involves lots of crawling around in tight spaces and heavy organizational and technical lifting.  And there IS a wrong and a right way to do it.  There are all kinds of ways to fail, but only a few ways to succeed.  I'm thinking not "tour guide" but "expedition leader". ITSM is more Lechuguilla than Carlsbad.


Like caving, CMS solution architecture can be, well, dark.  For example:  "should the _____ process interact with the _____ process through ____ or _____?"  Plug in your question of the day.  There's a million of 'em: Change Management,  Incident Management, the CMDB, directly.


Adding process and automation to an IT organization can be as challenging as adding them to any other part of a business.  Possibly more so, because that sort of thing IS our business and we don't like our business getting messed with.  But I've been covering that in my last three posts.  Today's hypothesis:


After a dark and strenuous trip, you arrive to find that ITIL is hollow in the center.  The missing part is the data: how to handle the configuration data itself, and all the process and governance around the data and the information about the data - for example, how accurate or timely it is.  You have to think about what to fill this gap with, and about the operational ebb and flow of the data, from the beginning.  Ultimately, it's as much about the data as the process.  But ITIL is, for the most part, dark on this, and it's also the least-filled-in part all around.  The documentation, the stakeholders, the technicians, and the consultants don't focus on it.  The process and governance work is left to evolve organically, or for you, the accountable party, to figure out.  While you must ultimately craft your own processes, it's still not nice of the consultants to leave anything undocumented, especially any custom work like integrations or model extensions.  But that's not what I'm writing about.  Back to the gap.


The gap is the discipline of constantly measuring all the onboarding activities, as well as the operational ones, related to the data against the desired state.  Not just a successful installation and demonstration, and not even just usage and operation, but successful operation, where the CMDB measurably reduces risk and cost and improves the reliability of the configuration data provided to IT's consumers.  And I presuppose a well-defined use case and an achievable, understandable, quantifiable ROI here.  If the data is sloppy going in, your ROI suffers and you see sluggish or no ROI up in the business KPIs.


I'm not talking about Asset Management or SACM!  Those processes do things like call for specific ownership of certain CI types, or call to be the specific provider of certain CI types.  That is not what I'm talking about!  I'm talking about the layer that sits between ALL the processes and the CMS - the governance part, the part responsible for evaluating the data and its fitness for consumption.


If CMDB is about plumbing, CMS is about water quality.  


So you have a working CMDB, a building full of experts ready to federate, and a bag full of use cases - now what?  Oh yeah, maybe you already did some discovery.  You'll probably have to undo some of that.  Rewind for a minute.


As you go through the presales demo, your Proof of Concept, your testing and lab work, and finally, your move to production, can you answer the following questions:

  • Who is the intended consumer of the data?  (Hint:  You better have at least one)

  • Who owns each CI and attribute?  (Hint:  Someone needs to)

  • What provider conflicts exist?  For example, between the discovery and the asset or service or network management systems? (Hint:  There shouldn't be conflicts in the finished CMS.  Not a misprint.)

  • Who does the business see as accountable for this configuration data?  (Hint:  not the Alpha Geek's spreadsheet)

  • Who decides what data is provided to what consumer?  (Hint:  It shouldn't be the consumer or the provider.  No really, not a misprint.)


The answers reveal your config data onboarding processes, or lack thereof.
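The provider-conflict question in particular lends itself to a mechanical check. As a rough sketch (the CI, attribute, and provider names below are invented for illustration, not a real CMS schema), a first onboarding step is simply to enumerate which providers claim each CI attribute and flag every collision:

```python
# Hypothetical provider claims: (CI, attribute, claiming provider).
records = [
    ("server-42", "serial_number", "asset-manager"),
    ("server-42", "serial_number", "discovery"),
    ("server-42", "ip_address", "discovery"),
]

def find_provider_conflicts(records):
    """Return each (CI, attribute) claimed by more than one provider."""
    providers = {}
    for ci, attr, provider in records:
        providers.setdefault((ci, attr), set()).add(provider)
    return {key: provs for key, provs in providers.items() if len(provs) > 1}

# serial_number is claimed by both the asset system and discovery:
# that conflict must be resolved before the CMS goes to production.
print(find_provider_conflicts(records))
```

The list of conflicts this produces is exactly the work item the onboarding process has to burn down before the finished CMS can be conflict-free.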


Here is the dark space in the center of ITIL:  you need a good governance model to build the processes around onboarding and managing configuration data - processes which, when followed, will result in the highest quality data possible, provided to the consumer in the most efficient and secure manner possible.  "Data Stewardship" is a good way of thinking about it; here is a gentleman who understands.


By accounting for the authoritative nature of the data, and the entitlement of the consumer, one can construct such a model which can guide those creating processes for using a CMS, during and after CMDB and CMS implementation.


Such a model must minimally account for three things:  the consumer, the provider, and the owner.  The model must provide a few basic tenets which reach into every aspect of how configuration data is handled by the CMS.  For example, how provider "conflict" is handled, how ownership of data is established, and why consumers must be understood vs. merely serviced.  This is the missing gap in the center of ITIL.  I hope. 


I and a bunch of my friends have developed such a model, and it happens to be called the Consumer/Owner/Provider Model, or just the COP model for short.  In my next posts, I'll be introducing some of the major tenets of the COP model and discussing each one in  more detail.  Perhaps you think of some of these tenets in other ways or using other language, but I want to see how well this relates to you. 
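To make the three roles concrete before those posts arrive, here is a toy sketch of the idea in code. The precedence table, the entitlement table, and all the names are simplified illustrations I've made up for this post, not the full COP model: the owner decides which provider is authoritative for each attribute and which consumers are entitled to it, and the CMS serves data accordingly.

```python
# Owner decisions (hypothetical): authoritative provider per attribute.
authoritative = {
    "serial_number": "asset-manager",
    "ip_address": "discovery",
}

# Owner decisions (hypothetical): which consumers may read which attributes.
entitlements = {
    "change-management": {"serial_number", "ip_address"},
    "reporting": {"ip_address"},
}

def serve(consumer, attribute, values_by_provider):
    """Return the authoritative value, but only to an entitled consumer."""
    if attribute not in entitlements.get(consumer, set()):
        raise PermissionError(f"{consumer} is not entitled to {attribute}")
    return values_by_provider[authoritative[attribute]]

# Two providers disagree; the owner's precedence decides, not the consumer.
values = {"discovery": "10.0.0.5", "asset-manager": "10.0.0.9"}
print(serve("reporting", "ip_address", values))
```

Notice that neither the consumer nor the provider decides what gets served - the owner's tables do, which is the whole point of the governance layer.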


I hope I've whetted your appetite and that you're full of questions.  Feel free to ask them here or comment with a reply.  Thanks!


CMS Use Cases On Parade At HP Software Universe

In my last few posts I talked about the need to focus on use cases.  Over many years I have learned that the number one thing people want to hear about is as follows:  "what is my peer down the street (or across the ocean) doing about similar problems".

Being the track manager for the Configuration Management System (CMS) track at HP Software Universe in Washington D.C. (June 2010), I just completed scheduling a number of great presentations that represent real world use cases and implementation outcomes.   The CMS track at Universe this year highlights a number of great case studies of what real customers - facing real challenges - at very  large and complex companies - are doing around CMS related initiatives.  What follows is a quick summary of customer centric use cases that will be on stage for the CMS track at Universe this summer.

Turkcell, one of the largest mobile phone companies in Europe, will be on stage addressing how they are creating an integrated IT environment capable of supporting a broad range of IT processes including Asset Management, Configuration Management, Change Management and Problem Management.  Elements being integrated include IBM Maximo, HP Business Service Management (BSM) solutions, the HP Universal CMDB and HP Discovery and Dependency Mapping.

An HP partner, Linium L.L.C., will be walking through the work they have done for a major retailer in the US.  The focus of this case study is around the implementation of a Change and Release Management solution that brought together HP Server Automation, HP Release Control, HP Service Manager and the HP Universal CMDB.  

Melillo Consulting is working with a large company to integrate several of our BSM solutions with our HP Client Automation Center to implement an Incident, Change, Problem and Request Management solution.

Elegasi, another partner, is working with a large Financial Services company to help them effectively manage the cost of licenses associated with virtualized infrastructure.   The session will highlight how Discovery and Dependency Mapping, the Universal CMDB, and HP Asset Manager can work together to help address license compliance and cost management for virtualized infrastructures.

Finally, our HP Professional Services team is implementing a Service Asset and Configuration Management solution for a major Telecom company.  They'll be addressing the work they have done to integrate UCMDB and Asset Manager and talking about where they are going next in terms of integrating Service Manager. 

When I consider all of the sessions being put together across other tracks as well - I know that there are many more customer or partner delivered sessions that focus on integrated solutions.  In many of these, the UCMDB is a central component of the solution that will be represented on stage.  If you are interested in going to Universe and have not yet registered, I invite  you to get $100 off the entry price by entering the promotion code INSIDER when you register.  Feel free to pass this promotion code on to others.  Hope to see you in Washington this summer.  Cheers!

How Long Should a CMDB / CMS Take to Build? Part 3: Process Engineering

This is the third post in the "How Long Should a CMDB/CMS Take to Build" series.


Today's tagline: YOU must make the final journey to the right process.  No one else, not even a cherished trusted vendor or analyst, can make it for you.  But they can act as a spiritual advisor.


ITIL process engineering is the second most important part of the deployment, and the most difficult to get right except for people and cultural change.  The only reason process engineering is slightly easier than those is that you at least have better measurement tools.


And I'm talking about ITIL processes here, for which additional complexities apply.


There aren't many vacant lots left in downtown ITIL.  I'm talking about process RE-engineering as well, because almost none of you are building a data center from scratch.  You already have some kind of processes, be they manual or dysfunctional.  Part of the process-building involves assimilation and demolition of parts of the earlier generation of processes.


So what do you start with:  Needs, goals, plans, budget, vendors, tools?  Turns out it's not so straightforward.


It's a paradox.  You can't easily build your processes without a tool in mind, or you will not be able to find a tool that does everything you want.  Don't believe me?  Go ahead and try; you'll spend a ton of money on column fodder and end up picking the vendor that can just fill in the most columns - a disappointing and possibly unwise strategy.


However, you don't want your processes to be tool-driven either, because you will end up locking out the most important KPIs: the ones that measure whether you are fulfilling your use cases exactly.


So, do I pick a vendor first, or define my requirements first?  My answer:  it's an iterative process, there is no prescriptive approach that ensures success - you must have a good IDEA of your processes, then court a few vendors, then get some preliminary input to refine your idea of what config management should be, ask a few more questions, and repeat until you have a good foundation that will fulfill your use cases and is supportable by a solution you can buy and build.


The CMDB is a tool, maybe even a platform.  The CMS is a deployed operational solution.  You must still operate it with your own processes and people.  Good luck with ITIL.  You'll need more than that.  But I digress.


If you expect your vendor to supply all the processes because the tool won't work without them, you're in trouble.  You must still understand all your processes to the point where YOU are doing the service transitions and operations.  Most vendors can't and won't care as much as you do about how well your processes work, and at best will deliver incomplete, high-level, or overly generic processes - the same cartoon version of IT that ITIL already provides.


 As a vendor you have to work really hard to create and deliver a good process layer of best practices around your CMS and CMDB.  And while I've tried hard (that is one of my projects at HP), I cannot fool myself that we have gotten everything right, in fact or in principle.  Experience and the rigorous discipline of journal-keeping,  analysis, and continual improvement are our only lights into the future of process.  Don't let anyone else sell you otherwise.


Some final recommendations:

  • Get yourself some wild, angry beekeepers.  They'll keep you, as well as your vendor, honest, and help you identify the needed, the unneeded, and the just plain stupid.

  • Come to recognize the smell of crap factoids.  Analysts and vendors, like Alpha Geeks, CIOs, bloggers, and help desk technicians, are not immune to hubris. 

  • Not all IT organizations need to "mature" all of their processes to the maximum "maturity".  Avoid unnecessary or self-fulfilling scaffolding, even if it's your vendor's favorite.  Even though ITIL says you should be doing something, you must decide for yourself whether you actually should be doing that thing.  And  it's not always easy to determine.  Read.  Study.  Know not just IT but YOUR IT.  In the vicious world of ITIL, knowledge isn't just power, it's survival.

  • Same thing I tell all the school kids I teach astronomy to: Keep asking questions.

  • Configuration management, like education, is not about filling a bucket; it's about lighting a fire.  Think: motivated, self-policing, continual service improvement.  Incent your people to seek out improvement and they will do so, to your benefit.  Too expensive?  Don't expect much help.

  • If you don't understand something but should, go ahead and ask the question.  But remember the risk.  And think about who you should ask first.


I hope this post touches a nerve, or gets through to someone, or even angers someone enough to post a reply.  I'd really like to hear what you think.  Thanks for your time.

Taming (if not slaying) one of IT’s many Medusas

My third-grade son and I have been exploring Greek mythology lately.  We’ve been reading about the gods of Olympus.  This newfound interest was triggered by my son having recently listened to the “Lightning Thief” on audiobook - the first of the "Percy Jackson and the Olympians” series.  If you aren’t familiar with Medusa, she is a monster in female form whose hair is made of dozens of horrifying snakes.  The hair-full-of-snakes idea reminded me of a very thorny problem that IT deals with - that of addressing compliance-related issues.  The more I thought about this, the more I realized that almost any problem I have ever come across in IT reminds me of Medusa, but this area in particular stands out in my mind.


In my last post I talked about the importance of use cases.  In this post I want to focus on a trend I’ve seen that often is the genesis of a Configuration Management System (CMS) initiative – that of addressing compliance related reporting.  Over the years I have dealt off and on with the compliance problem and it stands out in my mind because of the duality that permeates the issue.  Compliance has this quality of being everywhere and being nowhere at the same time.  Let me explain.  When you think about the roles in IT almost every group has some level of responsibility for supporting compliance and yet responsibility for what must be done is highly diffused across the organization.  This is true even if the organization has (and most now do have) a Chief Compliance Officer.  From a product standpoint every product seems to be able to highlight itself as a solution but no one offering by itself really gets you very far.


So, having acknowledged up front that no single product can be all things to all compliance issues: I have been working in the CMS area long enough to see a recurring trend - that of using Discovery and Dependency Mapping (DDM) as a way of lightening the burden of compliance reporting in highly regulated industries like Financial Services, Health Care, and Utilities.  In each of these industries, I know of at least one (sometimes more) large and complex organization, with massive reporting requirements, that is using DDM to meet requirements around the need to attest and verify that they have strong controls in place to prevent unauthorized changes to their mission-critical infrastructures. For many organizations, addressing these kinds of compliance requirements is a hugely time-consuming and costly endeavor from the standpoint of IT hours invested.


I will start with a publicly available story, that of FICO.  Known to most in the US for their credit scoring service, FICO used DDM as a key element in a solution which also included HP Service Manager.  FICO talks about their solution from the standpoint of incident, change, and problem management, but addressing compliance was certainly a big motivator for them as well.  Operating in the highly regulated financial services industry, audits are a way of life for FICO.  Matt Dixon, Director of IT Service Management at FICO, has said that with their solution they went from taking in the neighborhood of a day to address audit requests to being able to do so in a matter of minutes.  Given that FICO deals with something like an audit a day, this is no small deal.


A health care company that I know provides another good example.  This company had built a compliance reporting database into which they had integrated close to 100 data sources.  They had further built their own reconciliation logic to support data normalization.  The development effort and the ongoing care and feeding associated with this system were enormous.  The company launched an initiative to rationalize data sources, implement automated discovery and dependency mapping, and replace this homegrown reconciliation database and logic with a vendor-supported solution (they chose HP).


Turns out that in their data rationalization effort this company found that something like 80% of the data held in their source systems was redundant at some level across the organization.  This understanding helped them move forward and develop a program around retiring systems and moving to a data leverage model using a CMS style approach.  By the way I do not  feel that what this company found in terms of redundant data would be that much different if we ran the same exercise at most large companies I deal with.
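The first cut of that kind of rationalization exercise is simple enough to sketch. The source names and hostnames below are invented, and I'm assuming (purely for illustration) that records can be keyed by hostname; real reconciliation has to normalize far messier identifiers first:

```python
# Hypothetical inventories from three source systems, keyed by hostname.
sources = {
    "asset-db": ["web01", "web02", "db01"],
    "monitoring": ["web01", "db01", "app01"],
    "spreadsheet": ["web01", "web02"],
}

def redundancy_ratio(sources):
    """Fraction of records that duplicate a record in some other source."""
    total = sum(len(hosts) for hosts in sources.values())
    unique = len({h for hosts in sources.values() for h in hosts})
    return (total - unique) / total

# Half of this toy inventory is redundant across sources.
print(redundancy_ratio(sources))
```

Run against real source systems, a measurement like this is what turns "we suspect a lot of this is duplicated" into a number you can build a retirement program around.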


Another large company I know, in the highly regulated utility sector, went through a very similar process.  Like FICO, this company is pursuing a fairly broad agenda around incident, change, configuration and release management, but addressing compliance-related reporting requirements was its initial priority.  Like FICO, it has been able to substantially reduce the time invested in compliance while radically shortening the time it takes to produce compliance-related reporting.


So while discovery and dependency mapping is by no means a panacea for compliance issues, it can help an organization meet its compliance reporting commitments.  At the heart of many compliance requirements is the need to attest and prove that you have tight controls over how your infrastructure is managed.  Transparency and continuous visibility into your organization's configurations are fundamental to meeting that requirement, and a CMS can be a key element in doing so.
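Mechanically, the attestation at the heart of these reports is a diff between what discovery observed and what change management authorized.  A toy sketch (the data structures are invented for illustration, not any product's schema):

```python
from datetime import date

# Hypothetical inputs: changes observed by comparing discovery snapshots,
# and the CIs covered by approved RFCs for the same window.
observed_changes = [
    {"ci": "web01", "attr": "installed_sw", "when": date(2010, 3, 2)},
    {"ci": "db01",  "attr": "kernel",       "when": date(2010, 3, 3)},
    {"ci": "fw02",  "attr": "acl",          "when": date(2010, 3, 4)},
]
authorized_cis = {"web01", "db01"}

def unauthorized(observed, authorized):
    """Observed changes with no matching approved RFC: the audit exceptions."""
    return [c for c in observed if c["ci"] not in authorized]

exceptions = unauthorized(observed_changes, authorized_cis)
print([c["ci"] for c in exceptions])  # the list an auditor would ask about
```

Being able to produce that exception list on demand is what turned FICO's day-long audit responses into minutes.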



How Long Should a CMDB/CMS Take to Build? Part 2: Culture and Understanding


This is part two in a series on understanding CMDB and CMS deployment times.  Last post, we talked about people.  Here, we'll discuss people in an interactive, collective workplace context: in other words, corporate culture, and why corporate culture can be easy, or very, very difficult, to understand and change.


And before we get too far, you must realize I did not graduate from any school of business science.  So there's probably some theorem or corollary that describes what I'm getting at here.  Something like the Blake-Mouton grid, but with cookies.


So I won't be solving all your cultural change problems in a blog.  I'm here to explain why understanding corporate and human culture, or failing to, matters so much for TCD (total cost of deployment).


Solution Architecture and Service Delivery often take the fall for being late or underdelivering.  Why?  Much of the actual TCD goes unaccounted for.  A scenario: the consultant shows up, the clock starts ticking.  Shortly, a missing or broken process is revealed to impede the project.  The tech consultant is then pressed into the role of business process engineering consultant and burns the rest of the time on one or two of these process problems: obtaining approval to touch some piece of the infrastructure, say, or trying to fast-track a three-week change request into three hours.  The project gets behind.  The customer gets unhappy, the vendor gets blamed.  Free stuff is demanded.  What happened?  The consultant told the customer to "prepare" so this wouldn't happen.  Wasn't the problem understood or properly prioritized?  Delivery is relied on to fix it when they arrive; they have "done this before" or "should" know how to fix these kinds of problems.  Big mistake.  Organizational issues in the mirror look smaller than they actually are.


Why do people and cultural change remain the biggest variables of the deployment, the most difficult to get right, and the hardest to estimate?  Pragmatically, it's difficult to measure.  Scientifically, it's poorly understood.  Rhetorically I suppose one could answer "lack of mutual understanding".  Especially that "mutual" part - you understanding them is not enough.  You must strive to not be merely heard but understood as well.   Understanding is not just the first step to effect cultural change - it's the thing.  Going from an informal to a formal process for say, change management, can be really earth-shaking culturally, especially if a profound understanding of the people and culture are ignored, misunderstood, or  underbudgeted - and if they don't understand where you are coming from.


This is serious stuff - we're messing with people's ideologies here.  And not just who-moved-my-cheese ideologies.  Ideologies like associating personal self-worth with job performance, a very strong one for many people - in part because people tend to become attached to the eclectic parts of their job: ironically, the parts that have no safety net, that depend on human talent to get done properly.


I have seen trouble even in well-planned and well-executed projects, because the project carried an apparent air of distrust, or an implication that humans were no longer doing a good enough job so "control" was needed.  The project intended none of those things; the people just weren't in on what was happening.  Even good people who do a good job can take it personally, or feel misjudged, or feel they've failed in some way if they go unassured for too long.  Ignored long enough, these good people's fears will develop into a fight, and you won't understand why.  This is the kind of stuff that can tsunami project schedules.  People who feel in on things tend to produce a lot more.


Address common, as well as valid, concerns:  "Alice, you do a great job, but even at 99.98% accuracy, that remaining 0.02% is worth about a million dollars a year.  That not only justifies the cost but demands that we implement this automation to stay competitive.  You didn't do anything wrong to 'cause' this project."  If you can say this sincerely and without patronizing, you'll get very far.


Still, cultural change is an enigma in some organizations.  Culture?  What do you mean, "culture"?  It is what it is.  (An actual response to me from a manager long ago.)  How does one change what one can't understand, let alone measure?  I'm sure most of you understand your organization well.  What you may want a fresh look at is this: is corporate culture grown, or constructed?  That matters in your approach.  What parts of your organization and culture can and cannot change?


It depends on how you think about it, and not merely on the nature of the force applied and the malleability of the material being worked.  Complex dances like the Change Management Tango or the Incident Cha-cha cannot be beaten into being; they are not rough structures forged in a foundry and bolted together!  They must be grown, nurtured, filed at odd angles, sanded a lot, to produce the right result.  Think of your project as constructing a precision instrument, not raising a barn.  These are the nuances of cultural change, not the blunt strokes of an .mpp file.  Approached improperly, the vicious old Theory X minimal-effort/maximal-control cycle rears its ugly head.  Zombified IT - there's a good ITIL replacement:  Obey…obey…


And you can't buy this cultural change off with a few all-hands meetings or a prop- uh, I mean an advertising campaign.  Pep rallies thinly disguised as running interference for a no-choice change might just deserve the derision and sabotage that come their way.  Which could be a lot.


To effect real cultural change, your people must not just hear but believe.  You can make people do the former but not the latter.  DO bother to show them the numbers, your assumptions, your heart - why you believe this is good for IT and the business, because your people are a part of both.  Even if they look bored.  Don't assume they read all your memos, or that memos will change corporate culture.  If you can't do these things, your project probably carries a higher risk of failure.  And while a business is not a democracy (unless you're an elected government), there are some things it behooves one to be democratic about.  I'm not talking about voting; I'm talking about involvement and communication!


These are the types of obstacles you face.  Any history you care to read teaches us that the best intentions of human engineering have often run aground on the unpredictable shoals of human behavior.  Don't skimp on the research, or buy someone else's.  Configuration Management Systems are still hand-made today.  But the parts are getting much better.


Next time we'll look at process engineering, a topic almost as mysterious, and as difficult to estimate, as cultural change.  Whether you're planning, implementing, or operational, I hope I've given you some small insight into how important and unwrangly TCD really is and what success really means organizationally.  Can you relate?  Do you agree?  Care to share what your TCD numbers were?  Please reply and let us know how you feel.  Thanks.

How Long Should a CMDB / CMS Take To Build? Part 1: People

For some time I've been exploring the value of a CMDB and CMS.  A big part of the TCO value equation is the  TCD, Total Cost of Deployment.  TCD is often underestimated - note how little Google has on the subject.  Why?


  • Not everything that matters gets estimated

  • TCD tends to be estimated optimistically or "carved" to fit  budgets

  • The number and complexity of problems are often underestimated

  • The straightforwardness of fulfilling the first few use cases is often overestimated


And isn't all this "estimating" really a euphemism for "we don't know"?  If we knew, we wouldn't have to estimate.  Estimation has an inherent connotation of uncertainty that we don't like.  We all want complex things to be simpler, more transactional, more commoditized, than they really are:


C: "Nice CMDB.  I'll take it."

V: "That'll be one million dollars.  Where do you want it delivered?"

C: "Dock 2."

V: "Ok.  What color?"

C: "Fast."


No really, what should you expect your CMDB deployment to be like?   What should we be focusing on estimating?


The things we like to use to estimate CMDB deployment aren't the biggest or most important variables.  We like to focus on things like: how long do other deployments take?  How long will it take to install the software?  How long until the hardware arrives?  When can we get everyone "trained"?  All good and proper project management - can't do without it.  A deployment project is focused on consulting time and cost, hardware schedules, definable things.


But the two most important factors - people and process - are also the two most difficult to measure and change, and the biggest variables in estimating time to full implementation of a CMDB or CMS.  This post will start with the biggest variable of all: people.


Implementing a CMS is much more than getting the solution deployed, getting some discovery done, or even getting some providers and consumers onboarded.  It's about changing the way IT works.  To that end you absolutely must start with what IT is - not a data center, or even a collection of infrastructure and apps.  IT is an organization, and organizations are built around people, process and culture.


Deploying a CMS will touch almost everyone in the IT organization, because the CMDB almost always follows the implementation of some other IT-wide initiative, such as change or release management.  As ITSM initiatives go, so goes the CMDB.


The Kicker:  The ITSM ecosystem of applications, plus the CMDB facilitating the exchange of configuration data and a common view of IT services, forms the CMS.  This should sound like a much harder project than implementing a CMDB.  It is - that's my point.  Without thinking of your CMDB this way, you are likely to do some of that dangerous underestimating of the effort required to get ROI out of your CMS after your consultants have left the building.


In my next post I'll explore why cultural change is the most important part of the deployment, the most difficult to get right, and the hardest to estimate - and I'll ask for your take.


Questions, comments, complaints, please reply and let us know how you feel.  Thanks.


How important is Service Asset and Configuration Management (SACM)?

One of HP's enterprise customers thinks it's important!  This large insurance company has consolidated its asset management, human resources, and configuration management system data to produce inventory reports for several departments.  The company looks at SACM from different perspectives, ensuring that data accuracy and calculations are consistent between the different views.  I don't think this company is alone.  Companies seem increasingly challenged by the complexity of their IT environments and are looking for better ways to maintain control of their infrastructure.


The ITIL v3 definition of Service Asset and Configuration Management is:

- The SACM process manages the service assets in order to support the other Service Management processes.

- The objective of SACM is to define and control the components of services and infrastructure and to maintain accurate configuration information on the historical, planned and current state of the services and infrastructure.


It’s vital to provide integrated, accurate and current data across IT, and it takes rigorous processes to achieve this federation of data.  A goal here is to establish Asset Manager as the reference source for assets from the point they are procured to the time they are retired.  But what’s the best method of federating data?


HP’s Asset Manager integrates with the UCMDB to automate the ITSM process without requiring a monolithic repository, and it ensures all hardware and software assets supporting business services are effectively managed.  It also provides a clear illustration of dynamic CI enrichment, federating attributes from an external authoritative data source.
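To make "federation without a monolithic repository" concrete, here's a toy sketch of federate-on-read (invented structures, not the UCMDB or Asset Manager APIs): the CMDB holds only the core CI record, and ownership attributes are pulled from the external asset source at query time rather than copied in.

```python
# Core CI records held in the CMDB (toy data).
cmdb = {"ci-100": {"name": "web01", "type": "unix_server"}}

# External authoritative asset source. Its attributes stay here;
# the CMDB never stores a copy of them.
asset_source = {"ci-100": {"owner": "E-Commerce", "cost_center": "CC-42"}}

def get_ci(ci_id):
    """Federate on read: merge the core CI record with externally held attributes."""
    record = dict(cmdb[ci_id])                   # copy the core record
    record.update(asset_source.get(ci_id, {}))   # enrich at query time
    return record

print(get_ci("ci-100"))
```

The consumer sees one enriched record, but each attribute still has exactly one authoritative home - which is the whole point of federation.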


Is your company tangled up in confusing asset reports and CMDBs?  Is your data federated across your IT environment?  I want to know your thoughts...

"Aha!" Moments: Serendipitous Early Value Encounters in CMDB and CMS projects

My social media manager Heather asked me to consider writing a customer success story for one of my blog posts.  I decided to try to raise the bar even further.  This is a compilation of many customers' success stories, albeit small ones: "Aha!" moments - successes they didn't know were coming.


Early, informal value realization - the "aha!" moments found early in a CMS or CMDB project - can act as value tiers, financing time and credibility for the more difficult-to-realize, longer-term ROI.  Especially if your funding and sponsorship depend on volatile management moods and economic fluctuation.  Look for aha moments whenever you can - they will usually reward you and your project in ways that are hard to predict.  And, very occasionally, punish you - in those environments where No Good Deed Goes Unpunished, or where messengers, especially bearers of bad news, are still killed.  You know who you are.


Informal research (my own experience and anecdotes from my customers and colleagues) indicates that some engagements are more successful because value was realized and documented early and often.  This is paradoxical, because the business transaction funding the project is usually measured only on the final value realization, i.e. fulfilling the sponsored use cases.  However, organizations that show no value before the primary use cases are delivered typically do not fare as well.  It is unclear which way the correlation runs - whether the lack of early success leads to a loss of project momentum, or whether a weak project inherently produces fewer aha moments.


Apparently, “aha!” moments can be almost as important as the overall drivers, even though these are rarely formalized or anticipated. However, aha moments alone cannot sustain a project. The primary use cases must be delivered.


Here are a few of my favorites.


For each of these, I'll describe when the aha moment happened, its source, why it's an "aha", and its value.


    Problem awareness

    Broad projects like these illuminate the business cases throughout the organization.


    Take the example of the "email from corporate" - an email notifying all employees of a new project.  It goes unread by many due to the large number of such announcements.  Until something happens that involves them, the project is only dimly visible to much of the organization.  The "aha!" moments happen when you start interviewing people (known as "surfing the organization").  When the conversation starts with "Why are you here?" and ends with "Can I play too?", the value of the project has been successfully evangelized.  But this is not the aha.


    When people begin answering the planning questions - questions like "What is your process for documenting applications?" - and the answer is "Well, we really don't have a process for that," people begin to realize that there is a broken, missing, or inefficient process that the CMDB use case can improve.

    The aha comes when people become aware of the problems facing their organization at a higher level than their own function.  From the individual's perspective, the organization's "ubiquity" is reduced; it becomes a bit more personal.


    This can enable intangible value ranging from motivation, morale, and incentive to participate, to developing interests in the company’s higher functions – adding momentum not only to the CMS project, but in part to the entire organization.


    Infrastructure awareness

    During interviews, we have sometimes uncovered missing firewall rules, missing hardware redundancy, and missing security rules.  We would ask something like "What is your firewall policy for this DMZ?"  The technician would log on to the firewall or look through their spreadsheet and say, "I don't see it."  They would call their buddy or their manager, discuss it, then turn to us and say, "We're fixing that right now," or "We've got to open a change for this."  Cha-ching.

    Risk is directly reduced by correcting redundancy and security-related issues.


    Identifying infrastructure Single Points of Failure

    Initial baseline discovery has found the actual infrastructure to be contrary to the stated configuration.  This is due not only to outdated records but to misunderstandings, and to differences between planned and implemented solutions.


    For example, a single point of failure was found for a mission-critical application requiring redundancy down to the network level.  Connectivity to the application’s database was found to flow through a single router.  The Senior Geek we were working with assured us this was impossible and that the tool had to be wrong somehow.  A few phone calls to his Alpha Geek and some probing later, the single point of failure was confirmed.  The Alpha Geek and his team were later praised for uncovering a critical point of failure so early in the project.  We looked pretty good too.  Cha-ching.

    Risk is directly reduced by the identification and subsequent correction of situations falling short of documented or expected implementation.

    Depending on the significance of the differences, finding these kinds of things often is a big boost to the credibility and confidence for a fledgling CMDB initiative.


    Discovering non-standard / unauthorized hardware and software

    Often, unauthorized software or hardware configurations place production at risk - for example, non-standard software or patches installed on production servers.  Actual examples of “risky” hardware:

    1. finding a part of a production application running on a desktop

    2. finding personal network hardware on a production network

    3. finding part of production running on the CIO's desktop at his residence!  Cha-ching!


    Reduced risk to production applications


    Security and auditing

    SNMP was often found running with the default community string, even after the security staff assured us that none of their devices were on the default value.  We also found insecure protocols such as telnet enabled where policy said they were disabled.

    Some SMEs, when approached, are skeptical of the discovery results. Only after verification using another tool will action be taken. Often, this has the net effect of increasing trust in the product.

    Risk is directly reduced by identifying missing or default security credentials. However, the amount of value varies widely depending on where the breach in question was located. For example, a breach found in a DMZ would be more valuable than one found in an internal-only network.


    Confidence in the CMDB contents usually increases with these kinds of discoveries because they are often visible to management and other groups.

    Dependency Mapping

    Unexpected Host and Applications Dependencies

    When we start putting the topology views together for the core service models, we sometimes discover application relationships that make a difference.  For example, during DR planning, a customer found that a mission-critical application depended on a “non-critical” application - which this discovery made critical.  Plans were changed to move the newly critical application at the same time as the mission-critical one.

    Outage avoidance is more than risk reduction - had the situation not been found and corrected, there would have been an outage.  This is a direct improvement in quality, both statistically for risk and cost-wise operationally.  Even if the ROI is hard to quantify, there is no doubt that ROI occurred.  Cha-ching.

    Impact Analysis

    New Application-level Dependencies

    As in the previous scenario, we sometimes uncover additional dependencies when we begin testing the impact analysis correlation rules.  Usually it's in the form of gap identification with the application owners, e.g. "Hey, where's the ABC app, huh?"  But you should take relationship identification any way you can get it.


    Outage avoidance as described above.

    Training, both formal and informal


    Interaction with application SMEs


    Interaction with customer management


    Interaction with technical staff (network, security, DBA, etc.)

    New Use Cases

    When you have good stuff, everybody comes to you and asks you to make it do everything you ever told them it could do.  It can be quite overwhelming.


    The team begins linking the concepts learned in training to begin solving their own problems. Matrix teams such as those found on CMS and CMDB projects often bring new valuable and challenging use cases to the table.


    So there's a risk of “scope creep” as students try to use resources already allocated for the primary use cases on their own use cases, or as the project attempts to take on too many use cases too early, before it has sufficient momentum to succeed.  A lot of aha moments can increase project momentum, shortening time-to-value for the primary use cases.  It is worth a mention here that, as a CMDB matures, it can take on those additional use cases.  So aha moments aren't exactly scope-creep repellent, but they make the smell more tolerable.


    However, too many use cases early on tend to starve the project due to lack of delivery of the primary use cases.  Don't run before you walk.

    With a reliable means of capturing these use cases, the CMS grows in value, further decreasing the cost of implementation and shortening time-to-value through experience.  All consumers benefit from a collective body of expertise.

    Ultimately, aha moments alone are insufficient for a project’s success, but they do seem to play an important part.
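The security-and-auditing example above is, mechanically, a comparison of discovered state against stated policy.  A minimal sketch, with made-up device records rather than real discovery tool output:

```python
# Made-up discovered device configurations.
discovered = [
    {"device": "sw-core-1", "snmp_community": "public", "telnet": False},
    {"device": "rtr-dmz-1", "snmp_community": "s3cret", "telnet": True},
    {"device": "sw-edge-3", "snmp_community": "s3cret", "telnet": False},
]

def violations(devices):
    """Return (device, issue) pairs where discovered state contradicts
    the stated policy: no default community strings, no telnet."""
    found = []
    for d in devices:
        if d["snmp_community"] == "public":
            found.append((d["device"], "default SNMP community"))
        if d["telnet"]:
            found.append((d["device"], "telnet enabled"))
    return found

print(violations(discovered))
```

The aha happens when this list is non-empty despite everyone's assurances that it couldn't be.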


Yeah, you caught me - this is part of a paper I already wrote, so I'm a little drier here than in my previous posts.  That's OK, I've been pretty "wet" so far.  But this is serious stuff when you get down to it!  My pontification (and YOUR COMMENTS, PLEASE!) should add up to something greater than poking fun at dumb stuff and waxing philosophic about mundane topics like Configuration Data Provider Rationalization (another juicy topic coming soon!)


If this is interesting to you, please let us know.   Our blog isn't really a blog until we get our user community actively involved and discussing these topics, which are really little more than starting points.  Please take a quick moment to let us know if you agree, if we suck, or what.  We'd appreciate it.

Location, Location, Location – Part 2

In my last post we took a look at the lineage of today’s CMS efforts.  The two major lineages I cited were ITIL v2 CMDB initiatives and dependency mapping initiatives focused on application architecture reengineering.  A modern CMS initiative unifies these heritages from a technology standpoint.  It brings together the aspirations of an ITIL v2 CMDB initiative, but in a technology form factor that is much more practical given the complexity and scale of any modern enterprise.

What I mean to say is that the approach of having a federated CMDB acting as the bridge to other CMDBs and to other management data repositories (MDRs)  is a much more practical approach than focusing on the creation of a single monolithic CMDB.  Consuming applications in turn leverage this federated CMDB for access to configuration item information, business service context and for access to any information across IT that can be tied to these two elements.  

To be effective a modern CMS must also embrace automated discovery and dependency mapping.  The huge amount of gear and the complexity in today’s multi-tier and shared component application stacks make it totally impractical to try to support most IT operational functions without automated discovery and dependency mapping.  The old approach of leveraging tribal knowledge and manual processes just doesn’t scale.  This approach results in a data layer that is far too incomplete and far too inaccurate to support the data integrity requirements of the IT processes that need to consume this data.

So where are we today?  The technology platform to effectively implement a modern CMS exists right now; of that I have no doubt.  It is not perfect, but it is very, very capable.  But if CMS initiatives are not to go the way of prior CMDB and dependency mapping efforts, more than technology is required.  What is required is a focus on use cases first, meaning a strong and crisp set of data requirements to support one or more critical IT processes.  Once this is well understood, you can focus on what data is needed and where that data will come from.  Sponsorship will also be stronger when initiatives start from well-defined consuming processes than when they start from the data-gathering side only.

The requirements related to data sources should be fairly fine-grained - meaning you must understand requirements down to the data attribute level.  Saying that server data will come from solution “Y” is not enough, since the server data consumed by a specific IT process might require that your notion of a server encompass data from many sources.  The bottom line remains the same: “use cases, use cases, use cases”.
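One way to make "requirements down to the data attribute level" tangible is a simple attribute-to-source map.  A sketch (the solution names are invented for illustration):

```python
# Illustrative map: for a "server" CI, which source is authoritative
# for each attribute a consuming process might need.
server_attribute_sources = {
    "hostname":      "discovery",
    "os_version":    "discovery",
    "serial_number": "asset_repository",
    "owner":         "hr_system",
    "location":      "asset_repository",
}

def sources_needed(attributes):
    """Which source systems must be integrated to serve these attributes?"""
    return {server_attribute_sources[a] for a in attributes}

# A change-management use case might need these three attributes:
print(sorted(sources_needed(["hostname", "os_version", "owner"])))
```

Working use case by use case, the map tells you exactly which integrations each consuming process actually requires - rather than "server data comes from Y" hand-waving.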

Let me know what your experience has been addressing dependency mapping, CMDB or CMS initiatives at your company.  I and my colleagues would love to hear from you but even more important, I know others working on similar initiatives at other companies would love to hear from you.

Is Your CMS "On Fire"?

How does one measure the "Quality" of something?  What does CMS "Quality" mean?   "High-Quality" is thrown around without much substantiation, especially in the world of software.


My friend Dennis says that for a CMS to work it must be actionable.  Of course we all agree.  But how many of us are measuring (or trying to measure) the actionability of the CMS?  What are the metrics for measuring "actionability", and what other metrics matter for the CMS's lower-level functions?  How is it possible to measure fuzzy, subjective, inexact things like "data quality"?  At what point does measuring the CMS become ROI analysis?  I'm full of questions today.


Let's start pedantically for fun:


On Fire: adj. 1. Positive connotation: A continual period of producing exceptional work.  On a winning or lucky streak.  "Three goals in one game, he's on fire!".  The good kind of on fire.


2. Negative connotation: Exceptionally behind schedule or fraught with so many problems as to seriously hinder, halt, or even reverse forward progress.  "Our waiter is so in the weeds he's on fire."  Aka "mega-backlogged" or "dead in the water".      The bad kind of on fire.


3. Aflame, as in, seriously hot, or producing a glow or light.  Can apply to either of the prior definitions.


Fighting Fires:  Helping someone who is on fire in the bad way.  Commonly for someone important.    It is possible to catch fire  from fighting too many fires at once.  So much for my dictionary-writing skills.


For whatever acronym that is commercially and culturally significant to you, there's a way to say you're On Fire -  in both the good and bad ways.


Is your CMS on fire?  How would you know?  What metrics would one look at?  Is there such a thing as a CMS "thermometer"?    Let's call it a CMS-o-Meter:





During implementation, it's easier to tell whether your CMS project is on fire.  Assuming we defined clear goals and reasonable success criteria, we can look to the early deliverables and status reports, like any other project, to determine how on fire we are - one way or the other.


But operationally, once you get the CMS or part of it built, how does one measure its temperature?


What if your pile of CIs were as important as, say, a nuclear pile?  You pretty much couldn't go without a thermometer.  Kind of important to avoid catching fire.  Big, hot fire.  The kind that burns you for a long time.  How important is the CMS to your IT?  Got anything valuable in there?  Research and personal experience show that it is all too easy to blow up your CMDB project.


Two tried-and-true ways to find and measure quality for almost anything are: 1) what's important to the consumer, and 2) what's important to whoever is responsible for maintenance.  This is true for a car, a Service Desk, a CMDB, or a CMS.


Research from Gartner suggests that monitoring data quality is not widespread, and that the decision to monitor it ends up either an afterthought or chronically short of resources, given the cost not only of doing the monitoring but of learning how.


Use-case and consumer-based metrics could include qualitative vectors like the timeliness and accuracy of the consumed information.  These are often difficult and costly to measure, but they're the best indicators.  I believe you should invest here if you can.  Talk to the users.  Measure the value chain as far up as you can, up to and including the business.  Are your change control and closed-loop incident/problem management processes working better?  Are your MTBF and MTTR improving?  Did the business lose less money to critical availability downtime?
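If you do measure at this level, the arithmetic itself is the easy part; the hard part is collecting trustworthy inputs.  A sketch of the MTBF/MTTR calculation with invented outage data:

```python
# Invented outage log for one service over a 30-day window.
window_hours = 30 * 24
outages = [2.0, 0.5, 1.5]   # repair time per incident, in hours

def mtbf_mttr(window_hours, outages):
    """MTBF = operating time / number of failures;
    MTTR = total repair time / number of failures."""
    downtime = sum(outages)
    failures = len(outages)
    mtbf = (window_hours - downtime) / failures
    mttr = downtime / failures
    return mtbf, mttr

mtbf, mttr = mtbf_mttr(window_hours, outages)
print(round(mtbf, 1), round(mttr, 2))  # -> 238.7 1.33
```

Trend these month over month; the CMS-o-Meter reading is the trend, not any single number.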


Process-based measurements could be important too - such as the performance of the CMS itself.  What is the latency when you open an RFC, or when the Service Desk creates an incident (both processes that can consume or provide data to/from the CMS)?


Administration-based metrics are usually more foundational and architectural:  Does the system work?  Is it secure?    Is working  with the software more like dancing or more like wrestling?  Do you get good support from the vendor, and as importantly, is it easy to work with support?  Is R&D responsive on patching major problems?  Is the vendor forthcoming with their road map?


Stratifying the measurements this way will help. 


The takeaway here is not a comprehensive list; it's that you should be concerned about, and invest in, quality measurement of your CMS and its data as much as in the data itself.


Think about making the CMS actionable.


Build yourself the right thermometer for your CMS.


Calibrate your CMS-o-Meter to make sure it's reporting accurately.


Then monitor these metrics.  Operationally, in production, like you mean it.  Treat the CMS as you would any other production application, according to its priority in your organization.


When you're on fire, what do you do?  Let us know with a reply.  We'd always like to hear if you found this post useful, offensive, or just amusing for a few minutes.  Thanks.









When Thinking CMS remember “Location, Location, Location”

The other day I presented to a customer that had purchased HP Discovery and Dependency Mapping software.  This customer was interested in understanding HP’s direction relative to the concept of a Configuration Management System (CMS).  My discussion with this customer focused on how HP was addressing the data needs of IT operational domains, ranging from application performance and availability management, to configuration and change management, to IT process automation for server and network elements.  From a product perspective, HP’s focus in this area has been, and remains, providing a platform that delivers configuration item data, service context, and federated data related to those two items to consuming solutions across IT.

Our conversation eventually, and rather inevitably, turned to the best strategy for achieving such a grand vision.  The answer is surprisingly simple at one level, yet remarkably difficult to execute in practice.  Like the old adage “location, location, location” used when buying real estate, building a comprehensive CMS that works as promised and stands the test of time requires a laser focus on use cases, use cases, use cases.

I’ll return to this idea after a brief detour to look at the origin of today’s CMS Initiatives and how many of those early ancestors went wrong.

Origins of Today’s CMS Initiatives

Modern CMS initiatives have two main lineages.  The first and best known are the CMDB efforts launched in the wake of ITIL v2.  Many, if not most, of these early efforts failed (or at least fell far short of expectations).  The primary reason was the lack of a crisp focus on what problems were going to be solved, and in what order.  Companies sought to create a “master” database holding all configurations, representing all infrastructure across the whole of the enterprise.  While the CMDB technologies used in these early efforts were immature and had some technical limitations, most of these efforts didn’t fail because of technology.  They failed due to a lack of clarity about what the end game was.

The second major ancestor of today’s CMS efforts is dependency mapping.  Many of the early adopters of dependency mapping embraced this technology for reasons having little to do with how these capabilities are primarily used today: to support ongoing IT operations.  Instead, most early adopters were interested in dependency mapping as a means of supporting some form of application infrastructure reengineering.

Why?  Well, during periods of rapid business expansion the IT infrastructure at many companies had grown substantially, and no one had a handle on what existed and how it worked together to deliver IT services.  As a result, many companies found themselves unable to effectively take on new IT initiatives focused on reducing the infrastructure footprint, reining in runaway server and network admin costs, or taking advantage of new virtualization capabilities.  These organizations lacked a clear understanding of their starting point, so many of them embraced dependency mapping as a means of generating that understanding.

For these companies, using this information for ongoing management—supporting application performance and availability, change management, or IT process automation—was not the focus.  As a result, little emphasis was placed on the consuming IT processes and on integrations with the applications that support those processes.  Like the early failed CMDB efforts, many companies stumbled when they first tried to apply dependency mapping to the needs of ongoing IT operations.  And as with early CMDB efforts, the reason these initiatives failed (or at least did not deliver as much value as expected) was that they lacked focus.

Many companies, when first employing dependency mapping, would attempt to discover everything before having clear use cases for what data was needed to support which IT processes.  Since there were no clear consumers for the data, many of these efforts either lacked or failed to sustain sponsorship, and consequently withered on the vine.  In my next post I’ll look at how these two independent branches have come together to form the foundation of the current crop of enterprise CMS initiatives, and how these initiatives face the same challenge that plagued their technology antecedents.

About the Author(s)
  • HP IT Service Management Product Marketing team manager. I am also responsible for our end-to-end Change, Configuration, and Release Management (CCRM) solution. My background is engineering and computer science in the networking and telecom worlds. As they used to say in telecom, "the network is the business" (hence the huge focus on service management). I always enjoyed working with customers and on the business side of things, so here I am in ITSM marketing.
  • David has led a career in Enterprise Software for over 20 years and has brought to market numerous successful IT management products and innovations.
  • I am the PM of UCMDB and CM. I have a lot of background in configuration management, discovery, integrations, and delivery. I have been involved with the products for 12 years in R&D and product management.
  • Gil Tzadikevitch HP Software R&D Service Anywhere
  • This account is for guest bloggers. The blog post will identify the blogger.
  • Jacques Conand is the Director of ITSM Product Line, having responsibility for the product roadmap of several products such as HP Service Manager, HP Asset Manager, HP Universal CMDB, HP Universal Discovery and the new HP Service Anywhere product. Jacques is also chairman of the ITSM Customer Advisory Board, ensuring the close linkage with HP's largest customers.
  • Jody Roberts is a researcher, author, and customer advocate in the Product Foundation Services (PFS) group in HP Software. Jody has worked with the UCMDB product line since 2004, and currently takes care of the top 100 HP Software customers, the CMS Best Practices library, and has hosted a weekly CMDB Practitioner's Forum since 2006.
  • Mary (@maryrasmussen_) is the worldwide product marketing manager for HP Software Education. She has 20+ years of product marketing, product management, and channel/alliances experience. Mary joined HP in 2010 from an early-stage SaaS company providing hosted messaging and mobility services. Mary has a BS in Computer Science and a MBA in Marketing.
  • Michael Pott is a Product Marketing Manager for HP ITSM Solutions. Responsibilities include out-bound marketing and sales enablement. Michael joined HP in 1989 and has held various positions in HP Software since 1996. In product marketing and product management Michael worked on different areas of the IT management software market, such as market analysis, sales content development and business planning for a broad range of products such as HP Operations Manager and HP Universal CMDB.
  • Ming is Product Manager for HP ITSM Solutions
  • Nimish Shelat is currently focused on Datacenter Automation and IT Process Automation solutions. Shelat strives to help customers, traditional IT and Cloud based IT, transform to Service Centric model. The scope of these solutions spans across server, database and middleware infrastructure. The solutions are optimized for tasks like provisioning, patching, compliance, remediation and processes like Self-healing Incidence Remediation and Rapid Service Fulfilment, Change Management and Disaster Recovery. Shelat has 21 years of experience in IT, 18 of these have been at HP spanning across networking, printing , storage and enterprise software businesses. Prior to his current role as a World-Wide Product Marketing Manager, Shelat has held positions as Software Sales Specialist, Product Manager, Business Strategist, Project Manager and Programmer Analyst. Shelat has a B.S in Computer Science. He has earned his MBA from University of California, Davis with a focus on Marketing and Finance.
  • Oded is the Chief Functional Architect for the HP Service and Portfolio Management products, which include Service Manager, Service Anywhere, Universal CMDB & Discovery, Asset Manager, Project and Portfolio Manager.
  • I am Senior Product Manager for Service Manager. I have been manning the post for 10 years and working in various technical roles with the product since 1996. I love SM, our ecosystem, and our customers and I am committed to do my best to keep you appraised of what is going on. I will even try to keep you entertained as I do so. Oh and BTW... I not only express my creativity in writing but I am a fairly accomplished oil painter.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
  • Vesna is the senior product marketing manager at HP Software. She has been with HP for 13 years in R&D, product management and product marketing. At HP she is responsible for go to market and enablement of the HP IT Performance Suite products.
  • A 25+ year veteran of HP, Yvonne is currently a Senior Product Manager of HP ITSM software including HP Service Anywhere and HP Service Manager. Over the years, Yvonne has had factory and field roles in several different HP businesses, including HP Software, HP Enterprise Services, HP Support, and HP Imaging and Printing Group. Yvonne has been masters certified in ITIL for over 10 years and was co-author of the original HP IT Service Management (ITSM) Reference Model and Primers.