Location, Location, Location – Part 2

In my last post we took a look at the lineage of today's CMS efforts. The two major lineages I cited were ITIL v2 CMDB initiatives and dependency mapping initiatives focused on application architecture reengineering. A modern CMS initiative unifies these heritages from a technology standpoint: it brings together the aspirations of an ITIL v2 CMDB initiative, but in a technology form factor that is far more practical given the complexity and scale of any modern enterprise.


What I mean is that a federated CMDB acting as a bridge to other CMDBs and to other management data repositories (MDRs) is a much more practical approach than trying to create a single monolithic CMDB. Consuming applications in turn leverage this federated CMDB for access to configuration item information, business service context, and any information across IT that can be tied to these two elements.
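To make the federation idea concrete, here is a minimal sketch in Python. The class, data-domain names, and sample MDRs (`asset_db`, `monitoring`) are all hypothetical illustrations, not any vendor's API: the point is simply that the federated CMDB routes lookups to the repositories that own each slice of data rather than copying everything into one store.

```python
# Hypothetical sketch: a federated CMDB bridging several MDRs.

class FederatedCMDB:
    """Assembles a configuration item (CI) view by delegating to
    registered management data repositories (MDRs) instead of
    maintaining one monolithic copy of all data."""

    def __init__(self):
        self._mdrs = {}  # data domain -> lookup function

    def register_mdr(self, domain, lookup_fn):
        self._mdrs[domain] = lookup_fn

    def get_ci(self, ci_id):
        # Ask each federated MDR for its slice of the CI's data.
        record = {"id": ci_id}
        for domain, lookup in self._mdrs.items():
            record[domain] = lookup(ci_id)
        return record


# Hypothetical MDRs: each remains authoritative for its own domain.
asset_db = {"srv-001": {"owner": "finance", "location": "dc-east"}}
monitoring = {"srv-001": {"status": "up", "cpu_pct": 42}}

cmdb = FederatedCMDB()
cmdb.register_mdr("asset", lambda ci: asset_db.get(ci, {}))
cmdb.register_mdr("monitoring", lambda ci: monitoring.get(ci, {}))

ci = cmdb.get_ci("srv-001")
```

A consuming application sees one unified CI record, while the asset and monitoring systems each keep ownership of their data.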


To be effective, a modern CMS must also embrace automated discovery and dependency mapping. The sheer amount of gear and the complexity of today's multi-tier, shared-component application stacks make it impractical to support most IT operational functions without them. The old approach of leveraging tribal knowledge and manual processes simply doesn't scale: it results in a data layer far too incomplete and far too inaccurate to support the data integrity requirements of the IT processes that need to consume it.
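One reason discovered dependency data matters so much is impact analysis. As a sketch (with entirely hypothetical component names), given a dependency graph produced by automated discovery, a consuming process can answer "which business services are affected if this server fails?" with a simple graph traversal, something tribal knowledge cannot do reliably at scale:

```python
# Hypothetical discovered dependency graph: dependent -> dependencies.
from collections import deque

deps = {
    "billing-service": {"app-tier-1", "shared-db"},
    "crm-service": {"app-tier-2", "shared-db"},
    "app-tier-1": {"srv-001"},
    "app-tier-2": {"srv-002"},
    "shared-db": {"srv-003"},
}

def impacted_by(failed_component):
    """Walk the graph upward to find everything that transitively
    depends on the failed component."""
    # Invert the edges: component -> its direct dependents.
    dependents = {}
    for node, targets in deps.items():
        for t in targets:
            dependents.setdefault(t, set()).add(node)
    impacted, queue = set(), deque([failed_component])
    while queue:
        current = queue.popleft()
        for d in dependents.get(current, ()):
            if d not in impacted:
                impacted.add(d)
                queue.append(d)
    return impacted
```

Here a failure of `srv-003`, the server behind the shared database, ripples up to both business services; a failure of `srv-001` affects only billing. With shared components this fan-out is exactly what manual spreadsheets get wrong.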


So where are we today?  The technology platform to effectively implement a modern CMS exists right now; of that I have no doubt. It is not perfect, but it is very capable. Yet if CMS initiatives are not to go the way of prior CMDB and dependency mapping efforts, more than technology is required. What is required is a focus on use cases first, meaning a strong, crisp set of data requirements to support one or more critical IT processes. Once these are well understood, you can focus on what data is needed and where it will come from. Sponsorship will also be stronger when initiatives start from well-defined consuming processes rather than from data gathering alone.


The requirements related to data sources should be fairly fine-grained, meaning you must understand requirements down to the data attribute level. Saying that server data will come from solution "Y" is not enough: the server data consumed by a specific IT process may require that your understanding of what a server is encompass data from many sources. The bottom line remains the same: "use cases, use cases, use cases".


Let me know what your experience has been with dependency mapping, CMDB, or CMS initiatives at your company. My colleagues and I would love to hear from you, but even more important, I know others working on similar initiatives at other companies would too.

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation