IT Service Management Blog
Follow information regarding IT Service Management via this blog.

Searching for the common ITSM good


I had a wise manager who used to refer to the service desk as THE communication hub. It is interesting to revisit this theme and role in light of the changes going on in technology at home and at work. I submit that there are some key interwoven themes that will be foundational to the emerging new style of IT. And with the decentralization and consumerization of IT, communication is more important than ever.

 

Keep reading to find out how communication is key within your organization.

Tags: ITIL | ITSM | SPM

3 Keys to Improving your Change and Configuration Management Webinar

Change and configuration management continue to be challenging for many organizations. The risk of self-inflicted service outages is ever present and captures the attention of the ever-watchful IT auditors.

 


Brian Miller and I will present some reasonable suggestions for improving your change and configuration management processes on Wednesday, June 12 at 8am Eastern. Replays are typically available almost immediately afterwards.

 

To register or attend, simply visit http://www.brighttalk.com/webcast/534/74205

Improving Service Quality by the Numbers - 11, 10, and 9

Over the past month (dating back to the itSMF FUSION event in Dallas), a pair of conversations has been rattling around in my mind. The first one starts off with a set of numbers: 11, 10, and 9. 11 is the number of major incidents experienced by a Fortune 500 company in the past year.

Handle configuration changes effectively - technical white paper

Whether you follow ITIL® or other industry best practices or standards, configuration management is a fundamental process enabling the management and improvement of other ITSM processes. It provides an accurate picture of your environment and ensures the integrity of all involved data along the entire lifecycle of your services. But how can you get your configuration data under control?

It's Troux, CMDB and Enterprise Architecture Play Nice! Part 2, EA as a CMS Consumer

ITSM, CMDB, PPM, EA, CMS.  Not just SEO, but ITIL!  DCT!  Clooouuuuud……..

What does it all mean for the Enterprise Architect who spends a third of their time  gathering business context data?  A provider of even a small fraction of that amount would be, like, some kind of EA whisperer.  Really, what does business context mean to an enterprise architect?   What would be the value of CMS data to an EA?  What?

Pedal to the metal - HP widens its leadership gap in ITIL v3 certification


HP Service Manager has taken a big leap forward in its ITIL v3 leadership position.  HP Service Manager 7.1x is now certified by OGC (the creators of ITIL) at the Gold Level in nine ITIL v3 processes, three times the lead of the next closest vendor’s product.   


 


The Gold Level certification is hard to achieve (see my previous blog entry to get more detail on that).  It means that multiple customers provided documented proof to the auditors that they are using HP Service Manager to automate their ITIL v3 processes. 


 


Here is a list of the ITIL processes implemented within HP Service Manager that have been certified at the Gold Level:


 


1. Incident Management


2. Problem Management


3. Change Management


4. Service Asset & Configuration Management


5. Service Catalog Management


6. Request Fulfillment


7. Service Level Management


8. Knowledge Management


9. Service Portfolio Management


 


Customers who understand the value of fully embedded ITIL v3 best practices have it easier than ever before.  Now that HP Service Manager is the clear leader on OGC's certification scorecard it will certainly help many of these customers determine, without a doubt, that HP Service Manager is their best choice to help run IT like a business.


 


 



 

Process Governance: The Dark Side of CMS

Ditch the suit, get your boots on, and bring a good flashlight!   Today we're journeying into the center of the ITSM Universe, Configuration Management.  What will we find?


 


Planning or designing a CMS or CMDB? Beware deceptively well-lit, short and simple paths to success. They're illusions. The actual path to substantial ITSM ROI involves lots of crawling around in tight spaces and heavy organizational and technical lifting. And there IS a wrong and a right way to do it. There are all kinds of ways to fail, but only a few ways to succeed. I'm thinking not "tour guide" but "expedition leader". ITSM is more Lechuguilla than Carlsbad.


 


Like caving, CMS solution architecture can be, well, dark.  For example:  "should the _____ process interact with the _____ process through ____ or _____?"  Plug in your question of the day.  There's a million of 'em: Change Management,  Incident Management, the CMDB, directly.


 


Adding process and automation to an IT organization can be as challenging as in any other part of a business. Possibly more so, because that sort of thing IS our business and we don't like our business getting messed with. But I've been covering that in my last three posts. Today's hypothesis:


 


After a dark and strenuous trip, you arrive to find ITIL is hollow in the center. The missing part is the data: how to handle the configuration data itself, and all the process and governance around the data and the information about the data (for example, how accurate or timely it is). You have to think about what to fill this gap with, and about the operational ebb and flow of the data, from the beginning. Ultimately, it's as much about the data as the process. But ITIL is for the most part dark on this, and it's also the least-filled-in part all around. The documentation, the stakeholders, the technicians, the consultants don't focus on it. The process and governance work is left to evolve organically, or for you, the accountable party, to figure out. While you must ultimately craft your own processes, it's still not nice of the consultants to leave anything undocumented, especially any custom work like integrations or model extensions. But that's not what I'm writing about. Back to the gap.


 


The gap is the discipline of constantly measuring all the onboarding, as well as the operational, activities related to the data against the desired state. Not just a successful installation and demonstration, and not even just usage and operation, but successful operation, where the CMDB measurably reduced risk and cost and improved the reliability of the configuration data provided to IT's consumers. And I presuppose a well-defined use case and an achievable, understandable, quantifiable ROI here. If the data is sloppy going in, your ROI suffers and you see sluggish or no ROI up in the business KPIs.


 


I'm not talking about Asset Management or SACM! Those processes do things like call for specific ownership of certain CI types, or designate the specific provider of certain CI types. This is not what I'm talking about! I'm talking about the layer that sits between ALL the processes and the CMS, the governance part, the thing responsible for evaluating the data and its fitness for consumption.


 


If CMDB is about plumbing, CMS is about water quality.  


 


So you have a working  CMDB, a building full of experts ready to federate, a bag full of use cases, now what?  Oh yeah, maybe you already did some discovery.  You'll probably have to undo some of that.  Rewind for a minute.


 


As you go through the presales demo, your Proof of Concept, your testing and lab work, and finally, your move to production, can you answer the following questions:



  • Who is the intended consumer of the data?  (Hint:  You better have at least one)

  • Who owns each CI and attribute?  (Hint:  Someone needs to)

  • What provider conflicts exist?  For example, between the discovery and the asset or service or network management systems? (Hint:  There shouldn't be conflicts in the finished CMS.  Not a misprint.)

  • Who does the business see as accountable for this configuration data?  (Hint:  not the Alpha Geek's spreadsheet)

  • Who decides what data is provided to what consumer?  (Hint:  It shouldn't be the consumer or the provider.  No really, not a misprint.)
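Taken together, the answers to these questions can be captured as data and checked mechanically. Here is a minimal sketch in Python; the CI classes, field names, and rules are my own illustrative assumptions, not any product's schema:

```python
# Sketch: record onboarding answers per CI class and flag gaps/conflicts.
# All class names, fields, and rules are illustrative assumptions.

RECORDS = {
    "server": {
        "consumers": ["change_management"],
        "owner": "server_ops",
        "providers": ["discovery", "asset_db"],  # two providers -> conflict to resolve
        "accountable": "infrastructure_director",
    },
    "network_device": {
        "consumers": [],                          # no consumer -> why onboard it?
        "owner": None,                            # no owner -> gap
        "providers": ["network_mgmt"],
        "accountable": "network_director",
    },
}

def onboarding_gaps(records):
    """Return (ci_class, issue) pairs based on the questions above."""
    issues = []
    for ci, r in records.items():
        if not r["consumers"]:
            issues.append((ci, "no intended consumer"))
        if not r["owner"]:
            issues.append((ci, "no owner for CIs/attributes"))
        if len(r["providers"]) > 1:
            issues.append((ci, "unresolved provider conflict"))
        if not r["accountable"]:
            issues.append((ci, "no accountable party"))
    return issues

if __name__ == "__main__":
    for ci, issue in onboarding_gaps(RECORDS):
        print(f"{ci}: {issue}")
```

The point isn't the code; it's that every "hint" above becomes a testable rule once the answers are written down somewhere other than someone's head.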


 


The answers reveal your config data onboarding processes, or lack thereof.


 


Here is the dark space in the center of ITIL: you need a good governance model to build the processes around onboarding and managing configuration data, which, when followed, will result in the highest quality data possible provided to the consumer in the most efficient and secure manner possible. "Data Stewardship" is a good way of thinking about it; here is a gentleman who understands.


 


By accounting for the authoritative nature of the data, and the entitlement of the consumer, one can construct such a model which can guide those creating processes for using a CMS, during and after CMDB and CMS implementation.


 


Such a model must minimally account for three things:  the consumer, the provider, and the owner.  The model must provide a few basic tenets which reach into every aspect of how configuration data is handled by the CMS.  For example, how provider "conflict" is handled, how ownership of data is established, and why consumers must be understood vs. merely serviced.  This is the missing gap in the center of ITIL.  I hope. 
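To make the three roles concrete, here is a minimal sketch of how such a model might be expressed in code. The precedence rule, entitlement check, and attribute names are my own illustration for this post, not the COP model's actual tenets:

```python
from dataclasses import dataclass, field

# Illustrative consumer/owner/provider-style sketch. The precedence rule
# and entitlement check below are assumptions for demonstration only.

@dataclass
class Attribute:
    name: str
    owner: str                              # who is accountable for the value
    provider_precedence: list = field(default_factory=list)  # first provider wins

def resolve(attribute, candidate_values):
    """Pick one value when multiple providers report the same attribute.

    candidate_values: dict of provider -> value. The finished CMS should
    have no conflicts, so precedence settles them at onboarding time.
    """
    for provider in attribute.provider_precedence:
        if provider in candidate_values:
            return provider, candidate_values[provider]
    raise LookupError(f"no authorized provider supplied {attribute.name}")

def entitled(consumer, attribute, entitlements):
    """The owner (not the consumer or provider) decides who sees what."""
    return consumer in entitlements.get(attribute.name, set())

# Usage: a serial number reported by two sources; precedence settles it.
serial = Attribute("serial_number", owner="asset_mgmt",
                   provider_precedence=["discovery", "manual_entry"])
provider, value = resolve(serial, {"manual_entry": "OLD-123",
                                   "discovery": "SN-998877"})
```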


 


I and a bunch of my friends have developed such a model, and it happens to be called the Consumer/Owner/Provider Model, or just the COP model for short.  In my next posts, I'll be introducing some of the major tenets of the COP model and discussing each one in  more detail.  Perhaps you think of some of these tenets in other ways or using other language, but I want to see how well this relates to you. 


 


I hope I've whetted your appetite and that you're full of questions.  Feel free to ask them here or comment with a reply.  Thanks!


 

CMS Use Cases On Parade At HP Software Universe

In my last few posts I talked about the need to focus on use cases.  Over many years I have learned that the number one thing people want to hear about is as follows:  "what is my peer down the street (or across the ocean) doing about similar problems".


Being the track manager for the Configuration Management System (CMS) track at HP Software Universe in Washington D.C. (June 2010), I just completed scheduling a number of great presentations that represent real world use cases and implementation outcomes.   The CMS track at Universe this year highlights a number of great case studies of what real customers - facing real challenges - at very  large and complex companies - are doing around CMS related initiatives.  What follows is a quick summary of customer centric use cases that will be on stage for the CMS track at Universe this summer.


Turkcell, one of the largest mobile phone companies in Europe, will be on stage addressing how they are creating an integrated IT environment capable of supporting a broad range of IT processes including Asset Management, Configuration Management, Change Management and Problem Management.  Elements being integrated include IBM Maximo, HP Business Service Management (BSM) solutions, the HP Universal CMDB and HP Discovery and Dependency Mapping.


An HP partner, Linium L.L.C., will be walking through the work they have done for a major retailer in the US.  The focus of this case study is around the implementation of a Change and Release Management solution that brought together HP Server Automation, HP Release Control, HP Service Manager and the HP Universal CMDB.  


Melillo Consulting is working with a large company to integrate several of our BSM solutions with our HP Client Automation Center to implement an Incident, Change, Problem and Request Management solution.


Elegasi, another partner, is working with a large Financial Services company to help them effectively manage the cost of licenses associated with virtualized infrastructure.   The session will highlight how Discovery and Dependency Mapping, the Universal CMDB, and HP Asset Manager can work together to help address license compliance and cost management for virtualized infrastructures.


Finally, our HP Professional Services team is implementing a Service Asset and Configuration Management solution for a major Telecom company.  They'll be addressing the work they have done to integrate UCMDB and Asset Manager and talking about where they are going next in terms of integrating Service Manager. 


When I consider all of the sessions being put together across other tracks as well - I know that there are many more customer or partner delivered sessions that focus on integrated solutions.  In many of these, the UCMDB is a central component of the solution that will be represented on stage.  If you are interested in going to Universe and have not yet registered, I invite  you to get $100 off the entry price by entering the promotion code INSIDER when you register.  Feel free to pass this promotion code on to others.  Hope to see you in Washington this summer.  Cheers!

Taming (if not slaying) one of IT’s many Medusas

My third grade son and I have been exploring Greek mythology lately. We’ve been reading about the Gods of Olympus. This newfound interest was triggered by my son having recently listened to “The Lightning Thief” on audio book - the first of the “Percy Jackson and the Olympians” series. If you aren’t familiar with Medusa, she is a monster in female form whose hair is made of dozens of horrifying snakes. The hair-filled-with-snakes idea reminded me of a very thorny problem that IT deals with - that of addressing compliance related issues. The more I thought about this, the more I realized that almost any problem I have ever come across in IT reminds me of Medusa, but this area in particular stands out in my mind.


 


In my last post I talked about the importance of use cases. In this post I want to focus on a trend I’ve seen that is often the genesis of a Configuration Management System (CMS) initiative – that of addressing compliance related reporting. Over the years I have dealt off and on with the compliance problem, and it stands out in my mind because of the duality that permeates the issue. Compliance has this quality of being everywhere and nowhere at the same time. Let me explain. When you think about the roles in IT, almost every group has some level of responsibility for supporting compliance, and yet responsibility for what must be done is highly diffused across the organization. This is true even if the organization has (and most now do have) a Chief Compliance Officer. From a product standpoint, every product seems able to highlight itself as a solution, but no one offering by itself really gets you very far.


 


So, having acknowledged upfront that no single product can be all things to all compliance issues, I have been working in the CMS area long enough to see a recurring trend: using Discovery and Dependency Mapping (DDM) as a way of lightening the burden of compliance reporting in highly regulated industries like Financial Services, Health Care and Utilities. In each of these industries, I know of at least one (sometimes more) large and complex organization, with massive reporting requirements, that is using DDM to meet requirements around the need to attest and verify that strong controls are in place to prevent unauthorized changes to mission critical infrastructure. For many organizations, addressing these kinds of compliance requirements is a hugely time consuming and costly endeavor from the standpoint of IT hours invested.


 


I will start with a publicly available story, that of FICO. Known to most in the US for their credit scoring service, FICO used DDM as a key element in a solution which also included HP Service Manager. FICO talks about their solution from the standpoint of incident, change and problem management, but addressing compliance was certainly a big motivator for them as well. Operating in the highly regulated financial services industry, audits are a way of life for FICO. Matt Dixon, Director of IT Service Management at FICO, has said that with their solution they were able to go from taking in the neighborhood of a day to address audit requests to being able to do so in a matter of minutes. Given that something like an audit a day is what FICO deals with, this is no small deal.


 


A health care company that I know provides another good example. This company had built a compliance reporting database where they had integrated close to 100 data sources. They had further built their own reconciliation logic to support data normalization. The development effort and the ongoing care and feeding associated with this system were enormous. The company launched an initiative to rationalize data sources, implement automated discovery and dependency mapping, and replace this home grown reconciliation database and logic with a vendor supported solution (they chose HP).
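Home-grown reconciliation logic of this kind is conceptually simple but expensive to keep alive as sources change. A toy version, with invented field names and a single normalization rule, might look like this:

```python
# Toy reconciliation: merge records from multiple sources into one record
# per device, keyed on a normalized serial number. Real reconciliation
# engines also handle weighting, partial keys, and type-specific rules;
# this sketch only shows why the logic multiplies with every new source.

def normalize_serial(raw):
    """Sources disagree on case, dashes, and whitespace; canonicalize."""
    return raw.strip().upper().replace("-", "")

def reconcile(sources):
    """sources: list of record lists; each record is a dict with 'serial'."""
    merged = {}
    for source in sources:
        for record in source:
            key = normalize_serial(record["serial"])
            merged.setdefault(key, {}).update(
                {k: v for k, v in record.items() if k != "serial"})
    return merged

# Two sources describing the same physical server, keyed differently:
asset_db = [{"serial": "ab-123", "cost_center": "CC42"}]
discovery = [{"serial": "AB123 ", "os": "RHEL 5", "ip": "10.0.0.7"}]
combined = reconcile([asset_db, discovery])
# combined["AB123"] now carries cost_center, os, and ip together
```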


 


Turns out that in their data rationalization effort this company found that something like 80% of the data held in their source systems was redundant at some level across the organization. This understanding helped them move forward and develop a program around retiring systems and moving to a data leverage model using a CMS style approach. By the way, I do not feel that what this company found in terms of redundant data would be much different if we ran the same exercise at most large companies I deal with.


 


Another large company I know involved in the highly regulated utility sector went through a very similar process.  Like FICO this company is pursuing a fairly broad agenda around Incident, Change, Configuration and Release management but addressing compliance related reporting requirements was their initial priority.  Like FICO this company has been able to substantially reduce the amount of time invested in compliance while radically shortening the time it takes to produce compliance related reporting.


 


So while discovery and dependency mapping is by no means a panacea when it comes to compliance issues, it can help an organization meet its commitments relative to compliance reporting. At the heart of many compliance related requirements is the need to attest and prove that you have tight controls in place around how your infrastructure is managed. Transparency and continuous visibility into the configurations in your organization are fundamental to addressing this requirement, and a CMS can be a key element in doing so.


 


 

How far will your tires take you?


When you are getting ready for a long drive, you make sure your car is in good working order. One of the things you check is the tires. After all, you won’t get far without tires, and you don’t want to get stuck in the middle of nowhere because you blew a tire and have no spare, or have an accident because your tires were bald and the car skidded in the rain.


Discovery is like tires for different IT solutions. Whether you are talking about managing end points, implementing a CMDB or an Asset Management solution, you need to be able to discover the environment relevant to your needs.


We tend to focus today on “higher level” solutions.  CMDB and CMS are hot!  Software Asset Management is up there as well.  Everyone spends lots of time selecting and evaluating the right products in those areas.  We make sure they can handle the size of our environment, have the functions that we need to assist in our daily jobs.  That’s great – choosing the right product is paramount – I recall working with one of the large IT industry analyst companies a few years ago.  They rated the product I was selling at the time as the best in the market.  But, when we tried to get them to adopt it internally, they were very quick to point out that what is best in the market, may not fit their specific needs.  Yes, they implemented our product in the end, but the point was made – choose products that meet your needs, not the ones that are marketed the most or evaluated as the best.  But I digress…


Let me focus on Asset Management, since that is what I am most familiar with these days. You evaluate Asset Management products and choose the best one (of course, I hope the winning product is HP Asset Manager). You choose the right product for asset management, but how do you populate inventory data? Many customers simply choose to use existing products for feeding data to the Asset Manager product. Why? Because they are already deployed and, well, data is data, right?


If you buy a car, you make sure it looks good, feels comfortable and handles well. When you get into an IT solution, like Asset Management, you pick the right product that fits your needs. But when it comes to data collection, many people say: I will just use whatever I have. It’s cheaper, and data is data. Except that in many cases data has to be transformed into information. And that will cost time, effort and money. It will require ongoing maintenance as the environment changes. Do you want to maintain a custom solution? In the majority of situations IT does not want to have a “custom” implementation of any product anymore. Do you just put on whatever tires are cheapest? Would you put 14 inch tires on a Hummer? No. And you shouldn’t pick the cheapest discovery tool either. You should make sure it meets your needs, and one of the criteria must be “does it provide the data the consuming products need” and “is the data in a format that is easily consumed”. It is true that you will likely end up with multiple tools that collect overlapping data. It will cost you some storage, and it will cost some resources to collect and transfer the data to its destination. But the cost of the overlap should be quite small. And the value of the right data in the right format is that the overall solution will work as intended and required.
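The "data has to be transformed into information" cost is easy to underestimate. Here is a tiny sketch of the kind of adapter a team ends up writing and maintaining when a discovery feed doesn't match what the consuming product expects; every field name here is invented for illustration:

```python
# Sketch of the "transformation tax": a raw discovery row whose shape
# and units don't match the asset record the consumer needs.
# All field names and formats are invented assumptions.

raw_discovery_row = {
    "HOSTNAME": "web01.corp.example",
    "MEM_KB": "16777216",            # kilobytes, as a string
    "OS_STR": "Microsoft(R) Windows(R) Server 2008",
}

def to_asset_record(row):
    """The adapter the customer writes, then maintains forever as the
    environment (and the feed's format) changes."""
    return {
        "name": row["HOSTNAME"].split(".")[0],
        "memory_gb": int(row["MEM_KB"]) // (1024 * 1024),
        "os_family": "Windows" if "Windows" in row["OS_STR"] else "Other",
    }

record = to_asset_record(raw_discovery_row)
```

Each such mapping is trivial on its own; multiplied across dozens of attributes and feeds, it becomes exactly the "custom implementation" nobody wants to own.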


If you buy a Hummer, don’t skimp on the tires.  Make sure the discovery product you use delivers the data you need with little or no customization.  It will be safer, more comfortable and cheaper in the long run.


 


Location, Location, Location – Part 2

In my last post we took a look at the lineage of today’s CMS efforts.  The two major lineages I cited were ITIL v2 CMDB initiatives and the other was dependency mapping initiatives focused on application architecture reengineering.   A modern CMS initiative unifies these heritages from a technology standpoint.  It brings together the aspirations of an ITIL v2 CMDB initiative but does so in a technology form factor that is much more practical given the complexity and scale of any modern enterprise. 


What I mean to say is that the approach of having a federated CMDB acting as the bridge to other CMDBs and to other management data repositories (MDRs)  is a much more practical approach than focusing on the creation of a single monolithic CMDB.  Consuming applications in turn leverage this federated CMDB for access to configuration item information, business service context and for access to any information across IT that can be tied to these two elements.  
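As an illustration of the federated pattern described above, here is a toy sketch in Python. The MDR names, interfaces, and routing rule are invented for demonstration and are not the actual API of any federated CMDB product:

```python
# Sketch of federation: the CMDB keeps core CI identity locally and
# routes requests for other attributes to the owning MDR at query time,
# instead of copying everything into one monolithic database.
# Class names and interfaces are illustrative assumptions.

class MDR:
    """A management data repository holding some attributes itself."""
    def __init__(self, data):
        self._data = data                 # ci_id -> {attribute: value}
    def fetch(self, ci_id, attribute):
        return self._data.get(ci_id, {}).get(attribute)

class FederatedCMDB:
    def __init__(self):
        self.core = {}    # ci_id -> identity attributes kept locally
        self.routes = {}  # attribute -> MDR that owns it
    def register(self, attribute, mdr):
        self.routes[attribute] = mdr
    def get(self, ci_id, attribute):
        # Identity data is answered locally; everything else is federated.
        if attribute in self.core.get(ci_id, {}):
            return self.core[ci_id][attribute]
        mdr = self.routes.get(attribute)
        return mdr.fetch(ci_id, attribute) if mdr else None

cmdb = FederatedCMDB()
cmdb.core["srv-1"] = {"hostname": "web01"}
cmdb.register("open_incidents", MDR({"srv-1": {"open_incidents": 2}}))
# hostname is answered locally; open_incidents comes from the service desk MDR
```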


To be effective a modern CMS must also embrace automated discovery and dependency mapping.  The huge amount of gear and the complexity in today’s multi-tier and shared component application stacks make it totally impractical to try to support most IT operational functions without automated discovery and dependency mapping.  The old approach of leveraging tribal knowledge and manual processes just doesn’t scale.  This approach results in a data layer that is far too incomplete and far too inaccurate to support the data integrity requirements of the IT processes that need to consume this data.


So where are we today? The technology platform to effectively implement a modern CMS exists right now. Of that I have no doubt. It is not perfect, but it is very, very capable. But if CMS initiatives are not to go the way of prior CMDB and Dependency Mapping efforts, more than technology is required. What is required is a focus on use cases first, meaning a strong and crisp set of data requirements to support one or more critical IT processes. Once this is well understood you can focus on what data is needed and where that data will come from. Sponsorship will also be stronger when initiatives start with well-defined consuming processes than when they start from the data gathering side only.


The requirements related to data sources should be fairly fine grained - meaning you must understand requirements down to a data attribute level.  Saying that Server data will come from solution “Y” is not enough since the data related to a server that is consumed by a specific IT process might require that your understanding of what a server is encompass data from many data sources.  The bottom line remains the same: “use cases, use cases, use cases”.   
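To make "attribute-level requirements" concrete, a sourcing map for a single CI type can be sketched as data. Every source and attribute name below is hypothetical:

```python
# Attribute-level sourcing for the "server" CI type. Each attribute may
# have a different authoritative source; all names here are hypothetical.

SERVER_ATTRIBUTE_SOURCES = {
    "hostname":      "discovery",
    "serial_number": "discovery",
    "cost_center":   "asset_manager",
    "warranty_end":  "asset_manager",
    "patch_level":   "server_automation",
    "owner_group":   "service_desk",
}

def sources_needed(attributes):
    """Which systems must be integrated to satisfy a consumer's need?"""
    return sorted({SERVER_ATTRIBUTE_SOURCES[a] for a in attributes})

# A change-management consumer needing just these four attributes already
# requires three different data sources for a single "server" record.
needed = sources_needed(["hostname", "serial_number",
                         "patch_level", "cost_center"])
```

Writing the map down this way makes "Server data comes from solution Y" visibly insufficient the moment two attributes point at different systems.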


Let me know what your experience has been addressing dependency mapping, CMDB or CMS initiatives at your company.  I and my colleagues would love to hear from you but even more important, I know others working on similar initiatives at other companies would love to hear from you.

When Thinking CMS remember “Location, Location, Location”

The other day I presented to a customer that had purchased HP Discovery and Dependency Mapping software.  This customer was interested in understanding HP’s direction relative to the concept of a Configuration Management System (CMS).  My discussion with this customer focused on how HP was addressing the data needs of IT operational domains ranging from application performance and availability management, to configuration and change management to IT process automation for server and network elements.  From a product perspective HP’s focus in this area has been and remains providing a platform that delivers configuration item data, service context and federated data that can be related to those two items to consuming solutions across IT.


Our conversation eventually and rather inevitably turned to what was the best strategy to achieve such a grand vision.  The answer is surprisingly simple at one level yet remarkably difficult to do in practice.  Like the old adage, location, location, location used to talk about buying real estate, the answer to building a comprehensive CMS that works as promised and stands the test of time requires a laser focus on use cases, use cases, use cases.   


I’ll return to this idea after a brief detour to look at the origin of today’s CMS Initiatives and how many of those early ancestors went wrong.


Origins of Today’s CMS Initiatives


Modern CMS initiatives have two main lineages.  The first and best known are CMDB efforts that were launched in the wake of ITIL v2.  Many if not most of these early efforts failed (or at least fell far short of expectations).  The primary reason was a lack of a crisp focus on what problems were going to be solved and in what order.  Companies sought to create a “master” database with all configurations representing all infrastructures across the whole of the enterprise.  While the CMDB technologies used in these early efforts were immature and had some  technical limitations, most of these efforts didn’t fail because of technology.  They failed due to a lack of clarity around what the end game was.


The second major ancestor of today’s CMS efforts is dependency mapping. Many of the early adopters of dependency mapping embraced this technology for reasons having little to do with how these capabilities are primarily used today: to support ongoing IT operations. Instead, most of the early adopters of this technology were interested in dependency mapping as a means of supporting some form of application infrastructure reengineering.


Why? Well, during periods of rapid business expansion the IT infrastructure at many companies had grown substantially, and no one had a handle on what existed and how it worked together to deliver IT services. As a result many companies found themselves unable to effectively take on new IT initiatives focused on reducing the infrastructure footprint, reining in runaway server and network admin costs, or taking advantage of new virtualization capabilities. These organizations lacked a clear understanding of the starting point. As a result many of them embraced dependency mapping as a means of generating this understanding.


For these companies using this information for ongoing management to support application performance and availability, change management, or IT process automation was not the focus.  As a result little emphasis was placed on consuming IT processes and the integrations with the applications that support these processes.  Like early failed CMDB efforts many companies stumbled when they first tried to apply dependency mapping to the needs of ongoing IT operations.  Like early CMDB efforts the reason these initiatives failed (or at least did not deliver as much value as expected) was that they lacked focus.  


Many companies when first employing dependency mapping would attempt to discover everything before having clear use cases of what data was needed to support what IT processes.   Since there were no clear consumers for the data many of these efforts either lacked or failed to sustain sponsorship and consequently withered on the vine.   In my next post I’ll take a look at how these two independent branches have come together to be the foundation of the current crop of enterprise CMS initiatives and how these initiatives face the same challenge that plagued their technology antecedents.

Who Needs a CMDB, Anyway?

 I am disturbed.


Particularly about the amount of hype that surrounds the discipline and implementing technologies of configuration management. There is much more hype here than there should be. Shouldn't it be an unexciting, mature, ubiquitous process baked into IT already? Apparently not, if the volume and tone of blogs like the IT Skeptic are any indicator. Great stuff there.


 


Who needs a CMDB?  Anybody who will pay for one?  The Fortune 500?  Anyone with more than (pick a number) CIs?  Someone with a use case?  Somebody with a  big  honkin' reconciliation engine?  One could imagine  choosing any of these answers depending on one's background, ergo the hype.  


 


If you are a vendor or consultant, you may ask yourself, "Is this a rhetorical question?" It's not. Some feel CMDBs generally have not provided the expected ROI and tend to discount their value, relegating them to toys for the rich or unproven gadgets, insinuating a poor choice of budget investment. The good news: people care deeply about finding the truth, on both sides.


 


What we call configuration management today has traditionally been implemented in disparate technologies intended to do something else, but which each did a piece of it: auto-discovery, asset/inventory/service management, application mapping. These apps all do (or did) some configuration-management-like things, with emphasis placed on whatever the product you were familiar with did. It follows that most people's opinions stem from formative experiences with technology, scale and scalability, the kind of business demand present, and industry-specific drivers such as security or regulations and standards.


 


This converging-but-still-disparate landscape has created a kind of configuration management tower of Babel:  how can we build anything if we don't speak the same language, agree on what a CMDB should be and do and for whom?


 


Let's not forget a primary CMDB contender: "no technology". Configuration management in sufficiently small shops is done by people, in people's heads. Where the limits of size and complexity lie, beyond which a configuration management tool is required, is very controversial. There are as many variables and forms for such an equation as there are human characteristics related to the competency.


 


However, while people (versus a tool) can and do perform configuration management, there is almost no shop whose business has not suffered to some degree at the hands of human error. The corollary is that you won't see value in a CMDB until you've felt the wrong kind of heat. If you've ever experienced costly downtime because of an outage caused by miscommunication in change management, it may be a little easier to accept that you might have saved your business some pain if you'd had something to help your change management left hand know what the right hand was doing.


 


For example: configuration management has always been done by Bill the Alpha Geek, and all the configuration data for the entire shop exists in Bill's head. No serious outage has ever been traced to Bill being out one day, or not knowing a particular fact, or not having a good enough memory to never need to write anything down or put it in a computer. Bill is not going to be a big fan of a CMDB.


 


Now let's consider another persona: Bob, the Operations Director and Bill's manager. It's a medium-sized shop, well under the Fortune 1000, that has experienced rapid growth. The current application infrastructure resembles a very complex, finely-tuned bowl of spaghetti, understood by only a few people in the company. Historically Bob has depended on Bill for all the answers: We need to upgrade our ERP app; which servers are part of ERP? Bill knows. What other apps and services does ERP depend on? Ask Bill. If I unplug the CIO's desktop, will ERP continue running? Check with Bill. If our second-hand Cisco 2600 finally craps out, will our ERP still be OK? Riiiight.
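Bob's questions are, at heart, dependency-graph queries. Here is a minimal sketch (all CI names hypothetical) of how even a toy CMDB answers the "what breaks if this dies?" question that Bill currently answers from memory:

```python
# Toy CMDB: each CI maps to the set of CIs it depends on (hypothetical data).
depends_on = {
    "ERP": {"app-server-1", "app-server-2", "oracle-db"},
    "oracle-db": {"db-server-1", "san-array"},
    "app-server-1": {"cisco-2600"},
    "app-server-2": {"cisco-2600"},
}

def impacted_by(ci):
    """Return every CI that directly or transitively depends on `ci`."""
    hit = set()
    for parent, children in depends_on.items():
        if ci in children:
            hit.add(parent)
            hit |= impacted_by(parent)  # walk up the dependency chain
    return hit

# "If our second-hand Cisco 2600 finally craps out, will ERP still be OK?"
print(impacted_by("cisco-2600"))  # ERP is in the result, so no, it won't.
```

A real CMDB adds reconciliation, federation, and access control on top, but the core value is exactly this: the impact question gets answered by a query instead of by Bill.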


 


So if you're Bill, Bob is skipping without a rope.  And if you're Bob, Bill is empire-building.


 


But today, another server was added, so now there are 101 servers, pushing Bill over the edge.  There is now a non-zero chance of Bill forgetting something.  It's like the old pickup sticks game with a couple thousand sticks.
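The pickup-sticks comparison is not just rhetoric: with n CIs there are n(n-1)/2 potential pairwise relationships to keep straight, so each new server adds as many new relationships as there are existing servers. A quick back-of-envelope check:

```python
# Potential pairwise relationships among n configuration items: n choose 2.
def pairs(n):
    return n * (n - 1) // 2

print(pairs(100))  # 4950 potential relationships at 100 servers
print(pairs(101))  # 5050 -- server number 101 adds 100 new ones
```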


 


And you're telling me this shop can do its configuration management with people? Maybe with MENSA people, or Marilyn vos Savant or somebody. But sooner or later Bill is going to miss something, and your online retail website is going to go down, or maybe you can't collect money or manufacture your product for a day. IT happens.


 


But isn't there a simpler solution than one of those hypey, expensive CMDBs?


 


Herein lies the distinction based on those formative experiences I mentioned earlier: you can't appreciate the need for a CMDB unless you have truly experienced the onslaught of bad changes and angry application owners that results from not managing your configuration data properly. Yes, you can do some spreadsheet kung fu. But configuration management is about more than record-keeping: it's about a programmatic approach and comprehensive, quality data that is always available and secured.


 


The CMDB isn't extraordinarily sophisticated technology, but it does a few very powerful, fundamental things, for everyone, all the time, consistently. It's the best thing next to Bill, when he's awake and not on vacation. And if a CMDB saves you a day's worth of downtime over, say, a year, then there's probably some nice crunchy ROI in there.
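That last claim is easy to sanity-check with hypothetical figures. Assuming an hour of downtime costs $10,000 and the CMDB prevents one day-long outage per year, even a generous estimate of license-plus-labor cost leaves room:

```python
# Back-of-envelope CMDB ROI. All figures are hypothetical assumptions.
downtime_cost_per_hour = 10_000   # assumed revenue lost per hour of outage
hours_avoided_per_year = 24       # one day-long outage prevented annually
annual_cmdb_cost = 150_000        # assumed license + staff cost

savings = downtime_cost_per_hour * hours_avoided_per_year
roi = (savings - annual_cmdb_cost) / annual_cmdb_cost
print(f"avoided loss: ${savings:,}; ROI: {roi:.0%}")  # $240,000 avoided, 60% ROI
```

Swap in your own outage cost and outage frequency; the point is that the arithmetic, not the hype, should drive the decision.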


 


What do you think?  Drop a comment and let us know!  To be continued...


 

Let's Get This Party Started - A Short Look Back at the Evolution of Configuration Management

The year was 1989, the end of a wildly successful decade. Too bad I was in college, or I would have taken advantage of the '80s to become a mega-zillionaire and build my own Caribbean meditation retreat. Failing that, at least the '80s produced one of the first successful (if sometimes pejorative) "Stone Soups" of IT: the IT Infrastructure Library, or ITIL. ITIL is loosely (and I use the term loosely) a compilation of what we had in the past called SOPs, standards and procedures, run books, and what have you. It basically said: you need a help desk (and someone to actually staff it). You need to know what and where all your stuff is. You need to try to do something about problems that keep coming up. Other stuff.



Yeesh.  We need a bunch of books to tell us that?  Didn't we already know all that?  Well, then, why did Data Processing Shops (I'll use some classic, period-accurate, nostalgic terms) still have problems keeping their "onlines" up? 


Turns out, the books left the hard parts, such as how to actually implement anything, up to the readers. The books, preferring to remain timeless, don't discuss any technology.



To the ITIL founders' credit, they did try to show how some of the small organizations within DP (DP = IT) fit together, again loosely.  It was a kind of cartoon version of IT and how its existence was rationalized, linking daily activities with how and why they mattered in a business sense.  To understand the emergence point, why IT evolved, it is helpful to understand what it was like before.  


The CIO of the distant past was a well-understood persona (and often at that time called something like the Data Processing Manager).  He probably owned IBM mainframes.  And boy did IBM know how to sell to this person.  When the IBM salesman showed up at your door, he told you what you were going to buy, you signed the contract, the IBM truck backed up to your data center and unloaded rows of lovely water-cooled DP goodness.



ITIL came about, and an evolutionary shift in buying behavior and selection criteria took place (an extensive topic that cannot be covered at length here). No longer were CIOs fed a hardware and software plan they did not really need. Smaller businesses had to pay attention to what IT was asking for. And so the evolution began: businesses got smart and started to ask questions such as, "How much does it cost to keep our Accounting department running compared to our Complaints department?" How do I know what to charge back to my users? "I don't know" became a less and less acceptable answer.


Despite this, it still wasn't so easy to talk about "process" in IT in those days. Process?? A curmudgeonly systems programmer might say: the "process," Mr. MBA, is that I yell over my cube wall to the Accounting programmer or whoever and tell them what to do. I don't need a "process," you've got to be kidding me. No, really: this was a pervasive, rarely questioned way of running IT, driven by the dominance of a polarized selling motion (albeit a very successful one) and an enormous difference in subject-matter expertise between the IT providers of the services and the business that paid for and consumed them.



The Business Process Reengineering wave of the early 1990s cost many DP managers their jobs. It cost IT professionals their comfortable, dare I say cushy, jobs as mainframe stewards and forced them (including me!) to re-learn much about IT, especially how to think about and answer all those uncomfortable business questions like "What did you do today?"


So what DID you do today? What are we doing today in this evolutionary shift in how we leverage IT to run our business? Is ITIL really that important, or just "Kool-Aid"? Add a comment and let me know your thoughts. To be continued…

