IT Service Management Blog
Follow news and insights about IT Service Management via this blog.

3 Keys to Improving your Change and Configuration Management Webinar

Change and configuration management continues to be challenging for many organizations. The risk of self-inflicted service outages is ever present and keeps the attention of watchful IT auditors.

 


Brian Miller and I will present some reasonable suggestions for improving your change and configuration management processes on Wednesday, June 12, at 8am Eastern. A replay is typically available almost immediately afterward.

 

To register or attend, simply visit http://www.brighttalk.com/webcast/534/74205

Improving Service Quality by the Numbers - 11, 10, and 9

Over the past month (dating back to the itSMF FUSION event in Dallas), a pair of conversations has been rattling around in my mind. The first one starts with a set of numbers: 11, 10, and 9. 11 is the number of major incidents experienced by a Fortune 500 company in the past year.

ITSM Architecture Missing a Fourth Dimension?

Yep, I found the problem.  There seems to be an intrinsic temporal dimension to ITSM implementations that I think needs to be called out.  A reference implementation of even a rudimentary ITSM "out of the box" will be different depending on what you start with and the order in which you implement your use cases. Meaning, if you implement Asset, then Change, then Configuration Management, your resulting CMS will work differently than if you changed the order of just those three around!  ITSM math is apparently non-commutative!  I personally find this hilarious.

Discovery: There is no "Getting Started"

One of my favorite South Park episodes is Kenny the Virtuoso.  Kenny decides to take singing lessons.  The lessons go something like this:  "Ok, repeat after me.  La la la la."  Kenny goes "Hmmm hmm hmm hmm."  The instructor continues:  "Good.  Now, sing along," and starts in on "Con te partirò."  Amazingly, Kenny actually keeps up and sings it perfectly and on key.

 

We can't all be Discovery Kennys.  But we can certainly get a little help.  Ok, repeat after me...

HP DDMA helps customers in all phases - assess, modernize and manage - of application transformation

HP today announced Application Transformation Solution offerings designed to help enterprises drive innovation by delivering application flexibility and speed.

 

An initiative such as application transformation requires the use of technology both to simplify and to automate the tasks at hand; HP Discovery and Dependency Mapping Advanced Edition (DDMA) software is one such product that assists enterprises through their application transformation journey.

 

A new solution brief discusses how HP DDMA software is uniquely positioned to help customers in all three (assess, modernize and manage) phases of application transformation.

Don't Use a Discovery Tool as a Discovery Toolset. Don't.

I get asked all the time whether a particular discovery technology should be used in favor of another; whether one is better suited for a particular use case; if what we have can do a particular job.  It's all over the map.  Maybe because those asking don't have one.  A map I mean.  Here's one.  And a compass to use it by.  Hope it helps!

Improve visibility into IBM mainframe environment using HP DDMA, UCMDB and EView/390z Discovery

Visibility into IT is no longer an IT thing. It’s a business thing. Accurate, up-to-date, easily accessible insight into the inner workings of the IT environment yields improvements across a wide range of IT functions that have a direct impact on business success.

 

This week at HP Software Universe in Barcelona, HP and EView Technology are announcing a new IBM mainframe z/OS environment discovery offering, EView/390z Discovery, which extends HP DDMA software.

ISO 19770-2 SWID update

I know, I have been very tardy – sorry, but things like this happen and will continue to happen.  I am starting to like blogging, but there will be times when I will simply not be able to do this, when my “day job” will get in the way.  Some say that this is a part of my day job now – but I prefer to think about this as a way to “spread my wings”.  But enough about that…


Let's get back to my thoughts about the Software Identification tags.  I have been waiting for some more news from Steve Klos of TagVault.org.  Unfortunately for me (and you), Steve went on vacation and truly "unplugged" (I hope it wasn't simply because cell phones don't work under water :)).


So, here is what I know now:


1. GSA should have its policy established sometime in June (it would be nice if it were just before HP Software Universe).


2. TagVault.org held a contest to promote the use of tags.  Unfortunately, HP did not participate this time, but believe me, not being the first out of the gate does not mean we are not interested (or else, why would I be writing to promote this topic?).


If you go to the TagVault.org website, you can watch a video of the contest results.  If you are not familiar with the tags, it is a great way to get introduced to them.

Is ISO 19770 going far enough?

OK, I think everyone following my posts knows by now that I believe these tags are an important step in solving the challenges associated with Software Asset Management.  But, I really haven’t talked about whether this is enough or not.


Before I go further, let me review the different parts of this standard:


-1 – establishes a set of best practices for implementing SAM


-2 – creates the first industry-standard mechanism for identifying software installations


-3 – a proposed extension of the -2 standard, focused on providing license model information


This is all I am aware of at the moment.  But is this enough?  I don't think so.  These are a series of great steps.  We are defining a set of best practices, which is great.  We are standardizing the collection of data; that's awesome.  All of these are important in solving the SAM challenges.  But we are not yet collecting enough data to make asset management truly simple.


The -2 and -3 standards will tell you items such as the vendor name, the software title and version, and the type of licensing the application uses.  But one very important piece is still missing.  We still don't know how many licenses we are consuming.


I am not going to go into the details of some of the wild and crazy licensing schemes vendors have come up with.  All justified, at least in their minds.  Inventory discovery tools, such as HP DDMI, do a good job of collecting all the data that is available – the ISO 19770 standards will help, of course.  But the existing standards will still not help SAM professionals reconcile licenses in use for many applications.  How do you get data for "per user" licenses?  That's where I believe we will need a "-4" standard.  Only then will we have information about the software and manufacturer, the type of licenses used, and the number of those licenses CONSUMED.
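To make that gap concrete, here is a minimal sketch (my own illustration, in Python) that tallies installs from simplified, SWID-style tag data. The element names are hypothetical stand-ins, not the exact ISO 19770-2 schema. Everything the tag can tell us stops at what is installed; for a "per user" model the consumption count still has to come from somewhere else, which is the job a "-4" standard would need to do.

```python
# Minimal sketch: totalling installs from simplified, hypothetical SWID-style
# tag data. Element names are illustrative, not the exact ISO 19770-2 schema.
import xml.etree.ElementTree as ET
from collections import Counter

# One tag string per discovered install (normally read from the endpoints).
SAMPLE_TAGS = [
    """<software_tag>
         <software_creator>ExampleSoft</software_creator>
         <product_title>ExampleCAD</product_title>
         <product_version>9.1</product_version>
         <license_model>per-user</license_model>
       </software_tag>""",
    """<software_tag>
         <software_creator>ExampleSoft</software_creator>
         <product_title>ExampleCAD</product_title>
         <product_version>9.1</product_version>
         <license_model>per-user</license_model>
       </software_tag>""",
]

installs = Counter()
for raw in SAMPLE_TAGS:
    tag = ET.fromstring(raw)
    key = (tag.findtext("software_creator"),
           tag.findtext("product_title"),
           tag.findtext("product_version"),
           tag.findtext("license_model"))
    installs[key] += 1

for (creator, title, version, model), count in installs.items():
    # For a per-user model, installs != licenses consumed; mapping installs
    # to consumption is exactly the data the tags do not yet carry.
    print(f"{creator} {title} {version} [{model}]: {count} installs")
```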

CMS Use Cases On Parade At HP Software Universe

In my last few posts I talked about the need to focus on use cases.  Over many years I have learned that the number one thing people want to hear about is this:  "What is my peer down the street (or across the ocean) doing about similar problems?"


Being the track manager for the Configuration Management System (CMS) track at HP Software Universe in Washington D.C. (June 2010), I just completed scheduling a number of great presentations that represent real-world use cases and implementation outcomes.  The CMS track at Universe this year highlights a number of great case studies of what real customers, facing real challenges at very large and complex companies, are doing around CMS-related initiatives.  What follows is a quick summary of the customer-centric use cases that will be on stage for the CMS track at Universe this summer.


Turkcell, one of the largest mobile phone companies in Europe, will be on stage addressing how they are creating an integrated IT environment capable of supporting a broad range of IT processes including Asset Management, Configuration Management, Change Management and Problem Management.  Elements being integrated include IBM Maximo, HP Business Service Management (BSM) solutions, the HP Universal CMDB and HP Discovery and Dependency Mapping.


An HP partner, Linium L.L.C., will be walking through the work they have done for a major retailer in the US.  The focus of this case study is around the implementation of a Change and Release Management solution that brought together HP Server Automation, HP Release Control, HP Service Manager and the HP Universal CMDB.  


Melillo Consulting is working with a large company to integrate several of our BSM solutions with our HP Client Automation Center to implement an Incident, Change, Problem and Request Management solution.


Elegasi, another partner, is working with a large Financial Services company to help them effectively manage the cost of licenses associated with virtualized infrastructure.   The session will highlight how Discovery and Dependency Mapping, the Universal CMDB, and HP Asset Manager can work together to help address license compliance and cost management for virtualized infrastructures.


Finally, our HP Professional Services team is implementing a Service Asset and Configuration Management solution for a major Telecom company.  They'll be addressing the work they have done to integrate UCMDB and Asset Manager and talking about where they are going next in terms of integrating Service Manager. 


When I consider all of the sessions being put together across other tracks as well - I know that there are many more customer or partner delivered sessions that focus on integrated solutions.  In many of these, the UCMDB is a central component of the solution that will be represented on stage.  If you are interested in going to Universe and have not yet registered, I invite  you to get $100 off the entry price by entering the promotion code INSIDER when you register.  Feel free to pass this promotion code on to others.  Hope to see you in Washington this summer.  Cheers!

Taming (if not slaying) one of IT’s many Medusas

My third grade son and I have been exploring Greek mythology lately.  We've been reading about the gods of Olympus.  This newfound interest was triggered by my son having recently listened to "The Lightning Thief" on audiobook - the first of the "Percy Jackson and the Olympians" series.  If you aren't familiar with Medusa, she is a monster in female form whose hair is made of dozens of horrifying snakes.  The hair-full-of-snakes idea reminded me of a very thorny problem that IT deals with - that of addressing compliance-related issues.  The more I thought about this, the more I realized that almost any problem I have ever come across in IT reminds me of Medusa, but this area in particular stands out in my mind.


 


In my last post I talked about the importance of use cases.  In this post I want to focus on a trend I’ve seen that often is the genesis of a Configuration Management System (CMS) initiative – that of addressing compliance related reporting.  Over the years I have dealt off and on with the compliance problem and it stands out in my mind because of the duality that permeates the issue.  Compliance has this quality of being everywhere and being nowhere at the same time.  Let me explain.  When you think about the roles in IT almost every group has some level of responsibility for supporting compliance and yet responsibility for what must be done is highly diffused across the organization.  This is true even if the organization has (and most now do have) a Chief Compliance Officer.  From a product standpoint every product seems to be able to highlight itself as a solution but no one offering by itself really gets you very far.


 


So, having acknowledged up front that no single product can be all things to all compliance issues, I will say that I have been working in the CMS area long enough to see a recurring trend: using Discovery and Dependency Mapping (DDM) as a way to lighten the burden of compliance reporting in highly regulated industries like Financial Services, Health Care and Utilities.  In each of these industries I know of at least one (sometimes more) large and complex organization, with massive reporting requirements, that is using DDM to meet requirements to attest and verify that strong controls are in place to prevent unauthorized changes to mission-critical infrastructure.  For many organizations, addressing these kinds of compliance requirements is a hugely time-consuming and costly endeavor in terms of IT hours invested.


 


I will start with a publicly available story, that of FICO.  Known to most in the US for their credit scoring service, FICO used DDM as a key element in a solution that also included HP Service Manager.  FICO talks about their solution from the standpoint of incident, change and problem management, but addressing compliance was certainly a big motivator for them as well.  Operating in the highly regulated financial services industry, audits are a way of life for FICO.  Matt Dixon, Director of IT Service Management at FICO, has said that with their solution they were able to go from taking in the neighborhood of a day to address an audit request to being able to do so in a matter of minutes.  Given that FICO deals with something like an audit a day, this is no small deal.


 


A health care company that I know provides another good example.  This company had built a compliance reporting database in which they had integrated close to 100 data sources.  They had also built their own reconciliation logic to support data normalization.  The development effort and the ongoing care and feeding associated with this system were enormous.  The company launched an initiative to rationalize data sources, implement automated discovery and dependency mapping, and replace this home-grown reconciliation database and logic with a vendor-supported solution (they chose HP).


 


It turns out that in their data rationalization effort this company found that something like 80% of the data held in their source systems was redundant at some level across the organization.  This understanding helped them move forward and develop a program around retiring systems and moving to a data leverage model using a CMS-style approach.  By the way, I do not feel that what this company found in terms of redundant data would be much different if we ran the same exercise at most large companies I deal with.


 


Another large company I know involved in the highly regulated utility sector went through a very similar process.  Like FICO this company is pursuing a fairly broad agenda around Incident, Change, Configuration and Release management but addressing compliance related reporting requirements was their initial priority.  Like FICO this company has been able to substantially reduce the amount of time invested in compliance while radically shortening the time it takes to produce compliance related reporting.


 


So while discovery and dependency mapping is by no means a panacea when it comes to compliance issues, it can help an organization meet its commitments relative to compliance reporting.  At the heart of many compliance-related requirements is the need to attest and prove that you have tight controls in place around how your infrastructure is managed.  Transparency and continuous visibility into the configurations in your organization are fundamental to meeting that need, and a CMS can be a key element in addressing it.


 


 

Important week for Software Asset Management

You may have read my recent posts about Software Asset Management, where I have been promoting the ISO 19770-2 software ID tags.


This is an important week for the future of Software Asset Management.  This week, the US General Services Administration (GSA) is meeting with some of the people involved in passing the ISO 19770-2 standard and TagVault.org.  They will be discussing whether the US Government will adopt ISO 19770-2 software tags as a requirement for all future software purchases.


I, for one, hope the GSA adopts this requirement and forces software companies to include these tags with all software.  I also hope the GSA will adopt an aggressive yet realistic date for the requirement to become mandatory.  I also hope this is a "hard" requirement, because otherwise adoption rates may be low, or it may take a long time for these tags to become common.  The tags are relatively easy to create, and TagVault.org can provide assistance and, perhaps more importantly, is becoming a central tag certification and signing authority.


In other words, I hope the outcome of the meeting will be a statement like "In order to sell software to the US Government, your software must include an ISO 19770-2 tag. The requirement is effective January 1, 2011," as opposed to "The US Government will prefer software that includes ISO 19770-2 tags from today on."


I will be waiting for the results of the meeting.  I hope the GSA decides to require these tags and soon.


If you are involved in Software Asset Management this could be like Christmas in March.  And if all goes really well, then maybe the requirement will come into effect in time for Christmas this year.

How far will your tires take you?


When you are getting ready for a long drive, you make sure your car is in good working order.  One of the things you check is the tires.  After all, you won't get far without tires, and you don't want to get stuck in the middle of nowhere because you blew a tire and have no spare, or have an accident because your tires were bald and the car skidded in the rain.


Discovery is like the tires for any IT solution.  Whether you are talking about managing endpoints, implementing a CMDB, or rolling out an Asset Management solution, you need to be able to discover the environment relevant to your needs.


We tend to focus today on "higher level" solutions.  CMDB and CMS are hot!  Software Asset Management is up there as well.  Everyone spends lots of time selecting and evaluating the right products in those areas.  We make sure they can handle the size of our environment and have the functions we need to assist in our daily jobs.  That's great – choosing the right product is paramount.  I recall working with one of the large IT industry analyst firms a few years ago.  They rated the product I was selling at the time as the best in the market.  But when we tried to get them to adopt it internally, they were very quick to point out that what is best in the market may not fit their specific needs.  Yes, they implemented our product in the end, but the point was made: choose products that meet your needs, not the ones that are marketed the most or rated the best.  But I digress…


Let me focus on Asset Management, since that is what I am most familiar with these days.  You evaluate Asset Management products and choose the best one (of course, I hope the winning product is HP Asset Manager :)).  You choose the right product for asset management, but how do you populate inventory data?  Many customers simply choose to use existing products to feed data to the Asset Manager product.  Why?  Because they are already deployed and, well, data is data, right?


If you buy a car, you make sure it looks good, feels comfortable and handles well.  When you get into an IT solution like Asset Management, you pick the right product that fits your needs.  But when it comes to data collection, many people say: I will just use whatever I have.  It's cheaper, and data is data.  Except that in many cases data has to be transformed into information.  And that will cost time, effort and money.  It will require ongoing maintenance as the environment changes.  Do you want to maintain a custom solution?  In the majority of situations IT does not want a "custom" implementation of any product anymore.  Do you just stick on whatever tires are cheapest?  Would you put 14-inch tires on a Hummer?  No.  And you shouldn't pick the cheapest discovery tool either.  You should make sure it meets your needs, and among the criteria must be "does it provide the data the consuming products need" and "is the data in a format that is easily consumed".  It is true that you will likely end up with multiple tools that collect overlapping data.  It will cost you some storage, and it will cost some resources to collect and transfer the data to its destination.  But the cost of the overlap should be quite small.  And the value of the right data in the right format is that the overall solution will work as intended and required.
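To illustrate the "does it provide the data the consuming products need" test, here is a minimal sketch of a pre-load check on a discovery feed. The field names and sample records are hypothetical; a real integration with an asset repository would use that product's own connectors.

```python
# Minimal sketch: check whether a (hypothetical) discovery export carries the
# fields a downstream asset management load needs before transformation work.
REQUIRED_FIELDS = {"serial_number", "hostname", "model", "os_version", "last_scan_date"}

def missing_fields(record):
    """Return the required fields that are absent or empty in one record."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

sample_feed = [
    {"serial_number": "ABC123", "hostname": "srv01", "model": "DL380",
     "os_version": "RHEL 5.4", "last_scan_date": "2010-03-01"},
    {"serial_number": "", "hostname": "srv02", "model": "DL380",
     "os_version": "RHEL 5.4", "last_scan_date": "2010-03-01"},
]

for rec in sample_feed:
    gaps = missing_fields(rec)
    if gaps:
        print(f"{rec.get('hostname', '?')}: not loadable as-is, missing {sorted(gaps)}")
    else:
        print(f"{rec['hostname']}: ready for the asset repository")
```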


If you buy a Hummer, don’t skimp on the tires.  Make sure the discovery product you use delivers the data you need with little or no customization.  It will be safer, more comfortable and cheaper in the long run.


 


"Aha!" Moments: Serendipitous Early Value Encounters in CMDB and CMS projects

My Social Media Manager, Heather, asked me to consider writing a customer success story for one of my blog posts.  I decided to try to raise the bar even further.  This is a compilation of many customers' success stories, albeit small successes.  I'm talking about "Aha!" moments: successes they didn't know were coming that generated an "aha!"


 


Informal value realization, aka "aha!" moments, found early in a CMS or CMDB project can act as value tiers, financing time and credibility for the more difficult-to-realize, longer-term ROI.  That is especially true if your funding and sponsorship depend on volatile management moods and economic fluctuations.  Look for aha moments whenever you can; they will usually reward you and your project in ways that are hard to predict.  And, very occasionally, they will punish you, in those environments where No Good Deed Goes Unpunished or where the messengers, especially bearers of bad news, are still killed.  You know who you are.


 


Informal research (my own experience and anecdotes from my customers and colleagues) indicates that some engagements are more successful because value was realized and documented early and often.  This is paradoxical, because the business transaction funding the project is usually measured only on the final value realization, i.e. fulfilling the sponsored use cases.  However, if no value is shown before the primary use cases succeed, organizations typically do not fare as well.  It is unclear which way the correlation runs; in other words, whether a lack of early success leads to a loss of project momentum, or whether a project with a weak link inherently produces fewer aha moments.


 


Apparently, “aha!” moments can be almost as important as the overall drivers, even though these are rarely formalized or anticipated. However, aha moments alone cannot sustain a project. The primary use cases must be delivered.


 


Here are a few of my favorites.


When the aha moment happened: Planning
Source of the aha moment: Problem awareness
Why the "aha"?: These kinds of broad projects illuminate the business case throughout the organization.  Take the example of the "email from corporate," an email notifying all employees of a new project.  It goes unread by many due to the large number of such announcements.  Until something happens that involves them, the project is only dimly visible to much of the organization.  The "aha!" moments happen when you start interviewing people (known as "surfing the organization").  When the conversation starts with "Why are you here?" and ends with "Can I play too?", the value of the project has been successfully evangelized.  But this is not the aha.  When people begin answering the planning questions – questions like "What is your process for documenting applications?" – and the answer is "Well, we really don't have a process for that," people begin to realize that there is a broken, missing, or inefficient process that the CMDB use case can improve.  People become more aware of the problems facing their organization, at a higher level than their own function.  From the individual's perspective, the organization's "ubiquity" is reduced; it becomes a bit more personal.
The value of the aha!: This can enable intangible value ranging from motivation, morale, and incentive to participate, to developing interest in the company's higher functions – adding momentum not only to the CMS project, but in part to the entire organization.


When the aha moment happened: Planning
Source of the aha moment: Infrastructure awareness
Why the "aha"?: During interviews, we have sometimes uncovered missing firewall rules, missing hardware redundancy, and missing security rules.  We would ask something like "What is your firewall policy for this DMZ?"  The technician would log on to the firewall or look through their spreadsheet and say "I don't see it."  They would call their buddy or their manager and discuss it, then turn to us and say "We're fixing that right now" or "We've got to open a change for this."  Cha-ching!
The value of the aha!: Risk is directly reduced by correcting redundancy and security-related issues.


When the aha moment happened: Discovery
Source of the aha moment: Identifying infrastructure single points of failure
Why the "aha"?: Initial baseline discovery has found the actual infrastructure to be contrary to the stated configuration.  This is due not only to outdated documentation, but also to gaps in understanding and differences between planned and implemented solutions.  For example, a single point of failure was found for a mission-critical application requiring redundancy down to the network level.  Connectivity to the application's database was found to flow through a single router.  The Senior Geek we were working with assured us this was impossible and that the tool had to be wrong somehow.  A few phone calls to his Alpha Geek and some probing later, the single point of failure was confirmed.  The Alpha Geek and his team were later praised for uncovering a critical point of failure so early in the project.  We looked pretty good too.  Cha-ching!
The value of the aha!: Risk is directly reduced by the identification and subsequent correction of situations falling short of documented or expected implementation.  Depending on the significance of the differences, finding these kinds of things is often a big boost to the credibility of, and confidence in, a fledgling CMDB initiative.


When the aha moment happened: Discovery
Source of the aha moment: Discovering non-standard / unauthorized hardware and software
Why the "aha"?: Often, unauthorized software or hardware configurations place production at risk – for example, non-standard software or patches installed on production servers.  Actual examples of "risky" hardware include: 1. finding part of a production application running on a desktop; 2. finding personal network hardware on a production network; 3. finding part of production running on the CIO's desktop at his residence!  Cha-ching!
The value of the aha!: Reduced risk to production applications.


When the aha moment happened: Discovery
Source of the aha moment: Security and auditing
Why the "aha"?: SNMP was often found to be running with the default community string, even after the security staff had assured us that none of their devices were on the default value.  Likewise, insecure protocols such as telnet, disabled by policy, were found to be enabled.  (A minimal sketch of this kind of check follows this table.)  Some SMEs, when approached, are skeptical of the discovery results; only after verification using another tool will action be taken.  Often, this has the net effect of increasing trust in the product.
The value of the aha!: Risk is directly reduced by identifying missing or default security credentials.  However, the amount of value varies widely depending on where the breach in question was located; for example, a breach found in a DMZ would be more valuable than one found on an internal-only network.  Confidence in the CMDB contents usually increases with these kinds of discoveries because they are often visible to management and other groups.


When the aha moment happened: Dependency Mapping
Source of the aha moment: Unexpected host and application dependencies
Why the "aha"?: When we start putting the topology views together for the core service models, we sometimes discover application relationships that make a difference.  For example, during DR planning, a customer found that a mission-critical application depended on a "non-critical" application.  The non-critical application was made critical by this discovery, and plans were changed to move the newly critical application at the same time as the mission-critical application.
The value of the aha!: Outage avoidance is more than risk reduction – had the situation not been found and corrected, there would have been an outage.  This is a direct improvement in quality, both statistically for risk and cost-wise operationally.  Even if the ROI is hard to quantify, there is no doubt that ROI occurred.  Cha-ching!


When the aha moment happened: Impact Analysis
Source of the aha moment: New application-level dependencies
Why the "aha"?: As in the previous scenario, we sometimes uncover additional dependencies when we begin testing the impact analysis correlation rules.  Usually it comes in the form of gap identification with the application owners, e.g. "Hey, where's the ABC app, huh?"  But you should take relationship identification any way you can get it.
The value of the aha!: Outage avoidance, as described above.


When the aha moment happened: Training (both formal and informal), interaction with application SMEs, interaction with customer management, and interaction with technical staff (network, security, DBA, etc.)
Source of the aha moment: New use cases
Why the "aha"?: When you have good stuff, everybody comes to you and asks you to make it do everything you ever told them it could do.  It can be quite overwhelming.  The team begins linking the concepts learned in training to solve their own problems.  Matrix teams such as those found on CMS and CMDB projects often bring new, valuable, and challenging use cases to the table.  So there is a risk of "scope creep" as students try to use resources already allocated to the primary use cases for their own use cases, or if the project attempts to take on too many use cases too early, before it has sufficient momentum to succeed.  A lot of aha moments can increase project momentum and, in a way, improve the time-to-value of the primary use cases along with it.  It is worth a mention that, as a CMDB matures, it can take on those additional use cases.  So aha moments aren't exactly scope-creep repellent, but they make the smell more tolerable.  However, too many use cases early on tend to starve the project by delaying delivery of the primary use cases.  Don't run before you walk.
The value of the aha!: With a reliable means of capturing these use cases, the CMS grows in value by further decreasing the cost of implementation and improving time-to-value through experience.  All consumers benefit from a collective body of expertise.


Ultimately, aha moments alone are insufficient for a project's success, but they do seem to play an important part.
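Here is a minimal sketch of the kind of check described in the "Security and auditing" row above, limited to the insecure-protocol side: it flags hosts where telnet still answers even though policy says it should be disabled. The host list is hypothetical, and a real discovery tool does far more (including the SNMP default community string test).

```python
# Minimal sketch (illustrative hosts): flag machines where telnet (TCP 23)
# still answers even though policy says the protocol should be disabled.
import socket

HOSTS = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]   # hypothetical DMZ hosts
TELNET_PORT = 23

def telnet_enabled(host, timeout=2.0):
    """Return True if a TCP connection to the telnet port succeeds."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    if telnet_enabled(host):
        print(f"{host}: telnet is reachable - time to open a change")
    else:
        print(f"{host}: telnet not reachable (as policy expects)")
```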





 


Yeah, you caught me: this is part of a paper I already wrote, so I'm a little drier here than in my previous posts.  That's OK, I've been pretty "wet" so far.  But this is serious stuff when you get down to it!  My pontification (and YOUR COMMENTS, PLEASE!) should add up to something greater than poking fun at dumb stuff and waxing philosophic about mundane topics like Configuration Data Provider Rationalization (another juicy topic coming soon!)


 


If this is interesting to you, please let us know.   Our blog isn't really a blog until we get our user community actively involved and discussing these topics, which are really little more than starting points.  Please take a quick moment to let us know if you agree, if we suck, or what.  We'd appreciate it.

Location, Location, Location – Part 2

In my last post we took a look at the lineage of today's CMS efforts.  The two major lineages I cited were ITIL v2 CMDB initiatives and dependency mapping initiatives focused on application architecture reengineering.  A modern CMS initiative unifies these heritages from a technology standpoint.  It brings together the aspirations of an ITIL v2 CMDB initiative but does so in a technology form factor that is much more practical given the complexity and scale of any modern enterprise.


What I mean is that a federated CMDB acting as the bridge to other CMDBs and to other management data repositories (MDRs) is a much more practical approach than trying to create a single monolithic CMDB.  Consuming applications in turn leverage this federated CMDB for access to configuration item information, business service context, and any information across IT that can be tied to those two elements.
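As a rough conceptual sketch of that federation idea (not any product's actual API; the provider names and attributes are invented for illustration), the CMDB keeps the core CI record and pulls related attributes from registered MDRs on demand rather than copying everything into one monolithic store:

```python
# Conceptual sketch: a federated CMDB merges its core CI record with
# attributes fetched on demand from registered MDRs. All names are invented.
class FederatedCMDB:
    def __init__(self):
        self.core = {}        # ci_id -> core attributes owned by the CMDB
        self.providers = []   # callables: ci_id -> dict of federated attributes

    def register_provider(self, provider):
        self.providers.append(provider)

    def add_ci(self, ci_id, **attrs):
        self.core[ci_id] = attrs

    def get_ci(self, ci_id):
        """Merge the core record with whatever each MDR knows about this CI."""
        view = dict(self.core.get(ci_id, {}))
        for provider in self.providers:
            view.update(provider(ci_id))
        return view

# Hypothetical MDRs: an asset repository and a monitoring tool.
def asset_mdr(ci_id):
    return {"cost_center": "CC-1742"} if ci_id == "srv01" else {}

def monitoring_mdr(ci_id):
    return {"availability_90d": "99.92%"} if ci_id == "srv01" else {}

cms = FederatedCMDB()
cms.register_provider(asset_mdr)
cms.register_provider(monitoring_mdr)
cms.add_ci("srv01", type="server", business_service="Online Billing")
print(cms.get_ci("srv01"))
```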


To be effective a modern CMS must also embrace automated discovery and dependency mapping.  The huge amount of gear and the complexity in today’s multi-tier and shared component application stacks make it totally impractical to try to support most IT operational functions without automated discovery and dependency mapping.  The old approach of leveraging tribal knowledge and manual processes just doesn’t scale.  This approach results in a data layer that is far too incomplete and far too inaccurate to support the data integrity requirements of the IT processes that need to consume this data.


So where are we today?  The technology platform to effectively implement a modern CMS exists right now.  Of that I have no doubt.  It is not perfect, but it is very, very capable.  But if CMS initiatives are not to go the way of prior CMDB and dependency mapping efforts, more than technology is required.  What is required is a focus on use cases first, meaning a strong and crisp set of data requirements to support one or more critical IT processes.  Once this is well understood, you can focus on what data is needed and where that data will come from.  Sponsorship will also be stronger when initiatives start from well-defined consuming processes than when they start from the data-gathering side only.


The requirements related to data sources should be fairly fine-grained, meaning you must understand requirements down to the data attribute level.  Saying that server data will come from solution "Y" is not enough, since the server data consumed by a specific IT process may require that your definition of a server encompass data from many sources.  The bottom line remains the same: "use cases, use cases, use cases."
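One way to picture attribute-level requirements is a simple source-of-record map for the server CI, checked against the integrations actually in place. The attribute and source names below are purely illustrative:

```python
# Minimal sketch of fine-grained data requirements: each server attribute is
# pinned to the source expected to supply it. Names are illustrative only.
SERVER_ATTRIBUTE_SOURCES = {
    "serial_number":        "inventory discovery",
    "os_version":           "inventory discovery",
    "owner":                "asset repository",
    "maintenance_contract": "asset repository",
    "patch_level":          "server automation",
    "business_service":     "dependency mapping",
}

def coverage_gaps(integrated_sources):
    """List attributes whose designated source is not yet feeding the CMS."""
    return {attr: src for attr, src in SERVER_ATTRIBUTE_SOURCES.items()
            if src not in integrated_sources}

# Suppose only discovery and the asset repository are integrated so far:
for attr, src in coverage_gaps({"inventory discovery", "asset repository"}).items():
    print(f"'{attr}' still needs an integration with: {src}")
```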


Let me know what your experience has been addressing dependency mapping, CMDB or CMS initiatives at your company.  I and my colleagues would love to hear from you but even more important, I know others working on similar initiatives at other companies would love to hear from you.

When Thinking CMS remember “Location, Location, Location”

The other day I presented to a customer that had purchased HP Discovery and Dependency Mapping software.  This customer was interested in understanding HP’s direction relative to the concept of a Configuration Management System (CMS).  My discussion with this customer focused on how HP was addressing the data needs of IT operational domains ranging from application performance and availability management, to configuration and change management to IT process automation for server and network elements.  From a product perspective HP’s focus in this area has been and remains providing a platform that delivers configuration item data, service context and federated data that can be related to those two items to consuming solutions across IT.


Our conversation eventually and rather inevitably turned to the best strategy for achieving such a grand vision.  The answer is surprisingly simple at one level yet remarkably difficult to do in practice.  Like the old real estate adage "location, location, location," the answer to building a comprehensive CMS that works as promised and stands the test of time requires a laser focus on use cases, use cases, use cases.


I’ll return to this idea after a brief detour to look at the origin of today’s CMS Initiatives and how many of those early ancestors went wrong.


Origins of Today's CMS Initiatives


Modern CMS initiatives have two main lineages.  The first and best known are CMDB efforts that were launched in the wake of ITIL v2.  Many if not most of these early efforts failed (or at least fell far short of expectations).  The primary reason was a lack of a crisp focus on what problems were going to be solved and in what order.  Companies sought to create a “master” database with all configurations representing all infrastructures across the whole of the enterprise.  While the CMDB technologies used in these early efforts were immature and had some  technical limitations, most of these efforts didn’t fail because of technology.  They failed due to a lack of clarity around what the end game was.


The second major ancestor of today's CMS efforts is dependency mapping.  Many of the early adopters of dependency mapping embraced this technology for reasons having little to do with how these capabilities are primarily used today: to support ongoing IT operations.  Instead, most of the early adopters of this technology were interested in dependency mapping as a means of supporting some form of application infrastructure reengineering.


Why?  Well, during periods of rapid business expansion the IT infrastructure at many companies had grown substantially, and no one had a handle on what existed and how it worked together to deliver IT services.  As a result, many companies found themselves unable to effectively take on new IT initiatives focused on reducing the infrastructure footprint, reining in runaway server and network admin costs, or taking advantage of new virtualization capabilities.  These organizations lacked a clear understanding of their starting point.  As a result, many of them embraced dependency mapping as a means of generating this understanding.


For these companies using this information for ongoing management to support application performance and availability, change management, or IT process automation was not the focus.  As a result little emphasis was placed on consuming IT processes and the integrations with the applications that support these processes.  Like early failed CMDB efforts many companies stumbled when they first tried to apply dependency mapping to the needs of ongoing IT operations.  Like early CMDB efforts the reason these initiatives failed (or at least did not deliver as much value as expected) was that they lacked focus.  


Many companies when first employing dependency mapping would attempt to discover everything before having clear use cases of what data was needed to support what IT processes.   Since there were no clear consumers for the data many of these efforts either lacked or failed to sustain sponsorship and consequently withered on the vine.   In my next post I’ll take a look at how these two independent branches have come together to be the foundation of the current crop of enterprise CMS initiatives and how these initiatives face the same challenge that plagued their technology antecedents.

The complex world of Software Inventory

In my previous blog, I promoted the concept of ISO 19770-2 tags.  But, I did not get deep into the reasons why I think they are so important.  Let me fill in some of the blanks.


In my many conversations with IT professionals, I noticed that outside of Software Asset Managers, few people understand why Software Asset Management (SAM) is so difficult.  And I am not surprised.  And the reason is rather obvious – we know what software is on our own machines.  By extension, we think that IT should also be able to find out what is installed on all IT managed machines.


Here is why this is not so.


1. There are no universal standards to enable reliable and complete discovery of software.  Not all applications report themselves to the OS, even on Windows.  The file header information, the registry, WMI, and Add/Remove Programs information in Windows are not consistent or reliable, although still miles ahead of Linux and UNIX.  I do have to give some kudos to Microsoft for having the most effective standards.  (A minimal registry-walk sketch follows this list.)


2. There is no universally standard way to install applications.  There are many installers, and there is no universal way to extract information from the installer.  Again, the situation is better on Windows than on other OSs.


3. There is no single approach that can discover all software.  Some applications can be identified using file-based recognition, others require scripts, and so on.
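As a concrete illustration of point 1, here is a minimal, Windows-only sketch that walks the registry "Uninstall" keys, one of the several inconsistent places an application may (or may not) record itself. Anything that never registers there simply does not show up, which is exactly the problem.

```python
# Minimal sketch (Windows only): list software recorded under the "Uninstall"
# registry keys. Only applications that registered themselves appear here,
# which is why registry data alone gives an incomplete inventory.
import winreg

UNINSTALL_PATHS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_software():
    found = []
    for path in UNINSTALL_PATHS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):   # number of subkeys
            subkey = winreg.OpenKey(root, winreg.EnumKey(root, i))
            try:
                name, _ = winreg.QueryValueEx(subkey, "DisplayName")
                version, _ = winreg.QueryValueEx(subkey, "DisplayVersion")
                found.append((name, version))
            except OSError:
                pass    # entry has no display name/version: another gap
    return found

for name, version in installed_software():
    print(f"{name} {version}")
```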


I have seen various attempts at solving this challenge.  Just talk to different asset management vendors.  You will hear about thousands of recognition entries (or signatures, footprints, etc).  You will hear about pulling data from OS sources and custom modules for identification of individual applications, but I bet that not one company can say they can discover all applications (unless they mean to provide a list of all files on each file system, but that is not exactly what we are interested in, is it?).


And here is another thing – none of us wants to spend any time or money in the trenches.  We want Software Asset Management; we want it now and at minimal cost.  But how do you manage your assets without proper discovery?  It's like trying to drive a car with no wheels.  It may feel great to sit in it, but it won't get you far.  Software discovery, or inventory, is the foundation – without it, you cannot do SAM.  But if we don't want to invest in it, that means we must find a common way of collecting the information.  It has to be OS independent and vendor independent.  It also has to be quick and easy to do, because everyone, customer and vendor alike, is watching expenses these days.


And that, my friends, is why I am so passionate about promoting the ISO 19770-2 standard.  It is vendor and OS independent.  It is quick and easy to adopt (relative terms, of course :)).  There is even an organization that can help create and sign these tags – TagVault.org.  It is a standard that can be universally adopted.  And it is time we had an adopted standard.  Trust me, I would much rather think about how to create an innovative user experience, or look for ways to adopt some new whiz-bang technology, than spend my days creating file-based software recognition entries.


I recall a conversation I had with one of my customers about SAM.  This particular gentleman is a manager of a large IT shop that has in-sourced its asset management.  His customers don’t understand how difficult it is to collect software inventory information.  He knows there is no magic bullet to solve the problem.  But, until this standard, he did not see much hope.  He thought that the only way to get software vendors to provide a way to track their software was through courts.  I am not sure if you have noticed, but many software vendors are now investing resources in license compliance audits.  Reason is simple – they are not selling as much as they used to before the recession (everyone is tightening their budgets and software expenditures are finally being scrutinized).  So, how do you make up a revenue shortfall?  One word – Audits.


His wish may yet come true – I think that if ISO 19770-2 gets adopted, it will force all vendors into compliance.  The legal system that stands fully behind license agreements today may suddenly wake up to the fact that in some cases software identification is incredibly difficult, almost as if the vendors were purposely making it difficult.  I am not saying that is so by any means, but our legal system may decide that it is unfair for a particular vendor not to adopt a common standard, thereby putting undue pressure on the customer to track their software installations.  And I have yet to meet a customer who is not bewildered by the challenges that software discovery/inventory presents in their daily lives.  Like a real-life Sisyphean task (even though they cannot tell me what they are being punished for :)).


But anyway, let me get off my soap box – I am getting long winded (and I know those who know me aren’t surprised).


But ISO 19770-2 is only part of the Software Asset Management challenge – it's a start.  Next, we will need to get behind ISO 19770-3.  But that is another topic, for another time.  Hope you enjoyed this post – I promise/threaten to write more.


 

Can Software Asset Management Become Easier?

We are now living in 2010 and computers are everywhere... so why is it so hard to track license compliance?  After all, we can all see the applications in Add/Remove Programs…


I have been managing HP DDMI (Discovery and Dependency Mapping Inventory – our asset and inventory discovery software) for a couple of years now.  Before I took on managing this product, I knew it had hardware and software inventory capabilities and I was impressed with its software recognition.  Then, as the world entered the global recession at the end of 2008, I started hearing a lot of complaints about gaps in DDMI's software inventory.  I was a little surprised… I mean, I knew we had some limitations, but I thought most of them were because we were not providing all of the results we were capturing, and that we could improve the level of automation.


But, as it turns out (hindsight being 20/20) the issue is much bigger than I thought.  Is DDMI behind the competition?  Are we in danger of becoming irrelevant in the market place?  The answers I found comforted and shocked me at the same time!


First of all, I began to realize how incredibly complex the world of Software Asset Management really is.  Having gained CSAM certification from IAITAM, I validated that realization. I also learned about the many daily challenges of an IT Asset Management professional.  I realized there is a big difference between reporting what is installed and being able to track licenses.  There are also differences between tracking desktop software and server software, Windows software and Linux/UNIX software.


My conclusion?  There is no way to be able to automatically track license compliance across the board today.  You may be able to do it for specific titles, or perhaps vendors.  But there is no way to do it across the board!!!


Is there hope for the future?  Yes!  It is a faint hope, but there is a light at the end of the tunnel (hopefully it is sunlight and not a train light :)).  We now have the first global standard that promises to improve the current situation - ISO 19770.  ISO 19770-1 provides best practices for performing effective Asset Management.  ISO 19770-2 describes a standard tag that identifies installed software.  That means you will be able to read the tag information rather than relying on software recognition or other complex and potentially inaccurate and incomplete methods of identifying software.  Then, if and when ISO 19770-3 is approved, you will be able to use the same method to collect license entitlement information.


Yes, it will take time for vendors to adopt these standards.  This is where each of you comes in – vendors listen to their customers.  So, here is my call to action to all of you: start asking for ISO 19770-2 compliance on every RFI and RFP from today on!  It doesn't matter what the software is – if you buy it, you have to track it, so ISO 19770-2 compliance should be mandatory for all vendors.


Then, once you get the ball rolling, it will be easier to require ISO 19770-3 compliance.  And that will provide you with the license entitlement information – making software license compliance easier.


And don’t worry – you will not put me out of work and you will not lose your jobs either.  As much as I would like to be an optimist, I don’t think for a second that every vendor will fully or correctly implement these standards.  But if we can only solve 80% of the problem, or even 50% of the problem - that will help you deal with the other issues.


What issues?  There will be lots – have you looked at the licensing terms lately?


Stay tuned....more to come... a topic for another one of my posts...COMING SOON!

DDMI 7.61 Available Now! - my first post ever!

I was thinking about what my first blog entry should be…I always believe I have much to learn from others, so perhaps this is a good ice breaker for me.


I would like to let you know that HP has recently released Discovery and Dependency Mapping Inventory (DDMI) 7.61.  This is a maintenance release and a follow-up to the 7.60 release, meaning it is focused on small changes and product fixes for issues found since 7.60.  If you are a DDMI customer with a valid support contract, you can download it from our Software Support Portal by choosing the Patch Download option.


In this release we have added:


- Agent and scanner support for Microsoft Windows 7


- Agent and scanner support for Microsoft Windows 2008 R2


- Agent and scanner support for Mac OS X 10.6


- An enhancement to the SAI editor that lets you see separately the Package rules and Version Data rules that exist in the SAI.  This makes it easier to work with rule-based SAI entries, available since the 7.60 release.


- Support for autofs file systems on Linux and UNIX.  This allows you to configure the scanner to exclude auto-mounted file systems, which reduces scan time and eliminates "looping" (some customers have reported that scans effectively "hang" because they can never complete).  Scans will be smaller and complete faster.


- Identification of the primary IP address of a device.  This allows DDMI to consistently select the same interface when identifying and communicating with the device.


- Improved identification of new CPU types.


- Support for SMBIOS 2.6.1


Since Discovery and Dependency Mapping Inventory (DDMI) is a product that interacts with target devices, it is important to keep it up to date.  I recommend that customers take advantage of the latest capabilities by upgrading their installations to the current release.  Our product team works to ensure that upgrades are highly automated to minimize possible disruptions in production environments.


 

About the Author(s)
  • HP IT Service Management Product Marketing team manager. I am also responsible for our end-to-end Change, Configuration, and Release Management (CCRM) solution. My background is engineering and computer science in the networking and telecom worlds. As they used to say in telecom, "the network is the business" (hence the huge focus on service management). I always enjoyed working with customers and on the business side of things, so here I am in ITSM marketing.
  • David has led a career in Enterprise Software for over 20 years and has brought to market numerous successful IT management products and innovations.
  • I am the PM of UCMDB and CM. I have a lot of background in configuration management, discovery, integrations, and delivery. I have been involved with the products for 12 years in R&D and product management.
  • Gil Tzadikevitch HP Software R&D Service Anywhere
  • This account is for guest bloggers. The blog post will identify the blogger.
  • Jacques Conand is the Director of ITSM Product Line, having responsibility for the product roadmap of several products such as HP Service Manager, HP Asset Manager, HP Universal CMDB, HP Universal Discovery and the new HP Service Anywhere product. Jacques is also chairman of the ITSM Customer Advisory Board, ensuring the close linkage with HP's largest customers.
  • Jody Roberts is a researcher, author, and customer advocate in the Product Foundation Services (PFS) group in HP Software. Jody has worked with the UCMDB product line since 2004, and currently takes care of the top 100 HP Software customers, the CMS Best Practices library, and has hosted a weekly CMDB Practitioner's Forum since 2006.
  • Mary is a member of HP’s ITSM product marketing team and is responsible for HP Service Anywhere. She has 20+ years of product marketing, product management, and channel/alliances experience. Mary joined HP in 2010 from an early-stage SaaS company providing hosted messaging and mobility services. She also has product management experience in the ITSM industry. Mary has a BS in Computer Science and a MBA in Marketing. Follow: @MaryRasmussen_
  • Michael Pott is a Product Marketing Manager for HP ITSM Solutions. Responsibilities include out-bound marketing and sales enablement. Michael joined HP in 1989 and has held various positions in HP Software since 1996. In product marketing and product management Michael worked on different areas of the IT management software market, such as market analysis, sales content development and business planning for a broad range of products such as HP Operations Manager and HP Universal CMDB.
  • Ming is Product Manager for HP ITSM Solutions
  • Nimish Shelat is currently focused on Datacenter Automation and IT Process Automation solutions. Shelat strives to help customers, traditional IT and Cloud based IT, transform to a Service Centric model. The scope of these solutions spans server, database and middleware infrastructure. The solutions are optimized for tasks like provisioning, patching, compliance and remediation, and for processes like Self-healing Incident Remediation, Rapid Service Fulfilment, Change Management and Disaster Recovery. Shelat has 21 years of experience in IT, 18 of these at HP, spanning the networking, printing, storage and enterprise software businesses. Prior to his current role as a World-Wide Product Marketing Manager, Shelat has held positions as Software Sales Specialist, Product Manager, Business Strategist, Project Manager and Programmer Analyst. Shelat has a B.S. in Computer Science. He earned his MBA from the University of California, Davis with a focus on Marketing and Finance.
  • Oded is the Chief Functional Architect for the HP Service and Portfolio Management products, which include Service Manager, Service Anywhere, Universal CMDB & Discovery, Asset Manager, Project and Portfolio Manager.
  • I am Senior Product Manager for Service Manager. I have been manning the post for 10 years and working in various technical roles with the product since 1996. I love SM, our ecosystem, and our customers, and I am committed to doing my best to keep you apprised of what is going on. I will even try to keep you entertained as I do so. Oh, and BTW... I not only express my creativity in writing but I am also a fairly accomplished oil painter.
  • WW Sr Product Marketing Manager for HP ITPS VP of Apps & HP Load Runner
  • Vesna is the senior product marketing manager at HP Software. She has been with HP for 13 years in R&D, product management and product marketing. At HP she is responsible for go to market and enablement of the HP IT Performance Suite products.
  • A 25+ year veteran of HP, Yvonne is currently a Senior Product Manager of HP ITSM software including HP Service Anywhere and HP Service Manager. Over the years, Yvonne has had factory and field roles in several different HP businesses, including HP Software, HP Enterprise Services, HP Support, and HP Imaging and Printing Group. Yvonne has been masters certified in ITIL for over 10 years and was co-author of the original HP IT Service Management (ITSM) Reference Model and Primers.
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.