Business Service Management (BAC/BSM/APM/NNM)
More than just monitoring everything, BSM provides the means to determine how IT impacts the bottom line. Its purpose and main benefit is to ensure that IT Operations can determine, both reactively and proactively, where they should spend their time to best impact the business. This spans event management to solve immediate issues, resource allocation, and performance reporting based on data from applications, infrastructure, networks and third-party platforms. BSM includes powerful analytics that give IT the means to prepare, predict and pinpoint by learning behavior and analyzing IT data forward and backward in time, applying Big Data Analytics to IT Operations.

Sneak Preview: HP Software Universe - Hamburg

Software Universe is only a few weeks away. We hold this event twice each year, alternating between a United States and European location. The upcoming event is in Hamburg at the Congress Centre Hamburg (CCH) from Wednesday, December 16th – Friday, December 18th 2009.

(If you missed our last Software Universe in Las Vegas, you can download some of the presentations here.)

We are very excited about this year's show. There are many interesting presentations, which we have put together based on feedback from conversations with many of our customers. Some highlights* (based on their relevance to managing applications/services - both physical and virtual) include:

Wednesday, 16.12.2009

16:00 - 16:45 hall B22: How to design, build and sell HP Business Availability Center (BAC) to end customers - Rolf Vassdokken, Ergo Group

16:00 - 16:45 room 15: Enrich Service Quality Management with end-user Quality of Experience - Sacha Raybaud, Hewlett-Packard

17:00 - 17:45 hall G2: Application Lifecycle Management FastTour: Redefining the Lifecycle - Brad Hipps, Hewlett-Packard

17:00 - 17:45 hall 6: Bankwest: deploying HP Business Availability Center solutions to monitor business services to help maximise service uptime and drive out cost - Richard Rees, Chief Manager, Platform Integrity; Richard Walker, Hewlett-Packard

18:00 - 18:45 hall G1: Quickly identify and resolve 4 common application issues using HP's Application Management Solution - Amy Feldman, Hewlett-Packard

18:00 - 18:45 room 13: Rabobank - An IT Operations transformation. Maturing from silo management to a Unified Operations solution, underpinning a best practice approach for better business outcomes - Toine Jenniskens, Rabobank

Thursday, 17.12.2009

09:00 - 09:45 hall D: Isolation and Business Transaction Management: How new technologies in BAC help you better manage applications - Michael Procopio, Hewlett-Packard; Chris Tracey, Hewlett-Packard

10:00 - 10:45 hall F: Reducing costs and accelerating your BSM solution through HP SaaS - HMRC's experiences with on-site deployment and SaaS - Spencer Holland, HMRC / CapGemini

10:00 - 10:45 hall D: The all new Operations Center (& BAC) licensing model: escaping the hardware bonds - Jon Haworth, Hewlett-Packard; Peter Crosby, Hewlett-Packard

15:15 - 16:00 BTO Impact of Virtualization on IT Management - Dennis Corning, Hewlett-Packard

15:15 - 16:00 hall C21: The Impact of Virtualization on IT Management - Mike Shaw, Hewlett-Packard

16:30 - 17:15 hall 6: Impact of HP Business Service Management solution on service provider business and end-user value (panel) - Rolf Vassdokken, Stefan Danisovsky and Vincenzo Asaro, Telecom Italia

Friday, 18.12.2009

09:00 - 11:00 room 17: BSM - Advanced Workshop - Gundula Swidersky, Hewlett-Packard

* dates, times, and speakers subject to change


Hope to see you there.

Sanjay Anne, Amy Feldman & Aruna Ravichandran


Not true, IBM

 By Mike Shaw

IBM recently made some incorrect claims on their web site about HP's management products. The network side of those claims was handled on our network management blog.  I wanted to handle the application management claims here.


J2EE Diagnostics Claims

IBM claimed that HP's BAC solution (our solution for application management) cannot provide drill-down into J2EE applications. This is not true:


  • HP Diagnostics software for J2EE provides a top-down, end-to-end lifecycle approach for seamlessly monitoring, triaging and diagnosing critical problems with J2EE and Java applications – in both pre-production and production environments. 

  • HP Diagnostics for J2EE starts with the end user (real and synthetic), then drills down into application components, system layers and back-end tiers – helping you rapidly resolve the problems that have the greatest business impact.

  • HP Diagnostics will monitor any Java application, and will discover and monitor the relationships between applications (Java and .NET).


Application and infrastructure data integration claims

IBM further claimed that HP's BAC does not have the capability to correlate application data with infrastructure data. This is not true. Our integration between the application and infrastructure layers is two-way: bottom-up and top-down.


  • Bottom-up:

    • You can see how an event impacts the business services above it by looking upwards through the service topology held in HP's CMDB. The services you can look up to may be applications, they may be user experiences (e.g. the online check-in user experience) or they may be steps in a business process. And you can see which SLAs are resting on the impacted services and how close those SLAs are to jeopardy. 

    • This service topology information can be discovered using a number of different methods, all under the overall control of the dynamic discovery manager. For example, if you have Operations Center's Smart Plug-ins (SPIs), many of these discover their own domains, and this is now fed into the CMDB. Or, if you are doing agentless monitoring (less expensive to buy and manage, but without the same level of fidelity and action control as agents - it's horses for courses), this will also discover the hierarchies under the items it's monitoring. And if you have NNMi, our network management product, it will put its end-point discovery into the CMDB. If you want everything discovered from the business service on down, you can use our advanced discovery technology. As I said earlier, our discovery manager is the overall controller, orchestrating the other discovery methods like SPIs and NNMi should you choose to use them.

    • The new OMi "TBEC" (topology-based event correlation) technology is able to take an event stream, map the events to services, and then group events related by services in the service topology - and thus infer which are causal events (events we need to take action on) and which are symptomatic events (events that occur as a consequence of a causal event and thus don't need to be actioned). Included in the symptomatic events may well be an event from our user experience or business transaction monitoring technology. Imagine a DB is having a performance problem. This, in turn, causes a user application to slow. The real user monitor notices this and raises an event. OMi TBEC will notice both events, realize they are related in the service topology, and infer that the DB problem is the cause and the real user monitor event is a symptom. Is this new? No - the technology was invented by Bell Labs and has been in our NNMi network management product for about 18 months now.

    • Summary: bottom-up we have two links up to the application / business service layer. The first is for extensive "service impact analysis" and the second is for TBEC - for analysis so you see only the actionable events you need to do something about.

  • Top-down

    • Our performance triage technology takes performance and event information from dependent services (the services that the business service with the performance problem rests on). It uses an HP Labs patented algorithm to infer causal relationships between infrastructure service performance and faults and the business service's performance. So what? This allows you to know which area is causing the performance problem - useful, given that the average performance problem passes through 6 to 8 groups before being solved! By the way, the event stream doesn't have to come from Operations Center. Should you still have Tivoli, not yet having swallowed the rip'n'replace mega-pain, we can take events from it (or from any other event management system).

    • The performance triage module doesn't just look at performance and event streams. It also looks at recent changes in the dependent services as determined by the discovery manager (e.g. server XYZ has had 4GB of memory ripped out). I'm sure you've heard the stat that if a change has occurred, there's an 80% chance it's the cause of the problem.

    • And, as of last November, the performance triage module also considers the compliance state of the dependent services. How does it do this? The ex-OpsWare Server Automation product now puts its discovered information into the CMDB too, and compliance state is one of the things it discovers.  I'm sure there's a stat on how non-compliant systems screw up business services above :-)

  • And finally, something we are very proud of, and something that people really like - the 360-degree view. Take a service, any service. For that service, you can see the following:

    • The performance of the service versus its KPIs. Now and over time.

    • What services are above it

    • What user experiences are resting on it and what their state is

    • The business processes resting on it and the throughput of those processes (i.e. are they slowing down because of this service?)

    • The SLAs resting above this service and their closeness to jeopardy

    • The status of the services this service is resting on

    • The change state of services at and below this service

    • The compliance state of services at and below this service

    • The planned changes for this service

    • What the service desk knows about this service in terms of incidents - "do we get an incident on this every Monday at this time?"
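The topology-based event correlation idea described above can be sketched in a few lines of Python. To be clear, this is not HP's actual TBEC implementation - just a minimal illustration of the principle, with a hypothetical check-in topology: given a service dependency graph (as a CMDB would hold it) and a set of events mapped to services, an event is causal if none of the services beneath it in the topology also raised an event; otherwise it is a symptom of a deeper problem.

```python
# Illustrative sketch only (not HP's TBEC algorithm): infer causal vs
# symptomatic events from a service dependency topology.

# depends_on[s] = set of services that s rests on (hypothetical topology)
depends_on = {
    "online-checkin": {"app-server"},
    "app-server": {"database"},
    "database": set(),
}

def transitive_deps(service, depends_on):
    """All services, at any depth, that `service` rests on."""
    seen = set()
    stack = list(depends_on.get(service, ()))
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(depends_on.get(s, ()))
    return seen

def correlate(events, depends_on):
    """Split events into causal (act on these) and symptomatic (don't)."""
    causal, symptomatic = [], []
    services_with_events = {e["service"] for e in events}
    for e in events:
        # If any service this one rests on also raised an event, treat
        # this event as a symptom of that deeper problem.
        if transitive_deps(e["service"], depends_on) & services_with_events:
            symptomatic.append(e)
        else:
            causal.append(e)
    return causal, symptomatic

events = [
    {"service": "database", "msg": "DB response time high"},
    {"service": "online-checkin", "msg": "end-user transactions slow"},
]
causal, symptomatic = correlate(events, depends_on)
# The DB event comes out causal; the end-user event is its symptom.
```

In the blog's example, the database event and the real-user-monitor event both arrive, but only the database event survives as actionable - exactly the reduction of an event stream to causal events that the post describes.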




OK, I've gone to town on this response a little bit. But telling HP Software we can't correlate application data to infrastructure data is like telling Usain Bolt he can't run!


Rip out Operations Center and replace it with NetCool

Finally, in this piece of their web site, IBM was suggesting people move from Operations Manager to NetCool. As you probably know, the migration from Tivoli to NetCool is a rip'n'replace exercise. Operations Manager has never done this to our customer base. As a recent and concrete example, the new OMi functionality, with its ability to do topology-based event correlation (i.e. no writing of event correlation rules) to reduce event streams to actionable events, is an ADD-ON to existing Operations Center installations. No rip, no replace.


If, however, you have a predilection for rippin' and replacin', then please do consider the move from Operations Center to NetCool. Personally, I'd add OMi instead, because I'd want the topology-based event correlation and an easy life - but maybe that's just me!


Fuel Efficient IT Operations

Mike Shaw, BSM Product Marketing.

My wife just bought a BMW 118d. The 118d won the "Green Car of the Year" award in 2008 at the New York Auto Show.  It does an amazing number of miles to the gallon (or kilometres to the litre, if you prefer). Her old car (also a BMW) did about 26 miles per gallon; the 118d does 63 miles per gallon. Now, the new car is slightly smaller, so we're not comparing apples to apples. However, you get the point - car manufacturers are pushing fuel economy to new limits. At the cost of acceleration? Not that I've noticed - when you put your foot to the floor in the 118d, it most certainly accelerates.

I think there are parallels between fuel economy and IT operations.  During a down-turn, because there is less activity, there is less pressure on IT operations (fewer events, fewer system overloads, etc). This is like a car that is only required to go at 30 miles per hour and accelerate slowly because that's what everyone else on the road is doing.  In an attempt to cut the costs of motoring, one might be tempted to adjust the fuel injector so that a smaller amount of fuel is available. This will cut fuel costs during this recessionary period.


BUT, when we come out of recession (some time in 2010??), acceleration will be required. Actually, our competitors will be accelerating - it's up to us whether or not we match them. If we've chosen to create a fuel efficient car (like the BMW 118D), then we can match the required acceleration and have fuel efficiency. If we've decided to simply cut the fuel that goes into the car without any consideration for fuel efficiency, our competitors will accelerate away from us come the upturn.


During a down-turn, we are under pressure to cut IT operations costs. In fact, in a recent IDC study performed for HP Europe, 40% of customers surveyed said they were very likely to cut IT operating costs while 74% said it was likely they would cut IT ops costs.


We have two choices in how we respond to this pressure to cut costs: take the simple "let's cut people and that's it" path, or take the "fuel efficiency" path and create an IT operation to match the BMW 118d. If we just cut people, we'll drown in IT operations work when the upturn comes. If we create a fuel-efficient IT ops engine, we'll be able to embrace the acceleration when the upturn comes.


This sentiment is echoed by recent comments made by HP's CEO, Mark Hurd (I'm sure Mark will be greatly comforted to know that he and I are in sync on this one). Mark said he didn't want to simply cut heads because when the upturn comes, he won't have the "people muscle" required to handle it. HP's IT department is taking the BMW 118d approach - data centre consolidation, network operations efficiency, centralized event management, pro-active user experience management, constrained self-serve of IT product, etc.


So, how do we create a fuel efficient IT operations? I'm not an expert across the whole IT operations stack, so I'll talk to the area I know about - availability and performance management.  And in the interests of keeping these blog posts to a manageable size, I'll do that in the next post.


(Footnote: I'm sure all car manufacturers are producing more fuel efficient cars. My wife just happens to like BMWs, and she only looked at BMW!  I'll bet the average HP sales rep wished their customers were so loyal (naive ??))

About the Author(s)
  • Doug is a subject matter expert for network and system performance management. With an engineering career spanning 25 years at HP, Doug has worked in R&D, support, and technical marketing positions, and is an ambassador for quality and the customer interest.
  • Dan is a subject matter expert for BSM now working in a Technical Product Marketing role. Dan began his career in R&D as a developer and team manager. He most recently came from the team that created and delivered engaging technical training to HP pre-sales and Partners on BSM products/solutions. Dan is the co-inventor of 6 patents.
  • This account is for guest bloggers. The blog post will identify the blogger.
  • Manoj Mohanan is a Software Engineer working in the HP OMi Management Packs team. Apart from being a developer, he also dons the role of an enabler, working with HP Software pre-sales and support teams to provide technical assistance with OMi Management Packs. He has more than 8 years of experience in this product line.
  • HP retiree Author of 42 Rules for B2B Social Media Marketing
  • Nimish Shelat is currently focused on Datacenter Automation and IT Process Automation solutions. Shelat strives to help customers, traditional IT and Cloud based IT, transform to a Service Centric model. The scope of these solutions spans server, database and middleware infrastructure. The solutions are optimized for tasks like provisioning, patching, compliance and remediation, and for processes like Self-healing Incident Remediation and Rapid Service Fulfilment, Change Management and Disaster Recovery. Shelat has 21 years of experience in IT, 18 of these at HP, spanning networking, printing, storage and enterprise software businesses. Prior to his current role as a World-Wide Product Marketing Manager, Shelat has held positions as Software Sales Specialist, Product Manager, Business Strategist, Project Manager and Programmer Analyst. Shelat has a B.S. in Computer Science. He earned his MBA from the University of California, Davis with a focus on Marketing and Finance.
  • Architect and User Experience expert with more than 10 years of experience in designing complex applications for all platforms. Currently in Operations Analytics - Big data and Analytics for IT organisations. Follow me on twitter @nuritps
  • 36-year HP employee that writes technical information for HP Software Customers.
  • Pranesh Ramachandran is a Software Engineer working in HP Software’s System Management & Virtualization Monitoring products’ team. He has experience of more than 7 years in this product line.
  • Ramkumar Devanathan (twitter: @rdevanathan) works in the IOM-Customer Assist Team (CAT) providing technical assistance to HP Software pre-sales and support teams with Operations Management products including vPV, SHO, VISPI. He has experience of more than 12 years in this product line, working in various roles ranging from developer to product architect.
  • Ron is a subject matter expert for BSM/APM, currently in the Demo Solutions Group. Ron has over thirteen years of technology experience and a proven track record in providing exceptional customer service. He began his career in R&D as a software engineer and team manager.
  • Stefan Bergstein is chief architect for HP’s Operations Management & Systems Monitoring products, which are part of HP’s business service management solution. His special research interests include virtualization, cloud and software as a service.