Business Service Management (BAC/BSM/APM/NNM)
BSM is more than monitoring everything: it provides the means to determine how IT impacts the bottom line. Its purpose and main benefit is to ensure that IT Operations can reactively and proactively determine where they should spend their time to best impact the business. This spans event management to solve immediate issues, resource allocation, and performance reporting based on data from applications, infrastructure, networks, and third-party platforms. BSM includes powerful analytics that give IT the means to prepare, predict, and pinpoint by learning behavior and analyzing IT data forward and backward in time, using Big Data analytics applied to IT Operations.

Get press exposure on your BSM/APM deployment

 - Are you attending our HP Software Universe event in Washington this year?


- Are you a current HP Business Service Management (BSM) customer, that is, a customer who has one or more products from:


HP Business Availability Center (any of these products: Business Process Monitor, Real User Monitor, SiteScope, Diagnostics, Service Level Management, TransactionVision)



HP Operations Management Center (Operations Manager, Operations Manager i, any of our SPIs)


HP Network Management Center (NNMi, Performance SPIs, etc.)



- Are you happy with your BSM deployment and want to talk about it?



- Do you want to be seen as a leader and innovative company by having your BSM story quoted in press articles?



If you answered yes to all of the above, then send an email to aru@hp.com with your contact information (your name, company, email, and phone number). I’ll call you and we can discuss how to give you and your BSM deployment some great exposure.



Thanks
Aruna Ravichandran
Group Product Marketing Manager
Application Performance Management (part of BAC)
aru@hp.com


 

HP APM: partner in crime with HP's Application Lifecycle Management

Did you know that HP's application performance management is part of HP's application lifecycle management (ALM)? HP takes a lifecycle approach to application performance and availability management that focuses on integration, collaboration, and resource sharing—from pre-deployment application development to production application management and back again. For example, the ability to reuse scripts has long been available, but collaboration among teams did not occur because each team did not know what already existed in the other teams. The HP approach bridges the gap among development, QA, and IT operations so that your teams can work together more effectively, understand and meet end-user performance requirements, and cut cost, complexity, and deployment time frames. Examples of this HP lifecycle approach with End User Management (EUM) are:



  • Bi-directional script reuse: You can reuse artifacts between the production environment and QA. Real user sessions can be used to generate more realistic test scripts, which helps strengthen your testing suite so that you catch most application performance problems in pre-production rather than in production. LoadRunner and Quality Center scripts can be used in HP Business Process Monitor, allowing you to leverage testing scripts to monitor a production environment and reducing the time required to roll out an application into production.

  • Load modeling analysis: You can create an accurate load test that reflects real-life conditions, using HP Real User Monitor session data.

  • Impact analysis: You can measure the impact of changes on the system in production (“before and after” snapshots) or during load testing. You can compare production results with synthetic load results or reproduce production results in the QA lab.

  • Cross-environment analysis: You can differentiate between code and configuration issues so that problems can move to and from production and synthetic load environments.

  • Root-cause analysis: You can identify the underlying cause of performance degradation related to one or more tiers in the system, either in production using real load data or in pre-production using synthetic load data, with HP's Diagnostics software.


    • HP Diagnostics software helps identify transaction paths, which makes it very easy for QA or production support to quickly identify and triage application performance problems. It is a single tool that can be shared between the production and pre-production environments.



    • HP Diagnostics software helps you improve application availability and performance in pre-production and production environments, helping to significantly compress testing and tuning cycles, increase productivity, and accelerate performance problem diagnosis and repair.



    • You can utilize HP Diagnostics software to drill down from the end user into application components and cross-platform service calls to pinpoint and resolve the toughest problems. This includes slow services, methods, SQL, out-of-memory errors, threading problems, and more.



    • It also extends HP LoadRunner and HP Performance Center software capabilities to address the unique challenges of testing and diagnosing even the most complicated, composite applications. It allows you to identify issues early that are often hidden in testing but show up in production, such as a slow memory leak.
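
To make the script-reuse idea above concrete, here is a deliberately simplified sketch in plain Python (not LoadRunner, Quality Center, or Business Process Monitor code) of turning a recorded real-user session into a synthetic check that can be replayed against a production site. The step names, URLs, and think times are made-up examples.

```python
# Illustrative sketch of "script reuse": replay a recorded user session as a
# synthetic availability/response-time check. Not HP product code.
import time
import urllib.request

recorded_session = [                          # steps captured from a real user session
    ("load home page",  "https://example.com/",      2.0),
    ("open login page", "https://example.com/login", 2.0),
]

def replay(session, timeout_s=10):
    for name, url, think_time in session:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                resp.read()                   # pull the whole page, as a browser would
            status = "OK"
        except Exception as exc:              # a failed step is itself a monitoring result
            status = f"FAILED ({exc})"
        elapsed = time.monotonic() - start
        print(f"{name}: {status} in {elapsed:.2f}s")
        time.sleep(think_time)                # mimic the user's pause between steps

if __name__ == "__main__":
    replay(recorded_session)
```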





 Thanks,


Aruna Ravichandran
Group Product Marketing Manager, APM
aru 'at' hp.com

Take the next step to maximize your virtualization management ROI – Webinar series



HP Software and Solutions recently sponsored a series of virtualization roundtables, run by CIO magazine, where we shared the 2009 study findings on virtualization adoption and challenges. During these events we heard over 100 IT executives tell us their specific needs around virtualization and their desire to continue the discussion around key areas of virtualization. We’ve created a series of webinars to help continue the virtualization discussions, including:


• April 13 - Optimizing service modeling, discovery, and monitoring for VMware environments
• April 14 - Protecting Virtualized Environments from Disaster with HP Data Protector
• April 21 - Testing Smarter and Faster with Virtualization
• April 22 - Improve customer satisfaction and maintain service levels in virtualized environments
• April 27 - BCBS of Florida builds a foundation for virtualization with HP Asset Manager
• April 29 - Virtualization: Compliance enforcement in a virtualized world


Register now for one or more of the web events in the April 2010 virtualization series and take the next step in virtualization management.


The webinar pertinent to the area of Application Performance Management is on April 22nd. In this webinar Amy Feldman, Product Marketing Manager for APM, will discuss the challenges of managing applications in virtualized environments while continuing to provide the same level of service quality. Businesses want to know that their quality of service and customer satisfaction will not be negatively impacted by moving critical business applications into a virtualized environment. Detailed approaches will show how to align to the business, maintain service levels and improve customer satisfaction.

During this webinar, you will learn:



  • how to establish service levels for virtualized environments

  • how to monitor from the end-user perspective to improve customer satisfaction

  • how to show the business that moving to virtualization does not need to disrupt the quality of service


Thanks
Aruna Ravichandran
Group Product Marketing Manager, BAC

HP recognized as a leader in Gartner's Magic Quadrant for Application Performance Monitoring (APM)

HP application performance management (APM) solutions allow IT organizations to detect, prioritize, isolate, diagnose, repair, and prevent problems before users and the business are impacted, thereby improving the end-user experience and IT staff efficiency.


 In February of 2010, Gartner Inc. released its Magic Quadrant for Application Performance Monitoring, which evaluated 19 APM vendors on the completeness of their vision and their ability to execute. The following graphic, taken directly from Gartner Magic Quadrant for Application Performance Monitoring, Will Cappelli, 18 February 2010, shows the market positioning.

View the full report here.


 


 



* This Magic Quadrant graphic was published by Gartner, Inc. as part of a larger research note and should be evaluated in the context of the entire report.



According to Gartner, “Application performance monitoring (APM) now requires coordinated decisions across five distinct dimensions of functionality: end-user experience monitoring; user-defined transaction profiling; application component discovery and modeling; application component deep-dive monitoring; and application performance management database capabilities.”



We believe the HP APM solution meets these requirements with an extensive suite of proven products that help you align IT efforts with business goals and optimize application performance to improve productivity, customer satisfaction, and revenues. HP’s APM solution is part of HP Business Availability Center, which includes software for monitoring synthetic transactions, real user experiences, service-level management, end-to-end transaction profiling and tracing, and diagnostics for rapidly identifying and resolving problems.



 View the full report here.



For more information about HP APM solutions, visit www.hp.com/go/bac.


 


The Magic Quadrant is copyrighted February 2010 by Gartner, Inc. and is reused with permission. The Magic Quadrant is a graphical representation of a marketplace at and for a specific time period. It depicts Gartner’s analysis of how certain vendors measure against criteria for that marketplace, as defined by Gartner. Gartner does not endorse any vendor, product or service depicted in the Magic Quadrant, and does not advise technology users to select only those vendors placed in the “Leaders” quadrant. The Magic Quadrant is intended solely as a research tool, and is not meant to be a specific guide to action. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


 


 



 

One of Australia's largest superannuation administrators achieves amazing ROI with HP's Application Performance Management solution (BAC)

Last week, I saw this article in an online Australian IT newspaper, and I was amazed to see so many tangible improvement and ROI metrics realized by a company called Superpartners, one of the largest superannuation administrators in Australia, while using our Application Performance Management solution (End User Management (RUM/BPM), Diagnostics, SiteScope).


SuperPartners services 6.1 million member accounts and 687,000 employee accounts and has more than $68 billion in funds under administration!


Some of the key lines from the article that really struck me include: For three years running, Superpartners' systems have had an average uptime of 99.7 per cent. Three years ago the figure was 97 per cent. One of the main benefits has been reduced incidents. Between 2005 and 2008, Superpartners experienced a 65 per cent reduction in "severity one" incidents. In 2008 alone, it achieved a 73 per cent reduction in support queues. The article also said that Superpartners was expected to begin achieving return on its investment within 15 months, and to have 58 per cent ROI after three years.


If you are interested in reading the whole article, please check out:


http://www.theaustralian.com.au/australian-it/it-business/hands-free-monitoring-cuts-superfund-glitches/story-e6frganx-1225828007689


 Thanks


Aruna Ravichandran
Sr. Manager, Business Availability Center Product Marketing
aru@hp.com

HP and Microsoft, how does this help me manage my IT environment?

by Michael Procopio



 HP and Microsoft recently made a joint announcement, saying they intend to invest $250 million over the next three years to significantly simplify technology environments for customers.


It included many parts of the HP Enterprise Business portfolio. But what does this really mean to customers who use HP Software & Solutions products?


This was a huge announcement in terms of the number of different parts. The clearest write-up I have seen covering the whole announcement is on a TechNet blog.


Indeed, a portion of this announcement was to say ‘these two companies are writing a $250M check to fund additional R&D on joint solutions.’ I’m guessing that means some smart folks got together and realized that expanding the long-standing relationship between the two companies would have an extremely positive impact for customers.


I have seen a number of blog posts that don’t seem to ‘get’ the value in the announcement. However, Jim Frey, in a Network World blog post, seems to understand what we are trying to accomplish. He wrote:


“The result could just be the most complete story for Cloud that we have seen to date. It includes hypervisor technology that IBM does not. It addresses the application software that VCE does not. And, it includes the infrastructure technologies that Oracle does not.”


So here’s what I think the announcement means to HP Software & Solutions customers.


In a sentence


HP and Microsoft are working together to make infrastructure management and application management better, whether physical or virtual (including Hyper-V), with new integrations between software products.


Details


1/ HP Business Service Management (BSM) will collaborate with HP Insight Software (from HP’s Enterprise Server group) and Microsoft (System Center) to provide integrated and interoperable virtualization and management solutions to reduce complexity in the datacenter.


Note: HP Insight Software provides hardware level and remote management for HP hardware. It also provides control of your Windows, HP-UX, Linux, OpenVMS and NonStop environments.


It means: if you have a multi-OS/hypervisor environment, including Windows, you get an integrated solution to monitor and manage your servers.


2/ HP will work with Microsoft to provide bi-directional integrations between HP Business Service Management and Microsoft System Center for enterprise systems and application monitoring. 


It means: if you have a multi-OS/hypervisor environment, including Windows, you get a heterogeneous solution to monitor and manage servers, the software on them, and end-user performance. HP Software has HP Operations Manager, including HP SiteScope, for server management, and the HP BPM and HP RUM modules of HP BAC for end-user management.


3/ In addition, we will develop bi-directional integrations between HP Business Service Automation (BSA) and Microsoft System Center for OS, application provisioning and compliance management.


It means: if you have a multi-OS/hypervisor environment, including Windows, you get a heterogeneous solution for configuration and compliance management (checks that servers are configured per your policies).


4/ For the SMB market (defined as customers with 50 servers or fewer), there will be “Virtualization Smart bundles” with prepackaged elements of HP Operations Center.


It means: Virtualization solutions are easier to buy and deploy since it is in one bundled package. There is a separate document on “Virtualization Smart bundles.”


Related Items:



 


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.



 



 

Announcing the new HP SaaS Portal

We are excited to announce the launch of the new HP SaaS Portal at http://saas.hp.com!


The streamlined interface gives you new visibility into the range of products and solutions offered by HP Software in the SaaS delivery model.



On the SaaS Portal, you will find rich collateral that will help you understand our SaaS offerings. You will find:




  • Product Pages: Each product page has a wealth of material about the product's delivery in the SaaS environment, including white papers, webinars, and solution briefs.


  • HP SaaS eBook: https://saas.hp.com/site/portal/ebooks/hpsaas/index.html

  • Trials: We’ve simplified our trial registration to provide you with an easier way to register for trials.

  • News and Events: We have created a space to share the latest news from HP Software and Services with the HP SaaS community.



At a time when IT budgets and resources are stretched, it’s easy to perceive that the monitoring of business applications is not mission critical. However, performance and availability monitoring tools systemize KPIs through the use of scorecards and dashboards, which enable business owners to align IT performance to business outcomes. This alignment is indeed critical to an organization’s ability to sustain and grow their business.



There exists a false perception that this monitoring requires significant investment in money and resources with an often delayed time to value. With HP Software-as-a-Service you can achieve your application performance monitoring goals without the business risk of a perpetual capital investment. HP SaaS is responsible for the entire software lifecycle, so implementation risks are greatly reduced, which enables your IT resources to focus on their core competencies. Start gaining immediate value and adoption from HP SaaS!



To learn more about our performance and availability offering, please visit us at http://hp.com/go/bac



To read more about the benefits you stand to gain from the HP SaaS Application Performance Management (APM) offering, please visit us at: https://saas.hp.com/site/html/bac.mss   



We have much more planned for the SaaS Portal. We’ll keep you posted as these developments are launched on the site. In the meantime, please feel free to provide us feedback through the site feedback link.



Thanks,


Aruna Ravichandran
Group Product Marketing Manager,
Business Availability Center (Application Performance Management)


aru@hp.com

HP Software Universe 2009 Hamburg Day 2

by Michael Procopio


Just over 3200 attendees at the show this year.


For each of the last eight years at the European Universe, Ulrich Pfeiffer, CTO for IT Management in EMEA, has created a Live in Action demo showcasing the fictitious Full Throttle Company (FTC) on mainstage. Live in Action is a demonstration of HP Software integrated into a complete solution.


 



Ulrich Pfeiffer


 


This year, the Live in Action team recalled Hamburg of the early 1960s when this German city gave the Beatles a start before the group became a mainstream sensation. The demo featured the HP Performance Optimized Datacenter (“POD”), a datacenter in a container, along with music by the re-Beatles band and an on-stage yellow submarine in a disaster recovery scenario involving a flooded datacenter.


The POD is a standard container completely fitted with servers, storage and HVAC. You can see the POD demo here.


BTO (Business Technology Optimization) solutions demoed live on stage showed capabilities in three areas: 1) business and disaster recovery; 2) service recovery and web security; and 3) business improvements to avoid service disruptions and enhance capacity via virtualization management.


Disaster recovery highlighted HP Operations Orchestration configuring the POD to the state of the original datacenter.


Web security started with finding the problem using End User Monitoring, HP SiteScope, and HP Problem Isolation. The problem was a hacked web site. The knowledge base in HP Service Manager (SM), which was populated during QA testing by HP Quality Center, had the solution to the problem. QA had not originally done security testing on every release, but it is now added to their plan in HP Quality Center (QC) to run HP Security Center.


HP WebInspect was run directly from SM. The test failed and WebInspect automatically created a ticket in SM for the QA team. Once the QC test passes, the SM ticket is automatically closed.


The website having the problem was supporting a reseller using a configuration utility to order snowmobiles, a new business line for FTC. When Ulrich, playing CIO for FTC, called the reseller to tell him service was restored, the reseller claimed his SLA (Service Level Agreement) had been violated and he would not pay. Ulrich brought up HP Service Level Manager to show that while any further downtime would violate the SLA, it was currently intact.


Now that the problem is solved, the infrastructure manager looks to see how the failure happened and what can be done to prevent it in the future.


They review the infrastructure using HP Asset Manager and Operations Manager Virtualization Smart Plug-in, both now running on Linux. The CIO wants this Virtualization data to show up in his 360 degree dashboard and it does.


It is much more fun to watch; the session was video recorded and the recording will be posted after the holidays.


Related items:



 

Operations: application performance sucks, what do I do now?

by Michael Procopio


In my IT days (it has been a while), as still happens today, this is the question many have asked. It’s more complicated today because applications are more distributed. However, you still have to go through the triage process. The topic these days is named “Application Performance Management,” or APM.


APM has two parts: the traditional approach of looking at infrastructure resources, and measuring performance from the end-user perspective.


You probably detected this problem in one of two ways: either you are ahead of the curve and have end-user monitoring in place, or a user called the help desk to complain.


A typical web-based application today uses a web server, an application server, and a backend, typically a database, though the backend might actually have multiple parts if a service-oriented architecture (SOA) is used. The good news: Operations Manager (agent-based) and SiteScope (agentless) will provide status on the condition of those servers.


These tools can also look at how packaged applications are doing on the server. Oracle, WebSphere, MS Exchange, and MS Active Directory, to name a few, can be monitored either by Operations agents or by SiteScope templates (a SiteScope template is a prepackaged set of monitors). These tools might point to something as detailed as database locks being far higher than normal and beyond the current setting on the database. A quick parameter change might fix this.
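
As a concrete, deliberately tiny illustration of the kind of check just described, the sketch below compares the latest database lock count against a recent baseline and flags it when it is far above normal. This is plain Python, not SiteScope or Operations Manager code, and the metric values are invented.

```python
# Minimal baseline-vs-current threshold check, in the spirit of a SiteScope-style
# monitor watching a database lock count. Illustrative only.
import statistics

def lock_alert(history, current, sigma=3.0):
    """Return True when `current` sits well above the historical baseline."""
    baseline = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0   # avoid a zero spread on flat data
    return current > baseline + sigma * spread

# Lock counts sampled once a minute (made-up numbers), plus the latest reading.
history = [12, 9, 15, 11, 14, 10, 13]
print(lock_alert(history, current=85))   # True -> far beyond normal, worth a look
```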


Next, we have the code. We hope this isn’t the case because this typically moves the problem from operations to development. However, Operations is still responsible for pinpointing the problem area. This is typically the domain of application support and in some organizations that’s inside Operations, in others a different group.


Here, Business Transaction Management (BTM) tools can help. BTM manages from a transaction point of view and includes transaction tracing. TransactionVision and Diagnostics work in a complementary fashion to give you the next level of detail, although each is usable separately. TransactionVision traces individual critical transactions (as you define them) through multiple servers; it gives you information on a specific transaction, including the value of the transaction.


Diagnostics provides aggregate information on all transactions in a composite application giving you timing information. It can pinpoint:


· where time is spent in an application; either processing data or waiting for a response from another part of the application.


· the slowest layers.


· the slowest server requests which are the application entry points.


· outliers to help diagnose intermittent problems.


· threads that may be contributing to performance issues.


· memory problems and garbage collection issues.


· the fastest growing and largest size collections.


· leaking objects, object growth trends, object instance counts, and the byte size for objects.


· the slowest SQL query and report query information.


· exception counts and trace information which often go undetected.
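
To show what "aggregate information on all transactions" can look like in the simplest possible terms, here is a small hypothetical sketch that rolls up per-request timings, ranks the slowest server requests, and flags outliers. It is plain Python for illustration, not HP Diagnostics code, and the sample data is invented.

```python
# Roll up (request_name, latency_ms) samples to find the slowest entry points
# and the intermittent outliers. Illustrative only.
from collections import defaultdict
import statistics

samples = [
    ("/login", 120), ("/login", 135), ("/checkout", 640),
    ("/checkout", 655), ("/checkout", 2900),   # one intermittent outlier
    ("/search", 210), ("/search", 190),
]

by_request = defaultdict(list)
for name, latency in samples:
    by_request[name].append(latency)

# Slowest server requests (application entry points) first, by average latency.
for name, latencies in sorted(by_request.items(),
                              key=lambda kv: statistics.mean(kv[1]), reverse=True):
    mean = statistics.mean(latencies)
    outliers = [l for l in latencies if l > 2 * mean]   # far above this request's own average
    print(f"{name}: avg {mean:.0f} ms, outliers: {outliers}")
```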


TransactionVision and Diagnostics also integrate with Business Availability Center, which means you can start with a topology view and drill all the way down to find the status of the most valuable transactions running through your systems.


You can't manage what you can’t measure. So what do I do now? If you are properly instrumented, the problem will show itself. If you don’t find something you can fix, you can tell the app developers where they need to look to fix the problem.


   


Related Items:


· End User Monitoring


· Operations Manager


· SiteScope


· SiteScope Administrator Forum


· TransactionVision


· Diagnostics


· Business Availability Center





RUM the Real User Experience Manager

by Michael Procopio


RUM or Real User Monitor is a tool to monitor actual user traffic running over your network.


It's part of our EUM, or end user management, suite. In the area of EUM there are two primary ways to monitor: 1/ synthetic, which is covered by BPM or Business Process Monitor, and 2/ real user monitoring.


Each has its place in a monitoring strategy. BPM is good for making sure things are up 24x7, even when no users are using your applications. Real user monitoring can give you information down to the specific user.


When I first moved over to the BAC group and heard about RUM, I was impressed. One of the things it can do is replicate a user's web session click by click. This allows someone troubleshooting a problem to see exactly what happened and what error message the user saw – no guessing. (Sensitive data like passwords and credit card numbers are filtered out in memory before being written to disk.) Further, if you do find a problem, it can turn the session into a script that can be passed to the QA team to replicate the problem, if they are using LoadRunner.


How does it work? It starts by capturing packets as they go over the network. This is done by a RUM Probe, which is software that runs on a dedicated piece of x86 hardware (typically). The Probe passes the relevant data to the RUM engine.


The RUM engine stores the data, and key performance metrics are aggregated before being sent up to BAC for reporting and alerting. For example, an alert might fire because round-trip time for the Savings Deposit transaction is taking too long. Here are some of the reports RUM provides:



  • Global Statistics

  • Page Summary

  • Transaction Summary

  • End User Summary

  • End User Over Time

  • Server Over Time

  • Session Analyzer

  • TCP Application Summary

  • TCP Application Over Time

  • Event Summary

  • Business Process Distribution



Figure: Example RUM deployment configuration
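
To illustrate the aggregation and alerting step mentioned above in the simplest terms, here is a hypothetical sketch that rolls round-trip times up by transaction and flags any transaction whose 95th percentile breaches a threshold. It is plain Python, not RUM engine or BAC code, and the transaction names and thresholds are assumptions.

```python
# Aggregate per-request round-trip times by transaction and alert on threshold
# breaches. Illustrative only; thresholds and data are made up.
from collections import defaultdict
import math

SLA_MS = {"Savings Deposit": 2000, "Balance Inquiry": 1000}   # assumed targets

def p95(values):
    ordered = sorted(values)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

def aggregate_and_alert(samples):
    """samples: iterable of (transaction_name, round_trip_ms) captured by the probe."""
    by_txn = defaultdict(list)
    for name, ms in samples:
        by_txn[name].append(ms)
    for name, times in by_txn.items():
        worst = p95(times)
        if worst > SLA_MS.get(name, float("inf")):
            print(f"ALERT: {name} p95 {worst} ms exceeds {SLA_MS[name]} ms")

aggregate_and_alert([("Savings Deposit", 1800), ("Savings Deposit", 2600),
                     ("Balance Inquiry", 400)])
```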


Originally RUM focused strictly on HTTP/S traffic. But a while back support was expanded to do general tracking of TCP traffic, both streaming and non-streaming. In more recent releases additional upper-level protocol analysis has been added. Beyond HTTP/S, current support includes:



  • XML/SOAP

  • Siebel

  • WebSphere

  • MPLS


Related Items:



For Business Availability Center, Michael Procopio


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP Software group on LinkedIn and/or the Business Availability Center group on LinkedIn.


 

Fighting or friendly, Problem Isolation and OMi

by Michael Procopio


In the post  Event Correlation OMi TBEC and Problem Isolation What's the Difference, my fellow blogger, Jon Haworth, discussed the differences between TBEC and Problem Isolation. To be consistent, I'll use the acronyms PI for Problem Isolation and TBEC to refer to OMi (Operations Manager i series) Topology Based Event Correlation.


Briefly, he mentioned that TBEC works “bottom up”, that is starting from the infrastructure, with events. PI works “top down”, that is, starting from an end user experience problem, primarily with metric (time series) data.


Jon did an excellent job describing TBEC; I’ll do my best on PI because like Jon I have a conscience to settle.


Problem Isolation is a tool to:


1. automate the steps a troubleshooter would go through


2. run additional tests that might uncover the problem


3. look at all metric/performance data from the end user experience monitoring and all the infrastructure it depends on


4. find the infrastructure metric that most closely matches the end user problem using behavior learning and regression analysis techniques (developed by HP Labs)


5. bring additional data such as events, help/service desk tickets and changes to the troubleshooter


6. allow the troubleshooter to execute Run books to potentially solve the problem


Potentially the biggest difference in the underlying technology is that Problem Isolation does not require any correlation rules or thresholds to be set for it to do the regression analysis to point to the problem. Like TBEC, it does require that an application be modeled in a CMDB.


An example: Presume a situation with a typical composite application - web server, application server and database. No infrastructure thresholds were violated; therefore, there are no infrastructure alerts. Again, as mentioned in the previous post, end user monitoring (EUM) is the backstop. EUM alerts on slow end-user performance; now what?


Here is what Problem Isolation does:


1. determines which infrastructure elements (ITIL configurations items or CIs) support the transaction


2. reruns the test(s) that caused the alert – this validates it is not a transient problem


3. runs any additional tests defined for the CIs


4. collects Service Level Agreement information


5. collects all available infrastructure performance metrics (web server, application server, database server and operating systems for each) and compares them to the EUM data using behavior and regression analysis



Problem Isolation screen showing the performance correlation between end-user response time and SQL Server database locks




6. determines and displays the most probable suspect CI and alternates


7. displays run books available for all infrastructure CIs for the PI user to run directly from the tool


8. allows the PI user to attach all the information to a service ticket, either an existing one or a newly created one
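
To illustrate the kind of analysis step 5 describes, here is a deliberately simplified sketch that ranks candidate infrastructure metrics by how strongly they track the end-user response-time series, using a plain Pearson correlation. This is not the HP Labs algorithm Problem Isolation actually uses; the metric names and values are invented.

```python
# Rank infrastructure metrics by correlation with end-user response time.
# Illustrative only -- a stand-in for the behavior/regression analysis step.
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

end_user_ms = [310, 450, 900, 1600, 1550, 420]           # EUM response times
candidates = {
    "web_server.cpu_pct":      [35, 37, 40, 38, 36, 34],
    "app_server.heap_used_mb": [500, 510, 520, 515, 505, 500],
    "db.lock_count":           [4, 9, 40, 95, 90, 7],     # tracks the slowdown
}

suspects = sorted(candidates,
                  key=lambda m: abs(pearson(end_user_ms, candidates[m])),
                  reverse=True)
print("Most probable suspect CI metric:", suspects[0])    # db.lock_count
```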


Another key differentiator of OMi/TBEC and PI is the target user. There is such a wide variance in how organizations work that it is hard to name the role, but let me give a brief description and I think you will be able to determine the title in your organization.


There are some folks in the organization whose job is to take a quick look (typically < 10 minutes, in one organization I interviewed < 1 minute) at a situation and determine if they have explicit instructions on what to do via scripts or run books. When they have no instructions for a situation they pass it on to someone who has a bit more experience and does some free form triage.


This person might be able to fix the problem or may have to pass it on to a subject matter expert, for example to an Exchange admin if they believe it is an MS Exchange problem. It is this second person that Problem Isolation is targeted at. It helps automate her job, reducing what might take tens of minutes to hours and performing it in seconds. If it turns out she can’t solve the problem, it automatically provides full documentation of all the information collected. That alone might take someone five minutes to write up.


OMi’s target is the operations bridge console user. Ops Bridge operators tend to be lower skilled and face hundreds if not thousands of events per hour. Jon described how OMi helps them work smarter.


TBEC and Problem Isolation both work to find the root cause of an incident but in different ways. Much like a doctor might use an MRI or CAT scan to diagnose a patient based on what the situation is, TBEC and Problem Isolation are complementary tools each with unique capabilities.


Problem Isolation will not find problems in redundant infrastructure that OMi will. Conversely, OMi can’t help with EUM problems when no events are triggered, where Problem Isolation will.


We know this can be a confusing area. We welcome your questions to help us do a better job of describing the difference. But these two are definitely friendly.


For Business Availability Center, Michael Procopio


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP Software group on LinkedIn and/or the Business Availability Center group on LinkedIn.


Related Items



  1. Advanced analytics reduces downtime costs - detection

  2. Advanced analytics reduces downtime costs – isolation

  3. Problem Isolation page

  4. Operations Manager i page

Law School Admission Council (LSAC): customer case study

LSAC is a nonprofit organization founded in 1947 to facilitate the law school admission process. Today, however, LSAC has evolved into a cutting-edge technology services provider: its newly updated software application, ACES2, provides data-hosting services and online, real-time admission processing to LSAC members, which number more than 200 law schools in the United States, Canada, and Australia.

Challenges:                                                  
A key turning point in LSAC’s thinking about business technology management was Hurricane Katrina, recalls Jerry Goldman, Director of Technical Services, LSAC. “The hurricane didn’t affect our business, but we saw how it did affect some of the law schools we serve,” Goldman says. “It opened our eyes to the importance of having a business continuity solution in place.” They started on a journey to enable business continuity and IT Service Management (ITSM) best practices for Storage Area Network (SAN) management.

Approach:
To achieve this objective, LSAC began implementing HP Real User Monitor (RUM) software, part of the HP Business Availability Center solution. “RUM lets us better analyze our ACES2 services,” says Jerry Goldman, Director of Technical Services, LSAC. “We can look at transaction volume, network performance, and application-response times, and analyze whether performance issues are related to application or network issues.” With this set of comprehensive and accurate data, LSAC can work constructively with its member law schools to optimize ACES2 performance. “We’ve committed to delivering platinum service levels to our clients,” Goldman says. “HP RUM software is a critical enabler in achieving that goal.”

Results:
Goldman says, “HP Real User Monitor (RUM) lets us better analyze our ACES2 services.” The solution has helped improve LSAC's ability to deliver platinum-level services to clients, reduced the risk of data loss, and reduced the risk that an outage will impact operations.

For the full story please visit: LSAC boosts service with technology 

Do composite applications really need to be managed?

YES - large-scale applications such as SAP and Oracle Siebel support core business functions. Many organizations build and implement integrations between SAP systems and non-SAP applications such as Siebel using SAP NetWeaver, SOA, and/or other middleware components to support complex business processes. If they fail, business goals are in jeopardy. These complex applications are woven into heterogeneous networks, and every change or fix ripples through multiple systems and applications.


 


The focus of IT has been on managing systems, efficiently and at low cost. In today’s environment, this systems orientation results in disconnected monitoring data that does nothing to indicate the health of overall business services, making end-user complaints the first indication that service levels aren’t being met. It also makes it extremely difficult to assign problems to the team that can fix them most quickly.


 


HP Business Availability Center for SAP applications is a comprehensive solution to proactively manage SAP & non-SAP environments in a production environment, so that IT is able to best leverage its resources and respond faster to SAP incidents, thus increasing the value its SAP applications deliver to the business.


 


We have a very exciting flash demo (total run time 3:34) highlighting our approach to helping our customers better manage their business services and complex IT landscapes.


Check it out:


https://h30406.www3.hp.com/campaigns/2009/demo/bsm_sap/index.html


 


You can read more details on the solution here: http://h20229.www2.hp.com/partner/protected/assets/pdf/4AA1-9302ENW.pdf



Webinar - "The Cloud and Your Applications: What is the Impact on Application Management"

 


Date: Thursday, July 30, 2009


Time: 11 am Pacific / 2 pm Eastern / 7:00 pm GMT Daylight Time


Are your applications doing what they are supposed to be doing? Cloud computing, tiered applications, Software as a Service (SaaS) – all add complexity and make that question more difficult to answer.

Customers are finding more business service problems. The result: customer satisfaction and retention is at risk. The good news is that despite declining budgets, you can reduce downtime, increase service levels, and improve user experience quality. 

Hear EMA Research Director Julie Craig share the early results of a recent market research survey.  You’ll also hear from HP Product Marketing Manager Amy Feldman, who will share customer best practices and success stories.


Register at http://www.enterprisemanagement.com/research/asset.php?id=1506



Related Items: 





HP Business Availability Center 8.02 What's New

 by Michael Procopio


 BAC 8.02 is now generally available. Here are the highlights of what's new. There are also a number of defect fixes in this release.


 



  • Netuitive integration: Enables integrating Netuitive alarms into Dashboard

  • System Availability Management (SAM) enhancement: SAM administration now displays SAM points in use, enabling more effective management of SiteScope license points by increasing visibility into point consumption

  • Apache Web server upgrade: Upgrade of Apache Web server to version 2.2.11 keeps Business Availability Center current with the latest industry release and compliant with security requirements

  • End User Management (EUM) enhancements, including:

    • Real User Monitor (RUM): Additional OS support for RUM probe (Windows 2003 and RHE5 64 bit); Improved SSL traffic handling for special environments; Seven protocol decoder packs for slow requests: MSSQL, LDAP, MySQL, IMAP, POP3, SMTP, FTP; Improved system health; Keystore management improvements

    • Business Process Monitor (BPM): Support for VuGen 9.5; BPM support for Windows Vista; New MSI installation replaces InstallShield on Windows platforms; New Solaris packaging (SPARC) installation replaces InstallShield on Solaris platforms

  • Business Process Insight enhancement: Implemented a way to link sub-processes to parent processes.

  • Interactive integration documentation for Business Availability Center-Service Manager/Service Center integration: New interactive document that enables selecting specific integration parameters and viewing only the documentation relevant for the integration specified


 


For the Business Availability Center, Michael Procopio


 


Related items:



 


 

Business and IT closer but still not on the same page

by Michael Procopio


Network World editor Denise Dubie assessed a report by the analyst firm Aberdeen Group in her recent article Application performance management: Keeping an eye on the end-user prize. Her comments mirrored those of many people I have spoken with recently, who have said their top priorities this year are revenue and removing any distractions from making revenue.


What was pleasant to see was data showing that end-user performance is still important. More frustrating was that business and IT managers still differ in their priority of what needs to be measured, according to the article. As you might guess, business folks are more concerned with business processes while IT folks are more concerned with the infrastructure. One step down from that, the article covers the specifics of how each group prioritized application performance monitoring.


Both end-user and infrastructure monitoring are critical. Recently, in a talk I gave with a customer at HP Software Universe, he showed this picture.



 


 


This does a good job of making the point that end user monitoring is critical. Since there are multiple pieces to the IT part of the puzzle, cumulative effect is important.


This customer said that when he took his job there was a Severity 1 (highest priority) problem meeting every day, typically with more than 10 "Sev 1" items. Today he has no meetings and approximately one Sev 1 per week.


How did he get there? By monitoring the infrastructure. He said that when they tracked down the source of a problem, they put in a new infrastructure monitor for the item that failed, so he got early warning. But even in his current state he commented that end-user monitoring is important because things change, whether it is a new version of the application or a change in the infrastructure, creating a situation where something can go wrong that isn't currently being monitored.


For the Business Availability Center, Michael Procopio


 


Related Items:



 



 


 


Not true, IBM

 By Mike Shaw


IBM recently made some incorrect claims on their web site about HP's management products. The network side of those claims was handled on our network management blog.  I wanted to handle the application management claims here.


 


J2EE Diagnostics Claims


IBM claimed that HP's BAC solution (our solution for application management) cannot provide drill-down into J2EE applications. This is not true:


 



  • HP Diagnostics software for J2EE provides a top-down, end-to-end lifecycle approach for seamlessly monitoring, triaging and diagnosing critical problems with J2EE and Java applications – in both pre-production and production environments. 

  • HP Diagnostics for J2EE starts with the end-user (real and synthetic), then drills down into application components, systems layers and back-end tiers – helping you rapidly resolve the problems that have the greatest business impact

  • HP Diagnostics will monitor any Java application, and will discover and monitor the relationship between applications (Java and .NET)


 


Application and infrastructure data integration claims


IBM further claimed that HP's BAC does not have the capability to correlate application data to infrastructure data. This is not true. Our integration between the application and the infrastructure layers is two-way - from bottom-up and from top-down.


 




  • Bottom-up:



    • You can see how an event impacts business services above by looking upwards through the service topology held in HP's CMDB. The services you can look up to may be applications, they may be a user experience (e.g. the online check-in user experience), or they may be steps in a business process. And, you can see what SLAs are resting on the impacted services and those SLAs' closeness to jeopardy. 

    • This service topology information can be discovered using a number of different methods, all under the overall control of the dynamic discovery manager. For example, if you have Operations Center's Smart Plug-ins (SPIs), many of these do discovery of their domains and this is now fed into the CMDB. Or, if you are doing agentless monitoring (less expensive to buy and manage, but not the same level of fidelity and action control as with agents - it's horses for courses), this will also discover the hierarchies under the items it's monitoring. And if you have NNMi, our network management product, it will put its end-point discovery into the CMDB. If you want everything discovered from the business service on down, you can use our advanced discovery technology. As I said earlier, our discovery manager is the overall controller, orchestrating the other discovery methods like SPIs and NNMi should you choose to use them

    • The new OMi "TBEC" (topology-based event correlation) technology is able to take an event stream, map the events to services, and then group events related by services in the service topology and thus infer which are causal events (events we need to take action on) and which are symptomatic events (events that occur as a consequence of a causal event and thus don't need to be actioned). Included in the symptomatic events may well be an event from our user experience or business transaction monitoring technology. Imagine a DB is having a performance problem. This, in turn, causes a user application to slow. The real user monitor notices this and raises an event. OMi TBEC will notice both events, realize they are related in the service topology, and infer that the DB problem is the cause and the real user monitor event is a symptom (a simplified sketch of this grouping appears after these lists). Is this new? No - the technology was invented by Bell Labs and has been in our NNMi network management product for about 18 months now.

    • Summary: bottom-up we have two links up to the application / business service layer. The first is for extensive "service impact analysis" and the second is for TBEC - for analysis so you just get to see the actionable events you need to do something about.






  • Top-down



    • Our performance triage technology takes performance and event information from dependent services (those services that the business service having a performance problem rests on). It uses an HP Labs patented algorithm to infer causal relationships between infrastructure service performance and fault and the business service's performance. So what? This allows you to know which area is causing the performance problem. Useful, given that the average performance problem goes through 6 to 8 groups before being solved! By the way, the event stream doesn't have to come from Operations Center. We can, should you still have it, having not swallowed the rip'n'replace mega-pain yet, take events from Tivoli (or any other event management system).

    • The performance triage module doesn't just look at performance and event streams. It looks at recent changes in the dependent services as determined by the discovery monitor (e.g. Server XYZ has had 4 GB of memory ripped out). I'm sure you've heard the stat that if a change has occurred, there's an 80% chance it's the cause of the problem.

    • And, as of last November, the performance triage module also considers the compliance state of the dependent services. How does it do this? The ex-OpsWare Server Automation product now puts its discovered information into the CMDB too, and compliance state is one of the things it discovers.  I'm sure there's a stat on how non-compliant systems screw up business services above :-)






  • And finally, something we are very proud of, and something that people really like - the 360 degree view. Take a service, any service. For that service, you can see the following:



    • The performance of the service versus its KPIs. Now and over time.

    • What services are above it

    • What user experiences are resting on it and what their state is

    • The business processes resting on it and the throughput of those services (i.e. Are they slowing down because of this service?)

    • The SLAs resting above this service and their closeness to jeopardy

    • The status of the services this service is resting on

    • The change state of services at and below this service

    • The compliance state of services at and below this service

    • The planned changes for this service

    • What the service desk knows about this service in terms of incidents - "do we get an incident on this every Monday at this time?"
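
Since the TBEC bullet above is the most algorithmic part of this post, here is a much-simplified sketch of the causal-versus-symptomatic grouping idea: given a dependency map of services, an event on a service that other alerting services depend on is treated as causal, and the events above it as symptoms. This is illustrative Python only, not OMi code, and the topology is invented.

```python
# Toy topology-based event correlation: separate causal events from symptoms.
depends_on = {                                      # service -> the service it relies on
    "checkin_user_experience": "checkin_app",
    "checkin_app": "orders_db",
}

events = ["checkin_user_experience", "orders_db"]   # services that raised events

def classify(events, depends_on):
    causal, symptomatic = [], []
    for svc in events:
        # Walk up the dependency chain; if any ancestor also raised an event,
        # this event is a symptom of that deeper problem.
        ancestor, is_symptom = depends_on.get(svc), False
        while ancestor:
            if ancestor in events:
                is_symptom = True
                break
            ancestor = depends_on.get(ancestor)
        (symptomatic if is_symptom else causal).append(svc)
    return causal, symptomatic

print(classify(events, depends_on))
# (['orders_db'], ['checkin_user_experience'])  -> act on the database event
```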




 


 


 


OK. I've gone to town on this response a little bit. But to HP Software, saying we can't correlate application data to infrastructure is like telling Usain Bolt he can't run!


 


Rip out Operations Center and replace it with NetCool


Finally, in this piece of their web site, IBM was suggesting people move from Operations Manager to NetCool. As you probably know, the migration from Tivoli to NetCool is a rip'n'replace. Operations Manager has never done this to our customer base. As a recent and concrete example, the new OMi functionality with its ability to do topology-based (i.e. no writing of event correlation rules) event correlation to reduce event streams to actionable events is an ADD-ON to existing Operations Center installations. No rip, no replace.


 


If however, you have a predilection for rippin' and replacin', then please do consider the move from Operations Center to NetCool. Personally, I'd add OMi instead because I'd want the topology based event correlation and easy life - but maybe that's just me!


 

HP Software Universe - Mainstage Robin Purohit

After a break, Mainstage split into a Business Intelligence session and a Business Technology Optimization (BTO) session. I went to the latter, hosted by Robin Purohit, vice president and general manager, Software Products, HP Software & Solutions.



Here are some bits I found interesting:



  1. There are 15M virtual servers now, and that number is expected to double in 2 years.

  2. Three key areas of change IT will see are: virtualization, SaaS, and Web 2.0.

  3. Web 2.0 screws up a lot in management; HP is building Web 2.0 compatibility into BTO tools.

  4. Data Protector, monitoring tools, and mapping tools are including or working to include virtualization capability.

  5. Service Manager and Asset Manager are now available as SaaS.

  6. 35% of Flash apps violate Adobe best practices. HP released a testing tool called SWFScan, which you can download for free. The same technology is built into some of HP's BTO products. Customers can also do a free trial of our scanning SaaS.

  7. One customer cut $5M in costs just by automating incidents with HP Operations Orchestration.

  8. HP Software recently announced a suite of software for operations of BlackBerry implementations. See HP and RIM Announce Strategic Alliance to Mobilize Business on BlackBerry for more details.

  9. HP Software is publishing configuration management best practices later this month.

  10. Brian Byun, VP Global Alliances at VMware, came on stage and announced an expanded partnership with HP to jointly develop software to manage VMware hypervisor technology. Denise Dubie at Network World wrote HP, VMware team to manage virtual servers.


There are a variety of Twitter accounts you can follow, as well as the hashtag #HPSU09. Search Twitter for #HPSU09.


HPITOps – Covers BSM, ITFM, ITSM, Operations and Network Management


HPSU09 – show logistics and other information


HPSoftwareCTO


informationCTO


HPSoftware


BTOCMO – HP BTO Chief Marketing Officer


 


For HP BSM, Michael Procopio

HP Software Universe - day 1

by Michael Procopio


 


Today was the first day of Software Universe. I had customer meetings all day today. Here are some interesting items from my conversations.



  1. Most said budgets were down in 2009 and will be flat to down in 2010. But a few who were related to government stimulus said theirs will be up.

  2. Co-sourcing and outsourcing continue as ways to reduce costs

  3. A few were focusing on asset management with the express purpose of getting rid of things in the environment they don’t need anymore. They know they are out there but they need to find them first.

  4. Most customers I spoke to said they keep aggregated performance data for 2 years; the range was 18 months to 5 years.

  5. There was an interesting discussion about the definition of a business service versus an IT service. The point being made was a business service by definition involves more than IT. While I agree this is a good point, I think the IT industry has focused on business service as a way to say - “I’m thinking about this IT service in the context the business thinks about it not just from my own IT based perspective”

  6. A number of customers have or are about to implement NNMi. If this is something you are interested in check out the NNMi Portal

  7. Many customers are moving to virtualized environments; the highest percentage I heard was 70%. Another customer forces all internal developers to deliver software as a virtual image.

  8. Another topic was how to monitor out-tasked items. For example, some part of what you offer is delivered by a third party - how do you make sure they are living up to your standards? Two methods I heard were 1/ use HP Business Process Monitor, and 2/ get the third party to send you alerts from their monitoring system.

  9. On the question of whether your manager of managers sends data back to sync the original tools: one did, one didn’t. For the one that did, it was part of a closed-loop process.

    • The monitoring tool finds a problem and sends an alert to the MOM (manager of managers).

    • The MOM sends an event ID back to the monitoring tool.

    • A subject matter expert uses the monitoring tools to diagnose the problem.

    • Once the problem is diagnosed, the expert updates the monitoring tool, which updates the MOM.




A very productive day for me. I hope some of this is useful information to you.


For additional coverage my blogger buddy Pete Spielvogel is also here and beat me to the first post. You can read his posts at the ITOps Blog.


There are a variety of Twitter accounts you can follow as well as the hashtag #HPSU09


HPITOps – Covers BSM, Operations and Network Management


HPSU09 – show logistics and other information


HPSoftwareCTO


informationCTO


HPSoftware


BTOCMO – HP BTO Chief Marketing Officer 


 


For HP BSM, Michael Procopio

How long between the problem and the first phone call?

 By Mike Shaw, BSM Product Marketing



 Last year, we did a series of in depth interviews with customers (28 of them, actually).  As part of these interviews, we asked if people did proactive user experience monitoring - either using synthetic scripting technology to pretend to be a user, or using a probe on the network to look at the data going to the users' screens and monitoring the response time.


 


About half the respondents said they did. This ties in with a recent Aberdeen study that found 57% of companies didn't do user experience monitoring.


 


So, we asked one IT manager who didn't do user experience monitoring why he had not invested in this technology. "Because we respond very quickly when the first customer rings in," was his response.


 


And since that day, I've been on a quest to get a magic number: how many minutes, on average, elapse between a business service giving a poor user experience and the first customer calling in? I have only three data points and no definitive study. The average seems to be about 30 to 45 minutes.


 


To get another angle, whenever I present to a friendly audience, I'll ask them how often they have called a company when they have had problems with a user interface (e.g. on an ordering web site). Of the 160 people I've asked, just two had actually picked up the phone, and in both  situations, it's been something critical like sorting out a mobile/cell phone bill.


 


I have another data point. A study by the Corporate Executive Board in 2004 found that the average cost to a company of downtime is $1.3m per hour. That's roughly $22,000 per minute.


 


So, we have 30 to 45 minutes (a very rough estimate). We have roughly $22,000 per minute. Being conservative (and very rough), we have a cost of 30 minutes x $22,000, or about $650,000 per poor user experience problem.


 


Do you have any data on the average time between a poor user experience situation starting and the first customer calling in? If you do, could you please post a comment with the data - it amazes me that such data is not readily available "out there".

BSM Evolution: Small Enterprise Example

My previous BSM evolution postings focused on mega-corporations and large IT organizations with a myriad of personas.  In this post, I will contrast the experience of a relatively small IT shop of roughly 30 full time IT operations personnel.


Back when the economy was cooking along, an up and coming commercial construction company grew right out of their business model.  Historically, they utilized a decentralized model, setting up and staffing a stand-alone onsite operation for each new project. This model was excellent at delivering customized project support, but lacked scalability and leverage; with remote site spin-up slow and error prone.


From an IT perspective, the CIO realized they needed to, in his words, "Consolidate and professionalize the IT operations", with the following goals:



  1. Improve quality of service and experience for worksite users & applications

  2. Contain IT costs and efficiently scale current IT personnel to meet growth

  3. Improve speed, accuracy, and agility of spinning up new project worksites

Key Personas:


CIO



  • Many years of commercial construction experience

  • Personally drove IT consolidation / professionalization strategy and roadmap

  • Directly engaged in evaluating and selecting the solution vendor/consultant

VP of IT



  • "Co-pilot" for CIO on strategy, drove project deployment and vendor engagement

Subject Matter Experts (SME)



  • One for performance and availability tools / architecture

  • One for service management process workflow and automation (helpdesk)

Two Key Parallel Evolution Paths:


Path A:  Performance, availability, and quality of experience monitoring


Step 1: Deployed synthetic end-user / application monitors, agentless remote-site infrastructure monitoring, and general WAN/LAN management; basic service experience reporting and per-site performance dashboards


Step 2: Enterprise infrastructure fault/performance (agent-based system, OS, DB); central "IT Command Center" event console with trouble-ticket integration


Step 3: In-depth application management modules (Exchange, SAP); advanced network services (route analytics, performance)


Path B:  Service management process workflow and automation


Step 1: Single call/request center organization established; incident management (utilizing a pre-packaged ITIL module)


Step 2: Knowledge management process, analytics and automation modules


Step 3: Configuration and change management process/automation; Service Level Management definition and basic reporting


An Uncommon Sequence of Evolution Steps


Notice the interesting order of the steps.  The CIO dictated that the performance monitoring path start with remote-site end-user / application experience monitoring.  The original roadmap proposed by the system integrator recommended starting with basic data center tools, advancing through a central event console, then application and database management, and finally end-user experience.  This is a traditional evolution path, but the CIO was adamant that "what happens at the remote work-sites IS the business".  So he wanted immediate awareness of remote-site experience to drive the design of every step in the roadmap.


There was a similar "cultural" direction from the CIO on the service management workflow path.  Again, the CIO insisted that knowledge management be moved up in the evolution, ahead of configuration, change, and service level management.  Typically, significant knowledge management execution is viewed by most organizations as "icing on the cake" and implemented only after all the other core ITIL processes.


This CIO believed that analyzing and formalizing knowledge learned from successes and failures of spinning-up remote sites and dealing with issues was the best early investment. This approach immediately became part of the standard IT culture, and played a significant role in guiding change and configuration management process definition.


The CIO's Project-Based perspective


This CIO is indeed very ITIL-savvy, but I think living and breathing the commercial construction business had a significant impact on his choice of system integrator. During the bidding process for the ITSM/BSM contract, it came down to three competitors in a direct "shoot-out". System integrators number one and two brought product and ITIL experts to the shoot-out, concentrated very heavily on features and functions, and gave a fixed-price bid of 200 deployment days.


System integrator number three brought a project manager to the shoot-out and devoted 75% of the discussion to "here is how we will navigate the project and be successful". Can you guess who won? It shouldn't be news to anyone that a CIO's background alters the decision criteria, or the roadmap vision.... But it is always interesting to observe it in action.  Maybe I will write a post about that someday.


Conclusion


This IT organization is relatively small, so the decision-making process and personas are greatly simplified compared with the large corporations previously analyzed. Despite the CIO's unique influence on approach and deployment sequence, in the end the same fundamental truths of BSM/ITSM evolution apply.... just on a different scale, agility and timeframe.


Bryan Dean - BSM Research




BSM Evolution: The CIO/Ops Perception Gap

 


There are many potential culprits for why IT organizations struggle to make substantive progress in evolving their ITSM/BSM effectiveness. A customer research project we did a few years ago offered an interesting insight into one particular issue that I rarely see the industry address. The research showed that most CIOs simply had a different perception – when compared to their IT operations managers – of their IT organization’s fundamental service delivery maturity and capability. This seemingly benign situation often proved to be a powerful success inhibitor.


 


The Gap:


A substantial sample of international, Global 2000 enterprise IT executives participated in the study. When asked to rank investment priorities across a broad range of IT capabilities, we saw a definite gap. IT operations managers consistently ranked “Investing to improve general IT service support and production IT operations” as a top 1 or 2 priority, whereas CIOs ranked the same capability much lower, at priority 6 or 7.


 


The Perception:


When pressed further, the CIOs said they believed that the IT service management basics of process and technology had already been successfully completed, and they had mentally moved on to other priorities such as rolling out new applications, IT financial management, or project and portfolio management.


 


Most of the CIOs in the study could clearly recall spending thousands of dollars sending IT personnel to ITIL education, and thousands more purchasing helpdesk, network, and system management software. Apparently, these CIOs thought of their investment in service operations as a one-time project, rather than an ongoing journey that requires multiple years of investment, evolution, re-evaluation, and continuous improvement.


 


IT operations managers, on the other hand, clearly had a different view of the world. They were generally pleased with the initial progress from the service operations investments, but realized they were far from the desired end state. The Ops managers could plainly see the need to get proactive, to execute advanced IT processes and deploy more sophisticated management tools, but they could not drain the proverbial swamp while fighting off the alligators.


 


The Trap:


We probed deeper in the research, diligently questioning the IT operations managers on why they didn’t dispel the CIOs’ inaccurate perception. In order to secure the substantial budget, these Ops managers had fallen into the trap of over-promising the initial service management project’s end state, ROI and time to value. (I wouldn’t be surprised if they had been helped along by the process consultants and software management vendors!)


 


These Ops managers saw it as “a personal failure” to re-approach the CIO and ask for additional budget to continue improving the IT fundamentals. Worse yet, they had to continually reinforce the benefits of the original investment so the CIO didn’t think they had wasted the money. So, the IT operations staff enjoyed the privilege of reactively working nights and weekends to meet the business’ expectations and make sure everyone kept their jobs. Meanwhile, the CIOs slept well at night thinking, “Hey, we are doing a pretty darn good job”, but faced the next day asking, “Why are my people burnt out?” A vicious cycle.


 


Recommendation through Observation:


I’m not wild about making recommendations, since I merely research this stuff… I don’t actually perform hands-on implementations. Instead, I will offer some observations of best practices from companies who appear to be breaking through on BSM: lowering costs, raising efficiency and improving IT quality of service.


 



  1. Focus on Fundamentals: It is boring and basic, but absolutely critical, to continually look for ways to improve the foundational service management elements of event, incident, problem, change, and configuration management. Successful IT organizations naturally assume that if they implemented these core processes more than three years ago, they likely need to update both technology and process. If World Cup football squads and Major League Baseball teams revisit their fundamental skills each and every year, why wouldn’t IT?

 



  2. Assume a Journey: IT leaders who develop a step-wise, modular path of realistic projects that deliver a defined ROI at each step have the best track record of securing ongoing funding from the business. The danger here is defining modular steps that are so disconnected and siloed that IT never progresses toward an integrated BSM/ITSM process and technology architecture. This balance continues to be one of the most difficult to manage.

 



  3. Empowered VP of IT Operations: The advantages of a CIO empowering a VP of IT Operations and holding them accountable for end-to-end business services have been discussed in previous posts. Having a strong VP of Operations with an executive focus on service operations and continual service improvement, as well as end-to-end service performance responsibility, does appear to be a growing trend and success factor.

 



  4. Focus on the Applications: In the same research study that showed the perception gap on “Investing to improve general IT service support and production IT operations”, there was consistent agreement on “Investing to improve business-critical application performance and availability”. The CIOs, Ops managers and business relationship managers all ranked this capability as a top 1 or 2 priority.

 


Successful BSM implementations focus on the fundamentals of process and infrastructure management, but do so from a business service, or an application perspective. This approach not only enables an advantageous budget discussion with the business, but it also hones the scope and execution of projects.


 


It is difficult to assess the relative impact of this CIO/IT Ops perception gap, considering the wide variety of challenges that IT faces. But hopefully, this post gives you something to consider when assessing your own IT organization’s situation and evolution.


 


Let us know where your organization fits – please take our two-question survey (plus two demographic questions). We’ll publish the results on the blog.


 



  • Describe the perception of your IT's fundamental service delivery process

  • How often does your IT organization significantly evaluate and invest to update your fundamental IT process

 


Click Here to take survey


 


Bryan Dean – BSM Research

Monitoring your cloud computing is as easy as calling an airport shuttle

HP made an announcement about new cloud computing management capabilities today: HP Unveils "Cloud Assure" to Drive Business Adoption.


HP currently offers Software-as-a-Service (SaaS) for individual management applications such as HP Business Availability Center (BAC) and HP Service Manager primarily for intranet and extranet applications.




HP Cloud Assure helps customers validate:



  • Security – by scanning networks, operating systems, middleware layers and web applications. It also performs automated penetration testing to identify potential vulnerabilities. This provides customers with an accurate security-risk picture of cloud services to ensure that provider and consumer data are safe from unauthorized access.

  • Performance – by making sure cloud services meet end-user bandwidth and connectivity requirements and provide insight into end-user experiences. This helps validate that service-level agreements are being met and can improve service quality, end-user satisfaction and loyalty with the cloud service.

  • Availability – by monitoring cloud-based applications to isolate potential problems, identify root causes within end-user environments and business processes, and analyze performance issues. This allows for increased visibility, service uptime and performance.




HP Cloud Assure provides control over the three types of cloud service environments:



  • For Infrastructure as a Service, it helps ensure sufficient bandwidth availability and validates appropriate levels of network, operating system and middleware security to prevent intrusion and denial-of-service attacks.

  • For Platform as a Service, it helps ensure customers who build applications using a cloud platform are able to test and verify that they have securely and effectively built applications that can scale and meet the business needs.

  • For Software as a Service, it monitors end-user service levels on cloud applications, load tests from a business process perspective, and tests for security penetration.




A diagram showing the differences in the services is at Cloud Computing Basics.




In the end it doesn't matter where the service is; you need to be sure it is available and performing to expectations. Cloud Assure provides that capability in a very agile way. You say "I need this service monitored" and it is monitored. It's just like calling for an airport shuttle -- you call, they show up.
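
As a rough illustration of what "I need this service monitored" boils down to, here is a minimal sketch of a synthetic availability and response-time probe. The URL and SLA threshold are placeholders I made up, and this is only the idea, not HP's SaaS implementation:

    import time
    import urllib.request

    SERVICE_URL = "https://example-cloud-service.test/login"   # placeholder endpoint, not real
    RESPONSE_TIME_SLA_SECONDS = 2.0                            # assumed service-level threshold

    def probe(url):
        """Run one synthetic check: is the service up, and how fast does it answer?"""
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                ok = response.status == 200
        except Exception:
            ok = False
        return ok, time.monotonic() - start

    available, seconds = probe(SERVICE_URL)
    if not available:
        print("ALERT: service unavailable")      # in practice, raise an event or ticket
    elif seconds > RESPONSE_TIME_SLA_SECONDS:
        print(f"WARN: slow response ({seconds:.2f}s exceeds {RESPONSE_TIME_SLA_SECONDS}s SLA)")
    else:
        print(f"OK: responded in {seconds:.2f}s")

Run the check on a schedule and feed the alerts into your event console, and you have the essence of proactive end-user monitoring.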





For Business Availability Center, Michael Procopio, product manager, HP Problem Isolation.

OpEx versus CapEx

Forrester just posted about how the recession is hitting capital budgets (CapEx), suggesting that you consider using operating expenses (OpEx) to purchase software (http://www.idc.com/getdoc.jsp?containerId=lcUS21765009).


About 18 months ago, we introduced one-year term licenses for the Business Availability Center (application and business transaction management) software so that it is more likely to fit within OpEx budgets.


Mike Shaw.

BSM Evolution Paths: Financial Services Example

 


When two Fortune 500 companies merge, the IT convergence can feel like two high-speed trains on parallel tracks speeding toward a single-track tunnel. Not only is IT tasked with maintaining or increasing quality of service, but the CEOs are quite impatient to quickly rationalize the IT operating expense equation of “1+1=1.25”. Maybe 1.50 if you have an extremely benevolent Board of Directors.


 


Unlike the Automotive Industry example posted earlier (BSM Evolution Paths: Auto Industry Sample), this Financial Services example has much less tops-down roadmap direction, and much more independent parallel paths. Let’s take a look at three of the key personas and evolutions within these parallel paths.


 


Data Center Operations Manager; Infrastructure Operations path:


 


The new Data Center Operations Manager (DCOM; reporting to VP of IT Ops) commissioned a tools architecture analysis. They inventoried their management tools and counted over 80 major “platforms” in the fault, performance and availability category alone!


 


The DCOM empowered a Global Software Management Architect to drive a “limited vendor” strategy to simplify and standardize the tool environment. Although there were many individual domain experts bent out of shape, this standardized environment limited the vendor touches, enabled renegotiated license/support contracts, concentrated tool expertise and resulted in improved quality of service.


 


The fault, performance and availability architecture was boiled down to three major vendors covering three broad categories (plus device specific element plug-ins):



  • System Infrastructure (server, OS, database, storage, middleware, LAN feeds, etc.)

  • Network Services (WAN, LAN, advanced protocols, route analytics, etc.)

  • Enterprise Event (consolidated event console, correlation, filtering, root cause)

 


The DCOM could have pushed harder for a single vendor covering all three categories, but it was a matter of time-to-deploy pragmatism. A vendor could only be selected as a category solution if the product had been successfully deployed previously and internal deployment expertise existed to lead the global implementation. This “survival of the fittest” approach did not necessarily drive the most elegant architecture, but it did speed deployment and limit risk.


 


Independent roadmaps and key integration capabilities were developed for each category to meet 6, 12, 18 and 24 month milestones.


 


CTO; Business Service Oversight path


 


Early on in the merger process, there was a power struggle to own the business service visibility and accountability solution. The VP of IT Operations wanted the tools, process and organizational power, but the Lines of Business insisted on a more independent group that would sit between IT Operations and the business-aligned Application Owners.


 


The Online Banking Group from one of the pre-merger divisions had successfully implemented a business service dashboard and Service Level Agreement reporting solution (based primarily on end-user experience monitoring). Using an “adopt and go” strategy, the CIO empowered the CTO to develop an end-to-end group and expand the solution to all six major business units.


 


This business unit expansion rolled out over 12-18 months and was successful, but limited to monitoring and reporting. Over the next 12 months, Application Owners, Line of Business CIO’s and VP of IT Operations all wanted to extend the business service monitoring to:



  1. Problem isolation, application diagnostics, and incident resolution

  2. In-depth transaction management of composite applications

 


Director Service Management; Enterprise CMDB path


 


The Director of Service Management, reporting to VP IT Ops, drove two major initiatives over the first 12 months of the merger.



  1. Consolidate to a single, global, follow-the-sun service desk

  2. Rationalize and standardize the request and incident management process

 


I could easily spend an entire blog post discussing the IT process convergence and standardization, but I refuse! Instead, I’ll focus on what happened in the 12 months following the service desk consolidation.


 


The Director of Service Management launched a CMDB RFP that was originally grounded in incident, problem and configuration management. The RFP touched an enterprise-wide nerve, not to mention setting off a flurry of vendor responses. The project quickly expanded and changed focus to the “hotter” driver of change and (release) risk management, and how to drive all IT processes from an enterprise service model.


 


Once the application owners got involved (from a change/release control perspective), the infrastructure operations teams got involved (from a change and performance/availability perspective), and the CTO got involved (from a business service reporting and accountability perspective), all of a sudden incident management took a back seat in the decision process.


 


In the end, a service discovery, dependency mapping and change/release management solution was selected from a different vendor altogether than the incumbent service desk solution.


 


An interesting journey… so far


 


The three paths described above are clearly a small subset of the overall work done for this corporate merger, but hopefully they give a glimpse into the BSM evolution dynamics. By all accounts, this company has been successful in their journey; you may be interested to know that this financial services company is not participating in the government bail-out program.


 


The lack of a tops-down “enterprise IT transformation” roadmap did not hinder their progress… in fact, some will argue it enabled their progress! You can observe, however, that at the end of each path there is a drive towards further integration and cross-IT dependence. It will be interesting to watch this company and see how their approach evolves as they continue down the intersecting evolution paths.


 


Bryan Dean, BSM Research

Prediction BSM Evolution

I usually do not like making public predictions because I hate being wrong. But I was discussing my last blog post (Business Service Visibility & Accountability: Where is it Homed?) with a colleague and he pressed me for my prediction of where I thought this function would eventually live in the organization. Maybe more of you have the same question, so in this post I will lay out what I believe is the compelling evidence… and I might even make a prediction.


Let’s take a quick look at some of the key evidence or clues:


 


CIO role continues to shift


This has been researched to death, but is still true. CIOs are spending more time on business innovation and less time on production IT operations. The range of issues that CIOs drive and influence is staggering. Does this mean they don’t care about production operations and business service accountability? No, they care greatly; it is just that most CIOs have learned that having a top-notch, empowered VP of IT Operations is the only way to be a proactive CIO.


 


Application owners want to focus on development


My previous post looked historically at application owners and line of business CIOs buying their own business service visibility and accountability tools because of the pressure they felt from the business. This did happen and continues to happen in many organizations, but research shows that after a couple of years of owning, architecting and maintaining these tools, the application owners realize that production management tooling takes valuable time away from their primary goals.


 


Their primary goal is to get new functionality out the door that meets business requirements for function, quality, performance and security. It is still vitally important to the application owners to maintain visibility and accountability once their applications are in production. They will continue to be a catalyst in purchasing performance tools, and in providing the intellectual property for rules, thresholds and reporting metrics. But ownership of the tools, their configuration, vendor management and ongoing maintenance is clearly shifting to the production operations teams.


 


Line of Business CIO’s don’t own enough


Line of business CIOs love to have business visibility and accountability tools in their hot hands, but they also recognize the issues of owning tools without owning the IT infrastructure. Security access and rights are a constant issue for them. Management process and tool architecture is also becoming a more standardized, centralized function that the line of business IT participates in strongly, but really is not in a position to own.


 


Successful customers adding problem resolution


Something I have observed in customers who have successfully implemented a business service visibility and accountability solution is that the next step in their evolution is to tie issue visibility to issue diagnostics and resolution. They find it wonderful that they now have a business-relevant way to measure IT service performance, but their constituents quickly move to, “Ok, now fix it when it breaks”.


 


Nobody in IT will be shocked to hear that the business takes a “what have you done for me lately” stance. So the tool owners now find themselves sorting through how to integrate into the established event, incident and problem management processes. Depending on where they sit organizationally, this can be a painful yet necessary adjustment when trying to improve efficiency and time to diagnose/repair.


 


VP of IT Operations taking on end-to-end responsibilities


Seven years ago we conducted an extensive ITSM customer research project. At that time, there were a large number of CIOs and industry pundits who had taken up the mantra, “Run IT like a business”. IT consolidation, adoption of ITIL process standards, organization alignment and tool deployment were all solid benefits from this era (and continue as we speak), but did not solve the issue of managing end-to-end business services.


 


At the time, too many IT operations managers had become “infrastructure service providers”, and when polled did not feel responsible for the application, the end-user experience, or the final business service. The CIO and many of the application owners ended up shouldering the business service responsibility. Today, this is radically changing, and ITIL V3 clearly reflects this evolution.


 


Application development teams continue to be organizationally aligned to the line of business more often than not, but research is showing a dramatic shift in mindset as to who is responsible for end-to-end application performance. The majority of IT operations organizations today own Level 1 application monitoring and often own Level 2 application support. Level 3 application support typically remains aligned with the development teams, but there is no doubt that the VP of IT Operations is taking on end-to-end responsibility.


 


Tools vendors are getting their act together


Alas, I must at least touch on technology… but only briefly! The major tools vendors have done a commendable job putting together portfolios of solutions that span the BSM/ITSM lifecycle. Plenty of improvement can still be made on integration, interoperability and ease of use; but I think it is fair to say IT finally has access to a management technology architecture that can be leveraged and multi-purposed to serve a wide range of persona needs and management disciplines.


 


The Prediction


You have probably guessed my prediction by now based on my biased presentation of the six clues above. I believe the ownership of the business service visibility and accountability solution will be homed under the VP of IT Operations, and purpose-specific instances will be customized for the application owners, business relationship managers, line of business CIO’s and executive IT management.


 


The VP of IT Operations – empowered by the CIO - will continue to drive compliance to a single, standardized IT process and software management architecture (not “single vendor”, but “limited vendors”). This will irritate many ‘best-of-breed’ fans, but in the end it will pay off.


 


The business service visibility and accountability function will be a module of a more comprehensive fault, performance and availability solution set that effectively ties together discovery, visibility, accountability, issue detection, isolation, diagnosis, business impact analysis and direct connection to the enterprise service model.


 


Implementing this cross-IT management solution will not be easy. Organizationally, the VP of IT Operations will have to empower an independent, executive-level manager to drive it, similar to an ERP application owner. This will fail if owned by an “ivory tower” type; it must be practically driven, trading off the lobbying of existing IT domain and function specialists against the need to consolidate, standardize and implement a modular, multi-purposed solution.


 


Ok, maybe I got a little carried away at the end there, but one cannot ignore the demonstrable evidence of business, organization and persona driver dynamics. I would be surprised if we do not see a pragmatic, yet steady evolution toward this ultimate model.

Webinar announcement: "Decrease IT Operational Costs by Accelerating Problem Resolution"

In a recent post, I talked about  “from user experience monitoring to user experience management”.  


Related to this, a recent Forrester study found that 74% of problems with business services are reported by end users through the service desk, not by infrastructure management tools. The same survey found that an average of six service desk calls are needed to identify the problem owner for a top-down performance problem.


 


What is needed, therefore, if we want to increase IT Ops efficiency and stop using our customers as the most expensive monitoring devices there are, is to proactively detect poor customer experience before our customers do, and to have the tools to quickly and accurately pinpoint the cause of business service performance problems.


Senior Enterprise Management Associates (EMA) Analyst Liam McGlynn and my colleague Sanjay Anne are conducting a webinar on this topic on March 19. The webinar is entitled “Decrease IT Operational Costs by Accelerating Problem Resolution”.


 


For more details and to register, please go to: http://www.enterprisemanagement.com/research/asset.php?id=1127.


 


 

BSM Evolution Paths: Auto Industry Sample

In the last post, Bryan Dean, our research expert on the BSM team, outlined the different ways in which customers evolve towards Business Service Management. In the next few posts, Bryan will give an example of each of the different types of evolution. Over to you, Bryan ....
_______
About three years ago, the business division managers of a multinational automobile manufacturing company planned a bold transformation of their distribution network to leapfrog the competition.  They enthusiastically laid out a roadmap for business process innovation and aggressive customer/dealer satisfaction initiatives.  

Only one real problem: the CIO knew that building, rolling out, and operating the underlying IT for this future business vision exceeded their current capabilities.  The CIO eventually had to raise the red flag and explain to the executive committee why IT was the bottleneck.  Ouch, not a good day.

 

In the previous post, BSM Evolution Paths: Samples and Observations, we talked about five common evolution paths and the organizational and persona dynamics of an Automated BSM/ITSM journey.  In this post we will overview a specific example.


 


To be fair, the CIO spent years driving significant investment in process, tools and the organization.  Let’s look at a subset of key personas and BSM/ITSM foundation: 

 

Director of Infrastructure (reporting to the VP Global IT Ops):



  • Enterprise-class central event/performance platform and console

  • WAN/LAN network management platform

  • Basic, component level performance and availability reporting

  • Dozens of vendor-specific configuration and admin tools

Director of Service Management (reporting to the VP Global IT Ops):



  • Global, consolidated helpdesk/service desk

  • Well defined and automated incident process; basic level problem, configuration, and a manual change process

Director of Applications (development, test & level 3 support.  Reports to business divisions):



  • Suite of pre-production stress-test quality and performance tools

  • End-user  and application performance/diagnostic tools (test environment)

The Key Evolution Steps


Step 1:  CIO empowers and holds the VP of Global IT Operations (VPITops) accountable for end-to-end business service responsibility.  Imagine the panic on his face!  VPITops launches key lieutenants on quick gap analysis.

 

Step 2:  The VPITops needed a quick win.  He believed that visually demonstrating and reporting performance and availability from a business service perspective -versus an infrastructure perspective- would be a catalyst for driving “aligned” IT behavior.  The current network and infrastructure products didn’t have this capability, so VPITops leveraged the tools already proven by the application test and level 3 support team.


VPITops established a new team within Operations (parallel to infrastructure event management) to own and run the end-to-end business service visibility/accountability solution.  Integration was established between the two teams and tools.

 

Step 3a:  VPITops took his new business service visibility/accountability tool (in dashboard/report form) to key business division managers, and established a business relationship management function.  This converted the conversation from anecdotal complaints, to measurable service levels.  The CIO had tangible proof of progress.

 

Step 3b:  While engineering step 2, the Tools and Process Architect realized they needed a better means of discovering and maintaining the IT/Business service models.  Their infrastructure environment was shared, complex and dynamic enough that static service models were not effective, so they brought in an application dependency mapping technology.  This success spawned a serendipitous benefit to another team in step 4a.

 

Step 4a:  The application quality/test and release team realized the service model could be utilized in the service transition process.  They previously had several very painful episodes of moving complex applications from test into production.  With an accurate, up to date service model of the production environment they could better identify dependency issues before roll-out.  Speed and accuracy...  Happy CIO.

 

Step 4b:  The Director of Service Management and the architect evaluated how to federate the data between the application dependency mapping service model and the CI configuration data in the helpdesk.   The software vendor provided a federation / reconciliation adaptor, so the helpdesk was able to leverage the CI relationships and operate off a “single version of the truth” (sounds eerily like an ITIL V3 CMS!). 

 

Near Term Roadmap


  • Automate change/configuration workflow and provisioning


  • Upgrade/replace enterprise event and performance console to leverage the service model for root cause analysis and business impact assessment


  • Apply business service relationship management to additional business divisions


  • End-to-end visibility of composite MQ application business transactions

 

The Verdict of the Journey so far


The CIO still has a job, and has a funded roadmap.  One might ask why they didn’t start with step 4b, and establish the CMDB and service model first?  Well, the CIO was on the hot seat, and they were concerned about getting bogged down in an enterprise-wide CMDB architecture project. 

 

This exemplifies the unpredictable and unique nature of evolution paths.  More can be said about the delicate balance between tops-down guidance, and fostering organic innovation from within the ranks of IT.  In future posts, I will discuss and analyze this further, as well as introduce other examples.

BSM customer evolution paths: Samples and observations

When developing and marketing products, we often have questions  which can only be answered by going out there and seeing what people are doing. We have a guy on the BSM team who does this for us. His name is Bryan Dean. I've worked with Bryan for many years and I've always been impressed by his objectivity and the insight he brings to his analysis (i.e. he doesn't just present a set of figures - he gets behind the figures).


 


At the end of last year, we asked Bryan to analyze the top 20-odd BSM deals of 2008. He formed a number of conclusions from this research. One set of conclusions concerned how people "get to BSM" - how they evolve towards an integrated BSM solution. I asked Bryan to help me with a series of posts to share what he learnt about evolutions towards BSM because I think that knowing what our other BSM customers are doing may help you.


 


________


 


Mike: Bryan, can you give a summary of what you learnt?


Bryan: There is no one evolution path. It's fascinating to me that a hundred different IT organizations can have virtually the same high-level goals, fundamentally agree on the key factors for success, and yet end up with a hundred unique execution paths.


 


Before I answer your question, can I create a definition? The term "BSM" is very poorly defined within the IT industry - different vendors have different versions, and so do the industry analysts (in fact, some other research I did last year concluded that very few people had a clear idea of what BSM means).  So, I'd like to introduce the term "Automated Business/IT Service Management"  or AB/ITSM.


 


Back to your question, I think I can group all the different evolution paths into five key types:  




  1. ITSM incident, problem change & configuration:  this evolution is driven out of the need for process-driven IT service management with the service desk as a key component


  2. Consolidated infrastructure event, performance and availability: this is driven by a recognition that having a whole ton of event management and performance monitoring systems is not an efficient way to run IT, and so there is a drive to consolidate them into one console.


  3. Business service visibility & accountability:  this is more of a top-down approach - start with monitoring the customer's quality of experience and then figure out what needs to happen underneath. This is popular in industries where the "web customer experience" is everything - if it's not good, you lose your business


  4. Service discovery & model: this is where evolution towards integration is driven from the need for a central model (the CMDB). Often, the main driver for such a central model is the need to control change


  5. Business transaction management: today, this is the rarest starting point. It's driven by a need to monitor and diagnose complex composite transactions. We see this need most strongly in the financial services sector

Mike: How about the politics of such AB/ITSM projects?  (I don't see the AB/ITSM term taking hold, by the way :-) )


Bryan: Politics (or, more specifically, the motivational side) is important. I think many heavy thinkers in our industry have the mistaken assumption that there is a single evolution path, controlled from the top down by the CIO following a master plan. Trying to manage such a serialized mega-project is a huge challenge and too slow, not to mention that 99% of CIOs are not in the habit of forcing tactical execution edicts on their lieutenants (I know I’ll get some argument on that one :-) ).


 


What I see from my research is that the most successful IT organizations are those who have figured out how to balance discrete, doable projects against an overall AB/ITSM end-goal context and roadmap.  Typically, the CIO lays down a high-level vision that ties to specific business results, and then allows key lieutenants to assess and drive a prioritized set of federated, manageable projects that independently drive incremental ROI. Some IT organizations may have a well-defined integrated roadmap, but the majority of IT organizations run federated projects in a fairly disjointed fashion.


 


These parallel paths are owned by many independent personas within IT, each trying to solve the specific set of issues at hand. For them, being bogged down in how their federated project aligns and integrates with all the other AB/ITSM projects is daunting… if not fatal.


 


And on reflection this makes sense to me - the human side of things plays a large role in such endeavors.


 


Mike: What do you mean?


Bryan: IT organizations of all shapes and sizes have goals to reduce costs, increase efficiency, improve business/IT service quality, and mitigate risk all while applying technology in an agile way to boost business performance.   What I find interesting is how specific, funded initiatives are created by specific personas to achieve the goals.


 


In future posts, I will share some specific examples of how customers evolved through these paths, the key driver personas, the core motivations and how these paths come together.

There are a number of ways of populating the service dependency map

 


In a post two weeks ago on this blog, I listed all the ways that we use service dependency maps (model-based event correlation, service impact analysis, top-down performance problem isolation, SLAs, etc.).  What can be used to discover service dependency information?


 


OperationsCenter Smart Plug-ins (SPIs) now discover to the CMDB


If you're using the agent-based side of OperationsCenter (OpC), then each managed node will have an agent on it. You can put a smart plug-in (SPI) onto that agent. SPIs have specialized knowledge of the domain they are managing. There are many SPIs for all kinds of things from infrastructure up to applications like SAP. Many of the SPIs discover (and continue to discover) the environment they are monitoring. This is agent-based discovery using all the credentials you've already configured into the OpC agent.




The OMi team are working on putting SPI-based discovery information into the HP CMDB (the Universal CMDB or uCMDB).


 


Agentless monitoring populates the uCMDB


If you have agentless monitoring (HP SiteScope) this will populate the uCMDB too (as of SiteScope version 10).




Whatever SiteScope monitors you have configured will send their configuration information to the uCMDB. So, if you're monitoring a server with a database on it, all the information about the server and its database will be sent to the uCMDB.
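
To make that concrete, here is a deliberately simplified, hypothetical picture of the kind of configuration item (CI) data such a monitor could report. The field names are illustrative only; they are not the real uCMDB CI types or the SiteScope payload format:

    # Illustrative only: what an agentless monitor might learn about a monitored
    # server and the database running on it (not the real uCMDB schema).
    host_ci = {"ci_type": "host", "name": "db-server-01", "os": "Windows", "ip": "10.0.0.12"}
    database_ci = {"ci_type": "database", "name": "ORDERSDB", "vendor": "Oracle"}

    # The relationship is what turns two isolated facts into dependency information.
    relationship = {"type": "contains", "from": host_ci["name"], "to": database_ci["name"]}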


 


Network Node Manager populates the uCMDB


As of the latest version of Network Node Manager (NNMi 8.10), discovered network end-points are also put into the uCMDB. "Network end-points" are anything with a network terminator - network devices, servers, and printers. NNMi provides no service dependency information, but it does provide an inventory of what's out there.




This inventory discovery is useful for rogue device investigation - noticing an unknown device and creating a ticket for the group responsible for that type of device so they can look into it.
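
A hedged sketch of that workflow: compare what discovery found against the known inventory and open a ticket for anything unexpected. The addresses and the create_ticket stand-in are made up for illustration; they are not NNMi or Service Manager APIs:

    # Hypothetical example: flag discovered network end-points that are not in the
    # known inventory. The data and create_ticket() are placeholders, not product APIs.
    known_inventory = {"10.0.0.1", "10.0.0.12", "10.0.0.20"}        # asset/CMDB records
    discovered_endpoints = {"10.0.0.1", "10.0.0.12", "10.0.0.99"}   # from network discovery

    def create_ticket(summary):
        """Stand-in for opening an incident with the responsible group."""
        print("TICKET:", summary)

    for endpoint in sorted(discovered_endpoints - known_inventory):
        create_ticket(f"Unknown device {endpoint} discovered - please investigate")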


 


Standard Discovery


Our Standard Dependency Discovery Mapping product (DDM-Standard) will discover your hosts for you. This also discovers network artifacts (but, see NNM discovery above - if you have NNMi, this is a more detailed network discovery mechanism).


 


Advanced Discovery


Advanced Dependency Discovery Mapping will discover storage, mainframes, virtualized environments, LDAP, MS Active Directory, DNS, FTP, MQSeries buses, app servers, databases, Citrix, MS Exchange, SAP, Siebel, and Oracle Financials.




You can also create patterns for top-level business services and DDM-Advanced will discover those too.


 


Transaction Discovery


Our Business Transaction Management product, TransactionVision,  deploys sensors to capture application events (not operational events) from the application and middleware tiers. These sensors feed the events to the TransactionVision Analyzer which automatically correlates these events into an instance of a transaction. TransactionVision also classifies the transactions by type - bond trade, transfer request, etc. Thus, TransactionVision is discovering transactions for you.
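
In spirit, the correlation step groups captured events that share a correlating key into one transaction instance and then classifies it by business type. A minimal, hypothetical sketch of that idea (not the TransactionVision Analyzer itself; the event fields and the toy classifier are my own illustration):

    from collections import defaultdict

    # Hypothetical application events captured at different tiers; the fields are
    # illustrative, not the TransactionVision event format.
    events = [
        {"correlation_id": "tx-1001", "tier": "web",        "operation": "submit_trade"},
        {"correlation_id": "tx-1001", "tier": "app_server", "operation": "validate_order"},
        {"correlation_id": "tx-1001", "tier": "mq",         "operation": "settle_bond"},
        {"correlation_id": "tx-1002", "tier": "web",        "operation": "transfer_request"},
    ]

    # Group events that belong to the same transaction instance.
    transactions = defaultdict(list)
    for event in events:
        transactions[event["correlation_id"]].append(event)

    def classify(steps):
        """Toy classifier: derive the business transaction type from its steps."""
        operations = {step["operation"] for step in steps}
        if "settle_bond" in operations:
            return "bond trade"
        if "transfer_request" in operations:
            return "transfer request"
        return "unclassified"

    for tx_id, steps in transactions.items():
        print(tx_id, "->", classify(steps), f"({len(steps)} events)")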




TransactionVision puts this transaction information into the CMDB. In other words, the CMDB doesn't just know about "single node" CI types like servers, it also knows about flow CI types - transactions.




Also, if the CMDB notices that the transaction flows over a J2EE application, it links the transaction to the information in the CMDB about this J2EE application - the transaction step and the J2EE app are now linked in the model.
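
One way to picture the resulting model is as a small graph in which a transaction is a CI just like a server or a J2EE application, and the link between a transaction step and the application it flows over is a relationship. A hypothetical sketch, not the actual CMDB data model:

    # Hypothetical, simplified model: CIs are nodes, relationships are edges.
    cis = {
        "bond_trade_tx": {"ci_type": "business_transaction"},
        "trading_app":   {"ci_type": "j2ee_application"},
        "app_server_01": {"ci_type": "host"},
    }

    relationships = [
        ("bond_trade_tx", "flows_over", "trading_app"),
        ("trading_app",   "runs_on",    "app_server_01"),
    ]

    def downstream(ci_name):
        """Walk the relationships to see what a CI depends on."""
        for source, relation, target in relationships:
            if source == ci_name:
                yield relation, target, cis[target]["ci_type"]
                yield from downstream(target)

    print(list(downstream("bond_trade_tx")))
    # [('flows_over', 'trading_app', 'j2ee_application'), ('runs_on', 'app_server_01', 'host')]

Once transactions, applications and infrastructure are all linked this way, impact analysis is just a walk of the graph.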


__________


 


By the way, my colleague Jon Haworth has just posted on the value of discovery in the realm of Operations Management at ITOpsBlog (28th January, "Automated Infrastructure Discovery - Extreme Makeover").
