Infrastructure Management Software Blog

Administrative burden for Ops tools: the hidden cost

Super advanced tools that claim to automate and optimize front-line operations tasks are great, but how much effort do you have to expend to configure and maintain them in dynamic environments? What happens when the administrative costs exceed the benefits gained?

Q&A from the Service Intelligence Webinar

As part of the rollout of our Service Intelligence offerings, a webinar was presented that showcased our latest solutions - Service Health Reporter and Service Health Optimizer. The webcast included a demo and a discussion on the overall Operations Center portfolio.

HP Discover: Operations Center Roadmap session - Day 4

The Operations Center Roadmap presentation is always one of the most highly anticipated sessions of the show. It is one of the few times where we discuss the product direction in a public (or semi-public) format.

Universal Log Management to Improve Troubleshooting

Both Operations Manager and SiteScope include log file management capabilities, but ArcSight Logger’s additional functionality makes the Operations Bridge team even more productive in searching log files for recurring patterns that affect system availability and performance.

Enhance the visibility of IT infrastructure problems (customer case study)

The combination of HP Operations Center and HP Business Availability Center provides a combined top-down and bottom-up view of your IT infrastructure. While the improved visibility into events and their causes certainly makes life easier for the Operations Bridge staff, the real benefit is improving customer satisfaction, enhancing delivery of business services, and improving productivity of the IT staff.


This is exactly what happened when Virgin Atlantic Airways deployed HP’s Business Service Management solution.


Mark Cameron, head of IT architecture, Virgin Atlantic Airways Ltd, tells it best:
“Alerts from Operations Manager and other HP software now enter a single console and our IT operations team can see everything across our estate. Personnel view incidents, monitor trends and anticipate potential problems proactively rather than wait for calls from end-users. For example, if there is a trend towards an increasing use of disk space, we plan preventative maintenance to resolve the problem before it affects end-users by jeopardizing availability.”


You can read the complete Virgin Atlantic success story.


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Network Management Customer Success Story (webinar)

My colleague, Aruna Ravichandran, in Network Management Center Product Marketing, asked me to share information about a customer webinar in which they describe their success in upgrading to NNMi. Since many Operations Bridges also manage networks in addition to the server infrastructure, I agreed. As an aside, OMi and NNMi share the same underlying technology in their respective causal engines.


Listen to this on-demand webinar with a customer working for a leading $15 billion consumer manufacturing company. You will get the complete story of their upgrade to NNMi. The speaker focuses on the ROI he was able to demonstrate to his management to justify spending the time and resources to go through the upgrade process.


In this webinar, you will learn how the customer was able to:



  • Increase network availability from 97.5% to 99.98%

  • Reduce capital and operating costs by consolidating 5 servers down to 1

  • Increase operator efficiency and productivity by decreasing the time spent on L2/L3 escalations from 8 hours/day to 2 hours/day through automation

  • Enhance its disaster recovery solution with automated application synchronization


 


Here is the link to the recorded ROI webinar.


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.
 

Event Correlation: OMi TBEC and Problem Isolation - What's the difference (part 3 of 3)

If you have not done so already, you may want to start with part 1 in this series.
http://www.communities.hp.com/online/blogs/managementsoftware/archive/2009/09/25/event-correlation-omi-tbec-and-problem-isolation-what-s-the-difference-part-1-of-3.aspx


Read part 2 in the series.
http://www.communities.hp.com/online/blogs/managementsoftware/archive/2009/09/25/event-correlation-omi-tbec-and-problem-isolation-what-s-the-difference-part-2-of-3.aspx



This is the final part in my three-post discussion of the event correlation technologies OMi Topology-Based Event Correlation (TBEC) and Problem Isolation. I've been focusing on how TBEC is used and how it helps IT Operations Management staff be more effective and efficient.


In my last post I started to mention why End User Monitoring (EUM) technologies are important - because they are able to monitor business applications from an end user perspective. EUM technologies can detect issues which Infrastructure monitoring might miss.


 


In the example we worked through in the last post I mentioned how EUM can detect a response time issue and alert staff that they need to expedite the investigation of an ongoing incident. This is also where Problem Isolation helps. PI provides the most effective means to gather all of the information that we have regarding possible causes of the response time issue and analyze the most likely cause.


 


For example: our web-based ordering system has eight load-balanced web servers connected to the internet; this is where our customers connect. The web server farm communicates back to application, database, and email servers on the intranet, and the overall system allows customers to search and browse available products, place an order, and receive email confirmations of order and shipping status.


 


The event monitoring system includes monitoring of all of the components. We also have EUM probes in place running test transactions and evaluating response time and availability. The systems are all busy but not overloaded - so we are not seeing any performance alerts from the event monitoring system.


 


A problem arises with two of our eight web servers, and they drop out of the load balanced farm. The operations bridge can see that the problem has happened as they receive events indicating the web server issues. TBEC shows that there are two separate issues, so this is not a cascading failure – and the operations staff can see that these web servers are part of the online ordering service.


 


However, they also know that the web servers are part of redundant infrastructure and there should be plenty of spare capacity in the six remaining load balanced web servers. As they have no other events relating to the online ordering service, they decide to leave the web server issues for a little while as they are busy dealing with some database problems for another business service.


 


The entire transaction load that would normally be spread across eight web servers is now focused on the remaining six. They were already busy but are now being pushed even harder: not enough to cause CPU utilization alerts, but enough to increase the time it takes them to process their component of the customer’s online ordering transactions. As a result, response time, as seen by customers, is terrible. The Operations Bridge is unaware, as it sees no performance alerts from the event management system.


 


EUM is our backstop here; it will detect the response time issue and raise an alert. This alert – indicating that the response time for the online ordering application is unacceptable – is sent to the Operations Bridge.
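The numbers below are hypothetical, but a quick sketch shows why this failure mode is so sneaky: utilization stays under the alert threshold while response time, approximated here with a simple 1/(1-u) queueing model, degrades badly.

```python
# Illustrative numbers only. The aggregate load that kept eight web
# servers at 60% utilization is now spread across six of them.
TOTAL_LOAD = 8 * 0.60          # aggregate utilization "units"
CPU_ALERT_THRESHOLD = 0.90     # per-server CPU alert level

util_8 = TOTAL_LOAD / 8        # 0.60 with all eight servers
util_6 = TOTAL_LOAD / 6        # 0.80 with six -- still no CPU alert

# Simple M/M/1-style approximation: response time scales as 1 / (1 - u),
# so it degrades far faster than utilization rises.
resp_8 = 1 / (1 - util_8)      # ~2.5x base service time
resp_6 = 1 / (1 - util_6)      # ~5.0x base service time

print(util_6 < CPU_ALERT_THRESHOLD)   # True: event monitoring stays silent
print(round(resp_6 / resp_8, 2))      # 2.0: customers see response time double
```

Utilization climbed by a third, but modeled response time doubled: exactly the gap that EUM closes.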


 


The Operations Bridge team now know that they need to re-prioritize resources to investigate an ongoing business service impacting issue. And they need to do this as quickly as possible. They need to gather all available information about the affected business service and try to understand why response time has suddenly become unacceptable. This is where Problem Isolation helps.


 


PI works to correlate more than just events. It pulls together data from multiple sources - performance history (resource utilizations), events, even help-desk incidents that have been logged - and works to determine the likely cause.
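As a rough illustration of the idea (a toy sketch, not HP's Problem Isolation algorithm, and the component names are made up), you can think of each data source as voting for the components it implicates; the most-implicated component is the likeliest cause:

```python
# Toy sketch of cross-source cause ranking. Each data source "votes"
# for the components its evidence implicates.
from collections import Counter

sources = {
    "performance_history": ["web_server_3", "web_server_7"],  # high CPU trend
    "events":              ["web_server_3"],                  # dropped from farm
    "helpdesk_incidents":  ["web_server_3", "app_server_1"],  # user tickets
}

votes = Counter()
for implicated in sources.values():
    votes.update(implicated)

likely_cause, score = votes.most_common(1)[0]
print(likely_cause, score)  # web_server_3 3 -- implicated by all three sources
```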


 


So we've come full circle. I spent a lot of time talking about OMi and events, and how an Operations Bridge is assisted by TBEC. But it's not the one and only tool that you need in your bag. Technologies like EUM and PI help catch and diagnose all of the stuff that just cannot be detected by 'simply' (I use that term loosely) monitoring infrastructure.


 


Once again if you want to understand PI better I encourage you to take a look at the posts by Michael Procopio over on the BAC blog.



For HP Operations Center, Jon Haworth.

Event Correlation: OMi TBEC and Problem Isolation - What's the difference (part 2 of 3)

If you have not done so already, you may want to start with part 1 in this series.
http://www.communities.hp.com/online/blogs/managementsoftware/archive/2009/09/25/event-correlation-omi-tbec-and-problem-isolation-what-s-the-difference-part-1-of-3.aspx


This is part 2 of 3 of my discussion of the event correlation technologies OMi Topology-Based Event Correlation (TBEC) and Problem Isolation. I'm going to focus on how TBEC is used and how it helps IT Operations Management staff be more effective and efficient. My colleague Michael Procopio has discussed PI in more detail over in the BAC blog here: PI and OMi TBEC blog post


If you think about an Operations Bridge (or "NOC"… but I've blogged my opinion of that term previously) then fundamentally its purpose is very simple.


 


The Ops Bridge is tasked with monitoring the IT Infrastructure (network, servers, applications, storage etc.) for events and resource exceptions which indicate a potential or actual threat to the delivery of the business services which rely on the IT infrastructure. The goal is to fix issues as quickly as possible in order to reduce the occurrence or duration of business service issues.


 


Event detection is an ongoing, 24x7 process, and the Ops Bridge will monitor events during all production periods, often around the clock using shift-based teams.


 


Event monitoring is an inexact discipline. In many cases a single incident in the infrastructure will result in numerous events – only one of which actually relates to the cause of the incident; the other events are just symptoms.


 


The challenge for the Ops Bridge staff is to determine which events they need to investigate and to avoid chasing the symptom events. The operations team must prioritize their activities so that they invest their finite resources in dealing with causal events based on their potential business impact. They must avoid wasting time in duplicated effort (chasing symptoms) or, even worse, chasing symptoms down serially before they finally investigate the actual causal event, as this extends the potential downtime of business services.


 


TBEC helps the Operations Bridge in addressing these challenges. TBEC works 24x7, examining the event stream, relating it to the monitored infrastructure and the automatically discovered dependencies between the monitored components. TBEC works to provide a clear indication that specific events are related to each other (related to a single incident) and to identify which event is the causal event and which are symptoms.


 


Consider a disk free space issue on a SAN which is hosting an Oracle database. With comprehensive event monitoring in place, this will result in three events:



  • a disk space resource utilization alert

  • an Oracle database application error that quickly follows

  • a further event indicating that a WebSphere server which uses the Oracle database is unhappy


 


Separately, all three events seem ‘important’ – so considerable time could be wasted in duplicate effort as the Ops Bridge tries to investigate all three events. Even worse, with limited resources, it is quite possible that the Operations staff will chase the events ‘top down’ (serially) – look at Websphere first, then Oracle, and finally the SAN – this extends the time to rectification and increases the duration (or potential) of a business outage.


 


TBEC will clearly show that the event indicating the disk space issue on the SAN is the causal event – and the other two events are symptoms.
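The idea behind that verdict can be sketched in a few lines. This is a toy illustration of topology-based correlation, not HP's actual TBEC implementation: given the discovered dependency chain (WebSphere depends on Oracle, Oracle depends on the SAN volume), an alerting component whose own dependencies are all healthy is the causal event; anything alerting above it is a symptom.

```python
# Toy sketch of topology-based event correlation (illustrative only).
# depends_on maps each component to the components it relies on.
depends_on = {
    "websphere": ["oracle"],
    "oracle": ["san_volume"],
    "san_volume": [],
}

def transitive_deps(component):
    """All components that `component` directly or indirectly relies on."""
    seen, stack = set(), list(depends_on.get(component, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(depends_on.get(dep, []))
    return seen

def correlate(events):
    """Split alerting components into causal events and symptoms.

    A component's event is a symptom if another alerting component
    sits beneath it in the dependency graph; otherwise it is causal.
    """
    alerting = set(events)
    causal, symptoms = [], []
    for component in events:
        if transitive_deps(component) & alerting:
            symptoms.append(component)
        else:
            causal.append(component)
    return causal, symptoms

causal, symptoms = correlate(["websphere", "oracle", "san_volume"])
print(causal)    # ['san_volume'] -- the disk space issue on the SAN
print(symptoms)  # ['websphere', 'oracle'] -- just symptoms
```

The real product derives the dependency map automatically from discovery; here it is hard-coded to keep the sketch small.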


 


In a perfect world the Ops Bridge can monitor everything, detect every possible event or compromised resource that might impact a business service and fix everything before a business service impact occurs.


 


The introduction of increasingly redundant and flexible infrastructure helps with this – redundant networks, clustered servers, RAID disk arrays, load balanced web servers etc. But, it also can add complications which I’ll illustrate later.


 


One of the challenges of event monitoring is that it simply cannot detect everything that can impact business service delivery. For example, think about a complex business transaction which traverses many components in the IT infrastructure. Monitoring each of the components involved may indicate that they are heavily utilized – but not loaded to the point where an alert is generated.


 


However, the composite effect on the end-to-end response time of the business transaction may be such that response time is simply unacceptable. For a web-based ordering system, where customers connect to a company’s infrastructure and place orders for products, this can mean the difference between getting the order and the customer heading over to a competitor’s website.
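A small worked example (all numbers hypothetical) makes the point concrete: four components, each under its own alert threshold, can still add up to an unacceptable end-to-end response time.

```python
# Hypothetical per-component response times for one transaction, in seconds.
# Each component is below its own 1.0 s alert threshold, so infrastructure
# monitoring stays quiet.
component_latency = {
    "web_server": 0.8,
    "app_server": 0.9,
    "database": 0.9,
    "email_gateway": 0.7,
}
ALERT_THRESHOLD = 1.0   # per-component alert level
SLA_TARGET = 2.0        # acceptable end-to-end response time

per_component_alerts = [c for c, t in component_latency.items()
                        if t > ALERT_THRESHOLD]
end_to_end = sum(component_latency.values())

print(per_component_alerts)     # [] -- no component alerts fire
print(end_to_end > SLA_TARGET)  # True -- yet the transaction breaches its SLA
```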


 


This is why End User Monitoring technologies are important. I'll talk about EUM in the next, and final, edition of this blog serial.




Read part 3 in the series.
http://www.communities.hp.com/online/blogs/managementsoftware/archive/2009/09/25/event-correlation-omi-tbec-and-problem-isolation-what-s-the-difference-part-3-of-3.aspx



For HP Operations Center,  Jon Haworth.

Analysis of HP announcement at VMworld (podcast)

At VMworld, HP announced its virtualization smart plug-in (SPI) for Operations Center. For companies using Operations Manager as the consolidated event and performance management console, this allows them to see events from VMware Virtual Center in the Operations Manager console.


The implications of the “Virtualization SPI” for business operations are significant. This means operators can manage all events, from both the physical and virtual infrastructure, through a single Operations Bridge. The virtualization team can focus on planning and strategy, leaving the tier 1 operators to manage events.


Dennis Corning, product marketing manager for virtualization, comments on the announcement in this podcast, which I recorded at VMworld.



For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Your success is my success (customer visit summary)

Another guest post by Lillian Hull, product manager for Operations Manager on Unix.
- Peter Spielvogel


Earlier this month, my colleagues and I spent some time with one of our customers in the health insurance sector. They were interested in our plans around Operations Manager and in particular, Operations Manager i.


Their environment includes a variety of HP Software & Solutions products including Operations Manager on UNIX (OMU), Business Availability Center (BAC), Discovery and Dependency Mapping (DDM), Network Node Manager (NNM) and SiteScope (SiS). 


The portfolio is well-integrated and users familiar with BAC should feel right at home with OMi.  OMi can provide a single “operations bridge” to reduce operational costs. All events are sent to the operations bridge and monitored by staff well-versed in IT operations. This centralization makes it easier to distinguish between causes and symptoms to better isolate the cause of a set of events. This in turn leads to more rapid problem resolution.


As with many of our customers, another topic they wanted to talk more about was virtualization. They are using Linux and virtual machines. For many environments, the slight overhead of running OM in a virtual machine should not have significant impact on performance or scalability.


Finally, during the wrap-up, the customer told us how much they want to work with us and our host said that our success with Operations Manager is linked to his success as an infrastructure architect. His words were “your success is my success”. Our products are vital to keeping everything running smoothly in this organization. HP is very fortunate to have customers that share their insights with us as not just another vendor, but as a trusted adviser. And due to great customers like this one, I am confident we will both triumph.


For HP Operations Center, Lillian Hull.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Enhancing SiteScope with Operations Manager - and Vice Versa

Many SiteScope customers have been asking about the value of adding Operations Manager to their existing SiteScope implementation. Many Operations Manager customers have been asking about how the new SAM Admin helps them manage their SiteScope. The bottom line is that combining agent-based and agentless monitoring gives customers the ability to expand their monitoring coverage in a cost effective way.


I have listed a few excerpts from a new solution brief called “HP Operations Manager for HP SiteScope Customers”.


Adding Operations Manager along with its agents extends SiteScope’s depth and breadth of coverage.



  • Present an Operations Bridge or manager of managers that consolidates events from disparate consoles including multiple SiteScope servers, giving greater visibility into system health across the enterprise.
  • Operate autonomously on managed nodes, ensuring continuous operation, even if the network connection between the management server and managed node fails.
  • Allow monitoring of systems where administrators will not allow security credentials to be placed on the network but will allow an agent to be installed on the server.
  • Collect data at very granular time intervals, allowing you to fine-tune the performance of mission-critical systems.
  • Enrich events by providing context in the form of a service dependency map.


Operations Manager integrates out-of-the-box with SiteScope. It centralizes management of all SiteScope and other event consoles.



  • Operations Manager automatically adds SiteScope targets to the Operations Manager Service Map.
  • SiteScope alerts go directly to Operations Manager with full details.
  • Operations Manager launches SiteScope tools directly from the Operations Manager console.
  • Operations Manager manages multiple SiteScope servers and can transfer configuration information from one SiteScope instance to another and synchronize settings between multiple SiteScope servers.


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Q&A from EMA webinar on incident management and OMi

Thank you to everyone who attended the EMA webinar on “What is New in the Not-so-New Area of Event Management: Five Tips to Reduce Incident Resolution Costs” (view the archived webinar by clicking on the link).


We had many great questions at the end, some of which we did not have time to answer. Here is a complete list of all the questions that were asked, along with the answers. If you have additional questions, please post them in the comment field on the blog.


 


What effect will cloud computing have on the management strategies you discussed?


In many respects, Cloud computing – if it’s to be successful as a responsible answer to optimizing infrastructure for business applications – will accelerate the need for consolidated event management and its associated technologies. Cloud computing places many new complexities, stresses, and real-time awareness demands in front of IT managers, including how to manage performance, change, and costs effectively across virtualized environments and potentially across a mix of external service providers wedded together in a dynamic ecosystem. These requirements will force service providers to become more transparent in support of SLAs, performance management, infrastructure discovery, CMDB systems and CMS involvements, and shared cost analysis, along with compliance, security, and risk management issues. In other words, Cloud computing cannot succeed, except as a niche opportunity, without embracing the best practices and process-centric programs within IT to optimize its own internal effectiveness.


As you all know, security event management is a domain in its own right, and there is as much interest in cross-domain integration of security processes & tools as in other areas, if not more so in some cases. How can unified event management help security and IT ops team achieve their common goals?


Security event integration with an overall consolidated event management system is one of the more challenging and also more valuable areas of consideration. This is partly because, rather than being a “component-defined” part of the infrastructure or software environment, security is pervasively associated with all domains and all disciplines. It is something like the “phantom” in event management: a more logical than tangible entity. But as such, defining policies for integration and reconciliation is more complex and overall less evolved. Of course security has its own well-established history in event management, in particular with SIEM, but once again this evolved as a way of consolidating security-related event issues, rather than being a more holistic approach to integrating security events with performance- and change-related events. And so to a large degree this challenge still remains unanswered by the industry as a whole.


Is OMi a replacement for OM?


No. OMi is a separate product that adds on to Operations Manager. OMi introduces advanced functionality such as system health indicators and topology-based event correlation using Operations Manager as the event consolidation platform. We designed the products in this way to allow our customers to gain significant new capabilities without disrupting their current Operations Manager deployment. There is no rip and replace, just adding a new component on top of the existing monitoring solution.


OMi looks a lot like BAC; are they tightly coupled? Do I need both?


So is BAC and OMi the same product now?


Great observation. OMi is built on the BAC foundation, so they do share a common look and feel. OMi performs advanced event management. BAC handles application management, transaction monitoring, and problem isolation. You can mix and match components from the two product sets to meet the needs of your organization, and you only need to purchase the components that fit your needs. So, OMi and BAC are separate products, just tightly integrated.



Sounds great, but what is the cost?  Is there some way to justify the big cash outlay for IT organizations in SMBs?


The return on investment should be apparent. As we covered in the presentation, if you assume the cost of manually handling an event is $75 and that OMi will eliminate the processing of around 10% of events (a conservative estimate), just determine how many events your Operations Bridge team handles per day/week/month/year and do the math.
And, of course, that ignores the benefits associated with a more rapid fix-time for incidents which will enhance business service availability.
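As a back-of-the-envelope sketch of that math (the event volume is a made-up input; substitute your own):

```python
# ROI sketch using the figures from the answer above. The daily event
# volume is a hypothetical placeholder.
COST_PER_EVENT = 75          # dollars to manually handle one event
ELIMINATED_FRACTION = 0.10   # conservative share of events eliminated
events_per_day = 1000        # hypothetical Operations Bridge volume

annual_savings = events_per_day * 365 * ELIMINATED_FRACTION * COST_PER_EVENT
print(f"${annual_savings:,.0f} per year")  # $2,737,500 per year
```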


For pricing on OMi, please contact your local HP sales representative.


Can OMi run on the same server as Operations Manager?


No. You need to run the two products on different servers. OMi will run on its own Windows based platform and will be connected bi-directionally to a nominated OM server.


Do I need OMi to use the runbook automation capabilities of Operations Orchestration?


No. Operations Orchestration can use the events from Operations Manager as the trigger to launch flows. You do not need OMi too. Like OMi, OO leverages the power of OM and its agents. I strongly recommend you contact your HP sales rep to schedule a demo of Operations Manager and Operations Orchestration working together.


If everyone uses the same console, how will domain experts perform advanced troubleshooting?


The OMi console is designed for Operations Bridge personnel to view events, identify the causal event, and resolve the incident. Likely users will be Tier 1 operators and subject matter experts (SME) starting to troubleshoot problems and determine what to fix. The SMEs will then use their specialized tools to investigate the problems in more detail within their domain. For example, someone on the server team might see that a server is down and then use HP SIM (System Insight Manager) to identify that a fan has stopped working.
OMi includes the concept of “user roles” so that specific users can be provided with access to the events, infrastructure views and tools that are appropriate for their role. Domain experts could have user roles defined which include direct access to tools utilized for advanced troubleshooting.


Is there any special configuration I need to run OMi?


You need Operations Manager to consolidate events before feeding them to OMi. You can feed events from other tools (such as SiteScope for agentless monitoring) into Operations Manager to get better visibility of your enterprise by expanding the number of managed nodes. Operations Manager can also consolidate events from other domain managers such as Microsoft SCOM or IBM Tivoli.
You do need a recent version of Operations Manager – either OMW 8.10 with some specific patches or OMU 9.0. Existing Smart Plug-ins will work with OMi, but we’ve also been making some enhancements to provide tighter integration and to enable the Smart Plug-ins for OMU to populate the topology maps automatically. So in general you need a recent OM version, and later SPI versions are ‘better’.
Other than that, there is no special configuration.


Does OMi require ECS (event correlation services) to be built out?


No. As a general rule it’s a good idea to ‘refine’ the event stream that is processed by the OM server and passed to OMi. There is absolutely no point in passing lots of noise to OMi – stuff that we know is noise – so we would recommend making good use of all of the traditional event consolidation and filtering technologies in OM: time- and count-based correlation on agents, de-duplication, etc.
ECS – Event Correlation Services – can also be used to further refine the event stream as it arrives at an OMU server but it is not a requirement for OMi.


Are there any issues or challenges in using OMi in a duplicated-IP-address environment, for example at an MSP (managed service provider)?


OMi should work in duplicate IP address environments, provided that appropriate DNS resolution and IP routing or HTTP proxy chaining is in place to enable outbound connections from the existing OM server to the managed nodes (agents) to work correctly. The support for dup-IP is something we included in the HTTP communications protocol, which can be used with OM agents after version 8.x of the OM servers. There are a number of different ways that the network 'resolution' can be set up - including HTTP proxies and NAT - and we cannot commit to testing every possible configuration. However, with an appropriate configuration OMi will work in these environments. In general, if you have a dup-IP environment working with your existing OM server, then OMi should also work.


Does OMi take into consideration HA (high availability) configurations such that it can identify business degradation as opposed to an outage?


Yes. This is one advantage of having health calculation and event correlation which is dynamically driven by the discovery of the infrastructure. Consider a cluster running some Microsoft Exchange Resource Groups, or a number of VMware hosts with some virtual machines which participate in delivering a business service. In either case, if we have a hardware issue then we may move the ‘application’ (resource group or VM) to another host. This may happen automatically.
The Operations Manager Smart Plug-In (SPI) which is monitoring these resources – so the Exchange SPI (which is cluster aware) or the Virtualization Infrastructure SPI – will detect the movement of resources typically within 1 to 2 minutes. The SPI will update the discovery information in OM and this will be synchronized into OMi a short time later. OMi’s perspective of the topology of the infrastructure will change and the health and event correlation rules will adapt.
OMi will now ‘understand’ that the hardware events which arrived from the cluster or VM host do not impact the business service which is supported by the specific Exchange Resource Group or virtual machine.


 


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Consolidated IT event management: five requirements for greater efficiency (free white paper)

We just released a new white paper called “Consolidated IT event management: five requirements for greater efficiency.” It talks about the challenges of managing IT with severely constrained budgets and how to make better use of your existing resources.


The main premise is to create a centralized Operations Bridge to consolidate and correlate events from across your entire enterprise. The paper provides five key requirements for using the Operations Bridge to drive cost-effective IT operations.


You can download the paper now using the "Attachment" link below.


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Tell us about your Operations Bridge and be a “NOC Star” (NNMi)

In this case, when we say NOC, we are focusing on the narrower Network Operations Center definition and not the broader Operations Bridge concept that consolidates events from servers, storage, networks, and applications into a single console.


Here is a great promotion from my colleagues in the Network Management Center.



Are you currently using HP Network Node Manager i (NNMi 8.x) software? If so, you may be the next “NOC Star” we are looking for!

Share your NNMi experience with us – along with any benefits you have received to date – and you will get a free NOC Star t-shirt just for submitting your entry. You can submit videos, podcasts, or just text through the online form.

Entries are being accepted through August 15th, 2009. Users can then vote on their favorite entry for a chance to win a $500 HP gift certificate.

Start submitting your entry at www.hp.com/go/NOCstar and you could be our next HP NOC Star!


Winning entries, along with a photo of the NOC Star, will be showcased on HP’s portal.


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.  

Innovation Week Part 2 - Operations Manager i 8.1

While OMU 9.0 represents evolutionary innovation, Operations Manager i 8.1 (OMi) is truly revolutionary. Operations Manager i is a set of add-on products which extends existing HP Operations Manager to provide advanced event correlation and system health capabilities.


It uses the proven causal engine from NNMi with a completely new set of rules designed for server infrastructure rather than network components. This topology-based event correlation reduces duplication of effort in the Operations Bridge by automatically determining which events are symptoms and which are the cause of a problem. If you don’t have TBEC, you have to write and maintain a ton of rules to eliminate events that are symptoms and not causes. This is time consuming (eight full-time people for a medium-sized European bank) and error prone.
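The core idea behind topology-based event correlation can be sketched in a few lines. This is an illustrative toy only, with an invented three-tier dependency graph; the actual OMi causal engine works against a discovered topology and is far more sophisticated:

```python
# Invented dependency graph: each component maps to the component it
# depends on (None means it depends on nothing we monitor).
depends_on = {
    "web_server": "app_server",
    "app_server": "database",
    "database": None,
}

def classify(events):
    """Split a burst of events into likely causes and symptoms.

    An event is a symptom if any component it (transitively) depends
    on also raised an event; otherwise it is a candidate cause.
    """
    down = set(events)
    causes, symptoms = [], []
    for comp in events:
        parent = depends_on.get(comp)
        while parent and parent not in down:
            parent = depends_on.get(parent)
        if parent:                 # found a failed dependency upstream
            symptoms.append(comp)
        else:
            causes.append(comp)
    return causes, symptoms

causes, symptoms = classify(["web_server", "app_server", "database"])
# Only "database" has no failed dependency, so it is the candidate
# cause; the web and app server events are symptoms.
```

Instead of hand-writing suppression rules for every combination of events, the topology itself does the work: change the graph and the classification changes with it.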


Topology-Based Event Correlation


Conservative estimates indicate that OMi can save a company the size of HP over $3 million annually in event processing costs. If I can get the real numbers next year, I will post them here.


To learn more about OMi, you can attend a Vivit webinar on July 21, 2009 or EMA webinar on July 28, 2009.


For Operations Center, Peter Spielvogel.

Innovation Week Part 1 - Operations Manager on UNIX 9.0

Over the past few weeks, I have been involved in many conversations with both customers and colleagues about innovation. So, I wanted to highlight several new products that demonstrate how managing IT operations continues to evolve (contrary to some opinions).


First, I’ll start with the foundation - a consolidated event console that is the main tool for the operations bridge staff. HP released OMU 9.0 last month. As with all our event management consoles, this does not force our customers to rip and replace. OMU gives companies the ability to reduce the cost of managing their increasingly complex IT environments by consolidating events into a central console.


Some of the enhancements include:



  • A new Web-based Administration UI that allows multiple administrators to perform tasks in parallel, boosting productivity.

  • Enhanced agentless monitoring by incorporating SiteScope events into the Operations Manager console and service map, giving organizations alternate ways to broaden infrastructure coverage and reduce the cost of monitoring.

  • More flexibility in managing network events using NNMi.


OMU 9.0 admin GUI


Next week, I will post a podcast with 10 reasons to upgrade to OMU 9.0.


For Operations Center, Peter Spielvogel.

Free Webinar: HP Operations Manager i Software Deep Dive Presentation

My colleague and consolidated event management expert Jon Haworth is the guest speaker at an upcoming Vivit webinar on Tuesday, July 21. Vivit is the independent HP Software users community.



Jon will talk about using an operations bridge effectively and how the latest advanced correlation and visualization technology can help you reduce downtime. His presentation will address:
• What are the major differences between HP Operations Manager and HP Operations Manager i software?
• How does Topology Based Event Correlation (TBEC) work?
• How does HP OMi fit into my existing Operations Manager environment?


There will be plenty of time for Jon to answer your questions at the end of the session.


For Operations Center, Peter Spielvogel.

Free Webinar: 5 Tips to Reduce Incident Resolution Costs

On Tuesday July 28, I will be participating in an EMA webinar with researcher and Vice President Dennis Drogseth. The official title “What is New in the Not-so-New Area of Event Management: Five Tips to Reduce Incident Resolution Costs” is very telling. Many people believe that there is nothing new in managing IT infrastructure. The reality is that some of HP’s biggest R&D investments have been in this area.


Displaying disparate events may not be rocket science, but correlating events from different IT domains to determine which is the cause and which are the symptoms certainly is. This is exactly the premise of OMi, which uses topology-based event correlation (TBEC) to consolidate event storms into actionable information.


Here’s the webinar abstract:


Event management may not be the next new thing but it is quietly making dramatic advances that can save your company both time and money. These new approaches rely on understanding up-to-date service dependencies to accelerate problem resolution.


During the 45 minute webinar, we will answer the following questions.



  • Why should you reconsider your event and performance management strategy?

  • What is the impact of ITIL v3 and the concept of an operations bridge on your people, processes, and tools?

  • What innovations can help you more cost-effectively manage events?


We will also leave time at the end to address your questions.


Register for the EMA event management webinar.
www.enterprisemanagement.com/hpeventmanagement



For Operations Center, Peter Spielvogel.

OMi Webinar and Demo Now Available

Every time I speak to customers about consolidated event and performance management, they want to know HP’s vision. What does the end-state look like? How do all the pieces fit together to save my company money? How does an Operations Bridge drive efficiencies? How does OMi extend my existing monitoring infrastructure? Now, we have a recorded webinar that answers these questions.



In 25 minutes, Jon Haworth, one of the Product Marketing Managers for Operations Center will explain how to:



  • increase the efficiency of managing IT Operations

  • cut costs while improving quality of business services

  • speed the time to problem resolution


In addition, Dave Trout shows a short demo of topology-based event correlation in action, including how to:



  • filter events and identify root causes

  • use system health indicators and KPIs to summarize availability and performance

  • visualize configuration items in the context of business services


See the OMi webinar now.


For Operations Center, Peter Spielvogel.

Controlling SiteScope from Operations Manager

I have been getting many questions from both customers and colleagues about how Operations Manager and SiteScope work together. This is a very timely topic as we have some new capabilities connecting SiteScope and Operations Manager.


Since many readers will not be attending my talk about this topic at Software Universe, I’ll preview the information here. (Alex Ryals and I will be focusing on customer success stories in our presentation, so it is still very much worth attending, even if you already know the product integration part.)


The main role of Operations Manager is to serve as an enterprise event console or operations bridge, consolidating events from various domain managers for servers, storage, and networks, from both HP and other vendors. It accomplishes this using agents that run on each managed node, monitoring availability and performance. These agents send information to the Operations Manager server based on user-defined policies. The agents can also act autonomously, performing corrective actions without communicating with the server. This is very useful for minimizing network traffic, or even assuring operation if a connection between the server and managed node gets interrupted.
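To make the agent behavior concrete, here is a rough sketch in Python. This is an illustration only, not the product's actual policy format: the metric name, threshold, and corrective action are all invented. The point it shows is that the policy is evaluated locally, so the corrective action runs even when the server is unreachable.

```python
# Hypothetical policy: all names here are invented for illustration.
policy = {
    "metric": "disk_used_pct",
    "threshold": 90,
    "action": "cleanup_tmp",   # local corrective action to run
}

def evaluate(sample, server_reachable, run_action, send_event):
    """Evaluate one metric sample against the policy, agent-side."""
    if sample[policy["metric"]] <= policy["threshold"]:
        return "ok"
    # Threshold breached: act locally first, no server round-trip needed.
    run_action(policy["action"])
    if server_reachable:
        send_event({"policy": policy["metric"],
                    "value": sample[policy["metric"]]})
        return "acted_and_reported"
    # Connection down: the fix still happened; the event can be
    # queued and forwarded once connectivity returns.
    return "acted_offline"
```

For example, a sample of 95% disk usage with the server unreachable still triggers the cleanup action and returns "acted_offline".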



SiteScope complements this mission by monitoring servers and the applications running on them using agent-less technologies. SiteScope too monitors both HP and other hardware. In some cases, enterprises have some servers on which administrators either cannot or will not install agents. In other cases, customers will monitor servers using a combination of both agent-based and agent-less technology. One common example is for monitoring email environments running Microsoft Exchange, Active Directory and all the supporting infrastructure.


So, how do Operations Manager and SiteScope fit together?



  • SiteScope forwards its events into Operations Manager with the full details.

  • SiteScope targets also appear in the Operations Manager Service Map.

  • Operations Manager lets you control multiple SiteScope servers, including transferring configuration information from one SiteScope instance to another and synchronizing settings between multiple SiteScope servers.


The ability to monitor your IT infrastructure using a combination of both agent-based and agent-less technology lets you simultaneously improve the quality of service and reduce IT management costs.


For HP Operations Center, Peter Spielvogel

When is a NOC an Operations Bridge

I've been pondering recognition of the term "Operations Bridge" for some time now and decided I'd air some thoughts and see what people think.
 
The term "NOC" (Network Operations Center) has been floating around for years. It seems to originate in the telco world but has been adopted by many organizations to refer to some sort of centralized operations function.


But then a lot of organizations still use the term NOC to refer to the Network (only) operations center - the silo which owns and operates the network.
 
So there's the problem that I have with the term NOC: it's somewhat indistinct and means different things to different people.
 
ITIL V3 recognizes the term "Operations Bridge" as part of the "Service Operation" discipline:
"A physical location where IT Services and IT Infrastructure are monitored and managed."
 
My view is that this is a nice clear definition of the 'modern NOC' - the place where ALL IT infrastructure monitoring comes together and is related to the services which depend on the infrastructure. It avoids confusion about whether we're talking about a "network only" monitoring silo or a full consolidated event and performance management organization.
 
We're using the term Operations Bridge in our own outbound marketing materials. But here is the rub: we've done some surveys and the term "Operations Bridge" is not universally recognized - i.e., people do not instantly recognize it or cannot explain what it is.
 
This is not everyone of course but it is true of a large proportion of the people that we tested the term with. I have to add that recognition levels are higher in Europe than the US, maybe something to do with the broader adoption of ITIL . 
 
Interestingly as soon as you start to explain what an "Operations Bridge" is, people "get it" - you don't even need to finish the explanation. It just makes so much sense - and everyone understands the concept of a "Bridge" as a central point of monitoring and control, either because of some nautical knowledge or a passion for Star Trek.
 
So, I'm on a campaign to drive widespread adoption of the term "Operations Bridge" - and move away from the indistinct and sometimes confusing term NOC. Make NOC exactly what it states - a NETWORK Operations Center - and use Operations Bridge to describe a 21st-century consolidated IT infrastructure monitoring function.
 
What do you think? Please enter your response in the comment field below. You may respond anonymously, if you choose.


(A) Yes, NOC should be network only; "Operations Bridge" is the centralized monitoring point
(B) No, NOC is the right term
(C) It does not matter, both terms can be used
(D) Other (please elaborate)


For Operations Center, Jon Haworth

A New Data Center is an Opportunity for New Thinking

With all the doom and gloom in the news these days, it was a bright spot in my week to have a meeting with a customer that is planning to build a new data center next year. Even more surprising is that the company is in the financial services industry. And no, they are not receiving any government money to finance this project.


They visited our Executive Briefing Center to learn about best practices in IT transformation. In the introductory comments, the Director of IT (who reports to the CIO) stated, “A new data center is an opportunity for new thinking.” So, this set the context of looking at the state of the art in data center management and how to build it right if you are starting with the proverbial clean sheet of paper, which in this case, they are.


First, let’s cover their existing IT environment: 900 people (a mix of on-shore and off-shore) managing an assortment of hardware (most of it non-HP), running UNIX (not HP-UX), and using enterprise storage from one of the major vendors. For management tools, they own a large collection of tools from a single vendor (not HP), most of which has not been deployed because of its complexity and problems with the parts they have put into production. But in all fairness, what they have does work at some level, as they do not currently have issues with outages.


So, where are they going? On the infrastructure side, they are planning to move to Linux, blade servers (likely HP) running Oracle 11g, and VMware. They also plan to refresh their enterprise applications to the latest versions. And, they plan to experiment with some software as a service (SaaS) to see if it meets their needs and fits with their culture.


The overall IT infrastructure management strategy is (1) prevent, (2) detect, (3) respond. Currently, they do not have a true NOC. They are moving in that direction following an ITIL model, building an Operations Bridge. Their IT management goals are to reduce time on incident management and to add automation as much as possible to reduce human error.


On the IT management tools side, they need a way to manage the physical and virtual infrastructure, from the OS through the applications, in a single enterprise event consolidation console. This will capture all the events (after they are de-duplicated upstream), prioritize according to business goals, and then respond appropriately, either by automatically fixing them or by routing to the right subject matter expert.


They generally liked HP’s vision and the success stories we shared about other organizations that had already implemented all or part of their vision. Interestingly, the place that generated the most skepticism was our discussion about runbook automation. While they saw the value of automating IT processes, they just could not believe that they could use this technology to streamline some of their common IT problems. Even talking about specific use cases (from a pool of hundreds of customers) did not sway them. Since seeing is believing, the sales rep took an action item to schedule a follow-up meeting where we can show them a demo.


Overall, a great discussion. And, a happy day to hear that a customer is planning a new data center. Even more so when they want to use the opportunity to re-architect their systems to build in the latest and greatest business technology optimization.


For Operations Center, Peter Spielvogel


 

Should Your CIO Share a Console With the Server Admin?

Dell recently issued a press release announcing its plans to launch an enterprise console that uses a single console vs. “up to 9 for HP”.


HP’s approach has been to provide role-based tools that are optimized for a specific functional area. Each of our tools shares information across related domains.


Let me make an analogy. If you are building an office building, do you want a single set of plans or ones that domain experts can use to make fast and accurate decisions? The structural engineers need to understand loads on beams and columns. Electricians need to determine how to place the conduits and wiring. Plumbers need to know where to place pipes and how to calculate pumping loads and pressures.


Although each ‘discipline’ has their own tools to plan their work (engineer, plumber, electrician) they also have VISIBILITY across what the other disciplines are doing - the shared information model. They can see enough to make sure that (e.g.) electrical wiring is not routed too close to water pipes, or that routings are not planned to go through parts of the building which are fire control areas etc. They can see what they need to know to be successful but they do not have the tools (or skills) to plan the building structure – they are focused on the areas for which they are responsible - the wiring or plumbing.


The point is, each discipline needs specialized information to do their job correctly. The same is true with our IT management products.


Operations Manager consolidates events from ALL domains and ANY vendor, both physical and virtual. This includes servers (from HP and other vendors), storage (from HP and other vendors), networks, applications, middleware, end-user events, and even business transactions. It provides a single pane of glass to understand and identify the root cause of IT problems and greatly reduce troubleshooting time. It delivers tremendous value to our 10,000+ customers. Many use Operations Manager to consolidate events from their S**, I**, and D*** hardware, running Windows or UNIX. The primary users are in the Operations Bridge (ITIL terminology) or Network Operations Center (NOC).


Fixing incidents is closely aligned with the Service Desk. So, HP Operations Manager can automatically open service tickets in HP Service Center or other ticketing products. The service desk has a different tracking paradigm, so they view the information through a more appropriate user interface.


HP servers come with the most complete set of instrumentation available. SIM, Insight Control, Insight Dynamics, and Insight Orchestration allow customers to provision, manage, and troubleshoot their systems at a very granular level. The user interfaces are optimized for server administrators, as opposed to first line operators or IT service desk staff. Any event information flows to Operations Center, which can use it to open service tickets, if necessary.


For executives (both IT and line of business) who might want to see everything and think a single console is appropriate, we have such a product for that market. Business Availability Center creates dashboards that show the status of Business Services, including the dollar volume of transactions, service levels, and other high-level metrics. While a single pane of glass for the IT infrastructure may sound appealing on the surface, this audience does not care about the status of individual servers or network devices. They want to know whether their business services are meeting performance and availability targets. And yes, if a problem appears in this console, there are tools to diagnose problems, identify the root cause, and get that information to the experts who can fix them.


And, I have not even discussed our comprehensive suite of business service automation products (BSA). These too deliver specialized functionality and tightly integrate with their respective domains (Server Automation, Network Automation, Storage Essentials, Client Automation) and share information with related products (Operations Orchestration can automate virtually any IT process across any domain).


HP has been doing this for over 15 years. We lead the market - check out the reports from any of the major analyst firms.


How many consoles do you use to run your IT infrastructure (event management, ticketing, troubleshooting, executive dashboard)? What is the ideal number?


For Operations Center <http://www.hp.com/go/opc>, Peter Spielvogel

Virtualization Management - Only Part of the Consolidation Picture

Last week, I posted about VMware’s move toward becoming one of the major infrastructure management vendors.


Since then, there has been much dialog about virtualization and the vendors that provide management consoles for virtual servers and their physical counterparts.


Virtualization.info responded that it’s not a big leap for VMware to add support for 3rd party hypervisors.
Microsoft discussed their virtualization management offering in their blog.
Denise Dubie at Network World had a recent post about some new analytics capabilities that vendors are adding to their virtual management offerings.


What seems to be missing from all these discussions is how managing virtual servers is only a part of a comprehensive infrastructure management solution.


Here’s what I have been hearing (loud and clear) from our customers.




  • Infrastructure management needs to start with the end-user experience. The line of business managers to whom IT is accountable do not care about server performance or other IT metrics. They care about the availability of their business services. Any management solution must have a way to monitor service level agreements on this basis.


  • Virtualization is not an IT strategy on its own. It is generally part of a broader data center consolidation initiative - an opportunity to reduce hardware, energy, and server management costs, all while improving the overall quality of service. So, talking about virtualization without the impact on the end user is just another IT-driven initiative (albeit one with potentially large and measurable cost savings).


  • Managing physical and virtual servers through a single set of instrumentation is the right approach (everyone seems to get this now). But, a comprehensive data center consolidation project needs to manage storage, networks, applications, and application component events through a single consolidated operations bridge.

Evaluate any infrastructure management vendor based on whether they can do all these things. Don’t just rely on a demo. Ask to speak to some customers running such a solution in production. Finally, have them create a proof of concept on your data.


Then, let’s talk about which vendors can manage heterogeneous environments.


For Operations Center, Peter Spielvogel

A New Puppy, IT Operations and the Economy

My family and I picked up a new Brittany puppy this past Thursday with mass excitement mixed with some degree of uncertainty. Yeah, I know what you are thinking: What were you thinking? We took a stable situation and introduced something new and exciting in the hope of improving the overall environment.


With the economic tsunami we are in, it may not be the best time to expand a family from four to five. The weeks prior to picking up Clancy were spent reading various puppy care books from our local library to ensure we were ready. I was expecting to find a single “how to” book/manual, but each writer had slightly different advice and perspective. I suspect the variety of opinions might be similar to the numerous bosses you have had in your IT Operations career.


My family is making sacrifices to improve our quality of life by adding Clancy. Any spare time we had before is now consumed by the usual duties of feeding, walking, and picking up after him. One significant change I anticipate for my kids is that nothing can be left on the floor. Before the puppy came home, it was okay to leave something out if you were going to use it again. My kids probably took this liberty for granted. I suspect this eye-opening adjustment to strategically placed toys is similar to the shock IT Operations teams face today learning that their standard allocation of budget to keep the lights on will probably shrink significantly this year.


Prior to the downturn in the economy, it was standard practice for a company to spend 70% of its IT budget “keeping the lights on”. Was 70% the right amount to allocate? Should it remain the same to avoid impacting service levels? I am afraid I do not have an answer to this question, nor do I believe many IT Operations Managers know for sure. I do not claim to have magic pixie dust - there is no book or manual for running IT operations in 2009-2010.


What I can tell you from a recent market study is that many IT organizations buy Performance and Availability monitoring tools along with an Event Management system with good intentions. They plan to “centralize event processing” at an Operations Bridge inside their organization but for various reasons never quite get there.


The study revealed that most companies have more than one event console in their IT organization, which presents an opportunity to reduce CAPEX and OPEX by streamlining their IT operations processes and tool sets. Now, given that most IT budgets will be cut, is this the time to change your environment to a centralized console for event management? It worked for HP - but then, we need to eat our own dog food.


Are you willing to take the leap and endure some short term discomfort to permanently lower your IT cost structure?


For Operations Center, Dennis Corning.

Automated Infrastructure Discovery - Extreme Makeover

Good Discovery Can Uncover Hidden Secrets
Infrastructure discovery has something of a bad reputation in some quarters. We've done some recent surveys of companies utilizing a variety of vendors’ IT operations products. What's interesting is that, in our survey results, automated infrastructure discovery fared pretty badly in terms of the support that it received within organizations - and also in terms of the success that they believed they had achieved.
 
There are a number of reasons underlying these survey results. Technology issues and organizational challenges were highlighted in our survey. But I believe that one of the main 'issues' that discovery has is that people have lost sight of its basic values and the benefits that they can bring. Organizations see 'wide reaching' discovery initiatives as complex to implement and maintain - and they do not see compelling short term benefits.
 
I got to thinking about discovery and the path that it has taken over the last 15 or 20 years. I remember the excitement when HP released its first cut of Network Node Manager. It included discovery that showed people things about their networks that they just did not know. There were always surprises when we took NNM into new sites to demonstrate it. Apart from showing folks what was actually connected to the network, NNM also showed how the network was structured, the topology.
 
Visualization --> Association --> Correlation
And once people can see and visualize those two sets of information they start to make associations about how events detected in the network relate to each other - they use the discovery information to optimize their ability to operate the network infrastructure.
 
So the next logical evolution for tools like NNM was to start building some of the analysis into the software as 'correlation'. For example, the ability to determine that the 51 "node down" events you just received are actually just one 'router down' event and 50 symptoms generated by the nodes that are 'behind' the router in the network topology. Network operators could ignore the 'noise' and focus on the events that were likely causes of outages. Pretty simple stuff (in principle) but very effective at optimizing operational activities.
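The router example above fits in a few lines of code. This is a toy sketch, not the product's algorithm: the topology (which nodes sit 'behind' which router) and the event names are invented for illustration.

```python
# Invented topology: 50 nodes are reachable only through router1.
behind = {"router1": ["node%d" % i for i in range(1, 51)]}

def correlate(events):
    """Split a burst of down events into root causes and noise.

    Any down node that sits behind a down router is suppressed as a
    symptom; everything else is a candidate root cause.
    """
    down = set(events)
    suppressed = set()
    for router, nodes in behind.items():
        if router in down:
            suppressed |= down & set(nodes)
    return sorted(down - suppressed), sorted(suppressed)

events = ["router1"] + ["node%d" % i for i in range(1, 51)]
roots, noise = correlate(events)
# The 51-event storm collapses to one actionable cause, router1,
# with the 50 node events set aside as symptoms.
```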
 
Scroll forward 15 years. Discovery technologies now extend across most aspects of infrastructure and the use cases are much more varied. Certainly inventory maintenance is a key motivator for many organizations - both software and hardware discovery play important roles in supporting asset tracking and license compliance activities. Not hugely exciting for most Operational Management teams.
 
Moving Towards Service Impact Analysis
Service impact analysis is a more significant capability for Operations Management teams and is a goal that many organizations are chasing. Use discovery to find all my infrastructure components - network devices, servers, application and database instances - and tie them together so I can see how my Business Services are using the infrastructure. Then, when I detect an event on a network device or database I can understand which Business Services might be impacted and I can prioritize my operational resources and activities. Some organizations are doing this quite successfully and getting significant benefits in streamlining their operational management activities and aligning them with the priorities of the business.
 
But there is one benefit of discovery which seems to have been left by the side of the road. The network discovery example I started with provides a good reference. Once you know what is 'out there' and how it is connected together, you can use that topology information to understand how failures in one part of the infrastructure can cause 'ghost events' - symptom events - to be generated by infrastructure components which rely in some way on the errant component. When you get 5 events from a variety of components - storage, database, email server, network devices - then if you know how those components are 'connected' you can relate the events together and determine which are symptoms and which is the likely cause.
 
Optimizing the Operations Bridge
Now, to be fair, many organizations understand that this is important in optimizing their operational management activities. In our survey, we found that many companies deploy skilled people with extensive knowledge of the infrastructure into the first level operations bridge to help make sense of the event stream - try to work out which events to work on and which are dead ends. But it's expensive to do this - and not entirely effective. Operations still end up wasting effort by chasing symptoms before they deal with the actual cause event. Inevitably this increases mean time to repair, increases operational costs and degrades the quality of service delivered to the business.
 
So where is the automation? We added correlation to network monitoring solutions years ago to help do exactly this stuff, so why not do 'infrastructure-wide' correlation?
 
Well, it's a more complex problem to solve of course. And there is also the problem that many (most?) organizations just do not have comprehensive discovery across all of their infrastructure. Or if they do have good coverage it's from a variety of tools so it's not in one place where all of the inter-component relationships can be analyzed.
 
Topology Based Event Correlation - Automate Human Judgment
This is exactly the problem which we've been solving with our Topology Based Event Correlation (TBEC)  technology. Back to basics - although the developers would not thank me for saying that, as it's a complex technology. Take events from a variety of sources, do some clever stuff to map them to the discovered components in the discovery database (discovered using a number of discrete tools) and then use the relationships between the discovered components to automatically do what human operators are trying to do manually - indicate the cause event.
 
Doing this stuff automatically for network events made sense 15 years ago, doing it across the complexity of an entire infrastructure makes even more sense today. It eliminates false starts and wasted effort.
 
This is a 'quick win' for Operational Management teams. Improved efficiency, reduced operational costs, free up senior staff to work on other activities… better value delivered to the business (and of course huge pay raises for the Operations Manager).
 
So what do you need to enable TBEC to help streamline your operations? Well, you need events from infrastructure monitoring tools - and most organizations have more than enough of those. But you also need infrastructure discovery information - the more the better.
 
Maybe infrastructure discovery needs a makeover.

 

For HP Operations Center, Jon Haworth


 

Consolidated IT Operations: Return of the Prodigal Son

Let's face it, the concept of bringing together all of your IT infrastructure monitoring into a single "NOC" or Operations Bridge has been around for years. Mainframe folks will tell you they were doing this stuff 30 years ago.

 

Unfortunately, in the distributed computer systems world, a lot of organizations have still not managed to successfully consolidate all of their IT infrastructure operations. I see a lot of companies who believe that they have made good progress; often they've managed to pull together most of the server and application operations activities, and maybe minimized the number of monitoring tools that they use.

 

But when you dig below the surface, there will often be a separate network operations team, and maybe an application support team that owns a 'special' application. And of course there are the admins responsible for the rollout of the new virtualization technology - which just "cannot" be monitored by the normal operations tools and processes.

 

And that's the problem... Often there is resistance from a number of different angles to initiatives which try to pull end-to-end infrastructure monitoring into a single place. Legacy organizational resistance is probably the biggest challenge - silos have a tendency to be very difficult to 'flatten'.

 

Another common theme is that the technical influencers (architects, consultants, application specialists etc.) in the organization create FUD that the toolset used by the operations teams is not suitable for monitoring the new technology that they are rolling out. They need to use their own special monitoring solution or the project will fail. Because it's a new technology and everyone is scared of a failed rollout, management acquiesces and another little fragmented set of monitoring technology, organization and processes is born. Every new technology has potential for this - I've seen it happen with MS Windows, Linux, Active Directory, Citrix, VMware - the list is endless.

 

"So what?" I hear you say, "what's your point?" Well, I'm seeing a lot of organizations revisiting the whole topic of consolidating their IT operations and establishing a single Operations Bridge - and making some significant changes.

 

Why now? Simple - to reduce the operational expenditure associated with keeping the lights on. In the current economic climate, organizations are motivated 'top down' to drive cost out wherever they can. Initiatives that deliver cost reductions in the short term get executive sponsors. There is also much lower tolerance for the kinds of hurdles that used to be raised as objections - organizational silos get flattened, and tool portfolios are rationalized.

 

It's not just about cutting cost of course. Simply reducing headcount would achieve that goal, but the chances are that the quality of IT service delivered to the business would suffer, and there would be direct impacts on the ability of the business to function.

 

Of course, the trick is to consolidate into an Operations Bridge while still delivering the same or higher quality of IT service to the business - but at reduced cost. Often the economies of scale and the streamlined, consistent processes that an Operations Bridge enables will deliver significant benefits - and reduce OpEx.

 

This is where HP's Operations Center solutions have focused for the last 12 to 15 years. In my next post I'll talk about where HP sees the next significant gains being made - where we are focusing so we can help our customers take their existing Operations Bridge and significantly increase its efficiency and effectiveness.

 

In the meantime, if you want to read a little more about the case for consolidated operations, take a look at this white paper "Working Smart in IT Operations - the case for consolidated operations".

 

For HP Operations Center, Jon Haworth.

 

 
