Infrastructure Management Software Blog

Overcome key capacity management challenges with service intelligence - free webinar

To gain the agility and cost advantages of virtualization and cloud computing, organizations must be able to combine historical and current infrastructure performance data with usage forecasts. That ability enables IT to visualize, optimize, and plan workload placement and future infrastructure investments. The dynamic relationships in a complex IT environment, however, mean that correlating and mapping physical, virtual, and cloud-based elements is beyond the realm of human judgment and spreadsheets. It requires business service intelligence.

 

Attend a free webinar on Service Intelligence to see HP's exciting new offerings in this area.

 

Service Health Reporter - service centric, cross domain reporting for dynamic IT infrastructure

You collect performance metrics and response time statistics throughout your environment, and you have a dynamic model of the IT infrastructure and the business services - so what happens if you bring the two sets of data together?

Does ITIL v3 require good IT monitoring solutions? Is Service Operation the key?

Recently I attended a 3-day ITIL v3 Foundation training course. The ITIL framework defines five stages in the lifecycle of an IT service, and according to my course materials, the fourth stage, “Service Operation,” is “where the customer really sees the value.” What does this mean? It means that having a good IT monitoring solution in place is an absolute necessity.

Smart Plug-Ins (SPIs) and Agent-Based/Agentless Data Collection Explained: What you need to manage your IT environment

Have questions about HP Operations Center SPIs like the infrastructure SPI or the SPI for virtualization? Wondering what agents you need to get? And how many? The following post is a good high-level summary of what you need and where you need it. Read on …


In general, an HP Operations Manager solution consists of two things: an Operations Manager server and data collection technologies. At a high level, data collection is either agent-based or agentless. The purpose of this post is to explain these two approaches and what you need to implement a solution that uses both. The two figures below provide high-level architectural representations of the discussion that follows.


[Figures: high-level architecture of agent-based and agentless data collection]


Agent-based data collection explained


HP Operations Manager agents collect, aggregate, and correlate monitoring data and events from multiple sources. The agents suppress irrelevant and duplicate events and correlate the remaining relevant events to produce actionable, enriched management information. In addition, dependencies and propagation rules show the cause of an incident, which helps reduce mean time to recovery and downtime. Agents are installed on each managed system or node, regardless of whether it is a physical or a virtual machine, and have the following additional capabilities (a minimal sketch of the manage-by-exception filtering follows the list):


· Allow the addition and customization of monitoring sources not included in out-of-the-box monitoring policies.


· Collect and analyze performance data from operating systems and installed applications and use historical patterns to establish performance baselines.


· Autonomously perform automated corrective actions (in isolation from the Operations Manager server) and manage by exception (forward only actionable events to the Operations Manager server through the use of intelligent filtering, duplicate suppression, and correlation techniques).


· Set up HTTPS communication with the Operations Manager server – even in outbound-only communications configurations.


· Support monitoring of data center technologies such as virtualization and clusters.
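
To make the manage-by-exception idea concrete, here is a minimal Python sketch of agent-side filtering. It is an illustration only - the event format, severity scale, and forwarding threshold are assumptions of mine, not the HP agent's actual policy model.

from collections import defaultdict

# Illustrative agent-side filtering: suppress duplicates and low-severity
# noise locally, and forward only actionable events to the management server.
SEVERITY = {"normal": 0, "warning": 1, "minor": 2, "major": 3, "critical": 4}
FORWARD_AT = "minor"          # assumed threshold for "actionable"
seen = defaultdict(int)       # duplicate counters per event key

def should_forward(event):
    key = (event["node"], event["source"], event["message"])
    seen[key] += 1
    if seen[key] > 1:          # duplicate: count it locally instead of flooding the server
        return False
    return SEVERITY[event["severity"]] >= SEVERITY[FORWARD_AT]

events = [
    {"node": "db01", "source": "disk", "severity": "major", "message": "/var 95% full"},
    {"node": "db01", "source": "disk", "severity": "major", "message": "/var 95% full"},
    {"node": "db01", "source": "cpu", "severity": "normal", "message": "load normal again"},
]
forwarded = [e for e in events if should_forward(e)]
print(len(forwarded), "of", len(events), "events forwarded")   # prints: 1 of 3 events forwarded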


Agentless data collection explained


Agentless data collection, through the use of HP SiteScope monitoring probes, complements agent-based data collection by providing flexibility in how information is gathered from the IT environment. Like agent-based data collection, agentless monitoring covers both physical and virtual systems and has the following capabilities (a rough sketch of an external probe follows the list):


· Gathers detailed performance data for infrastructure targets without installing an agent on the managed node.


· Provides easy monitoring of the IT infrastructure.


· Has an intuitive user interface.


· Allows actions to be initiated automatically when a monitor’s status changes.


· Provides solution templates that enable quick deployment of monitoring probes, which include specialized monitors, default metrics, proactive tests, and best practices for an application or monitoring component.


· Has the ability to monitor previously unmanaged or hard-to-manage systems and devices through easy-to-use customization tools.
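
As a rough analogy for what an agentless probe does, the following Python sketch checks a service from the outside, with nothing installed on the target. The host names and ports are invented; SiteScope itself provides this kind of check through configurable monitors rather than custom scripts.

import socket
import time

def probe_tcp(host, port, timeout=3.0):
    # Connect from the central monitoring host; nothing runs on the target itself.
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, (time.monotonic() - start) * 1000.0
    except OSError:
        return False, None

for host, port in [("web01.example.com", 80), ("db01.example.com", 1433)]:  # assumed targets
    up, ms = probe_tcp(host, port)
    print(host, port, "up (%.0f ms)" % ms if up else "DOWN")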


Infrastructure Smart Plug-Ins – what are they and where do they fit in?


Infrastructure Smart Plug-ins supplement agents by collecting data at the infrastructure, or managed-system, level. They provide out-of-the-box, packaged, intelligent management and comprise the following three SPIs (a simple resource-monitoring sketch follows the list):


· The “system” SPI discovers operating system and platform resources, generates alerts on system diagnostic events, monitors system services and processes, and monitors resource utilization.


· The “cluster” SPI automatically discovers and represents cluster nodes and configured resource groups in a clustered environment, monitors cluster services and processes, and enables monitoring of clustered applications - even as they move “on-the-fly” between cluster servers.


· The “virtualization” SPI - which is supported on the most common virtualization hypervisors - discovers and monitors virtualization platforms (both hosts and virtual machines) and provides graphs and reports on resource utilization.
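
To give a feel for the kind of data the “system” SPI works with, here is a small Python sketch that samples resource utilization and flags anything over a threshold. It uses the third-party psutil library and made-up thresholds; it illustrates the concept, not the SPI itself.

import psutil   # third-party: pip install psutil

THRESHOLDS = {"cpu_percent": 90.0, "memory_percent": 85.0, "disk_percent": 90.0}  # assumed limits

metrics = {
    "cpu_percent": psutil.cpu_percent(interval=1),
    "memory_percent": psutil.virtual_memory().percent,
    "disk_percent": psutil.disk_usage("/").percent,
}
for name, value in metrics.items():
    state = "ALERT" if value >= THRESHOLDS[name] else "ok"
    print("%s: %.1f%% [%s]" % (name, value, state))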


Application SPIs – are these different from infrastructure SPIs?


Yes and no. Yes, because they perform essentially the same functions as infrastructure SPIs in terms of collecting, aggregating, and correlating monitoring information. No, in terms of the data they are responsible for: infrastructure SPIs, as mentioned above, work at the system level, whereas application SPIs work at the application level. The following picture builds on the previous figures and more clearly depicts where application and infrastructure SPIs reside:


[Figure: where application and infrastructure SPIs reside in the architecture]


And does HP have SPIs! We have SPIs for databases (Oracle, Informix, Microsoft SQL Server, IBM DB2), web application servers (IBM WebSphere, JBoss, Oracle WebLogic), storage (HP Storage Area Manager, Veritas NetBackup and Volume Manager), and ERP/CRM (PeopleSoft, SAP, Siebel) products. Not to mention lots of SPIs developed by HP partners around Cisco, Novell NetWare, and Documentum products.  


New Licensing of Agents and SPIs!


Yes, we’ve changed our licensing structure for both SPIs and Agents. Licenses are now instance-based, meaning you need one per operating system or application instance. Plus, we’ve got a great new “Operating System Instance Advanced License”, which includes the following:


o Operations Manager agents


o “System” Smart Plug-In


o “Cluster” Smart Plug-In


o 15 agentless monitoring probes/points


If you are monitoring a virtualized environment, the “virtualization” SPI - although it is an infrastructure SPI like the “system” and “cluster” SPIs - is purchased separately from the Operating System Instance Advanced License. One virtualization SPI is required for each monitored virtual-server host.
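
As a worked example of the instance-based model, consider a hypothetical environment of 4 virtualization hosts running 40 virtual machines, and assume each host and each guest counts as one operating system instance. Counting strictly from the rules in this post (your actual entitlements depend on your license agreement), a rough tally looks like this:

vm_hosts = 4
os_instances = 4 + 40                 # hosts + guest VMs, one OS instance each (assumption)

os_advanced_licenses = os_instances   # each includes the agent, "system" SPI,
                                      # "cluster" SPI, and 15 agentless points
virtualization_spis = vm_hosts        # one per monitored virtual-server host
agentless_points_included = os_advanced_licenses * 15

print(os_advanced_licenses, virtualization_spis, agentless_points_included)  # prints: 44 4 660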


I hope that this has helped explain agents and SPIs. If you have any questions about this post or instrumentation in general, please feel free to comment on this post.


For HP Operations Center, Sonja Hickey.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.

Sonja Hickey has joined the HP Operations Center team

My name is Sonja Hickey and I just joined the HP Operations Center team 3 days ago.  I will be focusing on the data collection aspects of the Operations Center product line and working with Peter Spielvogel and Jon Haworth as a Product Marketing Manager.


I’m really excited to be working for Hewlett-Packard for several reasons.  First, I like HP’s financial strength, which I think is due to its diversification in many different areas.  It’s pretty obvious that HP doesn’t put all of its eggs in one basket and that has paid off.  You’ll learn through my posts that I am very much into investing and as I read through my latest financial newsletters, I take note of comments like “HP has a pragmatic culture” and “HP was better positioned than most when the recession hit”.  I think you can now see why I’m very happy to be part of the HP team.


I also truly believe in the products that I will be representing - products that help IT teams effectively monitor and manage their IT infrastructure and environment. In my opinion, these are not “nice-to-have” products, but rather “must-have” products for any company that wants to be successful in today’s marketplace.


On a different note, I’d like to take this opportunity to let you know a little about me. From a work/career perspective, I’ve primarily worked in the high-tech space in either product management or product marketing roles for the past 12 years. Outside of work, my free time is spent with my family (3 children and a wonderful husband) as well as reading historical fiction and honing my investing skills (sorry, I don’t provide stock tips!).


I’d like to end this post with a request. Please do not hesitate to post a comment should you have any questions, concerns, or issues around the Operations Center products. This will help me understand the problems and issues you face and how our products can help solve them. I may not have an immediate answer, or I may not be the right person to answer your question, but I will get back to you. That’s a promise.


Thanks so much for reading my blog and I look forward to hearing from you. 


For HP Operations Center, Sonja Hickey.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.


 

Why I love Information Technology (IT)

Last night I watched the U2 live concert webcast on YouTube (www.youtube.com/u2). During the show, I just enjoyed the concert, which of course comes with Bono’s philosophizing. (BBC News review of the concert.) Afterwards, I marveled at the technology that made it all possible.


 


The YouTube ad for the event billed it as “the world's biggest band performing on the world's largest stage,” and the organizers said people were watching on all seven continents. I can’t even imagine how much bandwidth this single event consumed.


I cannot recall a live event available in the identical format to people everywhere on the planet. Sure, major news events are followed around the world, but people typically watch on their local news channel or in some local language portal.


There was a similar effort several years back with Live 8, a free concert to “make poverty history”. But that event used several channels, one for each of the global venues, and I recall the user experience was less than optimal in both video and audio quality. The U2 show was crisp on my home DSL connection (1.5 Mbps), which is far from speedy compared to what is available here in Silicon Valley or especially in Asia. There were a few sub-second lapses, but overall the experience was amazing.


Think about the U2 event for a moment. Technology (and global standards) made it possible for anyone on earth to experience the same concert at the same time in the same format, regardless of the viewing platform.


To me, this is truly amazing. And that’s why I love IT and am proud to work for a company that makes so much of the infrastructure possible.


Kudos to Google and YouTube (and U2) for pulling this off!


If anyone knows what IT infrastructure they used to do this, please let me know.


For HP Operations Center, Peter Spielvogel.


Get the latest updates on our Twitter feed @HPITOps http://twitter.com/HPITOps


Join the HP OpenView & Operations Management group on LinkedIn.


 

Free Webinar: 5 Tips to Reduce Incident Resolution Costs

On Tuesday July 28, I will be participating in an EMA webinar with researcher and Vice President Dennis Drogseth. The official title “What is New in the Not-so-New Area of Event Management: Five Tips to Reduce Incident Resolution Costs” is very telling. Many people believe that there is nothing new in managing IT infrastructure. The reality is that some of HP’s biggest R&D investments have been in this area.


Displaying disparate events may not be rocket science, but correlating events from different IT domains to determine which is the cause and which are the symptoms certainly is. This is exactly the premise of OMi, which uses topology-based event correlation (TBEC) to consolidate event storms into actionable information.
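
As a rough illustration of the topology-based idea (not the actual TBEC algorithm), the Python sketch below walks a small, invented dependency graph: among the components that raised events, any component whose own dependencies are all healthy is treated as a probable cause, and the rest as symptoms.

depends_on = {                      # invented topology: component -> what it relies on
    "web-app": ["app-server"],
    "app-server": ["database", "storage-array"],
    "database": ["storage-array"],
    "storage-array": [],
}

def upstream(component):
    # Everything this component transitively depends on.
    seen, stack = set(), list(depends_on.get(component, []))
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(depends_on.get(c, []))
    return seen

affected = ["web-app", "app-server", "database", "storage-array"]   # components with open events
causes = [c for c in affected if not (upstream(c) & set(affected))]
symptoms = [c for c in affected if c not in causes]
print("probable cause:", causes)    # ['storage-array']
print("symptoms:", symptoms)        # ['web-app', 'app-server', 'database']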


Here’s the webinar abstract:


Event management may not be the next new thing but it is quietly making dramatic advances that can save your company both time and money. These new approaches rely on understanding up-to-date service dependencies to accelerate problem resolution.


During the 45-minute webinar, we will answer the following questions:



  • Why should you reconsider your event and performance management strategy?

  • What is the impact of ITIL v3 and the concept of an operations bridge on your people, processes, and tools?

  • What innovations can help you more cost-effectively manage events?


We will also leave time at the end to address your questions.


Register for the EMA event management webinar.
www.enterprisemanagement.com/hpeventmanagement



For Operations Center, Peter Spielvogel.

Controlling SiteScope from Operations Manager

I have been getting many questions from both customers and colleagues about how Operations Manager and SiteScope work together. This is a very timely topic as we have some new capabilities connecting SiteScope and Operations Manager.


Since many readers will not be attending my talk about this topic at Software Universe, I’ll preview the information here. (Alex Ryals and I will be focusing on customer success stories in our presentation, so it is still very much worth attending, even if you already know the product integration part.)


The main role of Operations Manager is to serve as an enterprise event console or operations bridge, consolidating events from various domain managers for servers, storage, and networks, from both HP and other vendors. It accomplishes this using agents that run on each managed node, monitoring availability and performance. These agents send information to the Operations Manager server based on user-defined policies. The agents can also act autonomously, performing corrective actions without communicating with the server. This is very useful for minimizing network traffic, or even ensuring operation if the connection between the server and a managed node is interrupted.



SiteScope complements this mission by monitoring servers and the applications running on them using agent-less technologies. SiteScope too monitors both HP and other hardware. In some cases, enterprises have some servers on which administrators either cannot or will not install agents. In other cases, customers will monitor servers using a combination of both agent-based and agent-less technology. One common example is for monitoring email environments running Microsoft Exchange, Active Directory and all the supporting infrastructure.


So, how do Operations Manager and SiteScope fit together?



  • SiteScope forwards its events into Operations Manager with the full details.

  • SiteScope targets also appear in the Operations Manager Service Map.

  • Operations Manager lets you control multiple SiteScope servers, including transferring configuration information from one SiteScope instance to another and synchronizing settings across multiple SiteScope servers.


The ability to monitor your IT infrastructure using a combination of both agent-based and agent-less technology lets you simultaneously improve the quality of service and reduce IT management costs.


For HP Operations Center, Peter Spielvogel

Smart Decisions Today instead of Desperate Ones Tomorrow

Yesterday, I had the opportunity to participate in a briefing in which a dozen IT executives from a leading financial institution came to discuss their unique requirements and how we can help them meet their aggressive growth goals. The first slide that their head of technology presented contained the equation “change = opportunity + risk”. This started a six-hour discussion about market and technology disruptions and how they affect the company's quest for increased performance, capacity, and efficiency.


The theme of the day was business value. The customer had recently made several major changes to their IT infrastructure, all to embrace innovations that provided more performance for less money. Some of these upgrades meant dropping vendors of proprietary technologies who have served them well for years in favor of open platforms such as Linux. In financial markets, efficiency rules.


While efficiency was the foundation of many of their technology decisions, innovation was their passion. Advanced technology was the enabler that allowed them to be first, best, and fastest in meeting their customers’ demanding needs. The company needs to develop new financial instruments to address rapidly changing and unprecedented (at least in recent history) market conditions. With rising transaction volumes (a long-term trend, anyway) and rising customer expectations, there is a constant need to increase system performance and capacity, and for IT solutions that can manage growing complexity.


In addition to technology changes, the regulatory environment for financial services firms is also shifting rapidly. While some decades-old regulations such as Glass-Steagall are now gone, new ones are taking their place, and more will follow to fix perceived free-market inadequacies. This places additional load on IT systems, which must track and log every moving part to ensure compliance with new rules.


Reporting granularity is a major requirement for any IT system. While averages provide useful trend information, they fall short in delivering actionable intelligence for troubleshooting. Generally, it is spikes that cause system outages rather than averages. And if your data collection intervals are too broad, you lose the ability to focus on the incident that caused an outage or lost data. This was a big discussion topic.
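
A tiny, contrived example of the point about averages: the same CPU series summarized two ways. The numbers are invented, but they show how a 5-minute average can look healthy while a 10-second view exposes the spike that actually caused the incident.

# 30 invented CPU utilization samples, one every 10 seconds over 5 minutes
samples = [35, 40, 38, 42, 37, 36, 41, 39, 38, 40,
           37, 36, 99, 98, 97, 38, 40, 39, 41, 37,
           36, 38, 40, 39, 41, 37, 36, 38, 40, 39]

print("5-minute average: %.1f%%" % (sum(samples) / len(samples)))   # about 44% - looks fine
print("peak 10-second sample: %d%%" % max(samples))                 # 99% - the real story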


Another memorable takeaway from the day came when one of their executives said, “I would rather make smart decisions today instead of desperate ones in the future.” This was the premise for setting up what turned out to be a very productive meeting.


What smart decisions are you making to simplify managing your IT infrastructure?


For HP Operations Center, Peter Spielvogel

Making Infrastructure Management Less Taxing

I bought my copy of TurboTax over the weekend. While waiting in the long checkout line, it occurred to me that the United States tax code has several similarities to IT infrastructure management. Let me explain.


At the beginning, there was a needs assessment, planning process, growth forecast, and then implementation. When the system went live, everything was simple and all the pieces worked smoothly together. Over time came growth, “enhancements,” complexity, and some unintended consequences. The current state of affairs: a mess. But we know the system’s idiosyncrasies, or at least have processes in place to make everything work.


Quick, which am I talking about: your IT infrastructure or the tax code?


In a new datacenter, companies design in the management tools that will help them best meet their end-users’ needs and run the equipment to gain maximum efficiencies. Over time, requirements evolve (expand!) and new technologies such as virtualization proliferate. IT managers keep their environments current by adding new management tools, often with new software focused on the most recent infrastructure enhancement.


Modern datacenters include dedicated tools for the latest rack of blade servers, with environmental and power monitoring; software to dynamically move virtual machines from one physical server to another without users even noticing; and monitoring consoles that report metrics based on the actual performance your end-users experience.


But, even with all these innovations, it’s still really hard to keep track of all the moving parts in any IT infrastructure. So, periodically, companies undergo datacenter consolidations to streamline their operations, reduce the number of management tools, and make the whole process less taxing.


If you want help in simplifying the tax code, contact your local member of Congress.


If you want to simplify your IT infrastructure management, HP has a variety of solutions to help you. We’ll talk in depth about datacenter consolidation and server virtualization in future posts.


For HP Operations Center, Peter Spielvogel.
