07-20-2010 05:01 AM
Performance monitoring may be loosely defined as collecting metric data for later analysis, with the ultimate goal of identifying the root causes of bottlenecks.
While this statement is rarely disputed, some common misconceptions can divert effort from that goal, add overhead, and increase costs. They are:
➤ Monitoring basic infrastructure is enough.
Monitoring system metrics (such as CPU, memory, and disk) is important, but these metrics do not provide adequate information to truly understand whether actual users or applications are experiencing performance
problems. The causes of most performance problems today are usually problems with application components, as opposed to individual pieces of hardware. As a result, system monitoring alone, while still critical, will not provide an accurate or complete picture of true application performance.
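Part of the appeal of basic infrastructure metrics is how trivially they can be collected, which is exactly why they are tempting and exactly why they are insufficient on their own. A minimal sketch using only the Python standard library (Unix-only, since it relies on `os.getloadavg`; the function name and dictionary keys are illustrative, not from any monitoring product):

```python
import os
import shutil
import time

def sample_system_metrics(path="/"):
    """Collect the basic infrastructure metrics discussed above.

    Note what is *missing*: nothing here says whether any user
    transaction or application component is actually healthy.
    """
    load1, load5, load15 = os.getloadavg()   # CPU run-queue averages (Unix only)
    disk = shutil.disk_usage(path)           # total/used/free bytes for the path
    return {
        "load_1m": load1,
        "load_5m": load5,
        "load_15m": load15,
        "disk_used_pct": 100.0 * disk.used / disk.total,
        "timestamp": time.time(),
    }

print(sample_system_metrics())
```

A sampler like this can run in a loop and feed a time-series store, but on its own it answers "is the box busy?", not "is the application slow for users?".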
➤ Monitoring processes or services for an application is enough.
Today’s applications, whether packaged, J2EE, .NET, or customized SOA applications, are complex and span multiple systems and various technologies. In order to thoroughly understand application health,
detailed component monitoring and diagnostics are required to understand the complex interactions between the various services. HP Diagnostics enables you to start with the end-user business process, then drill down into application components and system layers, ensuring that you can rapidly resolve the problems with the greatest business impact while still meeting service level agreements.
➤ Monitoring all of the available metrics for a system or application is the best approach.
Collecting too much data creates an analysis burden that can obscure real performance problems. Moreover, 100 percent coverage is neither necessary nor even desirable. The famous 80/20 rule - “80 percent of problems are generally caused by 20 percent of the system’s or application’s components” - holds for performance monitoring as well. The solution lies in knowing which systems relate to critical business functions, and which ones do not.
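One way to act on the 80/20 rule is to tag each available metric with the business function it supports and then select only those tied to critical functions. A sketch, assuming a hypothetical metric catalog (the metric and function names are invented for illustration):

```python
# Hypothetical metric catalog: each metric is tagged with the business
# function it supports and whether that function is business-critical.
METRIC_CATALOG = {
    "checkout.response_time_ms": {"business_function": "checkout",  "critical": True},
    "checkout.db_connections":   {"business_function": "checkout",  "critical": True},
    "admin.report_gen_time_ms":  {"business_function": "reporting", "critical": False},
    "search.cache_hit_ratio":    {"business_function": "search",    "critical": True},
    "legacy.batch_queue_depth":  {"business_function": "batch",     "critical": False},
}

def select_metrics(catalog, critical_only=True):
    """Keep only metrics tied to critical business functions (the '20 percent')."""
    return sorted(name for name, tags in catalog.items()
                  if tags["critical"] or not critical_only)

print(select_metrics(METRIC_CATALOG))
# -> ['checkout.db_connections', 'checkout.response_time_ms', 'search.cache_hit_ratio']
```

The point is not the code but the discipline: the selection criterion is business impact, not availability of a counter.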
➤ All tests can be done using the same set of metrics.
While some metrics will most likely remain selected for the majority of load tests, good performance monitoring uses different sets of measurements depending on the type of test being performed.
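In practice this can be as simple as maintaining a shared baseline set plus per-test-type additions. A sketch with invented test-type and metric names (not a standard taxonomy):

```python
# Metrics kept for every load test, regardless of type.
BASELINE = ["cpu_pct", "mem_used_mb", "avg_response_time_ms"]

# Illustrative per-test-type additions: endurance runs watch for leaks,
# stress runs watch for saturation and refusals.
METRICS_BY_TEST = {
    "load":      BASELINE + ["throughput_rps", "error_rate_pct"],
    "endurance": BASELINE + ["heap_used_mb", "gc_pause_ms"],
    "stress":    BASELINE + ["queue_depth", "connection_refusals"],
}

print(METRICS_BY_TEST["endurance"])
```

Keeping the baseline explicit makes it obvious which measurements are comparable across test types and which are test-specific.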
➤ Monitoring the web server is usually enough.
When monitoring complex modern applications, understanding the application’s architecture is essential to getting a realistic picture of the causes of performance problems. A standard web application deployment consists of at least a web
server, an application server, and a database server, in most cases spread across multiple physical machines and even physical locations. With SOA proliferation, even more infrastructure and services may be involved in generating responses to the end user. It is therefore very important to monitor all relevant servers - especially database machines. Sometimes it may also be necessary to monitor client workstations.
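A cheap safeguard against tier blind spots is to keep a map of the deployment and check it against the hosts that actually have monitoring in place. A sketch with a hypothetical deployment map (host and tier names are invented):

```python
# Hypothetical deployment map for the web/app/database tiers described above.
DEPLOYMENT = {
    "web":      ["web01", "web02"],
    "app":      ["app01"],
    "database": ["db01"],
}

# Hosts with a monitoring agent installed; note that db01 is missing,
# which is exactly the common mistake this check is meant to catch.
MONITORED_HOSTS = {"web01", "web02", "app01"}

def unmonitored_tiers(deployment, monitored):
    """Return tiers where at least one host has no monitoring coverage."""
    return sorted(tier for tier, hosts in deployment.items()
                  if any(h not in monitored for h in hosts))

print(unmonitored_tiers(DEPLOYMENT, MONITORED_HOSTS))  # -> ['database']
```

Running such a check as part of test setup turns “did we forget the database server?” from a post-mortem question into a pre-test one.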
This post is part of the Performance Monitoring Best Practices series - you can see all of the posts under the PerfMonitoring tag.