08-31-2010 05:40 AM
For applications to comply with performance objectives, their performance has to be monitored continuously. Monitoring yields performance data that is useful for diagnosing performance problems under production-like conditions. This data may indicate the existence of a bottleneck, that is, a situation where the performance or capacity of an entire system is severely limited by a single component.
Formally speaking, a bottleneck lies on a system's critical path and provides the lowest throughput. In client-server and especially Web-based systems, there may be numerous slow points, such as the CPU, memory, the database, or a network link. Some of them can be identified by monitoring the operating system's relevant counters, while others can only be pinpointed by instrumenting the application.
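As a minimal sketch of in-process monitoring on the Java side, the standard `java.lang.management` MXBeans expose a few of the counters mentioned above (system load, heap usage, live thread count) without any external tooling; the class name here is illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.ThreadMXBean;

public class CounterSnapshot {
    public static void main(String[] args) {
        // System load average over the last minute; -1.0 if unavailable on this platform
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        System.out.println("loadAvg=" + os.getSystemLoadAverage());

        // Heap usage: a steadily climbing "used" value can hint at a memory bottleneck
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println("heapUsed=" + heap.getUsed() + " heapMax=" + heap.getMax());

        // Live thread count: a fast-growing number suggests poor thread management
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("liveThreads=" + threads.getThreadCount());
    }
}
```

Counters like these identify the resource under pressure; pinpointing the component responsible usually still requires application-level instrumentation.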
HP provides a product, HP Diagnostics for J2EE/.Net, that enables IT professionals to:
➤ Proactively detect problems in production.
➤ Rapidly isolate problems to system or application tiers.
➤ Pinpoint root causes to specific application components.
An application may perform well in the development and QA environments, yet fail to scale or exhibit performance problems in production. It is important to understand the impact of the infrastructure in which the application runs and the behavior of its many components as they interact under load. From the diagnostic perspective, it is important to be able to isolate a problem by tier of the application architecture and by application component, and to have progressive drill-down visibility into J2EE/.Net performance problems, the J2EE/.Net environment, and the actual application logic, with sufficient detail to determine the root cause of the problem.
From the business perspective, though, seeing system resources fully utilized is the intended goal - after all, all those CPUs, memory, and disks were paid for in order to be as busy as possible. An informal definition of a bottleneck would therefore be a situation where a resource is fully utilized and a queue of processes or threads is waiting to be served. Distributed environments are especially vulnerable to bottlenecks due to:
➤ The multitude of operating systems on which the application components may reside.
➤ The network configuration between the components.
➤ Firewalls and other security measures.
➤ Database malfunctioning, where poor schema design, lack of proper indexing, and poor storage partitioning may greatly slow overall system response time.
➤ Ineffective thread management causing a decrease in concurrent usage.
➤ An unchecked, high number of open connections.
➤ A fast-growing number of threads due to poor thread pool size management.
➤ Database connection pool size misconfiguration.
➤ Unoptimized, frequently executed SQL statements.
➤ No memory tuning, both physical and shared, which is required for high-volume transaction processing.
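Several of the causes above (runaway thread counts, pool size mismanagement) come down to bounding concurrency. A minimal sketch of the idea, using the standard `java.util.concurrent` API: a fixed-size pool caps the number of worker threads, and excess tasks queue up instead of spawning threads without limit. The sizing heuristic here (one thread per processor) is illustrative only; real pool sizes depend on workload:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Cap concurrency: extra tasks wait in the pool's queue
        // rather than each getting a new thread.
        int poolSize = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);

        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.submit(() -> {
                // simulate a unit of work
                return task * task;
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("completed 100 tasks with a pool of " + poolSize + " threads");
    }
}
```

Database connection pools follow the same principle: an upper bound sized to what the database can actually serve concurrently, rather than one connection per request.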
As mentioned above, performance monitoring ideally leads to the identification of bottlenecks, followed by their elimination and/or application tuning.
Another application of the 80/20 rule mentioned above is that 80% of resources are consumed by 20% of operations inside any given application. Needless to say, these most frequently executed operations are the most likely causes of bottlenecks. Therefore, improving this 20% of the code may greatly improve overall performance.
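One simple way to find that hot 20% is to accumulate time per named operation and compare totals. A hypothetical sketch (the class and method names are illustrative, not from any particular tool):

```java
import java.util.HashMap;
import java.util.Map;

public class HotSpotTimer {
    private static final Map<String, Long> totals = new HashMap<>();

    // Time one operation and accumulate per-operation totals,
    // so the few "hot" operations stand out in the report.
    static void timed(String name, Runnable op) {
        long start = System.nanoTime();
        op.run();
        long elapsed = System.nanoTime() - start;
        totals.merge(name, elapsed, Long::sum);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            timed("hotOp", () -> Math.sqrt(42));   // frequent operation
        }
        timed("rareOp", () -> Math.log(42));       // infrequent operation
        totals.forEach((name, ns) ->
                System.out.println(name + ": " + ns + " ns total"));
    }
}
```

In practice a profiler or a diagnostics product does this at scale, but the principle is the same: rank operations by total cost, then tune from the top of the list.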
The process of performance tuning is itself part science, part art, as it may involve intervention at the design level, compile level, assembly level, and at run time. It usually cannot be done without trade-offs - normally only one or two aspects can be addressed at a time, such as execution time, memory usage, disk space, bandwidth, power consumption, or some other resource. For example, reducing request execution time through increased caching leads to greater memory consumption, exploiting multiple processors may complicate the source code, and so on.
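The caching trade-off can be made concrete with a bounded cache, where the memory cost is capped by evicting the least-recently-used entry. A minimal sketch using the standard `LinkedHashMap` eviction hook (the class name is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedCache(int maxEntries) {
        // accessOrder=true keeps entries in least-recently-used-first order
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Cap memory use: evict the LRU entry once the bound is exceeded
        return size() > maxEntries;
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3"); // evicts "a", the least recently used entry
        System.out.println(cache.keySet()); // [b, c]
    }
}
```

The bound is exactly the knob in the trade-off: a larger cache saves more execution time at the price of more memory.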
This post is part of the Performance Monitoring Best Practices series - you can see all of the posts under the PerfMonitoring tag.