[Best Practices] Performance Monitoring - Guidelines

There are simple general guidelines to keep in mind when preparing for
performance monitoring:


➤ Start with a standard sampling interval. If the problem is more specific, or if you can pinpoint a suspected bottleneck, shorten the interval.
➤ Based on the sampling interval, decide on the entire monitoring session length. Sampling at frequent intervals should only be done for shorter runs.
➤ Try to balance the number of objects you are monitoring against the sampling frequency, in order to keep the collected data within manageable limits (a data-volume estimation sketch follows this list).
➤ Pick only monitors that are relevant to the nature of the application under test, so that the testing scenario is covered comprehensively while avoiding the redundancy of deploying similar monitors under different names.
➤ Too many deployed counters may overburden the analysis and add performance overhead.
➤ Make sure the correct system configuration (for example, virtual memory size) is not overlooked. Although this is not exactly a part of the monitoring discipline, it may greatly affect the results of the test.
➤ Decide on a policy for remote machines. Either run the monitor service on each remote machine, collect the results locally, and transfer them to the administrator in bulk at the end of the run, or gather metrics continuously and stream them over the network to the administrator. Choose a policy based on the application under test and the defined performance objectives (a collection-policy sketch follows this list).
➤ When setting thresholds, consider the "generic" recommendations published by hardware and operating system vendors (for example, average CPU usage should stay below 80% over a period of time, or disk queue length should stay below 2) as relevant for any test and application. Breaching these "generic" recommendations is not always bad, but it is always worth cross-checking the monitoring results against load test response times and other metrics (a threshold-check sketch follows this list).
➤ Choose the parameters that will monitor the most worthwhile activity of the application and its objectives. Having too much data can overburden the analysis process.
➤ Monitoring goals can be achieved not only by using built-in system or application objects and counters, but also by watching application-specific logs, scripts, XML files, and so on.
➤ It may be a good idea to have a small number of basic monitors constantly running (for example, in HP SiteScope), and more detailed monitoring defined for the load testing scenario during test execution.
➤ Measure metrics not only under load, but also for some period before and after the load test. This creates a "local baseline" and lets you verify that the application under test returns to that baseline once the load test is complete (a baseline-comparison sketch follows this list).
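To make the data-volume trade-off above concrete, here is a rough back-of-the-envelope sketch (not from the original post). The 100-bytes-per-sample figure is an assumption; adjust it for your monitoring tools.

```python
BYTES_PER_SAMPLE = 100  # assumed average size of one stored counter sample

def estimate_collected_data_mb(num_counters: int,
                               interval_seconds: float,
                               session_minutes: float) -> float:
    """Approximate megabytes of monitoring data a run will collect."""
    samples_per_counter = (session_minutes * 60) / interval_seconds
    total_bytes = num_counters * samples_per_counter * BYTES_PER_SAMPLE
    return total_bytes / (1024 * 1024)

# 50 counters sampled every 5 seconds over a 2-hour run: roughly 6.9 MB
print(f"{estimate_collected_data_mb(50, 5, 120):.1f} MB")
```

Halving the interval or doubling the counter count doubles the volume, which is why frequent sampling belongs to short, focused runs.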
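For the remote-machine policy, the following sketch contrasts the two approaches. It is an assumed design, not an HP tool API; read_metric, send_bulk, and send_sample are hypothetical callables you would supply.

```python
import time

def collect_bulk(read_metric, duration_s, interval_s, send_bulk):
    """Buffer samples on the remote machine; one transfer at the end of the run."""
    buffered = []
    end = time.time() + duration_s
    while time.time() < end:
        buffered.append((time.time(), read_metric()))
        time.sleep(interval_s)
    send_bulk(buffered)  # a single transfer, so no network noise during the test

def collect_streaming(read_metric, duration_s, interval_s, send_sample):
    """Send each sample immediately; adds steady network overhead during the test."""
    end = time.time() + duration_s
    while time.time() < end:
        send_sample((time.time(), read_metric()))
        time.sleep(interval_s)
```

Bulk collection keeps monitoring traffic out of the measured network but delays visibility; streaming gives live results at the cost of some overhead while the test runs.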
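The "generic" vendor recommendations can be encoded as a simple automated check. This is a minimal sketch, assuming metrics have already been collected into lists of samples; a warning is a prompt to cross-check, not automatically a failed test.

```python
GENERIC_THRESHOLDS = {
    "cpu_percent": 80.0,       # average CPU usage should stay below 80%
    "disk_queue_length": 2.0,  # average disk queue length should stay below 2
}

def check_thresholds(metrics: dict) -> list:
    """Return warnings for metrics whose average breaches a generic threshold."""
    warnings = []
    for name, limit in GENERIC_THRESHOLDS.items():
        samples = metrics.get(name)
        if not samples:
            continue
        avg = sum(samples) / len(samples)
        if avg >= limit:
            warnings.append(f"{name}: average {avg:.1f} >= {limit}; "
                            "cross-check against response times")
    return warnings

print(check_thresholds({"cpu_percent": [75, 92, 88],
                        "disk_queue_length": [1.2, 0.8]}))
```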
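Finally, the "local baseline" check from the last bullet can be verified programmatically. This is a minimal sketch, assuming you sample the same metric for a quiet window before and after the test; the 10% tolerance is an assumption to tune per metric and per application.

```python
from statistics import mean

def returns_to_baseline(before, after, tolerance=0.10):
    """True if the post-test average is within `tolerance` of the pre-test average."""
    baseline = mean(before)
    post = mean(after)
    if baseline == 0:
        return post == 0  # assumed handling of an all-zero baseline
    return abs(post - baseline) / baseline <= tolerance

# CPU% sampled for a few minutes before and after the load test:
print(returns_to_baseline([12, 10, 11, 13], [12, 11, 12]))  # True: system recovered
```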

 

This post is part of the Performance Monitoring Best Practices series - you can see all of the posts under the PerfMonitoring tag.
