11-11-2010 01:02 PM
I already checked these ones but did not find it
EVA4400 performance white paper 4AA1-8473ENW
I have seen different ITRC posts suggesting a general estimation of 170-200 IOPS for 15K drives and 120-150 for 10K drives.
I am trying to determine a performance baseline for an EVA and to know whether it is overloaded or reaching its maximum performance capacity with the current configuration.
EVA4400 w/ dual controllers (09534000 firmware)
4GB total cache
2x embedded 8Gb FC switches
24x 450GB 15K & 10x 146GB 15K drives.
11-11-2010 11:27 PM
Have you seen this support document?
I think that, together with the "A tactical approach to performance problem diagnosis" paper you referenced above, you have the info you need to get it sorted out.
11-12-2010 02:08 PM
Now, should I assume that the 120 and 170 IOPS per disk apply the same way with VRAID 1, 5, or 6, or how do I figure that out?
Also, regarding the read and write latencies: should I consider each EVA host port individually, or should I sum the reads and sum the writes across ports? (The EVA4400 has 4 host ports: 2 per controller.)
I did a capture with evaperf over a few days and analyzed it with TLViz. The array shows an average of 3000 IOPS (total req/s), with peaks around 6000 IOPS. Each host port looks very similar: read latency averages 20-25 ms and write latency 8-16 ms. The latencies are above the numbers HP recommends, but the IOPS are within the 5780 maximum I calculated (34 disks x 170 IOPS, across 2 disk groups: one with 24x 450GB 15K drives and a second with 10x 146GB 15K drives; one Vraid5 LUN and the rest Vraid6). Is this still an indication that the EVA is overloaded, or what other counters should I take into consideration? I am kind of confused here. Thanks
11-12-2010 02:27 PM
Those per-disk IOPS figures are back-end limits; depending on the vRaid level they will be reached with more or less front-end IO.
Pure reads are equal for all vRaid levels.
Example for 1000 IOPS random front-end writes:
on vRaid1 this will cause 2000 back-end IOs (each write is mirrored)
on vRaid5 this will cause 4000 back-end IOs (a read-modify-write for each small write):
- Read the original data and parity block (two requests)
- Calculate the new parity block
- Write the new data and parity block (two requests)
For vRaid6 it will be even more: six back-end IOs per write (read the data block and both parity blocks, then write all three back), as the sketch below illustrates.
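To make that arithmetic concrete, here is a minimal Python sketch. It assumes the ~170 IOPS per 15K disk figure quoted earlier in the thread and the standard small-write penalties of 2, 4 and 6 back-end IOs per front-end write for vRaid1, vRaid5 and vRaid6; treat it as a back-of-the-envelope check, not an exact model.

# Rough vRaid front-end -> back-end translation.
# Assumption: standard small random write penalties (vRaid1=2, vRaid5=4, vRaid6=6).
WRITE_PENALTY = {"vraid1": 2, "vraid5": 4, "vraid6": 6}

def backend_iops(read_iops, write_iops, vraid):
    # Reads pass through 1:1 on every vRaid level; each front-end write
    # is multiplied by the vRaid small-write penalty.
    return read_iops + write_iops * WRITE_PENALTY[vraid]

def backend_capacity(disks, iops_per_disk=170):
    # Approximate back-end IOPS a group of 15K disks can sustain.
    return disks * iops_per_disk

# Example: 3000 front-end IOPS at a 70/30 read/write mix on vRaid6,
# against the 34 disks in this thread: 2100 + 900*6 = 7500 back-end IOs,
# which already exceeds the ~5780 IOPS back-end capacity.
print(backend_iops(2100, 900, "vraid6"), backend_capacity(34))

So whether 3000 front-end IOPS overloads the array depends heavily on the read/write ratio; evaperf reports reads and writes separately, so plug in your actual mix.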
11-15-2010 02:12 AM
If it is IOPS you're after, e.g. for an OLTP-style database, use:
- As many 15K disks as you can afford
- All disks in one disk group
- A total number of disks divisible by eight, i.e. 8, 16, 24, 32, ...
- vRAID-1 only (see the sizing sketch below)
- Check the SCSI command queue length
Suggested reading: the EVA Best Practices document.
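Turning that list into a rough sizing aid (a sketch only, reusing the assumed vRAID-1 penalty of 2 back-end IOs per write and the ~170 IOPS per 15K disk figure from earlier), you can invert the calculation to estimate how many disks a target OLTP mix needs, rounded up to a multiple of eight:

import math

def disks_needed(read_iops, write_iops, iops_per_disk=170):
    # vRAID-1: each front-end write costs two back-end IOs (mirroring).
    backend = read_iops + 2 * write_iops
    raw = math.ceil(backend / iops_per_disk)
    # Round up so the disk group stays divisible by eight.
    return math.ceil(raw / 8) * 8

# Example: 4000 reads/s + 1000 writes/s -> 6000 back-end IOPS -> 40 disks
print(disks_needed(4000, 1000))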
11-15-2010 06:02 AM
Check Fig 1 in http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-8473 (the EVA4400 performance white paper referenced above).
11-24-2010 11:10 AM
I've spent the last few weeks doing just what you're looking at and have written a tool. If you can get a perfmon log to me, I reckon I could spot any performance bottlenecks in about 10 minutes.
I'm interested in testing the tool on other people's perflog data, so we'd both get something out of it.
11-24-2010 12:41 PM
Thanks for the offer. I have an evaperf capture that is about 650MB raw / 15MB zipped.
Let me know what I need to do, and also what your tool will do.
11-25-2010 03:50 AM
It also produces performance and utilization graphs for all elements of the EVA, from physical disks to LUNs to ports, including mirror port activity. It identifies LUNs that are initiating proxy reads or writes and displays this traffic. Other parts show the busiest LUNs and even busy physical disks...
I need to re-build it a bit as it's only written for a single disk group; it won't take long. I'll set up an FTP location this evening and let you know.
11-30-2010 02:03 AM
The firewall is open now.
Just to confirm, the tool analyses EVA perf logs generated by the command:
evaperf all -csv -cont 10 -dur 7200 -ts2 -sz 5000-xxxx-xxxx-xxxx > c:\perf_bench.csv
The variables are:
-cont interval between samples (seconds)
-dur duration of capture (seconds)
-sz WWN of the EVA
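If you want a first look at such a capture yourself, here is a minimal parsing sketch in Python with pandas. The column names in it are placeholders I made up for illustration; the actual headers in an evaperf CSV vary by version, so check your file's header row and rename accordingly.

import pandas as pd

# NOTE: the column names below are assumed placeholders, not evaperf's
# guaranteed CSV headers - open the file and adjust them to match.
df = pd.read_csv(r"c:\perf_bench.csv")

for col in ["Total Req/s", "Read Latency (ms)", "Write Latency (ms)"]:
    if col in df.columns:
        vals = pd.to_numeric(df[col], errors="coerce").dropna()
        print(f"{col}: avg={vals.mean():.1f}  peak={vals.max():.1f}")
    else:
        print(f"{col}: not found - check the header row for the real name")

Something like this is also a quick way to cross-check whatever the tool reports against the raw numbers.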