EVA 4400 - slow performance? (Testing with IOmeter) (967 Views)
Regular Advisor
Posts: 111
Registered: ‎11-03-2009
Message 1 of 5 (967 Views)

EVA 4400 - slow performance? (Testing with IOmeter)

Hi,

I'm new to the EVA and not a storage expert, and I have a question about the performance of an EVA4400 as measured with IOmeter.

I have two HP DL380 G6 servers with Windows Server 2008 x64 that are connected to a Vraid5 Vdisk on an EVA4400 through QLogic FC1242SR FCAs and HP StorageWorks 8/8 SAN switches. The EVA4400 is equipped with 24 10k rpm 300GB FC drives, all configured in a single disk group.

When testing the performance on the server using IOmeter I seem to get strange results.

For example, when performing a test with 4K 100% random reads over a 1GB test file, I only get about 224 IOPS (0.88 MB/s):

http://www.abload.de/image.php?img=eva4400-4k-randomreadkd0o.jpg

However, when performing exactly the same test on the server's local RAID array (RAID5 over 3x 10k rpm 146GB 2.5" SAS), I get a higher result of about 532 IOPS (2.08 MB/s):

http://www.abload.de/image.php?img=local-4k-randomreadrc35.jpg

When using a test profile with 32k 100% sequential reads, performance is about the same (3200 IOPS for the EVA, 3400 IOPS for the local storage).

Isn't the EVA with its 24 disks supposed to achieve much higher results than the local storage consisting of only three disks? (By the way, there is currently no production load on the EVA.) Or are there other considerations to take into account when testing and comparing performance with IOmeter?

Thanks
Sam
Honored Contributor
Posts: 2,397
Registered: ‎11-09-2007
Message 2 of 5 (967 Views)

Re: EVA 4400 - slow performance? (Testing with IOmeter)

With 24x10K and 100% reads you can have up to 2,839 IOPS (11.3 MB/s) before read latency goes above 15 ms.
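That ceiling is in the same ballpark as a simple per-drive service-time model. A rough sketch, assuming a typical (hypothetical, not EVA-specific) 4.5 ms average seek for a 10k rpm FC drive:

```python
# Rough per-drive random-read IOPS model (generic estimate, not EVA-specific).
# Service time = average seek + half a rotation; 4K transfer time is negligible.
rpm = 10_000
avg_seek_ms = 4.5                       # assumed typical seek for a 10k FC drive
half_rotation_ms = 0.5 * 60_000 / rpm   # 3.0 ms average rotational latency
service_ms = avg_seek_ms + half_rotation_ms
iops_per_drive = 1000 / service_ms      # ~133 IOPS per drive
group_iops = iops_per_drive * 24        # 24-disk group, 100% reads
print(round(group_iops))                # ~3200, same order as the 2,839 quoted
```

The exact number depends on the assumed seek time and the latency you are willing to accept, which is why the quoted figure is tied to a 15 ms latency limit.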

You must check how many I/Os actually reach the EVA and how long they take. EVAperf adds counters to the Windows Performance Monitor.

The parameters of the HBA driver can have a strong effect on the maximum throughput. See the attached JPG.
Regular Advisor
Posts: 111
Registered: ‎11-03-2009
Message 3 of 5 (967 Views)

Re: EVA 4400 - slow performance? (Testing with IOmeter)

Thanks for your reply.

In the meantime I've done some reading. After increasing the outstanding I/Os in IOmeter from 1 to 64, the EVA achieves 5,743 IOPS / 22.4 MB/s with an average response time of 11 ms.

Running the same test (4K 100% random read) on the local storage achieves 2,240 IOPS / 8.5 MB/s with a response time of 30 ms.

I don't really understand what the response time means and how it fits into the overall picture (as I wrote, I'm a beginner in storage). Is there any reading you can recommend to build a basic understanding?

Right now, all this is a little confusing. For example, when testing 32K 100% sequential reads I also see some strange values: around 550 MB/s for the EVA (which is connected via 4 Gb/s FC) and 700 MB/s for the local storage. I don't see how a 4 Gb/s FC link could deliver 550 MB/s (the 700 MB/s could be explained by cache hits on the PCIe SAS controller).
Honored Contributor
Posts: 2,397
Registered: ‎11-09-2007
Message 4 of 5 (967 Views)

Re: EVA 4400 - slow performance? (Testing with IOmeter)

Too many parameters influence the performance.

If you set outstanding I/Os = 1, the server must wait for each I/O to reach the EVA, be processed, and the acknowledgement to come back before it can send the next one. That's why you get higher numbers with more outstanding I/Os.
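The relationship between outstanding I/Os, response time, and IOPS is Little's Law: throughput ≈ outstanding I/Os / average response time. A quick check against the numbers posted above:

```python
# Little's Law: throughput (IOPS) ~= outstanding I/Os / average response time.
# Figures taken from the measurements posted earlier in this thread.
outstanding = 64
response_s = 0.011                  # 11 ms average response time on the EVA
predicted_iops = outstanding / response_s
print(round(predicted_iops))        # ~5818, close to the measured 5,743

# With 1 outstanding I/O the same law caps throughput at 1 / round-trip time,
# so the original 224 IOPS implies roughly a 4.5 ms round trip per I/O:
implied_rt_ms = 1 / 224 * 1000
print(round(implied_rt_ms, 2))      # ~4.46 ms
```

This is why the low queue-depth result says little about the array's capacity: it mostly measures single-I/O round-trip latency.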

The same applies to the HBA queue depth: the default is 16 in many cases, but you can try 32 or 64.

The sequential read speed also depends on whether you have two HBAs in the server and use load balancing (a setting in MPIO). Each 4 Gb/s FC connection can carry about 400 MB/s.
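The 400 MB/s figure follows from the 4GFC line rate and its 8b/10b encoding, and two balanced links would also account for the ~550 MB/s seen earlier. A sketch of the arithmetic:

```python
# Why "400 MB/s per link": 4GFC signals at 4.25 Gbaud with 8b/10b encoding,
# so only 8 of every 10 bits on the wire carry data.
line_gbaud = 4.25
payload_MBps = line_gbaud * 1e9 * (8 / 10) / 8 / 1e6   # bits -> data bytes
print(payload_MBps)          # 425.0 MB/s raw payload; ~400 MB/s after framing

# Two HBAs with MPIO load balancing roughly double the ceiling, which is
# enough headroom for the ~550 MB/s sequential-read figure reported above.
print(payload_MBps * 2)      # 850.0
```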
Regular Advisor
Posts: 111
Registered: ‎11-03-2009
Message 5 of 5 (967 Views)

Re: EVA 4400 - slow performance? (Testing with IOmeter)

Can the sequential read speed be increased by using two adapters even with a single LUN? I thought that on the EVA only one controller manages a given LUN, and that I/O arriving at the other controller gets proxied to the managing controller. If I understand that correctly, it shouldn't be possible to achieve more than 4 Gb/s for one LUN, since it's limited by the managing controller.