04-05-2013 07:05 AM
Chris was having some issues installing a 3PAR Storage System with VMware ESX 5.1.
I am working on the installation of a 3PAR 7200 iSCSI array into a C7000 blade enclosure. The blades are Gen8 with Emulex 554FLB NICs; the current SPP has been run on them. The OA is at 3.71 and the VC is a Flex-10/10D running 3.75. The iSCSI network is two A5920 switches; they are linked but have not yet been configured as an IRF stack. The VC connection is two ports from each VC (only one port on each VC is active until the switches are configured).
The current HP ESX 5.1 image was downloaded from VMware and installed. The ESX servers are configured with four vSwitches. A single vSwitch with two VMkernel ports was created for iSCSI. The VMkernel ports use NIC teaming overrides so that only one NIC is bound to each VMkernel port (each port gets a different active NIC). The software iSCSI initiator is bound to the VMkernel ports. We pretty much followed the LeftHand best practices. As the volumes were connected, the path policy was changed to Round Robin.
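For reference, the override-and-bind setup described above can be sketched with esxcli on ESXi 5.1. The portgroup names, vmnic/vmk numbers, the vmhba33 adapter, and the naa device identifier below are all placeholders, not values from this environment:

```shell
# Pin one active uplink per iSCSI portgroup (NIC teaming override).
# Portgroup and vmnic names are hypothetical.
esxcli network vswitch standard portgroup policy failover set --portgroup-name "iSCSI-A" --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name "iSCSI-B" --active-uplinks vmnic3

# Bind each VMkernel port to the software iSCSI adapter (port binding).
# Adapter and vmk names are hypothetical.
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Set Round Robin on a device (replace the naa identifier with a real one).
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```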
The issue we are seeing on the 3PAR is too many iSCSI sessions per port. The 3PAR appears to have a limit of 64 iSCSI sessions per port, and there are 4 ports total in the 7200, so 256 iSCSI sessions. There is very little information on iSCSI in the 3PAR ESX configuration guide. I am unsure whether the session count is a hard limit, or whether it can be raised or the array set up differently. Each ESX server has 2 ports and there are 4 ports on the 3PAR, and initially 6(?) LUNs were exported, for 48 or more iSCSI sessions per server. The multipathing setup seems to be generating that many sessions, so we are hitting the 64-sessions-per-port limit very quickly and the 3PAR becomes unusable.
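For what it's worth, the figures above are consistent with a simple model: sessions per host = bound VMkernel ports × array target ports × exported targets. This is an assumption, not anything confirmed by HP, and it takes the poster's "6(?)" LUN count at face value:

```shell
#!/bin/sh
# Figures from the thread (the 6-target count is the poster's guess).
vmk_ports=2        # bound VMkernel ports per ESX host
array_ports=4      # iSCSI ports on the 3PAR 7200
targets=6          # exported LUNs/targets

sessions_per_host=$((vmk_ports * array_ports * targets))
echo "sessions per host: $sessions_per_host"                          # 48

sessions_per_port_per_host=$((vmk_ports * targets))
echo "sessions per array port per host: $sessions_per_port_per_host"  # 12

# With a 64-session-per-port limit, only a handful of hosts fit:
max_hosts=$((64 / sessions_per_port_per_host))
echo "hosts before a port hits 64 sessions: $max_hosts"               # 5
```

If that model holds, roughly five hosts exhaust a port, which matches how quickly the limit was hit.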
Any ideas? Did we configure it correctly? Is there a way to limit iSCSI sessions? Should we only connect a host to one port on each 3PAR controller? Is this a known driver issue? The customer said he applied some ESX updates; I do not know which ones, or whether they are on the ESX recipe.
Some input from Benjamin:
I found an interesting article on port binding that may explain some of this.
I had always assumed that binding the adapter was for any iSCSI software implementation because the VMware storage guide does not differentiate when discussing port binding.
I will tell you that in my previous life of installing VMware with other arrays, the install guides from a three-lettered competitor showed port binding on arrays using multiple IP addresses, and I never bothered to look any further.
Port binding with P4000=yes. Port binding with 3PAR=no?
Port binding is used in iSCSI when multiple VMkernel ports for iSCSI reside on the same broadcast domain to allow multiple paths to an iSCSI array that broadcasts a single IP address. When using port binding, you must remember that:
- Array Target iSCSI ports must reside on the same broadcast domain as the VMkernel port
- All VMkernel ports must reside on the same broadcast domain
- Currently, port binding does not support network routing
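If it helps to verify which VMkernel ports are actually bound, that can be checked from the host; the vmhba33 adapter name below is a placeholder (the first command shows the real one):

```shell
# Find the software iSCSI adapter name, then list its bound VMkernel ports.
esxcli iscsi adapter list
esxcli iscsi networkportal list --adapter vmhba33
```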
When not to use port binding
In this sample scenario, there are multiple VMkernel ports on different broadcast domains and the target ports also reside on different broadcast domains. In this case, you should not use port binding.
If you configure port binding in this configuration, you may experience these issues:
- Rescan times take longer than usual
- An incorrect number of paths is seen per device
- Unable to see any storage from the storage device
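If port binding turns out to be the wrong fit here, the bindings can be removed the same way they were added (again, the adapter and vmk names are placeholders), followed by a rescan so path counts are re-evaluated:

```shell
# Unbind the VMkernel ports from the software iSCSI adapter (hypothetical names).
esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk2

# Rescan the adapter to re-enumerate devices and paths.
esxcli storage core adapter rescan --adapter vmhba33
```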
Any other input for Chris? The only other thing I have heard is that we, i.e. HP, are supposed to use the VMware image that is on the HP Software Depot. I don't know what issues there are with using the VMware image.
05-22-2013 09:08 AM
We are experiencing the same issue - ESXi 5.1 with a 3PAR StoreServ 7200 shows several hundred paths over iSCSI (in the 3PAR Management Console), whereas VMware shows the correct number in the vSphere Client.
Were you able to resolve this? I have opened a case with 3Par support in the interim.
Thanks in advance.