11-22-2013 11:39 AM
We need more bandwidth for our system, and we'd like to go the 10GBASE-T route rather than 10GbE SFP+. Is it possible? It seems to be the only hurdle keeping us from upgrading.
11-24-2013 07:23 PM
As long as your NIC has the same chipset as the SFP+ option, it should in theory work.
You also need to add 4 GB of RAM, as that comes with the 10GbE enablement kit.
Remember you will be completely out of support if you do it, so it is probably not a great idea - but it can be done.
The actual NIC in the kit is an NC550SFP - HP do not make an equivalent with RJ45 attach, and I don't think Emulex do either (BladeEngine 2 chipset).
If you are looking to save some cash, you may be better off buying the NC550SFP NICs and some RAM and using twinax (direct-attach copper cabling) rather than optical SFP+ modules. Some people refer to them as copper SFP cables.
You would need to check that the cables are compatible with your switch vendor, as switches normally lock out non-certified cables (if you have Cisco, you can disable the lockout, however).
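On Cisco IOS switches, for example, the lockout can typically be disabled with a hidden, officially unsupported command; the exact syntax varies by platform and software version, so treat this as a sketch rather than gospel:

```
! Cisco IOS (Catalyst) - unsupported, hidden commands; syntax varies by platform
switch(config)# service unsupported-transceiver
switch(config)# no errdisable detect cause gbic-invalid
```

After this, third-party SFP+/twinax cables are generally accepted rather than err-disabling the port - but as with the NIC swap itself, you are on your own for support.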
12-06-2013 08:21 PM
3 × 4 GB UDIMMs (fastest), or 3 × 4 GB RDIMMs (most reliable).
A single DIMM gives you only a third of the memory bandwidth of a fully populated triple-channel configuration - too slow for a dual 10GbE NIC!
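A back-of-envelope check of that claim (assuming DDR3-1333 on a triple-channel controller, typical of P4000-era Xeon boards; the "touch factor" for how many times storage traffic crosses memory is an illustrative assumption, not a measurement):

```python
# Rough bandwidth arithmetic: one DDR3 channel vs. a dual 10GbE NIC.
# Assumes DDR3-1333 (1333 MT/s on an 8-byte bus); figures are peak
# theoretical rates, purely illustrative.

CHANNEL_GBPS = 1333e6 * 8 / 1e9   # one DDR3-1333 channel: ~10.7 GB/s peak
TRIPLE_GBPS = 3 * CHANNEL_GBPS    # all three channels populated: ~32 GB/s

NIC_GBPS = 2 * 10 / 8             # dual 10GbE, one direction: 2.5 GB/s

# Storage traffic typically crosses memory several times (DMA in,
# cache/metadata work, copy back out), so scale by an assumed touch factor.
TOUCHES = 4
effective_need = NIC_GBPS * TOUCHES   # ~10 GB/s of real memory traffic

print(f"single channel: {CHANNEL_GBPS:.1f} GB/s peak, "
      f"estimated need: ~{effective_need:.1f} GB/s")
```

Under these assumptions a single DIMM (one channel, ~10.7 GB/s peak) has essentially no headroom over the ~10 GB/s of memory traffic the NICs can generate, while three populated channels leave plenty.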
Check out www.servethehome.com - they have deals like a Mellanox ConnectX-3 card for $300.
With ESXi 5.5 you can run 40GbE with the VSA.
The VSA is faster than the hardware!
Also, you can run any hardware you want, and it is not going to fall into "unsupported" like the 10GBASE-T route you suggest.
The VSA also has an option for tiering, so you can set up SAS and SATA volumes, or SSD and SATA volumes!
I wish they had a physical-to-VSA trade-in license!
05-22-2014 07:48 AM
I have the same problem. 10GBase-T SFP+ modules don't exist, and nobody at HP can tell me if any 10GBase-T cards can be used in the StoreVirtual product line.
05-23-2014 01:41 AM
If you really, really wanted to go down this path, you could get something like:
It might be as reliable as anything, but you get exactly zero support.
I would suggest that if you are even thinking you need 10GbE on the P4000, it may be worth looking at another solution such as the 3PAR. If you really want to bodge something together, then that is about your only option.
The reality is that 10GBASE-T is not a great system - it is power hungry and relies on pushing copper cables to the limit of their capacity, leaving little in reserve.
The other elephant in the room is that most of the inexpensive 10GBASE-T switches are pretty poor as well - do you really want to trust your storage network to a cheap Broadcom-based Netgear with next to no port buffering?
The P4000/StoreVirtual is really great if you want high availability, but if you want low latency and high throughput you are probably better off looking at something like the 3PAR - and at that point, Fibre Channel is actually likely to come out more reliable and better performing.
When I think of the use cases for the P4000, I think of co-location, where rack space is at a premium and performance is not so important, but reliability and replication are paramount.
After attempting to run a hosting environment on this technology for a number of years, it was obvious that 3PAR was a better choice for performance in every way.
Keep in mind, it only takes about four P4000 nodes before you are in 3PAR territory, now that the 7000 series is in play.