Jumbo Frames - Lessons Learned

Some info from Malcolm:

 

**************

 

I wanted to share a quick Jumbo Frame experience.

 

First, thank you Dan for your help on this.  Dan had a similar experience, which helped point me in the right direction when this issue came in last week.  And of course this all started with the network team and their Cisco team pointing to Virtual Connect as the most likely culprit for the slow client-server throughput through their switches and firewalls.  Now Cisco is scrambling to make their ASA run faster, and the customer is looking at other firewall vendors that can push more I/O:  http://www.fortinet.com/products/fortigate/5000series.html

 

My customer just deployed a pair of HP Blade Enclosures with BL660s spread across them, using Flex-10 and 8-24 VC FC interconnects.  Each enclosure has 4 x 10G uplinks on a single SUS (so 20G Active and 20G Passive per enclosure).  Using LACP and vPC on their Nexus 7Ks, the enclosures wire to 10G line cards, which route to 10G ASAs and then to 10G Exadata hosts.  These are hosted in a facility, so the team there always uses single SUSs with Active-Passive VC uplinks; and since we have Flex-10, we have plenty of spare 10G uplink ports we can wire up if and when we need them, before going Active-Active (dual SUSs).  Their goal is to push the 40G of Exadata Active ports to the max from the BL660s running the Oracle Client software.
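
As a quick sanity check on that design, here is a minimal back-of-the-envelope sketch (plain Python, using only the port counts and speeds quoted above, not anything from the actual configs) of the Active uplink budget against the 40G Exadata target:

# Rough Active-bandwidth budget for the deployment described above.
# Assumes 2 enclosures, each with 4 x 10G uplinks on a single SUS
# (half Active, half Passive), feeding 40G of Exadata Active ports.

ENCLOSURES = 2
UPLINKS_PER_ENCLOSURE = 4
UPLINK_GBPS = 10
EXADATA_ACTIVE_GBPS = 40

active_per_enclosure = UPLINKS_PER_ENCLOSURE * UPLINK_GBPS / 2  # single SUS: half the uplinks are Passive
total_active = ENCLOSURES * active_per_enclosure

print(f"Active uplink capacity per enclosure: {active_per_enclosure:.0f} Gbps")
print(f"Total Active uplink capacity:         {total_active:.0f} Gbps")
print(f"Exadata Active target:                {EXADATA_ACTIVE_GBPS} Gbps")
# -> 20 Gbps per enclosure and 40 Gbps total, which only just matches the
#    Exadata Active target; any headroom means going Active-Active (dual SUSs).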

 

Lessons Learned:

 

  1.  Blade to blade, through a 10G Nexus line card, we could push close to 9Gbps (8.7) using the default frame (MTU) size of 1500.  So as long as you have a recent Linux or Windows build, the default TCP settings should move packets very fast between hosts.  But Jumbo Frames took the test results over 9Gbps.
    NIC = 9000 (default 1500) to VC = 9216 (the default) to Nexus switch = 9216 (default 1500) to VC = 9216 to NIC = 9000, all on the same VLAN (skipping the firewall for now), went to 9.7Gbps using Iperf (default settings, 5-minute runs).  Different versions of Iperf also performed differently (v2 was faster than v3); Netperf is another option.  And short tests perform worse than the 5-minute ones, so stick with the longer runs.  (See the overhead sketch after this list.)
  2. Dan’s customer was testing VMotion traffic (I believe) between blades and was not getting the performance they expected.  Enabling Jumbo Frames in the ESX NIC settings really helped prove out the blade-to-blade performance.
  3. When you put a firewall in the middle, you definitely want Jumbo Frames enabled end to end, which means enabling Jumbo Frames on the firewall as well.  They have an ASA capable of 40Gbps of throughput, and adding Jumbo Frames gave about a 30+% boost through those links.  (See the packet-rate sketch after this list.)
  4. When using M1 (and especially F1) 10G Nexus line cards, watch out for their oversubscription.  It is 4-to-1 once you leave each 4-port group, so you may need to spread your uplinks across dedicated 4-port groups to get the full 10G to other hosts or switches (depending on where they are plugged in).  The customer is now looking at spending hundreds of thousands of dollars upgrading a $1.2M pair of switches they bought just 3 years ago, when they bought the c7000s and Flex-10s (which are still solid).  (See the port-group sketch after this list.)
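
To put numbers on lesson 1, here is a minimal back-of-the-envelope sketch (plain Python, my own arithmetic rather than anything from the actual test runs) of the best-case TCP goodput on a 10GbE link at MTU 1500 versus 9000.  It only counts Ethernet, IP and TCP header overhead, so measured results like the 8.7 and 9.7Gbps above will also reflect CPU, NIC and Iperf behavior:

# Best-case TCP goodput on 10GbE for standard vs. jumbo frames.
# Per-frame wire overhead: preamble+SFD (8) + Ethernet header (14)
# + FCS (4) + inter-frame gap (12) = 38 bytes, plus 20 bytes IP and
# 20 bytes TCP headers inside the frame (no VLAN tag or TCP options assumed).

LINK_GBPS = 10
WIRE_OVERHEAD = 8 + 14 + 4 + 12   # bytes outside the MTU
IP_TCP_HEADERS = 20 + 20          # bytes inside the MTU

def goodput_gbps(mtu):
    payload = mtu - IP_TCP_HEADERS   # TCP payload carried per frame
    on_wire = mtu + WIRE_OVERHEAD    # bytes the link actually transmits per frame
    return LINK_GBPS * payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{goodput_gbps(mtu):.2f} Gbps max TCP goodput")
# MTU 1500: ~9.49 Gbps; MTU 9000: ~9.91 Gbps.  Jumbo frames recover most of
# the remaining header overhead and, in practice, also cut per-packet CPU and
# interrupt cost, which is where the measured 8.7 -> 9.7 Gbps gain comes from.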
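
For lesson 3, one plausible way to look at the firewall gain is packet rate: an inline device does a fixed amount of inspection work per packet, so fewer, larger frames mean less work at the same bit rate.  The sketch below (plain Python, illustrative only; the 30+% figure above is the measured result, not something derived from this math) shows the packet rate the firewall must sustain at a given throughput:

# Packet rate an inline firewall has to sustain at a given throughput,
# assuming full-size frames at each MTU.

def packets_per_second(throughput_gbps, mtu):
    frame_bytes = mtu + 38                      # MTU + Ethernet wire overhead
    return throughput_gbps * 1e9 / 8 / frame_bytes

for mtu in (1500, 9000):
    pps = packets_per_second(10, mtu)
    print(f"10 Gbps at MTU {mtu}: ~{pps / 1e6:.2f} Mpps")
# ~0.81 Mpps at MTU 1500 vs. ~0.14 Mpps at MTU 9000 -- roughly 6x fewer
# packets for the ASA to inspect per 10G of traffic, which is why Jumbo
# Frames need to be enabled end to end, firewall included.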
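
For lesson 4, here is a minimal sketch of the port-group math.  It assumes, per the 4-to-1 figure above, that each 4-port group on the line card shares roughly 10G toward the fabric; check the data sheet for your specific M1/F1 module before relying on these exact numbers:

# Fabric bandwidth available per uplink when uplinks share a 4-port group,
# assuming each 4-port group has ~10G toward the fabric (4:1 oversubscribed).

GROUP_SIZE = 4
PORT_GBPS = 10
FABRIC_GBPS_PER_GROUP = GROUP_SIZE * PORT_GBPS / 4   # 4:1 oversubscription

for uplinks_in_group in range(1, GROUP_SIZE + 1):
    share = FABRIC_GBPS_PER_GROUP / uplinks_in_group
    print(f"{uplinks_in_group} uplink(s) in one group: ~{share:.1f} Gbps each off the card")
# One uplink per group keeps the full 10G; packing all four uplinks into one
# group leaves ~2.5G each once traffic leaves the port group -- hence the
# advice to spread uplinks across dedicated 4-port groups.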
