Can the Fibre Channel switch be a bottleneck?


Denny Cherry

December 7, 2011


Question: Can the Fibre Channel switch be a bottleneck?

Answer: While it isn't the most common place for a performance bottleneck to appear, the Fibre Channel switches (or the Ethernet switches, if you are using iSCSI or Fibre Channel over Ethernet (FCoE)) can become a performance bottleneck. Regardless of the technology involved, the switches that carry the network your data travels over can become a bottleneck.

Now it probably isn't going to be the back plane of the switch that causes the performance problem, but the ports which connect the switch either to other switches or to devices.  The back planes of these switches are designed to handle a lot of bandwidth.  A Cisco MDS 9513 (manual here), which is a pretty typical enterprise-class Fibre Channel switch, has 192 Gigs of bandwidth available on the back plane per slot.  Each slot can hold 48 8-Gig ports, which could in theory push 384 Gigs of bandwidth through the blade that fits into that slot.  However, if we used 4 Gig ports (which are much less expensive and provide plenty of bandwidth for most servers) we now cannot overload the back plane with the data from the ports, since 48 ports at 4 Gigs each works out to exactly 192 Gigs.
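
To make that arithmetic concrete, here is a minimal sketch that checks whether a line card's ports can oversubscribe the per-slot back plane. The port counts and speeds are the example figures from the paragraph above, not measured values.

```python
# Minimal sketch: can a line card's ports oversubscribe the per-slot back plane?
# Numbers are the example figures from the article, not measurements.

def slot_oversubscription(ports_per_slot, port_speed_gbps, backplane_gbps_per_slot):
    """Ratio of total port bandwidth to back plane bandwidth for one slot."""
    total_port_gbps = ports_per_slot * port_speed_gbps
    return total_port_gbps / backplane_gbps_per_slot

# 48 x 8 Gig ports against a 192 Gig back plane: 2.0x oversubscribed
print(slot_oversubscription(48, 8, 192))   # 2.0

# 48 x 4 Gig ports against the same back plane: 1.0x, the ports can't overload it
print(slot_oversubscription(48, 4, 192))   # 1.0
```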

Think of a Fibre Channel SAN which has 4 Gig ports for the servers and the storage array.  Now let's assume that we have 10 really big, high-load SQL Servers connected to the storage array, with a switch in the middle.  In our example there are two heads for the storage array, and each head has two ports.  This gives us a total of ~16 Gigs of bandwidth available to the array (4 ports at 4 Gigs each).

Now our 10 servers are all hitting the disks really hard, each pushing 2 Gigs of bandwidth to the storage array.  When we look at each SQL Server in isolation that's not a problem.  We've got two Host Bus Adapters (HBAs) on each server at 4 Gigs of bandwidth each, so each server can crank out ~8 Gigs of bandwidth toward the storage.  But when we look at the storage array side of things, our 10 servers are trying to push a total of 20 Gigs of bandwidth to the storage array, which is more than the 16 Gigs the storage array can handle.  At this point the ports going to the storage array are maxed out.  The only way to increase the speed available would be to either add more ports or upgrade those ports to 8 Gig ports.
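
Running the numbers from this worked example as a quick sketch (all figures are the hypothetical ones used above):

```python
# Quick sketch of the worked example above; all figures are the hypothetical
# ones from the article, not measurements.

PORT_SPEED_GBPS = 4          # every port in the fabric is a 4 Gig port

# Storage array side: 2 heads x 2 ports x 4 Gigs
array_heads = 2
ports_per_head = 2
array_capacity = array_heads * ports_per_head * PORT_SPEED_GBPS   # 16 Gigs

# Host side: 10 SQL Servers, each with two 4 Gig HBAs, each pushing ~2 Gigs
servers = 10
hbas_per_server = 2
per_server_capacity = hbas_per_server * PORT_SPEED_GBPS           # 8 Gigs per server
per_server_demand = 2                                             # Gigs actually pushed
total_demand = servers * per_server_demand                        # 20 Gigs

print(f"Each server can push up to {per_server_capacity} Gigs - fine in isolation")
print(f"Array ports can absorb {array_capacity} Gigs")
print(f"Servers are trying to push {total_demand} Gigs")
print("Array ports are the bottleneck" if total_demand > array_capacity else "Headroom remains")
```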

This leads me back to what is probably one of the most important things when it comes to storage: monitoring!  Every component of the storage platform needs to be monitored, not just at the host (server) level, but also at the storage array and the switch level.  Without monitoring at all of these levels you'll never be able to be sure that you have found the performance problem.
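
As a minimal sketch of that idea, the check below flags any storage-path port whose measured throughput approaches its rated speed. The sample readings, port names, and 80% threshold are all hypothetical; real numbers would come from your host, array, and switch monitoring tools.

```python
# Minimal sketch: flag any storage-path port running close to line rate.
# Sample readings and the 80% threshold are hypothetical, not from a real tool.

RATED_GBPS = 4.0
ALERT_THRESHOLD = 0.80  # alert when a port runs above 80% of line rate

# (component, port name, observed throughput in Gbps) - made-up sample readings
observed = [
    ("array",  "controller-a/port-1", 3.8),
    ("array",  "controller-a/port-2", 3.9),
    ("switch", "slot-3/port-17",      1.2),
    ("host",   "sqlserver01/hba-0",   2.1),
]

for component, port, gbps in observed:
    utilization = gbps / RATED_GBPS
    if utilization >= ALERT_THRESHOLD:
        print(f"{component} {port}: {gbps:.1f} Gbps ({utilization:.0%}) - possible bottleneck")
```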
