Should I be using Fibre Channel, iSCSI, or FCoE?

This is really an "it depends" question. A variety of factors can influence which technology you select.

Denny Cherry

December 14, 2011


Question: We are bringing our first SAN into work and we have been presented with three options for connectivity. Which one should we be using: Fibre Channel (FC), iSCSI, or Fibre Channel over Ethernet (FCoE)?

Answer: This is really an "it depends" question. Some of the things the decision needs to be based on include:

  • The amount of data you'll be moving between the servers and the storage array

  • Your budget

  • The number of free 1 Gig and 10 Gig ports available in your network infrastructure

  • Your experience with zoning

  • Which technologies your tape backup solution supports

  • Whether virtual machines will need direct access to the storage array

  • How many servers will be accessing the storage array

Some of the big benefits of these technologies also tie directly to their downsides. Let's review these three technologies one by one.

Fibre Channel

Fibre Channel has been around for decades and is the most well-known, most mature storage solution available. Fibre Channel is very quick, moving data around the Fibre Channel network with very low latency. Port speeds are increasing from 8 Gigs per port to 16 Gigs per port, with Emulex announcing its 16 Gig HBAs (Host Bus Adapters) in November and Brocade bringing out its 16 Gig Fibre Channel switches.

Related: My SAN admin wants to put my transaction logs on FAST storage. Should we?

Ignoring the 16 Gig ports for a minute, as they are so new that you won't have them in your shop for months or years, let's look at the 8 Gig and slower systems. When you buy a Fibre Channel switch these days, it'll support basically anything running at 8 Gigs per second or slower. So if you've got some older machines that only support 4 Gig or even 2 Gig HBAs, those will work just fine: you can simply put 2 Gig or 4 Gig SFPs (the little connectors that let the fiber cables connect to the switch) into the switch.

Many of the newer tape backup solutions out there only support Fibre Channel connections at this point for backing up directly from the storage array.

Related: Can the fiber channel switch be a bottleneck?

Now there are a few downsides to Fibre Channel. The first revolves around the need to properly set up zoning within the Fibre Channel switches. Granted, this isn't actually all that hard, but a simple mistake can stop a lot of servers from talking to the storage array. A couple of hours of training and you'll know all you need to know; I'd recommend getting a little training instead of having a consultant set up the zoning, so that you know how to make changes yourself in the future. (There's a short sketch of what zoning looks like just below.)

The second downside is the cost of the switches. Fibre Channel switches cost a lot more per port than Ethernet switches do, which means that the cost of getting into Fibre Channel is going to be quite a bit higher. For example, I just purchased a couple of Fibre Channel switches (Cisco 9124s), which are 24-port switches (only 8 ports per switch come active), and they cost ~$4,300 each plus support. That's about $530 per active port. When I go to activate the rest of the ports they will cost less than that, since I already have the hardware, but that's still a pretty expensive cost per port.
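
To give a feel for what zoning involves, here's a minimal sketch of single-initiator zoning on a Cisco MDS-family switch like the 9124s mentioned above. The VSAN number, zone and zoneset names, and WWPNs are all made-up examples, not anything from a real fabric:

    ! Zone one server HBA together with one storage array port.
    ! (VSAN number and pwwn values are made-up examples.)
    zone name SQL01_to_Array vsan 10
      member pwwn 10:00:00:00:c9:11:22:33
      member pwwn 50:06:01:60:aa:bb:cc:dd

    ! Add the zone to the fabric's zoneset and activate it.
    zoneset name Fabric_A vsan 10
      member SQL01_to_Array
    zoneset activate name Fabric_A vsan 10

Each zone pairs a single initiator with a single target, which is the common best practice. Fat-fingering one pwwn, or activating a zoneset with a zone missing, is exactly the kind of simple mistake that knocks servers off the array.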

iSCSI

iSCSI was introduced several years ago, not long after 1 Gig Ethernet ports started becoming pretty standard on servers. Back then, if you had asked me whether iSCSI was OK to use on a production network, my answer was pretty simple: "No." Today that answer has changed from "No" to "probably." If you've got 10 Gig NICs, 10 Gig switch ports, and 10 Gig ports on the storage array, then iSCSI should work just fine, provided that you isolate the iSCSI traffic from the rest of the network on its own VLAN or on separate network switches. For iSCSI designs where the storage I/O requirements aren't all that high, sharing the network switches with the rest of the network is fine, which can be a huge cost savings. However, when you'll be moving a lot of data over the iSCSI network, I always recommend moving the servers and the storage onto their own network switches to reduce latency and contention on the network.
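
On the server side, part of iSCSI's appeal is that the setup is all software. As a minimal sketch, assuming a Linux host with the standard open-iscsi initiator installed and a made-up array portal address of 10.10.10.50 sitting on the isolated storage VLAN:

    # Ask the array which targets it presents on its iSCSI portal
    iscsiadm -m discovery -t sendtargets -p 10.10.10.50

    # Log in to one of the discovered targets (this IQN is hypothetical)
    iscsiadm -m node -T iqn.2011-12.com.example:array01 -p 10.10.10.50 --login

Windows hosts do the same thing through the built-in iSCSI Initiator applet. Either way there are no HBAs, SFPs, or zoning involved, which is a big part of the cost savings.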

There are a few downsides to iSCSI, however. The first is that it runs over TCP/IP, which means it's subject to the same timeouts and retries as everything else that runs over TCP/IP. While those timeouts and latencies work fine for file servers and web browsing, they aren't always the greatest for moving storage traffic around. The second is the ability to overload the Ethernet network, causing end-user performance problems. The third revolves around security: if you have data which is sensitive and you are storing that data in plain text within the database, someone could capture the network packets of the iSCSI communications between the server and the storage array and use them to view your customer data. iSCSI can easily run over 1 Gig or 10 Gig ports, but it'll probably be a while before faster ports are available, which means that you'll need to aggregate 10 Gig ports together to get more bandwidth. There is also the annoyance that a decent percentage of each TCP packet is header information (with a standard 1,500-byte MTU, the Ethernet, IP, TCP, and iSCSI headers eat up roughly 8 percent of the wire speed; jumbo frames shrink that overhead considerably).
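
One partial mitigation worth knowing about: most arrays and initiators support CHAP authentication, which at least keeps unauthorized initiators from logging in to your LUNs. Here's a minimal sketch using open-iscsi again, with a hypothetical target IQN, username, and secret. Note that CHAP only authenticates the login; it doesn't encrypt the traffic, so isolating the storage network (or layering on IPsec) is still what protects you from packet capture.

    # Require CHAP authentication when logging in to this target
    iscsiadm -m node -T iqn.2011-12.com.example:array01 \
        -o update -n node.session.auth.authmethod -v CHAP

    # Set the CHAP credentials (these must match what's configured on the array)
    iscsiadm -m node -T iqn.2011-12.com.example:array01 \
        -o update -n node.session.auth.username -v sqlserver01
    iscsiadm -m node -T iqn.2011-12.com.example:array01 \
        -o update -n node.session.auth.password -v long-random-secret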

FCoE

Fibre Channel over Ethernet is the newest of the three storage communication technologies. It uses traditional Fibre Channel communication, but carries it over Ethernet instead of over fiber cables. You can find some interesting reading about the standard and how it merges Fibre Channel and Ethernet here. Unfortunately, not a lot of vendors support FCoE at the moment. Cisco and Brocade both have switches which support FCoE (Cisco was one of the companies that started building FCoE), but only a few server vendors have FCoE cards (regular NICs aren't supported), and only a couple of storage vendors support FCoE on their storage platforms. Another problem is that the FCoE spec isn't finished yet. A new version of the spec was supposed to be released in Q3 or Q4 of 2011, but that didn't happen, so some features of the FCoE protocol aren't supported yet. And since the new spec won't be ready until at least Q2 of 2012 at this point, it'll be a while until those features are supported.

A problem that I have run across is that multihop isn't really supported in FCoE yet. The basic gist of the multihop problem is that your FCoE traffic can't pass from one switch to another switch, because at its core FCoE is actually Fibre Channel and has to follow all the rules of Fibre Channel. J Metz (blog | Twitter) has a great blog post talking about what multihop is and how all this works in a much more technical way than I possibly could.

Based on all this information, my preference today is to stick with good old Fibre Channel. In fact, the data center which I'm currently building for Phreesia was going to be using FCoE, but with the missing features and the multihop issue with FCoE we had to switch back to Fibre Channel. With the 16 Gig announcements that have been coming out, I'm glad that we did.

Hopefully this helps you make your decision when selecting the storage technology for your data center.

Denny
