Switching to Fibre Channel for SAN access. Part 1

As I have learned more advanced features of VMware's virtualization products, my rack has slowly been filling up with servers. With the increase in servers there has been a steady increase in traffic to my SAN server as well. As previous posts indicate, I was primarily using iSCSI for SAN access. While iSCSI worked pretty well, the traffic rode a shared medium (Ethernet), so other traffic in the same broadcast domain introduced some delay.

While I could have placed the iSCSI traffic on a dedicated VLAN and gained a performance bump at no cost, after reading about Fibre Channel I decided to go that route instead. Like iSCSI, Fibre Channel is a storage-oriented protocol. Unlike iSCSI, Fibre Channel runs on dedicated media, meaning the only traffic traversing the optical fiber is the Fibre Channel Protocol. One of the many benefits of Fibre Channel is that it can be set up at low cost: currently $52 for the host and first client, plus $18 for each additional client, for up to 4 clients total. Beyond four clients, a Fibre Channel switch or an additional 4-port card is required. What is impressive is that at those prices you are getting an optical medium running a storage-oriented protocol at 4.25Gb/s. The Fibre Channel PCIe adapters I bought have drivers for Windows, Apple's OS X, Red Hat, SUSE, Solaris, VMware ESXi, and Xen. The cards also support boot from Fibre Channel, which is really cool, as I am now running the ESXi servers completely diskless.

[Screenshot: vSphere Client, Storage Adapters view]
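On the ESXi side the adapters simply show up under Storage Adapters, as in the screenshot above. On a Linux box (the SAN server, for instance, if it runs Linux), the QLogic driver registers each port with the kernel's FC transport class, which exposes one entry per port under `/sys/class/fc_host`. Here is a minimal sketch of enumerating those ports and their WWPNs; the `list_fc_ports` helper is my own illustration, and it assumes the standard sysfs layout where each `hostN` directory contains a `port_name` file:

```python
# Enumerate Fibre Channel ports via the Linux FC transport class sysfs tree.
# Assumption: the HBA driver (e.g. qla2xxx for QLogic cards) is loaded and
# populates /sys/class/fc_host/hostN/port_name with the port's WWPN.
import os

def list_fc_ports(sysfs_root="/sys/class/fc_host"):
    """Return a sorted list of (host, wwpn) tuples found under sysfs_root."""
    ports = []
    if not os.path.isdir(sysfs_root):
        return ports  # no FC HBAs present (or not a Linux host)
    for host in sorted(os.listdir(sysfs_root)):
        port_name_file = os.path.join(sysfs_root, host, "port_name")
        try:
            with open(port_name_file) as f:
                ports.append((host, f.read().strip()))
        except OSError:
            continue  # entry without a readable port_name; skip it
    return ports

if __name__ == "__main__":
    for host, wwpn in list_fc_ports():
        print(f"{host}: WWPN {wwpn}")
```

The WWPNs this prints are what you would use when mapping LUNs to specific initiators on the SAN target.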

You may wonder why I would want a 4.25Gb/s network for my SAN traffic when typical hard drives only push between 100 and 145MB/s, a range mostly within the operating speed of a gigabit Ethernet network. I have added an SSD cache drive with much greater read and write performance, and a 4.25Gb/s network should be able to handle the bursts that can come from the ARC + L2ARC. I looked at building a 10GbE network, but the cost was incredibly high and 10GbE switches are still out of reach for my home lab; that high cost is ultimately why I selected Fibre Channel over 10GbE Ethernet. Another point: with my setup, each client has a dedicated link to the SAN.
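To put rough numbers on that comparison: 4.25Gb/s is the raw line rate of a 4GFC link, and like gigabit Ethernet's physical layer it uses 8b/10b encoding, so only 8 of every 10 bits on the wire are data. A quick back-of-the-envelope calculation (my own arithmetic, not a benchmark) shows why the FC link has plenty of headroom over both spinning disks and gigabit Ethernet:

```python
# Back-of-the-envelope usable throughput from a raw line rate.
# Assumption: 8b/10b encoding (used by both 4GFC and gigabit Ethernet's
# physical layer), so usable bandwidth is 8/10 of the signaling rate.

def usable_mb_per_s(line_rate_gbaud, encoding_efficiency=8 / 10):
    """Convert a raw line rate (Gbaud) into usable MB/s (decimal megabytes)."""
    usable_gbit = line_rate_gbaud * encoding_efficiency
    return usable_gbit * 1000 / 8  # Gbit/s -> MB/s

fc_4g = usable_mb_per_s(4.25)  # 4GFC: 4.25 Gbaud on the wire
gige = usable_mb_per_s(1.25)   # gigabit Ethernet: 1.25 Gbaud on the wire

print(f"4GFC usable:        {fc_4g:.0f} MB/s")   # ~425 MB/s
print(f"Gigabit Ethernet:   {gige:.0f} MB/s")    # ~125 MB/s
print("Typical hard drive: 100-145 MB/s")
```

So the FC link offers roughly 425MB/s of usable bandwidth per client, about 3x what gigabit Ethernet can deliver and well above what a single spindle can sustain, which is exactly the headroom the SSD-backed cache can exploit.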

Fibre Channel is a pretty cool technology, though it was challenging to initially set up and understand with the limited resources available outside of an enterprise support contract. It is important to note that I have barely scratched the surface: there are many more features, such as bonding, arbitrated loops, and Fibre Channel over Ethernet, that I haven't touched in this article. I encourage you to look into it.


Price/Parts Breakdown:

Qlogic QLE2464 – 4-port PCIe adapter – $25

Qlogic QLE2460 – 1-port PCIe adapter – $9

LC to LC OM3 Fiber Cable – $9


One Response to Switching to Fibre Channel for SAN access. Part 1

  1. Matt S says:

    I am setting up the same thing right now, but I can't for the life of me get the QLE2464 to pass through to a VM. Any thoughts or assistance? Thanks for the info, man. I love this stuff.
