So Fibre Channel Lives… but is FCoE dead?
As Amirul discussed in a recent post, for many, the SAN protocol of choice is Fibre Channel. Reliability, performance and scalability are the perceived benefits that attract organisations, large and small, to the technology.
iSCSI has undeniably made SAN technology more accessible to the mid-market. It started gaining popularity about ten years ago – especially with the emergence of ESX and host-based software initiators that actually performed well. After all, it was often a low-cost solution that could even reuse existing network kit – and it was ‘good enough’ for most SMBs.
When 10Gb Ethernet became more prevalent, it gave iSCSI the potential to reach into larger environments and even high-performance computing. However, iSCSI has taken a long time to gain acceptance (and trust) among larger enterprise customers, and even in the mid-market. There are still some big customers who I think will never fully embrace iSCSI.
So, where does FCoE fit in?
FCoE was introduced alongside converged networking solutions – including converged switches, host adapters and storage. Put simply, FCoE packages up Fibre Channel frames and transports them over an Ethernet network. This, in conjunction with a CNA (Converged Network Adapter), allows hosts to send several different traffic types (LAN, SAN, etc.) over a single Ethernet cable and switching infrastructure, while maintaining all the higher-level protocols that customers know and love.
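To make the encapsulation concrete, here is a minimal Python sketch of the idea: a Fibre Channel frame carried as the payload of an ordinary Ethernet frame tagged with the FCoE EtherType (0x8906). It is deliberately simplified – the real FC-BB-5 encapsulation adds version/reserved fields and SOF/EOF markers, and relies on lossless (DCB) Ethernet underneath – and the MAC addresses and payload below are made up purely for illustration.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE traffic


def mac(addr: str) -> bytes:
    """Convert 'aa:bb:cc:dd:ee:ff' into 6 raw bytes."""
    return bytes(int(part, 16) for part in addr.split(":"))


def encapsulate_fc_frame(fc_frame: bytes, src_mac: str, dst_mac: str) -> bytes:
    """Wrap an already-built Fibre Channel frame in an Ethernet frame.

    Simplified: the real FCoE header/trailer (version, reserved bits,
    SOF/EOF) is omitted - this only shows the layering idea.
    """
    eth_header = mac(dst_mac) + mac(src_mac) + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame


# Hypothetical example: a dummy 24-byte FC header plus a small payload,
# exchanged between two made-up CNA MAC addresses.
fc_frame = bytes(24) + b"SCSI command and data would go here"
wire_frame = encapsulate_fc_frame(fc_frame, "0e:fc:00:01:00:01", "0e:fc:00:02:00:01")
print(f"{len(wire_frame)} bytes on the wire, EtherType 0x{FCOE_ETHERTYPE:04x}")
```

In practice the CNA does this in hardware, so the operating system simply sees what looks like a normal FC HBA alongside a normal NIC.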
Converged Networking: Before and After
Don’t get me wrong, converged networking is definitely a good thing and is here to stay. Some of NG-IT’s own SmartStack projects have been living examples of this.
However, I firmly believe FCoE was conceived purely as a bridge technology that would placate traditional Fibre Channel shops, while still allowing them to take advantage of a consolidated network topology.
FCoE is to storage networking what VTL was to tape (and who still uses VTL?)
Over time, FCoE has arguably proven less beneficial from the perspective of SAN storage presentation. Even with multi-protocol unified storage systems (which seem to be losing popularity anyway), there is little point in using FCoE or converged adapters in the array itself: vendor best practice is typically to separate FCoE and IP traffic onto different physical connections. In any case, a network stack such as Cisco Nexus can act as an aggregation layer, taking native FC feeds from a storage array and transporting them via FCoE to a downstream host if required.
So why use FCoE on the array?
Whilst FCoE is definitely a better solution than FCIP ever was (that’s SCSI over Fibre Channel Protocol over TCP/IP over Ethernet), most real-world figures show little discernible difference in throughput and latency between FCoE and native FC. All we are doing is adding a layer of complexity and extra processing (and more expensive HBAs) within the storage system – however ‘simple’ the marketing guys make it look.
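That ‘extra layer’ point is easy to put into rough numbers. The back-of-the-envelope sketch below compares the framing wrapped around a full-sized 2112-byte FC payload for native FC and for FCoE; the header sizes are the commonly quoted ones and it ignores inter-frame gaps, FIP and DCB control traffic, so treat it as an illustration rather than a benchmark.

```python
# Back-of-the-envelope framing overhead for one full-sized FC payload.
FC_PAYLOAD = 2112                      # maximum FC data field, in bytes
FC_FRAME = 24 + FC_PAYLOAD + 4         # FC header + payload + CRC
FC_ON_WIRE = 4 + FC_FRAME + 4          # + SOF/EOF ordered sets (native FC)

ETH_OVERHEAD = 14 + 4                  # Ethernet header + FCS
FCOE_OVERHEAD = 14 + 4                 # FCoE header + trailer (per FC-BB-5)
FCOE_ON_WIRE = ETH_OVERHEAD + FCOE_OVERHEAD + FC_FRAME

for name, size in [("native FC", FC_ON_WIRE), ("FCoE", FCOE_ON_WIRE)]:
    overhead = 100 * (size - FC_PAYLOAD) / size
    print(f"{name:10s} {size} bytes on the wire, {overhead:.1f}% framing overhead")
```

The difference works out at roughly one extra percent of framing, which is why the wire-level numbers look so similar; it also shows why FCoE needs ‘baby jumbo’ frames and DCB-capable switches, since the encapsulated frame no longer fits in a standard 1500-byte Ethernet payload.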
The lack of uptake on the storage array side is perhaps demonstrated by some of the newer storage technologies we see today. For example, Nimble Storage has recently introduced Fibre Channel support to broaden its appeal to enterprise customers – but there’s no hint that on-board FCoE will ever be implemented. Other vendors, including those offering all-flash arrays, have gone the same way.
So, is it a case of ‘R.I.P. FCoE’?
Not quite. FCoE makes a lot of sense from a host connectivity perspective – if the customer or architecture insists on Fibre Channel. It can reduce ToR-to-host cabling by a considerable amount. Converged solutions will continue to use it for host-to-switch cabling consolidation, and for transporting FC storage frames over their compute fabric.
But for SAN storage presentation? iSCSI or good old native FC all the way!