Storage Wars: The Epic Battle Rages On

So tonight as I was getting into bed I did my normal scan of Twitter to see who I have pissed off or what might be going on that should rob me of sleep.  Well, tonight @david_Strebel asked the following question:

“Who thinks FCoE will win over iSCSI?”  I responded “Not I,” and David asked the next logical question: why not?  Here is what I had to say in the incredible detail that Twitter allows: “L2 boundaries, specialized hardware other than NICs, hate relationship from most network people.”

 

The problem with this answer is pretty clear, though.  It does not really answer the question; it just gives a few PowerPoint bullets to appease the crowd.  I don’t feel like that is enough.  So I am going to attempt to lay out my overall view on who will win, iSCSI or FCoE, and why.  For those of you who don’t want to read the whole article, which might get a tad windy: I don’t think either will win outright, but I do expect iSCSI to hold the lead in the meantime, and I don’t see FCoE emerging as the leader before something better comes along.  For those masochists who like this kind of crap, read on.

Let me start my analysis by saying that on December 1st of 2010 I pretty much wrapped up the 3.5-year part of my career that covered Data Centers and the DC 3.0 movement, mainly tied to Cisco UCS, FCoE, and integration with legacy systems through Nexus switching and a hybrid of FC switches.  Since then I have been serving as the V.P. of Network Services for Language Access Network, and while we have a rapidly growing business and a weekly expanding national network, we are not heavy on the Data Center side as of yet.  So while I try to keep up with all the details, I am sure I will get some of it wrong and will gladly take my punishment for any mistakes.  As for the analysis side of this post, some of you will flat out disagree.  Some of you will have a vested interest through your employer that you have to defend.  Simply put, I don’t really care.  I will listen and I will learn from what you have to say, but in the end this blog is just my opinion and my public scratch pad.

With all that out of the way let me lay the groundwork for this post.

1.  iSCSI will, for the most part, win the day until more robust network storage protocols (NSPs) arrive on the scene.

2.  FCoE will drive innovation, initially into DC Ethernet and eventually into the complete enterprise, that will benefit iSCSI and future NSPs.

3.  FCoE as a protocol will largely drive DC 3.0 and next-generation platforms as a robust interconnect, much like InfiniBand has in HPC environments.

4.  Storage protocols, regardless of their ability to interoperate with standard data streams, will largely remain on segmented networks in highly scalable environments and will only converge based on platform requirements or when budgets cannot justify separate data and storage networks.

 

So let’s deal with my first statement, that iSCSI “wins” in the short term.  This is pretty straightforward and I will break it down into simple bullet points:

  • iSCSI is cheap.  Any IP SAN supports it, you can build custom arrays on standard platforms, any network equipment that supports 1000BaseT and jumbo frames handles it, and every modern server and desktop OS can talk it.
  • iSCSI uses TCP.  The storage guys and the FCoE zealots point this out as a major downfall and extra protocol overhead.  My answer: in an 8 Gbps LACP group, or on a 10, 40 or 100 Gbps Ethernet link, who freaking cares?  Add the fact that we have TOE cards that perform iSCSI offload and this is a moot point; TCP simply gives us guaranteed delivery of a packet that the storage protocol no longer has to handle itself (a rough overhead calculation follows this list).
  • Network admins know and understand how iSCSI works, which is just like every other TCP/IP application.  We know how to route it, QoS it, secure it, troubleshoot it and bitch about it.  FCoE, on the other hand, is a Frankenstein protocol built from the part of our networks that most network admins sadly know the least about (Ethernet) and a totally foreign protocol, Fibre Channel, that they don’t even know where to start with, other than expensive new gear that may or may not at some point support end-to-end FCoE when and if that ships (depends on who you talk to) and any number of other features and functions that are not clear to them yet.
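
To put a rough number on that TCP/IP overhead complaint, here is a back-of-the-envelope sketch.  It assumes standard header sizes, a 9000-byte jumbo MTU, and counts the iSCSI header once per frame (which is conservative, since a PDU can span several frames); treat it as an illustration, not a benchmark.

```python
# Rough wire-efficiency estimate for iSCSI over TCP with jumbo frames.
MTU          = 9000               # jumbo-frame IP payload
ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble + inter-frame gap
IP_HDR       = 20
TCP_HDR      = 20
ISCSI_BHS    = 48                 # iSCSI basic header segment, counted per frame (conservative)

payload = MTU - IP_HDR - TCP_HDR - ISCSI_BHS   # bytes of actual storage data per frame
wire    = MTU + ETH_OVERHEAD                   # bytes actually occupying the link

efficiency = payload / wire
print(f"Usable payload per jumbo frame: {payload} bytes")
print(f"Wire efficiency: {efficiency:.1%}")
print(f"Effective throughput on a 10 Gbps link: ~{10 * efficiency:.2f} Gbps")
```

Call it roughly 98% wire efficiency; on a 10 Gbps link the TCP “tax” is noise compared to the cost of new converged gear.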

Next we get to what I think are the really cool things about FCoE.  FCoE and DC 3.0 structures have already driven the adoption of 10 Gbps Ethernet and fast-tracked the discussions around 40 and 100 Gbps Ethernet.  Tie those bandwidth and design changes to a better understanding of PAUSE frames and lossless Ethernet (something seldom practiced until now; see the buffer sketch after the examples below), plus a demand for network architects and admins to better understand storage and emerging virtualization platforms, and what you get is crazy amounts of bandwidth at all points in the network.  Don’t believe me?  Let me give two quick examples:

  • The first is a major government operation in the City of Columbus.  For three years we worked with them to debunk the old methodology of just slapping Catalyst 6500 cores in and giving each distribution and access switch a few Gig uplinks split between the cores.  Instead we looked at their upcoming demand for HD live video streams from every primary workspace in their purview, as well as mass storage and huge increases in application traffic.  Once we had that as the basis for our design, the answer was simple for the time: Nexus 7000s to replace their core and serve as a collapsed distribution layer for their DC, then Nexus 5000s as the L2 distribution feeding L2 access switches over multiple 10 Gig uplinks.
  • Example two is a small college that currently runs a pair of Catalyst 3550s as their core and HP 2650s as all of their distribution and access switches.  Their design was old and flat, their bandwidth was overcommitted almost to the point of being unusable, and they were adding VoIP, IP security, 802.11n access points, HD video over IP and god knows what else.  So what did we put in?  This time a single Nexus 7000 with dual sups and N+1 power to act as the core and data center access.  Then we deployed stacks of 10 Gig-enabled 3750s to support the demands at the edge.
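
On the lossless Ethernet point above: the reason PAUSE and priority flow control suddenly matter is that a switch has to buffer whatever is already in flight when it asks the other end to stop.  A minimal sketch of that headroom math, using rough assumed numbers rather than any vendor’s published figures:

```python
# Toy model of the per-port buffer headroom needed to make PAUSE / priority
# flow control truly lossless.  Real switches reserve more; illustration only.

def pause_headroom_bytes(link_gbps: float, cable_m: float, mtu: int = 9216) -> float:
    link_bps = link_gbps * 1e9
    prop_delay_s = cable_m * 5e-9            # ~5 ns per metre of fibre/copper
    # Data still arriving after we decide to pause:
    #  - one round trip of propagation delay on the wire
    #  - one max-size frame already leaving the far end
    #  - one max-size frame we must finish receiving ourselves
    in_flight = link_bps / 8 * (2 * prop_delay_s)
    return in_flight + 2 * mtu

for speed in (10, 40, 100):
    print(f"{speed} Gbps over 100 m: ~{pause_headroom_bytes(speed, 100) / 1024:.1f} KiB per port")
```

Under this toy model a 100 m, 100 Gbps run already wants roughly 30 KiB of headroom per port, per no-drop class, which is part of why lossless Ethernet stayed a data-center-only discipline for so long.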

What is my point, you may ask?  My point is that these were all technologies initially slated for the Data Center three years ago, and from day one I was using them with my customers to deliver solutions at equal or lower cost that were more scalable over time.  So how on earth can the things we are getting from the inroads made by FCoE be bad?  My take is they can’t be.  And as we march towards NBPoE (Next Big Protocol over Ethernet) or iNBP (IP Next Big Protocol), they will benefit even more from the adoption and the skill sets that this point in data center and storage transport history has produced.

So this next one will get all sorts of jeers and calls for my head, I’m sure.  And that is that FCoE makes a better interconnect platform for DC 3.0 and beyond than it does as a pure storage play.  What I mean by that is: look at Cisco UCS, and in some ways even at HP’s Flex10 on the C7000 (even though I think Flex10 is evil and any C7000 bearing that mark should be doused in holy water then dunked in a tank in Salem), and how they use high-bandwidth interconnects for more than simply sending storage data.  In both cases they use them for chassis command and control (management), general data, storage and virtualization options.  Needless to say, the Cisco UCS model of central command and many chassis is the way of the future for data centers.  HP already knows this, as do Dell, IBM and every other server player.  Now everyone just needs to catch up.

Where FCoE comes into play with these platforms is in providing unified northbound (from server towards core) communications for everything over one or two cables.  In this case that requires a “special switch” to terminate those unified connections and also to function as the “brain” of the chassis cluster.  That calls for short-haul, high-bandwidth connections, and FCoE is perfect for it: it can natively carry standard TCP/IP traffic over the aggregated Ethernet links while also encapsulating native FC out of the chassis to a point where it can be dropped back into a legacy native FC switch for transit to a SAN.  Whether network admins like it or not, FC as a standalone protocol and platform will be around for quite a while.  So instead of fighting it and generating huge expense for multiple interfaces, non-unified management platforms and political arms races, just let it ride to an aggregation point and then go back to its native transport.  So what does this make FCoE perfect for?  Interconnect and legacy integration with next-gen platforms.
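
To make the encapsulation point concrete, here is a rough sketch of why carrying native FC inside Ethernet forces the “baby jumbo” MTU that converged switches enable.  The sizes are the commonly quoted ones for FC and FCoE framing; treat the exact totals as an illustration rather than a spec citation.

```python
# Why FCoE needs larger-than-1500-byte Ethernet frames: a full Fibre Channel
# frame must fit inside a single Ethernet frame, because FCoE does not fragment.
FC_HEADER    = 24      # native FC frame header
FC_PAYLOAD   = 2112    # maximum FC data field
FC_CRC       = 4
FCOE_HEADER  = 14      # version + reserved bits + start-of-frame delimiter
FCOE_TRAILER = 4       # end-of-frame delimiter + reserved

fc_frame   = FC_HEADER + FC_PAYLOAD + FC_CRC          # full FC frame to carry
needed_mtu = fc_frame + FCOE_HEADER + FCOE_TRAILER    # Ethernet payload required

print(f"Largest encapsulated FC frame: {fc_frame} bytes")
print(f"Ethernet MTU needed:           >= {needed_mtu} bytes")
print("Standard 1500-byte frames are too small, so FCoE VLANs run ~2.2-2.5 KB")
print("'baby jumbo' frames while normal TCP/IP shares the wire in its own class.")
```

That is also why the aggregation switch matters: it is the point where the FC frames come back out of their Ethernet wrapper and onto a legacy FC fabric, while the TCP/IP traffic carries on north as ordinary Ethernet.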

To wrap this up I want to talk really high-level architecture.  Part or most of what I am about to say flies in the face of my unified interconnect rantings in the previous paragraphs, but bear with me because I think it all makes sense in the end.  As legacy systems stand today, we have a few types of networks:

  • Data
  • In Band Management
  • Out of Band Management
  • Storage

Out of these, data and in-band management usually run on the same wire or wires and are virtually segmented with VLANs and security policies.  OOB management and storage, however, usually ride their own private networks that never, or only in minimal ways, interconnect with the core data network in a Data Center.  That won’t change overall.

In order to continue providing the bandwidth required for apps, storage and management, as well as guaranteed delivery of storage data and guaranteed access for OOB management, these networks must remain separate entities as cost allows.  In the scenario of emerging platforms like Cisco UCS that I talked about above, this means that data, in-band management, OOB management and storage for the server chassis themselves (pizza boxes or blade enclosures) will be handled over a unified network connection (Ethernet), in a highly redundant fashion, over a very short distance to an aggregation point.  From there, never shall the data streams meet again.  From a storage perspective this means that whether you are using iSCSI or FCoE out of the server, once it hits the aggregation point it should be traversing a separate link from standard data and management traffic on its way to the SAN (a hypothetical carve-up of those traffic classes follows below).  People can argue till they are blue in the face, but all I have to say is that if Amazon had employed this in their cloud storage environment, the Elastic Compute Cloud failure would have been much less likely to occur.  More importantly for this discussion, whether you run iSCSI or FCoE you are still moving onto dedicated storage networks as quickly as possible to ensure data integrity at very low latency.  The beauty is that in either case you are using Ethernet as your transport, and ultimately Ethernet as a medium will be cheaper than straight-up FC because it is a multi-use rather than single-use technology.
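
Here is what that split might look like in practice.  Everything below — the VLAN IDs, CoS values, and which class is marked no-drop — is a made-up example of the pattern, not any particular vendor’s defaults.

```python
# Hypothetical traffic carve-up on a converged server-facing link.
# All VLAN IDs and CoS values are illustrative assumptions.
TRAFFIC_CLASSES = {
    "data":          {"vlan": 100, "cos": 0, "lossless": False},
    "inband_mgmt":   {"vlan": 110, "cos": 2, "lossless": False},
    "oob_mgmt":      {"vlan": 900, "cos": 5, "lossless": False},  # often a physically separate network
    "storage_fcoe":  {"vlan": 200, "cos": 3, "lossless": True},   # PFC no-drop class
    "storage_iscsi": {"vlan": 210, "cos": 4, "lossless": False},  # leans on TCP instead of PFC
}

def uplink_for(traffic: str) -> str:
    """Past the aggregation point, the storage classes ride their own links."""
    return "dedicated storage uplinks" if traffic.startswith("storage") else "shared data uplinks"

for name, cls in TRAFFIC_CLASSES.items():
    print(f"{name:13s} vlan {cls['vlan']:3d} cos {cls['cos']} -> {uplink_for(name)}")
```

The point of the sketch is simply that convergence lives on the short hop from server to aggregation; north of that, storage and data part ways again.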

In the end I did not discuss the L2-only traversal of FCoE, nor competing or emerging tech.  But I hope I at least made the point that, one way or another, where we are today is a transition, not the future.  And where we stand today, the opportunity that FCoE has brought to the table does nothing but help iSCSI while enabling future protocols and technologies.  Vendor adoption of either or both protocols also fuels the future by giving us options and flexibility in our designs: use FC and FCoE where we must, and deploy iSCSI where it is potentially already enabled without any increased infrastructure outlay beyond the storage arrays.

For me, as an executive at a rapidly growing company, I am looking at both UCS and standard servers for use with VMware.  In either case I have no intention of building out FCoE environments when iSCSI provides exactly what I need now and will grow with me for the foreseeable future.  Any FC that finds its way into the mix will be a simple case of “it was already there, so use it.”

2 comments

  1. John Martin says:

    From a storage weenie’s perspective, one of the things that FCoE has that’s attractive is that it allows you to keep at least a small portion of the network away from the so called “Network team”, none of whom actually understand the absolute requirement for a massively overengineered non-blocking interconnect with no oversubscription and completely lossless layer-2 without which the world of the storage administrator as we know it will ultimately end.

    You will pry the last Fibre Channel switch away from the storage team’s cold, dead hands.

    • Josh says:

      As far as I am concerned, that is the problem. IT “professionals” impede the uptake of new, better technology and keep legacy tech festering in organizations long after it should have died. This type of IT fails everyone, especially the people paying the bills. Zealots have a huge impact but very little real value.

