So that no one reading this is in the dark about some of the interesting things I have been doing as of late, here is a quick recap.
Language Access Network, my employer, is in the middle of installing a first-of-its-kind Video Call Center. I will have more to write on that soon. As part of this process we had a WHOLE LOT of infrastructure put into place. For starters we needed a “SAN”, we needed servers, we needed DC switching, and we needed lots and lots of licensing, and that is all before the developers and engineers jump in to make the whole thing work. The cool bits of this first part are what we did for “SAN”, server, and network. As you all know I am a past Cisco UCS zealot and I have a NetApp in my basement, so you would think it would be simple math as to what I would have installed. You would be right: the math was simple. UCS and NetApp came in about $100,000 more than I could scrape out of my budget and still have anything left for other major components. Before people get bent out of shape about me saying Cisco UCS and NetApp are too expensive: I did not say that. Honestly, I think within existing DC platforms they are both very well priced, as long as you don’t bring next-gen platforms into the mix. In my case the next-gen platform is Nutanix. If you don’t know anything about these guys, click the link and check them out.
In a nutshell, Nutanix packs 4 nodes of compute and 20TB of storage into a 2RU chassis, with Fusion-io, SSD, and SAS drives and no common backplane between the 4 nodes. Along with my four-node pod we added the Arista 7124SX as our DC switching/fabric. There are lots of details around this combination. For example, Nutanix currently does not support using a node as a bare-metal server the way you can with UCS or other blade enclosures, and the storage has limited access to the outside world (it is set up to be presented to ESXi hosts as iSCSI targets and to VMs as vSCSI targets), but so far I love the platform. It gave me what I needed at the price point I needed, and it offers huge scale-out options considering it is based on the GFS file system that Google uses across their DCs.
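Since the storage shows up to ESXi as plain iSCSI targets, wiring a host to it is just standard software-iSCSI setup. A rough sketch, assuming a recent ESXi with the esxcli iscsi namespace; the adapter name vmhba33 and the portal address are hypothetical placeholders, not values from my environment:

```shell
# Enable the ESXi software iSCSI initiator
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the storage controller's portal
# (vmhba33 and 192.168.10.50 are made-up placeholders)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260

# Rescan the adapter so the newly presented LUNs show up
esxcli storage core adapter rescan --adapter=vmhba33
```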
So tonight, as I was getting into bed, I did my normal scan of Twitter to see who I have pissed off or what might be going on that should rob me of sleep. Well, tonight @david_Strebel asked the following question:
“Who thinks FCoE will win over iSCSI?” I responded “Not I,” and then David asked the next logical question, which was why not. Here is what I had to say in the incredible detail that Twitter allows: “L2 boundaries, specialized hardware other than NICs, hate relationship from most network people.”
The problem with this answer is pretty clear, though: it does not really answer the question, it just gives a few PowerPoint bullets to appease the crowd. I don’t feel like that is enough. So I am going to attempt to lay out my overall view on who will win, iSCSI or FCoE, and why. For those of you who don’t want to read the whole article, which might get a tad windy: I don’t think either will win, but I also don’t think FCoE will emerge as the leader before something better comes along. For those masochists who like this kind of crap, read on.
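The “L2 boundaries” bullet deserves one concrete illustration. FCoE has to stay inside a lossless Layer 2 domain, with DCB-capable gear end to end, while iSCSI is just TCP/IP and will happily cross routers. A minimal sketch using the standard Linux open-iscsi tools (the portal IP and IQN below are hypothetical; any routed path to the target works):

```shell
# Discover targets on a portal that can sit in a *different* subnet;
# the initiator only needs an IP route to it, not a shared L2 domain
iscsiadm -m discovery -t sendtargets -p 10.20.30.40:3260

# Log in to one of the discovered targets (the IQN is a placeholder)
iscsiadm -m node -T iqn.2011-03.com.example:storage.lun0 \
    -p 10.20.30.40:3260 --login
```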
So last night, while working on a scalable compute and storage design for a client, this post popped up in my Twitter stream from @ErinatHP:
New HP blog post “In the light of day the Cisco UCS hype doesnt match the promise” ; UCS not all its marketed to be http://bit.ly/dKj88W
So, in my normal “do not let a stupid dig by a lame-duck player go unmatched” fashion, I responded: “Oh I can’t wait to read this FUD” (you can check me out on Twitter @joshobrien77).
All the Twitter marketing and pissing matches aside, I meant what I said, and I did look forward to reading the HP spin on where their market is vanishing to. Here are my responses. While they might not be the most technical, they are not uninformed about either the Cisco UCS platform or the HP C7000 with Flex-10 platform. And remember, at the end of the day I represent me: not Cisco, not my employer, just little old me.
Also, just in case this gets nasty, I want to make sure that I am crediting things correctly:
All of the “HP Writes” sections are direct quotes from Duncan Campbell of HP on his blog, which you can find here: http://h30507.www3.hp.com/t5/Converged-Infrastructure/In-the-light-of-day-the-Cisco-UCS-hype-doesn-t-match-the-promise/ba-p/83537
PLEASE READ ALL of Duncan’s post BEFORE you read mine. I DO NOT PRETEND to REPRESENT HIS SIDE WELL AT ALL!
SAN Components: Multi-Protocol Storage Routers (ex. MDS 9222i)
- Supports FC, FCIP, and iSCSI (initiators and targets)
- 1/2/4/8 and 10 Gbps FC plus 1 Gbps Ethernet (10 Gbps is for switch-to-switch trunking only; no 10 Gbps FC HBAs exist)
- Server-based config and management, just like Nexus with DCNM
- Combined Ethernet and FC switch
- 8 ports of 1/2/4 Gbps FC or 6 ports of 2/4/8 Gbps FC
- Requires both Fabric Manager and the CLI, or DCNM