Wow, so here I am just a few days from flying to San Jose for Network Field Day #3. What an honor! A bit over a year ago I sadly had to decline Steve Foskett’s invitation to attend a Field Day. Ever since then I have been hoping I had not blown my chance to be part of what I think is where our industry is heading in terms of vendor interaction. For so long the process has been run by insiders and closed off to the majority of the community. Tech Field Day has opened the kimono, and I can’t wait to participate and share my perspective on what we get to see and talk about while in San Jose.
I have been doing some thinking and homework prior to flying west, and I figured I would share some of that here. First off, I have to say right up front that I am a client of three of the six vendors that are presenting. Not sure if that is a good thing or a bad thing for them, but that’s how it goes.
No one who is reading this should be in the dark about some of the interesting things I have been doing as of late. But if you are, here is a quick recap.
Language Access Network, my employer, is undergoing an installation of a first-of-its-kind Video Call Center. I will have more to write on that soon. As part of this process we had a WHOLE LOT of infrastructure put into place. For starters we needed a “SAN”, we needed servers, we needed DC switching, and we needed lots and lots of licensing, and that’s all before the developers and engineers jump in to make the whole thing work. The cool bits of this first part are what we did for “SAN”, server, and network. As you all know I am a past Cisco UCS zealot, and I have a NetApp in my basement, so you would think it would be simple math as to what I would have installed. You would be wrong. UCS and NetApp were about $100,000 more than I could scrape out of my budget and still have anything left for other major components. Before people get bent out of shape about me saying Cisco UCS and NetApp are too expensive, I did not say that. Honestly, I think within existing DC platforms they are both very well priced, if you don’t bring next-gen platforms into the mix. In my case the next-gen platform is Nutanix. If you don’t know anything about these guys, click the link and check them out.
In a nutshell, Nutanix is four blades of compute and 20TB of storage in a 2RU chassis, with FusionIO, SSD, and SAS drives and no common backplane between the four nodes. Along with my four-node pod we added an Arista 7124SX as our DC switching/fabric. There are lots of details around this combination. For instance, Nutanix does not currently support using a node as a bare metal server the way you can with UCS or other blade enclosures, and the storage has limited access to the outside world (it is set up to be presented to ESXi hosts as iSCSI targets and to VMs as vSCSI targets), but so far I love the platform. It gave me what I needed at the price point I needed, and it offers huge scale-out options considering it is based on the GFS file system that Google uses across their DCs.
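Since the storage is presented to the ESXi hosts as iSCSI targets, hooking a host up to it looks roughly like any standard software-iSCSI setup. Here is a rough sketch using esxcli; the adapter name `vmhba33` and the `10.0.0.50` portal address are made-up examples, not my actual config:

```shell
# Enable the ESXi software iSCSI initiator (if it isn't already on)
esxcli iscsi software set --enabled=true

# Point dynamic discovery at the storage controller's iSCSI portal
# (10.0.0.50:3260 is a placeholder, not my real controller address)
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba33 --address=10.0.0.50:3260

# Rescan the adapter so the newly presented LUNs show up
esxcli storage core adapter rescan --adapter=vmhba33

# Sanity check: list the portals the host discovered
esxcli iscsi adapter target portal list
```

Nothing exotic there, which is part of the appeal: to the hypervisor it just looks like iSCSI storage.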