Ok, I know this will pull a significant amount of hate from all of the NAT haters, and 99.9% of the time I would agree. However, our business is unique, and that is the first thing I am going to lay out for the sake of the discussion that will follow.
What we do: We do real time video communication.
Who we do it for: Medical Institutions.
How we deliver it: Via private MPLS from the client site to our call centers. On the client side we ride their infrastructure.
Hopefully the issue becomes immediately clear. If it does not, let me help out. I own my network, the MPLS links and the CPE router. I do not own, control, influence, or have any visibility into the client infrastructure. In most cases the answer would be: who cares, push it to the gateway, NAT it, and be done with it. However, real-time communications using SIP don't natively like NAT (but I have that issue fixed… I think), and these systems are not simple point-to-point communications. Instead they are ClientX-to-server, server-to-ClientY, and ClientY-to-ClientX communications. The solution should be pretty obvious.
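For the curious, here is a minimal sketch of the kind of CPE-side arrangement I am talking about. Everything in it is hypothetical: the interface names and addresses are made up, and whether you can (or should) disable the SIP ALG this way depends on your IOS version and your fix for the SIP/NAT problem.

interface GigabitEthernet0/0
 description Client-side LAN (their infrastructure, not mine)
 ip nat inside
!
interface GigabitEthernet0/1
 description Private MPLS link toward our call center
 ip nat outside
!
! Static one-to-one NAT so the video endpoint is reachable in BOTH
! directions (ClientX to server, server to ClientY, ClientY to ClientX)
ip nat inside source static 10.1.1.10 192.0.2.10
!
! If the router's SIP ALG mangles the SDP payload, it can be turned off
no ip nat service sip udp port 5060

The static mapping is the important part: these are not simple outbound flows, so the far side has to be able to initiate traffic back in.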
Wow, so here I am just a few days from flying to San Jose for Network Field Day #3. What an honor! A bit over a year ago I sadly had to decline Steve Foskett's invitation to attend a Field Day, and ever since then I was hoping I had not blown my chance to be part of what I think is where our industry is heading with regard to vendor interaction. For so long the process has been run by insiders and closed off to the majority of the community. Tech Field Day opened the kimono, and I can't wait to participate and share, from my perspective, what we get to see and talk about while in San Jose.
I have been doing some thinking and homework prior to flying west, and I figured I would share some of that here. First off, I have to say right up front that I am a client of three of the six vendors that are presenting. Not sure if that is a good thing or a bad thing for them, but it's how it goes.
So no one who is reading this should be in the dark about some of the interesting things I have been doing as of late. But if you are, here is a quick recap.
Language Access Network, my employer, is undergoing an installation of a first-of-its-kind video call center. I will have more to write on that soon. As part of this process we had a WHOLE LOT of infrastructure put into place. For starters we needed a "SAN", we needed servers, we needed DC switching, and we needed lots and lots of licensing, and that is all before the developers and engineers jump in to make the whole thing work. The cool bits of this first part are what we did for "SAN", server and network. As you all know, I am a past Cisco UCS zealot and I have a NetApp in my basement, so you would think it would be simple math as to what I would have installed. You would be right: UCS and NetApp were about $100,000 more than I could scrape out of my budget and still have anything left for other major components. Before people get bent out of shape about me saying Cisco UCS and NetApp are too expensive: I did not say that. Honestly, I think within existing DC platforms they are both very well priced, if you don't bring next-gen platforms into the mix. In my case the next-gen platform is Nutanix. If you don't know anything about these guys, click the link and check them out.
In a nutshell, Nutanix is 4 blades of compute and 20TB of storage in a 2RU chassis with FusionIO, SSD and SAS drives, and no common backplane between the 4 nodes. Along with my four-node pod we added the Arista 7124SX as our DC switching/fabric. There are lots of details around this combination: currently Nutanix does not support using a node as a bare-metal server like you can with UCS or other blade enclosures, and the storage has limited access to the outside world (it is set up to be presented to ESXi hosts as iSCSI targets and to VMs as vSCSI targets), but so far I love the platform. It gave me what I needed at the price point I needed, and it offers huge scale-out options considering it is based on the GFS file system that Google uses across their DCs.
So tonight, as I was getting into bed, I did my normal scan of Twitter to see who I have pissed off or what might be going on that should rob me of sleep. Well, tonight @david_Strebel asked the following questions:
"Who thinks FCoE will win over iSCSI?" I responded "Not I", and then David asked the next logical question, which was why not. Here is what I had to say in the incredible detail that Twitter allows: "L2 boundaries, specialized hardware other than NICs, hate relationship from most network people."
The problem with this answer is pretty clear, though: it does not really answer the question, it just gives a few PowerPoint bullets to appease the crowd. I don't feel like that is enough. So I am going to attempt to lay out my overall view on the question of who will win, iSCSI or FCoE, and why. For those of you who don't want to read the whole article, which might get a tad windy: I don't think either will win, but I don't think FCoE will emerge as the leader until something better comes along. For those masochists who like this kind of crap, read on.
Quick and dirty is how I like it when I have 4,000 menial tasks to get done. So another oldie but goodie I had to dig up today was how to delete a full directory structure and its contents from a Cisco file system. Here it is; enjoy.
From normal enable mode:
delete /recursive /force flash:(enter the directory name)
So delete is the easy one.
/recursive sets the flag to recursively cycle through the whole directory structure you specified. So you should probably never type
delete /recursive /force flash:
BAD, DON'T DO IT! With no directory specified, that will try to wipe everything on flash, including your IOS image.
And finally, /force eliminates all of the "are you sure you don't want to shoot yourself in the forehead" messages.
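A quick worked example, with a made-up directory name. A dir beforehand is cheap insurance that you typed the path you meant, since /force means you won't get asked again:

dir flash:old_backups
delete /recursive /force flash:old_backups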
Again, quick and dirty saves time, but if you're dumb about using it, it can get you in trouble.