Category: Switches

Fake it till ya make it!

So no one who is reading this should be in the dark about some of the interesting things I have been doing as of late.  But if you are, here is a quick recap.

Language Access Network, my employer, is in the middle of installing a first-of-its-kind Video Call Center.  I will have more to write on that soon.  As part of this process we had a WHOLE LOT of infrastructure put into place.  For starters we needed a “SAN”, we needed servers, we needed DC switching and we needed lots and lots of licensing, and that is all before the developers and engineers jump in to make the whole thing work.  The cool bits of this first part are what we did for “SAN”, server and network.  As you all know I am a past Cisco UCS zealot and I have a NetApp in my basement, so you would think it would be simple math as to what I would have installed.  You would be right.  UCS and NetApp were about $100,000 more than I could scrape out of my budget and still have anything left for other major components.  Before people get bent out of shape about me saying Cisco UCS and NetApp are too expensive, I did not say that.  Honestly I think within existing DC platforms they are both very well priced, as long as you don’t bring next-gen platforms into the mix.  In my case the next-gen platform is Nutanix.  If you don’t know anything about these guys, click the link and check them out.

In a nutshell, Nutanix is 4 blades of compute and 20TB of storage in a 2RU chassis with FusionIO, SSD and SAS drives, and no common backplane between the 4 nodes.  Along with the four-node pod we added Arista 7124SX as our DC switching/fabric.  There are lots of details around this combination, like the fact that Nutanix does not currently support using a node as a bare metal server the way you can with UCS or other blade enclosures, and the storage has limited access to the outside world (it is set up to be presented to ESXi hosts as iSCSI targets and to VMs as vSCSI targets), but so far I love the platform.  It gave me what I needed at the price point I needed, and it offers huge scale-out options considering it is based on the GFS file system that Google uses across their DCs. Read more

Storage Wars: The Epic Battle Rages On

So tonight as I was getting into bed I did my normal scan of Twitter to see who I have pissed off or what might be going on that should rob me of sleep.  Well, tonight @david_Strebel asked the following question:

“Who thinks FCoE will win over iSCSI?”  I responded “Not I”, and then David asked the next logical question, which was why not.  Here is what I had to say in the incredible detail that Twitter allows: “l2 boundaries, specialized hardware other than nics, hate relationship from most network people.”

 

The problem with this answer is pretty clear, though.  It does not really answer the question; it just gives a few PowerPoint bullets to appease the crowd.  I don’t feel like that is enough.  So I am going to attempt to lay out my overall view on the question of who will win, iSCSI or FCoE, and why.  For those of you who don’t want to read the whole article, which might get a tad windy: I don’t think either will win outright, and I don’t think FCoE will emerge as the leader before something better comes along.  For those masochists who like this kind of crap, read on.

Read more

Quick and Dirty…Ooohhhh….Yeahhhh

Quick and dirty is how I like it when I have 4000 menial tasks to get done.  So another oldie but goodie that I had to dig up today was how to delete a full directory structure and its contents from a Cisco file system.  So here it is, enjoy.

From normal enable mode:

delete /recursive /force flash:<directory-name>

So delete is the easy one.

/recursive sets the flag to recursively cycle through the whole directory structure you specified.  So you should probably never type

delete /recursive /force flash:  BAD, DON’T DO IT! (that targets the entire flash file system)

And finally, /force eliminates all the “are you sure you don’t want to shoot yourself in the forehead” messages.
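So for example, say you had a leftover image directory from an old upgrade (the directory name here is just a made-up example); one line nukes the whole thing and a quick dir confirms it is gone:

delete /recursive /force flash:old_image_backup
dir flash: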

Again, quick and dirty saves time, but if you’re dumb about using it, it can get you in trouble.

TACACS+ on Nexus 7000

I have been through a couple of these Nexus deployments now that use a combination of 7Ks, 5Ks, and 2Ks. If you know anything about this platform you know that TACACS and AAA only really apply to the 7Ks and 5Ks. Here is my working template of what it takes to get these guys talking to an ACS server.

feature tacacs+
tacacs-server key 0 YOUR.ACS.KEY
tacacs-server host X.X.X.X
tacacs-server host X.X.X.X
tacacs-server host X.X.X.X
aaa group server tacacs+ GROUP.NAME
server X.X.X.X
server X.X.X.X
server X.X.X.X
source-interface YOUR.INTERFACE (a VLAN SVI, an Ethernet interface, or mgmt0; if the servers are reached through a VRF, add use-vrf YOUR.VRF under the group as well)

aaa authentication login default group GROUP.NAME
aaa authentication login console group GROUP.NAME
aaa authorization commands default group GROUP.NAME
aaa accounting default group GROUP.NAME
aaa authentication login error-enable
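Before you close the session you configured this from, it is worth sanity-checking that the switch can actually reach and use the ACS servers so you don’t lock yourself out. A quick sketch, with placeholder credentials for a test account you would need to have defined on the ACS side:

show tacacs-server
show aaa groups
test aaa group GROUP.NAME TEST.USER TEST.PASSWORD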
Read more

I HAVE THE POWER!!!!!!

It is funny how things cycle. We have been doing a bunch of Cisco 4500 installs ranging from 4506s through the 4510, with even a few 6500s in the mix. And no matter how hard we try, we have power issues with them every single time. Either we are in a hurry and spec the wrong cables, the client requests the wrong cable, we don’t have the correct power to stage the equipment in our office, or the client doesn’t have the right power for the unit. In many cases we temporarily fall back to using 110V power with NEMA 5-15/20T cables and then force the power supplies into combined mode in order to get enough power to bring up the entire chassis.  I should point out that this is usually only good as a temporary fix and that you should fix your power issue (usually by installing bigger circuits) and move back to redundant mode.  But for that quick fix, here is the command on a 4500 or 6500 chassis to combine the power supplies:

power redundancy-mode combined

This command should be run from config mode, and once your config is saved it will persist in this state after a reboot.
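Here is roughly what the whole sequence looks like, with show power tacked on at the end to confirm the supplies actually went to combined mode (the output varies by chassis and supplies):

configure terminal
power redundancy-mode combined
end
copy running-config startup-config
show power

And once the proper circuits are in place, power redundancy-mode redundant puts you back where you belong.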

And for a bit of extra fun scream out BY THE POWER OF GREYSKULL as you type this in.