
Showing posts from March, 2011

Dell PowerEdge M1000e

We recently installed a Dell PowerEdge M1000e chassis with a couple of blades in it. Since we wanted to run FCoE to the blade, our only option (at least that was available at the time) was the 10 Gig pass-through blade. After hooking everything up and installing ESX, we found that only one of the 10 Gig links would come up for each host - not a promising start. We also had a score of annoying little problems across the CMC management interface and the KVM interface. So far, I'm not exceedingly impressed by Dell's blade offering.

My love/hate relationship with Cisco Nexus 1000v Part 2

Continuing on with the hate part of my relationship, we recently ran into an issue where our Primary VSM died on us. It would boot up to this:

Loader Loading stage 1.5.
Loader loading, please wait...

User Access Verification
KCN1K login: admin
Password:
No directory, logging in with HOME=/

Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2010, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are owned by other third parties and used and distributed under license. Certain components of this software are licensed under the GNU General Public License (GPL) version 2.0 or the GNU Lesser General Public License (LGPL) Version 2.1. A copy of each such license is available at http://www.opensource.org/licenses/gpl-2.0.php and http://www.opensource.org/licenses/lgpl-2.1.php

System coming up. Please wait...
Couldn't open shared segment for cli server
System is not ready. Please retry when...

My love/hate relationship with Cisco Nexus 1000v Part 1

Over a year ago, we deployed Cisco Nexus 1000v virtual switching into our VMware cluster. I love some of the features it offers, but the problems we have run across still haunt me.

The first time I deployed it into our environment, it was against a virtualized vCenter instance, with the vCenter database running on a separate SQL server VM in the same cluster. This was all fine and dandy - until we had a major power outage that took the entire cluster offline. The problem with this scenario is that it leads to recursive and cascading failures if not carefully designed. Even if it is carefully designed, it can still make for an unpleasant recovery.

Think this through with me. In the event of a complete power outage, what is the recovery process with a typical virtualized vCenter installation? I believe it looks something like this:

1) Power the hosts back up
2) Power up the AD server (physical)
3) Locate and power up the SQL server (unless this is on-box with vCenter)
4) Locate the vCenter VM and boot it up

What about wi...