Category Archives: Virtualization

Everything about Virtualization


I did not closely follow the VSAN announcement from VMware, however this is what I could pick up from the folks who were there to hear about it first hand.

  • 32 node support (up from the 16 node support announced at Partner Exchange last month, and up from the 8 nodes which we supported during the beta)
  • 2 million IOPS (using IOmeter 100% read, 4KB block size. Also 640K IOPS using IOmeter with 70/30 read/write ratio & 4KB block size)
  • 3200 virtual machines (100 per node)
  • 4.4 PB of storage (using 35 disks per host x 32 hosts per cluster)

There is also interoperability with vSphere Data Protection for backups, vMotion, DRS, HA, VMware View, and vSphere Replication for DR. VSAN is set to go GA on March 10th; pricing and licensing have not been announced yet and will probably come after March 10th.

Taken from Source.



> SRM will report operation timed out when trying to power on virtual machines on a busy shared DR vCenter site

> Increasing the time out from the default 900 seconds will help prevent the issue.


VMware came out with a KB article that talks about timeout errors occurring while powering on virtual machines on a shared recovery site. With SRM 5.1, you can have one shared recovery site for up to 10 production sites.

However you might see that SRM reports operation timed out errors when powering on the virtual machines.

The error message is – Error: Operation timed out: 900 seconds.

VMware recommends increasing the default timeout beyond 900 seconds. The timeout occurs when vCenter is running too many virtual machines and is too busy to respond to the SRM server – we are talking thousands of virtual machines here.

  1. Go to C:\Program Files\VMware\VMware vCenter Site Recovery Manager\config on the SRM Server host machine on the recovery site.
  2. Open the vmware-dr.xml in a text editor.
  3. In the <RemoteManager> section, increase the default timeout value from 900 to a larger number, for example 1200.
  4. Restart the SRM Server service.
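As a sketch, the edit in step 3 looks something like the fragment below – note that the element name inside <RemoteManager> is an assumption on my part, so match whatever your vmware-dr.xml actually contains (the KB article has the exact name for your SRM version):

```xml
<RemoteManager>
  <!-- Raise the power-on timeout from the 900-second default (value is in seconds) -->
  <DefaultTimeout>1200</DefaultTimeout>
</RemoteManager>
```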

This should take care of the error. You could also do this if you had a high-latency network, however you would not want to run a DR site over a high-latency network in the first place.

Here is the KB Article.

A workaround would be to split a busy vCenter into multiple instances. This could incur additional licensing costs, however it could prevent such timeouts as well.

Hope this helps 🙂


I had a project to work on that involved the Key Management Service. Here is a write-up of different troubleshooting tips and techniques to make your life easier. Below are excerpts and some data taken from a pretty good TechNet article.

If you are a Windows user then you must be aware of the license activation methods in Windows. Many enterprise customers set up the Key Management Service (KMS) to enable activation of Windows in their environment. Setting up the KMS host is a simple process, and once the service is activated, the KMS clients discover it and attempt to activate on their own.

KMS Overview

The KMS client-server model is conceptually similar to DHCP: instead of handing out IP addresses to clients on request, KMS enables product activation. KMS is also a renewal model, with the clients attempting to reactivate on a regular interval. There are two roles: the KMS host and the KMS client.

  • The KMS host runs the activation service and enables activation in the environment. The KMS host is the system where you will need to install a key (the KMS key from the Volume License Service Center (VLSC)) and then activate the service. The service is supported on Windows Server 2003, Windows Vista, Windows 7, Windows Server 2008, Windows Server 2008 R2, and the newer Windows Server 2012.
  • The KMS client is the Windows operating system that is deployed in the environment and needs to activate – basically the servers you deploy in your environment. KMS clients can run any edition of Windows that uses Volume Activation, which includes Windows Vista, Windows 7, Windows Server 2008, Windows Server 2008 R2, and the recently released Windows Server 2012 as well. KMS clients come with a key pre-installed, called the Generic Volume License Key (GVLK), also known as the KMS Client Setup Key. The presence of the GVLK is what makes a system a KMS client; think of the GVLK as the way the client knows it needs to look for a KMS host. For the whole thing to work, you have to have the DNS SRV record set up. KMS clients find the KMS host via a DNS SRV record (_vlmcs._tcp), which allows them to discover and use the service to activate themselves. When in the 30-day Out of Box grace period, they will try to activate every 2 hours. Once activated, KMS clients will attempt a renewal every 7 days.

There are two areas to check on the KMS host. To check the status of the Software Licensing service on the host, open an elevated command prompt and type SLMGR.vbs /dlv. This gives you verbose output from the Software Licensing service.
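The check above, plus a quick way to verify the DNS SRV record mentioned earlier (the domain name below is a placeholder for your own AD DNS domain):

```
:: From an elevated command prompt on the KMS host:
cscript //nologo slmgr.vbs /dlv

:: Verify the _vlmcs SRV record that KMS clients use for discovery:
nslookup -type=srv _vlmcs._tcp.example.com
```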


Below is a list of the information

Read More …


I heard the term and it threw me off. So this is me making it easier for myself and for all of you to understand what VIFs are all about.


> VIF = Virtual Interface

> VIF is a feature in Data ONTAP. Data ONTAP is the OS that NetApp storage devices run on. So basically a VIF is a feature of NetApp storage devices.

> VIFs implement link aggregation – combining multiple network links to work as one.

> Other vendors may refer to VIFs as virtual aggregations, link aggregations, trunks, or EtherChannels.

> There are three types of VIFs – Single-Mode VIF, Static Multimode VIF and Dynamic multimode VIF.

> VIFs give you higher throughput and fault tolerance, and eliminate single points of failure to enable HA for your storage system.


VIF stands for Virtual Interface, which is a feature of your NetApp storage system. The feature allows you to aggregate multiple network interfaces into one logical interface. This opens up a variety of options: higher throughput, fault tolerance, and no single point of failure.

Now let's talk about why you would need or use a VIF on your NetApp storage system. The main bottleneck between compute and storage is the network connectivity between them, among other things.

You may have your NetApp storage equipped with flash cards and SSDs, however all of that is still throttled by the 1 Gb link you have between your NetApp and your hypervisor. That means regardless of how fast your random reads and writes are on your storage system, it is only as good as that 1 Gb link. With a VIF you can aggregate the multiple NICs on your NetApp storage system – so that 1 Gb link now looks like a 4 Gb link by aggregating 4 NICs as 1 VIF.

It gets even better – if you lose one port on that aggregate, no problem! You lost 1 Gb of bandwidth, however you still have 3 Gb of throughput to your storage system and your virtual machines are still up with no downtime!

You can create VIFs of three different types – single-mode VIF, static multimode VIF, and dynamic multimode VIF.

In a single-mode VIF, only one interface is active while the others are on standby, ready to take over if the active NIC fails. One thing to remember is that all interfaces share a common MAC address. If more than one interface is on standby in a single-mode VIF configuration, the storage system picks the failover interface randomly should the active NIC fail. Since link failover is monitored and controlled by the storage system, in a single-mode VIF you DO NOT need a switch that supports link aggregation.

In a static multimode VIF, all the interfaces in the VIF are active and share a single MAC address. Unlike the single-mode VIF, where only one interface is active at a time, a static multimode VIF has all interfaces communicating at all times. This mode complies with the IEEE 802.3ad (static) standard for link aggregation. Any switch that supports aggregates can be used for this mode. The switch does NOT have to control the packet flow, because that is taken care of by the storage system and the devices on the other end. It is important to remember that the static multimode VIF does NOT support IEEE 802.3ad (dynamic), also known as Link Aggregation Control Protocol (LACP), or Cisco's proprietary Port Aggregation Protocol (PAgP).
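As an illustration (the interface names here are hypothetical, not from any particular setup), the switch side of a static multimode VIF on a Cisco IOS switch would be a port-channel with negotiation turned off:

```
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode on    ! "on" = static aggregation, no LACP/PAgP frames
```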

In a static multimode VIF the fault tolerance is “n-1”, where n is the number of interfaces participating in the aggregate. Flow control from a transmission perspective is controlled by the NetApp, but it cannot control how inbound frames arrive. That control rests with the devices on the other side of the switch, presumably a hypervisor.

Dynamic multimode VIFs can detect not only loss of link status but loss of data flow as well. This is the mode most commonly used in high-availability environments. It complies with the IEEE 802.3ad (dynamic) standard, which is the Link Aggregation Control Protocol (LACP). However, there are some things to remember about dynamic multimode VIFs.

1. Dynamic multimode VIFs must be connected to a switch that supports LACP.

2. The VIFs must be configured as first-level VIFs. A first-level VIF is nothing but a trunked interface.

3. They have to be configured to use port-based or IP-based load balancing methods.

It should be noted that in this mode all interfaces are active and share a single MAC address.
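On a 7-Mode NetApp system, the three VIF types correspond to the vif create command. A rough sketch (the interface and VIF names are hypothetical – check the syntax for your Data ONTAP version):

```
vif create single svif0 e0a e0b         # single-mode: one active link, the other on standby
vif create multi mvif0 -b ip e0a e0b    # static multimode (802.3ad static)
vif create lacp lvif0 -b ip e0a e0b     # dynamic multimode (LACP)
```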

Since we mentioned load balancing, it is worth noting that there are three load balancing methods for a multimode VIF. By default, when no method is specified, IP address based load balancing is used.

The three load balancing methods are IP address and MAC address load balancing, round-robin load balancing, and port-based load balancing.
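To make the idea concrete, here is a toy sketch (in Python, and NOT NetApp's actual algorithm) of how IP address based load balancing maps a flow to one member link – the point being that a given source/destination pair always hashes to the same link, so a single flow never gets more than one link's worth of bandwidth:

```python
import ipaddress

def pick_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Hash the IP pair so a given flow always uses the same member link."""
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return key % num_links

# The same pair always lands on the same link; different pairs spread out.
link_a = pick_link("192.168.1.10", "192.168.1.50", 4)
link_b = pick_link("192.168.1.10", "192.168.1.50", 4)
assert link_a == link_b
```

This is why a 4 x 1 Gb VIF behaves like a 4 Gb pipe only across many flows, not for one single copy job.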

Hope this helps 😉 Please comment for any clarifications.



Found this interesting infographic about upgrading your network to 10Gb Ethernet. Oh and by the way, it was made by Dell 😛



After the long two-week vacation over the Christmas and New Year's break – I think I forgot everything I knew about technology.

Wanted to refresh memory and this is one of such posts.

Below is a list of some good articles to help understand LACP and Static EtherChannel. I landed there based on a NetApp storage requirement that a customer wanted.

Hope you like them; do share if you have more 🙂


This is my first script that I published ever as in EVER! Feel free to comment.

The script’s intention is to create a role with the privileges of either set 1 or set 2. These are two different permission sets, with set 2 being a little more enhanced.

This is what this bit of code does – it takes a predefined role and its privileges and defines them in the $Privs1 array.

Once you have all you need – you just pass the parameters and call the script as .\scriptname.ps1 -vcenter vcentername -roleset number

Here the number can be 1 or 2, with 2 granting more privileges to the role. There is not much error checking, but this is it for now – more in the future..:)

param([string]$vcenter, [string]$roleset)
# Require BOTH parameters (the original used -and, which only threw when both were missing)
if((-not($vcenter)) -or (-not($roleset))){Throw "You must supply vcenter followed by the roleset you want to execute upon. Please input the roleset id as applicable. Valid Rolesets are 1 or 2"}
Add-PSSnapin VMware.* -ErrorAction SilentlyContinue
$Privs1 = @("Acknowledge Alarm","Create Alarm","Disable Alarm Action", "Modify Alarm", "Remove Alarm", "Set Alarm Status", "Create Datacenter", "Move Datacenter", "Remove Datacenter", "Rename Datacenter", "Allocate Space", "Configure Datastore", "Create Folder", "Delete Folder", "Rename Folder", "Cancel Task", "Assign Network", "Modify Intervals", "Assign Virtual Machine to Resource Pool", "Create Resource Pool", "Migrate", "Modify Resource Pool", "Move Resource Pool", "Remove Resource Pool", "Rename Resource Pool", "View", "Add Virtual Machine", "Assign Resource pool", "Assign vApp", "Clone", "Export", "Import", "Move", "Rename", "Suspend", "Unregister", "vApp Application Configuration", "vApp Instance Configuration", "vApp Resource Configuration", "View OVF Environment", "Add Existing Disk", "Add New Disk", "Add or remove device", "Advanced", "Change CPU Count", "Change Resource", "Configure Managedby", "Display Connection Settings", "Extend Virtual Disk", "Host USB Device", "Memory", "Modify Device Settings", "Query Fault Tolerance Compatibility", "Query Unowned Files", "Raw device", "Reload from path", "Remove disk", "Rename", "Set Annotation", "Settings", "Swapfile placement", "Upgrade virtual Machine compatibility", "Guest Operation Modifications", "Guest Operation Program Execution", "Guest Operation queries", "Create from existing", "Create new", "Move", "Register", "Remove", "Unregister", "Create Snapshot","Remove Snapshot", "Rename Snapshot", "Revert to Snapshot", "Answer Question", "Configure CD Media", "Configure Floppy media", "Console interaction", "Create Screenshot", "Defragment all disks", "Device connection", "Disable fault tolerance", "Enable Fault Tolerance", "Record session on Virtual machine", "Replay session on virtual machine", "Reset", "Suspend", "test Failover", "Test restart secondary VM", "Turn off Fault Tolerance", "Turn On Fault tolerance", "VMware tools install", "Guest operation program execution", "Guest operation queries", "Allow disk access", "Allow read-only disk access", "Clone template", "Clone virtual machine", "Create template from virtual machine", "Customize", "Deploy template", "Mark as Template", "Mark as virtual machine", "Modify customization specification", "Promote disks", "Read customization specifications")
$Privs2 = $Privs1 + "Create", "Delete", "Power OFF", "Power ON"
Connect-VIServer -Server $vcenter > $null
if(!(Get-VIRole -Name "Customer Privileges" -ErrorAction SilentlyContinue)){
    if($roleset -eq "1"){
        New-VIRole -Name "Customer Privileges" -Privilege $Privs1
        Write-Host "Role created with set 1 privileges"
    }
    elseif($roleset -eq "2"){
        # Use the extended set here (the original passed $Privs1 for both branches)
        New-VIRole -Name "Customer Privileges" -Privilege $Privs2
        Write-Host "Role created with set 2 privileges"
    }
}
else{
    Write-Host "Role already exists"
}



Wanted to refresh our memories on some VMware basics with the network adapters available for a VM.

Below is an extract from VMware itself!

Available Network Adapters

Only those network adapters that are appropriate for the virtual machine you are creating are available configuration options in the Choose Networks window.

  • Vlance: This is an emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in most 32-bit guest operating systems except Windows Vista and later. A virtual machine configured with this network adapter can use its network immediately.
  • VMXNET: The VMXNET virtual network adapter has no physical counterpart. VMXNET is optimized for performance in a virtual machine. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.
  • Flexible: The Flexible network adapter identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. With VMware Tools installed, the VMXNET driver changes the Vlance adapter to the higher performance VMXNET adapter.
  • E1000: An emulated version of the Intel 82545EM Gigabit Ethernet NIC. A driver for this NIC is not included with all guest operating systems. Typically Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later include the E1000 driver.

    Note: E1000 does not support jumbo frames prior to ESXi/ESX 4.1.

  • E1000e: This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the “e1000e” vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere 5. It is the default vNIC for Windows 8 and newer (Windows) guest operating systems. For Linux guests, e1000e is not available from the UI (e1000, flexible vmxnet, enhanced vmxnet, and vmxnet3 are available for Linux).
  • VMXNET 2 (Enhanced): The VMXNET 2 adapter is based on the VMXNET adapter but provides some high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. This virtual network adapter is available only for some guest operating systems on ESXi/ESX 3.5 and later.

    VMXNET 2 is supported only for a limited set of guest operating systems:

    • 32- and 64-bit versions of Microsoft Windows 2003 (Enterprise, Datacenter, and Standard Editions).

      Note: You can use enhanced VMXNET adapters with other versions of the Microsoft Windows 2003 operating system, but a workaround is required to enable the option in the VMware Infrastructure (VI) Client or vSphere Client.

    • 32-bit version of Microsoft Windows XP Professional
    • 32- and 64-bit versions of Red Hat Enterprise Linux 5.0
    • 32- and 64-bit versions of SUSE Linux Enterprise Server 10
    • 64-bit versions of Red Hat Enterprise Linux 4.0
    • 64-bit versions of Ubuntu Linux

    In ESX 3.5 Update 4 or higher, these guest operating systems are also supported:

    • Microsoft Windows Server 2003, Standard Edition (32-bit)
    • Microsoft Windows Server 2003, Standard Edition (64-bit)
    • Microsoft Windows Server 2003, Web Edition
    • Microsoft Windows Small Business Server 2003

    Note: Jumbo frames are not supported in the Solaris Guest OS for VMXNET 2.

  • VMXNET 3: The VMXNET 3 adapter is the next generation of a paravirtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features like multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery.

    VMXNET 3 is supported only for virtual machines version 7 and later, with a limited set of guest operating systems:

    • 32- and 64-bit versions of Microsoft Windows 7, XP, 2003, 2003 R2, 2008, 2008 R2, and Server 2012
    • 32- and 64-bit versions of Red Hat Enterprise Linux 5.0 and later
    • 32- and 64-bit versions of SUSE Linux Enterprise Server 10 and later
    • 32- and 64-bit versions of Asianux 3 and later
    • 32- and 64-bit versions of Debian 4
    • 32- and 64-bit versions of Ubuntu 7.04 and later
    • 32- and 64-bit versions of Sun Solaris 10 and later


    • In ESXi/ESX 4.1 and earlier releases, jumbo frames are not supported in the Solaris Guest OS for VMXNET 2 and VMXNET 3. The feature is supported starting with ESXi 5.0 for VMXNET 3 only.
    • Fault Tolerance is not supported on a virtual machine configured with a VMXNET 3 vNIC in vSphere 4.0, but is fully supported on vSphere 4.1.
    • Windows Server 2012 is supported with e1000, e1000e, and VMXNET 3 on ESXi 5.0 Update 1 or higher.


So VMware has put up a KB article about possible data corruption that can happen in a Windows Server 2012 VM when the e1000e network adapter is used.

VMware says that data corruption may occur when copying data over the network and/or after a network file copy event.

The issue is still under investigation, but be advised: use the VMXNET3 or E1000 network adapter instead of the e1000e adapter when deploying Windows Server 2012 VMs.
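For reference, the adapter type of an existing NIC is visible in the VM's .vmx file – a VMXNET3 NIC shows up like the fragment below (shown for illustration; change the adapter type through the vSphere Client rather than hand-editing the file):

```
ethernet0.virtualDev = "vmxnet3"
```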

KB Article.


I have had quite a ride trying to install VMware’s vCloud Automation Center – although it wasn’t super easy, it wasn’t super hard either. Just imagine it to be a complex maze that you have to figure out one step at a time.

In my googling and searching – I realized how few resources were available. Although resources were few, they were quite good.

So here is a dump of the same in case you need it!

Pretty good –

Two part series –

Helpful video –

Automating the install –

Another good post –

This is super awesome, however it is not specific to vCAC 5.2 – but still good enough –

The annoying installation guide will only confuse you – Installation Guide

Another good one that helped –

Hope this will help you get off to the right start.

Post more links or comment as needed 🙂