Category Archives: Virtualization

Everything about Virtualization

vSAN Design Decision

Just a quick read and a refresher on vSAN design decisions to keep my memory fresh.

Sizing for capacity, maintenance and availability 

1. The minimum configuration required for Virtual SAN is 3 ESXi hosts, or two hosts in conjunction with an external witness node. However, this smallest environment has important restrictions.

2. In Virtual SAN, if there is a failure, an attempt is made to rebuild any virtual machine components from the failed device or host on the remaining cluster. In a 3-node cluster, if one node fails, there is nowhere to rebuild the failed components. The same principle holds for a host that is placed in maintenance mode.

3. One of the maintenance mode options is to evacuate all the data from the host. However, this will only be possible if there are 4 or more nodes in the cluster, and the cluster has enough spare capacity.

4. One additional consideration is the size of the capacity layer. Since virtual machines deployed on Virtual SAN are policy driven, and one of those policy settings (NumberOfFailuresToTolerate) will make a mirror copy of the virtual machine data, one needs to consider how much capacity is required to tolerate one or more failures.

Design decision: four nodes or more provide more availability options than three-node configurations. Ensure there is enough storage capacity to meet the availability requirements and to allow for a rebuild of the components after a failure.
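As a rough, back-of-the-napkin illustration of that last point (the numbers below are made-up assumptions, not from any sizing guide): with NumberOfFailuresToTolerate=1 every object is mirrored, so the raw requirement is roughly double the usable capacity, plus some free space held back so components can be rebuilt after a failure or during maintenance.

    # hypothetical: 10 TB usable, FTT=1 (two copies), keep ~30% free for rebuilds
    echo "10 * (1 + 1) / 0.7" | bc -l    # ≈ 28.6 TB of raw capacity across the cluster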

Brew Update Error in OS X El Capitan

Brew (Homebrew) kept erroring out on OS X El Capitan – it turns out this has to do with a security feature called SIP (System Integrity Protection), which prevents changes to the /System, /usr, and /sbin directories.

You can disable it by booting into Recovery mode and running the “csrutil disable” command.

For Homebrew to work, reset ownership of /usr/local:

$ sudo chown -R $(whoami):admin /usr/local  

If it still doesn’t work, use the following steps:

1. open terminal  
2. $ cd /usr/local  
3. $ git reset --hard  
4. $ git clean -df
5. $ brew update
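Once that is done, a quick sanity check never hurts (this assumes Homebrew lives in the default /usr/local):

$ csrutil status          # confirm the current SIP state
$ ls -ld /usr/local       # the owner should now be your user
$ brew doctor             # let Homebrew report anything else it is unhappy about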


vRealize Business 7.0.1 Blueprint Deployment Bug!

Hello all,

I deployed vRealize Business for Cloud 7.0.1, and it turns out there is a known issue where, once you deploy it in your environment, it immediately causes all your blueprint requests to fail.

This is a known issue, and several workarounds have been discussed in the communities. One such workaround:

After vRealize Business for Cloud 7.0.1 upgrade, you cannot provision vRealize Automation with a blueprint

After the upgrade, an attempt to provision vRealize Automation by using a blueprint fails with an error message.

Request failed while fetching item metadata: Error communicating with Pricing service.

Workaround: Perform the following steps:

  1. Unregister vRealize Business for Cloud with vRealize Automation.
  2. Generate the self-signed certificate for vRealize Business for Cloud.
  3. Re-register vRealize Business for Cloud with vRealize Automation.
  4. Wait for 10 minutes for all the services to start.
  5. Start provisioning vRealize Automation.

Hope that helps,

Disks Will Fail – VSAN Doesn’t!

I was building my lab with VSAN-backed storage. Because I did not have a compatible RAID card, I grouped the disks as a single RAID-0 and presented it to VSAN. It worked, and worked well – VSAN saw one big disk of about 1.6 TB (three disks of 550 GB each).

I was advised that this was a bad idea: the data gets striped by the RAID controller, so if one disk goes bad, VSAN is stuck with a missing and corrupted disk.

The proper way to do this is to create a RAID-0 per disk and present each individual disk volume to VSAN to participate in a disk group. I went back and did that, and boy am I glad that I did.

Today this happened


The fix was easy – just highlight the disk and click Actions to remove the disk from the disk group. Depending on the failure, you might be able to recover the data.
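If you would rather do it from the command line, something along these lines should work (the naa ID below is a placeholder – use the device ID of your failed disk):

    esxcli vsan storage list                      # find the device ID of the failed capacity disk
    esxcli vsan storage remove -d naa.xxxxxxxxxx  # pull it out of the disk group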

Thanks to Luke Huckaba (@thephuck) for pointing it out in my lab, which saved me hours of work later.

Now time to set up SPBM to avoid multiple failures 😀

Running Nested ESXi on VSAN

I was trying to deploy a nested ESXi host on VSAN-backed storage and kept running into this error during install.

This program has encountered an error:

Error (see log for more info):
Could not format a vmfs volume.
Command '/usr/sbin/vmkfstools -C vmfs5 -b 1m -S datastore1 /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:3' exited with status 1048320

Turns out the problem is with a SCSI-2 reservation being generated as part of creating a default VMFS datastore. You can read more here.

The fix is a simple hack: run this on each of your hosts; no system reboot is required.

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
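To double-check that the option stuck, you can list it back out:

esxcli system settings advanced list -o /VSAN/FakeSCSIReservations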


New NSX Versions

VMware today announced new NSX versions

New NSX Offerings

Standard Edition: Automates IT workflows, bringing agility to the data center network and reducing network operating costs and complexity.

Advanced Edition: Standard Edition plus a fundamentally more secure data center with micro-segmentation. Helps secure the data center to the highest levels, while automating IT provisioning of security.

Enterprise Edition: Advanced Edition plus networking and security across multiple domains. Enables the data center network to extend across multiple sites and connect to high-throughput physical workloads.

See more at:

EMC World – Virtustream Cloud Security and Risk Compliance

EMCWORLD – Getting started with Containers

VCSA DNS Error Troubleshoot

I set up BIND, but the vCenter appliance couldn’t see it. This is how you troubleshoot that.

To resolve this issue from vCenter:

  1. Open the console to the vCenter Server Appliance, press CTRL+ALT+F3, and log in with the root credentials that you specified during the install phase.
  2. To enable the shell, run this command:

    shell.set --enabled true

  3. Enter the shell with this command:

    shell

  4. Ping the DNS servers to confirm communication with this command:

    ping yourdnsfqdn

  5. Use nslookup to make sure the vCenter Server Appliance can be resolved:

    nslookup vcenterFQDN

  6. Use the nslookup command to resolve the shortname.
  7. After the underlying networking issue is resolved, redeploy the vCenter Server Appliance.

My issue was that firewalld was blocking DNS, and once I took it out, it worked. Stop yelling – I will add the rules back once vCenter is fully installed.
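For reference, this is roughly what the quick fix (and the rule I still owe the server) looks like on the BIND box – a sketch, since your zone and interface setup may differ:

    systemctl stop firewalld                       # the quick-and-dirty fix used here
    firewall-cmd --permanent --add-service=dns     # the proper rule to put back later
    firewall-cmd --reload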

It is recommended that once you are able to reach DNS, you redeploy vCenter.

Hope this helped.