Author Archives: Ranjit Singh Thakurratan

SDDC Era …

Been preoccupied lately with loads of work and SDDC stuff, but it's time to get back to sharing and caring! A post of mine went online on Monday: The Software-Defined Data Center Era is Here.

DR in vCloud Director

On Jan 26, VMware quietly announced the release of vCloud Availability 1.0.1, which allows you to build disaster recovery solutions in vCloud Director. The idea here is replication of VMs to a multi-tenant cloud. The best thing about this replication technology is that you don't need a dedicated replication network; replication traffic can safely travel over the internet. Another thing to note is that replication in either direction is always initiated from the on-prem site.

Ever since VMware's cloud offerings showed up, there has been a dire need to use the cloud as a target for replication traffic, and VMware aims to solve that with vCloud Availability 1.0.1. The service is said to scale to hundreds of tenants, which means any hosting provider can now be a DR target for a plethora of customers.

You can read more details on how it works here.

The official VMware blog post about it is here.


So Sensitive… vRA

vRA is very sensitive. All the users, all the permissions, and all the DEM workers have to be set up and happy, or else tabs will start going missing in vRA and leave you scratching your head.

I started missing the Reservations tab and it was frustrating! Yes it was.

There is a decent KB article that made me realize how important it is to add users and assign them their appropriate permissions in vRA.

Here is the KB article.

vSAN Design Decision

Just a quick read and a refresher on vSAN design decisions to keep my memory fresh.

Sizing for capacity, maintenance and availability 

1. The minimum configuration required for Virtual SAN is 3 ESXi hosts, or two hosts in conjunction with an external witness node. However, this smallest environment has important restrictions.

2. In Virtual SAN, if there is a failure, an attempt is made to rebuild any virtual machine components from the failed device or host on the remaining cluster. In a 3-node cluster, if one node fails, there is nowhere to rebuild the failed components. The same principle holds for a host that is placed in maintenance mode.

3. One of the maintenance mode options is to evacuate all the data from the host. However, this will only be possible if there are 4 or more nodes in the cluster, and the cluster has enough spare capacity.

4. One additional consideration is the size of the capacity layer. Since virtual machines deployed on Virtual SAN are policy driven, and one of those policy settings (NumberOfFailuresToTolerate) will make a mirror copy of the virtual machine data, one needs to consider how much capacity is required to tolerate one or more failures.

Design decision: 4 nodes or more provide more availability options than 3 node configurations. Ensure there is enough storage capacity to meet the availability requirements and to allow for a rebuild of the components after a failure.
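
To make the mirroring math concrete, here is a minimal back-of-the-napkin sizing sketch you can run in a shell. The VM count, VM size, and slack percentage are made-up example numbers, not recommendations; plug in your own.

# Example figures only: 100 VMs of 50 GB each, NumberOfFailuresToTolerate=1 (mirroring),
# and roughly 30% slack so components can be rebuilt after a failure or during maintenance.
VM_COUNT=100
VM_SIZE_GB=50
FTT=1
SLACK_PCT=30

USABLE_GB=$(( VM_COUNT * VM_SIZE_GB ))                        # capacity the VMs actually consume
RAW_GB=$(( USABLE_GB * (FTT + 1) ))                           # FTT=1 keeps a full mirror copy of each object
RAW_PLUS_SLACK_GB=$(( RAW_GB * (100 + SLACK_PCT) / 100 ))     # headroom for rebuilds and maintenance

echo "Usable capacity needed  : ${USABLE_GB} GB"
echo "Raw capacity at FTT=${FTT}   : ${RAW_GB} GB"
echo "Raw capacity plus slack : ${RAW_PLUS_SLACK_GB} GB"

With these example numbers the cluster needs roughly 13 TB of raw capacity to serve 5 TB of usable VM data.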

Brew Update Error in MacOS El Capitan

Brew (Homebrew) kept erroring out on MacOS El Capitan. It turns out it was due to a security feature called SIP (System Integrity Protection), which prevents changes to the /System, /bin, /sbin, and /usr directories (with /usr/local exempted).

You can disable it by running the "csrutil disable" command from the Terminal in Recovery Mode, then rebooting.

For Homebrew to work:

$ sudo chown -R $(whoami):admin /usr/local  

If it still doesn't work, use the following steps:

1. open terminal  
2. $ cd /usr/local  
3. $ git reset --hard  
4. $ git clean -df
5. $ brew update
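
To double-check that things are back in shape afterwards, a couple of quick checks using the standard csrutil and Homebrew commands:

csrutil status    # reports whether System Integrity Protection is enabled or disabled
brew doctor       # flags any leftover permission or repository problems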

Credit 

VMworld 2016, Book giveaway and New Updates!


I have been very, very busy with work and strategic leadership sessions, which have kept me away from contributing to the community, so let me begin with a huge sorry!

So VMworld 2016 – Be_Tomorrow – is in full swing, and it's really great to see VMware focus on IT transformation and a hybrid cloud strategy to enable "seamless" multi-cloud consumption, something few vendors out there offer, and none better than VMware themselves.

I am sure you have heard about the new vRealize Automation 7.x release and the upcoming NSX features for multi-cloud and hybrid cloud environments. This means you can use NSX to manage networks in Amazon and Azure! You can also create ONE security policy that spans all the clouds and is applied automatically. No more going to each cloud platform to modify your policy: apply it in NSX and see it enforced in AWS, GCP, vCloud Air, Azure and OpenStack! This is huge, unless you think security is not important, in which case you will be in trouble soon.

Horizon Suite has some major updates as well! Below is a grab from this blog site.

  • Further enhancements to the protocol
  • Improvements in the GPU-encode/decode that significantly lower bandwidth and latency
  • Improvements in the JPG/PNG codec to reduce bandwidth utilization by 6x
  • vRealize Operations integration with Blast Extreme. I can now see Blast statistics in the vROps console
  • UEM Smart Policies integration with Blast. I can now use the same PCoIP smart policies to control the Blast protocol. This enhancement also allows administrators to set per-device policies, so I can set different policies for Windows, Mac, Android, and iOS.
  • A Raspberry Pi client

vRealize Automation features are awesome – the new 7.x release includes:

  • Improved Views – you can now see the IP address of a provisioned VM in the Items view and change the All Services icon!
  • Improved Migration and Upgrades – vRA 6 to 7.1
  • Out-of-the-box Active Directory integration for provisioning
  • Out-of-the-box IPAM integration – Infoblox
  • Portlets for external web content – great for posting how-to videos or company-wide messages
  • Scale out/in – support for elastic applications

And these are still in beta and super exciting:

  • Native Container Support
  • Azure Endpoint – Provision to Microsoft Azure  (YAYYYYYYYYYYYY!!!!!!!)
  • vRA Service Now integration – Use Service Now portal for vRA (I FAINTED!!!)

Pretty awesome, right? Well, there are more upgrades and improvements across the entire VMware portfolio, so I will blog about them soon.

And before I forget – we had a very successful book signing session at the Rackspace booth! Stop by and have a chat with me or other awesome folks.

Be_Tomorrow!

vRealize Business 7.0.1 Blueprint Deployment Bug!

Hello all,

I deployed vRealize Business (vRB) for Cloud 7.0.1, and it turns out there is a known issue: once you deploy vRealize Business for Cloud in your environment, it immediately causes all your blueprints to fail.

This is a known issue, and some workarounds have been discussed across the communities. One such workaround is below:

After vRealize Business for Cloud 7.0.1 upgrade, you cannot provision vRealize Automation with a blueprint

After the upgrade, an attempt to provision from vRealize Automation using a blueprint fails with the following error message:

Request failed while fetching item metadata: Error communicating with Pricing service.

Workaround: Perform the following steps:

  1. Unregister vRealize Business for Cloud from vRealize Automation.
  2. Regenerate the self-signed certificate for vRealize Business for Cloud.
  3. Re-register vRealize Business for Cloud with vRealize Automation.
  4. Wait about 10 minutes for all services to start.
  5. Start provisioning from vRealize Automation.

Hope that helps,

Azure vs AWS IaaS/Networking Comparison

This is a good picture of what AWS and Azure IaaS/networking look like – the side-by-side product comparison is really helpful when you are evaluating both products and pulling your hair out.

This is again high level, but it begins to draw a picture in your head and helps connect the dots.

Enjoy!

[Image: Azure vs AWS IaaS/networking comparison chart]

You can read more @ MSDN – 

Disks Will Fail – VSAN Doesn’t!

I was building my lab with VSAN-backed storage. Because I did not have a compatible RAID card, I grouped the disks as a single RAID-0 and presented it to VSAN. It worked, and worked well – VSAN saw one big disk of about 1.6 TB (I had three disks of 550 GB each).

I was advised that this was a bad idea: because the data gets striped across the disks by the RAID controller, if one disk goes bad, VSAN is left with a missing and corrupted disk.

The proper way to do this is to create a RAID-0 per disk and present all of these individual disk volumes to VSAN to participate in a disk group. I went back and did that, and boy am I glad that I did.

Today this happened

[Screenshot: vSAN reporting a failed disk]

The fix was easy – just highlight the disk and use the Actions menu to remove it from the disk group. Depending on the failure, you might be able to recover the data. [Screenshot: removing the failed disk from the disk group]
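
If you prefer the host's command line over the web client, something along these lines should do the same job; the device name below is a placeholder, so grab the real one from the list command first.

esxcli vsan storage list                              # identify the failed capacity disk and note its device name
esxcli vsan storage remove -d naa.xxxxxxxxxxxxxxxx    # placeholder device name; removes that capacity disk from its disk group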

Thanks to Luke Huckaba – @thephuck – for pointing this out in my lab, which saved me hours of work later.

Now time to set up SPBM to protect against multiple failures 😀

Running Nested ESXi on VSAN

I was trying to deploy nested ESXi on VSAN-backed storage and kept running into this error during the install.

This program has encountered an error:

Error (see log for more info):
Could not format a vmfs volume.
Command '/usr/sbin/vmkfstools -C vmfs5 -b 1m -S datastore1
/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:3' exited with status 1048320

It turns out the problem is a SCSI-2 reservation generated as part of creating the default VMFS datastore. You can read more here.

The fix was this simple hack: run the following on each of your hosts; no system reboot is required.

esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 1
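
To confirm the setting took, and to flip it back once you are done with nested installs, the same advanced-option path can be listed and reset:

esxcli system settings advanced list -o /VSAN/FakeSCSIReservations      # the Int Value field should read 1 while the hack is active
esxcli system settings advanced set -o /VSAN/FakeSCSIReservations -i 0  # revert to the default afterwards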

Enjoy!