Author Archives: Ranjit Singh Thakurratan

VSAN NETWORK CHATTER

What goes on on the vSAN network? Let's take a brief look so we can understand the different types of chatter that travel over it.

First things first, there is the communication that takes place between all the hosts participating in a vSAN cluster. The master node sends a heartbeat to all the other nodes in the cluster. Since vSAN 6.6, this communication is unicast.

When a host is part of a vSAN cluster, it takes on one of three roles: master, backup, or agent. As an admin, you have no control over which host becomes the master or the backup; role election is handled entirely by vSAN. The second type of communication between the hypervisors in a vSAN cluster is the clustering, monitoring, membership, and directory services (CMMDS) updates that the master node distributes to all nodes. Since vSAN 6.6, this traffic is also unicast. The volume of traffic between the master, backup, and agents is light and steady, so high bandwidth is not a concern here.

The majority of traffic on a vSAN network comes from virtual machine disk I/O. A VM on the vSAN datastore is made up of a set of objects, each of which consists of one or more components. When a VM's storage policy calls for multiple copies, its replica writes traverse the vSAN network to other nodes. This is unicast traffic and forms the bulk of the vSAN network traffic.
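To see why replica traffic dominates, it helps to count the components a single object spreads across the cluster. The sketch below is a minimal illustration, assuming RAID-1 mirroring: each failure to tolerate adds one more full replica plus one witness component, and every component must land on a distinct host.

```python
def hosts_required(ftt: int) -> int:
    """Minimum distinct hosts for a RAID-1 mirrored object with a given
    NumberOfFailuresToTolerate (FTT): ftt + 1 data replicas plus ftt
    witness components, each on its own host, i.e. 2 * ftt + 1 hosts."""
    if ftt < 0:
        raise ValueError("ftt must be non-negative")
    return 2 * ftt + 1

for ftt in range(3):
    print(f"FTT={ftt}: replicas={ftt + 1}, minimum hosts={hosts_required(ftt)}")
```

Every extra replica is another full copy of the VM's writes crossing the vSAN network, which is why disk I/O dwarfs the heartbeat and CMMDS chatter.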

Best practice for the vSAN network is a minimum of 10GbE and no routing. If the traffic must be routed, use only static routes, though even that is not recommended. Also, do not put vSAN traffic on an NSX overlay network; because of the circular dependency, that configuration is NOT supported.
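If you do have to route vSAN traffic, the static route is added per host with esxcli. The addresses below are placeholders for illustration only; substitute your own remote vSAN subnet and local gateway.

```shell
# Add a static route to the remote vSAN subnet (example addresses)
esxcli network ip route ipv4 add --network 192.168.20.0/24 --gateway 192.168.10.1

# Verify the routing table
esxcli network ip route ipv4 list
```

Remember this has to be repeated on every host in the cluster, which is one more reason a flat Layer 2 vSAN network is preferred.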

CAN VSAN NETWORK RUN ON VXLANS?

An interesting question: can the vSAN network be configured on VXLANs backed by NSX?

The answer is no, and this is to avoid a circular dependency.

“However, very often, the question of compatibility is asked in the context of being able to place the vSAN network traffic on an NSX managed VxLAN/Geneve overlay. In this case, the answer is no, NSX does not support the configuration of the vSAN data network traffic over an NSX managed VxLAN/Geneve overlay. This is not unique to vSAN. The same restriction applies to any statically defined VMkernel interface traffic such as vMotion, iSCSI, NFS, FCoE, Management, etc.

Part of the reason for not supporting VMkernel traffic over the NSX managed VxLAN overlay is primarily to avoid any circular dependency of having the VMkernel infrastructure networks dependent on the VxLAN overlay that they support. The logical networks that are delivered in conjunction with the NSX managed VxLAN overlay are designed to be used by virtual machines which require network mobility and flexibility.”

Now you know.

Learning VMware NSX Second Edition Released


Just in time for VMworld 2017, which officially kicks off an hour from now, "Learning VMware NSX, Second Edition" has been released.

I took all the constructive feedback from the first edition and incorporated it into the second. This edition comes with all the applicable updates for NSX 6.3.3 and brings deep clarity to help you get started quickly with VMware NSX.

Software-defined networking not only makes it easy to connect your networks and expand at a rapid pace, but also makes it a breeze to connect to multiple public clouds with near-zero infrastructure investment (depending on your topology).

Order yours today, and feel free to send me your feedback. It will be most welcome.

Order today at –

Publisher

Amazon

Lastly, many thanks to my readers, and last but not least, my lovely wife and pup, who remind me why we need to smile every day and celebrate our lives.

VMware Security Advisory

The VMware blog announced a security advisory today.

The advisory documents a hard-to-exploit denial-of-service vulnerability in the OSPF protocol implementation in NSX-V Edge (CVE-2017-4920). The issue stems from incorrect handling of link-state advertisements (LSAs). NSX-V Edge 6.2.8 and 6.3.3 address the issue.

More Info VMSA-2017-0014

VMworld NSX SWAG :: AIRDROP

VMworld 2017 is close, and the vExpert team made sure they kept us happy.

Today I got my vExpert NSX VMworld Swag! A big box of goodies that helps vExperts stand out in the crowd.

Have a look

Inside there is a jacket, a T-shirt, and a water bottle, all branded vExpert NSX 2017!

Sweet – Thanks NSX vExpert Team!

VMworld 2017 – IT Meeting OT

IT/digital transformation is the wave that all the major enterprises are riding. It promises not just innovation but faster time to market, cost reductions, and overall revenue increases across the business.

This VMworld 2017, the theme is undoubtedly focused not just on technology, but on business outcomes through IT/digital transformation. It is important to remember that what you are working on does not matter if it fails to achieve effective long-term business outcomes. This is what every business executive is driven toward: achieving those outcomes with maximum efficiency and minimal cost.

As technologies evolved, businesses were quick to extract more value out of their physical assets without increasing costs (and assets) exponentially. This has steadily increased the consumption and management of these ever-growing and highly sought-after resources.

Companies today lack the resources, the understanding, and the necessary customization required to build the transformational processes that drive the efficiency engine behind IT/digital transformation. Applying older processes to an advanced technology is like putting an advanced engine in a 1940s sedan: the engine will work well, but the sedan will simply fall apart.

We will continue this discussion, identify the different aspects of IT transformation, and gradually take the conversation to OT (operational) transformation. Stay tuned.

SDDC Era …

A post that went online on Monday.

I've been preoccupied lately with loads of work and SDDC stuff, but it's time to get back to sharing and caring!

The Software-Defined Data Center Era is Here

DR in vCloud Director

On Jan 26, VMware quietly announced the release of vCloud Availability 1.0.1, which allows you to build disaster recovery solutions in vCloud Director. The idea here is replication of VMs to a multi-tenant cloud. The best thing about this replication technology is that you don't need a special replication network; replication traffic can safely travel over the internet. Another thing to note is that replication in either direction is always initiated from the on-premises site.

After VMware Cloud showed up, there was a dire need for the cloud as a replication target, and VMware aims to solve that with vCloud Availability 1.0.1. The service is said to scale to hundreds of customers, which means any hosting provider can now be a DR target for a plethora of customers.

You can read more details on how it works – here

The official VMware blog about it is – here.


So Sensitive… vRA

vRA is very sensitive. All the users, all the permissions, and all the DEM workers have to be set up and happy, or else you will start missing tabs in vRA, which will make you break your head.

I started missing the Reservations tab and it was frustrating! Yes it was.

There is a decent KB article that made me realize how important it is to add users and assign them the appropriate permissions in vRA.

Here is the KB article.

vSAN Design Decision

Just a quick read and a refresher on vSAN design decisions to keep my memory fresh.

Sizing for capacity, maintenance and availability 

1. The minimum configuration required for Virtual SAN is 3 ESXi hosts, or two hosts in conjunction with an external witness node. However, this smallest environment has important restrictions.

2. In Virtual SAN, if there is a failure, an attempt is made to rebuild any virtual machine components from the failed device or host on the remaining hosts in the cluster. In a 3-node cluster, if one node fails, there is nowhere to rebuild the failed components. The same principle holds for a host that is placed in maintenance mode.

3. One of the maintenance mode options is to evacuate all the data from the host. However, this will only be possible if there are 4 or more nodes in the cluster, and the cluster has enough spare capacity.

4. One additional consideration is the size of the capacity layer. Since virtual machines deployed on Virtual SAN are policy driven, and one of those policy settings (NumberOfFailuresToTolerate) will make a mirror copy of the virtual machine data, one needs to consider how much capacity is required to tolerate one or more failures.

Design decision: 4 nodes or more provide more availability options than 3 node configurations. Ensure there is enough storage capacity to meet the availability requirements and to allow for a rebuild of the components after a failure.
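The capacity side of that design decision can be sketched with a little arithmetic. This is a simplified estimate, assuming RAID-1 mirroring (each failure to tolerate adds a full copy) and the commonly recommended ~30% slack space kept free for rebuilds and rebalancing; the exact slack figure is an assumption you should check against your own design.

```python
def raw_capacity_needed(vm_usable_gb: float, ftt: int, slack: float = 0.30) -> float:
    """Estimate raw vSAN capacity needed for mirrored (RAID-1) objects.

    Each failure to tolerate adds one full mirror copy (ftt + 1 replicas),
    and a slack fraction is held back for rebuilds after a failure.
    """
    replicas = ftt + 1
    return vm_usable_gb * replicas / (1 - slack)

# 1 TB of usable VM data with FTT=1 and 30% slack
print(f"{raw_capacity_needed(1000, ftt=1):.0f} GB raw capacity")
```

With FTT=1, 1 TB of VM data already consumes roughly 2.9 TB of raw capacity once the mirror copy and rebuild headroom are counted, which is why the spare-capacity note in the design decision matters.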