Category Archives: Virtualization

Everything about Virtualization

vSphere Web Client Architecture

If you ever wanted to know how the vSphere Web Client works, but without doing a developer-level deep dive, remember the following picture.

The Inventory Service runs as a separate service in vCenter 5.0, and in vSphere 5.1 it comes as a separate module that can be installed on a completely separate server. As mentioned above, the Inventory Service obtains optimized data from vCenter: only the data the user is currently viewing in the web browser is requested.

This data is fed to the application server, which runs Java/Virgo/Spring. The Flex-based vSphere Web Client gets its information from the application server.
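
If you want to picture the "only what the user sees" optimization, it is essentially server-side paging. A toy Python sketch of the idea (all names and numbers here are made up for illustration, not VMware's actual API):

```python
# Pretend inventory held by the Inventory Service (hypothetical data).
INVENTORY = [f"vm-{i:04d}" for i in range(5000)]

def query_visible(offset: int, page_size: int) -> list:
    """Return only the rows the browser viewport is currently showing,
    instead of shipping all 5000 objects to the client."""
    return INVENTORY[offset:offset + page_size]

# Scrolling the grid just issues another small query:
first_page = query_visible(0, 25)
```

Each scroll or navigation action triggers another small query like this, which is why browsing stays responsive even against a large inventory.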

Hope this was a little helpful. Feel free to discuss or correct me 🙂

ESXi 5.1 Supports SR-IOV. But What is SR-IOV?

I was going through all the exciting things that ESXi 5.1 supports. But one thing caught my eye and it was something I had never heard of.

vQuicky – For the Impatient Like me 🙂

> SR-IOV stands for Single-Root I/O Virtualization and is now supported in ESXi 5.1

> SR-IOV is different from VMDirectPath in that it allows multiple VMs to share the same NIC and get higher throughput, as they bypass the hypervisor layer

> An SR-IOV supported PCIe card can be presented to many virtual machines directly, without going through the hypervisor layer

> SR-IOV runs on the concept of physical functions and virtual functions. PFs have configuration capability on the PCIe device while VFs do not. VFs are attached to the virtual machines and are of the same device type as the PF/PCIe device.

> The PCI-SIG maintains the SR-IOV specification and says that one SR-IOV supported card can have as many as 256 VFs per PF in theory.

> SR-IOV needs to be supported in the BIOS, the OS, and the hypervisor for it to work correctly.

> SR-IOV can cause issues with physical switches, as they do not know about the VFs. Inter-VF switching is another functionality that I believe is available, which allows traffic to flow directly between VFs.

> You cannot have a VF of a different type from that of the PF or the PCIe device.

> If you have an SR-IOV supported HBA, then you will need NPIV and N_Port IDs to manage WWNs on the virtual HBAs.

> MR-IOV is multiple systems using VFs and sharing one PCIe device.

inDepth

So what is Single-Root I/O Virtualization, or SR-IOV for short? Here goes.

Single-Root I/O Virtualization is a technology, or in fact a specification, that allows a single physical PCIe device to appear as multiple PCIe devices. For instance, if you had a single PCIe network card, with SR-IOV you can present this card to multiple virtual machines, and each virtual machine will see it as if it were its own dedicated physical PCIe card. This is different from VMDirectPath: SR-IOV allows hypervisor bypass by letting each virtual machine attach a virtual function to itself while still sharing the same network card. More about virtual functions below.

SR-IOV works by introducing the concept of Physical Functions and Virtual Functions, PF and VF for short. Physical functions are full-blown PCIe functions, while virtual functions are lightweight and are associated with a virtual machine. Once a virtual function is assigned to a virtual machine, traffic flows directly to it, bypassing the virtualization layer for higher throughput. Basically, PFs have full configuration capability on the PCIe card: they are full-featured PCIe functions and can make configuration changes to the card. VFs lack that configuration capability and can only move data in and out.

SR-IOV requires support in the BIOS as well as in the operating system and/or the hypervisor. Since VFs can't be treated as full PCIe devices because they lack configuration capability, the OS needs extra drivers to work with them.
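
On a Linux host, the OS side of this is visible in sysfs: the PF driver exposes an `sriov_numvfs` knob that you write the desired VF count into. A minimal Python sketch (assumes a Linux box with an SR-IOV capable NIC; writing the knob needs root, and the driver may cap the count below what you ask for):

```python
from pathlib import Path

VFS_PER_PF_MAX = 256  # theoretical per-PF maximum from the PCI-SIG SR-IOV spec

def sriov_numvfs_path(iface: str) -> Path:
    """sysfs knob that controls how many VFs the PF behind `iface` exposes."""
    return Path(f"/sys/class/net/{iface}/device/sriov_numvfs")

def theoretical_max_vfs(num_pfs: int) -> int:
    """A quad-port card has 4 PFs, so in theory 4 * 256 = 1024 VFs."""
    return num_pfs * VFS_PER_PF_MAX

def enable_vfs(iface: str, count: int) -> None:
    """Ask the PF driver to create `count` VFs (hardware may allow fewer)."""
    sriov_numvfs_path(iface).write_text(str(count))
```

In practice you would call something like `enable_vfs("eth0", 8)` and then see the VFs show up as extra PCI devices in `lspci`.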

The PCI-SIG (the group that maintains the specification) says that one PF can have as many as 256 VFs. So a quad-port SR-IOV PCIe card can have 1024 VFs in theory. Remember that VFs rely on the physical hardware of the card itself, so in practice the number of VFs may be lower. For SR-IOV Fibre Channel HBAs, you will have logical HBAs sharing a physical HBA, and you will need NPIV enabled so you can manage the WWNs and N_Port IDs on a single HBA.

One thing you have to be careful about is that physical switches will not be aware of the VF ports, which can cause some issues. I read that SR-IOV allows some functionality where VF switching is possible, meaning you can send traffic between VFs without going through a physical switch. You also cannot have a VF of a different type than the PF.

I picked up some information from the comments on Scott Lowe's blog. The servers below support SR-IOV. Apparently there is also MR-IOV (Multi-Root I/O Virtualization), where multiple systems share PCIe VFs. I heard it is terribly complicated, but Googling did not turn up much info.

Dell: PowerEdge 11G Gen II Servers: T410, R410, R510, R610, T610, R710, T710 and R910.
Further information is available at http://en.community.dell.com/techcenter/os-applications/w/wiki/3458.microsoft-windows-server-8.aspx

Fujitsu: PRIMERGY RX900 S2 using BIOS version 01.05. http://www.fujitsu.com/fts/products/computing/servers/primergy/rack/rx900/

HP: ProLiant DL580 G7 and ProLiant DL585 G7. Further information is available at http://www.hp.com/go/win8server

Comment if you liked it or if I said something wrong 🙂

Unable to Connect to the vCenter Inventory Service

My vCenter Web Client popped up an error that said “Unable to connect to the vCenter Inventory Service: https://ip:10443”.

 

Remember, in vCenter 5.0 the Inventory Service runs as a separate service, not as part of the vCenter service. The Inventory Service serves the Web Client: it manages the inventory objects and property queries that the Web Client issues while a user is browsing. The Web Client stays efficient by requesting only the data the user is currently seeing on screen, which improves user experience and navigation.

In vCenter 5.1, the Inventory Service is a completely separate component from vCenter and can be installed on a separate box entirely.

In my case, as expected, the service was off; I started it on the vCenter server and the issue was resolved.
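
A quick way to check whether the service is even listening before digging further is a plain TCP probe against port 10443. This is just a connectivity sketch, not a real health check:

```python
import socket

def inventory_service_reachable(host: str, port: int = 10443,
                                timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the Inventory Service port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False
```

If this returns False when run against the vCenter server, check whether the Inventory Service is actually running, as it was (not) in my case.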

VM Replication in ESXi 5.1

rjapproves QuickY (For the impatient like me!)

  • VMware introduced virtual machine replication in today’s ESXi 5.1 announcement.
  • Virtual machine replication replicates your primary virtual machine to another datacenter so you can quickly bring up your virtual machines in a disaster event at the primary site.
  • A virtual machine is copied to the other site: a full copy first, followed by changed-block copies.
  • Intelligent copy, where application and/or database consistency is maintained using the Microsoft Volume Shadow Copy Service (VSS)
  • Replication is a free product and is added as a feature
  • Replication runs off an agent that comes with each hypervisor install, plus a vSphere Replication appliance that runs per vCenter. As of now, the maximum is 10 appliances per vCenter, supporting 500 virtual machine replications per site.
  • Replication can be done on a per virtual machine and per hard disk basis.
  • The target virtual machine will not power on if the primary (source) virtual machine is pingable by the vCenter server and/or is still powered on.
  • The recovered VM, once powered on, will have to be manually connected to the network via the vCenter web console (a one-click action).

InDepth

This is exciting: VMware finally takes up Microsoft's Hyper-V challenge and offers replication as a free product add-on for all of us to enjoy.

So what is VMware replication really all about? Think of it as having a clone of your virtual machine on another hypervisor, which may or may not be at the same site. Ideally, replication will occur from one physical site to another. The catch is that this clone is constantly being updated: any changes at the source site are transferred to the target site at the replication interval you set.

The hypervisor inherently has an agent, and there is a vSphere Replication appliance per vCenter whose job is to receive replication data and keep track of statuses. A vCenter can have up to 10 replication appliances. I am not clear on the maximum number of virtual machines that can be replicated, but it seems there is a cap of 500 virtual machines per vCenter.

The way it works: in the web-based vCenter portal, you simply enable replication with a click and choose a virtual machine to replicate to a hypervisor in another datacenter. You get to choose how often replication runs, which is basically an RPO for your virtual machine. VMware allows an RPO as low as 15 minutes and as high as 24 hours. You can also set the destination virtual machine's disks to thin, as opposed to the thick disks at the primary site, which saves storage space on the target site.

Once that is set, you are all done; the initial replication will be a complete copy of your virtual machine over the wire. If you have a huge virtual machine, you can choose to pre-seed it to cut down on replication time: copy or clone it to USB storage or FTP, then pre-seed it at the destination. Once the pre-seed is done, only changed blocks are copied. Changed block tracking, I believe, is enabled automatically, and only blocks that have changed are copied over. Remember that the agent is responsible for sending the changed blocks, which are received by the replication appliance on the other end.
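
The changed-block idea itself is simple. Here is a hash-based sketch of it in Python; note that the real tracking happens inside the hypervisor's storage layer rather than by hashing, and the 4 KB block size is purely for illustration:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative only; not VMware's actual granularity

def block_hashes(data: bytes) -> list:
    """Hash a disk image block by block."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes) -> list:
    """Indices of blocks that differ; only these need to cross the wire."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]
```

A single changed byte means resending only its 4 KB block, not the whole disk, which is why replication after the initial seed is so much cheaper.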

 

Another cool thing you can do is pick exactly which disks in a virtual machine to replicate and which to skip. For instance, you may have a disk designated as a swap disk for a virtual machine and choose not to replicate it.

To fail over the virtual machine, you simply initiate it from the vCenter Web Client. Remember, however, that failover will not work (or will not be allowed) if your primary VM is still powered on and/or vCenter can ping it. When the target virtual machine comes up, it will be disconnected from the networks. The idea is for an administrator to look at the target and then connect it to the network, rather than have it blindly power on; this prevents accidental power-ons that could result in network conflicts.
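
The failover guard boils down to a simple rule. As a sketch (this is my reading of the behavior, not VMware code):

```python
def allow_failover(source_powered_on: bool, source_pingable: bool) -> bool:
    """Failover is blocked while the source VM is still up or reachable,
    so you can't accidentally end up with two live copies of the same VM."""
    return not (source_powered_on or source_pingable)
```

Only when the source is both powered off and unreachable does the target get the green light, and even then it comes up with its network disconnected.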

I am quite sure that VMware is using the Virtual Disk Development Kit (VDDK) to make this all work. The host agents that come loaded with the ESXi 5.1 install use the VDDK to attach/detach disks and ship data over the wire. There are no known security issues as of today; replication is pretty secure.

I don’t have specific details yet but I will add more as we play along.

Do comment if you have any more information or need to correct/add info.

VMware Announces no more vRAM Entitlements!

Let's not even debate the whole vRAM pricing structure, but VMware at VMworld just announced (a minute ago) that there are no more vRAM license entitlements. No vRAM, no per-VM licensing model; only the per-socket model, which makes a lot of things easier. This was received with a lot of applause!

My First OpenStack Alamo Installation

This week, after I was able to create the containers, I had my first taste of Alamo deployed in my home environment. I will write down the step-by-step installation process in the next blog post. The Alamo installation is very easy and comparable to ESX, plus a few extra options for passwords and networks.

Below is a screenshot of how the console looks. It is pretty similar to VMware's ESX(i).
Also below is a really interesting article about cloud computing, if you are trying to understand what it really and truly means. It's a 2009 article but gives you a decent idea of what it's about.

Deploying OpenStack Alamo in a Nested ESX(i) box to test

So it's exciting news that Rackspace has gone Open Cloud. However, very few of us have the luxury of running OpenStack on its own dedicated controller/compute nodes, although an all-in-one configuration can be run. For instance, in my home lab I am now running it as a nested hypervisor, because that allows me ample flexibility to get to know it.

If you are running VMware ESX(i), then you are in luck, as the doc lists all the changes you will need to make to the VM container in order to get OpenStack Alamo to run. Cody's instructions on how to get the container ready helped greatly.

However, I have the OVAs here so you can download them. These containers, one for the controller and one for the compute node, have been preset with all the changes necessary, so your OpenStack Alamo installation will boot up with no issues. They are only 78KB each because they are empty containers 🙂 So clone them into multiple compute nodes as needed 🙂

Remember to change the CPU/RAM/disk sizes as necessary. I marked the disks as thin, but if you have tons of disk to play with you may delete and recreate them. These are empty containers, and you can boot them from the Alamo ISO or CD-ROM.

Controller-Node – Download Here (78KB)

Compute-Node – Download Here (78KB)

Let me know if you run into problems!

How big is a snapshot by default?


So, I have been in the dark. (I attached a random pic not really of a snapshot :D)

I was under the impression that when a snapshot is created in VMware, it will only be a few megabytes in size. Well, I was wrong!
When a snapshot is created with memory, the snapshot state file (<vmname>-Snapshot<###>.vmsn) stores the running state of the virtual machine, aka its RAM. So this file will be roughly equal to the size of the RAM configured for the virtual machine.
So if a virtual machine has 16GB of RAM, then the .vmsn file created for the snapshot will be around 16GB!
Needless to say, the delta disk files also grow as the virtual machine writes to disk after the snapshot, so the overall snapshot footprint keeps growing.
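
So a rough back-of-the-envelope for the snapshot footprint looks like this (my own estimate, not a VMware formula):

```python
def snapshot_footprint_gb(ram_gb: float, changed_disk_gb: float,
                          with_memory: bool = True) -> float:
    """Memory state (~= configured RAM, if the snapshot includes memory)
    plus however much the delta disk has grown since the snapshot."""
    return (ram_gb if with_memory else 0.0) + changed_disk_gb

# A 16GB-RAM VM with 2GB of post-snapshot disk churn: roughly 18GB on disk.
estimate = snapshot_footprint_gb(16, 2)
```

Skip the memory state (`with_memory=False`) and you are back down to just the delta disk growth, which is why memory-less snapshots start out so small.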

Welcome to rjapproves!

Hello there!

 

Thanks for stopping by, I will get started on posting stuff soon!

 

RJ