
VMworld US Day 1

Now that the dust has started to settle on day 1 of VMworld US 2013, let's have a look at what was announced, at a few major improvements/fixes in vSphere/vCloud 5.5 that I felt were missed from the keynote, and at the other important releases coming from the conference. *disclaimer* I am not at VMworld US, so this is my take from across the Atlantic.

The day started with the keynote from VMware CEO Pat Gelsinger. I'm not going to give a minute-by-minute commentary on it, as the blog postings I mention below cover everything you need to know, and you can watch the keynote for yourself. Scott Lowe has also done a brilliant live blog of the keynote here.

I was fortunate enough to again be invited to an early access blogger program by VMware almost two months ago, covering all the announcements that were due to come out at VMworld. As a consultant, it has been really hard not to mention it to customers, especially the changes/rebuild of SSO. I had a few blog postings in the works on the announcements but felt I could not do them justice, so I left it to better people, and I think I was right to do so: Chris Wahl has done an amazing nine-part series on all the announcements, which gives a great overview of all the new features and changes and would have put mine to shame:

As I mentioned, one of the big changes in vSphere 5.5 that I felt should have been covered in the keynote, and would probably have got a loud cheer from the crowd, is the massive change to SSO. The SSO service has been almost totally rebuilt, and when I was on the early access blogger webinars everyone breathed a sigh of relief, as SSO in vSphere 5.1 was not a simple thing to install, especially since it was recommended to break up all the individual components. This has now changed: it is recommended that they all be kept on one machine. Below is the now-recommended layout for the vCenter Server design.


Kendrick Coleman also gave a great overview of it from 30,000 feet here. For me the real improvement is how simple the steps to set up SSO are now:

1. Accept License agreement (EULA)

2. Prerequisite check summary

3. Edit default port number 7444 (if necessary)

4. Select Deployment placement

5. Provide Administrator@vSphere.local password

6. Provide a site name or select a previous site name

7. Edit destination directory (if necessary)

8. Summary

9. Installation Complete

I’m one of the hosts of the EMEA vBrownBag, and all of the US vBrownBag team plus a few of the APAC vBrownBag team are out at VMworld US doing the very popular Tech Talks. The Tech Talks are 10-to-15-minute presentations by members of the VMware community on topics of their choice, almost like a mini #vBrownBag. They are being streamed live by the vBrownBag guys and recorded for people like me to watch when we can. The schedule for the Tech Talks can be found here. Make sure you watch the stream live and give the guys the support they deserve, as all of these presentations are from the community.

Talking of the vBrownBag crew, one of the main culprits, Nick Marshall, has released, alongside Scott Lowe, Forbes Guthrie, Matt Liebowitz and Josh Atwell (another vBrownBag host), the next instalment of the Mastering VMware vSphere book, covering vSphere 5.5. A massive congratulations to Nick for being asked and for doing such an awesome job on the project whilst still helping out with vBrownBag. Nick has detailed the announcement on his blog here.

One of the biggest announcements from the keynote was the release of VMware NSX. As Forbes Guthrie said, "I'm waiting for NSXi", but until that day, below are some of the highlights of the new product, and I would highly encourage you to read Chris Wahl's detailed coverage of it linked above.

NSX Highlights:

  • VMware NSX is a next-generation network virtualization solution
  • Provides the key functions of network virtualization: decouple, reproduce, and automate
  • NSX will support any hypervisor, any cloud management platform (CMP), and any network hardware
    • vSphere, KVM, and Xen are currently supported
    • CMPs currently supported are OpenStack, CloudStack, and vCAC/VCD
  • NSX optimized for vSphere leverages the platform’s enhanced functionality

High-level View of VMware NSX Architecture:


VMware NSX Controllers:

  • Designed with a distributed, scale-out architecture.
    • Minimum of 3 controllers for an NSX controller cluster.
    • NSX optimized for vSphere scales to 5 controllers.
  • NSX controllers run a common code base in different form factors.
    • Controllers run as infrastructure/service VMs in NSX optimized for vSphere.
    • Controllers run as physical appliances in multi-hypervisor environments.
  • Controller functions optimized in each delivery option.
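
The odd controller counts above (a minimum of 3, scaling to 5) are characteristic of scale-out control clusters that make decisions by majority. VMware hasn't published the clustering internals in these highlights, so the sketch below only illustrates the general quorum arithmetic behind odd-sized clusters; the function names are my own, not anything NSX-specific.

```python
# Illustrative quorum arithmetic for an odd-sized controller cluster.
# This is a general scale-out pattern, not NSX code.

def quorum_size(cluster_size: int) -> int:
    """Smallest majority of the cluster that can make decisions."""
    return cluster_size // 2 + 1

def failures_tolerated(cluster_size: int) -> int:
    """Controllers that can fail while a majority still remains."""
    return cluster_size - quorum_size(cluster_size)

# A 3-node cluster tolerates 1 failure; a 5-node cluster tolerates 2.
```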

VMware NSX Virtual Switches:

  • NSX uses programmable virtual switches on the hypervisors
  • In NSX optimized for vSphere, NSX leverages:
    • the vSphere Distributed Switch (VDS)
    • the UW (Userworld) Agent for communications with NSX controllers
  • In multi-hypervisor environments, NSX uses:
    • Open vSwitch for KVM and Xen
    • NSX vSwitch (an in-kernel virtual switch) for ESXi

VMware NSX Gateways:

  • The gateways are the “on ramp/off ramp” into or out of logical networks
  • Both L2 (bridging) and L3 (routing) gateway functionality available
  • Basic functionality the same regardless of delivery option
    • NSX optimized for vSphere leverages NSX Edge (derived from vCNS Edge)
    • In multi-hypervisor environments, gateways are physical appliances leveraging a scale-out architecture

VMware have also posted the What's New PDF for vSphere 5.5, which gives you a very good overview of all the new features and services, here.

VMware have released a new certification, the VMware Certified Associate (VCA), aimed at people looking to get into the IT industry. Unlike the VCP, there is no required training, but free eLearning courses are available for people to skill up for the exam. These look like a good starting point for anyone wanting to learn the basics of virtualization and, in my opinion, would be great for high school students thinking of going into IT and virtualization after school.

Well, that is what caught my attention from day 1 of VMworld US. I'm looking forward to more information coming out and to getting my hands on all the new vSphere 5.5 tools.



VMware vSphere 5.5 Latency-Sensitivity Feature

Today at VMworld US, vSphere 5.5 was announced in the keynote, and one of its new features is the latency-sensitivity feature. The feature is applied per VM, so a vSphere host can run a mix of normal VMs and VMs with the feature enabled. To enable latency sensitivity for a given VM from the UI, open the Advanced settings under the VM Options tab in the VM's Edit Settings window and select High for the Latency Sensitivity option, as shown below:



What the Latency-Sensitivity Feature Does

With the latency-sensitivity feature enabled, the CPU scheduler determines whether exclusive access to PCPUs can be given, considering various factors including whether the PCPUs are over-subscribed. Reserving 100% of VCPU time increases the chances of the VM getting exclusive PCPU access. With exclusive PCPU access, each VCPU entirely owns a specific PCPU and no other VCPUs are allowed to run on it. This achieves nearly zero ready time, improving response time and jitter under CPU contention. Although just reserving 100% of CPU time (without latency sensitivity enabled) can yield a similar effect over a relatively large time scale, the VM may still have to wait over short time spans, possibly adding jitter. Note that the last-level cache (LLC) is still shared with other VMs residing on the same socket, even with exclusive PCPU access.


The latency-sensitivity feature requires the user to reserve the VM's memory to ensure that the memory size requested by the VM is always available. Without a memory reservation, vSphere may reclaim memory from the VM when host free memory becomes scarce. Memory reclamation techniques such as ballooning and hypervisor swapping can significantly degrade VM performance when the VM accesses memory that has been swapped out to disk. A memory reservation prevents such performance degradation.
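
Taken together, the feature expects full CPU and memory reservations alongside the high sensitivity setting. Here is a minimal sketch of that arithmetic in plain Python; this is not the vSphere API, and the function and field names are illustrative only:

```python
def latency_sensitive_spec(num_vcpus: int, core_mhz: int, mem_mb: int) -> dict:
    """Illustrative per-VM settings for the latency-sensitivity feature:
    sensitivity set to high, 100% of VCPU time reserved (one full core's
    clock rate per VCPU), and the VM's entire memory reserved so it cannot
    be ballooned or swapped out by the hypervisor."""
    return {
        "latencySensitivity": "high",
        "cpuReservationMHz": num_vcpus * core_mhz,  # 100% of VCPU time
        "memoryReservationMB": mem_mb,              # full configured memory
    }

# e.g. a 2-VCPU VM on 2600 MHz cores with 8 GB RAM:
# latency_sensitive_spec(2, 2600, 8192)
```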


Bypassing Virtualization Layers:

Once exclusive access to PCPUs is obtained, the feature allows the VCPUs to bypass the VMkernel's CPU scheduling layer and halt directly in the VMM, since there are no other contexts that need to be scheduled. This avoids the cost of running the CPU scheduler code and the cost of switching between the VMkernel and the VMM, leading to much faster VCPU halt/wake-up operations. VCPUs still experience switches between direct guest code execution and the VMM, but this operation is relatively cheap with the hardware-assisted virtualization technologies provided by recent CPU architectures.


Tuning Virtualization Layers:

When the VMXNET3 para-virtualized device is used for the VM's VNICs, VNIC interrupt coalescing and LRO support for those VNICs are automatically disabled to reduce response time and jitter. Although such tuning helps improve latency, it can have negative side effects in certain scenarios. If the hardware supports it and the VM doesn't need virtualization features such as vMotion, NetIOC, and Fault Tolerance, we recommend using the pass-through mechanism single-root I/O virtualization (SR-IOV) together with the latency-sensitivity feature.
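
For reference, interrupt coalescing can also be disabled manually on a per-VNIC basis via a VMX advanced setting; setting latency sensitivity to high simply applies this kind of tuning for you. A sketch, assuming ethernet0 is the VM's VMXNET3 VNIC:

```
# .vmx advanced setting (illustrative; latency sensitivity set to high
# applies equivalent tuning automatically)
ethernet0.coalescingScheme = "disabled"
```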