Saturday, January 30, 2010

A Look at Some VMware Infrastructure Architectural Advantages

The summary below explains the elements of the ESX architecture that I believe set it apart from Hyper-V and Xen, and the reasons behind some of VMware's design decisions. I thought it would be interesting material for the readers of this blog, so take a look and tell us what you think...

VMware Infrastructure - Architecture Advantages

VMware Infrastructure is a full data center infrastructure virtualization suite that provides comprehensive virtualization, management, resource optimization, application availability and operational automation capabilities in a fully integrated offering. VMware Infrastructure virtualizes the entire IT infrastructure, including servers, storage and networks and aggregates these heterogeneous resources into a simple and uniform set of computing resources in the virtual environment. With VMware Infrastructure, IT organizations can manage resources as a shared utility and dynamically provision them to different business units and projects without worrying about the underlying hardware differences and limitations.

Complete Virtual Infrastructure

[Figure: The three layers of the VMware Infrastructure stack]

As shown in the preceding figure, VMware Infrastructure can be represented in three layers:

1. The base layer or virtualization platform is VMware ESX – the highest performing, production-proven hypervisor on the market. Tens of thousands of customers deploy VMware ESX (over 85 percent in production environments) for a wide variety of workloads.

2. VMware Infrastructure’s support for pooling x86 CPU, memory, network and storage resources is the key to its advanced data center platform features. VMware Infrastructure resource pools and clusters aggregate physical resources and present them uniformly to virtual machines for dynamic load balancing, high availability and mobility of virtual machines between different physical hardware with no disruption or downtime.

3. Above the virtual infrastructure layers sits end-to-end application and infrastructure management from VMware that automates specific IT processes, ensures disaster recovery, supports virtual desktops and manages the entire software lifecycle.

VMware ESXi – The Most Advanced Hypervisor

VMware ESXi 3.5 is the latest generation of the bare-metal x86 hypervisor that VMware pioneered and introduced over seven years ago. The industry’s thinnest hypervisor, ESXi is built on the same technology as VMware ESX, so it is powerful enough to run even the most resource-intensive applications; however, it is only 32 MB in size and runs independently of a general-purpose OS.

The following table shows just how much smaller the VMware ESXi installed footprint is compared to other hypervisors. These are results from installing each product and measuring the disk space consumed, less memory swap files.

Comparative Hypervisor Sizes (including management OS)

Hypervisor                                      Installed footprint
VMware ESX 3.5                                  2 GB
VMware ESXi                                     32 MB
Microsoft Hyper-V with Windows Server 2008      10 GB
Microsoft Hyper-V with Windows Server Core      2.6 GB
Citrix XenServer v4                             1.8 GB

As the numbers show, ESXi has a far smaller footprint than competing hypervisors from vendors that like to label ESX as "monolithic."

The ESXi architecture contrasts sharply with the designs of Microsoft Hyper-V and Xen, which both rely on a general-purpose management OS – Windows Server 2008 for Hyper-V and Linux for Xen – that handles all management and I/O for the virtual machines.

[Figure: The indirect driver architectures of Microsoft Hyper-V and Xen]

The VMware ESX direct driver architecture avoids reliance on a heavyweight Windows or Linux management partition OS.

Advantages of the ESX Direct Driver Architecture

VMware's competition negatively portrays VMware ESX Server as a “monolithic” hypervisor, but VMware's experience and testing prove it to be the better design.

The architecture for Citrix XenServer and Microsoft Hyper-V puts standard device drivers in their management partitions. Those vendors claim this structure simplifies their designs compared to the VMware architecture, which locates device drivers in the hypervisor. However, because Xen and Hyper-V virtual machine operations rely on the management partition as well as the hypervisor, any crash or exploit of the management partition affects both the physical machine and all its virtual machines. VMware ESXi has done away with all reliance on a general-purpose management OS, making it far more resistant to typical OS security and reliability issues. Additionally, our seven years of experience with enterprise customers has demonstrated the impressive reliability of our architecture. Many VMware ESX customers have achieved uptimes of more than 1,000 days without reboots.

[Figure: Customer screenshot of ESX uptime]

One of VMware's customers sent in this screenshot showing four years of continuous ESX uptime.

The VMware direct driver model scales better than the indirect driver models in the Xen and Hyper-V hypervisors.

The VMware ESX direct driver model puts certified and hardened I/O drivers directly in the VMware ESX hypervisor. These drivers must pass rigorous testing and optimization steps, performed jointly by VMware and the hardware vendors, before they are certified for use with VMware ESX. With the drivers in the hypervisor, VMware ESX can give them the special treatment, in the form of CPU scheduling and memory resources, that they need to process I/O loads from multiple virtual machines. The Xen and Microsoft architectures instead route all virtual machine I/O to generic drivers installed in the Linux or Windows OS in the hypervisor’s management partition. Because those generic drivers were never optimized for multiple virtual machine workloads, they are easily overtaxed by the activity of several virtual machines – exactly the situation a true bare-metal hypervisor, such as ESXi, avoids.

VMware investigated the indirect driver model, now used by Xen and Hyper-V, in early versions of VMware ESX and quickly found that the direct driver model provides much better scalability and performance as the number of virtual machines on a host increases.

[Figure: Netperf I/O throughput scaling, VMware ESX vs. XenEnterprise]

The scalability benefits of the VMware ESX direct driver model became clearly apparent when we tested the I/O throughput of multiple virtual machines against XenEnterprise, as shown in the preceding chart from a published VMware comparison paper. Xen, which uses the indirect driver model, hits a severe I/O bottleneck with just three concurrent virtual machines, while VMware ESX continues to scale I/O throughput as virtual machines are added. Our customers who have compared VMware ESX with the competition regularly confirm this finding. Similar scaling issues are likely with Hyper-V, because it uses the same indirect driver model.
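
To make the scaling argument concrete, here is a toy Python model of the two designs. It is purely illustrative – the capacities and per-VM demand figures below are invented, and real behavior depends on hardware and workload – but it captures why a fixed-capacity management partition becomes the bottleneck while a direct driver model scales until the physical device saturates:

    # Toy model of aggregate I/O throughput as virtual machines are added.
    # Every figure below is invented for illustration.
    PER_VM_DEMAND = 900          # Mbit/s each VM tries to push
    DEVICE_LIMIT = 9000          # physical NIC capacity
    MGMT_PARTITION_LIMIT = 2500  # what one generic management-OS driver stack can service

    def direct_model(vm_count):
        # Drivers live in the hypervisor: only the device itself is the ceiling.
        return min(vm_count * PER_VM_DEMAND, DEVICE_LIMIT)

    def indirect_model(vm_count):
        # Every VM's I/O is relayed through the management partition first.
        return min(vm_count * PER_VM_DEMAND, MGMT_PARTITION_LIMIT, DEVICE_LIMIT)

    for vms in range(1, 9):
        print(f"{vms} VMs: direct={direct_model(vms):5d}  indirect={indirect_model(vms):5d}")

With these made-up numbers, the indirect column plateaus at the management partition's 2,500 Mbit/s ceiling once a third virtual machine is added, while the direct column keeps climbing toward the device limit – the same shape as the chart above.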

Better Memory Management with VMware ESX

In most virtualization scenarios, system memory is the limiting factor controlling the number of virtual machines that can be consolidated onto a single server. By more intelligently managing virtual machine memory use, VMware ESX can support more virtual machines on the same hardware than any other x86 hypervisor. Of all x86 bare-metal hypervisors, only VMware ESX supports memory overcommit, which allows the memory allocated to the virtual machines to exceed the physical memory installed on the host. VMware ESX supports memory overcommit with minimal performance impact by combining several exclusive technologies.

Memory Page Sharing

Content-based transparent memory page sharing conserves memory across virtual machines with similar guest OSs by seeking out memory pages that are identical across the multiple virtual machines and consolidating them so they are stored only once, and shared. Depending on the similarity of OSs and workloads running on a VMware ESX host, transparent page sharing can typically save anywhere from 5 to 30 percent of the server’s total memory by consolidating identical memory pages.

[Figure: Transparent Page Sharing]
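
For readers who like to see the mechanism spelled out, here is a minimal Python sketch of content-based page sharing. It is not VMware's implementation – ESX works on hardware pages, confirms hash hits with a full byte comparison, and marks shared pages copy-on-write – but it shows the core idea: hash each page's contents and back identical pages with a single copy.

    import hashlib
    from collections import defaultdict

    PAGE_SIZE = 4096

    def share_pages(vm_pages):
        """vm_pages maps a VM name to its list of PAGE_SIZE-byte pages."""
        backing = {}                   # content digest -> single backing page
        refs = defaultdict(list)       # content digest -> (vm, page index) references
        for vm, pages in vm_pages.items():
            for i, page in enumerate(pages):
                digest = hashlib.sha1(page).digest()
                # Real ESX verifies a hash match with a full byte comparison and
                # marks the surviving page copy-on-write before remapping.
                backing.setdefault(digest, page)
                refs[digest].append((vm, i))
        total = sum(len(p) for p in vm_pages.values())
        print(f"{total} guest pages backed by {len(backing)} machine pages")
        return backing, refs

    zero_page = bytes(PAGE_SIZE)
    os_page = b"\x90" * PAGE_SIZE      # stand-in for identical guest OS code pages
    share_pages({
        "vm1": [zero_page, os_page],
        "vm2": [zero_page, os_page],
        "vm3": [zero_page, b"x" * PAGE_SIZE],
    })

Running the sketch reports six guest pages backed by three machine pages: the zero page and the identical "OS" page are each stored once. Because shared pages are copy-on-write, a guest write simply triggers a private copy.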

Memory Ballooning

VMware ESX enables virtual machines to manage their own memory swap prioritization by using memory ballooning to dynamically shift memory from idle virtual machines to active virtual machines. Memory ballooning artificially induces memory pressure within idle virtual machines as needed, forcing them to use their own paging areas and release memory for more active or higher-priority virtual machines.

[Figure: Memory Ballooning]

VMware ESX backs memory ballooning with a pre-configured swap file: if the memory demands from virtual machines exceed the available physical RAM on the host even after ballooning, ESX uses that swap file for temporary storage as a last resort. Memory overcommitment enables great flexibility in sharing physical memory across many virtual machines, so that a subset can benefit from increased allocations of memory when needed.
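
Here is a simplified Python sketch of the ballooning idea. The class, numbers and names are invented for illustration – the real balloon driver is a guest-resident module under hypervisor control – but it shows how inflating the balloon converts an idle guest's memory into machine memory the hypervisor can hand to busier virtual machines:

    class BalloonDriver:
        """Toy stand-in for the guest-resident balloon driver (invented API)."""

        def __init__(self, guest_total_mb):
            self.guest_total_mb = guest_total_mb
            self.inflated_mb = 0

        def inflate(self, target_mb):
            # The guest pins pages it can no longer use itself, pressuring its
            # own memory manager to page out its least-valuable data.
            grant = min(target_mb, self.guest_total_mb - self.inflated_mb)
            self.inflated_mb += grant
            return grant  # machine memory the hypervisor can now reassign

        def deflate(self, release_mb):
            # Pressure has eased: hand memory back to the guest.
            released = min(release_mb, self.inflated_mb)
            self.inflated_mb -= released
            return released

    idle_vm = BalloonDriver(guest_total_mb=4096)
    print(f"reclaimed {idle_vm.inflate(1024)} MB from an idle VM")
    # If ballooning alone cannot satisfy demand, ESX falls back to the
    # pre-configured swap file described above.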

Memory Overcommit Provides Lowest Cost of Ownership

The result of this memory conservation technology in VMware ESX is that most customers can easily operate at a 2:1 memory overcommit ratio with negligible performance impact. VMware's customers commonly achieve much higher ratios. Compared to Xen and Microsoft Hyper-V, which do not permit memory overcommit, VMware Infrastructure customers can typically run twice as many virtual machines on a physical host, greatly reducing their cost of ownership.

[Figure: Cost-per-VM comparison chart]

TCO Benefits of VMware Infrastructure 3 and its better memory management.

The chart above illustrates how a conservative 2:1 memory overcommit ratio results in a lower TCO for even our most feature-complete VMware Infrastructure 3 Enterprise edition, compared to less functional Microsoft and Xen offerings.
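
The arithmetic behind that claim is easy to reproduce. The figures below are invented placeholders – substitute your own server, licensing and support costs – but the point stands: at a 2:1 overcommit ratio the same host runs twice the virtual machines, so the cost per virtual machine is cut in half.

    # Back-of-the-envelope arithmetic for the 2:1 overcommit claim.
    # All figures are invented placeholders; substitute your own costs.
    host_cost = 20000.0   # server hardware + licensing + support (hypothetical)
    host_ram_gb = 32      # physical memory in the host
    vm_ram_gb = 2         # memory configured per virtual machine

    vms_without_overcommit = host_ram_gb // vm_ram_gb   # Hyper-V / Xen style
    vms_with_overcommit = vms_without_overcommit * 2    # 2:1 memory overcommit

    print(f"without overcommit: {vms_without_overcommit} VMs, "
          f"${host_cost / vms_without_overcommit:,.0f} per VM")
    print(f"with 2:1 overcommit: {vms_with_overcommit} VMs, "
          f"${host_cost / vms_with_overcommit:,.0f} per VM")

With these placeholder numbers, cost per virtual machine drops from $1,250 to $625.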

Storage Management Made Easy with VMFS

Virtual machines are completely encapsulated in virtual disk files that are either stored locally on the VMware ESX host or centrally managed using shared SAN, NAS or iSCSI storage. Shared storage allows virtual machines to be migrated easily across pools of hosts, and VMware Infrastructure 3 simplifies the use and management of shared storage with the Virtual Machine File System (VMFS). With VMFS, a resource pool of multiple VMware ESX servers can concurrently access the same files to boot and run virtual machines, effectively virtualizing the shared storage and greatly simplifying its management.

[Figure: VMware VMFS shared-storage architecture]

VMware VMFS supports and virtualizes shared storage.

While conventional file systems allow only one server to have read-write access to the file system at a given time, VMware VMFS is a high-performance cluster file system that allows concurrent read-write access by multiple VMware ESX servers to the same virtual machine storage. VMFS provides the first commercial implementation of a distributed journaling file system for shared access and rapid recovery. VMFS provides on-disk locking to ensure that multiple servers do not power on a virtual machine at the same time. Should a server fail, the on-disk lock for each virtual machine is released so that virtual machines can be restarted on other physical servers.
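
Conceptually, the on-disk locking works like a lease. The Python sketch below is greatly simplified and uses invented details (VMFS's actual lock records and heartbeats live in the file system's metadata, and the lease value here is hypothetical), but it shows why two hosts cannot power on the same virtual machine, and how a stale lock left by a failed host can be broken so the virtual machine restarts elsewhere:

    import time

    LEASE_SECONDS = 15  # hypothetical heartbeat lease, not the real VMFS value

    class OnDiskLock:
        """Toy model of a per-VM on-disk lock record with a heartbeat lease."""

        def __init__(self):
            self.owner = None
            self.heartbeat = 0.0

        def acquire(self, host, now=None):
            now = now if now is not None else time.time()
            stale = self.owner is not None and (now - self.heartbeat) > LEASE_SECONDS
            if self.owner is None or stale or self.owner == host:
                self.owner, self.heartbeat = host, now
                return True
            return False  # a live host still owns this VM's files

    lock = OnDiskLock()
    assert lock.acquire("esx-01")        # esx-01 powers the VM on
    assert not lock.acquire("esx-02")    # concurrent power-on is refused
    lock.heartbeat -= LEASE_SECONDS + 1  # simulate esx-01 failing silently
    assert lock.acquire("esx-02")        # the VM can be restarted elsewhere
    print("lock now held by", lock.owner)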

The VMFS cluster file system enables innovative and unique virtualization-based distributed services. These services include live migration of running virtual machines from one physical server to another, automatic restart of failed virtual machines on a different physical server, and dynamic load balancing of virtual machines across different clustered host servers. As all virtual machines see their storage as local attached SCSI disks, no changes are necessary to virtual machine storage configurations when they are migrated. For cases when direct access to storage by VMs is needed, VMFS raw device mappings give VMware ESX virtual machines the flexibility to use physical storage locations (LUNs) on storage networks for compatibility with array-based services like mirroring and replication.

Products like Xen and Microsoft Hyper-V lack an integrated cluster file system. As a result, storage provisioning is much more complex. For example, to enable independent migration and failover of virtual machines with Microsoft Hyper-V, one storage LUN must be dedicated to each virtual machine. That quickly becomes a storage administration nightmare when new VMs are provisioned. VMware Infrastructure 3 and VMFS enable the storage of multiple virtual machines on a single LUN while preserving the ability to independently migrate or failover any VM.

VMFS gives VMware Infrastructure 3 a distributed systems orientation that distinguishes it from our competition.

VMware Infrastructure 3 is the first virtualization platform that supports pooling the resources of multiple servers to offer a new array of capabilities. The revolutionary DRS and HA services rely on VMFS features to aggregate shared storage, along with the processing and network capacity of multiple hosts, into a single pool or cluster upon which virtual machines are provisioned. VMFS allows multiple hosts to share access to the virtual disk files of a virtual machine for quick VMotion migration and rapid restart while managing distributed access to prevent possible corruption. With Hyper-V, Microsoft is just now rolling out a first-generation hypervisor with a single-node orientation. It lacks distributed system features like true resource pooling, and it relies on conventional clustering for virtual machine mobility and failover.

VirtualCenter – Complete Virtual Infrastructure Management

A VirtualCenter Management Server can centrally manage hundreds of VMware ESX hosts and thousands of virtual machines, delivering operational automation, resource optimization and high availability to IT environments. VirtualCenter provides a single Windows management client for all tasks, called the Virtual Infrastructure Client. With VirtualCenter, administrators can provision, configure, start, stop, delete and relocate virtual machines, and remotely access virtual machine consoles. The VirtualCenter client is also available in a web browser implementation for access from any networked device; the browser version makes giving a user access to a virtual machine as easy as sending a bookmark URL.

[Figure: VMware VirtualCenter architecture]

VMware VirtualCenter centrally manages the entire virtual data center.

VirtualCenter delivers the highest levels of simplicity, efficiency, security and reliability required to manage a virtualized IT environment of any size, with key features including:

  • Centralized management
  • Performance monitoring
  • Operational automation
  • Clustering and pooling of physical server resources
  • Rapid provisioning
  • Secure access control
  • Full SDK support for integrations
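
As a flavor of what SDK-driven automation looks like, here is a hypothetical Python snippet. The client class and method names are invented for illustration – the real VirtualCenter SDK is a SOAP web service with bindings for several languages – but an integration follows the same shape: authenticate to VirtualCenter, query the inventory, then invoke operations on managed objects.

    # Hypothetical sketch only: this class and its methods are invented stand-ins,
    # not the actual VirtualCenter SDK API.

    class VirtualCenterClient:
        def __init__(self, host, user, password):
            # A real client would authenticate against the SDK endpoint here.
            self.host, self.user, self._password = host, user, password

        def list_vms(self):
            # A real client would traverse the VirtualCenter inventory tree.
            return [{"name": "web01", "power_state": "poweredOff"}]

        def power_on(self, vm_name):
            print(f"powering on {vm_name} via VirtualCenter on {self.host}")

    vc = VirtualCenterClient("vc.example.com", "admin", "secret")
    for vm in vc.list_vms():
        if vm["power_state"] == "poweredOff":
            vc.power_on(vm["name"])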

I'll stop there for now. The management, automation and VDI services depicted in the top layer of the figure at the beginning of this post further set us apart from the competition. Services like Update Manager, SRM, Lab Manager and VDM offer amazing capabilities, but we'll save that discussion for some upcoming posts.
