At first glance, the Hyper-V and VMware virtual environments may seem similar, but upon closer inspection, a number of important differences between these two virtualization platforms become apparent. Administering VMware vSphere can initially present challenges for administrators who are familiar with Hyper-V. There are several reasons a business might switch from Hyper-V to VMware. For example, the company could be deploying a multi-hypervisor environment – using Hyper-V for Windows-based virtual machines (VMs) while introducing VMware vSphere for VMs running Linux, FreeBSD, Solaris, etc. This blog post is intended to help Hyper-V administrators understand the differences and similarities between these two major virtualization platforms, making the transition to VMware easier.
Comparing Hypervisor Design
The Hyper-V Role for Servers and Workstations
Hyper-V users can set up the Hyper-V role not only on the Windows Server operating system (OS) but also on desktop editions of Windows, such as Windows 8, 8.1 or 10 (Pro and Enterprise). The installation process is similar for both server and desktop editions. Hyper-V is classified as a type 1 hypervisor, meaning that it runs directly on the hardware. There is also a lighter-weight alternative to installing the Hyper-V role on Windows Server: Microsoft Hyper-V Server, a standalone product that is installed on bare metal.
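For example, the role can be installed from an elevated PowerShell session (a minimal sketch; adjust the restart behavior to your environment):

# On Windows Server, Hyper-V is a server role
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
# On desktop editions (e.g., Windows 10 Pro), Client Hyper-V is an optional feature
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All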
VMware Workstation for Workstations and VMware ESXi for Servers
VMware provides VMware Workstation for installation on desktops running Windows or Linux. VMware Workstation is a type 2 hypervisor, meaning that it is installed on top of an underlying OS. As you can see, both Hyper-V and VMware support installation on users’ PCs. If you want to install a type 1 hypervisor on bare metal, you should use VMware ESXi. VMs running on VMware Workstation can be migrated to ESXi with VMware vCenter Converter.
Management Tools
vCenter as the Analog to SCVMM
System Center Virtual Machine Manager (SCVMM) is Microsoft’s product for centralized management of Hyper-V hosts, clusters, virtual networks and storage. SCVMM is particularly useful for managing complex multi-site and multi-domain environments.
Hyper-V Manager is a common tool that is available by default upon installation of the Hyper-V role. With Hyper-V Manager, you can connect to an available Hyper-V host. You can manage the host, the VMs residing on that host, the virtual devices of those VMs and VM replication. You can also create VM checkpoints, start or stop VMs and connect to VMs in order to interact with the user interface of a guest VM.
Failover Cluster Manager is a tool for managing failover clusters and VMs within a cluster. With this tool, you can create a failover cluster. Failover Cluster Manager is not a substitute for Hyper-V Manager; you still need the latter to manage non-clustered VMs, Hyper-V hosts, virtual switches, etc.
The equivalent of SCVMM in VMware vSphere is vCenter Server, which can perform the tasks that Failover Cluster Manager and Hyper-V Manager handle in Hyper-V environments. With vCenter Server, you can manage almost all components of vSphere – hosts, clusters, virtual networks (including standard vSwitches and distributed vSwitches), datastores, VM virtual devices, etc. vCenter is installed on a virtual machine, and its web interface can then be accessed through a browser, using either the Flash-based vSphere Web Client or the HTML5-based vSphere Client. To do so, simply enter the IP address of your vCenter Server in the address bar of your browser.
If you have a standalone ESXi host, you can similarly access that host through a browser using the VMware Host Client; enter the IP address of your ESXi server in the address bar. Note that you cannot create data centers, configure distributed virtual switches or perform VM migrations when accessing a standalone ESXi host this way.
Below, you can see vCenter v6.5 accessed via a browser with the vSphere Web Client. If you want to access a VM’s user interface, click Launch Web Console.
A new tab opens, and you can interact with the guest OS.
Another way to access the user interface of a VM is to use VMware Workstation. To connect to a VM running on an ESXi host from VMware Workstation, click File > Connect to Server.
Input the IP address of vCenter or ESXi Server, as well as the user name and password required to access that server. Click Connect.
In the left pane of the VMware Workstation interface, you can see a list of the VMs residing on the ESXi host to which you are connected. If you connected to vCenter, you would see objects such as datacenters, ESXi hosts and VMs that are managed by vCenter Server.
PowerShell Vs. PowerCLI
With the Hyper-V platform, the alternative to using a GUI for management of the virtual environment is PowerShell. PowerShell is a command-line shell and scripting language available on Windows-based systems that allows users to manage system components. PowerShell provides automation through combinations of cmdlets and scripts.
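For example, a few of the Hyper-V Manager tasks mentioned above map to simple cmdlets of the built-in Hyper-V PowerShell module (a sketch; the VM name "TestVM" is an example):

# List all VMs on the local Hyper-V host and their state
Get-VM
# Start and then gracefully stop a VM
Start-VM -Name "TestVM"
Stop-VM -Name "TestVM"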
In VMware vSphere, the alternative to a GUI is PowerCLI. PowerCLI is a scripting tool built on PowerShell: it extends PowerShell with a set of more than 600 cmdlets. You can automate installation and configuration tasks, virtual machine management, system patching and more.
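A minimal PowerCLI session might look like this (a sketch; the server address and credentials are examples, and PowerCLI is assumed to be installed from the PowerShell Gallery):

# Install PowerCLI from the PowerShell Gallery (one-time step)
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
# Connect to vCenter or a standalone ESXi host
Connect-VIServer -Server 192.168.1.100 -User "administrator@vsphere.local" -Password 'Pa$$w0rd'
# List all VMs managed by the server you are connected to
Get-VM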
Clusters
VMware HA and DRS Clusters Vs. Hyper-V Failover Cluster With PRO
The Hyper-V virtual environment offers failover cluster functionality. A failover cluster provides automatic failover of VMs from failed hosts to healthy hosts within a cluster: upon host failure, the VMs from the failed host are restarted on other Hyper-V hosts in the cluster. The Performance and Resource Optimization (PRO) feature collects data about resource usage in the virtual environment and generates PRO tips, recommending migration or reconfiguration of VMs for more rational usage of hardware resources. Starting with SCVMM 2012, PRO was replaced by Dynamic Optimization.
There are two types of clusters commonly used in VMware vSphere – High Availability (HA) clusters and Distributed Resource Scheduler (DRS) clusters. An HA cluster provides high availability of VMs and carries out automatic VM failover in case of host failure within the cluster: the VMs that were running on the failed ESXi host are restarted on healthy ESXi hosts. A DRS cluster is used for load balancing. vCenter Server monitors the usage of hardware resources such as CPU and memory for VMs and ESXi hosts within the cluster. If, after analyzing these metrics, DRS decides that some VMs could migrate from one ESXi host to another (e.g., from an overloaded host to an almost idle host with free resources) for more rational resource usage, migration recommendations are generated. These recommendations can be applied manually or automatically, depending on your DRS configuration.
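Both cluster features are enabled on the same vSphere cluster object; with PowerCLI, this can be done in one line (a sketch; the cluster name and automation level are examples):

# Enable HA and DRS (fully automated) on an existing cluster
Get-Cluster -Name "Production" | Set-Cluster -HAEnabled:$true -DrsEnabled:$true -DrsAutomationLevel FullyAutomated -Confirm:$false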
Power Optimization Vs. Distributed Power Management
Power Optimization is a feature of Dynamic Optimization in Hyper-V. Power Optimization allows the migration of VMs from Hyper-V hosts with low workloads to other hosts that have enough resources within a cluster. The unused hosts are then powered off in order to save energy. The powered-off hosts are powered back on as soon as their hardware resources are needed for the cluster.
Distributed Power Management (DPM) is a component of the DRS cluster in the VMware vSphere virtual environment. Similarly to Power Optimization, DPM migrates VMs from lightly loaded ESXi hosts to ESXi hosts that have sufficient resources to run the migrated VMs. The freed-up ESXi hosts are then powered off (put into standby) to save power. When the load increases and the cluster needs more hardware resources again, those ESXi hosts are powered back on.
Fault Tolerance in VMware HA Clusters
VMware also provides a useful feature for HA clusters: fault tolerance. If you use a plain HA cluster, in the event of ESXi host failure there is a period of VM downtime between the VM’s failure and the VM being loaded on another healthy host. Using the fault tolerance feature in an HA cluster helps avoid such downtime, because a second VM (a ghost replica), which is an exact copy of the VM protected with fault tolerance, is already running on another ESXi host within the cluster. The ghost replica is in an inactive state (its network connections and output are disabled) until the primary VM fails. When the host on which the primary VM was running fails, the ghost replica becomes active in an instant, and the network connections of the replica are enabled. Switching to the ghost replica is seamless; you would only notice slightly higher latency for one packet if you were pinging the IP address of the protected virtual machine. There is no data loss or service interruption when fault tolerance is used.
Migration
Live Migration Vs. vMotion
There are two types of VM migration in Hyper-V: live migration and quick migration. Quick migration puts the VM into a saved state, which is similar to hibernation: the VM’s memory is saved to the disk where the other VM files are located. There is a short period of VM downtime from the moment the VM is put into a saved state until it is registered on the new host. Live migration is used for VM migration between Hyper-V hosts (including failover clusters) without losing service availability. The migration procedure consists of several stages, the most important of which is the iterative transfer of VM memory pages.
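As an illustration, a live migration between two standalone Hyper-V hosts can be started from PowerShell (a sketch; the host and VM names are examples, and live migration networking must already be configured on both hosts):

# Allow incoming and outgoing live migrations on the host (one-time configuration)
Enable-VMMigration
# Live-migrate the VM, together with its storage, to another Hyper-V host
Move-VM -Name "TestVM" -DestinationHost "HyperV-Host2" -IncludeStorage -DestinationStoragePath "D:\VMs\TestVM"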
The analog of Hyper-V Live Migration in VMware vSphere is vMotion. With vMotion, running VMs can be migrated from one ESXi host to another without interrupting the VMs and services that are running on the hosts. When the VM migration process from a source ESXi host to a destination host is initiated, the following operations are performed.
First, compatibility is checked to ensure that the VM can run on the target host. The migration then proceeds as follows:
1. Hardware resources are reserved on the target ESXi host and the VM process is started there.
2. A VM memory checkpoint is created; all further VM memory changes are written to a special area.
3. The memory captured before the checkpoint was created is copied to the target VM. Meanwhile, the source VM keeps running and some memory pages change (these are called dirty pages); they are written to change buffers, which are copied to the target VM and integrated into its memory.
4. The copying of dirty pages is repeated iteratively until there are no differences between the memory of the source VM and that of the destination VM.
5. vMotion then stops the CPU of the source VM, copies the last change buffer, and integrates the memory changes stored in that buffer into the virtual RAM of the target VM.
6. Disk access is discontinued on the source ESXi host and started on the target ESXi host.
7. vMotion sends a reverse ARP (Address Resolution Protocol) packet to the physical switch to update the MAC and IP address mappings.
8. The source VM is shut down and its process is deleted on the source ESXi host.
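From the administrator’s point of view, this whole sequence is triggered by a single action; in PowerCLI, it is one cmdlet (a sketch with example names):

# vMotion a running VM to another ESXi host
Move-VM -VM "TestVM" -Destination (Get-VMHost "esxi-02.lab.local")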
Storage Migration Vs. Storage vMotion
Hyper-V Storage Migration can move VM configuration files, checkpoint files, virtual hard disks and mounted ISO files to a new location. Storage migration is used when a VM must continue running on the same Hyper-V host while its files move to another storage location. If the VM must both run on another host and be placed on another storage location, live or quick migration is used together with storage migration.
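For example, a storage-only migration on the same host can be performed with the Move-VMStorage cmdlet (a sketch; the VM name and path are examples):

# Move all of the VM's files to another storage location without changing hosts
Move-VMStorage -VMName "TestVM" -DestinationStoragePath "E:\VMs\TestVM"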
Storage vMotion is the vSphere analog of Hyper-V’s Storage Migration. Storage vMotion can perform live migration of VM components such as VM virtual disks (VMDK files) and VM configuration files to another storage location (e.g., an ESXi local datastore, a shared SAN datastore, etc.). First, the VM metadata (including configuration, swap and log files) stored in the VM’s home directory is copied, then the virtual disks are copied.
Changed block tracking is used to track the changes after the first replication of the source VM’s virtual disk. While the source VM is running, the disk changes are copied in several iterations to the target virtual disk until there are no differences between the source and target virtual disks.
The virtual disks can be transformed from thick-provisioned disks to thin-provisioned disks or vice versa. During migration with Storage vMotion, the ESXi host on which the VM is running does not change. Storage vMotion is used together with vMotion when the VM must change hosts and datastores at once.
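In PowerCLI, Storage vMotion is also a Move-VM call, this time with a target datastore; the disk format can be converted during the move (a sketch; the names are examples):

# Storage vMotion the VM to another datastore, converting its disks to thin provisioning
Move-VM -VM "TestVM" -Datastore (Get-Datastore "SAN-Datastore-02") -DiskStorageFormat Thin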
Storage
Cluster Shared Volumes (CSV) Vs. VMFS Cluster File System
Cluster Shared Volumes (CSV) are designed to give multiple Hyper-V hosts in a cluster simultaneous read/write access to the same disk. Disk volumes formatted with NTFS or ReFS must be used for CSV.
VMFS (Virtual Machine File System) is a clustered file system that is optimized for virtual machines and suitable for vSphere clusters. A VMFS datastore can be shared by multiple ESXi hosts in a cluster for reading and writing data. This high-performance file system stores metadata alongside the files and uses on-disk locking mechanisms to prevent data corruption when multiple hosts write metadata concurrently.
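Both shared-storage layers can be set up from the command line (a sketch; the disk, host and LUN names are examples):

# Hyper-V: turn an available clustered disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"
# vSphere (PowerCLI): format a LUN with VMFS as a new datastore
New-Datastore -VMHost "esxi-01.lab.local" -Name "VMFS-DS01" -Path "naa.600508b1001c1a2b3c4d" -Vmfs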
SMB 3.0 Vs. NFS
SMB 3.0 is a network file-sharing protocol developed by Microsoft that can be used by Hyper-V hosts to access file-level shared storage, as well as in Windows environments in general.
Network File System (NFS) is a file-sharing protocol developed by Sun Microsystems that provides access to files on file-level shared storage. NFS is supported by many vendors and operating systems and can be used in networks with different architectures. In vSphere, ESXi hosts can use NFS to access VM files located on shared storage.
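For example, an NFS export can be mounted as a datastore on an ESXi host with PowerCLI (a sketch; the NFS server address and export path are examples):

# Mount an NFS share as a file-level datastore on an ESXi host
New-Datastore -VMHost "esxi-01.lab.local" -Name "NFS-DS01" -Nfs -NfsHost "192.168.1.50" -Path "/export/vm-storage"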
VHDX Vs. VMDK (VM virtual disks)
VHD and VHDX are the file formats of Hyper-V VM virtual disks. Generation 1 VMs can use either VHD or VHDX virtual disk files; Generation 2 VMs can use only the VHDX format.
VMDK is an open virtual disk file format developed by VMware for VMware VMs.
Dynamic Disks Vs. Thin-Provisioned Disks
Dynamic disks are dynamically expanding virtual disks for Hyper-V VMs. The main purpose of using dynamic disks is to save storage space. When a dynamic disk is created, the virtual disk file does not occupy all of the provisioned space; initially it is a small file that grows as data is written to the disk.
Thin-provisioned disks in the VMware virtual environment are similar to Hyper-V’s Dynamic disks. A thin-provisioned disk has a minimal size upon creation and grows as needed as data is written to the virtual disk.
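Both disk types can be created from the command line (a sketch; the paths, names and sizes are examples):

# Hyper-V: create a dynamically expanding VHDX that can grow up to 100 GB
New-VHD -Path "D:\VHDs\Data.vhdx" -SizeBytes 100GB -Dynamic
# vSphere (PowerCLI): add a 100 GB thin-provisioned disk to an existing VM
New-HardDisk -VM "TestVM" -CapacityGB 100 -StorageFormat Thin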
Checkpoints Vs. Snapshots
A Hyper-V checkpoint is used to save the state of a virtual machine at a given point in time. You can create a checkpoint, work with a VM, and then revert the VM back to the state it was in when the checkpoint was created.
The VMware snapshot is the analog of the Hyper-V checkpoint; it, too, is used to save the current state of a VM. You can take a snapshot at any time, e.g., before changing the software configuration, installing updates, testing new software, etc. A snapshot can be taken even while a virtual machine is running. The snapshot captures the following VM data: virtual disk states, virtual memory content, and VM settings. The following files are created and stored in the VM directory when a snapshot is taken: .vmdk, -delta.vmdk, .vmsn, and .vmsd.
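Both operations are also available from the command line (a sketch; the VM and checkpoint names are examples):

# Hyper-V: create a checkpoint, then revert the VM to it later
Checkpoint-VM -Name "TestVM" -SnapshotName "Before-Patch"
Restore-VMSnapshot -VMName "TestVM" -Name "Before-Patch" -Confirm:$false
# vSphere (PowerCLI): take a snapshot that also captures the VM's memory state
New-Snapshot -VM "TestVM" -Name "Before-Patch" -Memory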
Pass-Through Disks Vs. Raw Device Mapping
A pass-through disk is a physical disk that is mounted to a VM and used instead of virtual disks (VHD/VHDX) in Hyper-V environments.
Raw device mapping (RDM) is an alternative to the use of virtual disks for accessing the VM disks in vSphere. Depending on your settings, you can let virtual machines access the physical storage device directly when RDM is used. RDM can be used in virtual compatibility mode or physical compatibility mode.
Virtual compatibility mode provides full virtualization of the mapped device; snapshots can be used.
In physical compatibility mode, the guest OS can access the hardware directly; only minimal SCSI virtualization is provided. If a VM snapshot is taken, the RDM disk is not included in that snapshot.
For example, if DiskName.vmdk is the disk descriptor file, then DiskName-rdmp.vmdk is the mapping file (as indicated by the -rdmp suffix). A mapping file resembles a symbolic link to the mapped device.
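Creating an RDM with PowerCLI might look like this (a hedged sketch; the VM name and LUN identifier are hypothetical):

# Attach a raw LUN to a VM in physical compatibility mode
# (use -DiskType RawVirtual for virtual compatibility mode instead)
New-HardDisk -VM "TestVM" -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.600508b1001c1a2b"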
RCT Vs. CBT
Resilient Change Tracking (RCT) makes it possible to copy only the data blocks that have changed since the last backup when an incremental backup of Hyper-V VMs is performed. RCT was introduced with Windows Server 2016. Files with .RCT and .MRT extensions are created when RCT is enabled.
Changed Block Tracking (CBT) is the vSphere analog of Hyper-V’s RCT, helping perform incremental backups. CBT allows backup software to track the blocks that have changed since the last backup and copy only those blocks. Files with a .CTK extension are created for each VMDK virtual disk and snapshot file. Block changes are tracked at the virtualization layer. The VMware vSphere APIs for Data Protection allow third-party software to access the CBT feature to perform backups and replication more efficiently. Changed Block Tracking was introduced by VMware in 2009 with ESX 4.0.
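CBT is disabled by default and is normally switched on by the backup application itself. It can also be enabled manually through a VM’s advanced settings, for example with PowerCLI (a hedged sketch; per-disk scsiX:Y.ctkEnabled options may also be required, and the change typically takes effect after a VM power cycle):

# Enable Changed Block Tracking via the VM's advanced configuration
New-AdvancedSetting -Entity (Get-VM "TestVM") -Name "ctkEnabled" -Value $true -Confirm:$false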
Storage Spaces Vs. vSAN
Windows Storage Spaces is a technology that abstracts storage from physical disks and is used for storage virtualization and provisioning. With Storage Spaces, you can create cost-effective, scalable, highly available software-defined storage for Hyper-V virtual environments using servers with HDD devices.
VMware vSAN is the equivalent of Windows Storage Spaces. vSAN provides a distributed model of data storage that is integrated with vSphere at the cluster level and abstracted from the hardware disks. ESXi hosts managed by vCenter must be used to deploy vSAN. This approach allows the disks of ESXi hosts to be used for vSAN storage while the same hosts run VMs. SSDs can be used for caching data, while HDDs are used to store data. Disk groups are aggregated into pools that are accessible to the entire cluster. The data can be mirrored across one or more hosts, which ensures protection against failure; the level of protection is defined by the Number of Failures to Tolerate setting. Using vSAN, you can achieve reliable and scalable enterprise-grade storage.
Dynamic Memory Vs. Virtual Memory Overcommit
Dynamic memory allows a VM to use the amount of memory that it currently needs, rather than all virtual memory provisioned for that VM. The hardware RAM on Hyper-V servers can be used more rationally with this feature.
VMware Virtual Memory Overcommit is the equivalent feature in VMware. Both features use dynamic memory allocation methods, although the methods differ somewhat. With VMware, you can allocate more memory to VMs than the ESXi host physically has, thereby increasing the density of VMs on the host. The idea is to optimize physical memory consumption – more VMs can run on the ESXi host, but there is no guarantee that all the memory provisioned for a particular VM can be provided to it at a given point in time. Using memory overcommit presumes that memory can be redistributed among the VMs depending on demand. 25% of unused memory is reserved on an ESXi host in case the memory requirements of its VMs suddenly increase.
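For instance, Dynamic Memory can be enabled for a Hyper-V VM from PowerShell (a sketch; the values are examples, and the VM must be powered off):

# Enable Dynamic Memory with startup, minimum and maximum values
Set-VMMemory -VMName "TestVM" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB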
Hyper-V Integration Services Vs. VMware Tools
Hyper-V Integration Services is a set of tools and drivers that ensure proper functionality of guest VMs, including time synchronization. When Integration Services are installed, the standard drivers for the mouse, keyboard, disk controllers, and network controllers are replaced by drivers included in the Integration Services suite for better performance and user experience.
VMware Tools is a collection of drivers, modules, and utilities that is the VMware equivalent of Hyper-V Integration Services. VMware Tools offers advanced functionality to improve performance, user interaction with guest VMs, and integration. When VMware Tools is installed, you can shut down the guest OS directly from the menu in the vSphere Client or VMware Workstation. Video resolution and graphics performance are improved. Other useful features, such as drag and drop, shared folders, time synchronization, copying/pasting between VMs and hosts, and grabbing/releasing of the mouse cursor, also become available. VMware Tools is usually provided as an ISO image or an Operating System Specific Package (OSP).
Migration from Microsoft Hyper-V to the VMware vSphere Platform
Transitioning from Hyper-V to vSphere can be useful if the vSphere virtualization platform better meets your company’s needs. First, you should prepare the hardware and software. Check VMware’s hardware compatibility list to ensure that your hardware is officially supported. Install ESXi servers, vCenter, and other software that you need. Download and install VMware vCenter Converter. Run VMware vCenter Converter and click Convert Machine.
Select Hyper-V Server as the source type. Specify the IP address or server name of your Hyper-V server. In this walkthrough, vCenter Converter is installed on the Hyper-V server itself, and localhost is entered as the server name.
Next, input the username and password of an account that has sufficient permissions to perform the operation. The Windows Administrator account is used in this example. Click Next.
Select the virtual machine to be converted. Click Next.
Select VMware Infrastructure virtual machine as the destination type. Type the IP address of your vCenter Server or ESXi host, then enter the username and password. Click Next.
Select the folder and the name that should be applied to the destination VM after conversion. Click Next.
Select a destination location, including the ESXi host and datastore. Click Next.
Configure the options for the destination VM. You can see how to change the type of virtual disk provisioning in the screenshot provided below. Click Edit in the Data to copy section.
In the dropdown list, select the disk provisioning type (thick or thin). Click Next.
Check the summary and, if you are satisfied, click Finish.
Wait until the conversion task is complete.
Once the VM conversion is finished, you can open vCenter with the vSphere Web Client in your browser and find the converted VM. In this walkthrough, the VM name is Server2016-01. As you can see on the preview screenshot, the VM is running successfully. Click Launch Web Console to access the user interface of the VM. You can now tune the virtual machine and install VMware Tools.
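If you prefer the command line, the result can also be verified with PowerCLI, and the VMware Tools installer can be mounted in the guest OS (a sketch; the VM name matches this walkthrough):

# Check that the converted VM is present and running
Get-VM -Name "Server2016-01"
# Mount the VMware Tools installer inside the guest OS
Mount-Tools -VM "Server2016-01"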
You now know how to migrate from Hyper-V to VMware vSphere.
Conclusion
Knowing the differences and similarities between features of VMware and Hyper-V environments makes migration from one platform to another easier and is particularly useful for administrators of a multi-hypervisor environment. Understanding the architecture as well as the working principles of both virtualization platforms, along with their feature sets, helps administrators work comfortably and effectively when they are tasked with maintaining the less-familiar platform. There is one important rule that applies to both VMware and Hyper-V environments: The VMs must be backed up regularly. Robust VM backup can protect your business-critical data against natural disasters, technological failures, human errors and virus attacks.