Solved

How to enable nested virtualization on AHV?

  • 20 February 2017
  • 22 replies
  • 20125 views

Badge +1
How to enable nested virtualization on AHV?

Best answer by Jon 8 March 2017, 16:56


This topic has been closed for comments

22 replies

Userlevel 1
Badge +9
Is this AHV on CE or commercial Nutanix?
Badge +1
It's AHV.
Badge
I'm curious about this as well. Is there any way to enable nested virtualization (make hardware-assisted virtualization available to the guest OS)?
Userlevel 6
Badge +29
Yes, you can enable the cpu_passthrough flag via the ACLI vm.update command.

https://portal.nutanix.com/#/page/docs/details?targetId=AMF-Guide-AOS-v50:acl-acli-vm-auto-r.html

That will expose typical nested virtualization support.
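
A minimal example, run from a CVM shell (the VM name is a placeholder):

code:
acli vm.update <vm_name> cpu_passthrough=true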

Note: Enabling nested virtualization excludes that specific VM from several features, such as ADS and live migration. It also prevents the cluster from performing any sort of rolling maintenance that requires live migration.

i.e., if you had to upgrade the hypervisor or BIOS, you'd have to shut down the VM with this flag enabled.

This is because KVM/QEMU-based systems, at a general technology level, do not support live migration of VMs with nested virtualization enabled. Google apparently just patched this for GCP, but that code hasn't made it upstream yet.
Badge +3
I am trying to run a nested ESXi 6.5 instance on AHV.
I have enabled the setting, which allows the install to proceed; however, I am now presented with a network driver issue: "No Network Adapters were detected."
Any advice?
Badge +3
In case it helps, I got this working by doing the following:

1. Created a new VM meeting the minimum requirements for ESXi 6.5, then used ACLI to change the following.
2. Added a compatible NIC to stop the NIC error:
vm.nic_create VMNAME network="NETWORKNAME" model=e1000
3. Enabled CPU passthrough to stop the CPU error:
vm.update VMNAME cpu_passthrough=true
4. Added an ATA/IDE disk so a boot disk is discoverable at install:
vm.disk_create VMNAME container=default-container-XXXX create_size=100G bus=ide index=1
Badge +2
Similarly, I got it to install; however, after some time, it purple-screened. On subsequent reboots, it fails while loading the VMCI balloon module. It ran successfully for quite a while, though. How is yours working?

Thanks
Badge
Hi

I enabled the cpu_passthrough flag for my VM, but when trying to configure Hyper-V on my Windows Server 2016 guest VM, I am now getting a different error stating that "virtualization support is not enabled in the BIOS".

I saw there is a "nested_hv" argument, so I gave it a try, but I receive the error "Unknown keyword argument: nested_hv".

Any idea of what I'm doing wrong?
Userlevel 6
Badge +29
Hyper-V on KVM-based platforms (like AHV) is still maturing upstream. To be clear, nested virtualization in AHV is currently targeted only at KVM-based guests. Hyper-V and ESXi are "wild wild west" at best.

As that support matures upstream, and we consume those upstream updates, we'll get better and better here, but for now I'd suspect this won't work well.

For posterity, the universal recommendation here is to make sure you're on the absolute latest AOS, with the absolute latest AHV. This will be especially true when AOS 5.5 comes out (shortly), as we've done quite a massive update on the AHV side, so you may find support here is a bit better. I can't promise, as I haven't tested it myself, but it's worth checking out.

If you're still having an issue there, feel free to submit a support ticket so we can make sure we're tracking this properly.
Badge +1
Hi Jon,

Do you know whether or not:
a) The API exposes cpu_passthrough
b) The cpu_passthrough setting will be configurable on the VM configuration page via the Prism UI anytime soon
c) The cpu_passthrough setting (if enabled on a VM) will also be applied to a clone of the given VM
d) Support for nested virtualization for Hyper-V is any closer towards coming out of the "wild wild west"

We are using the APIs extensively in our organization but can't find reference to the cpu_passthrough setting in the API.
Userlevel 6
Badge +29
testworksau wrote: Hi Jon,

Do you know whether or not:
a) The API exposes cpu_passthrough
b) The cpu_passthrough setting will be configurable on the VM configuration page via the Prism UI anytime soon
c) The cpu_passthrough setting (if enabled on a VM) will also be applied to a clone of the given VM
d) Support for nested virtualization for Hyper-V is any closer towards coming out of the "wild wild west"

We are using the APIs extensively in our organization but can't find reference to the cpu_passthrough setting in the API.

Hey, thanks for reaching out. I'm curious, what's your use case, specifically?

Answers as of today:
A) No.
B) No.
C) Good question; I don't recall offhand. It should be an easy test, but I'm on a plane right now and don't have good connectivity back to the lab.
D) I can't say without saying something forward-looking in a public forum. We're working on making overall nested support better. Even then, there is still one key patch missing upstream; more below.

Basically, we won't "GA" full nested support until live migration works, which is not committed upstream yet, though that work is in progress. Otherwise, nested VMs become "special" VMs, where you can't do live migration, hypervisor patching, and other lifecycle operations that require host reboots. That means nested VMs would require manual and mandatory downtime during these operations. We don't think that's a good customer experience.
Trying to enable Hypervisor support on a VM in Nutanix 5.5 AHV.

Version 5.0 had an ACLI command that I assume can be applied to a VM as follows:
acli vm.update my_vm_name nested_hv="true"

This is documented here for 5.0:
https://portal.nutanix.com/#/page/docs/details?targetId=AMF-Guide-AOS-v50:acl-acli-vm-auto-r.html

In any version after 5.0 (5.1, 5.2, 5.5) the command is gone. On 5.5 you get an error:
Unknown keyword argument: nested_hv

The nested_hv argument is gone from the ACLI documentation:
https://portal.nutanix.com/#/page/docs/details?targetId=Command-Ref-AOS-v55:acl-acli-vm-auto-r.html


Use case is for a simulator VM that runs KVM underneath.
Is there another way to do this in 5.5, or was the feature just removed?
I opened a case and got this reply, which answers the question. I used this line:

code:
admin@BLAH~$ acli vm.update VIRL cpu_passthrough=true
VIRL: pending
VIRL: complete
admin@BLAH~$


======EMAIL FROM NUTANIX TECH SUPPORT========

Severity:
====
P3 - Normal


Action plan:
====
+ Although the nested_hv switch was there in AOS 5.0, it actually didn't work, and nested VMs weren't supported until AOS 5.5.0.4.
+ Please find the release notes below for AOS 5.5.0.4

https://portal.nutanix.com/#/page/docs/details?targetId=Release-Notes-Acr-v5504:Release-Notes-Acr-v5504

+ Please find the abstract below.

New Features

Nested VMs
PM-615
  • Nutanix now provides limited support for nested virtualization, specifically nested KVM VMs in an AHV cluster as of AOS 5.5.0.4 with AHV-20170830.58. Enabling nested virtualization will disable live migration and high availability features for the nested VM. You must power off nested VMs during maintenance events that require live migration.
+ I'd recommend upgrading to the latest AOS 5.5.0.5 and then AHV to AHV-20170830.58, as per the documentation.
+ Once upgraded, you can use the following to enable nested virtualization. This passes through all the required CPU features:

acli vm.update vmname cpu_passthrough=true
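
If you want to confirm the flag took effect, something like this should work from the CVM (a sketch; this assumes acli vm.get dumps the full VM config, and the exact field name may vary by AOS version):

code:
acli vm.get vmname | grep -i passthrough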
Slightly stale topic to add to, but I'll have a go...

I'm just going through this with a new Nutanix cluster running AHV 5.6, attempting to build nested ESXi hosts for a lab.

I managed to install ESXi and configure it for Nutanix iSCSI storage, which all works.

However, when I create a VM in ESXi and try to power it on, ESXi gives me this very unusual error:

Failed to power on virtual machine test-vm. VMware ESX and Hyper-V are not compatible. Remove the Hyper-V role from the system before running VMware ESX.


Given there is no Hyper-V in sight, I'm very confused by this error, but I believe it may be due to how AHV presents itself to guest VMs. CentOS 7 certainly gets the idea it's running on Hyper-V, according to virt-what:

code:
$ sudo virt-what
hyperv



Ideas?
Slightly stale topic to add to, but I'll have a go...

I'm just going through this with a new Nutanix cluster running AHV 5.6, attempting to build nested ESXi hosts for a lab.
code:
...

Ideas?


Here is some brave soul posting instructions on running ESXi on AHV: http://www.automashell.com/nest-vmware-esxi-on-nutanix-ahv/
Hi,

I have a question about nested virtualization.
I'm new to Nutanix.
My company has an AHV cluster and a VM (named server-core-01) with Windows Server 2016 Core installed.
I'd like to enable Hyper-V on it for testing, so I tried to follow this post:

$ acli vm.update server-core-01 cpu_passthrough=true
server-core-01: pending
server-core-01: complete

Now, if I try to install the Hyper-V role on Windows Server, I get this error:

Install-WindowsFeature : A prerequisite check for the Hyper-V feature failed.
1. Hyper-V cannot be installed because virtualization support is not enabled in the BIOS.

Is there a way to enable virtualization support on the VM, or isn't it supported?

Many thanks.

A
Hi, I have a question about nested virtualization... Is there a way to enable virtualization support on the VM, or isn't it supported?

Same ask. Any update?

 

I even tried to P2V a Windows Server 2016 machine with Hyper-V already enabled, then converted it to qcow2 format and uploaded it to the Nutanix cluster. But when I run the VM and open Hyper-V Manager, I get the error message "One of the Hyper-V components is not running."
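
For reference, the conversion step was roughly as follows (a qemu-img sketch; the file names and source format are placeholders, not my exact command):

code:
# Convert the Hyper-V disk image to qcow2 before uploading to AHV
qemu-img convert -f vhdx -O qcow2 server2016.vhdx server2016.qcow2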

Userlevel 6
Badge +5

Hi @Kai Li,

Unfortunately, nested virtualization is not supported by Nutanix AHV. See KB-5233.

It may not be officially supported, but it can be done. I have three ESXi VMs running on AHV. Each of them has three disks, and I was able to create a complete vSAN cluster running the vCenter appliance and a couple of Windows VMs on the vSAN datastore. So far, I haven't had any issues, and it's not really all that slow. Obviously, this shouldn't be used for production, but it allows us to stay up to date with VMware without needing additional hardware. I haven't tried Hyper-V yet, but maybe that'll be next.

Badge +1

It may not be officially supported, but it can be done. I have three ESXi VMs running on AHV...

Did you ever get around to trying it in Hyper-V? I've been trying, but with no luck yet.

It may not be officially supported, but it can be done...

Did you ever get around to trying it in Hyper-V? I've been trying, but with no luck yet.

Hi Chadd,

Unfortunately, I haven’t had the cycles to get into Hyper-V yet. However, after playing with vSAN on AHV for a while, I realized something interesting. The only drives that will work with vSAN are IDE and you can only have a total of four of them per VM. That leaves you with one CDROM, one ESXi OS drive, and the bare minimum two for vSAN. If you haven’t tried IDE, give it a shot. And to give credit where its due, I got started with the following link as well. That’s where I found that you have to use an e1000 NIC if you want the networking to work. Hope this helps.

http://www.automashell.com/nest-vmware-esxi-on-nutanix-ahv/
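
If it helps, the disk layout can be created from ACLI along these lines (a sketch reusing the vm.disk_create syntax from earlier in this thread; the VM name, container, and sizes are placeholders):

code:
# ESXi boot disk plus two vSAN disks, all on the IDE bus (index 0 is left for the CD-ROM)
acli vm.disk_create ESXI01 container=default-container-XXXX create_size=20G bus=ide index=1
acli vm.disk_create ESXI01 container=default-container-XXXX create_size=100G bus=ide index=2
acli vm.disk_create ESXI01 container=default-container-XXXX create_size=100G bus=ide index=3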

Hey all,

Has anyone had success with configuring more than one vmnic for a nested ESXi VM? I've added up to six vmnics, and over time the vmnics drop network connectivity. With one e1000 it's rock solid, but you can't really build a DVS with only one NIC.
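
For reference, each extra vmnic was added with the same ACLI command used earlier in the thread (a sketch; the VM and network names are placeholders):

code:
# Repeated once per additional vmnic; only e1000 has worked reliably for me
acli vm.nic_create ESXI01 network="NETWORKNAME" model=e1000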

 

Thanks