testworksau wrote: Hi Jon,
Do you know whether or not:
a) The API exposes cpu_passthrough
b) The cpu_passthrough setting will be configurable on the VM configuration page via the Prism UI anytime soon
c) The cpu_passthrough setting (if enabled on a VM) will also be applied to a clone of the given VM
d) Support for nested virtualization for Hyper-V is any closer to coming out of the "wild wild west"
We are using the APIs extensively in our organization but can't find reference to the cpu_passthrough setting in the API.
admin@BLAH~$ acli vm.update VIRL cpu_passthrough=true
VIRL: pending
VIRL: complete
admin@BLAH~$
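For anyone chasing the same question: I couldn't find cpu_passthrough in the REST API docs either, but you can at least confirm that the acli change stuck by dumping the VM's config. A minimal sketch, assuming the flag shows up in `acli vm.get` output (the VM name is a placeholder):

```shell
# Dump the full VM configuration from a CVM shell and
# check whether the passthrough flag was applied.
acli vm.get VIRL | grep -i cpu_passthrough

# Note the VM must be powered off when you change the flag;
# power-cycle it afterwards so the new vCPU model takes effect.
acli vm.off VIRL
acli vm.update VIRL cpu_passthrough=true
acli vm.on VIRL
```

Treat the exact output field name as an assumption; I'm only certain the setting is reachable via acli, not via the v2/v3 REST APIs.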
Failed to power on virtual machine test-vm. VMware ESX and Hyper-V are not compatible. Remove the Hyper-V role from the system before running VMware ESX.
$ sudo virt-what
hyperv
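One thing worth checking alongside virt-what: virt-what reporting `hyperv` only tells you the guest sees a Hyper-V-style hypervisor, not whether the vCPU actually exposes the hardware virtualization extensions that nested Hyper-V needs. A quick sketch for checking that from inside the guest:

```shell
# Count the VT-x (vmx) / AMD-V (svm) flags exposed to this guest's vCPUs.
# A count of 0 means the hypervisor is not passing the extensions through,
# and nested Hyper-V will refuse to start.
grep -Ec 'vmx|svm' /proc/cpuinfo || echo "no virtualization extensions exposed"
```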
Same ask here. Any update?
I even tried a P2V of a Windows Server 2016 machine that already had Hyper-V enabled, then converted the image to QCOW format and uploaded it to the Nutanix cluster. But when I ran the VM and opened Hyper-V Manager, I got the error message "One of the Hyper-V components is not running."
Hi @Kai Li ,
Unfortunately, nested virtualization is not supported on Nutanix AHV (see KB-5233).
It may not be officially supported, but it can be done. I have three ESXi VMs running on AHV. Each of them has three disks, and I was able to create a complete vSAN cluster running the vCenter appliance and a couple of Windows VMs on the vSAN datastore. So far I haven't had any issues, and it's not really all that slow. Obviously this shouldn't be used for production, but it allows us to stay up to date with VMware without needing additional hardware. I haven't tried Hyper-V yet, but maybe that'll be next.
Did you ever get around to trying it with Hyper-V? I've been trying but with no luck yet.
Unfortunately, I haven't had the cycles to get into Hyper-V yet. However, after playing with vSAN on AHV for a while, I realized something interesting. The only drives that will work with vSAN are IDE, and you can only have a total of four IDE devices per VM. That leaves you with one CDROM, one ESXi OS drive, and the bare minimum of two for vSAN. If you haven't tried IDE, give it a shot. And to give credit where it's due, I got started with the following link as well. That's where I found that you have to use an e1000 NIC if you want the networking to work. Hope this helps.
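The layout above can be sketched in acli roughly like this. VM name, network name, and disk sizes are placeholders, and treat the exact parameter syntax as a sketch rather than a reference:

```shell
# Nested-ESXi VM on AHV: IDE disks only (vSAN won't claim SCSI here),
# max four IDE devices per VM, e1000 NIC for working guest networking.
acli vm.create esxi-nested num_vcpus=4 memory=16G
acli vm.update esxi-nested cpu_passthrough=true          # expose VT-x to the guest
acli vm.disk_create esxi-nested cdrom=true bus=ide       # installer ISO mount point
acli vm.disk_create esxi-nested bus=ide create_size=40G  # ESXi OS disk
acli vm.disk_create esxi-nested bus=ide create_size=50G  # vSAN cache tier
acli vm.disk_create esxi-nested bus=ide create_size=200G # vSAN capacity tier
acli vm.nic_create esxi-nested network=vm-net model=e1000
```

The four `bus=ide` devices above are the whole budget, which is why you end up with exactly the minimum two-disk vSAN disk group.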
Has anyone had success configuring more than one vmnic for the nested ESXi VM? I've added up to six vmnics, and over time they drop network connectivity. With a single e1000 it's rock solid, but you can't really do a DVS with only one NIC.