Hello, first-time poster here. I’ve created a one-node cluster on an Intel NUC 10 (NUC10FNH) using the latest Phoenix image. The install went fine after a couple of hardware bumps; the node is up, the CVM is running, etc. The issue I’m having is that I can’t power on any VMs I create. Prism only gives a generic “InternalException” error. In the uhura.out log, there are these entries: “Error: HypervisorError: internal error: QEMU unexpectedly closed the monitor” and “qemu-kvm: Address space limit 0x7fffffffff < 0x4bcbfffffff phys-bits too low (39): 61”. I’m having a hard time making sense of this. I’ve read about other people running on the same hardware with no issues, so maybe it’s the specific version of the CPU (it’s an i7-10710U). Setup: one-node cluster, 64GB RAM, one 500GB M.2 SSD, and one 500GB SATA SSD, booting off a 64GB USB drive. Any info would be appreciated. Thank you.
Hi Oscar,
Hope you’re well.
Is this a direct install of CE, or was it upgraded after installation? (I’ve seen this with AHV 10 but not earlier.)
A clue is the 39 bits quoted in the error message, which would tie in with your theory about the consumer-grade CPU.
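To unpack those hex values: 0x7fffffffff + 1 = 2^39 = 512GiB, which is all a CPU with 39 physical address bits can map, while 0x4bcbfffffff + 1 works out to roughly 5TB, which would need at least 43 physical bits. So QEMU is being asked to set up far more guest address space than the CPU supports.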
A few things to check: does the VM boot if you set a low amount of memory, e.g. <1GB?
Can you confirm IOMMU is enabled within the BIOS?
Outputs of the below would help too from the AHV host:
uname -a
ls /sys/class/iommu
grep 'address sizes' /proc/cpuinfo
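(For comparison, a server-grade CPU typically reports something like “address sizes : 46 bits physical, 48 bits virtual” there; 39 physical bits caps the addressable space at 512GiB.)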
Thanks,
Kim
Thanks for your response.
Yes, after the initial install the first thing I did was run LCM and upgrade everything.
I have verified that VT-d is enabled in the BIOS settings, and I have upgraded to the latest BIOS/firmware.
I set the VM as low as 256MB and got the same response when powering on.
Here are the results of those commands:
# uname -a
Linux NTNX-xxxxxxxx-A 6.1.92-10.0.1s2c14r5.el8.x86_64 #1 SMP Wed Dec 11 12:00:00
# ls /sys/class/iommu
dmar0 dmar1
# grep 'address sizes' /proc/cpuinfo
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
address sizes : 39 bits physical, 48 bits virtual
I’m just surprised that everything else installed normally, including the CVM, which is itself a VM, yet no other VM will power on. One other weird thing is that the CVM does not show up in the list of VMs, even when I check the box to show controller VMs.
Thanks.
Did you ever work this out? I’ve just upgraded my home CE to AHV 10 and now no VMs will start up :(
If I could find out where the -cpu options are set, it looks like adding host-phys-bits=on to the command line that starts the VM might fix it.
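For anyone else hunting for it: libvirt normally logs the full QEMU command line per VM, so on the AHV host something like this should show the generated -cpu options (the log path is from my box, so treat it as an assumption):
# grep -e '-cpu' /var/log/libvirt/qemu/<vm_uuid>.log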
Cheers,
Steve
OK, I found out how to edit the -cpu options. Setting host-phys-bits=on didn’t do anything, but setting it to off and setting phys-bits to 41 or higher let me boot VMs with 2GB of RAM or less. Anything more and they blue-screened on startup with a ‘Memory Management’ error.
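For reference, host-phys-bits and phys-bits are standard QEMU x86 CPU properties, so the relevant part of the command line ends up looking something like this (illustrative only, not the full AHV invocation):
-cpu host,host-phys-bits=off,phys-bits=41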
OK, I found a much better fix. The problem is that the maxmem setting is configured to boot the VM with a 5TB maxmem, and my little old Kaby Lake Xeon can't cope. I've edited the qemu-kvm frodo wrapper script to rewrite maxmem at runtime down to a more sensible number (I've just tested 8GB, with the VM set to 6GB), and it's working fine.
I've put phys-bits back to 39 too.
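In case it helps, this is roughly what my edit amounts to, as a standalone sketch; the real-binary path and the exact -m argument format are assumptions from my host, so adapt before trying it:
#!/bin/bash
# Hypothetical qemu-kvm wrapper: cap any maxmem=<huge> argument at 8G
# before handing off to the real binary (path is an assumption, check your host)
REAL_QEMU=/usr/libexec/qemu-kvm.real
args=()
for a in "$@"; do
    # e.g. rewrites "size=6144M,maxmem=5T" to "size=6144M,maxmem=8G"
    args+=("$(printf '%s' "$a" | sed 's/maxmem=[0-9]\+[KMGT]\?/maxmem=8G/')")
done
exec "$REAL_QEMU" "${args[@]}"
The VM's configured RAM obviously still has to fit under whatever cap you pick, hence the VM set to 6GB under an 8GB maxmem in my test.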
Hello everyone,
I have the same problem. I updated my 3-node cluster to AOS 7.0.1.5 and AHV 10.0.1, and I also can't start the VMs. All the CVMs are invisible in Prism Element but are connected and seem functional. I also notice that the AHV version in the upper-left corner of the Prism dashboard does not appear updated.
I recently updated my BIOS and now have the option to enable IOMMU; I will make sure it is enabled.
Hi SteveCooperArch, thanks for your insight and info. Can you tell me how or where you set these values (e.g. maxmem, phys-bits)? I’m still new to poking at things on the Nutanix command line. Where is the script located so I can edit it?
Thank you!
Hi SteveCooperArch, is there an example of this script? I tried it this way, unsuccessfully (I suspect the unquoted semicolon ends the shell command at the first flag anyway):
acli vm.update teste extra_flags=size=4096M;qemu_path=
I found how to run the -cpu edit, but I couldn't find Frodo. The indicated place would be:
Access the Frodo script: /usr/local/nutanix/bin/frodo
frodo vm.qemu_param_set <vm_uuid> <parameter> <value>
DM me for more info!
Well, my cluster is a home lab, and I ended up recreating it, keeping AOS 6.10.1.6 and AHV el8.nutanix.2023; even with the message that the AHV version is not compatible with the AOS version, I had no problems booting VMs. Today I ran LCM, which was updated to version 3.2 and released AHV el8.nutanix.20230302.103014, compatible with my AOS 6.10.1.6. I have the impression that LCM 3.2 might solve the problem of starting VMs on AOS 7 and AHV 10, but I'm not going to take the risk now; I'll wait for a new version beyond AOS 7.0.1.5 and AHV 10.0.1.