
Hello, first time poster. I’ve created a one-node cluster on an Intel NUC 10 (NUC10FNH) using the latest Phoenix image. The install went fine after a couple of hardware bumps, and the node is up, the CVM is running, etc. The issue I’m having is that I can’t power on any VMs I create; it only gives a generic “InternalException” error. In the uhura.out log there are these entries: “Error: HypervisorError: internal error: QEMU unexpectedly closed the monitor” and “qemu-kvm: Address space limit 0x7fffffffff < 0x4bcbfffffff phys-bits too low (39): 61”. I’m having a hard time making sense of this. I’ve read of other people running on the same hardware with no issues, so maybe it’s the specific version of the CPU (it’s an i7-10710U). One-node cluster, 64GB RAM, one 500GB M.2 SSD and one 500GB SATA SSD, booting off a 64GB USB drive. Any info would be appreciated. Thank you.

Hi Oscar,

Hope you’re well.

Is this an install of CE directly, or was it upgraded after installation? (I’ve seen this with AHV 10 but not earlier.)

A clue is the 39 bits quoted in the error message, which would tie in with your theory about it being a consumer-grade CPU.

A few things to check: does the VM boot if you set a low amount of memory, e.g. <1GB?

Can you confirm IOMMU is enabled within the BIOS?

Outputs of the below would help too from the AHV host:

uname -a

ls /sys/class/iommu

grep 'address sizes' /proc/cpuinfo

 

Thanks,

Kim


Thanks for your response.

Yes, after I did the initial install, the first thing I did was run LCM and upgrade everything.

I have verified that VT-d was enabled in the BIOS settings and have upgraded to latest BIOS/firmware.

I set the VM to as low as 256MB and got the same response when powering on.

Here are the results of those commands:

# uname -a
Linux NTNX-xxxxxxxx-A 6.1.92-10.0.1s2c14r5.el8.x86_64 #1 SMP Wed Dec 11 12:00:00

# ls /sys/class/iommu
dmar0  dmar1

# grep 'address sizes' /proc/cpuinfo
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual
address sizes    : 39 bits physical, 48 bits virtual

 

I’m just surprised that everything else installed normally, including the CVM, which is itself a VM, yet no other VM will power on. One other weird thing is that the CVM does not show up in the list of VMs, even when I check the box to show controller VMs.

Thanks.


Did you ever work this out? I’ve just upgraded my home CE to AHV 10 and now no VMs will start up :(

 

If I could find out where the -cpu options are set, it looks like setting 'host-phys-bits=on' on the command line used to start the VM might fix it.

 

Cheers,

Steve


OK, I found out how to edit the -cpu options. Setting host-phys-bits=on didn’t do anything, but setting it to off and setting phys-bits to 41 or higher enabled me to boot VMs with 2GB or less of RAM. Any more than that and they blue-screened on start-up with a ‘Memory Management’ error.
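For reference, and assuming the stock invocation uses -cpu host (the exact option string on your build may differ), the edited CPU option ends up looking roughly like:

-cpu host,host-phys-bits=off,phys-bits=41

host-phys-bits and phys-bits are standard QEMU x86 CPU properties; 41 bits advertises a 2 TiB physical address space to the guest.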


OK, I found a much better fix. The problem is that the maxmem setting is configured to boot the VM with a 5TB max mem, and my little old Kaby Lake Xeon can't cope. I've edited the qemu-kvm frodo script to change the maxmem at runtime down to a more sensible number (I've just tested 8GB, with the VM set to 6GB), and it's working fine 🙂
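Rough arithmetic on why that trips the limit: 39 physical address bits can only describe 2^39 bytes = 512 GiB, while a ~5TB maxmem needs at least 43 bits (2^42 = 4 TiB is still too small), which is why qemu-kvm refuses to start the guest on this CPU.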

I've put the phys-bits back to 39 too.

 


Hello everyone,

I have the same problem. I updated my 3-node cluster to 7.0.1.5 and AHV 10.0.1 and I also can't start the VMs. All the CVMs are invisible in Prism Element but are connected and seem functional. I also notice that the AHV version in the upper-left corner of the Prism dashboard does not appear to have been updated.

 

I recently updated my BIOS and now have the option to enable IOMMU; I will make sure that it is enabled.

 

 


# uname -a
Linux NTNX-Node01 6.1.92-10.0.1s2c14r5.el8.x86_64 #1 SMP Wed Dec 11 12:00:00 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

# ls /sys/class/iommu
dmar0  dmar1

# grep 'address sizes' /proc/cpuinfo

address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual
address sizes   : 39 bits physical, 48 bits virtual



 

Hi SteveCooperArch, thanks for your insight and info. Can you tell me how or where to set these settings (e.g. maxmem, phys-bits)? I'm still new to working at the Nutanix command line. Where is the script located so I can edit it?

 

Thank you!


You can see what the settings are in the AHV log at /var/log/libvirt/qemu/UUID.log; it records the qemu-kvm command with all of its command-line runtime options.
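For example (UUID.log stands in for the VM's actual log file, and the exact option layout can differ between AHV builds), something like this pulls out the two settings being discussed here:

grep -o -e '-cpu [^ ]*' -e 'maxmem=[^, ]*' /var/log/libvirt/qemu/UUID.log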

I worked out that there is a Python script called ‘qemu-kvm-frodo’ in /usr/libexec that is called to initiate the VM boot, and I added a replace function in the qemu_argv section to replace maxmem=4831838208k with maxmem=16777216k.
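A minimal sketch of that kind of edit, purely illustrative (the function name and structure here are made up; the real qemu-kvm-frodo internals differ and vary between AHV builds):

import re

def clamp_maxmem(qemu_argv, new_maxmem="16777216k"):
    # Illustrative helper, not the real qemu-kvm-frodo code: rewrite any
    # maxmem=<n>k argument destined for qemu-kvm so the ceiling fits inside
    # a 39-bit physical address space, e.g. 'maxmem=4831838208k' (~4.5 TiB)
    # becomes 'maxmem=16777216k' (16 GiB).
    return [re.sub(r"maxmem=\d+k", "maxmem=" + new_maxmem, arg) for arg in qemu_argv]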

 

This is absolutely not something you should do as it’s interfering with internals you shouldn’t be interfering with! This is not a supported configuration :)



 

Hi SteveCooperArch, is there an example of this script? I tried it this way, unsuccessfully:

acli vm.update teste extra_flags=size=4096M;qemu_path=



 

I found out how to run it, but I couldn't find Frodo. The indicated location would be:

Access the Frodo script:  /usr/local/nutanix/bin/frodo

 

frodo vm.qemu_param_set <vm_uuid> <parameter> <value>


DM me for more info!



 

Well, my cluster is a home lab, and I ended up recreating it while keeping AOS 6.10.1.6 and AHV el8.nutanix.2023, even with the message that the AHV is not compatible with the AOS version, and VMs boot without problems. Today I ran LCM, which was updated to version 3.2 and released AHV el8.nutanix.20230302.103014, compatible with my AOS 6.10.1.6. I have the impression that LCM 3.2 might solve the problem of starting VMs on AOS 7 and AHV 10, but I'm not going to take the risk now; I'll wait for a release newer than AOS 7.0.1.5 and AHV 10.0.1.


I’ve just come across this issue on my new CE cluster, built and upgraded this weekend. I figured out how to change the maxmem setting; my VMs now start but then hang on boot with ‘the guest has not initialised the display driver’ and go no further. So I guess there are other issues in my case.


Same experience.

I built a fresh cluster, got it to Health “Good”, created some VMs, and things mostly worked great. Then I installed the latest version of everything via LCM and now none of my VMs will start. I had to warn my employees not to upgrade their own clusters, or else all of our testing systems would be offline.

  • AHV hypervisor : 10.0.1.1
  • AOS : 7.0.1.6
  • FSM : 5.1.1
  • Foundation : 5.9.0.2
  • Foundation Platforms : 2.18.0.2
  • NCC : 5.1.2
  • Security : security_aos.2022.9

I also noticed that prior to updating, all of the CVMs for my cluster showed up in the VMs list in the web UI, but now only one of them ever does.

Still troubleshooting…

Right now my cluster shows all green in the Health view, but I can’t start any VMs, even if I turn the RAM down to 256MB and 1 core.

 

<acropolis> vm.on ntx-deb12-lv02

ntx-deb12-lv02: HypervisorError: internal error: QEMU unexpectedly closed the monitor (vm='ad6b74c4-3c2c-44c0-808d-1a250d8f7d93'): 202d...]

----- ntx-deb12-lv02 -----

HypervisorError: internal error: QEMU unexpectedly closed the monitor (vm='ad6b74c4-3c2c-44c0-808d-1a250d8f7d93'): 2025-07-30T22:32:38.848037Z qemu-kvm: -blockdev {"driver":"iscsi","portal":"127.0.0.1:3261","target":"iqn.2010-06.com.nutanix:vmdisk-7af05108-e48a-419f-9637-152b0c34019f","lun":0,"transport":"tcp","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}: info: libiscsi: connection established (192.168.5.1:33558 -> 192.168.5.254:3261) 2iqn.2010-06.com.nutanix:vmdisk-7af05108-e48a-419f-9637-152b0c34019f]

2025-07-30T22:32:38.848418Z qemu-kvm: -blockdev {"driver":"iscsi","portal":"127.0.0.1:3261","target":"iqn.2010-06.com.nutanix:vmdisk-7af05108-e48a-419f-9637-152b0c34019f","lun":0,"transport":"tcp","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}: info: libiscsi: login successful uiqn.2010-06.com.nutanix:vmdisk-7af05108-e48a-419f-9637-152b0c34019f]

2025-07-30T22:32:39.432628Z qemu-kvm: Address space limit 0x7fffffffff < 0x4c47fffffff phys-bits too low (39): 61


Hi, regarding this issue: based on the log snippet from uhura.out showing:

qemu-kvm: Address space limit 0x7fffffffff < 0x4bcbfffffff phys-bits too low (39): 61

Root Cause Analysis:

This is a QEMU/KVM-level error. It indicates that AHV’s hypervisor stack (which wraps QEMU under the hood) tried to start a VM, but the CPU’s reported physical address space width (phys-bits) was lower than needed. Here’s what’s going wrong:

  • phys-bits too low (39) means QEMU sees the CPU exposing only 39 physical address bits (512 GB of addressable memory), but the VM being launched requires more than that, i.e., closer to the 47–52 bits typical of modern server CPUs.
  • The 0x4bcbfffffff figure is the address space the VM is requesting, which is larger than the 0x7fffffffff (512 GB) range your CPU (or its BIOS/firmware) exposes as its usable physical address range.

This is not a Nutanix error, but a CPU/microcode/BIOS-level virtualization compatibility constraint.

 

Recommended Actions:

1. Check for BIOS Configuration Limiting VMX or Physical Address Extension

  • Go into BIOS and verify these settings:
    • VT-x (Intel Virtualization Technology): Must be enabled.
    • VT-d (for DMA remapping/IOMMU): Preferably enabled, though not strictly required here.
    • Intel TXT / Trusted Execution Technology: Disable it (can interfere in consumer boards).
    • Memory Remapping: Enable if present.
    • TME / MKTME (Total Memory Encryption): Disable (can break QEMU).
    • SMEP/SMAP or SGX: Try disabling to rule out conflicts.

Consumer boards (like Intel NUCs) often ship with virtualization partially disabled or gimped under certain firmware revisions.

2. Update BIOS Firmware

  • NUC10FN series had earlier firmware bugs causing VT-x or memory features to be improperly exposed.
  • Ensure you're on the latest BIOS. Many reports show v0063 or newer fixing similar address-limit issues.

3. Ensure QEMU in AHV Has Correct CPU Flags

  • Some NUC CPUs misreport or mask CPU flags like LM (long mode), PAE, or NX.
  • AHV on unsupported hardware may fall back to a minimal virtual CPU config, causing errors at VM boot if not enough address bits are exposed.

While Nutanix Community Edition (if that's what you're using) tries to abstract this, on one-node DIY setups, these limitations aren't masked.

 

Validation:

Run this from the AHV shell:

grep -E 'vmx|svm|lm|nx' /proc/cpuinfo

Check for:

  • vmx: must be present (Intel virtualization)
  • lm: must be present (long mode = 64-bit)
  • nx: no-execute support, expected by modern guest OSes
  • pae: sometimes implied via flags

Then run:

grep -i directmap /proc/meminfo

If it’s truncated to 39-bit or lower mappings, it confirms your system’s firmware or AHV kernel isn't seeing full address space.

 

Assumptions to Question:

  • “Others have used this hardware without issue” – likely on different BIOS versions or with Community Edition (CE) where QEMU config may differ.
  • “InternalException is from Nutanix” – in this case, it’s a wrapper around a low-level QEMU error.

 

Alternatives to Consider:

If you can’t change BIOS behaviour:

  • Test AHV CE on same NUC (boot from USB stick) to confirm QEMU behaviour difference.
  • Try adding the qemu-override.conf (if root access is available on AHV) to reduce max_phys_bits, but this is unsupported in production builds.

 

 


Hi Jagoulden, this is something that worked on AHV 2023.x, but after the upgrade to AHV 10.x, VMs stopped booting and produced this error. So it's not related to BIOS settings; something changed in the AHV code. I have seen references in other forum posts hinting that Nutanix now only supports ‘server-grade hardware’, which I take to mean Xeon CPUs that can address more memory.

I was able to fix the maxmem setting based on hints from SteveCooperArch above; the VM would then boot, but I then hit another error.

I have asked other Nutanix people for something more concrete about the code change that removed support for consumer CPUs but no response yet.


Hi Wasabi99,

That’s great that you’ve narrowed the issue down to a change introduced between AHV 2023.x and AHV 10.x, specifically in how QEMU (via AHV’s KVM layer) calculates or enforces phys-bits / address-space width and default VM memory-mapping behaviour.

So just to break this down based on what Nutanix actually did and what’s currently supported:

What Changed in AHV 10.x?

Nutanix AHV 10.x Enforces "Server-Class CPU Compliance"

Starting with AHV 10.x, Nutanix hardened the VM launch pipeline to:

  • Explicitly validate CPU capabilities against enterprise server-grade specs (e.g., Xeon or EPYC-class),
  • Enforce minimum physical address bits (typically ≥ 46) during VM memory allocation.

This is not a documented behaviour change in public-facing PDFs, but it's implied in internal support KBs and developer-level discussions.

Consumer-grade Intel CPUs (like your i7-10710U) only expose 39–42 physical address bits depending on firmware microcode. AHV 10.x+ now expects 46–52 bits, in line with Xeon-class processors.

 

Root Technical Impact

In AHV 10.x:

  • qemu-kvm now fails fast when attempting to allocate maxmem values above what your CPU can theoretically address.
  • The error:
    qemu-kvm: Address space limit 0x7fffffffff < 0x4bcbfffffff phys-bits too low (39): 61

This is not just a warning; it's now a hard stop, unlike AHV 2023.x, which was more lenient or silently clamped memory ranges.

 

You mentioned:

“I was able to fix the maxmem setting based on hints from SteveCooperArch above…”

This likely refers to modifying the VM definition to include a clamped maxmem (e.g. ≤ 512GB), avoiding the phys-bits overflow.

Example from QEMU usage:

-m 8G,slots=2,maxmem=32G

...forces the VM to stay within a 39-bit addressable range.

However, you're now hitting secondary compatibility errors, likely in the form of:

  • VM migration failures (if ADS is enabled),
  • Advanced CPU feature mismatches (e.g., lack of AVX, TSX, etc.),
  • Unimplemented VM features due to missing CPU flags.

 

Nutanix Position (Vendor-Supported)

As of AHV 10.x, Nutanix explicitly only supports "server-grade hardware" for:

  • AHV host CPU compatibility,
  • Predictable QEMU memory handling,
  • Full feature parity across cluster nodes.

This is enforced in:

  • Foundation imaging tools,
  • CVM platform validation at boot,
  • QEMU flags injected by libvirt/uhura during VM launch.

The Nutanix docs don’t list consumer CPUs as valid hosts, and CE (Community Edition) remains the only semi-blessed exception.

There is no way to override these validations in a fully supported, production build of AHV 10.x. Nutanix Support will not provide a patch, override, or regression toggle.

 

What Are Your Options?

If you want to stay on AHV 10.x:

  • Patch VMs to lower maxmem manually for each.
  • Limit to smaller memory footprints, ideally ≤ 16GB per VM.
  • Disable ballooning and dynamic memory reservations.

But expect to continue hitting issues due to subtle QEMU/KVM flag incompatibilities.

If you're testing/dev only:

  • Consider reverting to AHV 2023.x (AOS 6.6 or 6.7) where QEMU was more forgiving.
  • Or switch to Nutanix CE, which has relaxed CPU checks and allows more override flexibility.

If you need long-term compatibility:

  • Move to Xeon-D or Xeon-E3 based NUCs, like the NUC Enthusiast series or Supermicro MicroCloud nodes (widely used in labs).

 

You're correct that this isn't a BIOS issue; it's design hardening in AHV 10.x. The InternalException masks what is really a low-level qemu-kvm memory init failure, and it's now unrecoverable on consumer CPUs.

Your workaround on maxmem works, but it isn't sustainable unless you're using AHV in an unsupported/lab environment, which Nutanix won't provide roadmap support for.

 


Thanks for that explanation, I appreciate it - that’s the first time someone has clearly articulated the changes to me :)

Sadly, a lot of us do run consumer CPUs in our CE clusters, so this makes Nutanix a non-option now. I’ve already reformatted my cluster with another hypervisor, as I need a functioning lab environment. I did want to use it for myself and my team to learn Nutanix, as well as for general lab purposes, but we don’t have access to the right hardware.


You're welcome

AHV 10.x makes CE practically unusable on a wide range of consumer CPUs. The hard enforcement of phys-bits ≥ 46 is a silent breaking change that affects most self-hosted labs, especially those on NUCs, Ryzen, and mobile Core i7s.

This issue isn’t documented in the public Nutanix CE docs (yet), and Nutanix Support officially does not back CE environments, so it has left users like yourself at a dead end.

 

But there are still viable alternatives if you still want to work with Nutanix:

1. Stick to AHV 2023.1.x or 2023.3.x (AOS 6.6.x/6.7.x)

  • These versions are the last ones to tolerate consumer CPUs, including 39–42 phys-bit CPUs.
  • CE 2.0 builds with AHV 2023.3.x still work well in this mode.
  • Reverting to an older Phoenix ISO and avoiding LCM upgrades can keep you running.

Yes, it's “frozen in time” but for training, workflow exploration, and API familiarity, it still gives you real Prism Central and AHV features.


Just a heads-up that we are aware of this, and an explanation of what’s going on is here: Please Hold Off on AHV 10 upgrades on Non-Enterprise Grade Hardware