License-Free Virtualization for Your Enterprise
- 369 Topics
- 1,276 Replies
Can someone chime in on how licensing would work when using AHV? For example, we currently have Windows Server 2012 R2 Datacenter edition on each host, which gives us the ability to run unlimited licensed Windows Server VMs. I know that AHV doesn't require a license, but does that mean a license for each Windows virtual server will need to be purchased, thereby invalidating our Datacenter licensing?
Hello, we are buying a Nutanix cluster of 3 NX-3170-G8 nodes, but management is also intending to buy vSphere 7 licenses, with the idea of replacing the hypervisor in the Nutanix boxes. I would like to know whether this can be done, and if so, what documents or procedures must be followed? Thank you in advance.
Hi together, is there a way to automatically provision Linux servers on the Nutanix platform with Red Hat Satellite? There doesn't seem to be a Nutanix provider for Satellite, and this link covers only Red Hat subscription management: Simplified Subscription Management for AHV with Red Hat Satellite Server. Has someone tried the "Libvirt" provider in Satellite with Nutanix? Kind regards, Steve
Hello everyone, someone before me in my current job created a couple of VMs in AHV with the IDE bus type for their disks instead of SCSI. From a quick glance at the documentation I found this: AHV does not leverage a traditional storage stack like ESXi or Hyper-V; all disks are passed to the VMs as raw SCSI block devices. So my question is, why would someone decide to create IDE disks? Am I missing something? I know it's common practice for CD-ROM drives, but not for storage disks. No wonder I noticed really bad performance on those servers recently.
OK, yesterday one of our many Nutanix clusters was upgraded to 5.15 LTS, and I started to create a new Windows Server 2016 template using a UEFI configuration. Now I am stuck at a display resolution of 1280x1024, and changing it is not possible because the setting is grayed out. Nutanix Guest Tools from 5.15 LTS and VirtIO 1.1.5 are installed. Is this a known bug, and does a workaround exist? What about 5.16 STS? Is it fixed there? Thanks for any reply. Regards, Didi7
Hi, I am confused about the roles played by Foundation and Phoenix during an LCM firmware upgrade, and I am not finding a deep-dive article explaining this end to end:
1. Once the Phoenix ISO is created and mounted to the host along with the upgrade bundle, can Phoenix run independently and complete the upgrades, or does it need help from Foundation for tasks like rebooting the host?
2. During an LCM upgrade, since the node undergoing the upgrade will have all VMs powered off, including the CVM, will it get help from Foundation on a remote CVM to orchestrate the upgrade on this host?
3. Is putting the host in maintenance mode handled by LCM or by the Foundation service?
4. Can you please give a high-level overview of how the Phoenix mechanism works? Since the hypervisor installer is itself a separate ISO that needs to be mounted, I am wondering how Phoenix handles multiple ISOs. Does it mount them simultaneously with the upgrade ISO?
Did you ever wonder what the blinking green light or the solid amber light on a network card's LED indicates? Are you confused between a blinking green light and a solid green light? Different NIC manufacturers use different LED colors and blink states, and not all NICs are supported on every Nutanix platform. To learn what each blink state means on the different manufacturers' LEDs, check out this article here.
Enable or disable console support for a VM with only one vGPU configured. Enabling console support for a VM with multiple vGPUs is not supported. By default, console support for a VM with vGPUs is disabled.

To enable or disable console support for a VM with vGPUs, do the following:

1. Run the following aCLI command to check whether console support is enabled or disabled for the VM with vGPUs:

acli> vm.get vm-name

Where vm-name is the name of the VM for which you want to check the console support status. The step result includes the following parameter for the specified VM:

gpu_console=False

Where False indicates that console support is not enabled for the VM. This parameter is displayed as True when you enable console support for the VM. The default value of gpu_console is False, since console support is disabled by default.

Note: The console may not display the gpu_console parameter in the output of the vm.get command if the gpu_console parameter was not previously enabled.

2. Run the following…
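The check step above can be wrapped in a small shell helper. This is only a sketch: `acli` is available only on a Nutanix CVM, `vm-name` is a placeholder, and the `gpu_console=...` output format follows the excerpt above.

```shell
# Sketch: report whether console support (gpu_console) is enabled for a vGPU VM.
# Assumes it runs where `acli` exists (a Nutanix CVM); "vm-name" is a placeholder.
gpu_console_state() {
  # vm.get prints the VM's configuration; filter out only the gpu_console flag.
  acli vm.get "$1" | grep -o 'gpu_console=[A-Za-z]*'
}

# Usage: gpu_console_state vm-name
# Prints "gpu_console=False" while console support is disabled (the default).
```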
AHV hosts support GPU pass-through for guest VMs, allowing applications on VMs direct access to GPU resources. The Nutanix user interfaces provide a cluster-wide view of GPUs, allowing you to allocate any available GPU to a VM. You can also allocate multiple GPUs to a VM. However, in a pass-through configuration, only one VM can use a GPU at any given time.

Host Selection Criteria for VMs with GPU Pass-Through

When you power on a VM with GPU pass-through, the VM is started on the host that has the specified GPU, provided that the Acropolis Dynamic Scheduler determines that the host has sufficient resources to run the VM. If the specified GPU is available on more than one host, the Acropolis Dynamic Scheduler ensures that a host with sufficient resources is selected. If sufficient resources are not available on any host with the specified GPU, the VM is not powered on. If you allocate multiple GPUs to a VM, the VM is started on a host if, in addition to satisfying Acropolis Dynamic Sched…
Memory and CPUs are hot-pluggable on guest VMs running on AHV. You can increase the memory allocation and the number of CPUs on your VMs while the VMs are powered on. You can change the number of vCPUs (sockets) while the VMs are powered on. However, you cannot change the number of cores per socket while the VMs are powered on.

You can change the memory and CPU configuration of your VMs by using the Acropolis CLI (aCLI), Prism Element (see Managing a VM (AHV) in the Prism Web Console Guide), or Prism Central (see Managing a VM (AHV and Self Service) in the Prism Central Guide).

Memory OS Limitations

On Linux operating systems, the Linux kernel might not bring the hot-plugged memory online. If the memory is not online, you cannot use the new memory. Perform the following procedure to bring the memory online.

1. Identify the memory block that is offline.

Display the status of all of the memory:
$ cat /sys/devices/system/memory/memoryXXX/state

Display the state of a specific memory block:
$ gr…
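The procedure above (find offline memory blocks, then bring them online) can be sketched as a small shell loop. This is a sketch only: it must run as root on the Linux guest, and the sysfs root is taken as a parameter purely so the logic can be exercised against a fake directory tree instead of the real /sys.

```shell
# Sketch: bring any offline hot-plugged memory blocks online (run as root).
# The sysfs root is a parameter so the loop can be tested against a fake tree;
# in real use it defaults to /sys/devices/system/memory.
online_all_memory() {
  sysmem="${1:-/sys/devices/system/memory}"
  for state in "$sysmem"/memory*/state; do
    [ -f "$state" ] || continue        # no matching memory blocks
    if grep -q offline "$state"; then  # block is offline: bring it online
      echo online > "$state"
    fi
  done
}

# Usage on the guest (as root): online_all_memory
```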
I downloaded the Symantec Messaging Gateway (Brightmail) OVA file and uploaded the VMDK file as an image in my Prism. I created a virtual machine and attached the disk, but it is not booting. The VirtIO driver is not installed; I tried to install it but could not, because Brightmail root access is restricted, and I never found a way around that. I talked to Symantec support, and the result: they said only VMware and Hyper-V are supported. I tried many approaches (OVA, backup/restore...) but without success. I can't run Brightmail on AHV; it doesn't boot. I seek help from those who have succeeded at this. Thank you.
A customer is currently running OVM on SPARC hardware. The guest OS on OVM is Solaris, and Oracle Database runs on Solaris. Will Nutanix AHV support Solaris? Is there a way to move to Nutanix AHV, and what would be the most suitable guest OS: Oracle Linux or one of the other Oracle-certified guest OSes?
We recently deployed Nutanix on a 3-node HPE DX380 cluster. After 2-3 days, a physical disk suddenly showed as a bad disk, with no flashing LED on the physical disk. When I log in to the iLO port, the storage is good with no errors, but when I log in to Prism, the disk is red and shows "The drive has failed". When I unplugged it, plugged it back in, and re-partitioned it, it seemed to return to normal, but I am concerned that a new server shows this error suddenly, so I want to open a Nutanix case. However, when I create a case and enter the HPE serial number, it is not accepted, so I would like to ask how to create a Nutanix case for the HPE DX380. Many thanks.
Hello! I’m new to the Nutanix community. Loving the tech so far, but... Has anyone used Packer to deploy Windows from within AHV? I’ve seen plenty of how-tos on creating a VM image that can be uploaded to AHV, but I want to do the entire process within the cluster. Packer has a QEMU builder, but I am not sure how to tell it to build on another VM within the cluster. I’m trying to replace my MDT image deployment process with something that can do Windows and Linux and is more RCS/git friendly. I’m not married to Packer, so I’ll take suggestions on other possibilities. And if more detail is needed on what I’m doing currently or the desired outcome, let me know. Thanks in advance!
Is there a way to implement FT at the VM level on AHV (i.e. similar to how it’s done on ESXi)? I’ve seen a couple of different posts that point to preferred technologies like software HA and AGG (not that I know what that is or how to do it), but we currently have a requirement for a legacy-type VM to be always available, and the only method we have at the moment is having ESX as the hypervisor. This is for a future project, so I am not locked into current versions. Cheers, Paul
Hi fellas, does anyone have a clear answer on the real cost of AHV? I’m told that AHV is free, but does this refer only to Nutanix Community Edition? What about AOS? When purchasing a Nutanix node/block/cluster, these come pre-installed, but the cluster essentially has no ‘smarts’ without the CVM, which runs AOS. Referencing this Licensing Quick Reference Guide, it looks like AHV is included with AOS, but AOS’s licensing metric is charged per node. Am I reading this correctly? Is it possible to run AHV without AOS? Thanks, JS
My current Nutanix cluster version is 5.18, and vCenter and ESXi are at 7.0 U1. I would like to upgrade vCenter and ESXi to 7.0 U2. Which Nutanix cluster version supports both that vCenter and ESXi? Also, the Prism versions are: Version pc.2020.9.0.1, NCC Version: 220.127.116.11, LCM Version: 18.104.22.168. Do Prism and the other components need to be upgraded as well?
Hi there community, I’m hoping that someone can give me some guidance. We recently acquired a facility with some Nutanix clusters deployed, and this is my first time working with Nutanix. I have access to the vSphere hosts but no idea about the Nutanix side of things. Trying to activate the support portal with any and all serial numbers I can find just gives me an invalid serial number message, which means I can’t open a support ticket to ask about the serial numbers. I see each of the host machines running a Nutanix Controller VM, but I’m guessing there is a central management console tied to the hardware (NX?). A spreadsheet from the seller lists some hardware/enclosure serial numbers, but under serial number it just says to refer to Prism. I’m trying to hunt down someone at the seller to provide more details and make sure things get transferred, but is there anything I can do or look at in the meantime to figure this out?
What is the status of Credential Guard on Nutanix VMs? We are running AOS 5.20. I have created a new VM with UEFI, Secure Boot, and Credential Guard enabled, but I can’t get it to work. Credential Guard is enabled with a GPO but still will not run. When I look at Device Security, it says “Standard hardware security not supported”, and there is no compatible TPM shown in tpm.msc. The OS I’m testing on is Microsoft Windows Server 2019.
We have a UTM with 6 interfaces that go to separate subnets. Each subnet has a ToR switch, and the UTM handles the routing. We are installing a new 4-node Nutanix block and virtualizing the physical servers. Assuming that we have 6 physical servers, each on a separate subnet, what would our network diagram look like if we included the Nutanix block? Would we need another switch with the 6 subnets VLANed, i.e. one connection from the block, and then, based on the traffic destination, send the traffic out through the VLAN port to the ToR switch in the destination subnet?