A customer would like to run Microsoft Windows Server Datacenter 2022 in an AHV environment, and we are looking at licensing for this. We want Datacenter because of the unlimited VMs. We have recently heard that this changed in the past year: only retail or perpetual licensing can run on Hyper-V, and you can no longer run it on AHV or VMware. Supposedly this is because the host needs to communicate with some licensing portal. So to run on AHV we have to purchase Open Value with SA, which is considerably more money and is paid as a yearly subscription. They gave us this description:

Windows Server 2022 Datacenter 2-Core License + SA (3 Yr)
SKU: 9EA-00643
Open Value with Software Assurance (3 Years)
(3) 24-Core Hosts & Unlimited VMs
Nutanix Hypervisor
Fulfilled with MAK/KMS License
Downgrade Rights to 2019 & 2016

Can anyone confirm this? If true, this is crap and Microsoft is pushing their stupid hypervisor.
Our current Nutanix infrastructure has been a bit neglected for a while; AOS, PC, NCC, etc. have been kept up to date, but not the firmware/BIOS or hypervisor, so I need to get these done.

The current infrastructure is nothing massive: two clusters, one 4-node and one 3-node, running on NX-8035-G6 and NX-8035-G7, both on AOS 5.20.4. We have one PC running the latest version, sitting on a separate management vSphere cluster along with vCenter. The hypervisor is currently ESXi 7.0 Update 2a (build 17867351) on all hosts; this is Standard Edition, so NO DRS ☹

My plan is to upgrade the firmware and BIOS etc. via LCM on the smaller of the two clusters. I have freed up one host, so there aren't any VMs on it apart from the CVM. I take it I can just run LCM to do the firmware updates on that host and let it do its thing without having to do anything else; it'll shut down the CVM, put the host into maintenance mode, and restart it? Once that is complete I want to upgrade ESXi on that host. I have checked the compatibil
I changed the initial passwords, but I keep getting a "host using default password" warning. The same happens with NCC checks.

CVM - nutanix
AHV - root
IPMI - USERID

I have changed the passwords for the three accounts above. Which account's initial password needs to be changed to resolve this symptom? Please check. Thank you.
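For anyone hitting the same warning, here is a minimal sketch of changing the usual default passwords and re-running the relevant check. The NCC check name and the IPMI user-ID slot are assumptions from memory; verify them against your AOS version and hardware.

```shell
# On each CVM, logged in as nutanix: change the 'nutanix' user password
passwd nutanix

# On each AHV host (SSH as root): change the root password
passwd root

# IPMI password via ipmitool from the host.
# User ID 2 is a common default slot -- an assumption; confirm with
# 'ipmitool user list 1' first.
ipmitool user set password 2 '<new-password>'

# Re-run the default-password check from a CVM
# (check name assumed from NCC's system_checks plugin set)
ncc health_checks system_checks default_password_check
```

Note that NCC also checks other built-in accounts (for example `admin` on the AHV host on some releases), so the warning can persist if any one of them still has its factory password.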
Hello,

We recently purchased 2 x NX-8235N-G8-CM (a total of 4 nodes), which will be used exclusively as storage servers. I am having some trouble performing the initial configuration and am wondering if I could get some support, as I am stuck. This would be my first time setting up a Nutanix device, and I'm not sure where to begin. I did try to use Nutanix Foundation, but had no luck while trying to follow what was done here: I do have a few questions regarding Foundation, though.

What I did:
- Plug the new devices' IPMI/data ports into a dummy switch (the IPMI/host IPs will be on the same subnet)
- Plug my laptop, with a prod IP address (i.e. 10.8.1.0/24), into the switch
- Launch Nutanix Foundation for Windows
- Follow what was done in the above videos with my IP addresses

Note: I am not running any DHCP, nor did I set any static IPs for IPMI/AHV/CVM. I read that AHV is already installed on devices sold in the US; however, I'm not sure whether the CVM is already installed. The setup process does ask for manual entries for IPMI/
Hi all,

We have a server, an HPE DL360-8 G10 with IPMI enabled, with an M.2 SATA stick (875317-B21) + Smart Array E208i-a SR Gen10 + 4x HPE SATA SSDs, and I'm trying to install AOS 6.5 with AHV, but it fails with the error "CRITICAL svm_rescue:926 No suitable SVM boot disk found". I would very much appreciate it if anyone could share a way to fix this issue.
Veeam Backup for Nutanix Files SMB share backup - from snapshot path?

Currently we are backing up the Nutanix file shares using Veeam Backup (NAS SMB file share). The current setting is "Backup directly from file share". This option skips some of the locked files, which is expected. I would like to change it to "Backup from storage snapshot at the following path:". What would be the snapshot path for Nutanix Files SMB shares? What do I need to enter here? Can someone suggest, so that I can avoid the skipped-files issue on the backup jobs?
Hello everybody,

I started using storage VM migration ... vm.update_container VMNAME container=test-container ... to move VMs from an old to a new container. The big VMs have been moved. I would like to delete the old storage containers, but space usage is still high, although the VMs have been moved to another storage container. I am aware of the directories .acropolis, .file_repo and .ngt, and those directories are empty now.

How can I reclaim space in a storage container when the VMs have been moved and the Recycle Bin has been emptied? No VMs have been deleted, only moved, so the 36-hour rule for space reclamation doesn't seem to apply here? Any thoughts?

Regards,
Didi7
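For anyone following along, this is the kind of workflow in question; `myvm` and the container names are placeholders, and the Recycle Bin CLI name/flags are an assumption worth verifying against your AOS version (the Recycle Bin can also be inspected from Prism):

```shell
# From a CVM: move a VM's vdisks to the new container (acli, as in the post)
acli vm.update_container myvm container=test-container

# Check logical/physical usage per container to see whether space was released
ncli container list

# Verify the Recycle Bin is actually empty
# (CLI invocation assumed -- confirm in the docs for your release)
recycle_bin -operation view
```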
Dear all,

New to Nutanix Frame here, so please let me know if I've left out any essential information. We are running a non-persistent account on Frame. We noticed that AD objects related to instances that have been removed from the Frame account, e.g. by manually terminating them from the Status page, remain in AD. The expectation would be that these would be purged automatically. We used the Nutanix Frame AD Helper to verify that the service account does have the necessary permissions. Also, we are seeing that new machines are being created without issues.
Hello,

I would like to install NGT on a Linux CentOS 7 VM. I created one more CD-ROM drive on the VM I want to install it on, for mounting. I know that in Prism you select Manage Guest Tools and then proceed with the installation, but if it stops at the following steps, where should I check? I'd appreciate it if you could give me an answer.
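For reference, the manual in-guest steps usually look like the following sketch; the device name and the installer path are assumptions (list the mounted ISO to confirm the actual layout):

```shell
# Inside the CentOS 7 guest, after enabling NGT for the VM in Prism
# (Manage Guest Tools -> Mount Nutanix Guest Tools):
sudo mount /dev/sr0 /mnt            # /dev/sr0 assumed to be the NGT CD-ROM
ls /mnt                             # confirm the installer layout first
sudo python /mnt/installer/linux/install_ngt.py   # path assumed from the NGT ISO
```

If the mount fails, check in Prism that the NGT ISO is actually attached to the VM's CD-ROM; if the installer fails, its output usually points at a missing dependency on the guest.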
A long shot, I know, but I'm wondering if anyone ever solved the issue of VSS backups failing after the June 2022 update on Hyper-V 2012 R2 hosts. Veeam has this KB article that references a workaround/fix: https://www.veeam.com/kb4333 However, it's not at all clear how I would make a domain account an administrator/backup operator on the file server that hosts the SMB 3.0 share (the CVM…).
Hi guys,

I'm currently working on Ansible to deploy K8s clusters through Karbon, and I've hit a fatal issue. Ansible is version core 2.12.9, PC is 2022.1, the nutanix.ncp collection has been tested in versions 1.6.0 and 1.7.0, and there's absolutely no traffic denied between Ansible and PC (to sum up, everything is open). I use a playbook based on the example provided in the ansible-doc (I also tried the example itself, and I got the same errors in all cases).

###EDIT: I can't add code in this post; I get a "Something gone wrong" banner every time.###

Here are the logs from when the playbook fails (after ~5 min running; from PC, the ETCD deployment gets stuck at 8%):

###EDIT: I can't add code in this post; I get a "Something gone wrong" banner every time.###

Moreover, K8s cluster deployment through the PC GUI works smoothly. Does anyone have an idea how to fix the "failed to deploy ntnx dvp" error? Thanks a lot :)

Gael
I have some workloads that were captured with Live Optics for some nodes running Microsoft Hyper-V. What would be the most efficient way to upload these workloads to Nutanix Sizer? What official method does Nutanix have for handling these scenarios? Is there official Nutanix documentation where one can learn how to upload Live Optics workloads?
Hello everybody,

I have an issue where the PC web console refuses the login as if the user/password were wrong, but the same user/password works fine over SSH. After troubleshooting, I found the PC time zone is set to PST rather than UTC from the CLI, while in the VM configuration it's UTC. When I change it to UTC from the CLI, the web console accepts the login, but after a while it reverts to the PST timezone. I have NTP configured to a local NTP server.
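For reference, this is the kind of check/fix sequence involved; the `ncli` subcommand names are from memory and worth verifying against your PC/AOS version:

```shell
# On the PC VM: check the current cluster configuration and NTP state
ncli cluster info
ncli cluster get-ntp-servers

# Set the cluster time zone to UTC
# (subcommand name assumed -- confirm with 'ncli cluster' help output)
ncli cluster set-timezone timezone=UTC

# Compare the OS clock against the configured zone
date
```

If the zone keeps reverting, it is worth checking whether something (an automation tool, or a second admin) is re-applying the old setting, since the cluster-level zone should persist across the services.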
Hi community,

We want to expand the cluster by adding an additional node. The server model is HPE ProLiant DX360; currently 4 nodes are running, and we want to add one node of the same hardware model. The hardware is factory AOS-imaged, and AHV is the hypervisor on the current cluster. We want to know what physical network connectivity is required to add the node and image AHV. On the new node, IPMI is connected and the two 10G ports are connected, but we are unable to discover the node from Prism Element. Is it required to connect the node's 1G port to the switch as well for imaging?
Howdy,

I am trying to envision what this would look like in a Nutanix environment, for planning purposes. Currently we have two datacenters. We have one large vCenter cluster (Cisco UCS, synchronous Nimbles, a Nexus vPC pair, and an L2 stretch on our own dark fiber with multiple 10 Gb links) stretched across both DCs. This allows us to vMotion all VMs from one side to the other, take down half our hosts, or even a whole DC, without any issues.

Thinking about what this would look like if we were full Nutanix and AHV, I am pretty sure this would not work as one large cluster, since we couldn't take down one DC, as that would take out half of the hosts/storage. Correct? If so, I am thinking this would require at least a cluster at each DC, and then Metro Availability for the real-time sync. Does that sound about right from a high level? Thanks