License-Free Virtualization for Your Enterprise
A customer would like to run Microsoft Windows Server 2022 Datacenter in an AHV environment, and we are looking at licensing for this. We want Datacenter because of the unlimited VMs. We have recently heard that this changed in the past year: only retail or perpetual licensing can run on Hyper-V, and you can no longer run on AHV or VMware, because the host needs to communicate with a licensing portal. So we would have to purchase Open Value with SA to run on AHV, which is considerably more money and is paid as a yearly subscription. They gave us this description:

Windows Server 2022 Datacenter 2-Core License + SA (3 Yr)
SKU: 9EA-00643
Open Value with Software Assurance (3 Years)
(3) 24-Core Hosts & Unlimited VMs
Nutanix Hypervisor
Fulfilled with MAK/KMS License
Downgrade Rights to 2019 & 2016

Can anyone confirm this? If true, this is crap and Microsoft is pushing their stupid hypervisor.
I wanted to start a discussion on our new Flow Virtual Networking VPC feature, based on this video that TME Eric Walters created. You can use VPCs to build isolated overlay networks, enable self-service network creation, or provide multi-tenant separation. Have you tried out VPCs in your environment?
Hello everybody, I started using Storage VM migration (acli vm.update_container VMNAME container=test-container) to move VMs from an old container to a new one. The big VMs have been moved. I would like to delete the old storage containers, but space usage is still high, although the VMs have been moved to another storage container. I am aware of the directories .acropolis, .file_repo and .ngt, and those directories are empty now. How can I reclaim space in a storage container when the VMs have been moved and the Recycle Bin has been emptied? No VMs have been deleted, only moved, so the 36-hour rule for space reclamation doesn't seem to apply here. Any thoughts? Regards, Didi7
We're running 2 clusters that are pretty much configured the same, both on Nutanix hardware running ESXi 7.0.2 and AOS 5.20.4 LTS. One cluster is totally fine, no issues, but on the other you cannot create a virtual disk greater than 2 TB! The issue only came to light when we had to move a VM off to another non-Nutanix host for a few days; when we went to vMotion it back, it errored, complaining about the disk being greater than 2 TB. It is: it's nearly 6 TB, as it's a FortiAnalyzer. I'm pretty sure that when the VM sat on the Nutanix cluster its disk was greater than 2 TB, and it was thin provisioned with a capacity of 7 TB when it was created! So you can't move anything greater than 2 TB to any of the datastores, and you also can't create a new disk greater than 2 TB. Also, if you look at the datastore details, it says maximum VM disk size 2 TB; on the other cluster's datastore it says 62 TB. It's getting to be a bit of a pain, as the FortiAnalyzer is sitting on a box we want
Hello everybody, I have an issue where the PC web console refuses login as if the user/password were wrong, but the same user/password works fine over SSH. After troubleshooting, I found that the PC time zone is set to PST rather than UTC from the CLI, while in the VM configuration it's UTC. I tried changing it to UTC from the CLI, and the web console then accepted the login, but after a while it reverted to the PST time zone. I have NTP configured to a local NTP server.
When running Foundation from the same network the CVM/AHV hosts should be in, I'm getting this error: "Phoenix failed to load squashfs.img." I can't ping the gateway, and it looks like during bring-up it's unable to create bond0 with the NIC, which is a 534FLR-SFP+. It's like the link just doesn't come up when booting into the rescue image. I've tried:
- Disabling the 4-port onboard NIC
- Pinging itself from the iLO remote console (it works)
- Pinging the gateway and the Foundation VM IP from the iLO remote console (it fails)
- Installing ESXi manually on the server to confirm tagging/networking is set up properly (it is)
- Changing the IP of the Foundation VM to another network
Really stuck hard. Any advice, Nutants? Error message: "Waiting for eth0's link to come up"
Hello. I have a question about UUIDs; the acli output below is what I am working from.

acli vm.get <VM Name>

Nutanix-Clone-Study_1 (original):
    container_id: 8
    container_uuid: "04a931b9-4d2f-44a7-9903-8a5a3a3c463b"
 1. device_uuid: "4c4ae3e4-530d-4167-872a-ef6f2e6a41e9"
 2. naa_id: "naa.6506b8d8a5a92f6208b7e2380facb355"
 3. source_nfs_path: "/default-container-64847153591394/.snapshot/44/4359493875056948270-1663090801228723-26644/.acropolis/vmdisk/175ba8a7-0ca1-402f-9ab0-81ac7bfaa704"
 4. storage_vdisk_uuid: "fffd1e57-0fe4-4ed8-b706-9747e0e99449"
    vmdisk_size: 161061273600
 5. vmdisk_uuid: "c1769c0e-2536-444a-8e08-2b986a6fada6"

Nutanix-Clone-Study_2 (clone):
    container_id: 8
    container_uuid: "04a931b9-4d2f-44a7-9903-8a5a3a3c463b"
    device_uuid: "5f35410f-5bd1-4e06-a789-308276493787"
    naa_id: "naa.6506b8d3e59972c080ee639dae788a25"
 6. source_vmdisk_uuid: "c1769c0e-2536-444a-8e08-2b98
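For anyone scripting this kind of check, here is a minimal Python sketch that pairs a clone's source_vmdisk_uuid with an original VM's vmdisk_uuid to confirm the clone relationship. It assumes the field layout shown in the acli output above; the UUID values in the sample text are hypothetical placeholders, not values from a real cluster.

```python
import re

# Hypothetical acli vm.get excerpts; the UUIDs below are placeholders.
original = '''
  vmdisk_uuid: "11111111-2222-3333-4444-555555555555"
'''
clone = '''
  source_vmdisk_uuid: "11111111-2222-3333-4444-555555555555"
'''

def field(text, name):
    # Match the field name only at the start of a line so that a search
    # for "vmdisk_uuid" does not also hit "source_vmdisk_uuid".
    m = re.search(r'(?m)^\s*%s:\s*"([^"]+)"' % re.escape(name), text)
    return m.group(1) if m else None

orig_uuid = field(original, "vmdisk_uuid")
src_uuid = field(clone, "source_vmdisk_uuid")
print(src_uuid == orig_uuid)  # True: this clone points back at this original
```

In practice you would feed the captured output of `acli vm.get` for each VM into `field()` instead of the inline sample strings.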
Howdy, I am trying to envision what this would look like in a Nutanix environment for planning purposes. Currently we have two datacenters. We have one large vCenter cluster (Cisco UCS, synchronous Nimbles, Nexus vPC pair, L2 stretch on our own dark fiber with multiple 10 Gb links) stretched across both DCs. This allows us to vMotion all VMs from one side to the other and take down half our hosts, or even a whole DC, without any issues. Thinking about what this would look like if we were full Nutanix and AHV, I am pretty sure this would not work as one large cluster, since we couldn't take down one DC without losing half of the hosts and storage. Correct? If so, I am thinking this would require at least one cluster at each DC, and then Metro Availability to provide the real-time sync. Does that sound about right at a high level? Thanks
I have an 8-node Nutanix cluster with vSphere 7 deployed. I've created 2 new storage containers with compression enabled and mapped them to the nodes. I tested copying a file that I had uploaded to the default container over to a new storage container presented as an NFS datastore, and it was very slow: 5 GB took 2 hours. Then I tried a Storage vMotion of a VM residing on one new datastore to the other new datastore; a 10 GB VM took 20 minutes. And a Storage vMotion from the default container to a new datastore fails outright.
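To put those numbers in perspective, a quick back-of-the-envelope throughput calculation (a sketch using only the figures reported above, with 1 GB taken as 1024 MiB):

```python
# Effective throughput for the two copies described above.
slow_copy_mib_s = 5 * 1024 / (2 * 3600)   # 5 GB in 2 hours
svmotion_mib_s = 10 * 1024 / (20 * 60)    # 10 GB in 20 minutes

print(round(slow_copy_mib_s, 2))  # ~0.71 MiB/s
print(round(svmotion_mib_s, 2))   # ~8.53 MiB/s
# Both figures are far below 10 GbE line rate, which points at a
# network- or datastore-path problem rather than normal copy overhead.
```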
Long shot, I know, but I'm wondering if anyone ever solved the issue of VSS backups failing after the June 2022 update on Hyper-V 2012 R2 hosts. Veeam has a KB article that references a workaround/fix: https://www.veeam.com/kb4333 However, it's not at all clear how I would make a domain account an administrator/backup operator on the file server that hosts the SMB 3.0 share (the CVM).
Hey guys, can anyone give me the EOL information for version 6.5? In the Support portal, the EOL information is stated as: "End of Maintenance and End of Support Life dates for 6.5.Z are based on the Release Date of the next LTS Release that is an Upgrade; these dates will be published at that time." Is it recommended to upgrade to this version even though the EOL information has not yet been published?
I'm trying to apply an upgrade from Windows Server 2012 R2 to Windows Server 2019 using a Microsoft ISO image, but it always fails with this message. Has anyone managed to upgrade from Windows Server 2012 R2 to Windows Server 2019? I would appreciate some help on the subject. Thanks in advance. Manuel.
New to AHV, just installed this week. I have been using VMware since version 2.5. I'm trying to deploy AHV Windows VMs both from a template and from existing AHV VMs. With both methods I get guest customization options using Sysprep. I have been choosing "Guided Script" and filling out the fields for User Name, Password, Locale, and Host Name. I leave the license key blank because we have enterprise licensing, which is assigned when the VM is joined to the domain. So far, every time I have tried, the host name does not get updated to match the Host Name I entered in the guided script. Also, in VMware guest customization you enter what you want the password to be after Sysprep. For this one, is it asking for the current Administrator user/password, or one to be updated or created by Sysprep?
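Not an answer to the password question, but for comparison when debugging the host name: in a standard Sysprep answer file, the host name is set by the ComputerName element in the specialize pass of the Microsoft-Windows-Shell-Setup component. A minimal fragment (a sketch only; "MY-NEW-HOST" is a placeholder) looks like:

```xml
<settings pass="specialize">
  <component name="Microsoft-Windows-Shell-Setup"
             processorArchitecture="amd64"
             publicKeyToken="31bf3856ad364e35"
             language="neutral" versionScope="nonSxS">
    <!-- Placeholder name; must be 15 characters or fewer (NetBIOS limit) -->
    <ComputerName>MY-NEW-HOST</ComputerName>
  </component>
</settings>
```

If the answer file generated by the guided script lacks this element, the name won't change; checking C:\Windows\Panther\unattend.xml on the deployed VM can help confirm what was actually applied.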
Our PC (we only have the one, pc.2022.6) is currently hosted on an ESXi cluster; we also have 2 Nutanix clusters running ESXi as the hypervisor. I'm looking at moving PC to one of the 2 Nutanix clusters, as the host that PC currently sits on will be retired soon. What's the best way to go about this?
If I wanted to migrate from VMware ESXi 6.7 (no vCenter) to Nutanix AHV, is it possible to mount my existing VMFS6-formatted datastore in Nutanix, or would I have to move all the data off temporarily, re-create the datastore in Nutanix, and copy it all back again? In other words, do I need to buy another 3 TB of storage somewhere to temporarily hold my data while I reconfigure the storage array? :) Sorry if this question seems simple!