License-Free Virtualization for Your Enterprise
Windows Server 2022 C Drive Partitions on AHV
I am posting this as a general FYI in case others encounter this issue and are looking for a workaround; I am also making it Nutanix specific. I recently created a base image for Windows Server 2022 Standard and started with a small C drive, 100 GB. When I cloned it for my first production server and went to expand the C drive, I found that the default partition scheme Server 2022 installs with places the Windows recovery partition AFTER the primary partition. The result is an inability to expand the C drive partition through Disk Management.
Preface: I mention these because many online forums suggested them as solutions. I understood that I could simply delete the recovery partition; for various reasons I did not want to. I understand it is best practice to install most applications etc. onto another drive; in this instance I did not want to. I understand that there are 3rd-party applications I could use to manipulate the partitions; I did not want to.
Solution: Overall you n
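The excerpt above cuts off before the actual solution, so what follows is only a sketch of the approach commonly used for this layout, not a confirmation of the author's fix. Partition numbers are illustrative and must be checked with "list partition" first. The idea is to stage WinRE onto C:, remove the recovery partition that blocks the expansion, extend C:, and re-enable WinRE:

  reagentc /disable
  rem WinRE is now staged in C:\Windows\System32\Recovery
  diskpart
  rem inside diskpart:
  select disk 0
  list partition
  rem partition 4 (recovery) and partition 3 (C:) are illustrative numbers
  select partition 4
  delete partition override
  select partition 3
  extend
  exit
  reagentc /enable

After re-enabling, "reagentc /info" should report the Windows RE status as Enabled again.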
Network connectivity issues in a Cluster
The setup is a cluster with 3 hosts. The cluster is up and running. However, the VMs connected to one of the hosts are not able to access the internet or other VMs on the network. If any of the affected VMs is migrated to another host in the cluster, it can access the internet and other VMs on the network again. Once it is returned to the faulty host, the connectivity problem comes back.
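A reasonable first comparison between the faulty host and a healthy one is the uplink/bond state; the bridge and bond names below (br0, br0-up) are the usual defaults and may differ in this cluster:

  # from the CVM of the affected host: which physical NICs back each bridge
  manage_ovs show_uplinks
  manage_ovs show_interfaces

  # on the AHV host itself: bond health and physical link state
  ovs-appctl bond/show br0-up
  ovs-vsctl show
  ip link show

If the bond on the faulty host is using a different active uplink (for example one plugged into a switch port missing the VM VLANs), that would match the symptom of VMs losing connectivity only on that host.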
Can the disks of a Windows 2013 server on AHV be reconverted to SCSI after being migrated from VMware ESX with Move?
Hi, I successfully migrated a Windows 2013 server from ESX to AHV using Nutanix Move, following the instructions at https://www.vmwaremine.com/2019/01/24/migrate-windows-2003-x86-to-nutanix-ahv/#sthash.8hpeCflP.T3i44mTW.dpbs (it works great, by the way). Now, in the process the SCSI disks are cloned to IDE, and the Windows SCSI controller is not recognized. In this scenario, is it possible to reconvert those disks back to SCSI? I guess I need to get the driver first (Fedora?) and follow the available procedure to reconvert. Has anyone done this? Thanks!
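For reference, the usual pattern is to install the Nutanix VirtIO SCSI driver in the guest first, then clone each IDE vmdisk onto the SCSI bus with acli and remove the IDE copy once the VM boots from it. This is a sketch only; the VM name, disk address and UUID are placeholders:

  # on a CVM: list the VM's disks and note the vmdisk UUIDs
  acli vm.get MyServer include_vmdisk_paths=1

  # with the VM powered off, attach a SCSI clone of the IDE disk
  acli vm.disk_create MyServer clone_from_vmdisk=<vmdisk_uuid> bus=scsi

  # once the VM boots cleanly from the SCSI disk, remove the IDE original
  acli vm.disk_delete MyServer disk_addr=ide.0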
New - where do we start?
Hi. So our Nutanix system is arriving in a few days; it's now in the country. I'm going onsite in mid-January to set it up. Where do I begin? We are going to be using Acropolis (excuse the spelling if wrong). From what I gather I'm just going to have rather a lot of boxes, around 95 kg in total. I guess once I've worked out how to put the hardware together, where the heck do I begin getting Acropolis installed on it all? I possibly just missed it, but I was expecting an easier idiot's guide: "this is where you start with your new system". We are going to be setting up various Windows servers when we get to that stage. I'm OK with VMware but have never used Nutanix before. I might be in the wrong forum; I dumped it in AHV Virtualisation as I'm assuming that's Acropolis. That's how new I am :) Cheers. EDIT: Only just thought, I guess I should wait till I see the kit; it might come with a "start here" guide.
Cluster Setup With 3 Different NUCs
Planning a move from VMware to Nutanix CE. I have three NUCs: Skull Canyon, Hades Canyon, and Wall Street Canyon. I plan to populate each NUC with 2x 1 TB SSDs. Before dumping money into the SSDs, I would like to check whether there are any potential problems in setting up a 3-host cluster with three different CPU architectures.
IPs are not preserved for more than one adapter when moving VMs from VMware on Nutanix to AHV with Nutanix Move 4.6
Hello, we are moving VMs from an existing Nutanix cluster running VMware to a new cluster with AHV, using Nutanix Move 4.6 (the latest version). The problem is that for VMs with more than one network adapter in use, IPs are most of the time not assigned to the adapters after the first one. This happens for VMs running Windows 2016; I have not checked whether the same thing happens for Linux or other Windows versions, but I bet it does. In these cases, Move configures those interfaces to get their IPs via DHCP. I read an old post about this behavior, but it applies to an older version of Nutanix Move (https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LWejSAG). Is anyone experiencing the same problem? Am I missing something? Thanks in advance.
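While the root cause is investigated, one stop-gap is to re-apply the static configuration on the affected secondary adapters after cutover; a minimal PowerShell sketch, with example adapter alias, addresses and DNS servers:

  # run inside the migrated Windows guest; all values are examples
  New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 10.0.20.15 -PrefixLength 24
  Set-DnsClientServerAddress -InterfaceAlias "Ethernet 2" -ServerAddresses 10.0.10.5,10.0.10.6
  # add -DefaultGateway to New-NetIPAddress only if this adapter should own the default route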
Microsoft Windows Server Datacenter 2022 licensing on AHV (perpetual vs. Open Value with SA)
A customer would like to run Microsoft Windows Server Datacenter 2022 in an AHV environment and we are looking at licensing for this. We want Datacenter because of the unlimited VMs. We have recently heard that this changed in the past year and that only retail or perpetual licensing can run on Hyper-V; you cannot run it on AHV or VMware anymore. This is supposedly because the host needs to communicate with some licensing portal. So for AHV we would have to purchase Open Value with SA, which is considerably more money and has to be paid as a yearly subscription. They gave us this description:
- Windows Server 2022 Datacenter 2-Core License + SA (3 Yr)
- SKU: 9EA-00643
- Open Value with Software Assurance (3 Years)
- (3) 24-Core Hosts & Unlimited VMs
- Nutanix Hypervisor
- Fulfilled with MAK/KMS License
- Downgrade Rights to 2019 & 2016
Can anyone confirm this? If true, this is crap and Microsoft is pushing their stupid hypervisor.
Flow Virtual Networking - VPCs and Overlay Networks
I wanted to start a discussion on our new Flow Virtual Networking VPC feature, based on this video that TME Eric Walters created. You can use VPCs to create isolated overlay networks, enable self-service network creation, or provide multi-tenant separation. Have you tried out VPCs in your environment?
Space reclamation in storage containers after VMs have been moved to another storage container
Hello everybody, I started using storage VM migration (acli vm.update_container VMNAME container=test-container) to move VMs from an old container to a new one. The big VMs have been moved. I would like to delete the old storage containers, but space usage is still high, although the VMs have been moved to the other storage container. I am aware of the .acropolis, .file_repo and .ngt directories, and those directories are empty now. How can I reclaim space in a storage container when VMs have been moved and the Recycle Bin has been emptied? No VMs have been deleted, only moved, so the 36-hour rule for space reclamation doesn't seem to apply here. Any thoughts? Regards, Didi7
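For reference, a sketch of the two commands involved (container names are examples). Reclamation of the old container's usage generally happens over subsequent Curator background scans, so it can lag the migration by some hours:

  # on a CVM: live-migrate a VM's vdisks to the new container
  acli vm.update_container MyVM container=new-container

  # check whether the old container's usage is draining over time
  ncli ctr ls name=old-container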
VM Disk Greater Than 2TB Issue
We're running two clusters which are configured pretty much the same, running ESXi 7.0.2 and AOS 5.20.4 LTS, both on Nutanix hardware. One cluster is totally fine, no issues, but on the other you cannot create a virtual disk greater than 2 TB! The issue only came to light when we had to move a VM off to another non-Nutanix host for a few days; when we went to vMotion it back, it errored, complaining about the disk being greater than 2 TB. It is: it's nearly 6 TB, as it's a Fortinet analyzer. I'm pretty sure when the VM sat on the Nutanix cluster its disk was greater than 2 TB, and it was thin provisioned with a capacity of 7 TB when it was created! So you can't move anything greater than 2 TB to any of the datastores, and you also can't create a new disk greater than 2 TB. Also, if you look at the datastore details it says maximum VM disk size 2 TB; on the other cluster's datastore it says 62 TB. It's getting to be a bit of a pain as the Fortinet analyzer is sitting on a box we want
PC time zone keeps changing
Hello everybody, I have an issue where the PC web console refuses login as though the user/password were wrong, but that same user/password works fine over SSH. After troubleshooting, I found the PC time zone is set to PST, not UTC, from the CLI, while in the VM configuration it is UTC. I tried to change it to UTC from the CLI, and the web console then accepts the login, but after a while it reverts to the PST time zone. NTP is configured against a local NTP server.
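For reference, the CLI change described above is normally done with ncli (a sketch; run on the Prism Central VM):

  # check the currently configured time zone
  ncli cluster info

  # set it explicitly to UTC
  ncli cluster set-timezone timezone=UTC

If the value still reverts to PST afterwards, something is rewriting the configuration, which is the part worth raising with support.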
Phoenix fails to load squashfs.img on HPE DL380 Gen10
When running Foundation from the same network the CVM/AHV host should be in, I'm getting this error: "Phoenix failed to load squashfs.img". I can't ping the gateway, and it looks like during bring-up it's unable to create bond0 with the NIC, which is a 534FLR-SFP+. It's like the link just doesn't come up when booting into the rescue image. I've tried:
- Disabling the 4-port onboard NIC
- Pinging itself from the iLO remote console (it works)
- Pinging the gateway and the Foundation VM IP from the iLO remote console (it fails)
- Installing ESXi manually on the server to confirm tagging/networking is set up properly, and it is
- Changing the IP of the Foundation VM to another network
Really stuck. Any advice, Nutants? It's like the rescue image doesn't ... Error message: "Waiting for eth0's link to come up"
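From the Phoenix/rescue shell a few generic Linux checks can show whether the 534FLR-SFP+ is being driven at all; the bnx2x driver name is an assumption based on the typical chipset for that adapter:

  # inside the Phoenix shell
  ip link show                       # is eth0 present, and does it report LOWER_UP?
  ethtool eth0                       # negotiated speed and "Link detected"
  dmesg | grep -i -e bnx2x -e eth0   # did the driver load and bring the link up?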
A question about UUIDs
Hello. I have a question about the UUIDs shown below.

acli vm.get <VM Name>

Nutanix-Clone-Study_1: Original
  container_id: 8
  container_uuid: "04a931b9-4d2f-44a7-9903-8a5a3a3c463b"
  1. device_uuid: "4c4ae3e4-530d-4167-872a-ef6f2e6a41e9"
  2. naa_id: "naa.6506b8d8a5a92f6208b7e2380facb355"
  3. source_nfs_path: "/default-container-64847153591394/.snapshot/44/4359493875056948270-1663090801228723-26644/.acropolis/vmdisk/175ba8a7-0ca1-402f-9ab0-81ac7bfaa704"
  4. storage_vdisk_uuid: "fffd1e57-0fe4-4ed8-b706-9747e0e99449"
  vmdisk_size: 161061273600
  5. vmdisk_uuid: "c1769c0e-2536-444a-8e08-2b986a6fada6"
------------------------------------------------------------
Nutanix-Clone-Study_2: Clone
  container_id: 8
  container_uuid: "04a931b9-4d2f-44a7-9903-8a5a3a3c463b"
  device_uuid: "5f35410f-5bd1-4e06-a789-308276493787"
  naa_id: "naa.6506b8d3e59972c080ee639dae788a25"
  6. source_vmdisk_uuid: "c1769c0e-2536-444a-8e08-2b98
Nutanix equivalent of vmotioning to another datacenter cluster
Howdy, I am trying to envision what this would look like in a Nutanix environment for planning purposes. Currently we have two datacenters. We have one large vCenter cluster (Cisco UCS, synchronous Nimbles, Nexus vPC pair, L2 stretch on our own dark fiber with multiple 10 Gb links) stretched across both DCs. This allows us to vMotion all VMs from one side to the other, take down half our hosts, or even a whole DC, without any issues. Thinking about what this would look like if we were fully Nutanix and AHV, I am pretty sure a single large cluster would not work, since we couldn't take down one DC without losing half of the hosts/storage. Correct? If so, I am thinking this would require at least a cluster at each DC, and then Metro Availability for the real-time sync. Does that sound about right from a high level? Thanks