Topics started by Coleman
While attempting to upgrade from v4.7.1 to v5.x.x, the pre-upgrade check fails with the following error: "Cluster name must be in FQDN format." Is there a way to change the name from "CLUSTER" to "CLUSTER.domain.com" without downtime? Or, alternatively, is there a way to perform the upgrade without making this change/ignoring this message? Thanks, all!
How do Persistent Reservations work with Hyper-V on Nutanix? I'm asking because on our other "Hyper-V with SMB storage" clusters, you cannot use Hyper-V Manager on HOST1 to modify VHDs that belong to a VM running on HOST2; as you would expect, the error reads "Access Denied". However, with Nutanix we are able to do this, and it leads to VHD corruption on the next reboot of the VM. Here is the scenario:[list]
[*] Hosts: 5-node Nutanix cluster with Hyper-V (Dell XC hardware)
[*] VM1 is running on HOST1
[*] Using HOST2, launch Hyper-V Manager (not Failover Cluster Manager). Edit Disk to resize a VHD of VM1. The resize is successful (this fails in our non-Nutanix clusters)
[*] Using HOST2, check the properties of VM1's VHD and you will notice that the file size has changed
[*] Using HOST1 (where the VM is running), check the properties of the VHD and you will notice that the file size HAS NOT changed
[*] Reboot VM1; the server will not boot. The VHD is now corrupt
[/list]Is this a known issue/bug? I understand
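For reference, the repro above can be driven from PowerShell instead of Hyper-V Manager. This is only a sketch: the VHD path, size, and host names are placeholders from the scenario, not values from a real cluster.

```powershell
# Hedged sketch of the repro steps. Path and size are assumed placeholders.
# On HOST2, while VM1 runs on HOST1. On our non-Nutanix SMB clusters this
# is denied; on the Nutanix cluster it succeeds:
Resize-VHD -ComputerName HOST2 -Path '\\cluster\container\VM1\disk.vhdx' -SizeBytes 100GB

# Compare what each host reports for the same file; in our case the two
# hosts disagree until the VM is rebooted (and the disk is then corrupt):
Invoke-Command -ComputerName HOST1, HOST2 -ScriptBlock {
    (Get-Item '\\cluster\container\VM1\disk.vhdx').Length
}
```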
AOS: v4.7.1 Server Platform: Multiple Hyper-V (2012 r2) clusters It appears that if you whitelist an IP that belongs to either a Server 2016 or a Windows 10 machine, that machine still can not access the SMB share being published by our Nutanix (v4.7.1) clusters. We found this out after upgrading our SCVMM server to Server 2016. At this point, the VMM server can 'see' the shares but can not calculate share size (reports 0GB) and therefore can't manage the share(s). Since then we've tested with multiple other 2016 and Windows 10 machines - all with the same result: added to the whitelist but can't browse the share, even from Windows Explorer. We are looking for confirmation that this is a known issue. If so, can you confirm if it's resolved by an upgrade to v5.0.2?
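In case it helps anyone reproduce this, these are the quick checks we run from an affected client after adding its IP to the whitelist. The cluster FQDN and container name below are assumed placeholders.

```powershell
# Run from the Server 2016 / Windows 10 client after whitelisting its IP.
# Cluster name and container/share name are placeholders.

# SMB port reachability to the cluster:
Test-NetConnection -ComputerName CLUSTER.domain.com -Port 445

# Browse attempt; this is the step that fails for us on 2016/Win10 clients:
Get-ChildItem '\\CLUSTER.domain.com\container1'
```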
While learning AHV I'm a bit confused about a few things as they relate to vStores and Remote Sites. What is the purpose of a vStore, and how does it differ from a Container? What are the implications of mapping Site-A's vStore (Container-A) to Site-B's vStore (Container-B) when Container-B has running VMs in it? What are the implications of not including all VMs from Site-A's Container-A in a Protection Domain and then failing over that PD to Site-B? - Will the unprotected VMs from Container-A continue to run in Site-A? - What happens when I fail back the PD to Site-A? Let's say I've mapped Container-A (Site-A) to Container-B (Site-B). If I create a Protection Domain that contains VMs from Container-A2 (Site-A), how does this factor into my vStore mappings between sites? Thanks for the assist!
Our CVMs are configured with 32GB of Dynamic Memory: startup, minimum, and maximum are all set to 32GB. I'm curious to know why Dynamic Memory is enabled on these VMs. Since the CVM's OS isn't fully Hyper-V integrated, we don't see memory demand from the hypervisor (which is the only benefit I can see to leaving Dynamic Memory enabled on a VM with all three values set the same). As an explicit downside, because we're using Dynamic Memory on these VMs, we lose the vNUMA feature set (not that a 32GB VM is likely to benefit from it). Thanks!
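For anyone who wants to verify the same configuration on their own hosts, this is how we inspect the CVM's memory settings. The CVM name pattern is an assumption (Nutanix CVMs on our hosts follow an NTNX-*-CVM naming scheme; yours may differ).

```powershell
# Inspect the CVM's memory configuration on a Hyper-V host.
# The VM name pattern 'NTNX-*-CVM' is an assumption; adjust to your naming.
Get-VMMemory -VMName 'NTNX-*-CVM' |
    Select-Object VMName, DynamicMemoryEnabled, Startup, Minimum, Maximum
```

On our hosts this shows DynamicMemoryEnabled as True with all three sizes equal, which is what prompted the question.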