Hi,

we are running 20 Nutanix nodes with AOS 5.0.1 and ESXi 6.0 as the hypervisor, split into 2 clusters (10 nodes per cluster) which run in two different vCenters (vCenter Server Appliance 6.0.0.20100, build number 4632154) at two different locations.

We are now testing Nutanix Async DR to test failover from one side to the other. All VMs are shut down successfully and migrated (unregistered from the local vCenter Server and registered at the second vCenter Server) to the second vCenter without problems.

The issue is that the VMware Connection Server, which is connected to both vCenter Servers, does not recognize that the VMs have moved from one vCenter to the other. So it is not possible for us to connect to these VDI desktops on the second side.

Can anyone assist us in finding a solution for this problem?

Best Regards
Markus
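For anyone who wants to reproduce the check: a minimal pyVmomi sketch along these lines (the vCenter hostnames, credentials and the VM name are placeholders, not our real values) can confirm in which vCenter a protected desktop is currently registered after a migration:

```python
# Sketch: check in which of the two vCenters a protected desktop VM is registered.
# Hostnames, credentials and the VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTERS = {
    "Site A": "vcenter-a.example.local",
    "Site B": "vcenter-b.example.local",
}
VM_NAME = "vdi-fullclone-001"  # hypothetical desktop VM name

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production

for site, host in VCENTERS.items():
    si = SmartConnect(host=host, user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    matches = [vm for vm in view.view if vm.name == VM_NAME]
    if matches:
        print(f"{site}: {VM_NAME} registered, power state = {matches[0].runtime.powerState}")
    else:
        print(f"{site}: {VM_NAME} not registered")
    view.DestroyView()
    Disconnect(si)
```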
mstauder - Are these persistent desktops? Non-persistent desktops? Can you describe your VDI pool setup a bit more? That will dictate the solution.
Hi,

we have two different locations. In each location we have a Nutanix cluster (10 nodes each). We are running different workloads within the VDI infrastructure, so we use linked-clone desktops, which are destroyed after every logoff, and full-clone desktops, which are persistent.

In case of a disaster we will roll out additional linked-clone desktops at the surviving location, so for that scenario no action is needed. For our full-clone desktop pool, however, we have set up two protection domains which replicate the VMs every 120 minutes to the other location. Both locations are active, so there is no primary or secondary location: the full-clone desktops from Site A are replicated to Site B and the other way round.

Our Horizon infrastructure in short:
- We have two VMware Connection Servers (one in each location) behind a load balancer; they are responsible for both sites.
- In each location a vCenter Server Appliance is running which manages the local Nutanix cluster (VMware 6.0).

During logon of a VDI user, the Connection Server learns from the desktop pool configuration whether the user's VM is running in vCenter A (Site A) or vCenter B (Site B) and redirects the session to the correct site.

When we migrate the VMs from Site A to Site B, Prism shuts down the VMs, performs a last sync, unregisters each VM from its ESXi host and registers it on an ESXi server at Site B. This step works well; the only feature we miss is that all migrated VMs end up in the power state "off". It would be nice if the power state at the time of the recovery were restored.

For our scenario it would not be necessary for the VMs to be powered on automatically, because normally the Connection Server powers on the VMs itself if they are powered off. But in our case the Connection Server does not recognize that a VM from Site A (vCenter A) was moved to Site B (vCenter B), so the power-on action is sent to the wrong vCenter; we can see that in the VMware ESXi task and event logs. This task is sent every few seconds to the "old" vCenter until the VM is powered on manually and the desktop agent inside the VM reports an "up and running" status to the Connection Server. From that point on the user can log in to his VM, but only as long as the vCenter from Site A and all ESXi hosts from Site A are still available and all migrated VMs are listed there as orphaned. The Connection Server still thinks the VM is located in Site A, but as long as the desktop agent is alive the connection is redirected directly to the VM.

From the moment we delete the orphaned VMs in vCenter A, or shut down the vCenter and all hosts in Site A to simulate a complete outage of Site A, no connection to the VDI desktop VMs is possible anymore.

In fact, we have successfully migrated all VMs without any data loss from Site A to Site B (or, in case of a disaster, with a maximum delta of 120 minutes), but we are not able to use the migrated VDI desktop VMs.

I hope this helps you understand my situation. If you have any further questions, please feel free to ask.

Thanks
Markus
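As a stop-gap for the missing power state, a rough pyVmomi sketch like the following (the vCenter name, credentials and VM naming prefix are assumptions, not taken from our environment) can bulk power on the recovered desktops on the Site B vCenter until the Connection Server issue is solved:

```python
# Sketch: bulk power-on of the recovered full-clone desktops on the Site B vCenter,
# since Async DR registers them powered off and the Connection Server keeps sending
# its power-on task to the Site A vCenter instead.
# The vCenter name, credentials and VM name prefix are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter-b.example.local",
                  user="administrator@vsphere.local", pwd="********", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

PREFIX = "vdi-fullclone-"  # hypothetical naming convention for the protected desktops

tasks = []
for vm in view.view:
    if vm.name.startswith(PREFIX) and \
       vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        print(f"Powering on {vm.name}")
        tasks.append(vm.PowerOnVM_Task())

for task in tasks:
    WaitForTask(task)

view.DestroyView()
Disconnect(si)
```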
I think the only way you'd be able to use the migrated VMs would be to create a new manual pool on the DR side and manually add the recovered VMs to it.
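If you go that route, a small pyVmomi sketch like this (vCenter name, credentials and naming convention are assumptions) can dump the names and instance UUIDs of the recovered VMs on the DR vCenter, which you would then add to the manual pool in View Administrator (or via View PowerCLI on the connection server, if that's available in your version):

```python
# Sketch: list the recovered desktops on the DR vCenter so they can be added to a
# new manual pool. vCenter name, credentials and the VM name prefix are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter-b.example.local",
                  user="administrator@vsphere.local", pwd="********", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    # Skip orphaned/inaccessible VMs whose config is not available.
    if vm.config and vm.name.startswith("vdi-fullclone-"):  # hypothetical prefix
        print(f"{vm.name}\t{vm.config.instanceUuid}")       # identifiers for the pool

view.DestroyView()
Disconnect(si)
```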



We've got a similar requirement (2 sites, active/active), but instead of using Async we use Metro Availability. This means that while we have 2 separate NTNX clusters, we have a single vSphere cluster, so there are no problems with View not knowing about VMs whichever site they're running from. Obviously this has some additional overheads (you need a good WAN link; we've got dark fibre at ~1.5 ms latency) and it makes managing the VMs a bit more difficult (you can't do linked clones on non-metro datastores, as View will not show a datastore if all hosts can't see it), but for us the overall solution works out great.
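As a quick sanity check for that datastore-visibility requirement, something like this pyVmomi sketch (the cluster and datastore names are placeholders) can confirm that every host in the stretched cluster sees the desktop datastores before View is pointed at them:

```python
# Sketch: verify that every host in the stretched vSphere cluster sees the desktop
# datastores, since View hides a datastore unless all hosts in the cluster can access it.
# Cluster name and datastore names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="********", sslContext=ctx)
content = si.RetrieveContent()
cl_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cl_view.view if c.name == "Metro-Cluster")  # hypothetical name

REQUIRED = {"metro-ds-01", "metro-ds-02"}  # hypothetical metro datastore names
for host in cluster.host:
    visible = {ds.name for ds in host.datastore}
    missing = REQUIRED - visible
    print(f"{host.name}: {'OK' if not missing else 'missing ' + str(sorted(missing))}")

cl_view.DestroyView()
Disconnect(si)
```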