Unable to vMotion/svMotion between clusters | Nutanix Community
We are fairly new to Nutanix. We have two separate clusters in separate datacenters. Our goal is to be able to move workloads between clusters based on needs and resources.

We are using VMware vSphere 5.5, and everything seems to work as planned, with the exception of moving workloads from cluster to cluster.

We have cluster A and cluster B. We are using metro availability to sync data from cluster A to cluster B, and that seems to work correctly.

When I attempt to perform a cold migration from cluster A to cluster B, I get the following error:

Relocate virtual machine ArtemisClone
File /vmfs/volumes/ec936530-956aa-9ac/ArtemisClone/ArtemisClone.vmdk was not found

Cold migration from cluster B to cluster A works fine. The whitelists are identical on both clusters.

VMware isn't really much help, as their reply is to just browse the datastore and add the VM to inventory on a host in cluster B.
Can you submit a support ticket on portal.nutanix.com so we can hop on a WebEx and hammer this out with you?

Thanks,

Jon
Welcome to Nutanix!

I would highly recommend opening a Nutanix Support case to investigate - there are a ton of variables here that would be much easier to hash out over the phone.

Some information that will be helpful to have (with or without a case):


  1. Are the VMs you're trying to svMotion in the metro container?
  2. Do you have more than one metro container?
  3. Is the metro relationship active or decoupled?
  4. If active, which site is active?
  5. What is the latency between the sites?
  6. Is it only cold migration that fails?
  7. Does it fail if you only do a vMotion?


Basically, the goal with these questions is to untangle the problem and find exactly what is breaking down, so that piece can be troubleshot further. Once it's narrowed down, Support will be able to dig into the appropriate logs and figure out where to go from there.
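For anyone who wants to gather answers to questions 1-4 before the call, the metro protection domains can be pulled from the Prism REST API. Here is a minimal sketch, assuming the v2 protection_domains endpoint; the VIP, credentials, and field names (metro_avail, role, status) are assumptions to verify against your Prism version's REST API explorer:

```python
# Sketch: list metro availability protection domains via the Prism v2 REST API.
# Endpoint and field names are assumptions; check the REST API explorer first.
import requests

PRISM_VIP = "10.0.0.10"       # hypothetical cluster virtual IP
AUTH = ("admin", "password")  # hypothetical credentials

resp = requests.get(
    f"https://{PRISM_VIP}:9440/PrismGateway/services/rest/v2.0/protection_domains/",
    auth=AUTH,
    verify=False,  # lab only; use real certificate validation in production
)
resp.raise_for_status()

for pd in resp.json().get("entities", []):
    metro = pd.get("metro_avail")  # populated only for metro protection domains
    if metro:
        # Shows per-container metro state: which site is active, and whether
        # the relationship is enabled or decoupled.
        print(pd.get("name"), metro.get("role"), metro.get("status"))
```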
I have an active ticket with Nutanix. I will reply to that ticket and ask for a WebEx.
We worked with support and found that you are not able to vMotion/svMotion VMs that live in a container that has metro availability enabled. Disappointing, but I guess I see the point.
Hey, are you saying that you have two clusters with a stretched container, and a VM on there isn't allowed to be vMotioned?

If so, that is incorrect; you can absolutely vMotion and svMotion in and out of metro containers. I've done it myself firsthand.

That said, I definitely believe you're having some sort of issue, and the only thing I can think of off the top of my head is this: you have two clusters in the same vCenter, each with a container of the same name, and you're trying to vMotion / svMotion between them while metro is either not actually enabled or otherwise not working.

When that happens (conflicting container names and metro not being enabled), ESX gets confused about where it's pulling files from and pushing them to, and gives errors like these.
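One quick way to confirm that from the vCenter side is to list every datastore name against its backing URL; the same name mapping to two different backings is exactly the ambiguity described above. A minimal pyVmomi sketch, with placeholder connection details:

```python
# Sketch: find same-name datastores with different backings in one vCenter.
# Assumes pyVmomi; the vCenter address and credentials are placeholders.
import ssl
from collections import defaultdict

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    backings = defaultdict(set)
    for ds in view.view:
        # ds.info.url is the unique backing path (ds:///vmfs/volumes/<uuid>/),
        # so two entries under one name means two different containers.
        backings[ds.summary.name].add(ds.info.url)
    view.DestroyView()
    for name, urls in backings.items():
        if len(urls) > 1:
            print(f"Datastore name '{name}' has {len(urls)} different backings:")
            for url in urls:
                print("   ", url)
finally:
    Disconnect(si)
```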
That is pretty much the scenario.

Two clusters: cluster A has containers Tier1, Tier2, TestDev, Tier1B, Tier2B, and TestDevB.

Cluster B has the exact same container names, which we were told during install was required for metro availability to work: Tier1 on cluster A metros over to Tier1 on cluster B, Tier2 on cluster A metros over to Tier2 on cluster B, et cetera.

Here is the resolution per support:

Clusters also have other containers, for example:

Thing
Thingb
Other
Otherb

There is a metro relationship between the clusters for the non-"b" containers only.

Steps Taken:

Because the "b" containers are not synced via metro, cluster A is trying to write the data to its own TestDevb, which cluster B does not see, thus throwing the error.

To get this working, we had to create a new datastore on both A and B that points at cluster B's TestDevb container via cluster A's VIP. The storage vMotion now works (albeit slowly, due to known issues with svMotion on Nutanix).
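For reference, that per-host NFS mount can be scripted rather than clicked through. A minimal pyVmomi sketch of the workaround as described, assuming NFSv3; the vCenter address, VIP, container path, and datastore name are placeholders, and the ESXi hosts must already be in the target container's whitelist:

```python
# Sketch: mount a Nutanix container as an NFS datastore on every ESXi host.
# Assumes pyVmomi; addresses, credentials, and names are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="user", pwd="pass", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    spec = vim.host.NasVolume.Specification(
        remoteHost="10.0.0.20",       # hypothetical VIP that serves the container
        remotePath="/TestDevb",       # Nutanix exports containers as /<name>
        localPath="TestDevb-remote",  # datastore name as it will appear in vCenter
        accessMode="readWrite",
    )
    for host in view.view:
        # NFS datastores are mounted per host; there is no cluster-wide call.
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
    view.DestroyView()
finally:
    Disconnect(si)
```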
Ah ha! OK, we're on the same page now. Once you get things all metro'd up 100% on both sides, you should be able to use things just like you'd imagine.