Migrating a VM Disk to a Different Container on AHV

There are times when we need to move a VM disk to a different container on the same AHV cluster.

For example, we may want to move a VM disk to a container with deduplication disabled. To relocate a virtual machine's disk to a different container on the same AHV cluster, the following steps are required:

Requirements for the Move:

  • Source container ID (where the vmdisk is currently located)
  • Destination container ID (our target container on the same cluster; see the ncli one-liner below)
  • VM disk UUID(s) (the UUID of each disk to move, from "acli vm.get <vm-name>")
  • The VM powered off
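Both container IDs can be listed up front from any CVM. A convenience one-liner (the grep filter is just for readability):

nutanix@cvm$ ncli ctr ls | grep -E 'Id|Name'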

Summary of Steps:

  • Determine the vmdisk_uuid of each virtual disk on the VM.
  • Make sure the VM whose disk(s) we are migrating is powered off.
  • Use the Acropolis Image Service to clone the source virtual disk(s) into image(s) on the target container.
  • Attach the disk(s) from the Acropolis image(s) created in Step 3 to the VM.
  • Remove the VM disk(s) hosted on the original container.
  • Optional: Remove the cloned vmdisk(s) from the Image Service.

Detailed Steps:

Let us take the example of a VM named "EXAMPLE_VM".

Log in to a CVM and execute the following command to get more info about "EXAMPLE_VM":

acli vm.get <vm-name>

nutanix@cvm$ acli vm.get EXAMPLE_VM

The above command returns the following output:

ide.0 {
  addr {
    bus: "ide"
    index: 0
  }
  cdrom: True
  empty: True
}
ide.2 {
  addr {
    bus: "ide"
    index: 2
  }
  cdrom: True
  empty: True
}
scsi.0 {
  addr {
    bus: "scsi"
    index: 0
  }
  container_id: 8118
  container_uuid: "e9e6b4ca-c482-4dde-9e76-d3b809365370"
  source_vmdisk_uuid: "f3dda809-a29b-49c8-9afa-4677e62aceb0"
  vmdisk_size: 107374182400
  vmdisk_uuid: "0778b93f-239b-4f42-9908-29df5afb160d"
}
scsi.1 {
  addr {
    bus: "scsi"
    index: 1
  }
  container_id: 1795364
  container_uuid: "21bd8952-b9a1-43bd-9295-e3bcd75314c2"
  vmdisk_size: 107374182400
  vmdisk_uuid: "2c2af38e-d330-459f-8460-d20fe69c63bb"
}
scsi.2 {
  addr {
    bus: "scsi"
    index: 2
  }
  container_id: 8118
  container_uuid: "e9e6b4ca-c482-4dde-9e76-d3b809365370"
  vmdisk_size: 107374182400
  vmdisk_uuid: "b7942c82-b2da-4ac6-87b9-d30c36abbb94"
}

From the above output you can see that the EXAMPLE_VM VM has three disks. The disks on scsi.0 and scsi.2 are hosted on the container with ID 8118, while the disk on scsi.1 is on container ID 1795364.
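To pull out just the container IDs and disk UUIDs from that output, you can filter it (a convenience one-liner, not part of the required procedure):

nutanix@cvm$ acli vm.get EXAMPLE_VM | grep -E 'container_id|vmdisk_uuid'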

Tip: When logged on to the CVM, you can drop into the acli shell, which offers tab completion.

  • Step 1: Power off the VM

From the acli command line:

nutanix@cvm$ acli vm.shutdown EXAMPLE_VM 

From the Prism console:

Prism > VM > Table > Click on VM > Power off
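Either way, it is worth confirming the VM is actually off before proceeding (a quick sanity check; the exact field name in the vm.get output may vary by AOS version):

nutanix@cvm$ acli vm.get EXAMPLE_VM | grep state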

  • Step 2: Confirm the container names

First confirm the container names and their respective IDs:

nutanix@cvm$ ncli ctr ls id=1795364 | grep Name

Name                              : Source_Container

VStore Name(s)            : Source_Container

nutanix@cvm$ ncli ctr ls id=8118 | grep Name

Name                              : Destination_Container

VStore Name(s)            : Destination_Container

  • Step 3: Clone the vmdisk from the original container into the preferred one:
nutanix@cvm$ acli image.create SCSI1_EXAMPLE_VM container=Destination_Container source_url=nfs://127.0.0.1/Source_Container/.acropolis/vmdisk/2c2af38e-d330-459f-8460-d20fe69c63bb

The above command without the specifics:

nutanix@cvm$ acli image.create <IMAGE-NAME> container=<DESTINATION_CONTAINER> source_url=nfs://127.0.0.1/<SOURCE_CONTAINER>/.acropolis/vmdisk/<vm-disk-to-migrate-uuid>
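If the VM has several disks to relocate, a small shell loop on the CVM can create one image per disk. This is only a sketch: the <vmdisk-uuid-N> values are placeholders for the UUIDs gathered earlier, and the MIG_ image-name prefix is arbitrary:

for uuid in <vmdisk-uuid-1> <vmdisk-uuid-2>; do
  # one image per source vmdisk, named after the UUID for traceability
  acli image.create "MIG_${uuid}" container=Destination_Container source_url="nfs://127.0.0.1/Source_Container/.acropolis/vmdisk/${uuid}"
done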
  • Step 4: Attach the disk from the Acropolis image created in Step 3 to the VM.

This can be done from Prism > VM > Table > click on VM EXAMPLE_VM > Update.

Under Disks click on "Add New Disk" and use the following settings:

Type = Disk

Operation = "Clone from Image Service"

Bus Type = as needed

IMAGE = select the image we just cloned (SCSI1_EXAMPLE_VM in our example)

Click on the Add button. 
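The same attach can also be done from acli instead of Prism (a sketch; verify the option names on your AOS version and adjust the bus type as needed):

nutanix@cvm$ acli vm.disk_create EXAMPLE_VM clone_from_image=SCSI1_EXAMPLE_VM bus=scsi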


  • Step 5: Remove the original source disk

Remove the VM disk that is hosted on the original container.

This can be done from Prism > VM > Table > click on VM EXAMPLE_VM > Update.

Under Disks, click on the "X" next to the disk hosted on the undesired container.
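The equivalent removal from acli (a sketch; disk_addr is the bus.index of the original disk as shown in the vm.get output, scsi.1 in our example):

nutanix@cvm$ acli vm.disk_delete EXAMPLE_VM disk_addr=scsi.1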


  • Step 6: (Optional) Remove the cloned vmdisk from the Image Service

This can be done from Prism's Image Configuration Window

Prism > Gear Icon > Image Configuration

Find the image (SCSI1_EXAMPLE_VM in our example) and click on the "X" to remove it.


If for some reason you cannot see the "X" next to the image, you can get around this by removing the image from the CVM command line:

To list the images:

nutanix@cvm$ acli image.list

Image name               Image type  Image UUID

SCSI1_EXAMPLE_VM                     475106ad-9c2a-432a-afa8-f9073c76548c


To delete the image:

nutanix@cvm$ acli image.delete 475106ad-9c2a-432a-afa8-f9073c76548c
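With the cleanup done, remember to power the VM back on:

nutanix@cvm$ acli vm.on EXAMPLE_VM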


This concludes disk relocation on a single AHV cluster. 


Hello. I have long been looking for a solution to this problem. Thank you for the guide!


Me too!

Is there any nutanix work in progress to do it as a container live migration?

If I understood correctly, this feature is now part of AOS 5.19 directly in the UI, right?


As far as I heard, it is now part of AOS 5.19 but not part of the UI directly. Maybe in an LTS build later.


@Mutahir Thanks for the article. It worked for me.

Thanks a ton. The only thing is that it creates a clone without a disk type in the Image Service list, which we can change manually, or we can add the disk type to the cloning command.


Hello Nutanix, is there any plan to allow live VM migrations between storage containers? This is a much appreciated feature! Thanks for any clarification!

Regards,

Didi7

Most likely, we will get it in the near future.


Near future sounds always good!

