How To Create A Disk Image From Volume Group | Nutanix Community
Solved

How To Create A Disk Image From Volume Group

  • January 18, 2021

Piero
  • Trailblazer

Hi all,

I'm attempting to create an image from a disk stored inside a volume group.

While I have successfully tested image creation for standard disks using acli and the API, I couldn't find how to do the same for a disk stored inside a volume group.

As documented here and in other KB articles, these sample commands work:

ACLI

acli image.create image-name container=default-container-nnnn source_url=nfs://127.0.0.1/default-container-nnnn/.acropolis/vmdisk/uuid image_type=kDiskImage


API with curl

curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{
  "annotation": "cloned disk image",
  "image_type": "DISK_IMAGE",
  "name": "image-name",
  "vm_disk_clone_spec": {
    "disk_address": {
      "vmdisk_uuid": "UUID"
    },
    "storage_container_uuid": "UUID"
  }
}
' 'https://PE-IP-ADDRESS:9440/PrismGateway/services/rest/v2.0/images/'
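For scripting, the same v2.0 call can be sketched in Python. This is a minimal sketch: the host, image name, and UUIDs are placeholders, and the request body mirrors the curl example above field for field.

```python
import json
import urllib.request


def build_image_clone_request(pe_ip, name, vmdisk_uuid, container_uuid):
    """Build the v2.0 images POST that clones a vmdisk into an image.

    Mirrors the curl example: same endpoint, same body fields.
    """
    url = f"https://{pe_ip}:9440/PrismGateway/services/rest/v2.0/images/"
    body = {
        "annotation": "cloned disk image",
        "image_type": "DISK_IMAGE",
        "name": name,
        "vm_disk_clone_spec": {
            "disk_address": {"vmdisk_uuid": vmdisk_uuid},
            "storage_container_uuid": container_uuid,
        },
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        method="POST",
    )


# Build (but do not send) a request with placeholder values:
req = build_image_clone_request("PE-IP-ADDRESS", "image-name",
                                "VMDISK-UUID", "CONTAINER-UUID")
# urllib.request.urlopen(req) would submit it (add auth and TLS handling first).
```
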


But when I try to use the UUID of a disk inside a VG, the command fails both from acli and from the API.

Any suggestion/workaround?

Thanks in advance
-Piero

This topic has been closed for comments

9 replies

AnishWalia20
  • Nutanix Employee
  • January 18, 2021

Hey @Piero, did you check KB-8062 (Common VM disk management workflows on AHV cluster)? Also, it would be great if you could add the error message you are getting when using the acli command.


Piero
  • Author
  • Trailblazer
  • January 18, 2021

Hi @AnishWalia20, you're right. I already read that KB article, but let me repeat the whole sequence. In short, I will post the commands used and the error.


AnishWalia20
  • Nutanix Employee
  • January 18, 2021

Hey @Piero, sure, please let me know the error message and I will also research this more in the meantime. :sweat_smile:


Piero
  • Author
  • Trailblazer
  • January 18, 2021

Whoops… my fault: I used the wrong storage container in my test.

To make up for it, I'm going to post the command sequence I used to create disk images both from storage containers and from volume groups.


Piero
  • Author
  • Trailblazer
  • Answer
  • January 18, 2021

# get VM info: I got 2 scsi disks on storage container and one scsi disk on VG
 

$ acli vm.get zzz-back-test-001
zzz-back-test-001 {
  config {
    allow_live_migrate: True
    annotation: "test for vm backup using API"
    disk_list {
      addr {
        bus: "ide"
        index: 0
      }
      cdrom: True
      device_uuid: "4af36e57-f7d8-4a60-8e99-ba9ab09ba06d"
      empty: True
    }
    disk_list {
      addr {
        bus: "scsi"
        index: 0
      }
      container_id: 9
      container_uuid: "c2960ea4-f12e-45cd-bc7a-32d6abdfc61a"
      device_uuid: "a480f6ca-dfe4-48c5-83d0-a3eb1ca0a20a"
      naa_id: "naa.6506b8d26ce248e9e41db36524bf4d07"
      source_vmdisk_uuid: "1efe564d-6304-4c1e-82ad-4c8186ce87e0"
      vmdisk_size: 10737418240
      vmdisk_uuid: "b217d50d-4c13-47f0-a38c-e19c80cf7451"
    }
    disk_list {
      addr {
        bus: "scsi"
        index: 1
      }
      container_id: 1008784
      container_uuid: "39ba803c-8dcf-47d2-8ddb-a25a14fdd9bf"
      device_uuid: "85241664-0537-4f6e-8863-0cddd4e06262"
      naa_id: "naa.6506b8d32384b05a103b97d87d3c20fb"
      source_vmdisk_uuid: "b960e962-4c59-40b0-ab16-293d4e06cf0f"
      vmdisk_size: 1073741824
      vmdisk_uuid: "080bff89-74dd-4342-aa18-33e902035869"
    }
    disk_list {
      addr {
        bus: "scsi"
        index: 2
      }
      volume_group_uuid: "f8ac9d19-7f37-438b-8eb4-719b6dc34e33"
    }
    hwclock_timezone: "UTC"
    machine_type: "pc"
    memory_mb: 1024
    name: "zzz-back-test-001"
    nic_list {
      connected: True
      mac_addr: "50:6b:8d:03:60:e0"
      model: ""
      network_name: "158"
      network_type: "kNativeNetwork"
      network_uuid: "82e50e22-ef90-4214-8271-3baa73a054cb"
      type: "kNormalNic"
      uuid: "7434da86-91a6-4d2b-9073-c15b0eba67d0"
      vlan_mode: "kAccess"
    }
    nic_list {
      connected: True
      mac_addr: "50:6b:8d:ec:b5:bd"
      model: ""
      network_name: "159"
      network_type: "kNativeNetwork"
      network_uuid: "9af52ca8-caee-4c94-b4c8-b9d8faaf93d1"
      type: "kNormalNic"
      uuid: "43ade60a-5683-403b-ae5c-4a1611f16b92"
      vlan_mode: "kAccess"
    }
    nic_list {
      connected: True
      mac_addr: "50:6b:8d:70:5b:1c"
      model: ""
      network_name: "138"
      network_type: "kNativeNetwork"
      network_uuid: "83413344-3708-42bd-bb1b-1783bc5037cf"
      type: "kNormalNic"
      uuid: "66cb027f-5984-4a76-896a-37767a5a0e0d"
      vlan_mode: "kAccess"
    }
    num_cores_per_vcpu: 1
    num_threads_per_core: 1
    num_vcpus: 2
    num_vnuma_nodes: 0
    vga_console: True
    vm_type: "kGuestVM"
  }
  logical_timestamp: 9
  state: "kOff"
  uuid: "6810eecc-b55d-439c-a970-c745a6d66f81"
}
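The VG-backed disk is the one whose disk_list entry carries a volume_group_uuid instead of its own vmdisk_uuid. As a sketch, this can be pulled out of the plain-text acli output above with a regex (the snippet below is trimmed from that output):

```python
import re

# Trimmed disk_list entry from the acli vm.get output above.
VM_GET_SNIPPET = """
disk_list {
  addr {
    bus: "scsi"
    index: 2
  }
  volume_group_uuid: "f8ac9d19-7f37-438b-8eb4-719b6dc34e33"
}
"""


def find_volume_group_uuids(acli_text):
    # Disks backed by a volume group have no vmdisk_uuid of their own;
    # they reference the VG by volume_group_uuid.
    return re.findall(r'volume_group_uuid:\s*"([0-9a-f-]+)"', acli_text)


print(find_volume_group_uuids(VM_GET_SNIPPET))
# → ['f8ac9d19-7f37-438b-8eb4-719b6dc34e33']
```
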

# get storage container info

$ ncli ctr ls id=9 | grep Name
    Name                      : default-container-171248
    VStore Name(s)            : default-container-171248

# create image of scsi disk 0 (to check the command line)

$ acli image.create zzz-back-test-001-scsi0 container=default-container-171248 source_url=nfs://127.0.0.1/default-container-171248/.acropolis/vmdisk/b217d50d-4c13-47f0-a38c-e19c80cf7451 image_type=kDiskImage
zzz-back-test-001-scsi0: pending
zzz-back-test-001-scsi0: complete

# check created image

$ acli image.list
Image name                                          Image type  Image UUID
.....
zzz-back-test-001-scsi0                             kDiskImage  9ea7710d-8964-4628-9417-2ca1016aa67b  
....

$ acli image.get 9ea7710d-8964-4628-9417-2ca1016aa67b
zzz-back-test-001-scsi0 {
  architecture: "kX86_64"
  container_id: 9
  container_uuid: "c2960ea4-f12e-45cd-bc7a-32d6abdfc61a"
  create_time: "Monday January 18 2021, 03:40:11 PM"
  file_uuid: "4b62766a-e728-4d1f-b5ff-7095b3412f4d"
  image_source {
    source_url: "nfs://127.0.0.1/default-container-171248/.acropolis/vmdisk/b217d50d-4c13-47f0-a38c-e19c80cf7451"
  }
  image_state: "kActive"
  image_type: "kDiskImage"
  logical_timestamp: 0
  name: "zzz-back-test-001-scsi0"
  owner_cluster_uuid: "000597eb-aa13-1c72-0000-000000029cf0"
  update_time: "Monday January 18 2021, 03:40:11 PM"
  uuid: "9ea7710d-8964-4628-9417-2ca1016aa67b"
  vmdisk_size: 10737418240
  vmdisk_uuid: "ff7449de-c486-47b8-9a52-e2e70f861420"
}

Now let's try with the VG:

# get VG info

$ acli vg.get f8ac9d19-7f37-438b-8eb4-719b6dc34e33
zzz-vg-test {
  annotation: "pbackup vg test"
  attachment_list {
    vm_uuid: "6810eecc-b55d-439c-a970-c745a6d66f81"
  }
  disk_list {
    container_id: 9
    container_uuid: "c2960ea4-f12e-45cd-bc7a-32d6abdfc61a"
    flash_mode: False
    index: 0
    vmdisk_size: 10737418240
    vmdisk_uuid: "773a06b3-83e9-41c1-b12f-d76db869bbc8"
  }
  flash_mode: False
  iscsi_target_name: "zzz-vg-test-f8ac9d19-7f37-438b-8eb4-719b6dc34e33"
  logical_timestamp: 2
  name: "zzz-vg-test"
  shared: True
  uuid: "f8ac9d19-7f37-438b-8eb4-719b6dc34e33"
}
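From here the source_url is built exactly as for a regular disk: the container name plus the vmdisk_uuid from the VG's disk_list. A small helper sketch, using the container name and UUID from the outputs above:

```python
def vg_disk_source_url(container, vmdisk_uuid):
    # Same .acropolis/vmdisk path as a regular VM disk; the only difference
    # is that the vmdisk_uuid comes from vg.get instead of vm.get.
    return f"nfs://127.0.0.1/{container}/.acropolis/vmdisk/{vmdisk_uuid}"


url = vg_disk_source_url("default-container-171248",
                         "773a06b3-83e9-41c1-b12f-d76db869bbc8")
print(url)
# → nfs://127.0.0.1/default-container-171248/.acropolis/vmdisk/773a06b3-83e9-41c1-b12f-d76db869bbc8
```
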

# test image creation from VG

$ acli image.create zzz-back-test-001-scsi2 container=default-container-171248 source_url=nfs://127.0.0.1/default-container-171248/.acropolis/vmdisk/773a06b3-83e9-41c1-b12f-d76db869bbc8 image_type=kDiskImage
zzz-back-test-001-scsi2: pending
zzz-back-test-001-scsi2: complete

# check created image

$ acli image.list
Image name                                          Image type  Image UUID
zzz-back-test-001-scsi0                             kDiskImage  9ea7710d-8964-4628-9417-2ca1016aa67b  
zzz-back-test-001-scsi2                             kDiskImage  f4c487e5-3597-431a-9e28-4bb5a17b14a1

$ acli image.get f4c487e5-3597-431a-9e28-4bb5a17b14a1
zzz-back-test-001-scsi2 {
  architecture: "kX86_64"
  container_id: 9
  container_uuid: "c2960ea4-f12e-45cd-bc7a-32d6abdfc61a"
  create_time: "Monday January 18 2021, 03:49:01 PM"
  file_uuid: "61b3b77d-ca38-4a24-8769-54226be4b5b5"
  image_source {
    source_url: "nfs://127.0.0.1/default-container-171248/.acropolis/vmdisk/773a06b3-83e9-41c1-b12f-d76db869bbc8"
  }
  image_state: "kActive"
  image_type: "kDiskImage"
  logical_timestamp: 0
  name: "zzz-back-test-001-scsi2"
  owner_cluster_uuid: "000597eb-aa13-1c72-0000-000000029cf0"
  update_time: "Monday January 18 2021, 03:49:01 PM"
  uuid: "f4c487e5-3597-431a-9e28-4bb5a17b14a1"
  vmdisk_size: 10737418240
  vmdisk_uuid: "019f3260-3780-4e97-b74e-b843abfecf6b"
}
Done


Piero
  • Author
  • Trailblazer
  • January 18, 2021

BTW: as explained in the KB articles, the images can now be downloaded with sftp or curl.

In my test I found that on 10Gb adapters sftp is much slower than a curl API call: between 17 and 22 MB/s with sftp and close to 210 MB/s with curl (same cluster, same VLAN).

# sftp test

sftp -P 2222 user@CVM-IP
Connected to CVM-IP.
sftp> cd /OsImages/.acropolis/vmdisk
sftp> get f4c487e5-3597-431a-9e28-4bb5a17b14a1
Fetching /OsImages/.acropolis/vmdisk/f4c487e5-3597-431a-9e28-4bb5a17b14a1 to f4c487e5-3597-431a-9e28-4bb5a17b14a1
/OsImages/.acropolis/vmdisk/f4c487e5-3597-431a-9e28-4bb5a17b14a1                    100% 1986MB  17.5MB/s   01:53
sftp> bye 

# curl test

curl --noproxy "*" --insecure -u pbackup:pbackup-password \
  -X GET --header "Accept: application/json" \
  "https://PE-IP-ADDRESS:9440/api/nutanix/v3/images/f4c487e5-3597-431a-9e28-4bb5a17b14a1/file" \
  -o /store/zzz-back-test-001-scsi2.raw

  % Total   % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0G  100 10.0G    0     0   234M      0  0:00:43  0:00:43 --:--:--  248M 
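For unattended transfers, the same v3 file endpoint can be driven from Python; below is a minimal sketch with streamed, chunked writes. The host, credentials, and output path are placeholders, and certificate verification is disabled only to match the --insecure flag above.

```python
import ssl
import urllib.request


def image_file_url(pe_ip, image_uuid):
    # v3 endpoint that serves the raw disk image, as in the curl example.
    return f"https://{pe_ip}:9440/api/nutanix/v3/images/{image_uuid}/file"


def download_image(url, dest, user, password, chunk_size=1 << 20):
    """Stream the image to dest in 1 MiB chunks (avoids buffering 10G in RAM)."""
    ctx = ssl._create_unverified_context()  # matches curl --insecure; avoid in production
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr),
        urllib.request.HTTPSHandler(context=ctx),
    )
    with opener.open(url) as resp, open(dest, "wb") as out:
        while True:
            block = resp.read(chunk_size)
            if not block:
                break
            out.write(block)


url = image_file_url("PE-IP-ADDRESS", "f4c487e5-3597-431a-9e28-4bb5a17b14a1")
# download_image(url, "/store/zzz-back-test-001-scsi2.raw",
#                "pbackup", "pbackup-password")
```
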

 


AnishWalia20
  • Nutanix Employee
  • January 19, 2021

Hey @Piero, I am really glad it worked out, mate. :wink:

Thanks a lot for sharing the outputs too.

As you mentioned, SFTP is slower than curl; there are 3 primary reasons for this:

  1. Encryption. Though symmetric encryption is fast, it is not fast enough to go unnoticed. If you are comparing speeds on a fast network (100 Mbit or larger), encryption becomes a brake on your process.
  2. Hash calculation and checking.
  3. Buffer copying. SFTP running on top of SSH causes each data block to be copied at least 6 times (3 times on each side) compared to plain FTP, where data can, in the best case, be passed to the network interface without being copied at all. Each block copy takes a bit of time as well.

Mainly due to the above, SFTP is slower. :slight_smile:

Let me know if I can help in any other way.


Piero
  • Author
  • Trailblazer
  • January 19, 2021

Hi @AnishWalia20, now I understand the speed difference between sftp and curl better. That's all for the moment. Thanks for the support :sunglasses:


AnishWalia20
  • Nutanix Employee
  • January 19, 2021

That's great, @Piero. Glad your doubts were cleared. Good luck, mate. :smile: