I'm trying to create an image from an NFS datastore (which is a Nutanix container). I tried the following command:
image.create disk2 source_url=nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk container=test
I got this error:
kUploadFailure: Unable to fetch metadata for image nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk, 1: (, qemu-img: Could not open 'nfs://127.0.0.1/esc/machine/machine_1-flat.vmdk': Could not open 'nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk': Unknown error 1358954496
Creating an image from the VM's first disk works. The first disk is 40 GB; the second is 1.4 TB.
best regards, arno
I did some grepping around in the bug database, and it appears that large image imports are a known issue in 4.5.x that has been fixed in 4.6, due out relatively soon.
Please submit a support ticket, and they can confirm this is the actual issue, advise you on the 4.6 release date, and then help you upgrade when it comes out!
Did you ever submit a support ticket to confirm this?
Also, for the sake of posterity, the internal bug I was looking at is ENG-40701, which you can reference in a support ticket along with this thread URL.
OK, thanks. 4.6 is right around the corner. I'm on our internal "Transfer of Information" WebEx right now, which is something we do right before the final release of code.
Have you tried:
1. Creating a filesystem whitelist for the container?
2. Changing the command as in the example below?
image.create disk2 source_url=nfs://10.1.1.2/esx/machine/machine_1-flat.vmdk container=test
where 10.1.1.2 is the CVM IP.
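For reference, the two steps above would normally be run from a CVM shell. This is a hedged sketch only: the exact `ncli` whitelist subcommand and the example subnet are assumptions to verify against the command reference for your AOS version.

```shell
# Step 1 (assumed syntax; check 'ncli cluster' help on your AOS version):
# allow the client subnet to reach the container over NFS.
ncli cluster add-to-nfs-whitelist ip-subnet-masks=10.1.1.0/255.255.255.0

# Step 2: retry the import with source_url pointing at a CVM IP
# (10.1.1.2 here) instead of the 127.0.0.1 loopback address.
acli image.create disk2 \
    source_url=nfs://10.1.1.2/esx/machine/machine_1-flat.vmdk \
    container=test
```

The idea behind step 2 is that qemu-img on the CVM resolves the NFS URL itself, so the address must be one the CVM can actually mount, not loopback.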
I'm having a similar issue, but in my case I'm trying to bring a VHD from an NFSv3 share into AHV. My version is 5.1.3 with AHV 20160925.90. The disk size is 2 TB with 400 GB of written data. The message I'm getting is:
Unable to fetch metadata for image nfs://<path>.tmp
qemu-img: Could not open nfs://<path>.tmp
qemu-img: Could not open nfs://<path>.tmp: File Too Large
@marcio - Can you open a support ticket so we can dig in with you?