image.create


Badge +3
Hi!

I'm trying to create an image from an NFS datastore (which is a Nutanix container). I tried the following command:

image.create disk2 source_url=nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk container=test

I got the Error:

kUploadFailure: Unable to fetch metadata for image nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk, 1: (, qemu-img: Could not open 'nfs://127.0.0.1/esc/machine/machine_1-flat.vmdk': Could not open 'nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk': Unknown error 1358954496

When I created an image from the first disk of the VM, it worked. The first disk is 40 GB; the second is 1.4 TB.
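For what it's worth, you can check whether the source file itself is readable before running image.create, assuming a qemu-img build with libnfs support (the path below is the one from the command above):

```shell
# Sanity-check the source disk directly with qemu-img.
# If this fails too, the problem is reading the file over NFS,
# not the image.create command itself.
qemu-img info 'nfs://127.0.0.1/esx/machine/machine_1-flat.vmdk'
```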

Any hints?

best regards, arno

10 replies

Userlevel 7
Badge +30
What version of Nutanix OS are you running? Also, what version of Acropolis Hypervisor?
Badge +3
Hi 

Nutanix OS 4.5.1
AHV 20151109
Userlevel 7
Badge +30
Hi 

Did some grepping around in the bug database, and it appears that the large image import is a known issue in 4.5.x, and has been fixed in 4.6, due out relatively soon.

Please submit a support ticket, and they can confirm this is the actual issue, advise you on the 4.6 release date, and then help you upgrade when it comes out!
Userlevel 7
Badge +30
Did you ever submit a support ticket to confirm this?

Also, for the sake of posterity, the internal bug I was looking at is ENG-40701, which you can reference in a support ticket along with this thread URL.
Badge +3
Yes, I opened a support ticket, and Dell support told me that this is a known issue and that I should wait for the 4.6 release.

arno
Userlevel 7
Badge +30
OK, thanks. 4.6 is right around the corner. I'm on our internal "Transfer of Information" WebEx right now, which is something we do right before the final release of code.
Userlevel 2
Badge +16
Have you tried:

1. Creating a filesystem whitelist for the container?
2. Changing the command to the example below:

image.create disk2 source_url=nfs://10.1.1.2/esx/machine/machine_1-flat.vmdk container=test

Note: 10.1.1.2 is the CVM IP.
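The whitelist step above can be done from ncli; the subnet below is an example, so adjust it to wherever your source NFS client/server lives:

```shell
# Add the source subnet to the cluster filesystem (NFS) whitelist
# so the container accepts external NFS access.
# 10.1.1.0/255.255.255.0 is a placeholder subnet/netmask.
ncli cluster add-to-nfs-whitelist ip-subnet-masks="10.1.1.0/255.255.255.0"
```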
Badge +6
Hi Jon,

I'm having a similar issue, but in my case I'm trying to bring a VHD from an NFSv3 share into AHV. My AOS version is 5.1.3 and AHV is 20160925.90. The disk size is 2 TB with 400 GB of written data. The message that I'm getting is:

Unable to fetch metadata for image nfs://.tmp
qemu-img: Could not open nfs://.tmp
qemu-img: Could not open nfs://.tmp: File Too Large

Any clue?



Userlevel 7
Badge +30
marcio - Can you open a support ticket so we can dig in with you?
Badge +6
Hi Jon

A ticket was opened: 00256525.
