Solved

Does CE 2018.01.31 support nvme+hdd installation?

  • 23 February 2018
  • 21 replies
  • 23 views

Userlevel 1
Badge +10
I'm trying to install the latest CE version, CE 2018.01.31, on a home-built PC with the following hardware:

-ASRock Z270 with built-in Intel Ethernet
-CPU i7-8700
-RAM 32GB DDR4
-Samsung 960 NVMe 512GB
-WD 2TB HDD
-8GB USB stick

It fails at the last stage, when it cannot start the CVM. There are a lot of retries in firstboot.log.

I wonder if I can install CE on an NVMe+HDD config at all. Do I need to add an SSD to the config?

If I can't install CE with the above config, is it possible to swap in an SSD in place of the NVMe?


Regards,
Nemat

Best answer by Nemat 26 February 2018, 15:58


This topic has been closed for comments

21 replies

Userlevel 1
Badge +10
Today I replaced the NVMe with an SSD and tried to install CE again.
Same error: the CVM cannot start for the first time.

Nothing suspicious in the logs (/tmp/NTNX.serial.out.0 and firstboot.out).

But this time the CVM comes up and I can SSH into it. The installation script just cannot find the /tmp/svm* file.

Going to try installing an older version, 2017.07.20, and see if the same error occurs.


Regards,
Nemat
Userlevel 6
Badge +16
It works for some NVMe disks but not all; we've had problems with the Samsung 960 EVO NVMe, where installation was not possible.
Userlevel 1
Badge +10
As it turns out, I can't get it to work even without the NVMe... :-(
So going to explore the old version.
Userlevel 6
Badge +16
Do you have a regular SSD? Nutanix needs one fast drive (SSD or NVMe).
Userlevel 1
Badge +10
Primzy,

Yes. As I wrote above, I swapped the NVMe for a regular (SATA) 500GB SSD.

My config fits all CE requirements: regular Intel CPU, 32GB RAM, HDD 2TB, SATA SSD > 200GB, Intel network, USB stick as an installation medium.
Userlevel 1
Badge +10
The old CE version (2017-07-20) installed successfully. ;-)

Methinks something is wrong/changed in the latest CE.
Now I've put the NVMe back in and am trying to install CE again. Fingers crossed 🙂
Userlevel 1
Badge +10
The 2017-07-20 version of CE installed successfully with the NVMe. I have not created a cluster yet, but the CVM started at the end of the installation and is running again after a host restart.

Will try to create a cluster and do an upgrade to latest CE.
Userlevel 1
Badge +10
@aluciani @Jon

Do you guys realize that the latest CE release broke something and cannot be installed on NVMe+HDD configs, even when all other CE requirements are met?

In case you need logs/screenshots/etc let me know.

Regards,
Nemat
Userlevel 7
Badge +34
Thanks @Nemat I have a few folks looking into this at the moment. Will get back to you.
Userlevel 1
Badge +1
Hi,

I've run into this same issue - is there any update on a resolution?

Thanks,
Tim
Userlevel 1
Badge +10
@aluciani
FYI, a few days ago I tested the latest 2018.05.01 version; it still does not work.

It turns out that for people with NVMe+HDD hardware, 2017-07-20 is still the version to go with.
Userlevel 7
Badge +34
Hi @Nemat @TimBoothby

We have a new release coming out shortly which should fix a lot of these types of issues. Stay tuned. Thanks 👍
Badge +6
Any news about the new release?
Userlevel 1
Badge +3
About NVMe - I've recently dug into the problem here, and found out that the 20180501 version makes the CVM get the NVMe drive twice: once as a hostdev entry and once as a disk entry. Remove or comment out the hostdev entry, redefine the CVM domain, and check whether it starts up. This can be an issue with 20180131 as well. 20170720 passed the NVMe through as a PCIe device only, as its AHV didn't see an nvme0n1 device.
Userlevel 1
Badge +3
Update: the ISO installer passes the NVMe into the VM twice; the image installer only once, but as a hostdev, while the VM expects a disk-type device instead. (This looks like a CE vs. paid-edition issue, as the image installer says "CE is using LUN passthrough instead of PCI passthrough", but the CVM XML file is created with PCI passthrough.) So, in order to launch the 20180501 CE version installed via image, you need to add a disk-based device for /dev/nvme0n1 into the CVM, remove the hostdev entry, redefine the domain, reboot the AHV, then try starting the CVM.
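For reference, the duplicate hostdev entry in the CVM's domain XML looks roughly like this. This is a sketch based on standard libvirt PCI-passthrough syntax, not a dump from an actual CE install; the PCI address values are placeholders and will differ on your system. This is the block to remove or comment out:

```xml
<!-- PCI passthrough of the NVMe controller (hypothetical address values) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- replace with the NVMe controller's real PCI address, as shown by lspci -->
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```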
@aluciani

Is there any news on an update version that will work with NVMe + HDD configurations?

Thanks
James
Userlevel 7
Badge +34
Hi @RandomNord I'll ping the team and let you know - Thanks
Userlevel 7
Badge +34
Hi @RandomNord

I spoke with the team and at this moment - NVMe + HDD is not yet supported

Development teams are working on this.
Userlevel 1
Badge +1
Hi,

A big thanks to Maxim for sharing his insights above. I've managed to get 20180501 up and running on nvme + hdd hardware. For the benefit of anyone who is new to KVM I thought I'd share the detailed process I used.

Run a clean install from USB in the normal way. At the end of the installation it will hang on "Waiting for CVM to start".

Log onto the host (root / nutanix/4u)

Find CVM Name & Shutdown
code:
  virsh list --all
  virsh destroy [name]


Dump out CVM Definition XML and make a copy to edit
code:
  virsh dumpxml [name] > dump.xml
  cp dump.xml dump-edited.xml


Undefine CVM
code:
  virsh undefine [name]


Edit XML to remove hostdev & add disk
code:
  nano dump-edited.xml


Delete the entire hostdev block

Add this beneath the existing disks, ensure the unit number is unique (sorry for the image, I couldn't get the forum to display XML)
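Since the screenshot didn't survive, here is a libvirt LUN-passthrough disk entry of roughly the shape described. This is a sketch assuming /dev/nvme0n1 and the CVM's existing SCSI controller, not the original image; the target dev and unit number are placeholders and must not clash with the CVM's existing disks:

```xml
<!-- sketch: pass /dev/nvme0n1 through as a SCSI LUN; 'sdc' and unit='2' are placeholders -->
<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='sdc' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
```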


Reboot the host so it can see the NVMe, and confirm /dev/nvme0n1 is visible

Define CVM, start it & set to autostart
code:
  virsh define dump-edited.xml
  virsh start [name]
  virsh autostart [name]


Continue the install as normal - SSH onto the CVM to create the cluster, etc.
Userlevel 1
Badge +16
It is working for some NVMe disks but not all, we've had problem with Samsung 960 EVO NVMe as installation was not possible.

I'm using that exact NVMe drive in my home lab ... I've noticed that most of the problems usually come from the network driver, especially on 7th-gen NUCs.
Userlevel 1
Badge +16
Hi @RandomNord

I spoke with the team and at this moment - NVMe + HDD is not yet supported

Development teams are working on this.


This is kind of breaking news ... so it's NVMe + SSD only. Thanks! Now I understand why my friend's lab was not working.