Nutanix CE installs but CVM does not create cluster in nested VMware environment.
Hi,
I successfully deployed three nodes as VMs in my nested VMware environment without any issues. All nodes can ping each other, and the CVMs are also pingable from each other.
But when I create a cluster from a CVM, cluster creation fails: Medusa and the services listed below it show DOWN status.
Please help.
Regards,
Gurdeep Sandhu
What are the specs of the virtual machines, and what is the spec of the VMware server?
In short, for each CE VM you need to add the advanced parameter disk.EnableUUID = TRUE. Otherwise it can affect the Medusa service during the cluster creation process (the entry is shown below).
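For reference, a minimal sketch of how that entry typically looks in the CE VM's .vmx file (power the VM off before editing the file directly, or set it via VM Options > Advanced > Edit Configuration in vSphere):

```
# Advanced parameter in the CE VM's .vmx file:
disk.EnableUUID = "TRUE"
```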
Hi,
I created 3 VMs on my ESXi hypervisor with the configuration below:
4 vCPUs (HW virtualization enabled)
32 GB memory
SCSI controller: VMware Paravirtual
HDD1 - 16 GB, thin provisioned, SATA controller 0, used for hypervisor boot
HDD2 - 200 GB, thin provisioned, SATA controller 0, used for data
HDD3 - 500 GB, thin provisioned, SATA controller 0, used for CVM
disk.EnableUUID set to TRUE
1 VMXNET3 network adapter
on a vSphere Standard Switch: Promiscuous mode = Accept, MAC address changes = Accept, Forged transmits = Accept (see the sketch after this list)
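For anyone applying the same vSwitch security policy from the ESXi shell instead of the UI, a minimal sketch (assuming the standard switch is named vSwitch0; adjust to your environment):

```
# Allow promiscuous mode, MAC address changes, and forged transmits
# on the standard switch carrying the nested CE VMs:
esxcli network vswitch standard policy security set \
  --vswitch-name=vSwitch0 \
  --allow-promiscuous=true \
  --allow-mac-change=true \
  --allow-forged-transmits=true

# Verify the resulting policy:
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```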
Big thanks. With your step-by-step guide, I was able to deploy the cluster.
I was able to log in to Prism Element the first time and also registered for Nutanix NEXT. Now I am facing a Prism Element login issue and getting the error: Server not reachable. All CVMs are pingable from each other, and the first login worked perfectly.
Please help.
If you just started the cluster, then wait a couple of minutes more. If the cluster has been running for days, try issuing a cluster start command on a CVM (see the sketch below).
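A minimal sketch of checking and restarting services from a CVM (cluster status and cluster start are standard CVM commands; the exact output varies by CE/AOS version):

```
# From any CVM, check which services are up or down:
cluster status

# Check the Genesis service state on the local node:
genesis status

# Start any stopped services across the cluster:
cluster start
```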
Hi,
I deployed the cluster a few days back. While logging in, it throws an error: server not reachable.
Why is the cluster start process taking so long? It's taking around 30 minutes or more to start all the services. Is this the default behaviour, or do I have to make some changes? Please guide.
Thanks.
That depends on the physical hardware. Yes, it can take a while to start everything up.
I agree with Jeroen. It can depend on the underlying hardware, especially on the disks.
I faced the same thing on non-nested CE. The simplest indicators that something is running slow:
Check the metadata ring with nodetool -h0 ring - nodes may be in the Down state.
Check disk utilization and await times from the CVMs with sar -dp or iostat (see the sketch below).
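A minimal sketch of those checks as run from a CVM (the interval/count arguments are illustrative; sar and iostat come from the sysstat package, which the CVM already has given that sar works):

```
# Show the Cassandra metadata ring; every node should report Up / Normal:
nodetool -h0 ring

# Per-device utilization and await times, 5 samples at 2-second intervals:
sar -dp 2 5

# Alternative extended view; watch the await and %util columns:
iostat -x 2 5
```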