
Hi Team,

 

I recently installed Nutanix CE on a Dell PowerEdge R430. I am facing an issue while trying to create the network configuration needed for VM creation. My lab network is shown below.

I have interfaces eth0 through eth7. eth4 is connected to the network 10.146.20.0/24, and I would like to configure the same in Nutanix so that the VMs I create land in that subnet. I tried creating a subnet and booted a Linux instance to check reachability, but the instance cannot reach the gateway.

 

I reached out to my lab team, and they suggested having eth4 alone in a bond. To do that, I tried to assign eth4 alone to vs0, but I was unable to.

Could you please let me know the series of steps I need to follow so that I can successfully create VMs?

 

Thanks,

Update vs0 and make sure it is set like this:

 

And at the bottom, make sure only your connected interface is selected. If this is a single-node cluster, make sure all VMs (except the CVM) are shut down before you save the new vs0 setup.

When the virtual switch is set up correctly with only the eth4 interface, create a new subnet with these settings:

 

Of course, choose your own name. If you are not using a VLAN, or you are using the native VLAN on the switch, set the VLAN ID to 0 (zero). Now you can place your virtual machines in this subnet.
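For reference, the same subnet can also be created from any CVM with acli. The network name and addresses below are examples rather than values from this thread, and the exact ip_config/DHCP-pool syntax can vary between AOS versions, so treat this as a sketch to verify against your release:

```shell
# Sketch (example values): create a subnet with VLAN 0 (untagged) and enable
# IP address management with a gateway/prefix and a DHCP pool.
acli net.create lab-net vlan=0 ip_config=10.146.20.1/24
acli net.add_dhcp_pool lab-net start=10.146.20.100 end=10.146.20.200
```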


Hi ​@JeroenTielen ,

 

I have deleted the existing VMs except the CVM, and I have deleted the subnets as well. I see that the changes made to vs0 failed; the reason is shown below.

 

The reason for failure displays: "Failed to apply VirtualSwitch on Canary node for Virtual Switch: None"

 

 


Digging into this can take a while. If you want to be up and running quickly, then stop the cluster (cluster stop), destroy the cluster (cluster destroy), run crashcart (select only eth4), and recreate the cluster.

 

Destroying the cluster will destroy all data on it, but you don't have any VMs on it anymore, so that doesn't matter.

 

Crashcart: https://www.jeroentielen.nl/the-nutanix-network-crashcart-a-hidden-gem/

 

You can also update the virtual switch manually via the CLI with these steps:

Log in via the console (iDRAC) on AHV, jump to the CVM (ssh into 192.168.5.2), and set the virtual switch via:

manage_ovs --bridge_name br0 --interfaces eth4 --require_link=false --bond_mode none --mtu=1500 update_uplinks 

(I don't know 100% whether the command is correct 😉 but you will find out.)

Reboot the node. 


Hi @JeroenTielen, thanks for helping me with this.

 

I have followed the steps below:

 

  1. I tried to modify it via the CLI first; it says bond_mode cannot be none, it has to be active-backup, balance-slb, or balance-tcp.

 

I tried active-backup, and then I could not access the hypervisor either. AHV booted but with no luck, so I started the installation process again from scratch.

 

  2. When I tried the first method, i.e., using crashcart, I was unable to log in to AHV with the root login; the default credentials did not work. I then logged in to the CVM and tried to log in to AHV as admin. Although I changed the password for root, I still could not log in, so crashcart could not be executed. (I did not try sudo from admin.)

 

  3. Since I chose to install from the beginning, I am currently unable to create the CVM again; below is the error message. Please help me with this.

 


I never did a "none" bond mode via the CLI 😉 can you remove the bond_mode parameter completely?
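For what it's worth, removing the parameter would make the earlier command look like the sketch below. This is untested, per the suggestion above, and only illustrates the one change (dropping bond_mode):

```shell
# Untested sketch: the earlier manage_ovs command with bond_mode removed,
# run from the CVM (ssh to 192.168.5.2 from AHV):
manage_ovs --bridge_name br0 --interfaces eth4 --require_link=false --mtu=1500 update_uplinks
```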


Hi @JeroenTielen, I added steps 2 & 3 to the message above. Please let me know how to proceed.


I don't know what you mean. So you want to do the crashcart setup? If so, did you shut down and destroy the cluster already?

 

In step 2, on the screenshot, I see you changed the password successfully. So I don't get your problem.

The screenshot in step 3 is confusing. Or you are using the wrong IP. ;)


Hi @JeroenTielen, I have fixed all the issues mentioned above and am now at the point of updating the interfaces.

 

The manage_ovs command failed as mentioned above. Please advise.


As I said before, I don't know the command for a single-interface setup ;)

 

But the easiest thing to do now is just start over.

So stop your running cluster: cluster stop

Destroy the cluster: cluster destroy (this will take a while, just let it run; all your data is gone, but you already said that is not a problem)

Run crashcart from idrac: https://www.jeroentielen.nl/the-nutanix-network-crashcart-a-hidden-gem/

Create the cluster: cluster ………… create
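Put together, the rebuild sequence run from the CVM looks roughly like this. The create arguments are not spelled out above, so the single-node form and the IP below are assumptions; check `cluster --help` on your CE version:

```shell
cluster stop                      # stop cluster services (all user VMs must be off)
cluster destroy                   # wipes ALL data on the cluster; takes a while
# ...run crashcart from iDRAC and select only the NIC with an active link...
cluster -s 192.168.20.31 create   # assumption: single-node create with the CVM IP
```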


Hi @JeroenTielen,

 

I am following the steps you mentioned regarding crashcart. I could proceed up to the crashcart configuration, but I do not see interface eth4 marked as yes, even though connectivity is provided to it.

 

The blue RJ45 cable is connected to eth4, through which I can access the network.

Thanks,


Ahh, there is the issue. eth0 is the one connected with an active link (as you can see in the crashcart screenshot yourself). The numbers on the physical node do not correspond with the software; always verify this.


Hi @JeroenTielen,

 

Even after selecting eth0 for the virtual switch, I still see the error below.

Its associated crashcart configuration is shown below.

 

Thanks,


Don't try to change permissions on files. Never do that within Nutanix installations.

Your issue can be fixed with this:

 

chattr -i /etc/resolv.conf
sed -i 's/"monitoring_url_root": "[^"]*",/"monitoring_url_root": "",/' /root/firstboot/first_boot_config.json

(The sed line gets truncated on the forum, but it is one line beginning with sed and ending with .json.)
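To see what that sed line does, here is the same substitution applied to a hypothetical one-line sample file (the URL is invented; on the node the real file is /root/firstboot/first_boot_config.json):

```shell
# Write a hypothetical sample resembling the real config entry:
printf '{"monitoring_url_root": "https://example.invalid/monitor",}\n' > /tmp/fbc_sample.json

# Same substitution as above: blank out the monitoring_url_root value.
sed -i 's/"monitoring_url_root": "[^"]*",/"monitoring_url_root": "",/' /tmp/fbc_sample.json

cat /tmp/fbc_sample.json   # prints {"monitoring_url_root": "",}
```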

 

Then run crashcart again. 


Hi @JeroenTielen,

 

With the above commands, I can see eth0 successfully associated with vs0 with no uplink bond after creating the cluster. I created a subnet with IP address management enabled, and I can now successfully deploy VMs and they are reachable. Thank you very much.


Great news. Have fun with it. 👍👍👍

