Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,144 Topics
- 3,122 Replies
Karbon 2.3 DVP deployment issues (centralised Prism VPN Site2Site)
Hi there, we are currently deploying (and testing) Karbon as the K8s orchestration platform for all our Nutanix platforms (worldwide). I have failed installation attempts from Prism Central. As a side note, we are using a Site2Site VPN from the centralised Prism Central, and I can reach the remote K8s VLAN from the distant Nutanix deployment. The reachability tests were done with a small VM belonging to the K8s VLAN of the test node, via private CIDR (bidirectional tests), over the same encrypted VPN channel. This is what I'm seeing in karbon_core.out (PCVM):

2021-10-13T21:17:12.687Z ssh.go:153: [DEBUG] [k8s_cluster=RGS-PA-K8-STAGING] On 10.20.25.130:22 executing: docker plugin inspect nutanix
2021-10-13T21:17:12.825Z ssh.go:166: [WARN] [k8s_cluster=RGS-PA-K8-STAGING] Run cmd failed: Failed to run command: on host(10.20.25.130:22) cmd(docker plugin inspect nutanix) error: "Process exited with status 1", output: "Error: No such plugin: nutanix\n\n"
2021-10-13T21:17:12.825Z sshutils.go:44: [ERROR] [k8s_clust
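The "No such plugin: nutanix" error means the Nutanix Docker volume plugin never got installed on that worker. A minimal check from a machine that can reach the worker VLAN (hedged: the IP comes from the log above, and Karbon node SSH normally uses the key pair provided via karbonctl rather than a password):

    ssh nutanix@10.20.25.130          # Karbon node SSH (key-based)
    docker plugin ls                  # should list the Nutanix volume plugin
    nc -zv <data-services-ip> 3260    # the plugin needs iSCSI to the cluster's data services IP

If TCP 3260 to the data services IP is blocked over the VPN, the plugin install fails even though plain ping/SSH works, which would match the symptoms here.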
License Expiry Question
Hi all, we’ve got a cluster on which we have recently not renewed support/maintenance, as we are in the process of decommissioning it; however, we still have some workloads lagging behind being decommissioned. I noticed today that the license expiry is approaching (November 2021). What will happen once the license expires? Cheers, Jason
Warning - Hosts not connected to vCenter
Hi, I’m new to Nutanix with ESXi. I recently deployed a 4-node Nutanix cluster with ESXi. I’ve created containers and added the nodes to vCenter, but I have not added vCenter to Prism. I checked Prism and there’s a warning saying the hosts are not connected to vCenter. Has anyone encountered this issue before?
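That warning normally clears once vCenter is registered in Prism (gear icon > vCenter Registration). A quick way to confirm the cluster's view of hypervisor/vCenter connectivity from any CVM (hedged: run_all covers the vCenter checks among many others):

    ncc health_checks run_all

After registering vCenter, the alert should resolve on the next check interval.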
Nutanix CE install failing
I have a used NX-3060-G5 with 4 nodes, and I am trying to install the new CE 2.0 on it. Each node is configured with 2x E5-2650 v4 and 4x 32 GB sticks of RAM, BMC version 3.94, BIOS version G4G5t8.0. They all have the same 64 GB SATADOM, a 512 GB SSD, and a 1 TB HDD. I had used nodes A and B as test nodes originally; then I tried to reinstall the new CE 2.0 on them, and I keep getting failures at 2402/2430 "hypervisor installation in progress", with a readout to please take a look at the installer_vm_*.log inside the foundation logs to debug hypervisor installation; then it restarts the install process and repeats. This installed just fine on 2 of the nodes, but they were not a reinstall of CE; they had VMware running on them previously. I have tried installing the latest Supermicro BIOS and get the same issue. I have screenshots of the logs that I could find; not sure if I can post them on here. Just seeing if anyone may be able to assist or point me in the right direction to find out why this keeps failing when al
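Reinstalls over a previous CE install often fail because of leftover partitions and cluster metadata on the disks, while nodes coming from ESXi are effectively clean. A minimal sketch (hedged: device names are examples; confirm with lsblk before wiping anything):

    lsblk                      # identify SATADOM / SSD / HDD
    wipefs -a /dev/sda         # SATADOM (example device)
    wipefs -a /dev/sdb         # SSD (example device)
    wipefs -a /dev/sdc         # HDD (example device)

To locate the log the installer mentions (the path varies between CE builds):

    find / -name 'installer_vm_*.log' 2>/dev/null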
Adding disks to storagepool on CVM
Hi all, I’m wondering if someone can help me out with my first install of a Nutanix CE single-node cluster on ESXi. The installation went fine, and I followed this link to make it work: https://vmik.net/2021/01/26/nutanix-ce-install-esxi-2021/ I see that I only have 1 disk in the storage pool. Is there a way to add a disk to the CVM, or do I have to mount it first? Can someone help me out? Many thanks.
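After attaching an extra virtual disk to the CVM in ESXi (and rebooting the CVM so it sees the device), you can check whether AOS has picked it up; unassigned disks can then be added to the pool from Prism's hardware page. A quick check from the CVM (hedged: standard ncli, output formats vary by AOS version):

    ncli disk ls            # disks AOS knows about
    ncli storagepool ls     # current pool membership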
Host updates, firmware, bios, ESX, etc
Our current Nutanix infrastructure has been a bit neglected for a while; AOS, PC, NCC, etc. have been kept up to date, but not firmware/BIOS or the hypervisor, so I need to get these done. The current infrastructure is nothing massive: two clusters, one 4-node and one 3-node, running on NX-8035-G6 and NX-8035-G7, both on AOS 5.20.4. We have one PC running at the latest version, sitting on a separate management vSphere cluster along with vCenter. The hypervisor is currently ESXi 7.0 Update 2a (17867351) on all hosts; this is Standard Edition, so NO DRS ☹

My plan is to upgrade the firmware and BIOS etc. via LCM on the smaller of the two clusters. I have freed up one host so there aren’t any VMs on it apart from the CVM. I take it I can just run LCM to do the firmware updates on that host and let it do its thing without having to do anything else; it’ll shut down the CVM, put the host in maintenance mode, and restart it? Once that is complete I want to upgrade ESXi on that host. I have checked the compatibil
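For the firmware step, LCM does handle the CVM shutdown and host maintenance mode itself. For the manual ESXi upgrade (no DRS, so guest VMs have to be moved by hand first), one common per-host order looks like this (hedged sketch: the depot and profile names are placeholders, and your exact procedure may order the CVM shutdown and maintenance mode differently):

    # On the host's CVM, once no guest VMs remain on the host:
    cvm_shutdown -P now
    # On the ESXi host over SSH:
    esxcli system maintenanceMode set --enable true
    esxcli software profile update -d <depot-zip-or-url> -p <profile-name>
    esxcli system shutdown reboot --reason "ESXi upgrade"

The cvm_shutdown script shuts the CVM down gracefully so storage redirects cleanly before the host goes down.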
Repair Host Boot Device failed
Hi, I've replaced the SATADOM of one of the four hosts in my NX-1050. After starting from the phoenix.iso, the repair host disk task has been stuck like this for 24 hours. host_bootdisk_repair_status says it's at the "sm_trigger_imaging" state. The first thing I would like to do is kill the current repair job, since it's most certainly hanging. How can I do that, and what further steps should I take to try and get the host back online? Thank you.
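To see the stuck task itself, the ergon CLI on any CVM lists in-flight tasks (hedged: aborting a task is normally done with Nutanix Support via a support-guided script rather than self-service, since killing the wrong task can leave the node in a worse state):

    ecli task.list include_completed=false   # find the repair task and its UUID
    ecli task.get <task-uuid>                # full detail on the stuck task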
IPMI port vulnerability (VNC protocol enabled)
Hi there. A security check on my Nutanix clusters (8 nodes) revealed that the IPMI port on every node is vulnerable because the VNC protocol is used to access them through port 5900. Issue: "...Virtual Network Computing (VNC) provides remote users with access to the system it is installed on. If this service is compromised, the user can gain complete control of the system...." Remediation: "...Remove or disable this service..." What are my options? Is it possible to disable these ports without affecting the performance of the Nutanix cluster? Thanks in advance.
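IPMI is an out-of-band management interface, so disabling its VNC listener does not touch the cluster's data path or performance. On Supermicro-based NX nodes the port can usually be turned off in the BMC web UI (Configuration > Port, untick the VNC/iKVM port). A quick way to verify afterwards (hedged: nmap run from any management host):

    nmap -p 5900 <ipmi-ip>    # should report the port closed/filtered after the change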
Nutanix CE (latest) installation on Intel NUC Gen6 -- CVM issues
Hi, I tried googling but can't find anything to support me. Model: NUC7i5DNHE, 2x SSD (500 GB + 250 GB), 1x 16 GB Cruzer Fit USB. The AHV installation works and I create a single-node cluster. After rebooting I get SSH/ping access to AHV and can log in with root. Problem: the CVM does not answer to ping/web. Tried SSH/ping from AHV but do not get an answer:

[root@NTNX-eXXXXX-A ~]# ping 10.255.1.11
PING 10.255.1.11 (10.255.1.11) 56(84) bytes of data.
From 10.255.1.10 icmp_seq=1 Destination Host Unreachable
From 10.255.1.10 icmp_seq=2 Destination Host Unreachable
From 10.255.1.10 icmp_seq=3 Destination Host Unreachable

I've tried re-installing multiple times but get the same issue. Is the problem the 16 GB Cruzer Fit USB? It's the bootable media. The CVM gets the 500 GB SSD disk, data gets the 250 GB disk.

[root@NTNX-eXXXXX-A ~]# virsh list
 Id Name State
----------------------------------------------------
[root@NTNX-eXXXXX-A ~]#

Tried the logs from other posts but not able to get any outputs. Anyone got
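The empty virsh list output is the real clue: libvirt shows no running domains at all, so the CVM is simply not up, which is why ping returns Destination Host Unreachable. A minimal check on the AHV host (hedged: the CVM domain name varies per install; it normally starts with NTNX- and ends in -CVM):

    virsh list --all                  # include shut-off domains
    virsh start NTNX-<serial>-A-CVM   # example name; use whatever list --all shows

If virsh list --all is empty too, the CVM was never defined at all, which points back at the install media or disk layout rather than networking.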
Setting up an AHV cluster on 1G switches (3-node and 2-node)
Hello, I am looking for guidance on setting up a 3-node and a 2-node AHV cluster on 1G switches. Our workloads are low. The official response is to use a 10G switch fabric. If I opt for 2 nodes, can I connect the servers back to back for distributed storage? Can the data network go over 1G? I plan to procure 2 servers with 2x 10G NICs for distributed storage and 2x 1G NICs for data. Thanks, Ajay
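Whatever the final design, uplink mapping on AHV is managed with manage_ovs, so separating storage (10G) and data (1G) traffic comes down to which physical NICs back which bridge. A hedged sketch from the CVM (bridge and interface names are examples; check show_uplinks first):

    manage_ovs show_uplinks                                        # current NIC-to-bond mapping
    manage_ovs --bridge_name br0 --interfaces 10g update_uplinks   # pin br0 to the 10G pair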
110, 220. Whatever it takes?
I was looking for confirmation of a statement that I recently heard: that the best practice for the power supply voltage for a block is 220 volts, and that if 110 volts is used, it is less likely that one power supply could handle the load if the other power supply failed. Thank you.
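The logic behind that statement is plain I = P / V arithmetic, plus the fact that many server power supplies are derated at low-line (100-127 V) input (hedged: the wattage below is illustrative; check the label on your block's PSUs):

    1100 W / 220 V = 5 A
    1100 W / 110 V = 10 A

At 110 V a PSU must push twice the current for the same wattage, and supplies that deliver their full rating at 200-240 V often deliver noticeably less at 110 V, so a single surviving PSU may no longer cover a fully loaded block.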
Storage access for specific VM on a Nutanix VMware edition
We have a shiny new VMware NTNX environment, yeeh! Question: I'd like to be able to measure the virtual disk metrics for a specific workload (backup), preferably without any layer in between, so I can really see what the workload does. The idea is to use a virtual machine on NTNX as a backup server and see what kind of parameters I need for the backup storage. Is it possible to present a container on the CVM as an SMB share and write my backup to it? Or would it be best to use the Windows NFS client to access an export on the CVM? Background: I'd like to purchase a separate storage platform for the backups. But since I'm not really sure what kind of IO pattern and throughput I need to sustain, I'd like to test and measure before we buy.
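On an ESXi-based cluster the containers are exposed over NFS (SMB is the Hyper-V path), so the Windows NFS client route is the one that fits here. A hedged sketch (IPs and names are placeholders; whitelist syntax can differ between AOS versions):

    # On a CVM: allow the backup VM to mount the container directly
    ncli cluster add-to-nfs-whitelist ip-subnet-masks="<backup-vm-ip>/255.255.255.255"
    # On the Windows backup VM (NFS client feature installed):
    mount -o nolock \\<cluster-vip>\<container-name> Z:

That said, Prism's per-VM and per-vDisk metrics already show IOPS, throughput, and latency for a VM on a normal datastore, which may be enough for sizing.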
Error message about disabled VM autostart & vSphere HA cluster
Hey guys, we are busy building a Nutanix 18.104.22.168 infrastructure with vSphere 5.5. We have six nodes in two blocks, and the design consists of 2 Nutanix clusters, each with its own vSphere cluster. HA is enabled on each cluster. We are getting an error on both Nutanix clusters in the Prism UI: "Virtual Machine auto start is disabled on the hypervisor of Controller VM". Nutanix best practices recommend enabling "Start and Stop Virtual Machines with the system" on each host and moving the Nutanix CVM startup to Automatic startup. But according to this VMware KB, the automatic startup feature is disabled when moving an ESXi host into an HA-enabled cluster, just like the error message says. Is this expected behavior? Can this error message be disabled, since it is just not valid in combination with VMware HA-enabled clusters? Or is this still a configuration error?
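You can see exactly what the alert is reacting to by dumping the host's autostart manager state (hedged: run over SSH on each ESXi host):

    vim-cmd hostsvc/autostartmanager/get_defaults        # global autostart policy
    vim-cmd hostsvc/autostartmanager/get_autostartseq    # per-VM autostart entries

If HA has disabled autostart, that is the behavior the VMware KB describes, and the Prism alert is flagging a real (if expected) state rather than a misconfiguration.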
Data Resiliency Status shows error
Hello all, for one of the customers I did a fresh install, and on that one, when I check the data resiliency status, it shows me that for Extent Group it is 0. The explanation is as follows: "Based on placement of extent group replicas the cluster can tolerate a maximum of 0 node failure." Do you know what this can be about? There is no error or warning on the cluster. Cheers.
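The same information is available from the CLI, which sometimes gives a clearer per-component breakdown than the dashboard widget (hedged: on a fresh install the extent-group value can legitimately read 0 for a while until initial replica placement and Curator scans settle):

    ncli cluster get-domain-fault-tolerance-status type=node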
Use of LACP for Nutanix Network Configuration
Hello folks, what are the advantages and disadvantages of using LACP to improve east/west traffic and prevent unnecessary north/south traffic? Are there any documents on its use for a Nutanix installation? Is the use of LACP very common for Nutanix installations? Are there folks kicking themselves for not implementing LACP? Thank you for your opinions.
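For reference, this is what the AHV side of an LACP bond typically looks like; the switch ports must be configured to match, and changing bond modes on a live host is disruptive, so hosts are normally done one at a time in maintenance mode (hedged sketch: bridge, bond, and interface names are examples):

    manage_ovs --bridge_name br0 --bond_name br0-up --bond_mode balance-tcp --lacp_mode fast --interfaces 10g update_uplinks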
CVM did not automatically start after node power-on
Hello folks, recently I was running some resiliency testing, powering down a node using IPMI (Power Off Server - Immediate) to ensure VMware High Availability worked as expected for a simulated power outage. When I finished, I powered the node back on via IPMI. I was surprised the CVM did not automatically start. Is it expected that the CVM would not restart when a node is powered back on? Thank you for your help.
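This ties back to the autostart thread above: with HA enabled, ESXi's VM autostart is disabled, so nothing brings the CVM up automatically after a hard power-off. Starting it by hand from the host (hedged: the grep pattern assumes the default NTNX naming):

    vim-cmd vmsvc/getallvms | grep -i ntnx   # find the CVM's vmid
    vim-cmd vmsvc/power.on <vmid>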
IPMI Ports Show "No Connect" In BIOS
Hello, a few of the IPMI ports on a new installation (3K and 6K series) show "no connect" in the BIOS. The ones that are working show "Dedicated LAN." Our networking folks have verified the ports are configured as access ports and are active with the correct VLAN ID. We will be swapping the ports on the switch between the working and non-working IPMI ports to confirm whether the issue is on the switch or the node side. Could there be some BIOS settings we may have missed? Any thoughts would be greatly appreciated. Thank you.
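One thing worth checking before the port swap: on Supermicro boards the BMC LAN interface can be set to dedicated, shared, or failover, and a node left in shared/failover mode can report oddly in the BIOS. These raw commands are Supermicro-specific (hedged; run from the host's local ipmitool or any host that can reach the BMC):

    ipmitool raw 0x30 0x70 0x0c 0      # query mode: 00=dedicated, 01=shared, 02=failover
    ipmitool raw 0x30 0x70 0x0c 1 0    # force dedicated LAN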