Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,101 Topics
- 2,924 Replies
Hi there,

We are currently deploying (and testing) Karbon as the K8s orchestration platform for all our Nutanix platforms (worldwide). I have failed installation attempts from Prism Central.

As a side note, we are using a site-to-site VPN from the centralised Prism Central, and I can reach the remote K8s VLAN from the distant Nutanix deployment. The reachability tests were done with a small VM belonging to the K8s VLAN of the testing node and via the private CIDR (bidirectional tests), but utilizing the same encrypted VPN channel.

This is what I’m seeing in karbon_core.out (PCVM):

2021-10-13T21:17:12.687Z ssh.go:153: [DEBUG] [k8s_cluster=RGS-PA-K8-STAGING] On 10.20.25.130:22 executing: docker plugin inspect nutanix
2021-10-13T21:17:12.825Z ssh.go:166: [WARN] [k8s_cluster=RGS-PA-K8-STAGING] Run cmd failed: Failed to run command: on host(10.20.25.130:22) cmd(docker plugin inspect nutanix) error: "Process exited with status 1", output: "Error: No such plugin: nutanix\n\n"
2021-10-13T21:17:12.825Z sshutils.go:44: [ERROR] [k8s_clust
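The log shows the Karbon deployer SSHing to the worker node (10.20.25.130) and finding the Nutanix Docker volume plugin missing. A minimal diagnostic sketch, assuming you can reach that node over SSH (the user name and key path below are placeholders), is to re-run the same check by hand and see whether the plugin ever got installed:

```
# Re-run the check Karbon performs (user/key are placeholders)
ssh -i ~/.ssh/karbon_node_key nutanix@10.20.25.130 'docker plugin ls'
ssh -i ~/.ssh/karbon_node_key nutanix@10.20.25.130 'docker plugin inspect nutanix'
```

If `docker plugin ls` comes back empty, the plugin install itself failed earlier in the deployment; in that case it is worth verifying that the node can reach the cluster's storage/data-services endpoints across the VPN, since the volume plugin install depends on that connectivity (an assumption worth checking against the Karbon requirements for your version).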
We are considering AHV for our environment. I notice that the OEM partner product list at https://www.nutanix.com/partners/oem has a fixed set of disks per model. Most of the hardware on the list is capable of taking more disks; for example, Fujitsu servers can hold 12 x 3.5" + 4 x 2.5" (on the rear). Can we utilize all disk slots in a configuration like 12 x 4TB NL-SAS + 4 x 1.92TB 2.5" SSD, or 4 x 1.6TB NVMe?
Hi All,

We’ve got a cluster on which we recently did not renew support/maintenance, as we are in the process of decommissioning it; however, we still have some workloads that are lagging behind being decommissioned.

I noticed today that the license expiry is approaching (November 2021). What will happen once the license expires?

Cheers
Jason
Hi, I’m new to Nutanix with ESXi. I recently deployed a 4-node Nutanix cluster with ESXi. I’ve created containers and added the nodes to vCenter, but I have not added the vCenter to Prism. I checked Prism and there’s a warning saying the hosts are not connected to vCenter. Has anyone encountered this issue before?
I'm looking for some advice. My company recently purchased a used 3460-G4. We're a professional services firm and we want to have it for our lab so we can stand up stuff like ERA, CALM, etc. and bang around on it. You know, lab stuff!

We have been trying to Foundation (220.127.116.11 & 18.104.22.168) the block, but I am running into a problem with Foundation failing when trying to mount the Phoenix image on the nodes. Here's how things currently stand:
- BIOS has been upgraded to the latest recommended version on the Nutanix Support site (G4G5T6.0).
- BMC firmware has been upgraded to the latest recommended version on the Nutanix Support site (3.64).
- Each node has 2x SSDs, which the system appears to recognize (I ran an ESXi installer on one of the nodes to test; the installer saw the drives).
- Each node has 64GB of RAM. The RAM is confirmed to be compatible according to SuperMicro's site.
- The motherboard is the X10DRT-P from SuperMicro.
- IPMI has been set on each node and I can log into the IPMI mgmt page.

At first I was wo
Hi,

I've replaced the SATADOM of one of the four hosts in my NX-1050. After starting from the phoenix.iso, the "repair host disk" task has been stuck like this for 24 hours. host_bootdisk_repair_status says it's at the “sm_trigger_imaging” state.

The first thing I would like to do is kill the current repair job, since it's almost certainly hung. How can I do that, and what further steps should I take to try and get the host back online? Thank you.
Hi all!

I’m looking for an HCL that details the exact firmware versions of specific components of a server, to make sure that we are in the clear before we install or upgrade Nutanix. As far as I can tell there isn’t one to be found, at least not for someone who doesn’t have an account for the support portal (and I find it pretty odd to “hide” the HCL). I know there are HCLs to be found outside of the support portal, but those are extremely generic.

Is there an HCL that is this detailed, or is it only through trial and error that we can figure out whether we have the correct firmware?

Thanks.
Hi, I have a cluster of 2 blocks (2 nodes each). I just added 2 DIMMs (2 x 32GB) to one of my nodes, in slots P1-DIMMC1 and P1-DIMMF1.

Before adding the DIMMs / After adding the DIMMs (screenshots)

After that I ran NCC and it shows a warning like this:

Alert from Nutanix Cluster (screenshot)

Did I do something wrong with the steps? In Prism the total memory has increased, so I thought it was fine.
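If you want to double-check from the command line, a minimal sketch (assuming an AHV host, where dmidecode is available; the slot names are the ones from the post) is to confirm which slots the host actually sees populated and then re-run the NCC checks after any reseating:

```
# On the AHV host: list DIMM slots and the size reported in each
dmidecode -t memory | egrep "Locator:|Size:"

# From any CVM: re-run the full NCC health checks
ncc health_checks run_all
```

DIMM population warnings from NCC usually relate to channel-balancing rules for the platform, so comparing the populated slots against the memory population guide for your block model is a reasonable next step.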
Hi all,

Running AOS 5.10.5, I have changed my cluster from RF2 to RF3, including my main container and the “SelfServiceContainer”. However, a week after the change I’m still seeing this error regarding extent groups. I believe this is because NutanixManagementShare is still RF2 (with no ability to change it, since it's a system-managed container), or is there some other reason?
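A quick way to see which containers are still at RF2 is to list them from a CVM; a minimal sketch (ncli output labels may vary slightly by AOS version):

```
# List all containers with their replication factor (run from any CVM)
ncli ctr ls | egrep -i "Name|Replication Factor"
```

If NutanixManagementShare (or another system container) is the only one left at RF2, changing it generally has to go through Nutanix Support rather than Prism or ncli, as far as I know.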
Hi there.

A security check on my Nutanix clusters (8 nodes) revealed that the IPMI port on every node is flagged as vulnerable because the VNC protocol is used to access them through port 5900.

Issue: "...Virtual Network Computing (VNC) provides remote users with access to the system it is installed on. If this service is compromised, the user can gain complete control of the system...."

Remediation: "...Remove or disable this service..."

What are my options? Is it possible to disable these ports without affecting the operation of the Nutanix cluster?

Thanks in advance.
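To verify the finding yourself (and to re-check after any BMC-side change), a minimal sketch using nmap against the IPMI addresses; the IP range below is just a placeholder for your 8 BMC IPs:

```
# Check whether the BMC VNC/iKVM port (5900) is actually open on each IPMI IP
nmap -p 5900 10.0.0.101-108
```

Whether the port can safely be disabled (typically via the BMC web UI's port-configuration page on Supermicro-based nodes) is worth confirming with Support first, since the built-in remote console may depend on it.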
Hi guys, a non-technical question about a customization option that I can’t find anywhere. I have two Prism Central instances used for synchronous replication. To identify them at first sight I have customized the title and colors of the login page. It was a bit disappointing not to find the customized name on the browser tab, or at least in the menu bar title. Even after editing the cluster params to give a name to the PC cluster (only one VM for now), the name stays “Prism Central” on the tab and “Prism” in the menu bar.

Considering I’m working on synchronous replication, and that all the objects involved have the same name (Protection Policies, Recovery Plans, categories), I sometimes find myself working on the wrong Prism Central. It would be easier to have the PC name always visible. Any idea? Thanks in advance.

il_gian

Tab and menu bar title (screenshot)
Hi. Tried googling but can’t find anything to support me.

Model: NUC7i5DNHE
2x SSD (500GB + 250GB)
1x 16GB Cruzer Fit USB

The AHV installation works and I create a single-node cluster. After rebooting I get SSH/ping access to AHV and can log in with root.

Problem: the CVM does not answer to ping/web. Tried SSH/ping from AHV but I don't get an answer:

[root@NTNX-eXXXXX-A ~]# ping 10.255.1.11
PING 10.255.1.11 (10.255.1.11) 56(84) bytes of data.
From 10.255.1.10 icmp_seq=1 Destination Host Unreachable
From 10.255.1.10 icmp_seq=2 Destination Host Unreachable
From 10.255.1.10 icmp_seq=3 Destination Host Unreachable

I’ve tried re-installing multiple times but get the same issue. Is the problem the 16GB Cruzer Fit USB? It’s the bootable media. The CVM gets the 500GB SSD; data gets the 250GB disk.

[root@NTNX-eXXXXX-A ~]# virsh list
 Id    Name                           State
----------------------------------------------------

[root@NTNX-eXXXXX-A ~]#

Tried the logs from other posts but not able to get any outputs. Anyone got
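The empty `virsh list` output means no VM is running on the host at all, so the first thing to check is whether the CVM domain exists and is simply shut off. A minimal sketch from the AHV host (the CVM name below is only an example; use whatever `virsh list --all` actually reports):

```
# Show all defined VMs, including ones that are shut off
virsh list --all

# If the CVM is listed but shut off, try starting it (name is an example)
virsh start NTNX-eXXXXX-A-CVM
```

If no CVM domain is defined at all, the installer never finished deploying the CVM, which would point back at the install media or the unsupported NUC hardware rather than networking.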
We have a shiny new VMware NTNX environment, yeeh!

Question: I'd like to be able to measure the virtual disk metrics for a specific workload (backup), preferably without any layer in between so I can really see what the workload does. The idea is to use a virtual machine on NTNX as a backup server and see what kind of parameters I need for the backup storage. Is it possible to present a container on the CVM as an SMB share and write my backups to it? Or would it be best to use the Windows NFS client to access an export on the CVM?

Background: I'd like to purchase a separate storage platform for the backups. But since I'm not really sure what kind of IO pattern and throughput I need to sustain, I'd like to test and measure before we buy.
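For the NFS-client route, a minimal sketch assuming the client's IP has been added to the cluster's filesystem whitelist in Prism, the Windows "Client for NFS" feature is enabled, and that 10.0.0.50 and backup-ctr are placeholder values for your cluster IP and container name:

```
rem Windows command prompt: mount the container's NFS export as drive Z:
mount -o anon \\10.0.0.50\backup-ctr Z:
```

From there, the per-container stats in Prism (or whatever IO tool you run on the backup VM) can be used to observe throughput and IO pattern during a test backup.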
Hey guys,

We are busy building a Nutanix 22.214.171.124 infrastructure with vSphere 5.5. We have six nodes in two blocks, and the design consists of 2 Nutanix clusters, each with its own vSphere cluster. HA is enabled on each cluster. We are getting an error on both Nutanix clusters in the Prism UI: "Virtual Machine auto start is disabled on the hypervisor of Controller VM".

Nutanix best practices recommend enabling "Start and Stop Virtual Machines with the system" on each host, and moving the Nutanix CVM startup to Automatic startup. But according to this VMware KB, the automatic startup feature is disabled when moving an ESXi host into an HA-enabled cluster, just like the error message says. Is this expected behavior? Can this error message be disabled, since it is just not valid in combination with VMware HA-enabled clusters? Or is this still a configuration error?
Hello All,

For one of our customers I did a fresh install, and on that cluster, when I check the data resiliency status, it shows 0 for Extent Group. The explanation is as follows: "Based on placement of extent group replicas the cluster can tolerate a maximum of 0 node failure." Do you know what this can be about? There is no error or warning on the cluster.

Cheers.
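If I recall correctly, the per-component fault tolerance can also be pulled from a CVM with ncli, which sometimes shows more detail than the Prism widget; the syntax below is from memory, so treat it as an assumption and check the `ncli cluster` help on your AOS version:

```
# From any CVM: show fault tolerance per component for the node domain
ncli cluster get-domain-fault-tolerance-status type=node
```

On a brand-new cluster with little or no data written yet, the extent-group figure can sit at 0 until data exists and Curator has completed a scan, so it may resolve itself; that is an assumption worth verifying before raising a case.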
Hello, A few of the IPMI ports on a new installation (3K and 6K series) show "no connect" in the BIOS. The ones that are working show "Dedicated LAN." Our networking folks have verified the ports as access ports, and are active with the correct VLAN ID configured. We will be swapping the ports on the switch between the working/non-working IPMI ports to confirm if the issue is on the switch or node side. Could there be some BIOS settings we may have missed? Any thoughts would be greatly appreciated. Thank you.
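If the node side needs checking, a minimal sketch from the hypervisor host using ipmitool (assuming ipmitool is present on the host, as it normally is on AHV; channel 1 is the usual LAN channel but may differ on some platforms):

```
# Show the BMC LAN configuration: IP source, address, VLAN and MAC
ipmitool lan print 1
```

That at least confirms whether the BMC itself has an IP and VLAN configured, independent of what the BIOS screen reports.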