Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform.
- 1,125 Topics
- 3,037 Replies
We just purchased Nutanix (2 appliances, 3 nodes each), and we are planning to run VMware Enterprise Plus on them. I have a few questions, and I would be more than happy to get answers:

1. If we use VMware, will all Nutanix functions still apply? (For example, if one hard disk fails, will the data be transferred to another disk?)
2. If I'm using 3 nodes and utilizing all of the CPU, memory, and disk capacity on all three, and one node goes down, how will the VMs on that node move to the other nodes when they are already at full capacity?
3. If one node fails, will the VMware VMs on that node be automatically moved to another node?
4. In general, is there any difference between using Acropolis (AHV) and VMware? Will we get the same functions on Nutanix?
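Question 2 comes down to N+1 sizing: for VMs to restart after a node failure, steady-state usage must fit on the surviving nodes, i.e. stay at or below (N-1)/N of total capacity. A minimal sketch of the arithmetic, using illustrative numbers that are not from the post:

```shell
# N+1 headroom check (illustrative figures, memory only).
nodes=3
node_mem_gb=512                                # assumed per-node memory
total=$(( nodes * node_mem_gb ))               # raw cluster capacity
max_usable=$(( (nodes - 1) * node_mem_gb ))    # ceiling with one node down
used=1400                                      # example: cluster nearly full

echo "total=${total}GB, safe ceiling with one node down=${max_usable}GB"
if [ "$used" -le "$max_usable" ]; then
  echo "N+1 OK: VMs can restart on the surviving nodes"
else
  echo "Over-committed: $(( used - max_usable ))GB of VMs cannot fail over"
fi
```

In other words, a fully utilized 3-node cluster cannot absorb a node failure; roughly one node's worth of capacity has to stay free.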
Hi there,

We are currently deploying (and testing) Karbon as the Kubernetes orchestration platform for all our Nutanix platforms (worldwide). I have failed installation attempts from Prism Central.

As a side note, we are using a site-to-site VPN from the centralised Prism Central, and I can reach the remote K8s VLAN from the distant Nutanix deployment. The reachability tests were done with a small VM belonging to the K8s VLAN testing node and via private CIDR (bidirectional tests), but using the same encrypted VPN channel.

This is what I'm seeing in karbon_core.out (PCVM):

2021-10-13T21:17:12.687Z ssh.go:153: [DEBUG] [k8s_cluster=RGS-PA-K8-STAGING] On 10.20.25.130:22 executing: docker plugin inspect nutanix
2021-10-13T21:17:12.825Z ssh.go:166: [WARN] [k8s_cluster=RGS-PA-K8-STAGING] Run cmd failed: Failed to run command: on host(10.20.25.130:22) cmd(docker plugin inspect nutanix) error: "Process exited with status 1", output: "Error: No such plugin: nutanix\n\n"
2021-10-13T21:17:12.825Z sshutils.go:44: [ERROR] [k8s_clust
Hi, I'm new to the forum. We recently had our Nutanix servers delivered, and we are planning to do the install in the coming weeks. We haven't decided which hypervisor to go with yet; it's either going to be Hyper-V or AHV. I just wanted to hear about the experiences people have had. We currently use Hyper-V, so we are comfortable with that, but all the nice features seem to be heading to AHV/Prism. I would be grateful for your thoughts and comments. Thank you.
We are considering AHV for our environment. I notice that the OEM partner product list at https://www.nutanix.com/partners/oem has a fixed set of disks per model, yet most of the hardware in the list can physically take more disks. For example, the Fujitsu servers can hold 12 x 3.5" plus 4 x 2.5" (on the rear). Can we utilize all disk slots in a configuration like 12 x 4TB NL-SAS + 4 x 1.92TB 2.5" SSD, or 4 x 1.6TB NVMe?
Hi All,

We've got a cluster on which we recently did not renew support/maintenance, as we are in the process of decommissioning it; however, we still have some workloads which are lagging behind being decommissioned.

I noticed today that the license expiry is approaching (November 2021). What will happen once the license expires?

Cheers
Jason
I have a problem with the OS and Distributed Switches in vCenter. When I migrate the VM network from a standard switch to a distributed switch, I get the following error:

Detailed information for cvm_startup_dependency_check:
Node x.x.x.x: FAIL: .dvsData directory is not persistent yet
Refer to KB 2050 (http://portal.nutanix.com/kb/2050) for details on cvm_startup_dependency_check
PLUGIN RESULTS
/health_checks/hypervisor_checks/cvm_startup_dependency_check [ FAIL ]

I can see that when I reboot the ESXi host and start the CVM, the CVM loses its network config, and I have to assign the network adapter to the CVM again. I don't know how to configure the distributed switch in vCenter to make it persistent.
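For reference, the failing check named in the output above can be re-run on its own from a CVM, which is a quick way to confirm whether a fix took effect. The command form is assumed from the check path in the NCC output, and the datastore path is a placeholder:

```shell
# Re-run only the failing NCC check (path taken from the output above).
ncc health_checks hypervisor_checks cvm_startup_dependency_check

# KB 2050 ties this failure to the dvPort persistence data under .dvsData
# on the host's datastore; checking that the directory exists (placeholder
# datastore name) helps confirm whether the dvPort binding is persistent.
ls /vmfs/volumes/<datastore>/.dvsData/
```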
Hi, I'm new to Nutanix with ESXi. I recently deployed a 4-node Nutanix cluster with ESXi. I've created containers and added the nodes to vCenter, but I have not added vCenter to Prism. I checked in Prism and there's a warning saying the hosts are not connected to vCenter. Has anyone encountered this issue before?
I'm looking for some advice. My company recently purchased a used 3460-G4. We're a professional services firm and we want to have it for our lab so we can stand up stuff like ERA, CALM, etc. and bang around on it. You know, lab stuff!

We have been trying to Foundation (18.104.22.168 & 22.214.171.124) the block, but I am running into a problem with Foundation failing when trying to mount the Phoenix image on the nodes. Here's how things currently stand:

- BIOS has been upgraded to the latest recommended version on the Nutanix Support site. (G4G5T6.0)
- BMC firmware has been upgraded to the latest recommended version on the Nutanix Support site. (3.64)
- Each node has 2x SSDs, which the system appears to recognize (I ran an ESXi installer on one of the nodes to test; the installer saw the drives).
- Each node has 64GB of RAM. The RAM is confirmed to be compatible according to SuperMicro's site.
- The motherboard is the X10DRT-P from SuperMicro.
- IPMI has been set on each node and I can log in to the IPMI management page.

At first I was wo
Here's my problem. Nodes A and B are synchronizing to the wrong host; node C is synchronizing to the right host. This is the behavior I see on the CVMs. When I run "hostssh ntpq -pn", the hypervisors (AHV) report the correct NTP server. How do I bring nodes A and B back in line with what they should be? I tried manually correcting their ntp.conf files with the correct IP and restarted the ntpd service. No change, and eventually the ntp.conf files reverted to the wrong settings. Not sure how to wrestle this one to the ground.
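On Nutanix CVMs, /etc/ntp.conf is managed by the cluster services, which regenerate it from the cluster-wide NTP server list; that would explain why the manual edits revert. The usual approach is to correct the list cluster-wide with ncli rather than editing files per node. A sketch, with the exact ncli subcommand syntax recalled from the documentation (treat it as an assumption) and hypothetical IPs:

```shell
# From any CVM: show the cluster-wide NTP list that gets pushed to every
# node (this is what keeps overwriting the hand-edited ntp.conf).
ncli cluster get-ntp-servers

# Remove the wrong entry and add the right one (IPs are hypothetical).
ncli cluster remove-from-ntp-servers ntp-servers=10.0.0.99
ncli cluster add-to-ntp-servers ntp-servers=10.0.0.10

# After a few minutes, check sync status across all CVMs.
allssh ntpq -pn
```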
Hi guys, I have some questions. First, can we enable Flow on the ESXi hypervisor on Nutanix? I had the same question about Calm on ESXi, and the answer I got, based on a Nutanix KB, is that we can enable Calm on ESXi. But how about Flow? Can we enable/use Flow on top of the ESXi hypervisor? And just curious, does anyone have a comparison table of Nutanix features running on AHV, ESXi, and Hyper-V?
Hi all!

I'm looking for an HCL that details the exact firmware versions of specific server components, to make sure that we are in the clear before we install or upgrade Nutanix.

As far as I can tell there isn't one to be found, at least not for someone who doesn't have an account for the support portal (and I find it pretty odd to "hide" the HCL). I know there are HCLs to be found outside of the support portal, but those are extremely generic.

Is there an HCL that is this detailed, or is it only through trial and error that we can figure out whether we have the correct firmware? Thanks.
Hi, I have a cluster of 2 blocks (2 nodes each). I just added 2 DIMMs (2 x 32GB) to one of my nodes, in slots P1-DIMMC1 and P1-DIMMF1.

[screenshot: before adding the DIMMs]
[screenshot: after adding the DIMMs]

After that I ran NCC, and it shows a warning like this:

[screenshot: alert from the Nutanix cluster]

Did I do something wrong in the steps? In Prism the total memory has increased, so I thought it was fine.
Hi all,

Running AOS 5.10.5, I have changed my cluster from RF2 to RF3, including my main container and the SelfServiceContainer. However, a week after the change I'm still seeing this error regarding extent groups. I believe this is because NutanixManagementShare is still RF2 (with no ability to change it, since it's a system-managed container), or is there some other reason?
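One way to confirm which containers are still RF2 is to list the per-container replication factor from a CVM. A sketch; the exact ncli output labels vary by AOS version, so the grep pattern is an assumption:

```shell
# List all containers and pull out name and replication factor; any
# container still reporting RF 2 (e.g. NutanixManagementShare) would
# account for lingering RF2 extent groups after the RF3 conversion.
ncli container list | grep -iE "Name|Replication Factor"
```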
Hi there.

A security check on my Nutanix clusters (8 nodes) revealed that the IPMI port on every node is vulnerable because the VNC protocol is used to access them through port 5900.

Issue: "...Virtual Network Computing (VNC) provides remote users with access to the system it is installed on. If this service is compromised, the user can gain complete control of the system..."
Remediation: "...Remove or disable this service..."

What are my options? Is it possible to disable these ports without affecting the performance of the Nutanix cluster? Thanks in advance.
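Whatever remediation route is taken (on Supermicro-based BMCs of this generation the iKVM/VNC service ports can typically be changed or disabled in the IPMI web UI's port settings), it is worth re-probing the port afterwards to confirm the finding is closed. A sketch, with a hypothetical BMC address:

```shell
# Probe the VNC port on one BMC (address is hypothetical).
nmap -p 5900 192.0.2.50

# Or, without nmap, a plain TCP connect test with a 3-second timeout.
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.0.2.50/5900' \
  && echo "port 5900 open" || echo "port 5900 closed/filtered"
```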
Hi guys, a non-technical question about a customization option that I can't find anywhere. I have two Prism Central instances used for synchronous replication. To identify them at first sight, I have customized the title and colors of the login page. It was a bit disappointing not to find the customized name on the browser tab, or at least in the menu bar title. Even after editing the cluster parameters to give the PC cluster a name (only one VM for now), the name stays "Prism Central" on the tab and "Prism" in the menu bar.

Considering I'm working on synchronous replication, and that all the objects involved have the same names (protection policies, recovery plans, categories), sometimes I find myself working on the wrong Prism Central. It would be easier to have the PC name always visible. Any ideas? Thanks in advance.

il_gian

[screenshot: tab and menu bar title]
Hi. I tried googling but can't find anything to help me.

Model: NUC7i5DNHE
2x SSD (500GB + 250GB)
1x 16GB Cruzer Fit USB

The AHV installation works and I create a single-node cluster. After rebooting I get SSH/ping access to AHV and can log in with root.

Problem: the CVM does not answer to ping/web. I tried SSH/ping from AHV but don't get an answer:

[root@NTNX-eXXXXX-A ~]# ping 10.255.1.11
PING 10.255.1.11 (10.255.1.11) 56(84) bytes of data.
From 10.255.1.10 icmp_seq=1 Destination Host Unreachable
From 10.255.1.10 icmp_seq=2 Destination Host Unreachable
From 10.255.1.10 icmp_seq=3 Destination Host Unreachable

I've tried re-installing multiple times, with the same issue. Is the problem the 16GB Cruzer Fit USB? It's the bootable media. The CVM gets the 500GB SSD; data gets the 250GB disk.

[root@NTNX-eXXXXX-A ~]# virsh list
 Id    Name                           State
----------------------------------------------------
[root@NTNX-eXXXXX-A ~]#

I tried the log commands from other posts but was not able to get any output. Anyone got
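One hypothetical first step here: the empty `virsh list` output above only shows running domains, so before suspecting the boot media or networking it is worth checking whether the CVM domain exists but is shut off. The CVM name below is a placeholder based on the host name in the post:

```shell
# Show all domains, including ones defined but shut off; an empty list
# even with --all would mean the CVM was never created by the installer.
virsh list --all

# If the CVM shows as "shut off", try starting it by the name reported
# above (the name here is a placeholder).
virsh start NTNX-eXXXXX-A-CVM
```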