Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
Hi, I'm trying to install a 3-node Lenovo HX3321, but I ran into this issue: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/8553a4c3-ec33-4f00-ba63-20dda5f8508d.png[/img] Every node produces the following log: [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/34211d95-ae4a-4d6d-90dd-e7be72288397.png[/img] It keeps waiting for the md arrays to resync, but the process never continues and eventually fails. I'm trying to install AOS 5.10.6 with Foundation 4.4.1. Do you have any idea how I could fix this issue? Regards,
Hi, in the vSphere administration guide it's mentioned to use VMware SRM along with the Nutanix SRA for remote replication. If Nutanix has its own replication method, what is the significance of SRM? Thanks in advance. Regards, rj
Hello, we have a cluster with 4 nodes and I will perform the update from 22.214.171.124 to 126.96.36.199. A few minutes ago I noticed that my cluster has a disk space problem and the resiliency widget is red with a “not resilient” message. In this case, will I have a problem when performing the AOS upgrade? I don't have time to resolve the space issue right now… Please send me any information.
Would I be able to create storage containers in my cluster for the purpose of "pinning" VMs only to hosts that are assigned to the containers - restricting VMs to the hosts that belong to the container? The need is to control VM migration during Software Upgrades due to multicasting issues with the hosts.
Good day Nutants, I have a situation that I'd really appreciate your help in resolving. After successfully migrating a PD to a remote site and successfully powering on the VM at the remote site, when I attempt to migrate the PD back to the main (prod) site, it fails with the error: critical Protection Domain XXXXXXX activate/deactivate failed, and the cause is listed as "Protection domain cannot be activated or migrated". Any pointers? This PD has one CG with one VM in it (Windows Server 2012 with NGT installed and the NGA service running).
The instructions for shutting down a cluster are to use “sudo shutdown -P now” on the CVMs in the cluster to shut each one down, but the command only works on the first CVM. The command errors out on the rest of the CVMs because they can’t reach the first CVM that was shut down. Am I missing something?
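One workaround worth checking (a sketch only, assuming passwordless SSH as the `nutanix` user and hypothetical CVM IPs, not an official procedure) is to issue the shutdown from a workstation that can reach every CVM directly, rather than hopping from CVM to CVM, so a CVM that is already powered off never has to relay the command to its peers:

```shell
#!/bin/sh
# Sketch only: 10.0.0.11-10.0.0.14 are hypothetical CVM IPs -- substitute
# your cluster's CVM addresses. Running the loop from an outside workstation
# avoids the problem where a CVM that has already shut down can no longer be
# reached by the next one in line.

shutdown_cmds() {
  for ip in 10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14; do
    # Print each command instead of executing it (dry run).
    echo "ssh nutanix@$ip 'sudo shutdown -P now'"
  done
}

shutdown_cmds
```

Review the printed commands first, then pipe them to `sh` to actually execute them once the cluster has been stopped per the normal shutdown procedure.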
Hi, is there any ETA for Nutanix Guest Tools for Debian 10? Debian 11 is probably going to be released this summer, so it would be quite nice if NGT supported Debian 10. Or is there a way to “forcefully” install Nutanix Guest Tools on newer systems? Is it possible to build it ourselves so we can distribute it to the thousands of VMs we are running?
Hello, my client has the alert "Active Directory Domain Controller(s) or DNS servers configured on the UVMs in the cluster" because he moved his domain controller onto the Nutanix cluster. His environment is 100% Hyper-V, and he is fully aware that the SMB3 share of the Nutanix cluster requires authentication against the domain. To avoid that failure, my client created iSCSI volumes, presented them to his Hyper-V environment, and moved his domain controller onto them, so that if everything goes down after a power failure, the domain controller can boot before authentication is needed when everything comes back up. Please let us know the best approach for this matter and your recommendation for this kind of setup, especially when all the domain controllers are virtualized. Regards, Adrian
This happens because the Windows operating system does not have the appropriate driver (the VirtIO driver) to read the disk that the operating system is installed on. In a nutshell: 1. Download both the VirtIO and Windows ISOs to the image store in the cluster (you can use Prism “Image Configuration” or Prism Central Explore -> Images -> Add Image). 2. Mount both the VirtIO driver ISO and the Windows ISO as CD-ROMs on the affected guest VM. 3. Power on the guest VM booting from the Windows ISO CD-ROM; you will see an option called “Troubleshoot”. 4. Choose that option and go to the command prompt. 5. Get the list of mounted disk drives and navigate to the drive where the VirtIO ISO resides. 6. Load the driver: “drvload vioscsi.inf” 7. Verify that the disks are now visible: “wmic logicaldisk get caption” 8. Exit and reboot the guest VM. 9. After you log in, go ahead and install the VirtIO MSI package. For more details please see: https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000kAWeCAM
Hello - can someone reply with the complete power specifications for the Nutanix NX-6260 (NX-6060)? I know these have dual power supplies. Please provide the following at minimum: 1.) Voltage of each power supply - I think these are 208V single phase 2.) Amps per power supply 3.) Watts per power supply 4.) Power cable ends - please confirm; I think these are C13/C14 for each power supply. We do not yet have the boxes, so I need to prepare for the power requirements of these devices. I am looking at two of the Tripp Lite PDU Basic 208V/240V 30A C13 10-outlet L6-30P horizontal 1U - MFG part #: PDUH30HV. Also wondering what others are using. Thank you. Sam
I’m looking to upgrade the hypervisor on my cluster of NX-8035-G6 hardware; however, the download site only lists ESXi version 7.0.0-GA (build 15843807) as the latest for this hardware line, which was released by VMware almost 2 years ago. Can I use this older .json file with newer versions of ESXi, for example 7.0 Update 3? Or, since there appears to be support for these newer versions of ESXi for other vendors (like the Dell hardware line), can I use the .json files for the Dell hardware along with the new ESXi binaries on my cluster of NX-8035-G6 nodes?
Hello, is it possible to enable compression on an [b]existing container[/b]? What will happen to the [b]data already written[/b]? We want to enable [b]post-process[/b] compression, so I hope the Curator MapReduce framework will compress [b]all data[/b] concerned. Can somebody please confirm or correct? Thanks in advance! W.
Hello community, I have some doubts about running Nutanix on VMware ESXi with a Cisco ACI fabric. To start, we have already married ourselves to a Cisco ACI network fabric (two sites connected with 12x10Gbit fiber (120Gbit)). We are using an IPN/Spine/Leaf topology, with APIC clusters in each site. In each DC there will be 24 Nutanix nodes. All of my questions have to do with best practices for integrating these technologies. 1) Is it necessary to use VMware NSX to reap the benefits of Cisco ACI? 2) Can we just use a simple VMware installation (without NSX) and allow Nutanix full access to the ACI fabric? 3) What are the best practices for these three technologies coexisting? I have found documents on Nutanix/ACI and Nutanix/VMware, but I can't find anything on using all three together in terms of hierarchy and how to stitch it all together. Any guidance from those who have experience would be greatly appreciated. Thanks, Michael
Hi, I just installed a nested single-node CE 5.18 (2020.9.16) on VMware ESXi 6.7, and it is OK. The host IP is pingable, but the CVM IP is unpingable from an outside PC, so I cannot open the web console. When I log in to the host, I can ping the CVM IP. How do I fix this issue? Thank you so much.
Hello, I would like to install NGT on a Linux CentOS 7 VM. I created one more CD-ROM for mounting on the VM I want to install it on. I know that in Prism you select Manage Guest Tools as follows and then proceed with the installation, but if it stops at the following steps, where should I check? I'd appreciate it if you could give me an answer.
Hi, I am confused about the output of the AHV network command "manage_ovs show_interfaces". In the result, I can check the link status and speed of the NICs, but what is the meaning of [b][u]mode[/u][/b]? I attached a snapshot below showing the output on an AHV node which has 10Gb and 25Gb NIC cards, but the mode column displays 10000 for all of them. What is the meaning of mode? [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/ceda42e0-451d-4f7e-a384-3fc8c9645c90.png[/img]
Hello dear Nutanix community, a client of my company has purchased 3 Lenovo HX nodes and 2 ToR Cisco switches; each node is equipped with only 2 10GbE ports (the LOM ones). However, with only 2 ports, I don't know how to set up the cluster in a way that provides redundancy. The current rack looks like this, without any cabling yet: The IPMI port will go to a management switch, so there's no issue there. But so far, I am confused as to how to set up the cabling for the two interfaces: should eth0 go to a port on SW1 and eth1 go to a port on SW2 for each of the 3 servers, or should both eth0 and eth1 go to the same switch? If I understood the AHV networking articles in Nutanix correctly, eth0 and eth1 will form a logical bond (br0-up) inside the br0 bridge. And I'm guessing, since we only have 2 uplinks per node (I'm going for the first scenario):> eth0 will go to SW1Port1 (for example), SW1Port1 will be configured as a trunk port, for which the CVM/AHV VLAN and user VM VLANs will be de