Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,123 Topics
- 3,029 Replies
AHV supports GPU-accelerated computing for guest VMs. You can configure either GPU pass-through or a virtual GPU. Say you have an AHV host with GPU-compatible hardware and are looking for a simple way to install the required drivers. Nutanix recommends a specific method for installing the NVIDIA GPU host driver on AHV hosts: a script that installs or upgrades the driver on all hosts in the cluster. Go through the following document to understand the process in depth: Installing AHV GPU Drivers. Have questions regarding the usage of the script? What happens if one of the nodes doesn't have a GPU? What happens if the driver version on one node differs from the rest of the cluster? How can I install the driver onto new nodes only, without affecting the currently running nodes? Can I install different versions of the driver onto different nodes of the cluster? The following knowledge base article can help you to
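Before running the driver install script, it can help to confirm which hosts actually have a GPU and, afterwards, which driver version each host ended up with. A minimal sketch, assuming the standard CVM utility hostssh is available; this is illustrative, not the documented procedure:

# Run from any CVM: list NVIDIA PCI devices on every AHV host in the cluster.
hostssh "lspci | grep -i nvidia"

# After the driver is installed, confirm the host driver version on each host.
hostssh "nvidia-smi --query-gpu=name,driver_version --format=csv"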
Dear All, We are installing Hyper-V 2012 R2 with Foundation 2.0. We successfully installed the cluster, but after running the scripts for joining the hosts to the domain (setup_hyperv.py setup_hosts), even though the hosts are joined to the domain, the cluster status command stops with the error below: "WARNING genesis_utils.py:325 Failed to reach a node where Genesis is up. Retrying... (Hit Ctrl-C to abort)" We disable the 10G interface when the script asks us to. Please advise. Thank you
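A common first step here is to see on which CVMs Genesis can actually be reached. A minimal sketch, assuming the standard CVM utilities (allssh, genesis) are present; this is illustrative, not a documented fix:

# Run from one CVM: check whether the Genesis service is up on every CVM.
allssh "genesis status"

# If Genesis is down on a node, restarting it there is often the next step.
genesis restart

# Then re-check overall cluster service state.
cluster status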
I need to change the IPMI IP address on 2 nodes of a 4-node Dell XC block. I know how to change the IP from the iDRAC console; however, how do I get it to update in the Nutanix node? I read through the Admin Manual, but I could not find anything related to changing the IPMI IP address. Thanks, David
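One generic way to change the BMC address from the hypervisor side (instead of the iDRAC console) is ipmitool; whether Prism picks up the new address automatically depends on the AOS version, so treat this as an illustrative sketch only. The addresses below are placeholders:

# Run on the hypervisor host (channel 1 is typical, but verify with "lan print").
ipmitool lan print 1                      # show current IPMI network settings
ipmitool lan set 1 ipsrc static           # use a static address
ipmitool lan set 1 ipaddr 10.0.0.51       # hypothetical new IPMI IP
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.0.0.1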
I am a bit confused. On the Nutanix support portal, in the Downloads section for Phoenix, it says, "Phoenix is a tool used to install the Nutanix Controller VM and provision a hypervisor on a new or replacement node in the field. You can download a Phoenix ISO file from this page. Each Phoenix ISO file is for a specific hypervisor and Acropolis base software (NOS) release." But then when filtering, it shows Phoenix versions 188.8.131.52 to 3.0.4 are for hypervisor "All", and Phoenix version 2.0.4 has separate downloads specifically for KVM, Hyper-V, or ESXi, depending on which hypervisor you are deploying. Which Phoenix download are you supposed to use if you want to deploy an ESXi 5.x or 6.x hypervisor on the nodes in the field?
Can I remove an interface included in a bond with ovs-vsctl? For example, if br0 contains interfaces eth0 to eth3 in a bond named br0-up, I want to remove eth0 and eth1. If I can't, is there a command in AHV to update or remove interfaces, like manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth2,eth3 update_uplinks? Thank you for always helping me.
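On AHV, the supported way to change bond membership is manage_ovs from the CVM rather than editing the bond with ovs-vsctl directly. A minimal sketch, reusing the bridge, bond, and interface names from the question:

# Run from the CVM of the host you are changing.
manage_ovs show_uplinks                                   # confirm current bond membership
manage_ovs --bridge_name br0 --bond_name br0-up --interfaces eth2,eth3 update_uplinks

# With plain Open vSwitch you would recreate the bond rather than pull single
# members out of it, e.g. (generic OVS commands, not Nutanix-specific):
# ovs-vsctl del-port br0 br0-up
# ovs-vsctl add-bond br0 br0-up eth2 eth3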
Is there any special guide on how to set up Cisco 10 Gigabit switches for Nutanix environments? I'm using vSphere 5.0 U3 with Nutanix. Should jumbo frames be enabled? Is it possible with iperf inside the CVM to get 9 Gbit/s with one process? I get 1.57 Gbit/s. Good hints from Intel. The Intel NICs are onboard. Which PCI Express speed: x4, x8, x16? http://www.intel.com/support/network/sb/CS-025829.htm This graph is intended to show (not guarantee) the performance benefit of using multiple TCP streams.

PCI Express Implementation | Encoded Data Rate | Unencoded Data Rate
x1  | 5 Gb/sec  | 4 Gb/sec (0.5 GB/sec)
x4  | 20 Gb/sec | 16 Gb/sec (2 GB/sec)
x8  | 40 Gb/sec | 32 Gb/sec (4 GB/sec)
x16 | 80 Gb/sec | 64 Gb/sec (8 GB/sec)

http://dak1n1.com/blog/7-performance-tuning-intel-10gbe Maybe a useful script: For me, I had 10 machines to test, so I scripted it instead of running any commands by hand. This is the script I used: https://github.com/dak1n1/cluster-netbench/blob/m...
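On the single-process question: a single TCP stream often tops out well below line rate, so it is worth comparing one stream against several parallel streams between two CVMs. A minimal iperf sketch (the CVM IP is a placeholder):

# On the receiving CVM:
iperf -s

# On the sending CVM: one stream, then four parallel streams for comparison.
iperf -c 10.0.0.12 -t 30
iperf -c 10.0.0.12 -t 30 -P 4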
When trying to upgrade NOS or NCC I'm getting this error: "Error while executing HTTP REST request: Could not connect to Genesis". The updates have already downloaded. This occurs even if I pick a lower version to upgrade to. I'm going from 4.1.4 to 184.108.40.206 (NOS) and 2.0.1 to 2.1.4 (NCC). Any ideas? Is there a service I should restart or something? Thanks! Mike
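Since the error points at Genesis, checking and restarting that service on the CVMs is a common first step before retrying the upgrade. A minimal sketch, assuming the standard CVM utilities:

# Run from any CVM: check Genesis on every node, then restart it where needed.
allssh "genesis status"
allssh "genesis restart"

# Prism's upgrade pre-checks talk to Genesis, so retry the upgrade once it is back up.
cluster status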
We moved an NX-3000 to a new environment, but one of the nodes died, so we had to reimage that node. The other two were still up and running. But after a power failure on the rack, the two nodes rebooted but won't come up anymore. Genesis is showing some failures:

2016-08-18 14:38:40 INFO node_manager.py:3492 Svm has configured ip 10.160.35.124 and device eth0 has ip 10.160.35.124
2016-08-18 14:38:43 INFO node_manager.py:3542 Setting up key based SSH access to host hypervisor for the first time...
2016-08-18 14:38:43 INFO hypervisor_ssh.py:32 Trying to access hypervisor with provided key...
2016-08-18 14:38:46 INFO hypervisor_ssh.py:40 Failed.
2016-08-18 14:38:46 INFO hypervisor_ssh.py:44 Trying to access hypervisor with provided password...
2016-08-18 14:38:49 INFO hypervisor_ssh.py:52 Failed
2016-08-18 14:38:49 ERROR node_manager.py:3547 Failed to set up key based SSH access to hypervisor, most likely because we do not have the correct password cached. Please run fix_host_ssh command manually to
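The ERROR line itself points at the likely remediation: Genesis cannot SSH to the hypervisor with the cached credentials. A minimal sketch of what that looks like from the affected CVM (the command name comes from the log above; exact prompts vary by AOS version):

# Run on the CVM that logs the SSH failure; it re-establishes key-based SSH
# to the local hypervisor (it will prompt for the current host root password).
fix_host_ssh

# Once it succeeds, restart Genesis and check that services come up.
genesis restart
cluster status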
Hi, which 10G SFP+ LR (single-mode) modules are supported in Nutanix NX-8035-G5 and NX-6035C-G5 nodes with the C-NIC-10G-2-SI 10G network interface card? Do the SFP+ modules need special vendor branding, as is the case, e.g., for use with Cisco switches? Thanks, Erik
Has anyone ever attempted a host boot disk repair and encountered this issue before? All cables and SFPs are plugged in as if it were still in production. I have not yet attempted to rebuild via Foundation, as these were the steps provided by Nutanix support: https://portal.nutanix.com/page/documents/details?targetId=Hypervisor-Boot-Drive-Replacement-Platform-v510-Multinode-G3G4G5:Hypervisor-Boot-Drive-Replacement-Platform-v510-Multinode-G3G4G5

Steps taken:
- Downloaded and burnt phoenix.iso to USB
- Booted the USB into Phoenix
- Kicked off "Repair Host Boot Disk" and uploaded the whitelisted ESXi image
- Encountered the error

I have a case open with support and have escalated, and figured that while I'm waiting I could ask the community. Will provide a response here once we figure out the issue. Cheers all! (Edit: I posted this in the CE forum by accident)
I have a problem with the OS and Distributed Switches on vCenter. When I migrate the VM network from the standard switch to the distributed switch, the following error is shown:

Detailed information for cvm_startup_dependency_check:
Node x.x.x.x: FAIL: .dvsData directory is not persistent yet
Refer to KB 2050 (http://portal.nutanix.com/kb/2050) for details on cvm_startup_dependency_check
################################################################################
PLUGIN RESULTS
################################################################################
/health_checks/hypervisor_checks/cvm_startup_dependency_check [ FAIL ]

I can see that when the ESXi host reboots and the CVM starts, the CVM loses its network config, and I have to assign the network adapter on the CVM again. I don't know how to configure the distributed switch in vCenter to make it persistent.
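Once the change recommended in KB 2050 is in place, the individual check can be re-run on its own to confirm it passes; the check path comes from the plugin output above:

# Run from a CVM: re-run only the failing check after applying the KB 2050 fix.
ncc health_checks hypervisor_checks cvm_startup_dependency_check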
You have the option of adding a Witness to a Metro Availability configuration (see Data Protection Guidelines (Metro Availability)). A "Witness" is a special VM that monitors the Metro Availability configuration health. The Witness resides in a separate failure domain to provide an outside view that can distinguish a site failure from a network interruption between the Metro Availability sites. The goal of the Witness is to automate failovers in case of site failures or inter-site network failures. The main functions of a Witness include:
- Making a failover decision in the event of a site or inter-site network failure.
- Avoiding a split-brain condition where the same storage container is active on both sites due to (for example) a WAN failure.
- Handling situations where a single storage or network domain fails.

Metro Availability Failure Process (no Witness)
In the event of either a primary site failure (the site where the Metro storage container is currently active) or the link betwee
While at the time of posting this article neither the process referred to nor the Windows 2003 OS itself is supported within an AHV environment, Nutanix understands that there are situations where customers might find themselves unable to move away from a certain OS version. To help customers with the process of migrating Windows 2003 servers from ESXi to AHV, we share this post by Artur Krzywdzinski, where he explains the process in detail. We would like to thank Artur for sharing the solution. Share your own ideas and processes that worked (and did not) with the community - help someone, encourage cooperation! Please note that Nutanix Xtract, referred to in the post, is currently known as Nutanix Move. vmwaremine.com: Migrate Windows 2003 to Nutanix AHV by Artur Krzywdzinski
Sorry if this topic has already been covered. I have been trying to use the new "Portable Foundation" app that runs on Windows to image a cluster without using a VM. Every time I try (three times so far) I get errors that the IPs I have provided are already in use. I am using a flat switch, and the 10G and 1G links are unplugged from the customer's switches. I have tried with my laptop in and out of airplane mode, with the firewall on and off, and still receive the errors. The only difference is that the listed IP conflicts are never the same each time: the first try may tell me that x, y, and z are in use, then the second time it may say that a, b, and y are in use. Any ideas? I really like the idea of this app working. Thanks, -David
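One way to sanity-check Foundation's complaint is to probe the flagged addresses yourself from the same laptop on the flat switch before starting the imaging run. A minimal sketch using standard commands (the address is a placeholder):

# From the Foundation laptop: see whether anything answers on a flagged address.
ping -n 1 10.0.0.30        # Windows syntax; use "ping -c 1" on Linux/macOS
arp -a                     # check whether a MAC was cached for that address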
By default, a virtual NIC on a guest VM operates in access mode. In this mode, the virtual NIC can send and receive traffic only over its own VLAN, which is the VLAN of the virtual network to which it is connected. A virtual NIC in trunk mode can send and receive traffic over any number of VLANs in addition to its own VLAN. You can trunk specific VLANs or trunk all VLANs. You can also convert a virtual NIC from trunk mode to access mode, in which case the virtual NIC reverts to sending and receiving traffic only over its own VLAN. A trunked NIC can only be added via aCLI. It is not possible to distinguish between access and trunked NIC modes in the Prism UI. You can create a new NIC for the VM to operate in the required mode, or you can change the mode of an existing VM NIC. For the command list and sequence, please refer to the AHV Administration Guide: Configuring a Virtual NIC to Operate in Access or Trunk Mode
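For reference, the aCLI commands look roughly like the following; parameter names and values are illustrative and should be confirmed against the AHV Administration Guide linked above (the VM name, network name, MAC address, and VLAN IDs are placeholders):

# Add a new virtual NIC in trunk mode, carrying its own VLAN plus VLANs 10 and 20.
acli vm.nic_create myvm network=vlan.0 vlan_mode=kTrunked trunked_networks=10,20

# Convert an existing NIC (identified by its MAC address) back to access mode.
acli vm.nic_update myvm 50:6b:8d:xx:xx:xx update_vlan_trunk_info=true vlan_mode=kAccess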
I have three protection domains, all containing VMs. All three have schedules set, and I can see that local snapshots are being made. I have restored machines with no issues. All seems well, but I am getting "Backup Schedule" health warnings on all three PDs.

Causes of failure:
- No backup schedule exists for protection domain protecting some entities.
- Backup schedule exists for protection domain not protecting any entity.

Neither of the above suggestions is true. I can't seem to get this one to resolve. Is this a bug, or is there something here I am missing?
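One way to cross-check what the alert is keying on is to dump the schedules and protected entities per PD from the CLI. A sketch assuming the ncli protection-domain (pd) namespace, with a placeholder PD name:

# Run from a CVM: list protection domains, then the schedules for one of them.
ncli pd ls
ncli pd ls-schedules name=PD-NAME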
Hello, I'm not sure if this is the best place to ask this, but I'm not sure where else to ask. I keep getting a message that there's an NGT update available on just one of our servers running Ubuntu. I ran an update and it went from 1.1.2 to 1.5.2. After resolving/acknowledging the issue, it reported the same message again the next day. Is 1.5.2 not the latest?