Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,169 Topics
- 3,202 Replies
Purchased 7/15/2016, set up/installed 10/24/2016 (by a Nutanix tech), office move 4/1/2017. (In late April there was a power outage in the San Francisco financial district; I was on the bus on my way to work when the power went out and never got to shut down the Nutanix cluster properly. The next morning I powered up the cluster and it ran fine. Awesome! No issues since then.)
Hardware: 2x NX-8235-G5, 1x NX-8135-G5. NOS: 4.7.2. NCC: 2.3.0. BIOS: 20160516. BMC: 03.28. Hypervisor: VMware vCenter Server 5.5 Update 2e (vSphere on Windows Server 2008 R2).
I need to upgrade vSphere from 5.5 U2e to the latest 6.5 release and also Nutanix NOS from 4.7.2 to 5.5.8. Do I:
1 - upgrade VMware to 6.5, then upgrade Nutanix to 5.5.8, or
2 - upgrade Nutanix to 5.5.8, then upgrade VMware to 6.5, or
3 - upgrade VMware to a compatible intermediate version, then upgrade Nutanix to a compatible intermediate version, then upgrade VMware to 6.5 and Nutanix to 5.5.8, or
4 - upgrade Nutanix to a compatible intermediate version and then upgrade VMware…
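Whichever order turns out to be supported, a pre-upgrade sanity check from a CVM is a reasonable first step. A minimal sketch, assuming SSH access to a CVM (these are the standard NCC and upgrade-status tools, not a substitute for checking the VMware/Nutanix compatibility matrix):

    ncc health_checks run_all    # full NCC health check before touching anything
    ncli cluster info            # confirm the current NOS/AOS version and cluster state
    upgrade_status               # verify no upgrade is already in progress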
Hi, new to the forum. We recently had our Nutanix servers delivered and we are planning to do the install in the coming weeks. We haven't decided which hypervisor to go with yet; it's either going to be Hyper-V or AHV. I just wanted to know what experiences people have had. We currently use Hyper-V, so we are comfortable with that, but all the nice features seem to be heading to AHV/Prism. I would be grateful for your thoughts and comments. Thank you
Hi all, I would like to know how I can move VMs from one Nutanix block to another Nutanix block using vMotion. Each Nutanix block is configured as a standalone Nutanix cluster; the two Nutanix blocks are under the same VMware DC and cluster and are managed by one vCenter. But when I run vMotion for one virtual machine from the vSphere client, I cannot choose the target Nutanix storage. Thanks
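The usual blocker here is that the source ESXi hosts cannot see the other cluster's container as a datastore. A minimal sketch of one way to cross-mount it (the subnet, container name, and datastore name are placeholders, and both the whitelist syntax and the mount should be verified against the Nutanix documentation for your AOS version):

    # On the target Nutanix cluster: whitelist the source ESXi hosts' subnet for NFS access
    ncli cluster add-to-nfs-whitelist ip-subnet-masks="10.10.10.0/255.255.255.0"
    # On a source ESXi host: mount the target cluster's container as an NFS datastore
    esxcli storage nfs add -H 10.10.10.50 -s /target-container -v target-container-ds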
Hi guys, I can't seem to perform an inventory in LCM any more. [b]It fails on the check[/b]: [i]Check 'test_upgrade_in_progress' failed with 'Failure reason: Another Upgrade operation is in progress. Please wait for that operation to complete before starting an LCM operation.'[/i]
However, running [b]progress_monitor_cli --fetchall[/b] does not show anything in progress. In [b]host list[/b], all 3 hosts show [i]false (life_cycle_management)[/i], which is as expected.
~/data/logs$ [b]upgrade_status[/b]
2019-09-04 14:47:32 INFO zookeeper_session.py:131 upgrade_status is attempting to connect to Zookeeper
2019-09-04 14:47:32 INFO upgrade_status:38 Target release version: el7.3-release-euphrates-5.10.6-stable-294f5f671ba8982a0199e18b756e8ef3a453af9a
2019-09-04 14:47:32 INFO upgrade_status:43 Cluster upgrade method is set to: automatic rolling upgrade
2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.x.x is up to date
2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.
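When progress_monitor_cli shows nothing, the blocker is often a stale Ergon task rather than a real upgrade. A minimal check sketch (run from a CVM; cancelling or aborting tasks is something to do only with guidance from Nutanix Support):

    # List tasks that are still queued or running cluster-wide
    ecli task.list include_completed=false
    # Re-confirm that no AOS upgrade is actually active
    upgrade_status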
While at the time of posting this article neither the process referred to nor the Windows 2003 OS itself is supported in an AHV environment, Nutanix understands that there are situations where customers might not be able to move away from a certain OS version. To help customers with migrating Windows 2003 servers from ESXi to AHV, we share this post by Artur Krzywdzinski, where he explains the process in detail. We would like to thank Artur for sharing the solution. Share your own ideas and processes that worked (and did not) with the community - help someone, encourage cooperation! Please note that the Nutanix Xtract tool referred to in the post is currently known as Nutanix Move. vmwaremine.com: Migrate Windows 2003 to Nutanix AHV by Artur Krzywdzinski
Hey everyone, I am required to upgrade from ESXi 6.7 U1 to U2 to resolve a bug that prevents me from going back more than 8-10 minutes when looking at VM performance; however, Nutanix has only just now tested U3. My question is: has anyone migrated to U3 yet, and if so, how has it been so far?
Hello everybody, hope you're all doing well! I want to install Nutanix using Foundation. I have 3 Lenovo nodes and one Cisco 2960X switch with 24 x 1 Gig RJ45 ports and 4 x 1 Gig SFP ports. Unlike Supermicro nodes, Lenovo nodes do not have shared IPMI ports. So, for each Lenovo node, I have to connect one 10 Gig port and one 1 GbE IPMI port to the same Cisco 2960X switch (the 10 Gig ports of the nodes connected to the 1 Gig SFP ports of the switch, and the 1 Gig IPMI ports to the 1 Gig RJ45 ports). Please take a look at the picture below… Since the 10 Gig ports of the nodes are connected to 1 Gig ports of the switch, I wonder whether this architecture will allow me to install Nutanix and create the cluster without any issues? Thanks in advance.
Hello friends, how are you? Currently I am trying to Foundation a Nutanix 3-node environment, but after the Foundation process begins it halts at the "waiting for the installer to boot" stage with a "fatal" error warning. I don't know what is causing the error; can anyone tell me something? I have attached screenshots of the errors and the process.
After a failed upgrade that resulted in a failed node, we reinstalled the node and added it as a remote site so we could migrate to it and reinstall the other nodes. When I add the site on the old cluster it works as expected, but on the reinstalled node I can't add any vStore mappings and am instead greeted with this message: "Remote Datastores are either not configured or not accessible." When I run [i]ncli> remote-site list[/i] on the reinstalled node I get: Status : unreachable, and on the old cluster: Status : relationship established. Is there any log output/command that could help me troubleshoot this further?
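A few checks that often narrow this down (run from a CVM on each side; the log path is the standard CVM location and the port pair is the usual Stargate/Cerebro requirement for replication, so verify against current documentation):

    # Confirm each side actually has a container to offer for the vStore mapping
    ncli vstore list
    ncli remote-site list
    # Cerebro manages remote-site health; its log usually explains an "unreachable" status
    tail -n 100 ~/data/logs/cerebro.INFO
    # Replication needs TCP 2009 (Stargate) and 2020 (Cerebro) open between the CVMs
    nc -zv <remote-cvm-ip> 2009
    nc -zv <remote-cvm-ip> 2020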
AHV supports GPU-accelerated computing for guest VMs. You can configure either GPU pass-through or a virtual GPU. Let us say you have an AHV host with GPU-compatible hardware and are looking for a simple way to install the required drivers. Nutanix recommends a specific method for installing the NVIDIA GPU host driver on AHV hosts: a script used to install or upgrade the driver on all the hosts in the cluster. Go through the following document to understand the process in depth: Installing AHV GPU Drivers. Have questions regarding the usage of the script? What will happen if one of the nodes doesn't have a GPU? What will happen if the driver version is different on one node than on the rest of the cluster? How can I install the driver onto new nodes only, without affecting the currently running nodes? Can I install different versions of the driver onto different nodes of the cluster? The following knowledge base article can help you to…
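As a rough illustration of what that document covers, the host driver bundle is typically installed cluster-wide from a single CVM using the install_host_package script (the rpm path below is a placeholder; the exact script arguments and supported driver versions should be taken from the Installing AHV GPU Drivers document):

    # From any CVM: installs/upgrades the NVIDIA host driver on the AHV hosts,
    # rolling through the cluster one node at a time
    install_host_package -r /home/nutanix/nvidia-vgpu-host-driver-<version>.rpm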
Let's say we need to change the MAC address of a VM we just created, for various reasons - licensing being one of them. Does Nutanix AHV provide a feature to set a static MAC address or change the MAC address of a VM? Yes, we do have a feature to change the MAC address of a VM and assign another address. As of writing this article, this can only be done through acli and has some limitations and guidelines. Please check out the knowledge base article for the steps and the latest updates: KB-3670. Want to know more about AHV networking? Try giving this KB a read: KB-2090
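As a rough sketch of the acli flow that the KB describes (the VM name, network name, and MAC values are placeholders; the VM generally needs to be powered off, and the exact arguments should be confirmed against KB-3670):

    # Remove the existing NIC, identified by its current MAC address
    acli vm.nic_delete myVM 50:6b:8d:aa:bb:cc
    # Re-create the NIC on the same network with a statically assigned MAC
    acli vm.nic_create myVM network=vlan100 mac=50:6b:8d:11:22:33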
Hi, we purchased Huawei servers at the end of last year (see the compatibility matrix published by Nutanix itself: https://docplayer.net/136878906-Huawei-hardware-compatibility-list.html) and, due to Covid-19, we were not able to install them until now. Currently we have 4 servers with part number 2288Hv5-12 from the compatibility matrix and 2 x 10G switches for interconnection. Each server has M.2 SSD drives for OS installation configured in RAID 1, while the rest of the disks (1.92 TB SSDs) are in JBOD mode, since there are 2 RAID cards, one for the M.2 drives and the other for the SSD drives respectively. The virtual drive has status OK. During the installation process we receive an error message that says the following: "INFO Performing firmware checks for platform Huawei 2288HV5-12 WARNING Some error occurred during firmware checks. This can be due to some unsupported hardware. Please report this to foundation team." The final error message is: Unable to find SSD in node. Please help us resolve this issue in form o…
Hello, I'm trying to configure two AHV clusters (3 nodes) for replication. Both environments are running AOS 5.11.2 on HPE DX360 Gen10. The network environment is spread across two stacks: one stack dedicated to storage with 10Gb interfaces and one stack for the LAN with 1Gb interfaces. There is no connection between these two networks. After deploying the nodes and configuring the clusters, I started to configure the backplane network with the following parameters: Subnet: 172.16.250.0, Netmask: 255.255.255.0, VLAN: 202, Host node: br1. The backplane network of the first cluster works without any problem. When I do the same (with the same parameters) for the second cluster, it fails: it says that the IP addresses are already in use. I would like to specify a VLAN/subnet where both clusters have their backplane IPs so I can use the backplane network for replication between the clusters. I hope this is clear. Kind regards, Fred
Hello all, I have recently deployed Prism Central and I am trying to give team members access via their AD accounts. I have gone through the roles and discovered that I cannot add new members to the predefined roles; however, if I duplicate the roles I can add AD users and groups to the new roles. This works for me, but when I duplicate a role such as the "Super Admin" role I am warned that not all permissions are going to apply to the new role and that I would need to create the new role via the CLI to get those permissions. OK, fine, that makes some sense. But where is the documentation on how to do that? Can someone point me to the documentation for performing these role-creation tasks via the CLI? Thanks, Scott
Hello everyone, I was wondering if it's possible to have a different VLAN ID/subnet range for each of the different traffic types below: - Hypervisor management (ESXi) - Nutanix cluster administration - Nutanix cluster replication / AutoPath. And best of all would be to have replication and AutoPath on different VLANs as well. The rationale here is to comply with the customer's internal security policies regarding DMZ virtualization. We are allowed to use VLANs and are not forced to use different physical ports, but the security team (worldwide bank) is concerned about ESXi and Nutanix being on the same VLAN. Sylvain.