Installation & Configuration
This forum is the best way to get up and running with the Nutanix platform
- 1,136 Topics
- 3,094 Replies
Foundation preconfiguration JSON
Is there a KB article or documentation describing the JSON format and any additional fields that can be set? For example, when going through the install.nutanix.com portal to generate the JSON file, it doesn't ask for an NTP server for the hypervisor. Looking at the generated file, though, there is this line: "hypervisor_nameserver": "220.127.116.11". I'd like to have our Nutanix deployments in code as much as possible (right now we use various scripts for our vSphere configurations), so being able to configure as much as possible up front would let us turn this into a template for future deployments.
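There is no single public schema for every field, but a generated file makes a good starting template. As a rough illustration only (not the authoritative schema: "hypervisor_nameserver" is the one key taken from an actual generated file, while every other key and all values here are hypothetical placeholders), a trimmed template might look like:

```json
{
  "hypervisor_nameserver": "10.0.0.53",
  "hypervisor_ntp_servers": ["10.0.0.123"],
  "clusters": [
    {
      "cluster_name": "example-cluster",
      "redundancy_factor": 2
    }
  ]
}
```

Fields the portal does not prompt for (such as an NTP entry) could then be filled into the template per deployment before handing the file to Foundation.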
AHV Networking | Configuring Load Balancing
Every production infrastructure knows the importance of load balancing network traffic to increase efficiency. Say you have multiple links in your environment and want to use the potential of all of them, or want a backup configuration in case a link fails: load balancing will come to your rescue. Today we will talk about two load-balancing modes:

- Active-backup
- Balance-slb

To learn more about load-balancing configuration and AHV networking in detail, give the AHV Networking Best Practices Guide a read. So how do you decide between active-backup and balance-slb? This comparison might help you.

Advantages of the active-backup bond mode: active-backup is the default bond mode. One interface in the bond carries traffic, and the other interfaces are used only when the active link fails. Active-backup is the simplest bond mode and easily allows connections to multiple upstream switches without any additional switch configuration. Disadvantages…
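As a quick sketch of what switching modes looks like in practice (the bridge and bond names br0/br0-up are common defaults but may differ in your environment, so treat them as assumptions to verify against the Best Practices Guide):

```shell
# On the AHV host: check the current bond state (active interface, mode, link status).
ovs-appctl bond/show br0-up

# Switch the bond to balance-slb and set the rebalance interval
# (milliseconds; 30000 = 30 seconds).
ovs-vsctl set port br0-up bond_mode=balance-slb
ovs-vsctl set port br0-up other_config:bond-rebalance-interval=30000

# Revert to the default active-backup mode if needed.
ovs-vsctl set port br0-up bond_mode=active-backup
```

Apply the change on one host at a time and verify connectivity before moving to the next.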
Configuring VMware Distributed Switch (vDS) on the Nutanix Platform
A Nutanix cluster works with the vDS, and you can use the following guidelines and recommendations to configure the vmkernel and VM interfaces that will be part of the vDS. Nutanix recommendations for implementation:

- Keep the vSwitchNutanix, the vmkernel port (vmk-iscsi-pg), and the Nutanix Controller VM's virtual machine port group (svm-iscsi-pg) configuration intact. It should remain a standard vSwitch and should not be migrated to the vDS. Migrating vSwitchNutanix to the vDS causes issues with upgrades and with Controller VM data-path communication.
- Only migrate one host to a dvSwitch at a time. After migrating the host, confirm that its Controller VM can communicate with all other Controller VMs in the cluster. This ensures that the cluster services running on all Controller VMs continue to function during the migration. In general, one Controller VM can be off the network at a given time while the others continue to provide access to the datastore.
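The per-host connectivity check after each migration can be scripted from the migrated host's CVM. A minimal sketch, assuming the standard `svmips` helper on the CVM that lists all Controller VM IPs:

```shell
# From the CVM of the freshly migrated host: ping every Controller VM in the
# cluster and flag any that do not answer before migrating the next host.
for ip in $(svmips); do
    ping -c 2 -W 2 "$ip" > /dev/null \
        && echo "CVM $ip reachable" \
        || echo "CVM $ip UNREACHABLE"
done
```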
Convert existing Nutanix-VMware Cluster to Nutanix only
Dear all, We successfully run two clusters, each with 3 ESXi nodes and 2 AHV storage-only nodes. Now I want to get rid of VMware, but all VMs were built with VMware, and of course I see them in Prism. Has anyone ever just uninstalled ESXi and replaced it with AHV in a running environment? Alternatively, I could move all urgent VMs to one cluster, install Nutanix from scratch on the other, and afterwards migrate the VMs to the Nutanix-only cluster. Thanks and best regards, Claudia
Preupgrade to AOS 5.5.8 fails (zookeeper not running on a CVM) but clean ncc
Hello, We have a 5.1 AOS cluster (EOL, I know) that we wish to upgrade to the latest possible LTS (which seems to be 5.5.8). NCC is clean, but the preupgrade fails at 30%, claiming zookeeper is not responding on one of the CVMs. But: [code]allssh genesis status | grep zookeeper
zookeeper: [3967, 3996, 3997, 4003]
Connection to @1 closed.
zookeeper: [3899, 3928, 3929, 24979, 25089, 25105]
Connection to @2 closed.
zookeeper: [3919, 3948, 3949, 7259, 7365, 7380]
Connection to @3 closed.
zookeeper: [3923, 3952, 3953, 7709, 8032, 8047]
Connection to @4 closed.[/code] And [code]ncc health_checks system_checks zkinfo_check_plugin[/code] runs normally. What could be the issue? Thank you
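A small sanity check can confirm that every line of that output really does carry zookeeper PIDs. The helper below is our own sketch (only `allssh` and `genesis status` come from the post above), and the parsing part runs anywhere:

```shell
# Returns success if a genesis-status line lists at least one zookeeper PID.
zk_line_ok() {
    echo "$1" | grep -Eq 'zookeeper: \[[0-9]+(, [0-9]+)*\]'
}

# On the cluster you would feed it real lines, e.g. from:
#   allssh genesis status | grep zookeeper
zk_line_ok "zookeeper: [3967, 3996, 3997, 4003]" && echo "zookeeper PIDs present"
```

Since genesis reports PIDs on every CVM here, the failure may be a transient check or a connectivity problem toward one CVM's zookeeper port rather than a stopped service; the preupgrade logs on the affected CVM should show which specific test failed.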
SCOM Management Pack Monitoring data disappeared
Hello, I recently added the Nutanix SCOM Management Pack version 18.104.22.168. From the start it worked fine once I handed over the cluster information via Nutanix Cluster Discovery. But now, a few weeks later, SCOM no longer displays any performance data for the clusters. I am not able to find out what exactly caused this problem. I have different clusters, AHV as well as ESXi, with multiple OS versions running. Currently SCOM displays only one cluster with its information; for the other clusters, no performance data is shown. All other clusters were discovered correctly (and in the same way) but won't show their data anymore. My situation looks like this: the dashboard shows one cluster's information, while all other clusters (and their nodes) show no data and the dashboard stays empty. [img]https://d1qy7qyune0vt1.cloudfront.net/nutanix-us/attachment/c1d55b84-0d1c-4404-b139-9b48e7e361fd.jpg[/img] What could have happened that the data is not reaching SCOM, or is no longer being displayed?
Hello folks, Recently I was running some resiliency testing, powering down a node using IPMI (Power Off Server - Immediate) to ensure VMware High Availability worked as expected for a simulated power outage. When I finished, I powered the node back on via IPMI. I was surprised that the CVM did not automatically start. Is it expected that the CVM would not restart when a node is powered back on? Thank you for your help.
Nodes reset/reboot events
We have had two instances where a node detected/reported a fault event and reset, rebooting VMs on each occasion. There seems to be no reason for this to have happened. Details:

Host 192.168.xx.x4 appears to have failed. High Availability is restarting VMs on hosts throughout the cluster. 08-17-16, 02:01:41am
Host 192.168.xx.x4 appears to have failed. High Availability is restarting VMs on hosts throughout the cluster. 08-11-16, 07:19:48am

We updated AHV and NCC, and since then have had a repeat last night of the first instance from last week. Is there a potential hardware fault with the host that has not yet been detected or checked?
Nutanix Move: How to add second NIC to Move VM
While it is not as straightforward a process as we would like it to be, there is an option to add a NIC to your Move VM:

- Log in to Prism Element.
- Add a new NIC to the Nutanix-Move appliance and select the network.
- Launch the console of the Nutanix Move appliance.
- Switch to the root user.
- Use vi, or any other editor of your choice, to open the file /etc/network/interfaces.
- Add the second interface (eth1) configuration in the appropriate format, based on DHCP or static IP addressing.
- Restart the networking service.
- Overwrite the existing "start-xtract" script under "/opt/xtract/bin" with the one provided in the KB (see link below).
- Change the permissions for the script.
- Stop iptables and restart the Move services.
- Verify the new eth1 interface configuration using "ifconfig eth1".

Please note: if you are using Move 3.0.3 or above, you can skip the script-overwrite and permission-change steps; that is taken care of automatically. See KB7399 - Procedure to add a second NIC interface on Move v3.0.2 for detailed instructions.
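For reference, a hedged example of what the eth1 stanza in /etc/network/interfaces can look like (Debian/Alpine-style interfaces syntax; the addresses are placeholders, and you would use only one of the two variants):

```
# Static addressing for the second interface:
auto eth1
iface eth1 inet static
    address 192.168.50.10
    netmask 255.255.255.0

# Or, for DHCP:
auto eth1
iface eth1 inet dhcp
```

After saving the file, restarting the networking service (e.g. `service networking restart` on the appliance) brings eth1 up.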
Remote syslog server
Syslog, or 'System Logging Protocol', is used by routers, switches, access points, servers and, of course, Nutanix. It is used to send events and logs to a remote syslog server that collects, organizes and filters them. In Nutanix, we provide different log modules for the core services; they can be enabled separately, and you can configure the required logging level for each as well. For example, if you wish to forward just the warning-level log messages for Acropolis, this would be the command: ncli rsyslog-config add-module server-name=<server_name> module-name=ACROPOLIS level=WARNING For a more comprehensive look at the various modules and log levels, check out the syslog server documentation. Bonus: would you like to know who powered off that VM while you were sleeping? Or, in general, WHO did WHAT on WHICH OBJECT, at what TIME, from WHERE, and with what OUTCOME? You can forward these audit logs to your syslog server as well, from Prism Central. More information on this can be found in the Prism Central documentation.
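A hedged end-to-end sketch (the server name, IP and port are placeholders; the add-module syntax is from the command above, while the companion add-server and set-status subcommands should be verified against your AOS version's ncli help):

```shell
# Register the remote syslog server, attach a module at the desired level,
# then enable forwarding.
ncli rsyslog-config add-server name=central-syslog ip-address=10.0.0.50 port=514 network-protocol=udp
ncli rsyslog-config add-module server-name=central-syslog module-name=ACROPOLIS level=WARNING
ncli rsyslog-config set-status enable=true
```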
Which hypervisor Hyper-V or AHV?
Hi, new to the forum. We recently had our Nutanix servers delivered and are planning to do the install in the coming weeks. We haven't decided which hypervisor to go with yet; it's either going to be Hyper-V or AHV. I just wanted to hear about the experiences people have had. We currently use Hyper-V, so we are comfortable with that, but all the nice features seem to be heading to AHV/Prism. I would be grateful for your thoughts and comments. Thank you
ESXi 6.7 U3 Assessment
Hey everyone, I am required to upgrade from ESXi 6.7 U1 to U2 to resolve a bug that prevents me from going back more than 8-10 minutes when looking at VM performance; however, Nutanix has only just now tested U3. My question is: has anyone migrated to U3 yet, and if so, how has it been so far?
shrink VM disk
Hi Team, under Prism we've created a few Windows VMs; one of them is our file server. The current configuration is a single 1 TB IDE disk (60 GB OS, 900 GB shared folders). It is probably better to have one 60 GB virtual disk for the OS and a second virtual disk of 1 TB or more for storage. The question is how to shrink the current IDE disk from 1 TB to 60 GB, and whether the IDE disk should be changed to SCSI; what does Nutanix recommend? Instructions for shrinking the disk and converting IDE to SCSI would be appreciated. Thanks, Dariusz
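AHV has no supported in-place shrink for a virtual disk, so the usual approach is to add a new, smaller SCSI disk, copy the data inside the guest, and then remove the old IDE disk (with the Nutanix VirtIO drivers installed in Windows first, so the SCSI disk is visible). A hedged acli sketch, with the VM name and disk address as placeholders; verify the exact parameters against `acli vm.disk_create help` on your AOS version:

```shell
# Power the VM off first.
acli vm.shutdown FileServer

# Add a new, smaller empty SCSI disk (data then copied in-guest)...
acli vm.disk_create FileServer bus=scsi create_size=60G

# ...or clone the existing IDE disk onto the SCSI bus to keep its contents
# (same capacity; this changes the bus type, not the size).
acli vm.disk_create FileServer clone_from_vmdisk=vm:FileServer:ide.0 bus=scsi

acli vm.on FileServer
```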
Disk Firmware Update Stuck
Hi guys, I can't seem to perform an inventory on LCM any more. [b]It fails on the check[/b]: [i]Check 'test_upgrade_in_progress' failed with 'Failure reason: Another Upgrade operation is in progress. Please wait for that operation to complete before starting an LCM operation.'[/i] However after running [b]progress_monitor_cli --fetchall[/b] This does not show anything in progress. [b] host list -[/b] All 3 hosts show: [i]false (life_cycle_management)[/i] - Which is as expected. ~/data/logs$ [b]upgrade_status[/b] 2019-09-04 14:47:32 INFO zookeeper_session.py:131 upgrade_status is attempting to connect to Zookeeper 2019-09-04 14:47:32 INFO upgrade_status:38 Target release version: el7.3-release-euphrates-5.10.6-stable-294f5f671ba8982a0199e18b756e8ef3a453af9a 2019-09-04 14:47:32 INFO upgrade_status:43 Cluster upgrade method is set to: automatic rolling upgrade 2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.x.x is up to date 2019-09-04 14:47:32 INFO upgrade_status:96 SVM 10.x.
vSphere and Nutanix upgrade
Purchased 7/15/2016. Setup/install 10/24/2016 (by a Nutanix tech). Office move 4/1/2017 (late April: during a power outage in the San Francisco financial district I was on the bus on the way to work when the power went out, so I never got to shut down the Nutanix cluster properly; the next morning I powered up the cluster and it ran fine. Awesome! No issues since then.)

Hardware: 2x NX-8235-G5, 1x NX-8135-G5. NOS: 4.7.2. NCC: 2.3.0. BIOS: 20160516. BMC: 03.28. Hypervisor: VMware vCenter Server 5.5 Update 2e (vSphere on Windows Server 2008 R2).

I need to upgrade vSphere from 5.5 U2e to the latest 6.5 release, and also Nutanix NOS from 4.7.2 to 5.5.8. Do I:

1 - upgrade VMware to 6.5, then upgrade Nutanix to 5.5.8, or
2 - upgrade Nutanix to 5.5.8, then upgrade VMware to 6.5, or
3 - upgrade VMware to a compatible intermediate version, then upgrade Nutanix to a compatible intermediate version, then upgrade VMware to 6.5 and Nutanix to 5.5.8, or
4 - upgrade Nutanix to a compatible intermediate version, then upgrade VMware to a compatible intermediate version, then upgrade Nutanix to 5.5.8 and VMware to 6.5?
IPMI network configuration
Say you have updated your IPMI IP address or moved it to another subnet, and after you finish the update you are no longer able to log back in to the IPMI. This happens when the genesis service on the local node is not restarted, which people tend to forget: after you make the change, you must restart the services on the same node's CVM by running "genesis restart". If the restart is successful, output similar to the following is displayed:

Stopping Genesis pids [1933, 30217, 30218, 30219, 30241]
Genesis started on pids [30378, 30379, 30380, 30381, 30403]

You can change the network configuration of your IPMI using one of the following methods:

- Configuring the Remote Console IP Address (IPMI Web Interface)
- Configuring the Remote Console IP Address (Command Line)
- Configuring the Remote Console IP Address (BIOS)

For the full documentation, check out this page.
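For the command-line method, a hedged ipmitool sketch run from the host (LAN channel 1 is typical but platform-dependent; the addresses are placeholders), followed by the genesis restart described above:

```shell
# Set a static IPMI address, netmask and gateway on LAN channel 1, then verify.
ipmitool lan set 1 ipsrc static
ipmitool lan set 1 ipaddr 10.2.0.40
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 10.2.0.1
ipmitool lan print 1

# Then, on the local node's CVM:
# genesis restart
```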
VMotion for virtual Machines between 2 Nutanix Blocks
Hi all, I would like to know how I can move VMs from one Nutanix block to another using vMotion.

- Each Nutanix block is configured as a standalone Nutanix cluster.
- Both Nutanix blocks are under the same VMware datacenter and cluster, and are managed by one vCenter.

But when I run vMotion for a virtual machine from the vSphere client, I cannot choose the target Nutanix storage. Thanks