How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Common Vulnerabilities and Exposures (CVE) is a list of publicly disclosed security defects seen in the IT landscape. Nutanix always looks to make sure these defects are patched in a timely manner. A new feature, available in the Nutanix Portal under the "Security" pane of the "Documentation" section, allows customers to view the current CVEs and gauge the severity level of each CVE. The table also includes the Product Release in which the patch for each CVE is included and whether that version has been released. For more information about Nutanix-related CVEs, take a look at the "Vulnerability List" section in the Support Portal and find the corresponding Nutanix product.
Below are new knowledge base articles published in the week of February 7-13, 2021.
- KB 10630 - VM preparation in Nutanix Move fails for Linux VMs if installed curl version is below 7.19
- KB 10670 - Objects - Connectivity via s3 clients might fail with error "x509: certificate signed by unknown authority" if certificates are signed by ICA
- KB 10684 - RedHat Satellite GUI does not show AHV hosts due to unsupported characters in AHV hostname
- KB 10691 - Reactive Hypervisor boot-drive break-fix fails on HPE DL Gen9 platform
- KB 10704 - Commvault - Unable to add WORM-enabled bucket in Nutanix Objects 3.1 - The content-md5 message header did not match the content received
- KB 10722 - Cannot update vstore mapping for a remote site when NearSync is in use
- KB 10732 - [Karbon] Kubernetes upgrade might fail to initiate or get stuck on the first master node before Karbon 2.2.1
- KB 10757 - Foundation Central upgrade failure in Prism Central without any Prism Element connected to it

Note: You may need to log in to the Support Portal to view some of these articles.
Nutanix recommends that you use the Prism Life Cycle Manager to perform firmware updates. However, firmware must be updated manually if:
- The component is located in a single-node cluster, OR
- The component is located in a multi-node cluster, but the hypervisor the cluster is running does not support LCM firmware updates.

- Shutting Down a Node in a Single-Node Cluster
- Shutting Down a Node in a Multinode Cluster
- Manually Updating M.2 RAID Hypervisor Boot Drive Firmware: manually update the firmware on HW RAID M.2 boot drives on -G7 or newer platforms in cases when LCM cannot be used.
- Manually Updating Data Drive Firmware: manually update the firmware for a SAS or SATA data drive, or update the hypervisor boot drive firmware on any -G6 platform M.2 drive (non-RAID).
- Manually Updating SATA DOM Firmware: update firmware for a SATA DOM.
- Manually Updating the BMC and BIOS: manually update BMC and BIOS versions.
- Manually Updating HBA Controller Firmware: update the firmware for an HBA card.
The shutdown token is used by a Nutanix cluster to prevent more than one entity from being down or offline during software upgrades or other cluster maintenance. The CVM that holds the token is the only entity allowed to be down or offline. Sometimes, for various reasons, a CVM can continue to hold the token even after an upgrade or maintenance has completed successfully. This usually does not cause any issues until another upgrade or maintenance is invoked on the cluster sometime in the future. Upgrade and maintenance pre-checks search for any unrevoked token and, if one exists, do not proceed until that token has been properly revoked. Before manually revoking the token, it is good practice to verify that there are indeed no outstanding or ongoing upgrades or maintenance activities currently occurring on the cluster. Once confirmed, manual token revocation is often accomplished by a simple restart of the Genesis service on the CVM currently holding the token.
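The pre-check logic described above can be sketched as a small decision function. This is an illustrative model only: the function name, arguments, and return values are invented for the example and are not actual Nutanix internals.

```python
def precheck_shutdown_token(token_holder, active_operations):
    """Model of a shutdown-token pre-check (illustrative, not Nutanix code).

    token_holder: CVM IP currently holding the token, or None.
    active_operations: list of in-flight upgrade/maintenance task names.
    Returns (ok_to_proceed, recommended_action).
    """
    if token_holder is None:
        # No token outstanding: the upgrade may start.
        return True, "proceed"
    if active_operations:
        # Token is legitimately held by an ongoing operation: never revoke it.
        return False, "wait"
    # Token held but nothing is running: a stale token from a previous
    # upgrade. Restarting Genesis on that CVM usually revokes it.
    return False, f"restart Genesis on {token_holder}"
```

The key design point mirrored here is the ordering: the check for ongoing activity comes before any revocation advice, matching the "verify first, then revoke" guidance in the text.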
Below are new knowledge base articles published in the week of February 14-20, 2021.
- KB 9314 - Nutanix-Auth-FAQ
- KB 9866 - NCC Health Check: ergon_checks
- KB 10310 - Alert A801101 - L2StretchDeletionFailure
- KB 10574 - Pre-Upgrade check: test_vmd_node_esxi_upgrade_versions_compatible
- KB 10657 - Discrepancy between Memory Capacity on Prism and Hyper-V
- KB 10720 - Modify snapshot expiration date and time for Xi Leap snapshots
- KB 10758 - PDF Report generation fails on Prism Central after upgrading to pc.2021.1
- KB 10762 - Prism Central reports showing "No Data" when multiple filtering rules are applied to filter specific entities
- KB 10765 - AHV upgrades from 20170830.x to 20190916.x and 20201105.x on Dell clusters fail with "Failed to run 'yum remove -y DellPTAgent dcism': 1" error message
- KB 10776 - NGT installation fails on Linux VM with Permission denied error
- KB 10787 - LCM pre-check failing with Failed while performing pre_actions err: Unable to boot cvm [xx.xx.xx.xx] into ivu.
- KB 10789 -
The iSCSI Data Services IP address is used to provide iSCSI access to the cluster storage. It is primarily used by Nutanix Volumes, but is also leveraged by other products such as Calm, Leap, Karbon, Objects, and Files. This IP address is owned by one CVM of a cluster at a time, with ownership changing among the CVMs as needed to ensure that it is always available. Note that the CVM owning the iSCSI Data Services IP address does not necessarily correlate with the CVM that currently holds Prism service leadership. To find the CVM currently acting as the iSCSI Data Services IP address owner, simply obtain the IP address output from all of the CVMs (generally by using the "allssh ifconfig" command) and verify which CVM reports as having this address. To find when an ownership change has occurred, the Stargate service logs from each CVM can be filtered for entries regarding "eth0:2". You can find more information regarding the iSCSI Data Services IP address in the Nutanix Volumes Guide along wit
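The "check every CVM's interface output for the eth0:2 sub-interface" step above can be sketched as a small parser. The sample output and CVM addresses below are made up for the example; real `ifconfig` output has more fields, but the idea is the same.

```python
import re

def find_dsip_owner(ifconfig_by_cvm):
    """Return the CVM whose ifconfig output shows an eth0:2 sub-interface.

    ifconfig_by_cvm: dict mapping CVM IP -> its ifconfig output text.
    The eth0:2 sub-interface carries the iSCSI Data Services IP.
    """
    for cvm_ip, output in ifconfig_by_cvm.items():
        # Look for a line that begins with "eth0:2".
        if re.search(r"^eth0:2\b", output, re.MULTILINE):
            return cvm_ip
    return None  # no CVM currently reports the address

# Hypothetical, heavily abbreviated "allssh ifconfig" results:
sample = {
    "10.0.0.11": "eth0      inet 10.0.0.11\n",
    "10.0.0.12": "eth0      inet 10.0.0.12\neth0:2    inet 10.0.0.50\n",
}
```

Here `find_dsip_owner(sample)` would identify `10.0.0.12` as the current owner, matching the manual procedure of eyeballing the `allssh ifconfig` output.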
LCM simplifies Nutanix IT infrastructure life cycle operations by consolidating software and firmware component upgrades into a unified control plane.

Prerequisites:
- It is recommended to have a cluster with at least three nodes to perform a normal rolling hypervisor upgrade.
- The 2-node case is not well supported and might require extra steps to enable rolling upgrades, such as installing and configuring a Witness VM.
- The 1-node cluster upgrade is also supported but requires LCM 2.2.3 or newer.
- LCM 2.0 is considered old and is not recommended for AHV upgrades; instead, use the legacy "Upgrade Software" method in "Settings".
- Starting with LCM 2.2.3, AHV upgrade is supported on both single-node and regular clusters, but only for el6-based AHV upgrades.
- Starting with LCM 2.3, AHV upgrade is fully supported: single-node and multi-node clusters, and upgrades from el6-based AHV to el7-based AHV, el7-based AHV to el7-based AHV, and UEFI el7-based AHV to UEFI el7-based AHV.
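The version gating in the notes above can be summarized as a small lookup. This is only a sketch of the rules as stated in this post (function name and return strings are invented), not an authoritative support matrix.

```python
def ahv_upgrade_support(lcm_version):
    """Map an LCM version string to the AHV upgrade support described above.

    Illustrative only; the real support matrix lives in Nutanix documentation.
    """
    v = tuple(int(part) for part in lcm_version.split("."))
    if v >= (2, 3):
        # el6->el7, el7->el7, and UEFI el7 upgrade paths, any cluster size.
        return "full"
    if v >= (2, 2, 3):
        # Single-node and regular clusters, but el6-based AHV only.
        return "el6-only"
    # Older LCM: fall back to the legacy method in Settings.
    return "use legacy Upgrade Software"
```

Tuple comparison handles the mixed-length versions here: `(2, 2, 3) < (2, 3)`, so LCM 2.2.3 lands in the "el6-only" tier while 2.3 and later report full support.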
Below are the top knowledge base articles for the month of February 2021.
- KB 7503 - NX Hardware [Memory] – G6, G7 platforms - DIMM Error handling and replacement policy
- KB 1540 - What to do when /home partition or /home/nutanix directory on a Controller VM is full
- KB 4141 - Alert - A1046 - PowerSupplyDown
- KB 4409 - LCM: (Life Cycle Manager) Troubleshooting Guide
- KB 4158 - Alert - A1104 - PhysicalDiskBad
- KB 1113 - HDD/SSD Troubleshooting
- KB 2090 - AHV host networking
- KB 4519 - NCC Health Check: check_ntp
- KB 6945 - How Upgrades Work at Nutanix
- KB 5582 - NCC Health Check: idf_db_to_db_sync_heartbeat_status_check
- KB 2473 - NCC Health Check: cvm_memory_usage_check
- KB 4639 - How to place CVM and host in maintenance mode
- KB 6153 - NCC Health Check: default_password_check and pc_default_password_check
- KB 8932 - NCC Health Check: pc_vm_resource_resize_check
- KB 1863 - NCC Health Check: sufficient_disk_space_check
- KB 3741 - NGT: Nutanix Guest Tools Troubleshooting Guide
- KB 4273 - NCC Health Check:
Below are new knowledge base articles published in the week of February 21-27, 2021.
- KB 9288 - Alert - A1102 - PhysicalDiskAdd
- KB 10533 - Alert - A200403 - Non Compliance with Host Affinity policies
- KB 10535 - [Karbon] Alerts stay in firing state in a Karbon Cluster UI
- KB 10815 - VMDK disk image uploaded to Prism shows as "Inactive" with no size
- KB 10819 - [Karbon] Karbon UI not loading after scaleout PC upgrade to pc.2021.1 - "upstream connect error or disconnect/reset before headers. reset reason: connection failure"
- KB 10830 - vCenter /storage/seat partition becoming full
- KB 10832 - Nutanix Move | Migration Plans Disappear after Move-3.7.1 Upgrade
- KB 10840 - LCM: SATA DRIVEs firmware fails with error: 'EnvironmentModule' object has no attribute '__getitem__'
- KB 10850 - Physically relocating a block within an online cluster

Note: You may need to log in to the Support Portal to view some of these articles.
A Nutanix cluster requires a minimum of three nodes, but Nutanix also offers the option of a two-node cluster. Configuring an external Witness VM in a separate failure domain provides the resiliency features of a three-node cluster.

Requirements:
- Controller VM minimum requirements: 6 vCPU and 20 GB memory.
- Replication factor (RF): RF2 for data spanned over two nodes, and RF4 for metadata on SSDs over two nodes. RF4 for metadata helps during a node failure scenario to quickly transition the healthy node to run in single-node mode with the metadata remaining disk fault tolerant. (Metadata in a two-node cluster is typically small, so the storage needed for four copies is modest.)
- Drive failure effects: one node failure plus one SSD failure (on the other node) puts the cluster in read-only mode.

Guidelines:
- Two-node clusters are supported only on a select set of hardware models. See KB 5943 for information about supported models.
- The upgrade process in a two-node cluster may take longer than the usual process.
HPE® ProLiant® Hardware Compatibility

This document specifies the hardware, software, and firmware that the Nutanix platform requires to run on HPE® ProLiant® servers.

Supported Hardware Models:
- HPE® Gen10 Servers, HPE® ProLiant® Servers: DL360 Gen10 8SFF, DL380 Gen10 12LFF, DL380 Gen10 24SFF
- HPE® Gen10 Servers, HPE® Apollo® System: Apollo® R2600 XL170r Gen10 24SFF
- HPE® Gen9 Servers, HPE® ProLiant® Servers: DL360 Gen9 8SFF, DL380 Gen9 8SFF, DL380 Gen9 12LFF, DL380 Gen9 24SFF

Note: HPE® and HPE® ProLiant® are registered trademarks or trademarks of Hewlett Packard Enterprise Company. All other brand names mentioned herein are for identification purposes only and may be the trademarks of their respective holder(s).

HPE® ProLiant® Gen10 Servers Software and Firmware Compatibility for DL3x0 Gen10 HPE® ProLia
Flash mode lets you set the storage tier preference to SSD for a virtual machine or a volume group. Without flash mode, data for a mission-critical application such as a relational database can run out of room in the SSD tier because of other workloads running on the same cluster. When this happens, the database could potentially migrate to the HDD tier. For extremely latency-sensitive workloads, this migration to the HDD tier could negatively affect read and write performance. By default, you can use up to 25% of the cluster-wide SSD tier as flash mode space for VMs or VGs. If the data size for flash-mode-enabled VMs or VGs exceeds 25% of the SSD capacity, the system may down-migrate the data. Before down migration, the flash mode feature tries to preserve the excess data on the SSD tier for a reasonable amount of time so that you can take corrective actions on the cluster and bring it back to a stable state. To reduce flash mode usage, we can disable the flash mode
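A worked example of the 25% guideline above: given the cluster-wide SSD tier size and the amount of flash-mode-pinned data, compute the budget and any excess that would be a candidate for down-migration. The numbers and function name are hypothetical.

```python
def flash_mode_budget(ssd_tier_gib, pinned_gib, limit_fraction=0.25):
    """Return (budget_gib, over_budget_gib) for flash-mode data.

    budget_gib: the default 25% of the cluster-wide SSD tier.
    over_budget_gib: how much pinned data exceeds that budget (0 if within).
    """
    budget = ssd_tier_gib * limit_fraction
    over = max(0.0, pinned_gib - budget)
    return budget, over

# Example: a 40 TiB (40960 GiB) SSD tier gives a 10 TiB flash-mode budget;
# pinning 12 TiB (12288 GiB) exceeds it by 2 TiB, risking down-migration.
budget, over = flash_mode_budget(40960, 12288)
```

If `over` is positive, the text above suggests the corrective action: reduce flash mode usage (for example, by disabling flash mode on some VMs or VGs) before the grace period for the excess data runs out.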
Below are new knowledge base articles published in the week of February 28-March 6, 2021.
- KB 10526 - Alert - A200401 - VM forcibly powered off
- KB 10856 - Cluster Maintenance Utility version via cli
- KB 10861 - Acropolis crashing with "KeyError: u'off'" error after upgrade to 5.19.x
- KB 10863 - Non-admin AD users cannot Clone a VM when added to a Custom RBAC role on PC
- KB 10878 - VMSA-2021-0002 / ESXi OpenSLP / Disable CIM Server
- KB 10882 - [Nutanix Move] 1-click-upgrade from 3.7.1 to 3.7.2 may fail when we have migration plans which were created in Move versions earlier than 3.7.0

Note: You may need to log in to the Support Portal to view some of these articles.
I am interested in visualizing east-west traffic as well as egress within my NTX environment. With Open vSwitch, it seems to be supported through the default vTap interface on the bridges created on my interfaces. My question is: has anyone successfully set up a Nutanix vTap interface to capture packet flow on an external appliance, and are there any negative side effects of such a design (overhead, congestion, latency, etc.)? I do have an upstream appliance that can handle the analysis, similar to Gigamon, CloudLens, and ExtraHop. https://docs.openvswitch.org/en/latest/topics/tracing/
Acropolis Dynamic Scheduling (ADS) proactively monitors the Nutanix cluster for compute and storage I/O contention or hotspots over a period of time. If ADS detects a problem, it creates a migration plan that eliminates hotspots in the cluster by migrating VMs from one host to another. You can monitor VM migration tasks from the Tasks dashboard of the Prism Element web console.

Advantages of ADS:
- ADS improves the initial placement of VMs depending on the VM configuration.
- Nutanix Volumes uses ADS for balancing sessions of the externally available iSCSI targets.

By default, ADS is enabled, and Nutanix recommends keeping this feature enabled. ADS monitors the following:
- VM CPU utilization: total CPU usage of each guest VM.
- Storage CPU utilization: Storage Controller (Stargate) CPU usage per VM or iSCSI target.

ADS does not monitor memory and networking usage. Lazan is the ADS service in an AHV cluster. AOS selects a Lazan manager and Lazan solver among the hosts
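The detect-then-plan flow above can be illustrated with a toy model: flag hosts whose CPU usage exceeds a threshold, then pair each hot host with the least-loaded host as a migration target. The real Lazan placement logic is far more sophisticated; host names, the threshold, and the pairing rule here are invented for the sketch.

```python
def find_hotspots(host_cpu_usage, threshold=0.85):
    """Return hosts whose sustained CPU usage exceeds the threshold."""
    return [host for host, usage in sorted(host_cpu_usage.items())
            if usage > threshold]

def plan_migrations(host_cpu_usage, threshold=0.85):
    """Pair each hot host with the coolest host as a naive migration target."""
    hot = find_hotspots(host_cpu_usage, threshold)
    coolest = min(host_cpu_usage, key=host_cpu_usage.get)
    return [(src, coolest) for src in hot if src != coolest]

# Hypothetical per-host CPU utilization fractions:
usage = {"host-a": 0.92, "host-b": 0.40, "host-c": 0.70}
```

With this sample, `host-a` is the only hotspot and the plan moves load toward `host-b`. Note the model deliberately tracks only CPU, mirroring the statement that ADS does not monitor memory or networking usage.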
Nutanix Insights is a new software-as-a-service (SaaS) offering that aims to redefine the Support experience for our customers, and significantly improve the health of their clusters, by leveraging the telemetry we receive from clusters where a customer has activated Pulse. Nutanix includes a set of features, known collectively as Insights, that provides a predictive health and support automation platform. Insights dynamically analyzes the extent to which you are following best practices in configuring your clusters for long-term reliability, availability, and performance. Insights works as follows: Pulse collects cluster data and sends it to Nutanix customer support. The Pulse data goes to the Insights engine, a SaaS-like service in the cloud that does deep analytical processing of the Pulse telemetry and identifies potential issues based on findings or patterns in the data. Insights employs analytics built on historical data and best practices to identify cluster configuration gaps t
Below are new knowledge base articles published in the week of March 7-13, 2021.
- KB 9484 - Alert - A802003 - VpcRerouteRoutingPolicyInactive
- KB 10545 - Alert - A110457 - StaleVMPresent
- KB 10898 - Objects - Unable to register s3 endpoints that are self signed | Peer certificate cannot be authenticated with given CA Certificates
- KB 10909 - Move fails with SCSI VirtIO device driver not found if RedHat includes kernel version 2.6.32-220 or prior in /boot.

Note: You may need to log in to the Support Portal to view some of these articles.
A Storage Replication Adapter (SRA) allows VMware Site Recovery Manager (SRM) to integrate with third-party storage array technology. The Nutanix SRA is one such software module that allows SRM to interact with Nutanix clusters. This allows SRM to perform "Array Based Replication" using Nutanix replication, data protection, and disaster recovery. SRM is an orchestration tool that lets us build recovery plans and provides runbook functionality. You can set up, test, and perform pre- and post-recovery steps in case of a failover, set the order in which the VMs come up, and decide to change IP addresses if needed. SRM depends on vCenter and is licensed separately. Before you start, ensure the following: SRM and vCenter versions are compatible (visit the VMware website to confirm compatibility), and AOS, SRA, and SRM versions are compatible (refer to the Nutanix SRA for SRM compatibility matrix on the Nutanix portal). You will need 2 clusters managed by 2 vCenter
I’m sure you have seen that one before. In most cases you expect it, or at least understand what caused it. In some instances you probably ignore it (we all do, no shame). But what if this happens when you log into the CVM or the host? Has cluster security been compromised? During an AOS upgrade or rescue, new SSH keys are created for each node in the cluster. When you open an SSH session, these keys are compared to those that were previously noted on the client, and since there is a mismatch, a warning is triggered. KB-2388, "Upgrade/Re-install of AOS changes the ssh key for remote host identification", explains how to clean up the keys to get rid of the warnings.
A single-node cluster is configured like a regular (three-node or more) cluster in many ways, but with some conditions. Nutanix offers the option of a single-node cluster for ROBO implementations and other situations that require a lower-cost option and can accept lowered resiliency protections.
- Single-node clusters are supported only on a selected set of hardware models. Refer to the following article for details: single-node-supported-hardwares
- Do not exceed a maximum of 1000 IOPS.
- Do not exceed a maximum of 5 guest VMs.
- To protect the guest VMs from a node failure scenario, Nutanix recommends configuring backups. These clusters are unlike single-node replication targets, which are for replication and backup purposes.
- LCM is supported for software updates, but not firmware updates.
- There is no built-in resiliency for Prism Central on a single-node cluster. Do not create a Prism Central instance (VM) in the single-node cluster.
- Async DR is supported for a 6-hour RPO only.
- Use
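The sizing limits listed above lend themselves to a quick validation sketch. This is a hedged example: the function and issue strings are invented, and the limits are taken only from the guidelines in this post, not from an official sizing tool.

```python
def check_single_node(iops, guest_vm_count, async_dr_rpo_hours=None):
    """Check a proposed single-node workload against the posted guidelines.

    Returns a list of guideline violations (empty if the workload fits).
    Limits are from this post: <=1000 IOPS, <=5 guest VMs, Async DR at a
    6-hour RPO only.
    """
    issues = []
    if iops > 1000:
        issues.append("IOPS above 1000 maximum")
    if guest_vm_count > 5:
        issues.append("more than 5 guest VMs")
    if async_dr_rpo_hours is not None and async_dr_rpo_hours != 6:
        issues.append("Async DR supported only with a 6-hour RPO")
    return issues
```

For example, a workload of 800 IOPS across 4 guest VMs with a 6-hour Async DR RPO passes cleanly, while 1500 IOPS across 6 VMs with a 1-hour RPO trips all three guidelines.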
Below are new knowledge base articles published in the week of March 14-20, 2021.
- KB 9365 - Alert - A802002 - AncDnsUnresolvable
- KB 10693 - Alert - A130334 - NGT CD-ROM not Unmounted on the VM
- KB 10694 - Alert - A130192 - Conflicting NGT policies
- KB 10746 - AHV Guest VM Boot Order Being Modified to Default by Prism After Any Config Save
- KB 10847 - LCM Pre-check: "test_ncc_checks"
- KB 10857 - UVMs with VLAN tag may disconnect from network when uplink bond is tagged with VLAN on AHV host
- KB 10874 - VM migration tasks stuck and libvirt in inconsistent state on clusters running AHV 20170830.x
- KB 10897 - Xi Frame - Enterprise profile disks growing on every reboot (not extending their partition)
- KB 10912 - Nutanix Files: Managing Files-At-Root on NFS distributed shares
- KB 10944 - Alert - A130103 - NGT Mount failed
- KB 10957 - AHV networking interfaces are renamed after AHV upgrade to 20201105.1082 or later if RDMA NICs are present

Note: You may need to log in to the Support Portal to view some of these articles.
Nutanix Era is a suite of software that automates and simplifies database management, bringing one-click simplicity and invisible operations to database provisioning and lifecycle management (LCM). Starting with Copy Data Management (CDM) as its first offering, Nutanix Era enables database admins to provision, clone, refresh, and restore their databases to any point in time. Through a rich but simple-to-use UI and CLI, they can restore to the latest application-consistent transaction. Era enables you to easily provision database environments (either production or otherwise) on your Nutanix clusters. You can also provision only the database server VM that hosts a database, so that you can later create or clone databases on that database server. Some of the components include: Database engines: custom software images that are tailor-made to enterprise needs. Database profiles: customizable database profiles for software, compute, networking, and database parameters. Database recovery
NVIDIA GPUs primarily have two modes of operation: Compute and Graphics. Compute Mode: the GPU operates within a configuration that is optimized for high-performance computing applications. Graphics Mode: the GPU is optimized for graphics processing and can subsequently be assigned into vGPU profiles for virtual machines (vGPU profiles cannot be used while in Compute mode). Various NVIDIA GPUs ship with default configurations for one of these modes and, sometimes, it is necessary to change the mode to better suit the corresponding workload of the host. On previous GPU models, it has been necessary to temporarily boot an AHV host into an NVIDIA-provided Linux ISO and invoke the "gpumodeswitch" command with options to apply this change. With newer GPU models, the command can be found natively within the AHV host filesystem after the corresponding GRID driver has been installed. You can find more information regarding this command via the "Nvidia: Unable to Assign vGPUs to guests w
Below are new knowledge base articles published in the week of March 21-27, 2021.
- KB 10516 - [Karbon] PE cluster is showing alerts for VGs used by the Kubernetes cluster(s)
- KB 10651 - NCC Health Check: metering_rest_connection_check
- KB 10768 - A number of VMs may be missing from the list when monitoring cluster using SNMP protocol
- KB 10813 - "UnicodeDecodeError" and "UnicodeEncodeError" for VM operation
- KB 10936 - Duplicate scheduled reports triggered after Daylight Savings Time (DST) change
- KB 10946 - Identifying the source IP generating TCP Reset packets in a network path
- KB 10953 - Using Nutanix Objects Self-Signed Certificate with Veritas Enterprise Vault
- KB 10954 - SMCIPMITool commands output "The node product key needs to be activated for this device" on BMC 7.10
- KB 10967 - Cloning a Secure boot enabled VM on AHV with the "Custom Script" option enabled fails with "q35 machine type does not support ide bus type" error
- KB 10976 - Cluster instability after upgrading both primary an