How It Works
Have questions about how the Nutanix platform works? Looking to get started? Start here!
Below are new knowledge base articles published in the week of October 25-31, 2020.
- KB 10197 - Alert ID - A110260 - Remote replication of the protection domain snapshot is lagging
- KB 10201 - Network Validation Failing During Network Segmentation Setup
- KB 10211 - Shutdown token cannot be acquired on single-node Nutanix clusters
- KB 10217 - AHV hosts in cluster running AOS 5.18 or newer may become non-schedulable if bridge configuration across cluster nodes is not consistent
- KB 10218 - Imaging new nodes via Foundation may time out on nodes with many NICs

Note: You may need to log in to the Support Portal to view some of these articles.
A VStore is a separate mount point created with its own NFS namespace in a storage container.

VStore Protection Domain
The namespace of a VStore is mapped to a protection domain when you protect the VStore. Such a protection domain is termed a VStore protection domain. A VStore protection domain allows you to protect the VStore; you can also unprotect a protected VStore. Files in a protected VStore are replicated to a remote site at a defined frequency, and you can recover these files from the remote site in the event of a disaster. A NearSync replication schedule of as little as 20 seconds can be configured to protect a VStore.

VStore Mapping for Data Protection
Nutanix recommends mapping the VStore container in the primary site to a container in the remote site. If a container holding a protected VStore in the primary site is not mapped to a specific container in the remote site, the container in the primary site is automatically mapped to the default (Self-Service) container.
Below are new knowledge base articles published in the week of July 4-10, 2021.
- KB 10171 - NCC Health Check: 1gbe_check
- KB 11000 - NCC Health Check: async_and_paused_vms_in_recovery_plan_check
- KB 11022 - Alert - A160134 - File Server CVM Port Unreachable
- KB 11072 - NCC Health Check: static_portchannel_check
- KB 11342 - NCC Health Check: dimm_sel_check
- KB 11548 - Alert - A160080 - FileServerGenericAlert
- KB 11586 - LCM API Guide
- KB 11613 - ESXi AMD Secure Encryption Virtualization (SEV) on AOS
- KB 11642 - Deploying Move agent or adding Hyper-V cluster to Move returns error "Invalid credentials or WinRM not configured"
- KB 11654 - Patroni Upgrade
- KB 11657 - A database server that is patched outside Era appears as a candidate for the same patch update
- KB 11664 - ESXi takes a long time to start up due to a HBA error
- KB 11671 - Xi Frame - FGA Scripting

Note: You may need to log in to the Support Portal to view some of these articles.
Data avoidance technologies typically contribute the most to data efficiency because they prevent the creation of unnecessary data, which also minimizes the need for more resource-demanding data reduction technologies. With fewer back-end operations, more resources are available for front-end (user-driven) operations and applications. Because Nutanix enables its built-in data avoidance technologies automatically, there is no need for manual configuration or fine-tuning.

Thin Provisioning
Thin provisioning is a simple and broadly adopted technology for increasing data capacity utilization by overcommitting resources. Nutanix enables this feature in all containers by default. In deployments using the VMware ESXi hypervisor, containers are presented to hosts as natively thin-provisioned NFS datastores. Although it is a widely accepted method for increasing capacity utilization, thin provisioning has traditionally been associated with reduced storage performance. On Nutanix, however, thin provisioning carries no such performance penalty.
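As a toy illustration of the overcommitment idea described above (all numbers are made up for this sketch and are not from Nutanix sizing guidance), thin provisioning lets the sum of provisioned virtual disk sizes exceed physical capacity because only data that has actually been written consumes space:

```python
def utilization(provisioned_gib, written_gib, physical_gib):
    """Report the overcommit ratio and actual usage for a
    thin-provisioned storage pool (illustrative model only)."""
    overcommit = sum(provisioned_gib) / physical_gib
    used = sum(written_gib)
    return overcommit, used

# Hypothetical example: three VMs each provisioned a 2 TiB disk on a
# 4 TiB container, but only a fraction has actually been written.
ratio, used = utilization([2048, 2048, 2048], [300, 120, 80], 4096)
print(f"overcommit ratio: {ratio:.1f}x, physical used: {used} GiB")
```

The point of the sketch is that the 1.5x overcommit is safe as long as written data (500 GiB here) stays below physical capacity.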
It's simple to tell a Xi Leap availability zone apart from an on-prem Leap one. From the CLI of the PC, run the command "nuclei availability_zone.list" to list the availability zones connected to the local AZ. For Xi Leap, the availability zone name is shown as a region name such as US-WEST-1B:

nutanix@cvm$ nuclei availability_zone.list
Total Entities : 2
Length : 2
Offset : 0
Entities :
Name        UUID                                  State
Local AZ    e007dd9a-1910-4a82-a4a5-9a9c7110d0f5  COMPLETE
US-WEST-1B  cce71719-d160-47b0-874a-171a61f8e6ee  COMPLETE

For an on-prem Leap AZ, the name starts with something like PC_xxx.xx.xx.xx. Xi Leap can also be found in PC by clicking the gear icon --> Xi Cloud Services. For details, please refer to the Leap administration guide.
We plan to add, for example, 10 new HPE ProLiant DX380 12LFF servers with the latest AOS (e.g. 5.17.1) to the existing cluster, then remove 10 existing nodes, reimage them, and add them back. Can 5.10.5 coexist with 5.17.1 in the same cluster? Thanks
Leap protects your guest VMs and orchestrates their disaster recovery (DR) to other Nutanix clusters when events causing service disruption occur at the primary availability zone (site). For protection of your guest VMs, protection policies with Asynchronous, NearSync, or Synchronous replication schedules generate and replicate recovery points to other on-prem availability zones (sites). Recovery plans orchestrate DR from the replicated recovery points to other Nutanix clusters at the same or different on-prem sites.

Protection policies create a recovery point (and set its expiry time) in every iteration of the specified time period (RPO). For example, the policy creates a recovery point every hour for an RPO schedule of 1 hour. The recovery point expires at its designated expiry time based on the retention policy. If there is a prolonged outage at a site, the Nutanix cluster retains the last recovery point to ensure you do not lose all the recovery points. For NearSync replication (li
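The RPO-and-expiry behavior described above can be sketched with a small model. This is assumed logic for illustration only, not Nutanix code: it generates recovery points on a fixed RPO interval and assigns each one an expiry derived from a retention window.

```python
from datetime import datetime, timedelta

def recovery_points(start, rpo_hours, retention_hours, count):
    """Generate (creation, expiry) pairs for a simple RPO schedule.

    Illustrative model: a policy takes one recovery point every
    `rpo_hours` and retains each point for `retention_hours`.
    """
    points = []
    for i in range(count):
        created = start + timedelta(hours=i * rpo_hours)
        expires = created + timedelta(hours=retention_hours)
        points.append((created, expires))
    return points

# Hourly RPO with a hypothetical 6-hour retention window.
pts = recovery_points(datetime(2021, 7, 1, 0, 0), rpo_hours=1,
                      retention_hours=6, count=3)
for created, expires in pts:
    print(created.isoformat(), "->", expires.isoformat())
```

A real policy also keeps the last recovery point past expiry during a prolonged outage, which this toy model deliberately omits.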
Hi

I'm hoping someone has a similar setup. We have two sites connected by fast links, with one Nutanix cluster at each site (16 nodes in each cluster). We use Metro Availability to replicate between sites, and we have one stretched ESXi cluster hosting mainly Windows servers.

We have a number of VM guest failover SQL clusters (2 nodes in each cluster) which require shared storage and therefore require the use of Nutanix volume groups. Each VM is connected to the volume group via iSCSI.

We can't use Metro replication to replicate these VMs and their associated volume groups to the other site, because replicating volume groups with Metro Availability is not supported. So the only other option is to use Async replication. Here are my questions:

In the manual, the conditions for Async say: "Do not have VMs with the same name on the primary and the secondary clusters. Otherwise, it may affect the recovery procedures"

Q: If we failover the async protection domain containing the cluster node VM
While trying to get the list of VMs using the REST API v2 against a Hyper-V configuration, the API returns the following error message: "Exception while retrieving entities : Hypervisor hyperv not supported" https://www.nutanix.dev/reference/prism_element/v2/api/vms/get-vms-getvms

There is no mention of API v2 not supporting this. Is it meant to be supported? If not, how can we get the list of VMs? Do we need to revert to the v1 API? That one does return the VM list, but looking at the v1/v2 documentation the recommendation is to migrate to v2, so I would just like some clarification.

Also, if we need to go back to v1 for Hyper-V, will any of the v2 endpoints that require a vm_uuid work for a Hyper-V VM, given that those UUIDs were returned by the v1 API? For example:
https://www.nutanix.dev/reference/prism_element/v2/api/flr/get-flr-vm-uuid-attached-disks-getattachedflrsnapshotofvm
https://www.nutanix.dev/reference/prism_element/v2/api/flr/get-flr-vm-uuid-snapshots-getflrsnapshotsof
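Until the v2 behavior on Hyper-V is clarified, one fallback is the v1 VM-list endpoint the poster mentions. A minimal offline sketch follows; the cluster address, the response payload, and its field names are made-up assumptions for illustration, and the standard Prism port 9440 plus the v1 path /PrismGateway/services/rest/v1/vms/ are assumed from the v1 docs:

```python
import json
from urllib.parse import urljoin

def v1_vms_url(prism_host):
    """Build the Prism Element v1 VM-list URL for a given host
    (host value here is hypothetical)."""
    return urljoin(f"https://{prism_host}:9440",
                   "/PrismGateway/services/rest/v1/vms/")

def vm_uuids(payload):
    """Pull VM identifiers out of a v1-style response body; the
    'entities'/'uuid' shape is an assumption for this sketch."""
    return [vm["uuid"] for vm in payload.get("entities", [])]

# Mocked response body, not output from a live cluster:
sample = json.loads('{"entities": [{"uuid": "abc-123", "vmName": "sql-node-1"}]}')
print(v1_vms_url("10.0.0.10"))
print(vm_uuids(sample))
```

In a real script you would issue an authenticated GET against that URL and feed the decoded JSON into a parser like vm_uuids; check the v1 response schema for the actual identifier field before relying on it.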
Files supports AOS software encryption and in-flight message encryption for SMB3 shares. You can apply AOS software encryption to Files only by activating it through Prism. Refer to the Files release notes to ensure that you are running a compatible version of AOS.

SMB3 Message Encryption: After message encryption is enabled, the file server encrypts messages on the server side and clients decrypt them on the client side (only on new connections for the share). Clients that do not support encryption (Linux, Mac, Windows 7) cannot access a share with encryption enabled.

Authentication: Active Directory with Kerberos provides secure authentication. Files supports the following Kerberos encryption types:
- Advanced Encryption Standard 128 with Keyed-Hash Message Authentication Code and Cryptographic Hash Function SHA1
- Advanced Encryption Standard 256 with Keyed-Hash Message Authentication Code and Cryptographic Hash Function SHA1 (AES-256-HMAC-SHA1)
- Rivest Cipher
I am interested in visualizing east-west traffic as well as egress within my NTX environment. With Open vSwitch, this seems to be supported through the default vTap interface on the bridges created on my interfaces. My question is: has anyone successfully set up a Nutanix vTap interface to capture packet flow on an external appliance, and are there any negative side effects of such a design (overhead, congestion, latency, etc.)? I do have an upstream appliance that can handle the analysis, similar to Gigamon, CloudLens, and ExtraHop. https://docs.openvswitch.org/en/latest/topics/tracing/
Nutanix Clusters Console receives the results of the status checks performed by AWS to determine the status of the instances. Status checks on AWS are performed every minute, returning a pass or fail status. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired. There are two types of status checks: system status checks and instance status checks. System status checks monitor the AWS systems on which the instance runs. Instance status checks monitor the software and network configuration of the individual instance.

If the Nutanix Orchestrator detects that AWS has marked the system status or instance status of an instance as impaired, the following WARNING message is shown in the Notification Center of the Nutanix Clusters Console.

More information: http://portal.nutanix.com/9704

What does Nutanix Clusters on AWS mean? Nutanix Clusters provides a single platform that can span private and public clouds but operates as a single cloud.
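The pass/fail logic described above can be sketched as follows; the function and field names are assumptions for illustration, not the actual AWS or Nutanix Clusters API shapes:

```python
def overall_status(system_check_ok, instance_check_ok):
    """AWS marks an instance OK only when both the system status
    check and the instance status check pass; any failure makes
    the overall status impaired."""
    return "ok" if (system_check_ok and instance_check_ok) else "impaired"

def needs_warning(status):
    """Model of the console behavior: an impaired instance raises
    a WARNING in the Notification Center."""
    return status == "impaired"

print(overall_status(True, True))   # both checks pass
print(overall_status(True, False))  # instance status check fails
```

This mirrors the two-level check: the system check covers the AWS host underneath, while the instance check covers the instance's own software and network configuration.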
Hello,

Seeking advice: I have a 3-node cluster with a problem on node B, where a DIMM is undetected. I'm planning to reseat the DIMM. If I manually move the VMs to nodes A and C and then shut down node B without entering maintenance mode, will the VMs move back to node B afterwards? Can you please also share a link to a document describing the procedure?
Below are new knowledge base articles published in the week of August 22-28, 2021.
- KB 10854 - ESXi upgrade pre-check - test_rf1_vms_are_shutdown
- KB 11835 - 1-click Hypervisor upgrade from ESXi 6.7 U3 or later to 7.0 U2a is not installing the correct i40en driver
- KB 11853 - OpenLDAP authentication doesn't work on BMC
- KB 11941 - Storage-only nodes may have larger amount of memory and CPU assigned on CVMs
- KB 11945 - LCM may show an upgrade to a version which is older than current version
- KB 11946 - LCM: Failed to run LCM operation. error: KeyError('https_url',)
- KB 11947 - Nutanix Files: Cluster version 3.8.1 and 184.108.40.206 may experience severe performance degradation
- KB 11950 - How to find shell idle timeout in CVM and AHV
- KB 11955 - Failed to perform Inventory with LCM for Dell XC hardware due to Dell PTAgent not responding
- KB 11958 - 1-click Upgrades to ESXi 7.x from 6.5.x and 6.7.0 by using ESXCLI might fail due to a space limitation
- KB 11965 - Frame - Temporary insufficient AWS capacity
Hi

I'm hoping someone has a similar setup. I originally posted this question a month or so ago, but the answer I received isn't correct. The post has been marked as solved, so I'm unable to reply, hence why I'm opening a new one. I think there was a misunderstanding about my setup in the previous post, so I've tried to make it clearer here. This is tricky to explain without diagrams, so I appreciate it may be difficult to provide a valid answer and I may not get a response.

Our Setup
We have two sites connected by fast links, with one Nutanix cluster at each site (16 nodes in each Nutanix cluster). These two Nutanix clusters make up one stretched ESXi cluster hosting mainly Windows servers. We then use Metro Availability to replicate between these sites/Nutanix clusters.

So in short: one ESXi cluster that's stretched across two sites, and we use Nutanix Metro to synchronously replicate. On this environment we host a number of VM guest failover SQL clusters (2 nodes in each cluste