How It Works
Have questions about how the Nutanix Platform works? Looking to get started? Start here!
Below are new knowledge base articles published the week of December 29, 2019 - January 4, 2020.
- KB 8463 - HDD/SSD failure can cause CVM reboot
- KB 8464 - LCM - Why VMs are detected as non-migratable during update operation?
- KB 8778 - AHV | BitLocker cannot be enabled on Windows Server 2019 in VM running on AHV
- KB 8779 - Check the Foundation version installed via CLI
Note: You may need to log in to the Support Portal to view some of these articles.
We are thinking of adding two compute-only nodes to our cluster for Oracle. What are the hardware requirements for running CO nodes? Does the cluster the CO nodes are joined to have to be all-flash, or do you have to "pin" the VMs running on the CO nodes to all-flash in the HC storage? We currently have 4 HC nodes running a mix of SSD and HDD.
Hi, I have a Hyper-V cluster managed by SCVMM with Kerberos enabled. I have already:
- Joined the cluster and hosts to the domain
- Created the MSFC and joined it to the domain
- Added the hosts and the Nutanix container share to SCVMM
- Enabled Kerberos and configured delegation
But, from each host, I currently cannot:
- Access the Nutanix container via its SMB path (\\VIP\container). When I type the SMB path into File Explorer, it prompts me to log in. I tried the AD account used to enable Kerberos on Prism as well as the AD admin account, but neither worked.
- Create a VM directly on the host (I can create VMs through SCVMM instead). A "Failed to create the virtual hard disk" error window appears every time I try.
Is this the normal behavior of the system? As I am quite new to Hyper-V, I am getting lost trying to find a solution.
Confused about crash-consistent snapshots versus application-consistent snapshots?
Crash-Consistent Snapshot: VM snapshots are crash-consistent by default, which means that the vDisks captured are consistent with a single point in time. The snapshot represents the on-disk data as if the VM crashed or the power cord was pulled from the server; it doesn't include anything that was in memory when the snapshot was taken. Today, most applications can recover well from crash-consistent snapshots.
Application-Consistent Snapshot: Application-consistent snapshots capture the same data as crash-consistent snapshots, with the addition of all data in memory and all transactions in process. Because of their extra content, application-consistent snapshots are the most involved and take the longest to perform. While most organizations find crash-consistent snapshots to be sufficient, Nutanix also supports application-consistent snapshots. The Nutanix application-consistent snapshot uses Nutanix Volume Shadow Copy Service (VSS).
Dear Sir/Madam, I am working on a further Nutanix project and I have installed 8 x NX8155 servers. Each server has 6 x 10G Twinax cables and 2 x CAT6 copper connections. I am again trying to find a suitable cable manager, given the amount of physical cabling for each device. Can someone please advise whether there are any cable managers that Nutanix is aware of, or from a third party, as I am struggling to locate any. Thanks in advance. Simon.
Hello all, hope you're all doing great. Could you kindly guide me on where to start with Nutanix? I don't know anything yet and am willing to learn everything related to it. I am a VMware guy who wants to understand the Nutanix platform, and I hope I am writing the right post in the right place. Thanks in advance, cheers.
In some scenarios, we might need to find the disks associated with a virtual machine hosted on a Nutanix AHV cluster. Nutanix offers a distributed storage fabric, which forms a large storage pool across "N" number of nodes. We then create containers for different types of workloads, or we can simply use a single container. Virtual machines are created via Prism management, and all vDisks associated with the VMs are hosted on the distributed storage pool. On a Nutanix AOS cluster, the Acropolis service runs across the cluster to manage virtual machine operations and configuration. Because both data and metadata are distributed across the cluster, we can use the Acropolis service on almost any CVM to retrieve information about virtual machines running on any node. To interact with the Acropolis service, Nutanix offers "acli", a powerful set of commands for virtual machine operations across your Nutanix cluster. acli offers tab completion once inside the acli shell, or commands can be executed directly from the CVM shell.
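If you script against acli from a CVM, a small parser can pull the vDisk paths out of the command output. Below is a minimal sketch assuming the output of `acli vm.get <vm> include_vmdisk_paths=true` contains lines of the form `vmdisk_nfs_path: "..."`; the sample fragment is illustrative, not real cluster output, and the exact field layout should be confirmed on your own cluster:

```python
def extract_vmdisk_paths(acli_vm_get_output):
    """Collect vmdisk NFS paths from acli vm.get text output."""
    paths = []
    for line in acli_vm_get_output.splitlines():
        line = line.strip()
        if line.startswith("vmdisk_nfs_path:"):
            # Take everything after the first colon and drop the quotes.
            paths.append(line.split(":", 1)[1].strip().strip('"'))
    return paths

# Hypothetical fragment of 'acli vm.get myvm include_vmdisk_paths=true' output:
sample = '''
disk_list {
  vmdisk_nfs_path: "/ctr01/.acropolis/vmdisk/0005a1b2"
}
'''
print(extract_vmdisk_paths(sample))
```

You could feed this the captured stdout of an `ssh nutanix@<cvm> "acli vm.get <vm> include_vmdisk_paths=true"` call to inventory a VM's disks programmatically.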
Hi, I want to share NIACtool v2.5 Beta. It runs on Windows operating systems and can produce a complete infrastructure report detailing the virtual machines created. It supports ESXi and AHV hypervisors, and the result is a fully editable Excel document. https://github.com/dlira2
Below are the top knowledge base articles for the month of December 2019.
- KB 4141 - Alert - A1046 - PowerSupplyDown
- KB 4116 - Alert - A1187, A1188 - ECCErrorsLast1Day, ECCErrorsLast10Days
- KB 7503 - G6, G7 platforms with BIOS 41.002 - DIMM Error handling and replacement policy
- KB 1540 - What to do when /home partition or /home/nutanix directory is full
- KB 4541 - Alert - A101055 - MetadataDiskMountedCheck
- KB 4494 - NCC Health Check: metadata_mounted_check
- KB 1113 - HDD/SSD Troubleshooting
- KB 4158 - Alert - A1104 - PhysicalDiskBad
- KB 4409 - LCM (LifeCycle Manager) Troubleshooting Guide
- KB 2090 - AHV | Host and Guest Networking
- KB 1888 - NCC Health Check: storage_container_mount_check
- KB 4188 - Alert - A1050, A1008 - IPMIError
- KB 4519 - NCC Health Check: check_ntp
- KB 1507 - Alert IPMI IP address on Controller VM was updated to ... without following the Nutanix IP Reconfiguration procedure, can be misleading
- KB 3357 - NCC Health Check: ipmi_sel_cecc_check
- KB 8514 - NCC Health Check: fs_inco
Nutanix AOS supports multiple hypervisors and a variety of hardware components, which Nutanix qualifies to ensure optimal performance and minimal disruption. Nutanix AOS with ESXi supports Intel's 10Gb (ixgbe) Ethernet network adapters, and it is important that all network-related components work optimally to ensure reliable cluster operations. If you are running Nutanix AOS with ESXi 5.5 or 6.x and have the Intel Ethernet Controllers 82599 or X540 installed in your host systems, you might receive an NCC alert pointing to an upgrade required for the ixgbe NICs, or to older firmware being detected. This check exists to ensure optimal networking performance among the hosts and to avoid errors due to NIC driver issues. You can upgrade the firmware for these Intel NICs (ixgbe) by following the Nutanix KB "Updating the ixgbe Driver Version to the 4.5.1 Release", which outlines all the pre- and post-steps required for a smooth upgrade.
Many customers buy #hci technology from the usual suspects (#nutanix, #vmware, #dell, etc.) without any consideration of what happens when things break. Support becomes REALLY important when your systems are not operating as expected. Thankfully, they don't have to worry if they bought Nutanix. Hear what #nutanix customers say about our award-winning #nps90 #support.
Below are new knowledge base articles published the week of December 22-28, 2019.
- KB 8755 - CVM degraded on Dell XC 14G hardware with BIOS older than 2.2.11 due to undetected faulty DIMM
- KB 8762 - Prism home page shows "loading" when browser has adblock extension installed
- KB 8766 - pfSense VMs remain master after failover on AHV hosts
- KB 8773 - ESXi does not display CDP information on Intel NICs
Note: You may need to log in to the Support Portal to view some of these articles.
I have AHV clusters at both DC and DR, with a number of VMs protected by the Protection Domain feature. These VMs are replicated from DC to DR on a schedule. Now, as part of a DR drill, I need to run these production VMs from DR for a period of time and then make them functional from DC again once the drill is over. Since Protection Domain replication is unidirectional (DC to DR), I am concerned about how the delta changes made to the VMs at DR during the drill will be replicated back to DC. Any recommendations?
Hello. My English is bad, so I am using Google Translate; please bear with any semantic distortion. The problem is that we have deployed an electronic document management system (DV) on our Nutanix cluster for testing. To test the performance of DV, we began filling it with a large number of copies of several files:
- file1.pdf: 1,699,713 copies, 314 KB (321,942 bytes)
- file2.png: 998,001 copies, 203 KB (208,689 bytes)
- file3.pdf: 1,138,988 copies, 389 KB (398,354 bytes)
- file4.pdf: 1,007,002 copies, 504 KB (516,306 bytes)
- file5.tiff: 2,271,889 copies, 571 KB (584,872 bytes)
The mechanism of operation is as follows: a script records each file through the DV database; the database creates a record and writes the file to an SMB shared folder on another server. The server with the SMB shared folder is also implemented on Nutanix as a virtual machine. In total we uploaded 2.6 TiB with the specified set of files, and deduplication on the disk saved only 12.8 GiB (roughly a 1:1 ratio). In my opinion, this is th
Below are new knowledge base articles published the week of December 15-21, 2019.
- KB 8204 - Alert - A1061 - vDisk Block Map Usage High Critical
- KB 8284 - Alert - A130151 - Two node cluster state change to
- KB 8640 - Prism one click upgrade: Preupgrade/Upgrade options not available after manually uploading metadata json and upgrade bundle
- KB 8743 - Alerts relating to IPMI sensors report that the component cannot be monitored or may be permanently damaged
- KB 8747 - Anonymous IPMI user
Note: You may need to log in to the Support Portal to view some of these articles.
In scripts we use for some ESXi hypervisor configuration, we utilize "allssh". The problem I'm running into, though, is that I must run each allssh command individually rather than using a script to run multiple allssh commands: the first line kicks off, and before it has a chance to complete, the second kicks off. For example, here's where we configure DNS on the hypervisor through a CVM:
allssh ssh email@example.com esxcli network ip dns server add --server=10.1.1.1
allssh ssh firstname.lastname@example.org esxcli network ip dns server remove --server=18.104.22.168
allssh ssh email@example.com esxcli network ip dns server add --server=10.1.1.2
The first kicks off, then the second starts before the first completes. The script then gets confused and stops while still connected to one of the ESXi hosts. Does anyone know of a way to initiate subsequent allssh commands only after the prior one has completed? Although not very clean, should I put a sleep command between each?
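One possible workaround, if you are willing to drive the sequence from Python instead of a flat shell script: `subprocess.run` does not return until the child process exits, so each command would finish before the next starts, and `check=True` aborts the run on the first failure. This is a sketch only; the `echo` commands below are stand-ins for your real `allssh ssh ... esxcli ...` lines:

```python
import subprocess

def run_sequentially(commands):
    """Run each shell command and wait for it to exit before starting the next."""
    outputs = []
    for cmd in commands:
        # subprocess.run blocks until the command completes;
        # check=True raises CalledProcessError on a non-zero exit code,
        # stopping the sequence instead of letting later steps race ahead.
        result = subprocess.run(cmd, shell=True, check=True,
                                capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs

# 'echo' stands in for the real allssh/esxcli commands in this sketch:
print(run_sequentially([
    "echo dns-add-10.1.1.1",
    "echo dns-add-10.1.1.2",
]))
```

If you stay in plain shell instead, chaining the commands with `&&` gives similar stop-on-failure ordering within a single script line.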
Hi, I can use the command Get-NTNXAllFileServerShares to get a list of all the shares on a file server, but how do I use Get-NTNXFsShareStat to get the usage of a share? I assume I need to use New-NTNXObject to build a list of metrics, but I don't know what metrics are available.
Data is everything in the modern IT world, and we want as much storage capacity as possible in our infrastructure. Let's say you have a new Nutanix cluster and want to know the recommended maximum storage utilisation of the cluster. NOTE: We should not try to utilise the cluster to its peak storage capacity, because we need sufficient space available to rebuild data in case a node or a disk fails. Whenever a disk fails in a Nutanix cluster, the extent groups on that disk need to be copied to other disks to maintain fault tolerance; the same goes for a node failure. So how can we calculate the maximum storage utilisation of our cluster in different scenarios, such as RF-2 or RF-3? Please go through KB-1557 to understand the formula for calculating the maximum recommended usage for a cluster.
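As a rough illustration of the rebuild-headroom idea (a simplified sketch, not the exact KB-1557 formula, which you should still consult): reserve at least the capacity of the largest node(s) the cluster must be able to lose. RF-2 tolerates one node failure and RF-3 tolerates two:

```python
def max_safe_usage(node_capacities, node_failures_to_tolerate=1):
    """Total capacity minus the largest node(s) the cluster must survive losing.

    Simplified sketch of the rebuild-headroom principle; see KB-1557 for
    the formula Nutanix actually recommends.
    """
    total = sum(node_capacities)
    # Reserve the biggest nodes first: losing them costs the most rebuild space.
    largest = sorted(node_capacities, reverse=True)[:node_failures_to_tolerate]
    return total - sum(largest)

# Four homogeneous 20 TiB nodes:
print(max_safe_usage([20, 20, 20, 20]))                                # RF-2 -> 60
print(max_safe_usage([20, 20, 20, 20], node_failures_to_tolerate=2))   # RF-3 -> 40
```

Note how a heterogeneous cluster is penalised by its largest node: with one 40 TiB node and three 20 TiB nodes, the safe usage under RF-2 is 60 TiB even though the raw total is 100 TiB.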
Can someone help me? I was doing an iDRAC update (Hyper-V hypervisor) and it failed. Now the node is down and the services will not come up. When I try to restart it from the console, I get this error:
2019-12-11 14:04:41 INFO zookeeper_session.py:131 cvm_shutdown is attempting to connect to Zookeeper
2019-12-11 14:04:41 WARNING lcm_genesis.py:219 Failed to reach a [localhost] where LCM [LcmFramework.is_lcm_operation_in_progress] is up. Retrying...
2019-12-11 14:04:46 WARNING lcm_genesis.py:219 Failed to reach a [localhost] where LCM [LcmFramework.is_lcm_operation_in_progress] is up. Retrying...
2019-12-11 14:04:51 WARNING lcm_genesis.py:219 Failed to reach a [localhost] where LCM [LcmFramework.is_lcm_operation_in_progress] is up. Retrying...
2019-12-11 14:04:56 WARNING lcm_genesis.py:219 Failed to reach a [localhost] where LCM [LcmFramework.is_lcm_operation_in_progress] is up. Retrying...
2019-12-11 14:05:01 WARNING lcm_genesis.py:219 Failed to reach a [localhost] where LCM [Lcm
We are on 5.10. I want to use the APIs to remove disks from a VM and then add new disks to it, using images as the sources. Which API should I use? v3 has "Update a existing VM" (sic), but I am unclear about how to (a) remove scsi:1 and scsi:2 and (b) add new disks using images. The v2 API has "Detach disks" functionality; if I detach a disk, does it get deleted? If not, how do I delete it after detaching? "Attach disks" is also available in v2, but I don't see a way to use an image as the source (which I can do using aCLI). Thanks in advance for your responses.
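For the "attach from image" case, building the request body in a helper makes the shape explicit. The field names below are assumptions drawn from the general style of the v2 VM disk APIs, not a verified request body; check them against the REST API Explorer on your own cluster before POSTing anything:

```python
def build_attach_from_image_payload(image_vmdisk_uuid, device_index, bus="scsi"):
    """Sketch of a v2-style 'attach disk' body that clones from an image's vmdisk.

    All field names here are assumptions to illustrate the structure;
    verify against the v2 API Explorer on your cluster.
    """
    return {
        "vm_disks": [{
            "disk_address": {"device_bus": bus, "device_index": device_index},
            # Cloning from the image's backing vmdisk, analogous to what
            # 'acli vm.disk_create <vm> clone_from_image=<image>' does.
            "vm_disk_clone": {
                "disk_address": {"vmdisk_uuid": image_vmdisk_uuid},
            },
        }]
    }

print(build_attach_from_image_payload("0005a1b2-hypothetical-uuid", 1))
```

Actually sending this body to the cluster (endpoint URL, authentication, TLS settings) is a separate, cluster-specific step left out of the sketch.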
Below are new knowledge base articles published the week of December 8-14, 2019.
- KB 8306 - Pre-Upgrade Check: test_if_any_upgrade_is_running
- KB 8461 - LCM Pre-check - test_oneclick_hypervisor_intent
- KB 8545 - NSX-T Support on Nutanix Infrastructure
- KB 8608 - Finding the serial ID of a bad HDD or SSD
- KB 8660 - [DIAL] Conversion stuck: Migrating UVMs
- KB 8670 - Prism Central: After upgrading to Prism Central 5.10.6 on Hyper-V, "Illegal instruction (core dumped)" message results when running NCC
- KB 8673 - AHV | VM power on may fail with NoHostResources error when initiated from Prism UI or acli
- KB 8674 - 3rd party storage might cause issues on a Nutanix system
- KB 8675 - Cannot plug out the Phoenix (or other) ISO from the IPMI
- KB 8690 - Alert - A160061 - FileServerShareAlmostFull
- KB 8694 - Alert - A400111 | EpsilonVersionMismatch
- KB 8701 - Dell XC-Hyper-V LCM Inventory won't recognize the installed PTAgent & iSM versions
- KB 8702 - LCM Operation Failed. Reason: Failed to validate update request.
If your hypervisor is ESXi, you know how vCenter manages all the VMs in the VMware datacenter. You can use either vCenter for VM management, or Prism to do most of the same; this has been possible since AOS 5.0 was released. Prism covers most core VM management functions, such as creating, cloning, updating, and deleting VMs, attaching and deleting disks and NICs, and power operations, alongside console access and guest tool management. For most of the above, however, you will need to register Prism with vCenter. There are rules, guidelines, requirements, and limitations that you need to be aware of. You can find out about these, and how to register or unregister Prism Element with vCenter, by reviewing: https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v56:wc-management-multi-hypervisor-prism-c.html One thing to note is that network traffic between Prism and vCenter will be much higher than usual when Prism is not registered with vCenter. Ask any questions below.