Why do we see this alert? Warning : Some storage containers have a high number of NFS files

  • 2 March 2020


On a Nutanix cluster running VMware vSphere, you may see this alert even though there is no other sign of a problem:

Warning : Some storage containers have a high number of NFS files

Impact          : High number of NFS files may cause vpxa services on esxi hosts to restart.
Cause           : Number of files for respective storage containers has increased beyond 20K. This is expected with large VDI setups.
Resolution      : Reduce the number of files if you observe vpxa instability.

Why do we see this alert? What needs to be done about it?

As noted in the relevant KB article, this alert is slated for deprecation in an upcoming release of NCC. As the Impact and Resolution information above suggests, the alert relates to a problem in vpxa. The alert was created for an issue in vSphere 5.x that caused instability in the vpxa service when the number of files in a single NFS datastore grew too high. VMware resolved the issue some time ago, so you do not need to worry about this alert on any currently supported vSphere version.

If you are running vSphere 6.x or later and are seeing this alert, you can safely disable the check. To do so:

  • Log in to Prism.
  • Go to the Health dashboard.
  • Click “Checks” at the top of the right-hand column.
  • Find and click “NFS file count check”.
  • Click “Turn Check Off” at the top of the screen.

If you are running an older version of vSphere, are seeing this alert, are seeing instability with the vpxa service (which handles ESXi host communication with vCenter), and cannot upgrade ESXi, you can address the issue by creating a new storage container, mounting it to all hosts, and then using Storage vMotion to migrate VMs to the new container until the alert no longer appears. For details on creating a storage container, see the Prism Web Console Guide. Keep an eye on free storage space when migrating VMs between containers; you may need to pause periodically and wait for back-end cleanup to restore free space on the cluster.
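If you want to watch the file counts drop as you migrate VMs, a quick shell sketch run from an ESXi host can count the files in each mounted datastore. This is an illustration only: the `/vmfs/volumes` path is the usual ESXi datastore mount point, the function name is ours, and the 20K threshold comes from the alert text above.

```shell
# Print "<datastore-name> <file-count>" for each directory under the
# given base path. On an ESXi host, pass /vmfs/volumes (an assumption;
# adjust if your mounts differ).
count_nfs_files() {
  base="$1"
  for ds in "$base"/*/; do
    # Skip if the glob did not match any directory.
    [ -d "$ds" ] || continue
    # Count regular files recursively; tr strips any padding wc adds.
    printf '%s %s\n' "$(basename "$ds")" \
      "$(find "$ds" -type f | wc -l | tr -d ' ')"
  done
}
```

For example, `count_nfs_files /vmfs/volumes` would list each datastore with its file count; any container reporting more than 20,000 files is one that trips this check.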
