Nutanix Acropolis File Services (AFS): Performance and Scalability by Design

by Community Manager, 01-24-2017 11:07 AM (edited 01-24-2017 11:11 AM)



This post was authored by Dan Chilton, Sr. Solutions Performance Engineer at Nutanix


Nutanix recently released version 5.0 of the Acropolis Operating System, and with that release came an exciting new product feature: Acropolis File Services, or AFS for short. This new feature deploys like an app on the Nutanix platform, providing a robust, enterprise-capable SMB file sharing solution on a Nutanix cluster. By delivering this capability within the platform, Nutanix removes the need to maintain a separate stand-alone network-attached storage (NAS) solution for Windows/SMB file services.


AFS is a clustered, distributed file server that runs as a set of file server VMs on the Nutanix platform. AFS has performance scalability built into the flexible design. As storage and performance needs grow, AFS can easily scale out or up, while also allowing dynamic load balancing. For more of a deep dive on AFS features and architecture, read our tech note.


The following workloads are well suited to AFS:

  • Windows user home directories
  • Virtual desktop user remote profiles
  • Departmental shares
  • Application shares

In this blog, we discuss file server performance requirements and testing methods, as well as how AFS meets and exceeds these requirements.


One of the first things that customers want to know when considering a file server solution is how well it performs. As someone who focuses on solution performance in my day job, I know how important this is. However, to answer the question, we first need to understand what the customer wants to do with the solution. The customer interactions with the file server constitute the workload.


Here are some example workloads:

  • Copying one large file at a time to the file server.
  • Browsing a list of files in a directory and then reading the desired file.
  • Downloading a file from the file server to the local client desktop.
  • Editing a document with Microsoft Word and then saving the document.
  • Multiple users logging on to virtual desktops that have their remote profiles stored on a file server.

We used tools throughout the development and release cycle to test the performance of these workloads and others, making sure AFS would perform well for our target use cases. To test workload #1, we used robocopy and copy/paste; for #2-#4, the Microsoft File Server Capacity Tool (FSCT); and for #5, LoginVSI.


Single File Copy Test = Poor Test of Performance


File copy is easy to test (just copy and paste a large file), but it is a poor indicator of file system performance because of its low number of outstanding I/Os and the single-threaded nature of the workload. A file copy test simply doesn't show what the file server can do.


Typical real-world situations have many clients or applications concurrently requesting data and driving the workload. Don't just take my word for it; check out this discussion from Microsoft OneDrive team member Jose Barreto.
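The difference is easy to see in a small sketch. This hypothetical Python driver (file names, sizes, and thread count are illustrative, not from any Nutanix test) contrasts a single-threaded, copy-style access pattern with a multi-client pattern that keeps several requests in flight at once:

```python
import concurrent.futures
import os
import tempfile

def read_file(path, block=1024 * 1024):
    """Read a file in fixed-size blocks, as one simulated client would."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block):
            total += len(chunk)
    return total

# Create a few sample files standing in for user home directories.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(8):
    p = os.path.join(tmpdir, f"user{i}.dat")
    with open(p, "wb") as f:
        f.write(os.urandom(256 * 1024))
    paths.append(p)

# Single-threaded "file copy" style: one request in flight at a time.
sequential = [read_file(p) for p in paths]

# Multi-client style: eight requests in flight, closer to what a real
# file server sees from many concurrent users.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    concurrent_totals = list(pool.map(read_file, paths))

print(sum(sequential), sum(concurrent_totals))
```

Both paths read the same bytes; the point is that only the second keeps multiple I/Os outstanding, which is what stresses a file server the way real users do.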




Why FSCT


FSCT is a performance test suite based on Microsoft's analysis of real user home directory operations. File server vendors including Microsoft, NetApp, and EMC have used it to demonstrate performance. Some key aspects of FSCT that make it so valuable for simulating customer user home directory environments are:

  • It tests Active Directory integration, including Windows Domain Controllers, clients, user accounts, authentication, and permissions

  • Users connect to their home directory with multiple sub-directories, for a total of 270 files/folders and ~80MB of data
  • Users execute scenarios that create file service metadata workloads
  • Operations include cmdline file download/upload, Windows Explorer file delete, drag/drop, MS Word file open/close, and save
  • Throughput is measured as a count of concurrent sustained users instead of IOPS or file OPs.
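To make the per-user data set concrete, here is a rough sketch that lays down an FSCT-like home directory. The directory and file counts below are illustrative choices that land near the ~270 files/folders and ~80 MB figures above; the real FSCT data set is defined by the tool itself:

```python
import os
import tempfile

def create_user_homedir(root, user, subdirs=9, files_per_dir=29,
                        file_size=300 * 1024):
    """Lay down an FSCT-like home directory for one user.

    With the defaults: 9 subdirectories + 9 * 29 files = 270 entries,
    totaling roughly 78 MB of data (approximating FSCT's ~80 MB).
    """
    home = os.path.join(root, user)
    created = 0
    for d in range(subdirs):
        sub = os.path.join(home, f"dir{d:02d}")
        os.makedirs(sub, exist_ok=True)
        created += 1
        for f in range(files_per_dir):
            with open(os.path.join(sub, f"file{f:02d}.dat"), "wb") as fh:
                fh.write(b"\0" * file_size)
            created += 1
    return created

root = tempfile.mkdtemp()
print(create_user_homedir(root, "Bob"))  # 9 dirs + 261 files = 270 entries
```

Multiply that by hundreds or thousands of users and you can see why metadata handling, not just raw bandwidth, dominates home directory workloads.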

We found that AFS provides a solution for home directories that can be scaled up as user count grows.

  • With FSCT, we pushed our file server to the limits and established reliable user connection counts per file server VM (FSVM) node that can be scaled up as needed.
  • We translated this data into the Nutanix Sizing tool to provide quality sizing proposals coupled with room for future growth.
  • The chart below shows the capability of a single AFS file server node in terms of concurrent heavy user workload.



The AFS solution starts with as few as three VMs, but we have successfully tested a solution scaled out to as many as 16 AFS virtual machines on a 16-node cluster. AFS functionality can be added to existing Nutanix clusters to leverage extra storage capacity or deployed as a standalone file server cluster. 


You can scale out small AFS VM nodes (four vCPUs and 16 GB of RAM each) to support thousands of users by distributing the load across AFS nodes.
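As a back-of-the-envelope illustration of this scale-out sizing, the arithmetic is simple multiplication. The per-FSVM user count in the example call is a made-up placeholder, not a published Nutanix figure; the 3-node minimum and 16-node maximum come from the cluster sizes described in this post:

```python
import math

def fsvm_nodes_needed(total_users, users_per_fsvm, min_nodes=3, max_nodes=16):
    """Return the number of FSVM nodes needed for a user population.

    users_per_fsvm is whatever concurrent-user figure FSCT-style testing
    established for one node. The defaults reflect the AFS minimum
    (3 nodes) and current maximum (16 nodes) cluster sizes.
    """
    nodes = max(min_nodes, math.ceil(total_users / users_per_fsvm))
    if nodes > max_nodes:
        raise ValueError("workload exceeds a single 16-node AFS cluster")
    return nodes

# Example: 2,000 users at a hypothetical 500 users per FSVM -> 4 nodes.
print(fsvm_nodes_needed(2000, 500))
```

This is essentially what the Nutanix Sizing tool does with the measured FSCT data, plus headroom for growth.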




Why LoginVSI


We configured LoginVSI, a popular, industry-standard VDI sizing tool, to store the virtual desktop user remote profiles on the AFS solution. We also used Citrix Profile Management, a robust enterprise profile management suite. The workload includes Windows OS operations and applications, including Microsoft Outlook.


With VDI profiles, the AFS server consumes cycles mainly during the boot and logon phase; once the desktops are booted, the desktops themselves consume most of the CPU and RAM cycles. Accordingly, to gauge AFS performance, we focused on average logon time as the key metric (lower is better, of course). We wanted to see whether logon time increased when clients read their profiles from AFS file shares at boot rather than from the local C:\ drive.


We found that we could easily support a 400-virtual-desktop deployment with our smallest AFS cluster. As shown in the chart below, total user desktop logon time was 7.2 seconds for the local C:\ drive and 6.8 seconds for remotely stored profiles. (Note that these logon times are typical for LoginVSI virtual desktops and should not be confused with individual I/O response times, which are often in the millisecond or microsecond range.)




Why Choose AFS? Performance Scalability by Design


  • Scale Out—Clustered by design with at least three nodes, AFS currently supports up to 16 nodes. Workloads can be distributed evenly across small or large Nutanix clusters. You can add more file server VMs as additional storage or compute are needed.
  • Scale Up—Recognizing that some file services workloads require large amounts of processing (CPU) and caching (memory), AFS nodes can scale up as requirements grow by adding additional vCPUs and RAM.
  • Load Balancing—As file data grows, the storage or processing demands sometimes cause a hot node. AFS easily solves this by rebalancing data and processing across nodes.
  • Analytics-driven—A fine-grained analytics engine built into AFS continually analyzes the storage and performance consumption of the file server. From this analysis, AFS recommends scaling out, scaling up, or rebalancing. This feature can reduce TCO by minimizing administration time and performance troubleshooting.

Our testing shows that the flexible, clustered design of AFS can provide the performance and scalability enterprises need for the most demanding SMB file sharing environments, all without the added expense of a standalone NAS appliance. Talk to your Nutanix partner to request a demo for AFS and leave the file storage and management to us.


If you are new to Nutanix, we invite you to start the conversation on how the Nutanix Enterprise Cloud Platform can work for your IT environment. Send us a note at info@nutanix.com or follow us on Twitter and join the conversation in our community forums.


Disclaimer: This blog may contain links to external websites that are not part of Nutanix.com. Nutanix does not control these sites and disclaims all responsibility for the content or accuracy of any external site. Our decision to link to an external site should not be considered an endorsement of any content on such site.



Data Distribution with Acropolis File Services (AFS)

by Community Manager, 12-05-2016 07:29 AM (edited 12-05-2016 07:32 AM)

This blog was authored by Dwayne Lessner, Sr. Technical Marketing Engineer at Nutanix


There are two types of shares that can be created with AFS (Acropolis File Services): the Home share and the General share. A General share is backed by a volume group with six vdisks when it's created. The Home share is backed by five volume groups of the same type per file server VM in the cluster, so a small three-node AFS deployment has 15 volume groups backing the Home share. The Home share is created automatically when you deploy AFS.
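The arithmetic behind that 15-volume-group figure follows directly from the five-per-FSVM rule; a quick sketch:

```python
def home_share_volume_groups(fsvm_count, vgs_per_fsvm=5):
    """The Home share is backed by a fixed number of volume groups per FSVM."""
    return fsvm_count * vgs_per_fsvm

# Smallest AFS deployment: three FSVMs back the Home share with 15 volume groups.
print(home_share_volume_groups(3))  # -> 15
```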



 Figure 1: Volume Group Used by AFS


Home shares distribute data by dividing the top-level directories across all of the file server VMs that make up the file server. Acropolis File Services maintains the mapping of each top-level directory to its responsible file server VM using an internal scale-out database called InsightDB.




 Figure 2: Distribution of Home Directory Shares


If a user creates a share called “\\FileServer1\Users,” which contains top-level directories \Bob, \Becky, and \Kevin, \Bob may be on file server VM1, \Becky on file server VM2, \Kevin on file server VM3, and so on. The file server VMs use a string hashing algorithm based on the directory names to distribute the top-level directories.
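This mapping can be sketched with any stable string hash. The hash function, node names, and modulo placement below are illustrative stand-ins, not the actual algorithm AFS uses:

```python
import hashlib

FSVM_NODES = ["FSVM-1", "FSVM-2", "FSVM-3"]  # a minimal three-node file server

def owning_fsvm(top_level_dir, nodes=FSVM_NODES):
    """Map a top-level directory name to the FSVM that owns it.

    AFS hashes the directory name and records the mapping in its internal
    InsightDB; here MD5 serves purely as a stand-in stable hash.
    """
    digest = hashlib.md5(top_level_dir.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

for user in ["Bob", "Becky", "Kevin"]:
    print(user, "->", owning_fsvm(user))
```

Because the hash is a pure function of the directory name, every FSVM computes the same owner without coordination, and lookups stay O(1) as the user count grows.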


This distribution can accommodate a very large number of users in a single share. The scaling limits of more traditional designs can force administrators to create multiple shares in which, for example, one set of users whose last names begin with A through M run off one controller and users whose names begin with N through Z run off another. This design limitation leads to management headaches and unnecessary Active Directory complexity. For these reasons, AFS expects to have one home directory share for the entire cluster. If there is a reason to have more than one home directory share, you can create it using nCLI.


The top-level directories act as reparse points, essentially shortcuts. Consequently, all user folders must be created at the root for optimal load balancing. Because the root appears as a shortcut, we don't allow user files in the share root; we recommend setting permissions at the share root before deploying user folders.


General-purpose shares (non-user directories) do not distribute top-level directories. The files and subfolders for general-purpose shares are always owned by a single file server. The diagram below illustrates two general-purpose shares (for example, accounting and IT) on the same file server.




 Figure 3: Two General-Purpose Shares on the Same File Server


Unlike home directory shares, with general shares you can store files in the share root.


Continue the conversation in our community forums and share your experiences with the community. You can also ask questions on Twitter using the hashtag #AskNutanix.








This post was authored by Shubhika Taneja, Product Marketing Manager at Nutanix.


We are starting a new blog series, "Ten Things You Need to Know," covering each of the exciting capabilities that will be part of our upcoming software release. Like our previous releases, this release is expected to be packed with capabilities such as Acropolis File Services, Self Service Portal, Network Visualization, and many more. Let's begin with Acropolis File Services (AFS); here are the ten things you need to know:


  1. AFS eliminates the need for a separate network-attached storage (NAS) appliance. AFS is a software-defined, scale-out file storage solution designed to address a wide range of SMB/CIFS use cases, including Windows user profiles, home directories, and departmental shares. It can be enabled on any AHV or ESXi cluster in a few clicks from Prism, the Nutanix management solution.
  2. AFS is a fully integrated, core component of the Nutanix Enterprise Cloud Platform. It can be deployed on an existing cluster or a standalone cluster. Unlike standalone NAS appliances, AFS consolidates VM and file storage, eliminating another infrastructure silo. AFS can be managed from Nutanix Prism, just like VM services, unifying and simplifying management.
  3. AFS inherits rich enterprise storage features from the underlying Nutanix Enterprise Cloud Platform, such as intelligent tiering, deduplication, erasure coding, compression, and distributed self-healing.
  4. AFS requires a minimum of three file server VMs, each with a minimum of 4 vCPUs and 12 GB of RAM. The performance of the AFS cluster can be easily enhanced either by scaling up (adding more vCPUs and memory to the file server VMs) or by scaling out (adding more file server VMs).
  5. AFS supports SMB and works with Active Directory.
  6. AFS supports user and share quotas.
  7. AFS supports Access-Based Enumeration (ABE).
  8. AFS enables user self-service recovery by integrating with Windows Previous Versions.
  9. To back up shares to third-party targets, traditional file backup products from vendors like Commvault can be used. Backups can also go to a secondary Nutanix cluster or to the public cloud (AWS and Azure).
  10. AFS provides integrated, automated disaster recovery to a secondary Nutanix cluster.




