VMware App Volumes - Storage

  • 16 July 2019
  • 2 replies

We plan to implement VMware App Volumes in our Horizon View VDI environment.

In the Nutanix solution design example, separate datastores/containers are used for Desktops, App Stacks, and Writable Volumes.

  • Is there any advantage to creating separate datastores, or can everything use a single datastore?
  • Also, is there any advantage to enabling compression on the VM container, other than saving space?

Best answer by JeremyJ 6 March 2020, 21:05




I implemented Horizon 7 Instant Clones with App Volumes on Nutanix.

We used three separate datastores, with replication factors chosen as follows:

  • RF3: Datastore for infrastructure servers such as Connection Servers, Composer, and so on
  • RF2: Datastore for Instant Clone pools running Windows 10 (it would be RF1 if I could, since the CP-parent VMs are spread across all hosts anyway)
  • RF3: Datastore for App Stacks, so that they can be replicated to site B

Hope this helps


Hi @beyondtheblack 
I think @FabriceB has given some good information here. I’d like to add a bit just to clarify. 
Storage features on Nutanix such as deduplication, compression, and replication factor are configured at the container level. I find this is the main driver for building separate containers: so that different feature sets can be applied. See the Prism Web Console Guide, Storage Management section for more details on container setup options.

To your second question, the answer is yes. In addition to space savings, many workloads see a performance gain with compression. This has to do with the way a Nutanix cluster serves “hot” data. Frequently requested data blocks on a Nutanix node are held in an in-memory cache so that read requests can be serviced more quickly. The storage service also caches some, but not all, lookup data in memory. Servicing a read from memory is of course much faster than performing a lookup and then reading from disk. The fraction of read requests served from this in-memory cache is called the cache hit ratio, and a higher cache hit ratio generally means better performance.

So how does compression help with this? The answer is simple once you see it: compressed data takes up less space in the in-memory cache, allowing more data to be served from memory. There is processing involved in decompressing data, but the compression algorithms used are quite fast, and the delay of decompressing data tends to be smaller than the delay of performing a non-cached lookup and sourcing data from disk. All of this means that for many workloads you’ll see better performance when compression is enabled.
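The effect on the cache can be sketched with another toy calculation. Again, the sizes and the 2:1 compression ratio below are assumptions for illustration only, not measured values.

```python
# Sketch: compression stretches the effective capacity of a read cache.
# Hypothetical numbers; real hit ratios depend on access patterns.

def cache_hit_ratio(cache_gb, working_set_gb, compression_ratio=1.0):
    """Fraction of a hot working set that fits in cache when blocks are
    stored compressed (compression_ratio = logical size / stored size)."""
    effective_capacity_gb = cache_gb * compression_ratio
    return min(1.0, effective_capacity_gb / working_set_gb)

# 32 GB of cache against an 80 GB hot working set:
print(cache_hit_ratio(32, 80))        # 0.4 with no compression
print(cache_hit_ratio(32, 80, 2.0))   # 0.8 with 2:1 compression
```

In this simplified model, the same cache holds twice the logical data, so a much larger share of hot reads is served from memory instead of disk.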

If you’d like to know more, take a look at the Nutanix Bible or watch the Tech TopX video on the topic.

This isn’t a universal truth. Reads of cold data won’t see the same benefit, but for many environments I’ve worked on the answer is yes: there are advantages beyond space savings when you enable compression.