Create different storage SLA tiers

  • 10 July 2014
  • 4 replies

Userlevel 2
Badge +11
In a previous version of NOS (probably 2.0 or 2.5?), I thought it was possible to 'pin' a container to the SATA tier. I cannot see a similar option in v3.5.x or v4.0.1. Is there such a feature?

Background to this question: I'm setting up a Nutanix-based environment that will have various SLA tiers. These tiers specify both performance and availability characteristics. For availability, we differentiate based on RTO and RPO using multiple physical sites, replication and backup options. For performance, I'm considering a Bronze tier that is not RAM/flash accelerated and is heavily deduplicated and compressed. Configuring a container with dedupe and compression shouldn't be a problem, but I cannot find the 'pin container to SATA tier' option from the older NOS versions.

4 replies

Userlevel 4
Badge +21
I don't think it's supported anymore.

You may be able to define a threshold on your container for tier offloading that forces data to be offloaded directly to SATA, but it's a really bad idea. The thing is, the SSD is used by NOS for writes, and not only for data. The way NOS functions, it uses what are called "fragments", which are really bursts of IOs from the VMs. It writes these IOs in sequence into fragment files, then periodically "commits" these fragments to regain space. In order to do so, NOS needs a place to store these fragments and write (and read) them at high speed. That place is the SSD/hot tier.

By setting a very low threshold for SSD-to-SATA data migration, you will still use SSD for some write operations, but you will also put pressure on the Nutanix CVM to commit these fragments at a higher rate than normal and then push the data down to SATA. In effect, you will get somewhat fast writes, but very slow reads.

Depending on the size of the dataset to be stored in this "lower tier" and the model of your nodes, you could also create a dedicated storage pool with 1 or 2 SATA disks per node and create your "lower tier" container on it. But as far as I know, that is also unsupported.

Essentially, Nutanix's NOS is custom-built to store VMs without you worrying about where your VMs' data lives. What you're trying to do is mix functionality (compression and/or deduplication) with storage tiering. The best way to handle that (in my opinion) would be to treat the Nutanix nodes as one storage tier (for VMs) and provide another storage tier for cold data. I know it seems counter-intuitive.

Sylvain.
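The read-latency penalty Sylvain describes can be made concrete with a toy model. This is not Nutanix code; the latency numbers are made-up illustrative values, and the only point is that forcing data down to SATA early collapses the SSD hit ratio and drags the average read latency toward spinning-disk speeds.

```python
# Toy model (not Nutanix code): why aggressive SSD-to-SATA down-migration
# hurts reads. Latencies below are assumed illustrative values, not
# measurements of any real hardware.

SSD_READ_MS = 0.1   # assumed flash read latency (ms)
SATA_READ_MS = 8.0  # assumed spinning-disk read latency (ms)

def avg_read_latency(ssd_hit_ratio):
    """Expected read latency given the fraction of reads served from SSD."""
    return ssd_hit_ratio * SSD_READ_MS + (1 - ssd_hit_ratio) * SATA_READ_MS

# Normal ILM: hot data stays on flash, most reads hit the SSD tier.
normal = avg_read_latency(0.90)
# Forced low migration threshold: data lands on SATA early, most reads miss.
forced = avg_read_latency(0.10)

print(f"normal tiering: {normal:.2f} ms, forced offload: {forced:.2f} ms")
```

With these assumed numbers the forced-offload case is roughly eight times slower on average, which matches the "somewhat fast writes, but very slow reads" outcome described above.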
Userlevel 2
Badge +11
Hi Sylvain, thanks for your elaborate answer. I know how and why NOS does tiering using RAM, flash and disk. My backup plan is to have a single pool (which was the plan anyway) and create multiple containers with different settings for deduplication and compression, differentiating that way. All data will then go through the regular NOS data path with caching, tiering and optionally dedupe and compression, and cold data will eventually and automatically be moved to the SATA tier. I wanted to see if I could bypass the automated caching/tiering for data that I know will always be cold. Really, the only downside of treating cold data as regular data is that it will populate the cache for some time during ingestion and will only be offloaded to the SATA tier after a while. At rest, there's no need to differentiate between the data types, since cold data won't be brought back into the cache. Adding a physically separate tier for 'really, really' cold data (archiving, etc.) is being considered. One option we have is integrating such a tier with our backup solution, which will run on physically separate infrastructure.
Badge +5
You can do this:
create the storage pool from the CLI rather than the GUI, where you can choose which disks to add instead of all of them;
then create another pool and add only the SATA disks.

Note that Nutanix doesn't support more than 2 storage pools in a cluster.
And with only SATA disks you'll get bad performance.

It may be OK if you have only sequential writes and you change some settings per container from the console, but it will be bad with random writes.
Something like this:
ncli> ctr list
    ID                       : 731
    Name                     : XXX
    Storage Pool ID          : XXX
    Max Capacity             : XXX
    Reserved Capacity        : XXX
    Replication Factor       : 2
    Oplog Replication Factor : 2
    NFS Subnet Whitelist     :
    VStore Name(s)           : Container1
    Random I/O Pri Order     : SSD-SATA, DAS-SATA
    Sequential I/O Pri Order : DAS-SATA
    Compression              : off
    Fingerprint On Write     : off
    On-Disk Dedup            : none
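For the dedupe/compression side of a Bronze container (which the thread agrees is the supported part), something along these lines could work from ncli. This is a hedged sketch only: the container name `Bronze` is made up, and the parameter names are from memory of the NOS 4.x-era ncli and vary between versions, so verify them against the ncli help on your cluster before running anything.

```shell
# Inspect current container settings (as shown in the output above):
ncli ctr list

# Hypothetical: enable compression and post-process dedup on a 'Bronze'
# container. Parameter names are assumptions; check your NOS version's
# ncli help for 'ctr edit' before use.
ncli ctr edit name=Bronze enable-compression=true
ncli ctr edit name=Bronze fingerprint-on-write=on on-disk-dedup=post-process
```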
Userlevel 2
Badge +11
The impact of creating separate pools for this purpose is simply too big. The ideal solution would be at the container level, since that shouldn't have any impact on other workloads, whereas physically separating storage into different pools affects all containers (and thus all workloads).