Solved

Thin Provisioning?

  • 16 March 2016
  • 7 replies
  • 3914 views

Badge +5
From this tech note - http://go.nutanix.com/rs/nutanix/images/TechNote-Nutanix_Storage_Configuration_for_vSphere.pdf, my impression is that Nutanix thin provisions a VM even if the disks are set to "Thick Provisioned Lazy Zeroed".

All Nutanix containers are thin provisioned by default; this is a feature of NDFS. Thin provisioning is a widely accepted technology that has been proven over time by multiple storage vendors, including VMware. As containers are presented by default as NFS datastores to VMware vSphere hosts, all VMs will also be thin provisioned by default. This results in dramatically improved storage capacity utilization without the traditional performance impact. Thick provisioning on a VMDK level is available if required for the limited use cases such as fault tolerance (FT) or highly demanding database and I/O workloads. Thick provisioning can be accomplished by creating Eager Zero Thick VMDKs. Eager Zero Thick VMDKs will automatically guarantee space reservations within NDFS.

Here is the disconnect. When I provision a 1TB VM as either lazy or eager zeroed, both cause container storage utilization to jump by 2TB (replication factor 2 is in use). I would expect only the eager-zeroed VM to cause the jump, since lazy should be thin provisioned. Also, when I create a lazy-zeroed VM, the disk is listed as eager after creation completes. Why is that?
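
For reference, here's how I understand the three modes map onto the VMDK backing flags in the vSphere API - a minimal pyVmomi sketch, illustrative only, with the surrounding VM-reconfigure plumbing omitted:

    # Sketch: how thin / lazy-zeroed thick / eager-zeroed thick map onto
    # the two flags on a VMDK's backing object (pyVmomi).
    from pyVmomi import vim

    def disk_backing(mode: str) -> vim.vm.device.VirtualDisk.FlatVer2BackingInfo:
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = "persistent"
        if mode == "thin":
            backing.thinProvisioned = True    # no upfront space reservation
        elif mode == "lazy":                  # Thick Provisioned Lazy Zeroed
            backing.thinProvisioned = False
            backing.eagerlyScrub = False      # blocks zeroed on first write
        elif mode == "eager":                 # Thick Provisioned Eager Zeroed
            backing.thinProvisioned = False
            backing.eagerlyScrub = True       # blocks zeroed at creation time
        else:
            raise ValueError(f"unknown mode: {mode}")
        return backing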

Best answer by Jon 16 March 2016, 20:56



7 replies

Userlevel 6
Badge +29
It's a bit of a terminology disconnect in that document.

Thick of ANY kind causes NOS to reserve the space on the backend. The only difference is whether it zeros the disk out up front or not; that's the lazy vs. eager distinction you already know.

This is because if you're telling Nutanix you want thick, we don't want the system to accidentally run out of space, like some other storage systems do, so we'll respect that lazy reservation for you automagically.

Summary - the jump you are seeing is because we automatically reserve the space. This doesn't mean we zero it out for you; we just make the reservation.
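
To put numbers on what you're seeing (toy figures, not Nutanix code):

    # Back-of-envelope check: any thick disk (lazy OR eager) is reserved
    # on the backend, and the reservation is honored for each replica.
    disk_tb = 1.0            # provisioned VMDK size
    replication_factor = 2   # container running RF2

    reserved_tb = disk_tb * replication_factor
    print(reserved_tb)       # 2.0 -> the same jump for lazy and eager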


Anyhow, fun fact: with NOS 4.6, we added zero avoidance to the write cache layer (aka oplog), so zero operations should be much faster now, because we just see the zero and update metadata instead of writing the zero to cache. Very slick and cool stuff.
Badge +5
Thanks for the reply, Jon, and you beat me to my next question: how do you protect against exceeding the actual usable disk space if you are thin provisioning with lazy? 🙂 Very cool on using metadata magic to improve things. So, if VMs are provisioned thick, I'm assuming dedup and compression will not allow for overcommitting storage, as the space will still be reserved per the VM being thick provisioned.
Userlevel 6
Badge +29
RE compression - you nailed it. You could have a file that's a zillion-to-1 compressible and deduped down to the finest grain of sand, but if you thick provision the VMDK, NOS will still respect the frontend reservation set by the thick disk.


That's not to say that compression and/or dedupe are useless with thick; it's exactly the opposite, as you'll still get performance benefits from fewer reads/writes to the backend disks and from cache amplification (respectively). You just won't be able to oversubscribe those GBs and TBs, as the toy numbers below illustrate.
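
Toy numbers to illustrate (hypothetical figures, nothing Nutanix-specific):

    # Compression/dedupe shrink what's physically written, but a thick
    # disk's logical reservation stays fixed, so reserved capacity
    # can't be oversubscribed. Ratios here are made up.
    provisioned_tb = 1.0
    data_reduction_ratio = 4.0    # hypothetical 4:1 combined savings

    physical_used_tb = provisioned_tb / data_reduction_ratio   # 0.25 TB hits disk
    logical_reserved_tb = provisioned_tb                       # 1.0 TB still held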

Jon
Userlevel 6
Badge +29
Also, for the sake of posterity, we've done zero avoidance in the persistent storage (aka Extent Store) for as far back as I can remember, so we never stored the zeros. This is why, if you ran sdelete inside a Nutanix VM, you'd see the VMDK (assuming thin) get smaller and space become available again in the container.

In those previous versions, we'd still write zeros to cache, then just not persist them when we drained the cache.

The change in 4.6 was to promote that logic up to the write cache, so we just don't write them to cache at all, which both makes the acknowledgement of zero writes super fast and uses the cache more effectively.
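
To make that concrete, here's a toy sketch of the 4.6 behavior (not actual Nutanix code, just the idea):

    # Toy model: pre-4.6 the all-zero check happened when the cache
    # drained; 4.6 moves it to admission, so zero writes never land in
    # the oplog at all and can be acknowledged straight away.
    def write_4_6(oplog: dict, metadata: dict, lba: int, data: bytes) -> None:
        if not any(data):              # all-zero payload?
            metadata[lba] = "ZERO"     # record it in metadata only
            oplog.pop(lba, None)       # nothing staged, nothing to drain
        else:
            oplog[lba] = data          # normal path: stage in the write cache
            metadata[lba] = "DIRTY"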
Badge +5
I had a hunch that we'd see huge disk usage savings with cloning, but want to double-check with the community. We basically have a 125GB thin-provisioned template. We've deployed 100+ VMs from this template (all deploys were also thin). We are seeing mind-boggling savings in disk usage, and I'm thinking it's due to the way Nutanix handles the clones on the back end, i.e., it's just the creation of a new block map pointing at the immutable block map, so there's almost no cost with regard to disk usage. I get that as the VMs overwrite and write new data the disk usage will grow, but we are seeing usage of 1.5GB or so per VM on the Stargate page. Again, this is sort of what I expected, but how little disk space is used is crazy. Wondering if everyone else is seeing the same thing. I'm not thinking shadow clones play a part here, as they are more active in multi-reader scenarios, and I'm thinking they only improve performance, not actual disk usage.
Userlevel 6
Badge +29
You've hit it right on the head (block map copies, immutable block slices, etc.). We do very efficient clones, both from a speed and a utilization perspective, so they are fast and only take up the "delta", as you called out.

Aka "Data Avoidance", meaning since the source data is never duplicated (per say, i.e. not copying the image over and over again), it's super slick.

See this snippet from the Nutanix Bible that talks about how we do it:
http://nutanixbible.com/#anchor-snapshots-and-clones-65

The neat thing here is that this makes things like caching more efficient, and also lets data reduction technologies like compression, dedupe, and ECX work more efficiently.
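
If it helps, here's a toy model of the block-map-copy idea (nothing like our actual implementation, just the concept):

    # Toy model: a clone copies only the block map; entries keep pointing
    # at the template's immutable extents, so a fresh clone consumes ~0
    # extra space and grows only by its deltas.
    extent_store: dict[int, bytes] = {}   # immutable extents, never rewritten
    _next_id = 0

    def write(block_map: dict[int, int], lba: int, data: bytes) -> None:
        global _next_id
        extent_store[_next_id] = data      # new data lands in a new extent
        block_map[lba] = _next_id          # only this VM's map is updated
        _next_id += 1

    def clone(block_map: dict[int, int]) -> dict[int, int]:
        return dict(block_map)             # metadata copy; no data copied

    template: dict[int, int] = {}
    write(template, 0, b"125GB base image")   # template's data
    vm1 = clone(template)                     # instant, near-zero cost
    write(vm1, 1, b"per-VM delta")            # clone grows only by its deltas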


Also, you're right on shadow clones; those are strictly for multi-reader situations, like linked-clone VDI (and vCloud Director too).


Now, all of that said, one thing we can do a better job on is reporting the impact of both Data Avoidance and Thin Provisioning, something our competitors use to inflate their "data savings" numbers. Obviously you can see it with what you're looking at, but Data Avoidance is NOT calculated in Prism under Data Savings. Those ratios only show things like dedupe, compression, and ECX.

A future release will break down these areas so you can see holistic data efficiency numbers from all angles:
  • Data Avoidance (Smarts)
  • Data Reduction (Savings)
  • Thin Provisioning
Badge +5
Super cool! Thanks Jon for the great info!