Nutanix Tech Marketing Engineer Andy Daniel recently wrote a short overview of the storage efficiency features, including compression. Share what you are doing for your different workloads in your environments (Nutanix or otherwise).
To optimize storage capacity and accelerate application performance, the Acropolis Distributed Storage Fabric uses data efficiency techniques such as deduplication, compression, and erasure coding. These techniques are intelligent and adaptive, and in most cases require little or no fine-tuning. In fact, two levels of post-process compression, together with cold data classification, are enabled by default on new shipping clusters. Because these features are entirely software driven, existing customers can also take advantage of new capabilities and enhancements simply by upgrading AOS.
DSF provides both inline and post-process compression to maximize capacity. Customers often associate compression with reduced performance, but this isn't the case with Nutanix. In fact, as of AOS 5.0, all random writes are compressed inline before being written to the OpLog (write cache), regardless of the chosen configuration. Because compressed writes consume less OpLog space, the cache can absorb sustained random write bursts for a longer duration before draining.
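The idea of compressing a write before it lands in a fixed-size write cache can be sketched in a few lines of Python. This is a conceptual illustration only: the cache size, the `inline_write` helper, and the use of zlib are assumptions for the example, not Nutanix internals.

```python
import zlib

OPLOG_CAPACITY = 64 * 1024  # hypothetical 64 KiB write cache, for illustration


def inline_write(oplog: list, used: int, data: bytes) -> int:
    """Compress a random write inline, then append it to the write log.

    Returns the new number of bytes consumed in the log.
    """
    compressed = zlib.compress(data)
    if used + len(compressed) > OPLOG_CAPACITY:
        raise BufferError("write cache full; data would drain to the capacity tier")
    oplog.append(compressed)
    return used + len(compressed)


# A compressible 4 KiB "random write" (e.g., a sparsely filled database page).
block = b"A" * 4096
oplog, used = [], 0
used = inline_write(oplog, used, block)

# The log stores far fewer bytes than the logical write size, so the same
# cache can absorb many more such writes before filling up.
print(f"logical bytes: {len(block)}, stored bytes: {used}")
```

The payoff is exactly the burst-handling benefit described above: each compressed write occupies less cache space, so the cache fills more slowly under sustained random writes.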
Large sequential reads and writes also see a performance benefit from compression, so there are very few workloads for which inline compression (compression delay = 0) isn't appropriate. It's even recommended for Tier-1 workloads such as Oracle, Microsoft SQL Server, and Exchange. Inline compression additionally improves performance within the capacity tier (Extent Store) while maximizing total available storage capacity.
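In practice, the compression delay is set per container. The sketch below shows the general shape of the `ncli` invocation; the exact parameter names are recalled from memory and should be treated as assumptions, and `prod-ctr` is a placeholder container name. Confirm the flags with `ncli ctr edit help` on your own cluster before running anything.

```shell
# Illustrative only: enable inline compression (delay = 0) on a container.
# Flag names are assumptions; verify against `ncli ctr edit help` first.
ncli ctr edit name=prod-ctr enable-compression=true compression-delay=0

# Review the container's current settings to confirm the change.
ncli ctr list name=prod-ctr
```

Setting the delay to a nonzero number of minutes instead yields post-process compression, where data is compressed only after it has aged past the delay.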
With potentially dramatic performance improvements and the ability to significantly increase your cluster’s effective storage capacity, there’s no reason you shouldn’t enable compression on your containers today!