capacity for RF2 and N+1

Hi team, I have a question about the capacity calculator.

Let's say I have a configuration for 3 nodes like the picture below.

So Prism will show me 17 TB of logical capacity, correct?

My concern is the extent store figure for RF2 and N+1 (11 TB) in the Sizer tool. What happens if I have 15 TB of data, which is bigger than the RF2 and N+1 size (11 TB)?

Can anyone answer my question?


Hi @randyramandani 

If I understand your question correctly, you understand the capacity guidelines for a Nutanix cluster (as discussed in the post https://next.nutanix.com/how-it-works-22/recommended-maximum-storage-utilization-37234 or the Nutanix KB article “Recommended guidelines for maximum storage utilization on a cluster”), but you want more information on what happens when those storage guidelines are not met.

You might also be interested in storage efficiency technologies like deduplication, compression, intelligent cloning, and erasure coding. All of these can reduce the physical space required for your workload. You can see more on those technologies in the Data Efficiency tech note, but here I’ll focus on the question you asked.

Given the numbers you provided (15 TB of data against 11 TB of rebuild-safe capacity), we’re talking about not honoring rebuild capacity.

Rebuild capacity is the amount of space required to restore RF2 resiliency after the loss of a single node (or two nodes if we’re configured for RF3). The cluster’s automated calculations that check and alert on this are based on the largest single-node (or two-node) storage capacity in the cluster.
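
As a rough illustration of the math (this is a sketch, not the exact Prism/Sizer calculation; I’m assuming three nodes of roughly equal raw capacity, since I can’t see the exact values in your screenshot):

```python
# Rough sketch of RF2 capacity math for a 3-node cluster.
# The node raw capacities below are assumptions chosen to roughly match
# your numbers (17 TB logical, ~11 TB with N+1); substitute your real values.
node_raw_tb = [11.3, 11.3, 11.3]           # raw (physical) capacity per node, TB
rf = 2                                     # replication factor (RF2)

raw_total = sum(node_raw_tb)               # ~34 TB raw
logical_capacity = raw_total / rf          # ~17 TB usable at RF2 (what Prism shows)

# N+1 rebuild capacity: assume the largest node is lost, then ask how much
# logical data the remaining nodes can still hold at RF2.
raw_after_node_loss = raw_total - max(node_raw_tb)
rebuild_safe_logical = raw_after_node_loss / rf    # ~11 TB, matching the Sizer figure

data_tb = 15
print(f"Logical capacity (RF2):    {logical_capacity:.1f} TB")
print(f"N+1 rebuild-safe capacity: {rebuild_safe_logical:.1f} TB")
print(f"15 TB fits, but cannot be re-protected after a node failure: "
      f"{rebuild_safe_logical < data_tb <= logical_capacity}")
```

So in your example, 15 TB of data physically fits in the cluster, but it leaves no room to restore RF2 if a node goes down.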

Not having rebuild capacity will result in alerts. Check the article “NCC Health Check: sufficient_disk_space_check” for examples. That same check is also part of upgrade pre-checks, so automated upgrades requiring restarts would be blocked. 

The reason for the space recommendations really comes down to avoiding bigger problems when a node goes offline. What we want is a cluster that can automatically recover from a single node failure, restore resiliency, keep data services online the whole time, and even be able to suffer an additional node or disk failure without any permanent data loss.
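
To make that concrete, here is a small sketch (my own illustration with the same assumed node sizes as above, not an exact model of how the cluster rebuilds) of what a single node failure means for free space when you have written more than the rebuild-safe amount:

```python
# Sketch: what a single node failure means for re-protection at RF2.
# Assumed numbers again: 3 nodes of ~11.3 TB raw each, 15 TB of logical data.
node_raw_tb = [11.3, 11.3, 11.3]
rf = 2
logical_data_tb = 15

physical_used = logical_data_tb * rf               # 30 TB of replicas across the cluster
physical_total = sum(node_raw_tb)                  # ~34 TB raw

# Lose one node: its replicas must be rebuilt on the survivors.
surviving_raw = physical_total - max(node_raw_tb)  # ~22.6 TB
needed_after_rebuild = physical_used               # still need two copies of everything

if needed_after_rebuild <= surviving_raw:
    print("Cluster can restore full RF2 resiliency on the remaining nodes.")
else:
    shortfall = needed_after_rebuild - surviving_raw
    print(f"Cannot restore RF2: short by ~{shortfall:.1f} TB of physical space. "
          "Data stays online from the surviving replicas, but it remains "
          "under-replicated until space is freed or the node returns.")
```

That under-replicated window is exactly what the guidelines are trying to avoid: a second disk or node failure during it could mean permanent data loss.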