Hey @LinKan, a fairly common way to protect your object data is to set up streaming replication to another Nutanix Objects instance (on a separate physical cluster, ideally remote) and enable versioning so you can restore past versions. You can also choose not to replicate delete markers, ensuring that even if an object is deleted at the source it will still be available in the target bucket.
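Since Nutanix Objects exposes an S3-compatible API, the standard S3 configuration shapes apply. A minimal sketch of the two pieces described above, with versioning enabled and a replication rule that skips delete markers (bucket names and ARNs are placeholders, not values from this thread):

```python
# Sketch: S3-style versioning and replication config where delete markers are
# NOT replicated, so a delete at the source leaves the target copy intact.
# All names and ARNs below are illustrative placeholders.

def versioning_config(enabled=True):
    """Versioning must be on to keep past restore points."""
    return {"Status": "Enabled" if enabled else "Suspended"}

def replication_config(target_bucket_arn, role_arn):
    """Replication rule that does not propagate delete markers."""
    return {
        "Role": role_arn,
        "Rules": [{
            "ID": "protect-objects",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = apply to every object in the bucket
            "Destination": {"Bucket": target_bucket_arn},
            # Disabled => deletes at the source are not mirrored to the target
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }],
    }

cfg = replication_config("arn:aws:s3:::backup-bucket",
                         "arn:aws:iam::000000000000:role/replication")
print(cfg["Rules"][0]["DeleteMarkerReplication"]["Status"])  # Disabled
```

These dicts match the shape an S3 client (e.g. `put_bucket_versioning` / `put_bucket_replication` in boto3) expects, so the same structure can be submitted to an Objects endpoint.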
Thanks Jason. You certainly can use it as a service provider.
Hi Emulet, try specifying the region as us-east-1.
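For example, with any standard S3 client you can pass the region explicitly alongside your Objects endpoint (the endpoint URL below is a placeholder):

```shell
# Some S3 clients insist on a region even when the endpoint ignores it,
# so pass us-east-1 explicitly. Endpoint URL is a placeholder.
aws s3 ls \
  --endpoint-url https://objects.example.com \
  --region us-east-1
```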
You can reduce the number of FSVMs underpinning a file server ('scale in') or reduce the vCPUs/memory allocated to each FSVM ('scale down'), but you cannot reduce the storage capacity assigned to a file server. That said, all capacity is thin provisioned anyway: you only consume the TiBs of data actually written by clients (plus snapshots), regardless of how much capacity has been assigned to the file server. Any unused capacity in the cluster can be used by other services running on it (assuming it's a mixed environment). You can always set a maximum size for each share, and to help further with capacity management we also support hard and soft quotas. Hope this helps.
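Hard and soft quotas behave the usual way: a soft quota only warns when crossed, while a hard quota blocks further writes. A generic sketch of that distinction (the thresholds and function are illustrative, not a Nutanix Files API):

```python
# Generic soft/hard quota semantics (illustrative only, not a Nutanix API):
# crossing a soft quota triggers a warning; crossing a hard quota rejects
# the write outright.

def check_write(used_gib, write_gib, soft_gib=None, hard_gib=None):
    """Return (allowed, warning) for a proposed write against share quotas."""
    after = used_gib + write_gib
    if hard_gib is not None and after > hard_gib:
        return False, f"hard quota of {hard_gib} GiB exceeded; write rejected"
    if soft_gib is not None and after > soft_gib:
        return True, f"soft quota of {soft_gib} GiB exceeded; write allowed"
    return True, None

# Crossing only the soft quota still succeeds, with a warning:
print(check_write(90, 5, soft_gib=80, hard_gib=100))
# Crossing the hard quota is refused:
print(check_write(98, 5, soft_gib=80, hard_gib=100))
```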
If you have Nutanix file servers deployed at both locations, you could set up Smart DR replication between them. That results in an active/passive setup where the remote copy is accessible read-only. Does that satisfy your use case? Here is a useful video overview with more information.