Question

AWS Cloud Connect S3 consumption issue

  • 6 January 2023


We bought several Nutanix clusters in 2019, and on two of them we deployed a Cloud Connect appliance in AWS via Prism Element so that we could back up our VMs to the cloud. We set up retention to 7 snapshots locally and 62 on the remote (AWS) side.

 

The question I have for everyone who uses Cloud Connect for backups is: does the S3 bucket that Cloud Connect creates ever stop growing? On our HQ Nutanix cluster, that Cloud Connect S3 bucket is currently sitting at 164 TB and 177 million objects. This is a small 3-node cluster with only 10 VMs being backed up to the Cloud Connect appliance. The allocated space for those VMs is 3 TB, and the space actually consumed by them, as reported by Prism, is about half of that.
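
In case it helps anyone compare numbers on their own bucket, growth like this is easiest to track from the daily storage metrics that S3 publishes to CloudWatch. Below is a rough Python/boto3 sketch (the bucket name is a placeholder, not our real bucket) that pulls the last 30 days of size and object-count datapoints:

```python
# Rough sketch: read the daily BucketSizeBytes / NumberOfObjects metrics that
# S3 publishes to CloudWatch. The bucket name below is a placeholder.
from datetime import datetime, timedelta, timezone

import boto3

BUCKET = "ntnx-cloud-connect-bucket"  # placeholder, not the real bucket name

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

def daily_metric(metric_name, storage_type):
    """Return one datapoint per day for the given S3 storage metric."""
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName=metric_name,
        Dimensions=[
            {"Name": "BucketName", "Value": BUCKET},
            {"Name": "StorageType", "Value": storage_type},
        ],
        StartTime=start,
        EndTime=end,
        Period=86400,            # S3 storage metrics are reported once per day
        Statistics=["Average"],
    )
    return sorted(resp["Datapoints"], key=lambda d: d["Timestamp"])

# Standard tier size; use "StandardIAStorage" for the objects already moved
# to Infrequent Access by the lifecycle rule.
for p in daily_metric("BucketSizeBytes", "StandardStorage"):
    print(p["Timestamp"].date(), f"{p['Average'] / 1e12:.1f} TB")

for p in daily_metric("NumberOfObjects", "AllStorageTypes"):
    print(p["Timestamp"].date(), f"{p['Average']:,.0f} objects")
```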

 

Looking at the S3 bucket created by Cloud Connect, I see that a lifecycle rule was created to move objects to Infrequent Access after 30 days. I see nothing that expires old objects, and there are objects in the bucket dating all the way back to 2019. So at first glance, it doesn't look like anything is pruning old objects from the S3 bucket, and I don't see anything in the Cloud Connect documentation about this.
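
For reference, this is roughly how the lifecycle configuration can be dumped to confirm that there is a transition rule but no expiration rule (again Python/boto3, with a placeholder bucket name):

```python
# Rough sketch: list the bucket's lifecycle rules and flag whether any rule
# actually expires (deletes) old objects, or only transitions them to IA.
import boto3
from botocore.exceptions import ClientError

BUCKET = "ntnx-cloud-connect-bucket"  # placeholder, not the real bucket name

s3 = boto3.client("s3")
try:
    rules = s3.get_bucket_lifecycle_configuration(Bucket=BUCKET)["Rules"]
except ClientError as err:
    # "NoSuchLifecycleConfiguration" means the bucket has no rules at all
    print("No lifecycle configuration:", err.response["Error"]["Code"])
    rules = []

for rule in rules:
    print(f"Rule {rule.get('ID', '<no id>')} ({rule['Status']})")
    for t in rule.get("Transitions", []):
        print(f"  transition to {t.get('StorageClass')} after {t.get('Days')} days")
    if "Expiration" in rule:
        print(f"  expiration after {rule['Expiration'].get('Days')} days")
    else:
        print("  no expiration action on this rule")
```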

 

In Prism, when I look at the snapshots available on a protected VM, both local and remote, it does show me 7 and 62, as configured. As you can imagine, this is costing a pretty penny, on top of the EC2 instance itself.
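
To put a rough number on the storage alone: the per-GB rate below is an assumption (approximately the S3 Standard-IA list price, which varies by region), and it ignores requests, retrievals, and the EC2 instance.

```python
# Back-of-envelope using the numbers from this post. The rate is an assumed
# approximation of the S3 Standard-IA price (region dependent) and excludes
# requests, retrievals, and the Cloud Connect EC2 instance.
bucket_tb = 164                 # current bucket size reported above
gb_per_tb = 1024
assumed_ia_rate = 0.0125        # assumed USD per GB-month

print(f"~${bucket_tb * gb_per_tb * assumed_ia_rate:,.0f} per month for storage alone")
# roughly $2,100/month under these assumptions
```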

 

Has anyone using Cloud Connect run into a similar situation with S3 consumption? I do have tickets open on it, but I'm curious whether anyone else has seen this.

