ASync PD and Snapshot Retention on Remote Site | Nutanix Community
Hi guys,



Is there a way to change the minimum snapshot retention count for the local site AND the remote site?

We have a PD which has two schedules:

- every 60 mins - keep 3:3

- every 24 hours - keep 4:4



The problem now is that if, for example, the replication of the 60-min schedule fails because of network problems, or because the replication takes longer than normal, the snapshots on the remote site expire.



So let's say the disaster happens Monday night at 1am. If the first admin arrives at 7am, every 60-min snapshot will have expired and we can only work with the older 24-hour-based snaps.
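To make that failure window concrete, here is a small Python sketch. It is my own model, not Nutanix code: I'm assuming each remote snapshot simply expires interval × retention-count after it was taken, regardless of whether a newer replica has arrived (which is the behavior described in this thread).

```python
from datetime import datetime, timedelta

# Assumed expiry model: each snapshot gets an expiry of
# (interval * retention_count) at creation time, and the remote site
# deletes it at expiry no matter what. Schedules mirror the PD above.
schedules = [
    {"name": "hourly", "interval": timedelta(hours=1), "keep_remote": 3},
    {"name": "daily",  "interval": timedelta(hours=24), "keep_remote": 4},
]

disaster = datetime(2023, 1, 2, 1, 0)          # Monday 1:00 am (date arbitrary)
admin_arrives = disaster + timedelta(hours=6)  # 7:00 am

surviving = []
for s in schedules:
    # snapshots taken at interval steps before the disaster
    for i in range(s["keep_remote"] * 2):
        taken = disaster - i * s["interval"]
        expiry = taken + s["interval"] * s["keep_remote"]
        if expiry > admin_arrives:
            surviving.append((s["name"], taken))

print(surviving)  # only the 4 daily snapshots survive until 7 am
```

Under this model, every hourly snapshot taken up to 1am has expired by 4am at the latest, so by 7am only the daily snapshots remain, which is exactly the scenario above.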



In the Nutanix KB I can only find this article:

https://portal.nutanix.com/#/page/kbs/details?targetId=kA0600000008dn2CAA

which says: "...It's also important to note that the min-snaps value only affects local protection domains, not remote sites..."
Hi there,



I may have misread your post, but how are the remote snapshots expiring?



I haven't checked lately, but you set the number of snapshots to retain, so the destination should keep the snapshots until they are replaced by new ones...



"Is there a way to change the min. snap retention count for local site AND remote site?"



Yes, you can have different retention policies for local and remote sites; this option is in the schedule when creating the PD.



Cheers



J
"I haven't checked lately, but you set the number of snapshots to retain, so the destination should keep the snapshots until they are replaced by new ones..."



- It would be great if it worked that way, but unfortunately the remote site / destination doesn't care whether a new snapshot has arrived or not. Once the expiry time is reached, the snapshot is deleted. Only the last one is retained. But if you have two schedules for a PD, the last one is not always the newest.



Below is a screenshot of our schedule for one PD:





I have tested this behaviour by adding a 1.5 TB VM to the PD.

I added the machine to the PD around 11:30am, so the most up-to-date snapshot was from 11am.

The replication then took about 12-14 hours, and we only had 4 snaps left on the remote site (3:30am and going 3 days back).



I hope I have explained this properly 🙂
Why not just get the data over first and then set up your schedules, so only the updated blocks need to be copied over...



If the initial copy of 1.5TB is not at the destination, the snaps in between will not be correct because the VM data is not at the destination; so when your first initial snapshot is synced, it will be the only available snapshot at the destination...
That is correct, but this is not the problem. I only did this big VM snapshot replication to prove I'm right.



The problem is:

Snapshots on the remote site expire no matter whether the site is receiving a new one or not. Only the last (base) snapshot will persist, but with multiple schedules this isn't always the newest...



So if we set up our data protection to retain the last 3 hours local+remote and the last 4 days local+remote, we might only have access to the last days, because the hourly snapshots will be deleted unless we notice the disaster within 3 hours.



I know there is a "min-snap-retention-count" which affects the local snapshots. In our example we could set it to 7, so no matter what happens, we would always have all the snaps. But this doesn't seem to affect the remote site...
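For illustration, here is a hypothetical Python sketch of what a min-snap-retention-count of 7 would do on the local side. The prune logic is my own assumption based on the KB's description (a snapshot is only deleted if it has expired AND the minimum count is still satisfied), not actual Nutanix code; per the KB, this minimum does not apply on the remote site.

```python
from datetime import datetime, timedelta

def prune(snapshots, now, min_retained=0):
    """snapshots: (taken, expiry) tuples, newest first.
    Delete expired snapshots, but never drop below min_retained."""
    kept = []
    for idx, (taken, expiry) in enumerate(snapshots):
        # keep if not yet expired, or if deleting it would leave fewer
        # than min_retained snapshots behind
        if expiry > now or idx < min_retained:
            kept.append((taken, expiry))
    return kept

now = datetime(2023, 1, 2, 7, 0)
hour = timedelta(hours=1)
# 3 hourly snapshots (all already expired by 7 am) + 4 daily snapshots
snaps = (
    [(now - (6 + i) * hour, now - (3 + i) * hour) for i in range(3)]
    + [(now - timedelta(days=i, hours=6),
        now + timedelta(days=4 - i) - timedelta(hours=6)) for i in range(4)]
)

print(len(prune(snaps, now)))                  # 4 -- the expired hourlies are gone
print(len(prune(snaps, now, min_retained=7)))  # 7 -- all retained
```

With the minimum set to 7 (hourly keep 3 + daily keep 4), the expired hourly snapshots would survive locally, which is exactly the behavior I'd want on the remote site as well.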
It's not based on time, it's based on how many snapshots to retain; it says per interval...



What's your recovery requirement for the remote site?



Why not create one schedule and keep the local retention higher than the remote?