Managing log truncation (SQL/Exchange etc.) when using CloudConnect backup | Nutanix Community
Hello,



Could anyone offer ideas on how to manage log truncation for applications like Exchange/SQL when moving to CloudConnect from a product that used to handle it automatically?



This seems like a pretty important aspect, but I can't find any information on it.



Many thanks,



Daniel
Hi IT_Guy



Thanks for asking. vcdxnz001, any insights on this?
Hi IT_Guy



We don't truncate anything today with the standard workflow, but you could use a pre and post script.



https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v50:sto-pd-guidelines-r.html



The pre_freeze and post_thaw scripts can be Python or shell scripts or any executable files. These scripts should contain commands specific to the particular applications that are running on the Linux or Windows VMs. Backup vendors like CommVault can provide these scripts. You can also write your own scripts. Following are some guidelines and samples for pre_freeze and post_thaw scripts.

Location

  • For Windows VMs, create pre_freeze.bat and post_thaw.bat scripts under system_drive:\Program Files\Nutanix\scripts\pre_freeze.bat and system_drive:\Program Files\Nutanix\scripts\post_thaw.bat. For example, if your system_drive is C, create these scripts at C:\Program Files\Nutanix\scripts\pre_freeze.bat and C:\Program Files\Nutanix\scripts\post_thaw.bat.
  • For Linux VMs, you must create the pre_freeze (/sbin/pre_freeze) and post_thaw (/sbin/post_thaw) scripts with owner root:root and 700 permissions. The pre_freeze script is executed before creating the snapshot, and the post_thaw script is executed after the snapshot has been created. If the pre_freeze or post_thaw scripts are not present, or the permissions are incorrect, VSS gets disabled and crash-consistent snapshots are taken. As on Windows, the scripts can be Python or shell scripts or any executable files containing commands specific to the applications running on the Linux VMs. The pre_freeze script should finish within 50 seconds and the post_thaw script should finish within 25 seconds.
Requirements

  • For Windows VMs, the administrator should have read and execute permissions on the scripts.
  • For Windows Server operating systems, if the scripts are present, the pre_freeze script is executed first, then the VSS quiesce is performed, and then the post_thaw script is executed.
  • For Linux VMs, the scripts should have 700 permissions and should be owned by root:root.
  • Both pre_freeze and post_thaw scripts should be present for the operation to complete successfully.
  • The timeout for both scripts is 60 seconds.
  • A return code of 0 from the script is considered a success. Any other return code means the script execution has failed.
  • If the pre_freeze script executes, the post_thaw script is executed regardless of whether pre_freeze succeeded or failed.
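
To make the pre/post script idea concrete, here is a minimal sketch (not an official Nutanix script) of what a pre_freeze hook could look like in Python, assuming the guest has Python installed and that your application or backup vendor provides some command that actually quiesces the application. The /usr/local/bin/app-quiesce path and its --freeze flag below are hypothetical placeholders you would replace. On Windows you would still invoke something like this from the documented pre_freeze.bat/post_thaw.bat wrappers.

#!/usr/bin/env python3
# pre_freeze.py: minimal sketch of a pre-snapshot hook, not an official Nutanix script.
# The quiesce command below is a hypothetical placeholder; substitute whatever your
# application or backup vendor provides to freeze/quiesce the application.
import subprocess
import sys

QUIESCE_CMD = ["/usr/local/bin/app-quiesce", "--freeze"]  # hypothetical placeholder
TIMEOUT_SECONDS = 40  # stay under the documented 50-second pre_freeze budget

def main():
    try:
        result = subprocess.run(QUIESCE_CMD, timeout=TIMEOUT_SECONDS)
    except (OSError, subprocess.TimeoutExpired):
        return 1  # non-zero return code tells the snapshot workflow the hook failed
    return result.returncode  # 0 means success; anything else is treated as a failure

if __name__ == "__main__":
    sys.exit(main())

A post_thaw counterpart would mirror this with the matching un-freeze/thaw command, keeping the shorter 25-second budget in mind.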

Hi dlink7,



Thanks for your reply. The pre/post scripts are definitely useful; however, it's more about how to convince Exchange it has been backed up. As far as I know, my options are limited to circular logging or Windows Server Backup.



Just as general feedback, it feels a bit jarring coming across from an application-aware backup program to snapshot-based backups. SQL is less of an issue, but Exchange is a pretty big deal, and even the Nutanix guide only has this to say in section 7.8 on Backup and DR:



"Nutanix takes a VM-centric approach to data protection and disaster recovery, complementingMicrosoft Exchange database availability groups. Nutanix uses VM caliber snapshots, alongwith protection domains, to back up your Microsoft Exchange deployment. A snapshot is apoint-in-time copy of a single virtual disk or VM, or of a group of virtual disks and VMs. Virtualdisks are grouped together in a Nutanix protection domain."



From what I can tell, setting up a DAG doesn't remove the requirement to truncate logs.



So just to confirm, do I have any options other than:


  • Windows Server Backup
  • Third-party application-aware backup program (either in-guest or at the VM level)
  • Circular logging + change the way deleted items are handled


Thanks!
Hi IT_Guy



You bring up a good topic. I think your overall list is correct, but I'd want to ensure the choice matches your requirements. An application/VSS-aware backup has benefits beyond truncating logs: it also allows for incremental/differential backups and for recovery with log replay/roll forward.



If the last point-in-time backup (or snapshot in this case) matches your RPO requirements (i.e., you don't need roll forward), then truncating logs through circular logging, or simple recovery mode for SQL databases, might suffice. In fact, you'll find Microsoft recommendations to enable circular logging where DAGs are used (https://technet.microsoft.com/en-us/library/ee832792(v=exchg.150).aspx).



But if you have requirements to restore with roll forward, you'll need an application that performs both incremental backups and restores along with log truncation (since you won't want circular logging enabled, or simple recovery with SQL). Today something like Cloud Connect does not perform incremental application-level backups, so you would need a third-party product to perform that operation independently.



Happy to discuss further so we can ensure our documentation is clear.



Thanks,

Mike
Hi mmcghee,



Thanks for your reply. Good information around the DAG; I have no experience running Exchange in that configuration, so it's good to know. Putting in a DAG with circular logging is an option.



Our previous schedule was a weekly full backup with nightly incrementals (10pm). If Exchange died irrecoverably at 9pm, we would lose the last 23 hours of email data, and as far as I know this was acceptable to the business. The way I see it, we have the chance to improve on that with the Nutanix platform, as we can schedule snapshots hourly (or more frequently) during business hours, store these locally, and replicate them to the AWS site, keeping them on a one-week retention to overlap our weekly schedules.



With this configuration we can provide a much better RPO (this could be extended to include the SQL servers and associated applications where feasible). That may offset the downsides of moving to circular logging/simple recovery, but I will need to investigate the details.
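
For what it's worth, the RPO improvement is easy to sanity-check with a rough back-of-the-envelope calculation. The numbers below are illustrative only and assume the worst case is a failure just before the next scheduled backup or snapshot:

# Rough worst-case data-loss (RPO) comparison for the two schedules discussed above.
# Illustrative only: assumes the failure happens just before the next protection point.

def worst_case_loss_hours(interval_hours):
    """Worst-case data loss equals the interval between protection points."""
    return interval_hours

print("Nightly incrementals: up to", worst_case_loss_hours(24), "hours of lost data")
print("Hourly snapshots:     up to", worst_case_loss_hours(1), "hour of lost data")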



Thanks for clarifying my options. I've thought a great deal about moving platforms, but I didn't have the 'ah-ha' moment about this piece until recently... lots to learn!



Many thanks,



Daniel
I was logged in to my named account for this last reply; we're just trying to keep everything on a central account.