It mainly depends on what happens after the failure, but the chance of losing data does exist, and you should definitely verify the exact state of the nodes and CVMs with the support guys.
There is no right answer or workaround here.
For example, a few days ago an NTC fellow found himself in the exact situation you described: while one of the nodes was in maintenance mode, another one was rebooted. One node ended up in a faulted state, and the rebooted one came back online but with about 200 VMs down.
With the help of support staff he rebuilt the failed node with a Phoenix ISO with AOS and the hypervisor embedded, and everything went fine, but... I really don't want to find myself in a situation like that, with about 200 VMs down... you know...
The best rule I can suggest is:
for any cluster with more than 5 nodes, use RF3 (FT2) at the cluster level, with two containers: one container on RF3 for the critical VMs, and the rest of the VMs on an RF2 container.
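Just as a rough sketch of what that setup looks like from the CVM command line (the exact ncli parameter names can vary by AOS version and the container/storage-pool names below are only placeholders, so check ncli help and the CLI reference before running anything):

# Check the current cluster redundancy/fault tolerance state
ncli cluster get-redundancy-state

# Raise the cluster to FT2 / RF3 (needs 5+ nodes and enough free capacity)
ncli cluster set-redundancy-state desired-redundancy-factor=3

# Dedicated RF3 container for the critical VMs (names are placeholders)
ncli container create name=ctr-critical-rf3 sp-name=default-storage-pool rf=3

# Everything else stays on an RF2 container
ncli container create name=ctr-general-rf2 sp-name=default-storage-pool rf=2

Keep in mind that RF3 costs you a third copy of the data for everything placed on that container, which is why I'd keep it only for the VMs you really cannot afford to lose.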