Short answer: no. You would need a Nutanix device on the receiving end to read the data locally and pull it back down.
The typical "DR" example is that the datacenter burns down, so you stand up a new datacenter, put some equipment back in it (in this case, Nutanix), point it at Azure, and pull that data back.
A quick/rough analogy: you back up your data to tape and send it off to a tape storage warehouse, and then your tape drive burns down. You'd need another tape drive of the same or similar specification to read that tape again.
Jon Kohler | Principal Architect, Nutanix | Nutanix NPX #003, VCDX #116 | @JonKohler Please Kudos if useful!
While this is not a qualified method, it is possible to browse the snapshots within Azure and copy them out. We've had a customer stand up a VM within Azure and whitelist that VM to have access to the Cloud Connect instance. They were able to pull out specific VMs as required. Again, this is not a fully supported method.
We unfortunately had to push this feature out to make room for some other really neat Cloud Connect enhancements. The feature is done and the code is checked in, but it is disabled in AOS 5.0. We'll be QA'ing it for a release shortly after.
Quick question on this @Jon: is there also going to be support for backup to other S3-compliant (private) object stores at some point? I realise there is the added complexity of needing the extra CVM to "front" this, but (to create a quick what-if scenario) suppose you had a single Nutanix "backup" node running a CVM, with a large lump of S3-compliant object storage (e.g. Cloudian) behind it? Seems to me like a low-cost backup/deep-archive solution?