According to the material I have on hand, Nutanix keeps a copy of required data in local storage. So if some storage-intensive VMs are running on one node and that node's local storage is not enough for all the local VMs' data, would some data still need to be retrieved from a remote node? I'd like to know the exact behavior.
And one more question about DR. If I use vSphere, do I still need to buy an SRM license, or is SRM optional (e.g., only needed for some dedicated features)? For Hyper-V, since its DR features are not as rich as VMware's, can we still use the Nutanix "Protection Domain" feature for DR on Hyper-V?
Thanks a lot in advance!
Best answer by Jon
RE "Big VMs" - here's a rough, high-level example:
If each node had 10TB of capacity and you had a 4-node system, you'd have 40TB of storage.
On any hypervisor we support, the "datastore" (aka container) will show up as 40TB. You could provision a very large VM, perhaps with 20TB of capacity assigned to it, just fine.
Obviously this is bigger than a single node, so our software will monitor the data's access patterns, keep the hottest data local to that node, and place the rest of the data elsewhere.
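The capacity math above can be sketched as follows (a minimal illustration using the numbers from the example; nothing here is a Nutanix API, just plain arithmetic):

```python
# Illustrative sketch of the pooled-capacity example above.
NODE_CAPACITY_TB = 10
NODE_COUNT = 4

# All nodes' storage is pooled into one "container" (datastore).
cluster_capacity_tb = NODE_CAPACITY_TB * NODE_COUNT  # 40 TB

vm_disk_tb = 20  # a single large VM, bigger than any one node

# The VM fits in the pool even though it exceeds one node's local storage,
# so some of its data must necessarily live on remote nodes.
assert vm_disk_tb <= cluster_capacity_tb
assert vm_disk_tb > NODE_CAPACITY_TB
print(cluster_capacity_tb)  # 40
```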
Now, for example, if that 20TB VM were 100% active data all the time (which is seen in applications like EPIC Cache), this would be a perfect use case for all-flash everywhere, so that all data, local or remote, would be in flash. That is not the case in most applications, where much of the data is actually cold and only a subset (known as the "working set") is actually hot.
From a sizing perspective, if you don't go all-flash, you'll want to make sure you've got enough SSD capacity to cover the working-set size.
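That rule of thumb can be sketched numerically (the 10% working-set fraction and the helper name are assumptions for illustration, not Nutanix sizing guidance):

```python
def min_ssd_per_node_tb(total_data_tb: float,
                        working_set_fraction: float,
                        node_count: int) -> float:
    """Rough hybrid-cluster sizing: enough SSD across the cluster
    to hold the hot working set, spread evenly per node."""
    working_set_tb = total_data_tb * working_set_fraction
    return working_set_tb / node_count

# Example: 20 TB of VM data, assume 10% of it is hot, 4 nodes.
print(min_ssd_per_node_tb(20, 0.10, 4))  # 0.5 TB of SSD per node
```

In practice you would measure the actual working set rather than guessing a fraction, but the arithmetic is the same.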
SRM is optional, not strictly required. It is great at providing orchestration, essentially making your runbooks digital. We have a certified SRA (Storage Replication Adapter), so if you choose to use SRM, that's great.
If not, you'll still have great built-in per-VM replication, but not (yet) built-in orchestration.