I’m new to Nutanix and we have a Nutanix cluster online. For testing purposes I’m running DiskSpd tests in a VM.
I’m running the same DiskSpd tests with identical VM configurations in several environments: 3-tier Hyper-V, 3-tier VMware, and HCI Nutanix AHV.
Why does running a test with the -Sh flag (software caching and hardware write caching disabled) give almost 10 times lower results on Nutanix compared to the other platforms? Without the -Sh flag, Nutanix produces higher results than my other test platforms. Is it because of how the CVM is designed to work? I’m just trying to understand.
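For context, the kind of test being discussed looks roughly like the sketch below. The file path, sizes, and durations are placeholder values, not taken from the original post; only the flags themselves are standard DiskSpd options.

```shell
# Random 8K mixed read/write test: 4 threads, 8 outstanding I/Os per
# thread, 30% writes, latency statistics enabled (-L).
# -Sh disables software caching and hardware write caching, which is
# the flag the question is about.
# D:\testfile.dat, -c10G and -d120 are placeholder values.
diskspd.exe -b8K -d120 -t4 -o8 -r -w30 -Sh -c10G -L D:\testfile.dat
```

Without -Sh, Windows is free to satisfy I/Os from its own cache, which is why cached runs can look much faster than the underlying storage really is.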
Best answer by Alona:
DiskSpd was developed to test physical storage, and that’s something to keep in mind. Go through KB-9653, [Performance] Benchmarking storage with Microsoft Diskspd, and see if that makes things any clearer. Let me know if you still have questions.
I was able to get similar DiskSpd results when using a volume group on Nutanix AHV and a legacy 3-tier Hyper-V virtual machine with the test spread across several virtual disks. Although when using several SAN storage LUNs and spreading the virtual disks across those, I was able to achieve better results with SAN storage.
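For anyone trying to reproduce this setup, a multi-vdisk volume group can be sketched with acli roughly as follows. The names "bench-vg" and "bench-vm" and the disk size are placeholders, and the exact acli syntax should be verified against your AOS version:

```shell
# Create a volume group with four vdisks and attach it to a test VM.
# "bench-vg", "bench-vm" and create_size are placeholder values;
# confirm the acli syntax for your AOS release before running.
acli vg.create bench-vg
for i in 1 2 3 4; do
  acli vg.disk_create bench-vg create_size=100G
done
acli vg.attach_to_vm bench-vg bench-vm
```

Spreading the DiskSpd target files across the resulting vdisks is what lets the test drive more than one queue on the Nutanix side.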
Is it the case that on Nutanix AHV a single virtual disk is limited to the performance of one physical disk, while a volume group with several vdisks has its load balanced across several Stargate processes and physical disks?
The answer to your question is: not really. The virtual and physical layers are not directly (or easily) mapped to each other. As for balancing across several Stargate processes, data locality principles ensure that data is served from the local node, which also means by the same Stargate process.