[i]Which version of ESXi is being used?[/i] ESXi 5.1 U2. Thank you for your assistance; I will reply ASAP.
>In fact, you can look inside our diagnostic VM (the password is the standard one); it's CentOS 6.5. There are multiple vdisks attached and managed by LVM. This way you can get much better performance: 15-20K IOPS per single VM.

We deployed the diagnostic VM with VMware Workstation, and it has only one vdisk, both in the VM settings and inside the VM. Could you explain in a bit more detail how we can see multiple drives in the diagnostic VM? Today we ran the same kind of test against a spanned volume created from 8 vdisks, but unfortunately we got only 1.4K IOPS for random read. By the way, we have moved all the heavy VMs off Nutanix, so it now handles only VDI and ThinApp VMs.
Thanks for helping me.

>You can never get 50K IOPS on a single VM without multiple disks attached.
>Again, this is normal / expected behaviour by design (covered by multiple guidances, for example MS SQL on Nutanix Best Practices)

Not on a single VM, but we had 4 VMs per node, one for each type of workload (12 VMs in total), and we got approximately 40K IOPS. Those were simple Ubuntu VMs running fio with no advanced configuration.

>The only fast way to fix this issue is to open a support case. Obviously, the situation is not normal.

Thank you for the advice; I've already opened a case for this issue. I'm not saying we have trouble with the NFS container. As I said before, when we mounted the NFS container inside the VM, we got perfect results. We are seeing performance problems only when we read/write from the hypervisor datastore.
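For reference, a random-read workload like the one described above can be expressed as a fio job file. This is a hypothetical reconstruction, not the original test: the device path, queue depth, and runtime are assumptions, since the thread only says the VMs ran fio "with no advanced configuration".

```ini
; Hypothetical fio job approximating the random-read test mentioned above.
; filename, iodepth, and runtime are assumed values - adjust to your setup.
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=32
time_based
runtime=1200

[randread]
rw=randread
filename=/dev/sdb   ; replace with the vdisk or test file actually used
```

Run it with `fio randread.fio` inside the guest; comparing the result against the same job pointed at an NFS mount of the container would isolate the datastore path, as described in the post.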
Thanks for the reply.

>Multiple vdisks should be attached (they can be unified with LVM, for example) to get more performance from VMs, as Nutanix OS limits the oplog size per vdisk (to avoid the "noisy neighbour" problem)

Why, then, does my test VM work fine when I mount the container inside it and run IO tests? It has only one vdisk, and when I run, for example, a sequential write test, I get ~270 MB/s throughput. Also, roughly half a year ago we tested the same model, a 1x NX-3350. I ran 4 VMs on each node: one with a random read load, one with a random write load, one with a sequential write load, and one with a sequential read load, so 12 VMs in total across 3 nodes. I did not hit any per-vdisk oplog limit, because that time we got 50K read IOPS and 30K write IOPS during more than 20 hours of testing. The only difference was the NOS version: it was 3.5.
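On the "unified with LVM" suggestion: a spanned (linear) volume like the 8-vdisk test mentioned earlier fills its physical volumes in order, so much of the I/O can still land on a single vdisk. A striped logical volume spreads every request across all vdisks, which is closer to what the reply is recommending. A minimal sketch, assuming four extra vdisks appear as /dev/sdb through /dev/sde (the device names, volume names, and stripe size are illustrative, and the commands require root on the guest):

```shell
# Sketch: unify four vdisks into one striped LVM volume.
# Device names (/dev/sdb../dev/sde) and stripe size are assumptions.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde

# -i 4 stripes across all four PVs, -I 64 uses a 64 KiB stripe size,
# so I/O (and the per-vdisk oplog) is spread over every vdisk
# instead of concentrating on one.
lvcreate -i 4 -I 64 -l 100%FREE -n lv_data vg_data

mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/data
```

These are privileged provisioning commands against real block devices, so treat them as a template rather than something to paste verbatim; `lvs -o +stripes,stripe_size` can confirm the striping afterwards.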