Diagnostic VM Results

  • 17 October 2014
  • 5 replies
  • 5933 views

Userlevel 1
Badge +8
Hello -

We just spun up a 5-node cluster running on the NX-6020 series with HP top-of-rack 10GbE switches. No VM workload is running in the environment. We just ran the diagnostic VMs, and the results are below.

*******************************************************************************
Running test 'Sequential write bandwidth' ...
Begin fio_seq_write: Wed Oct 15 09:53:40 2014
1078 MBps
End fio_seq_write: Wed Oct 15 09:54:40 2014
Duration fio_seq_write : 60 secs
*******************************************************************************
Waiting for the hot cache to flush ......... done.
Running test 'Sequential read bandwidth' ...
Begin fio_seq_read: Wed Oct 15 09:55:11 2014
2391 MBps
End fio_seq_read: Wed Oct 15 09:55:39 2014
Duration fio_seq_read : 28 secs
*******************************************************************************
Waiting for the hot cache to flush ............. done.
Running test 'Random read IOPS' ...
Begin fio_rand_read: Wed Oct 15 09:56:34 2014
77179 IOPS
End fio_rand_read: Wed Oct 15 09:58:16 2014
Duration fio_rand_read : 102 secs
*******************************************************************************
Waiting for the hot cache to flush ... done.
Running test 'Random write IOPS' ...
Begin fio_rand_write: Wed Oct 15 09:58:19 2014
64890 IOPS
End fio_rand_write: Wed Oct 15 10:00:01 2014
Duration fio_rand_write : 102 secs
*******************************************************************************
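In case it helps anyone compare runs, here's a quick Python sketch (nothing official; it just assumes the output format shown above, and the filename for a saved copy of the log is hypothetical) that pulls the headline number out of each test block:

import re

def parse_diag_output(text):
    # Returns {test name: (value, unit)} from diagnostics output like the above.
    pattern = re.compile(r"Running test '([^']+)'.*?^(\d+)\s+(MBps|IOPS)", re.S | re.M)
    return {name: (int(value), unit) for name, value, unit in pattern.findall(text)}

with open("diag_output.txt") as f:   # hypothetical path to a saved copy of the log
    for name, (value, unit) in parse_diag_output(f.read()).items():
        print(f"{name}: {value} {unit}")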

Are these results typical for this hardware? Do the numbers raise any concerns?

Any input would be greatly appreciated.

Thank you.

5 replies

Userlevel 4
Badge +21
Hi rvillarin

With NOS 4.0.1 on ESXi I would expect roughly:

Random read IOPS: ~272,000
Random write IOPS: ~189,055
Sequential read: 9,167.5 MB/s
Sequential write: 2,577.5 MB/s

Your numbers seem off; maybe some of the diag VMs didn't run. I would open a support request.
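As a rough sanity check, a small Python sketch comparing the posted totals against these figures (treating both sets of numbers as directly comparable cluster-wide results, which may not hold, as the next reply notes):

expected = {   # figures quoted above for NOS 4.0.1 on ESXi
    "Random read IOPS": 272000,
    "Random write IOPS": 189055,
    "Sequential read MB/s": 9167.5,
    "Sequential write MB/s": 2577.5,
}
measured = {   # from the diagnostics output in the original post
    "Random read IOPS": 77179,
    "Random write IOPS": 64890,
    "Sequential read MB/s": 2391,
    "Sequential write MB/s": 1078,
}
for name, exp in expected.items():
    print(f"{name}: {measured[name] / exp:.0%} of the quoted figure")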
Badge +1
Looking at the engineering numbers and adjusting for a 5-node NX-6020 cluster, the numbers you are reporting are in line with our internal numbers. I'm not sure which numbers dlink is referencing.
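For illustration, the "adjusting for a 5-node cluster" step is essentially per-node normalization; a small sketch using the posted totals (assuming they are aggregates across all five diagnostic VMs):

nodes = 5
cluster_totals = {   # from the diagnostics output in the original post
    "Sequential write": (1078, "MBps"),
    "Sequential read": (2391, "MBps"),
    "Random read": (77179, "IOPS"),
    "Random write": (64890, "IOPS"),
}
for test, (total, unit) in cluster_totals.items():
    print(f"{test}: {total / nodes:.0f} {unit} per node")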
Badge +2
Thank you for putting together such a concise post; I appreciate the time you put in. I will follow this topic closely and won't hesitate to chime in if something looks off. Thanks again.
See you later, bye bye, and good job.
Userlevel 1
Badge +14
Does Nutanix provide performance test steps and results for different workload scenarios?
I'm struggling with performance testing. I have a 6-node 3060-G4 cluster and I'm using VMware I/O Analyzer (based on Iometer) with 4 workers in the max IOPS & throughput configuration, but my results are much lower than yours: ~60,000 IOPS and ~1,800 MBps throughput.
How do you achieve such impressive results?
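For what it's worth, here is a rough Python sketch of a 4k random-read fio run in the spirit of the "Random read IOPS" test above (the actual diagnostic VM job parameters aren't published in this thread, so block size, queue depth, file path, size, and runtime below are assumptions; it also assumes fio is installed in the test VM):

import json
import subprocess

cmd = [
    "fio",
    "--name=randread",
    "--rw=randread",
    "--bs=4k",                    # small blocks for an IOPS-oriented test
    "--direct=1",                 # bypass the guest page cache
    "--ioengine=libaio",
    "--iodepth=32",               # assumed queue depth
    "--numjobs=4",
    "--runtime=60",
    "--time_based",
    "--size=8g",
    "--filename=/fio/testfile",   # hypothetical path on the test VM's data disk
    "--group_reporting",
    "--output-format=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
print("read IOPS:", int(json.loads(out)["jobs"][0]["read"]["iops"]))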
Badge +5
It would be better to get the Nutanix sales team you are speaking with to run a HammerDB test. It will be a more fruitful test, IMHO.

I also feel that raw IOPS and latency numbers are mainly for marketing and benchmarking purposes. The real s*** happens when you put actual workloads on the cluster and start scratching your head about what's wrong, haha.
