NVMe SSD + SATA SSD node performance and I/O path

  • 13 October 2021
  • 0 replies


I have 5 nodes, each equipped with 2× 3.2 TB NVMe SSDs and 6× 1.92 TB SATA SSDs. All nodes are connected over a 25 Gb network and were upgraded from AOS 5.15 to 5.20.

According to the Nutanix Bible, in the scenario with NVMe SSD + SATA SSD, the OpLog resides only on the NVMe SSDs, not on the SATA SSDs.

Does that mean all random writes first land on the NVMe SSDs and are then drained to the extent store across all NVMe + SATA SSDs?

And do all sequential writes bypass the OpLog and get written to the extent store in parallel across all NVMe + SATA SSDs?

How about the read I/O path? 

In some tests, I found that when I ran I/O in a VM, all devices (NVMe and SATA) showed I/O counters, and the IOPS were roughly the same on the Prism Hardware > Disk page. It seems the higher speed of the NVMe SSDs is not being utilized.

Do you have any suggestions for how to demonstrate the performance improvement from using NVMe in these nodes?
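One way to isolate the NVMe contribution (just a sketch; the file path, size, and queue depths below are illustrative assumptions, not from this post) is to run a synthetic random-write workload with fio inside a test VM and compare the resulting IOPS and latency against the per-device counters in Prism. Random writes should hit the OpLog on the NVMe devices first, so those counters should lead during the burst:

```ini
; fio job file sketch -- adjust size, iodepth, and filename for your test VM
[global]
ioengine=libaio
direct=1
time_based=1
runtime=60
group_reporting=1

[randwrite-4k]
rw=randwrite
bs=4k
iodepth=32
numjobs=4
size=10G
filename=/fio-testfile
```

Running this while watching the Prism Hardware > Disk page (or `iostat -x 1` on the CVM) should show whether the random-write burst is absorbed by the NVMe devices before draining to the SATA SSDs.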

I also checked the disk status, as follows; it is using SPDK.

Slot  Disk          Model           Serial              Size
----  ------------  --------------  ------------------  ------
0     ------------  --------------  ------------------  ------
1     /dev/sda      SSDSC2KG019T8L  PHYG0506026Z1P9DGN  1.9 TB
2     /dev/sdb      SSDSC2KG019T8L  PHYG050602HB1P9DGN  1.9 TB
3     /dev/sdc      SSDSC2KG019T8L  PHYG0506024J1P9DGN  1.9 TB
4     /dev/sdd      SSDSC2KG019T8L  PHYG050602FT1P9DGN  1.9 TB
5     /dev/sde      SSDSC2KG019T8L  PHYG050602781P9DGN  1.9 TB
6     /dev/sdf      SSDSC2KG019T8L  PHYG050605TU1P9DGN  1.9 TB
7     ------------  --------------  ------------------  ------
8     ------------  --------------  ------------------  ------
9     ------------  --------------  ------------------  ------
10    ------------  --------------  ------------------  ------
11    /dev/nvme1n1  SSDPE2KE032T8L  PHLN0201024T3P2BGN  3.2 TB
12    /dev/nvme0n1  SSDPE2KE032T8L  PHLN020102EP3P2BGN  3.2 TB
