Hello.
My English is not good, so I am using Google Translate; please treat any semantic distortion with understanding.
The problem is the following: we have deployed an electronic document management system (below, DV) on our Nutanix cluster for testing. To test DV performance, we began filling DV with a large number of copies of several files:
File         Quantity    Size
file1.pdf    1,699,713   314 KB (321,942 bytes)
file2.png      998,001   203 KB (208,689 bytes)
file3.pdf    1,138,988   389 KB (398,354 bytes)
file4.pdf    1,007,002   504 KB (516,306 bytes)
file5.tiff   2,271,889   571 KB (584,872 bytes)
The mechanism of operation is as follows: a loading script writes each file through the DV database. The database creates a document record and writes the file itself to an SMB shared folder on another server. The server with the SMB shared folder is also a virtual machine on the same Nutanix cluster.
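Roughly, the loading process looks like the sketch below. This is only an illustration: the paths, the copy counts taken from the table above, and the register_in_dv() call are placeholders for our environment, not the real DV API.

```python
import shutil
from pathlib import Path

# Placeholder paths: the real target is an SMB shared folder on another Nutanix VM.
SOURCE_DIR = Path("./source_files")          # the five sample files
SMB_SHARE = Path("//fileserver/dv_storage")  # mounted SMB share (placeholder)

COPIES = {
    "file1.pdf": 1_699_713,
    "file2.png": 998_001,
    "file3.pdf": 1_138_988,
    "file4.pdf": 1_007_002,
    "file5.tiff": 2_271_889,
}

def register_in_dv(file_name: str, copy_index: int) -> str:
    """Placeholder for the DV database call that creates a document record
    and returns the file name to use on the share."""
    return f"{Path(file_name).stem}_{copy_index}{Path(file_name).suffix}"

for name, count in COPIES.items():
    src = SOURCE_DIR / name
    for i in range(count):
        target = SMB_SHARE / register_in_dv(name, i)
        # Every copy is byte-for-byte identical to its source file,
        # so the data set should be almost perfectly deduplicatable.
        shutil.copyfile(src, target)
```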
In total, 2.6 TiB was uploaded with the specified set of files, yet the deduplication ratio shown for the disk is 1:1 and only 12.8 GiB was saved. In my opinion, this is inadequate work of the deduplication mechanism.
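To show why I consider the savings inadequate, here is a rough calculation (a sketch; it uses only the counts and sizes from the table above and the 12.8 GiB savings figure reported for the disk):

```python
# Rough estimate of the expected deduplication effect for the data set above.
files = {
    # name: (copies, size in bytes)
    "file1.pdf": (1_699_713, 321_942),
    "file2.png": (998_001, 208_689),
    "file3.pdf": (1_138_988, 398_354),
    "file4.pdf": (1_007_002, 516_306),
    "file5.tiff": (2_271_889, 584_872),
}

logical = sum(copies * size for copies, size in files.values())
unique = sum(size for _, size in files.values())   # only 5 distinct files exist

print(f"Logical data written: {logical / 2**40:.2f} TiB")  # about 2.8 TiB
print(f"Unique data:          {unique / 2**20:.2f} MiB")   # about 2 MiB
print(f"Reported savings:     {12.8 * 2**30 / logical:.1%} of logical data")  # under 0.5%
```

In other words, almost all of the written data consists of identical copies, yet less than half a percent of it was deduplicated.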
Secondary questions:
2.1. For some reason, with this kind of workload the system shows a slow file write speed of 5.5-8.3 Mbps. Even allowing for writes that bypass the cache, I expected the write speed to be higher. In fact it should be very fast if we assume that, for deduplicated data, the system only needs to create references in Nutanix metadata to blocks that are already stored (see the write speed-test sketch after this list).
2.2. To increase the write speed, I tried the File Server built into Nutanix. The write speed was 3.8 Mbps. I do not understand why the write speed is so low.
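For reference, this is the kind of direct check I can run against the same share to separate the DV script overhead from the storage path itself (a sketch; the share path and the test size are placeholders):

```python
import os
import time

SHARE_PATH = "//fileserver/dv_storage/speedtest.bin"  # placeholder path on the SMB share
CHUNK = b"\0" * (1024 * 1024)                          # 1 MiB write buffer
TOTAL_MIB = 512                                        # write 512 MiB in total

start = time.time()
with open(SHARE_PATH, "wb") as f:
    for _ in range(TOTAL_MIB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # make sure the data has really left the client cache
elapsed = time.time() - start

print(f"Sequential write speed: {TOTAL_MIB / elapsed:.1f} MiB/s")
```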
Please help me understand the reason for this behavior of the deduplication function and, if possible, fix it.
Thanks, Ildar.