On 13 September 2017 at 09:25, hw hw@gc-24.de wrote:
John R Pierce wrote:
On 9/9/2017 9:47 AM, hw wrote:
Isn't it easier for SSDs to write small chunks of data at a time? A small chunk might fit into free space more easily than a large one, which needs to be spread out all over the place.
The SSD collects the data blocks being written, and once a full flash erase block worth of data has accumulated, often 256K to several MB, it writes them all at once to a single contiguous block on the flash array, no matter what the 'addresses' of the blocks being written are. Think of it as a 'scatter-gather' operation.
Different drive brands and models use different strategies for this, and all of it is completely opaque to the host OS, so you really can't outguess or manage this process at the OS or disk controller level.
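To make that 'scatter-gather' idea concrete, here is a toy model of the coalescing step in Python. Everything in it (the class, the 8-page block size, the mapping table) is purely illustrative; a real FTL is far more complex and, as said, opaque:

    # Toy model of write coalescing: buffer incoming logical writes and
    # program them as one contiguous flash block, whatever their addresses.
    FLASH_BLOCK_PAGES = 8          # pages per flash block; real drives use hundreds

    class ToyFTL:
        def __init__(self):
            self.pending = []      # (logical_page, data) awaiting a full block
            self.mapping = {}      # logical page -> (flash block, offset)
            self.next_block = 0

        def write(self, logical_page, data):
            self.pending.append((logical_page, data))
            if len(self.pending) == FLASH_BLOCK_PAGES:
                self._program_block()

        def _program_block(self):
            # One contiguous program operation on the flash array.
            for offset, (page, _data) in enumerate(self.pending):
                self.mapping[page] = (self.next_block, offset)
            self.next_block += 1
            self.pending.clear()

    ftl = ToyFTL()
    for page in (17, 3, 250, 42, 8, 99, 1, 60):   # scattered logical addresses
        ftl.write(page, b"x")
    print(ftl.mapping)   # all eight pages land contiguously in flash block 0

In this toy, once a block is full it is programmed and the next write simply starts filling a fresh one.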
What if the collector is full?
I understand that small chunk sizes can reduce performance because many more chunks have to be handled. Large chunks mean reading and writing larger amounts of data on every access, which can also reduce performance.
With a chunk size of 1MB, disk access could involve huge amounts of data being read and written unnecessarily. So what might be a good chunk size for SSDs?
It will depend on the type of SSD. Ones with large caches and various smarts (SAS enterprise types) can take many different sizes. For SATA ones it depends on the cache and write-block sizes of the particular SSD, and very few of them seem to be the same. The SSD also has all kinds of logic that constantly moves data around on the flash to wear-level it, which makes it opaque. The people who have tested this usually have to burn through a set of SSDs to get an idea about a particular 'run' of a model, and even that doesn't cover every revision of a given SATA SSD model.
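If you want a number for your own hardware rather than a guess, the crude userspace version of such a test is to time sequential writes at a few chunk sizes. This is only a sketch (the path and sizes are placeholders, and it goes through the page cache; a serious test would use fio with direct I/O), but it shows the shape of it:

    import os, time

    PATH = "/mnt/ssd/chunk_test.bin"   # placeholder: a file on the SSD under test
    TOTAL = 256 * 1024 * 1024          # write 256 MiB at each chunk size

    for chunk in (4 * 1024, 64 * 1024, 1024 * 1024):
        buf = os.urandom(chunk)
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        start = time.perf_counter()
        written = 0
        while written < TOTAL:
            written += os.write(fd, buf)
        os.fsync(fd)                   # make sure the drive actually gets the data
        elapsed = time.perf_counter() - start
        os.close(fd)
        print(f"{chunk:>8} B chunks: {TOTAL / elapsed / 1e6:7.1f} MB/s")
    os.unlink(PATH)

Run it a few times; variance between runs is partly the drive's internal housekeeping showing through.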
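You can also look at what the drive advertises to the Linux kernel. On many SATA SSDs optimal_io_size comes back as 0, i.e. the drive tells you nothing, which is the opaqueness in a nutshell. A quick peek, assuming the device is sda:

    from pathlib import Path

    queue = Path("/sys/block/sda/queue")   # assumes the drive is sda
    for attr in ("logical_block_size", "physical_block_size",
                 "minimum_io_size", "optimal_io_size"):
        print(attr, "=", (queue / attr).read_text().strip())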