Hi list,
I'm building an NFS server on top of CentOS 8. It has 8 x 8 TB HDDs and 2 x 500 GB SSDs. The spinning drives are in a RAID-6 array with a 4K sector size. The SSDs are in a RAID-1 array with a 512-byte sector size.
I want to use the SSDs as a cache using dm-cache. So here's what I've done so far:
/dev/sdb ==> SSD RAID-1 array
/dev/sdd ==> spinning RAID-6 array
I've added "allow_mixed_block_sizes = 1" to lvm.conf to be able to put sdb and sdd in the same VG (because of the sector size mismatch). Since each LV will use one and only one PV, I guess that's OK.
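For reference, the option goes in the devices section of /etc/lvm/lvm.conf, and the VG is then created as usual:

devices {
    allow_mixed_block_sizes = 1
}

# vgcreate VGnfs /dev/sdb /dev/sdd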
# lvcreate -L 500M -n lv_cache_meta VGnfs /dev/sdb
# lvcreate -l +100%FREE -n lv_cache VGnfs /dev/sdb
# lvconvert --type cache-pool /dev/VGnfs/lv_cache --poolmetadata /dev/VGnfs/lv_cache_meta
# lvcreate -l +100%FREE -n LVnfs VGnfs /dev/sdd
# lvconvert --type cache /dev/VGnfs/LVnfs --cachepool /dev/VGnfs/lv_cache
# lvconvert --cachemode writeback --type cache /dev/VGnfs/LVnfs --cachepool /dev/VGnfs/lv_cache
# mkfs.xfs /dev/VGnfs/LVnfs
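A quick sanity check of the result (LVnfs should show up with segment type "cache"; dmsetup prints the raw dm-cache counters, and VGnfs-LVnfs is just the default VG-LV device naming):

# lvs -a -o lv_name,lv_size,segtype,devices VGnfs
# dmsetup status VGnfs-LVnfs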
And now I'm asking myself whether I'm doing something somewhat dangerous for a server that will be critical once in production :] The server has 128 GB of RAM, so maybe caching in memory will be sufficient to achieve good performance?
What do you think? Do you use something similar?
Thanks!
Hi,
> Hi list,
> I'm building an NFS server on top of CentOS 8. It has 8 x 8 TB HDDs and 2 x 500 GB SSDs. The spinning drives are in a RAID-6 array with a 4K sector size. The SSDs are in a RAID-1 array with a 512-byte sector size.
> I want to use the SSDs as a cache using dm-cache. So here's what I've done so far:
> /dev/sdb ==> SSD RAID-1 array
> /dev/sdd ==> spinning RAID-6 array
Looks like you're using a hardware RAID controller, right?
> I've added "allow_mixed_block_sizes = 1" to lvm.conf to be able to put sdb and sdd in the same VG (because of the sector size mismatch). Since each LV will use one and only one PV, I guess that's OK.
> # lvcreate -L 500M -n lv_cache_meta VGnfs /dev/sdb
> # lvcreate -l +100%FREE -n lv_cache VGnfs /dev/sdb
> # lvconvert --type cache-pool /dev/VGnfs/lv_cache --poolmetadata /dev/VGnfs/lv_cache_meta
> # lvcreate -l +100%FREE -n LVnfs VGnfs /dev/sdd
> # lvconvert --type cache /dev/VGnfs/LVnfs --cachepool /dev/VGnfs/lv_cache
> # lvconvert --cachemode writeback --type cache /dev/VGnfs/LVnfs --cachepool /dev/VGnfs/lv_cache
> # mkfs.xfs /dev/VGnfs/LVnfs
> And now I'm asking myself whether I'm doing something somewhat dangerous for a server that will be critical once in production :]
I can't comment here, as I've never used SSDs as a cache the way you do.
> The server has 128 GB of RAM, so maybe caching in memory will be sufficient to achieve good performance?
The 128 GB will of course help as a read cache, but what about the write cache? If your RAID hardware has a battery/flash backed write cache and it is big enough, you could set it to write-back mode and make use of it. I'm not sure your proposed configuration is safe enough.
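If it's a PERC/MegaRAID class controller, something like this should show and set the write cache policy (perccli syntax from memory, and the controller/VD numbers are just examples):

# perccli /c0/vall show
# perccli /c0/v0 set wrcache=wb

With wrcache=wb the controller falls back to write-through automatically if the battery goes bad, which is the safe variant.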
Regards, Simon
On 24/03/2020 17:37, Simon Matter via CentOS wrote:
> Hi,
>> Hi list,
>> I'm building an NFS server on top of CentOS 8. It has 8 x 8 TB HDDs and 2 x 500 GB SSDs. The spinning drives are in a RAID-6 array with a 4K sector size. The SSDs are in a RAID-1 array with a 512-byte sector size.
>> I want to use the SSDs as a cache using dm-cache. So here's what I've done so far:
>> /dev/sdb ==> SSD RAID-1 array
>> /dev/sdd ==> spinning RAID-6 array
> Looks like you're using a hardware RAID controller, right?
Yes, it's a "PERC H740P Mini" with 8 GB of cache.
>> I've added "allow_mixed_block_sizes = 1" to lvm.conf to be able to put sdb and sdd in the same VG (because of the sector size mismatch). Since each LV will use one and only one PV, I guess that's OK.
>> # lvcreate -L 500M -n lv_cache_meta VGnfs /dev/sdb
>> # lvcreate -l +100%FREE -n lv_cache VGnfs /dev/sdb
>> # lvconvert --type cache-pool /dev/VGnfs/lv_cache --poolmetadata /dev/VGnfs/lv_cache_meta
>> # lvcreate -l +100%FREE -n LVnfs VGnfs /dev/sdd
>> # lvconvert --type cache /dev/VGnfs/LVnfs --cachepool /dev/VGnfs/lv_cache
>> # lvconvert --cachemode writeback --type cache /dev/VGnfs/LVnfs --cachepool /dev/VGnfs/lv_cache
>> # mkfs.xfs /dev/VGnfs/LVnfs
>> And now I'm asking myself whether I'm doing something somewhat dangerous for a server that will be critical once in production :]
> I can't comment here, as I've never used SSDs as a cache the way you do.
>> The server has 128 GB of RAM, so maybe caching in memory will be sufficient to achieve good performance?
> The 128 GB will of course help as a read cache, but what about the write cache? If your RAID hardware has a battery/flash backed write cache and it is big enough, you could set it to write-back mode and make use of it.
Yes it has; the current mode is write-back until the battery condition becomes problematic.
> I'm not sure your proposed configuration is safe enough.
Yes, I'm thinking about abandoning it, especially since I'm achieving acceptable performance without another layer of cache. I was just wondering if other people were using it in production.
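In case I do back it out: as far as I understand, LVM can detach the cache cleanly, flushing dirty blocks back to the origin, with a single command, something like:

# lvconvert --uncache VGnfs/LVnfs

(Untested on this box so far, so take it with a grain of salt.)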
Thanks anyway, take care
kfx
> Regards, Simon