[CentOS] raid 5 install

Roberto Ragusa mail at robertoragusa.it
Fri Jun 28 21:54:20 UTC 2019


On 6/28/19 4:46 PM, Blake Hudson wrote:
> 
> Unfortunately, I've never had Linux software RAID improve availability - it has only decreased availability for me. This has been due to a combination of hardware and software issues that are generally handled well by HW RAID controllers, but are often handled poorly or unpredictably by desktop oriented hardware and Linux software.

I have to add my data point, and it is an opposite experience.

Software RAID1 and RAID5 (and RAID10) have done their job perfectly for me, with disks failing
and being replaced without issues; nor has a resync ever been a very noticeable speed degradation.
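
For reference, replacing a failed member is just a few commands (a minimal
sketch; the device and array names are illustrative):

  # mark the failed member and pull it out of the array
  mdadm /dev/md0 --fail /dev/sdb2
  mdadm /dev/md0 --remove /dev/sdb2
  # partition the replacement disk identically, then add it back;
  # the resync starts automatically
  mdadm /dev/md0 --add /dev/sdb2
  # watch the rebuild progress
  cat /proc/mdstat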

On the other hand, hardware RAID boards have always been a disaster for me:
slow and clunky BIOS utilities, drives pushed out of the array at random,
SMART data no longer accessible, and undocumented "formatting" headers on the
drives, so good luck finding an identical controller when the board dies (yeah,
with a battery onboard, not the best component for years of reliability...).

It is always software RAID for me. Software RAID + LVM on top is great.
For example: a RAID1 across sda1, sdb1, sdc1, sdd1 for /boot (yes, a 4-disk
RAID1; have a look at the "mdadm -e" metadata option, since a format such as
0.90 or 1.0 keeps the superblock at the end of the partition, making it
bootable without the bootloader even knowing it is a RAID). Then create
sd{a,b,c,d}{2,3,4,5,6,...} partitions of reasonable sizes (e.g. 500GB),
composed as you prefer: RAID1 between sda2-sdb2, RAID1 between sdc2-sdd2,
RAID5 between sda3-sdb3-sdc3-sdd3, RAID5 between sda4-sdb4-sdc4-sdd4, and so
on. Finally, run pvcreate on the RAID assemblies to place your vgs and lvs;
a sketch of the commands follows.
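
Something like this (a rough sketch matching the layout above; the array
names, sizes, and the vg0/data names are made up for illustration):

  # /boot: 4-disk RAID1, superblock at the end so the bootloader
  # sees a plain filesystem
  mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  # a RAID1 pair and a 4-disk RAID5 from the other partitions
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm --create /dev/md2 --level=5 --raid-devices=4 \
        /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
  # LVM on top of the arrays
  pvcreate /dev/md1 /dev/md2
  vgcreate vg0 /dev/md1 /dev/md2
  lvcreate -L 100G -n data vg0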
Any movement or enlargement of a filesystem will be easy thanks to LVM.
Any drive failure will be easy to handle thanks to the software RAID.
You basically never need to turn off the system anymore.
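
For instance, growing a filesystem online becomes a one-liner (assuming the
vg0/data LV from the sketch above):

  # extend the LV by 50GB and grow the filesystem in the same step,
  # while it stays mounted
  lvextend -r -L +50G /dev/vg0/data
  # or migrate all data off an array, online, before retiring it
  pvmove /dev/md1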

Regards.
-- 
    Roberto Ragusa    mail at robertoragusa.it

