[CentOS] RAID1 setup

Sun Jan 8 23:16:31 UTC 2023
Robert Moskowitz <rgm at htt-consult.com>

Continuing this thread, and focusing on RAID1.

I got an HPE ProLiant Gen10+ that has hardware RAID support (I can turn 
it off if I want).

I am planning two groupings of RAID1 (it has 4 bays).

There is also an internal USB boot port.

So I am really a newbie in working with RAID.  From this thread it 
sounds like I want /boot and /boot/efi on that USB boot device.

Will it work to put / on the first RAID group?  What happens if the 1st 
drive fails and it is replaced with a new blank drive?  Will the config 
in /boot figure this out, or does the RAID hardware completely mask the 2 
drives, so the system runs on the good one while the new one is being 
rebuilt?
I also don't see how to build that boot USB stick.  I will have the 
install ISO in the boot USB port and the 4 drives set up with hardware 
RAID.  How are things figured out?  I am missing some important piece here.

Oh, HPE does list Red Hat support for this unit.

Thanks for any help.


On 1/6/23 11:45, Chris Adams wrote:
> Once upon a time, Simon Matter <simon.matter at invoca.ch> said:
>> Are you sure that's still true? I've done it that way in the past but it
>> seems at least with EL8 you can put /boot/efi on md raid1 with metadata
>> format 1.0. That way the EFI firmware will see it as two independent FAT
>> filesystems. Only thing you have to be sure is that nothing ever writes to
>> these filesystems when Linux is not running, otherwise your /boot/efi md
>> raid will become corrupt.
>> Can someone who has this running confirm that it works?
> Yes, that's even how RHEL/Fedora set it up currently I believe.  But
> like you say, it only works as long as there's no other OS on the system
> and the UEFI firmware itself is never used to change anything on the FS.
> It's not entirely clear that most UEFI firmwares would handle a drive
> failure correctly either (since it's outside the scope of UEFI), so IIRC
> there's been some consideration in Fedora of dropping this support.
> And... I'm not sure if GRUB2 handles RAID 1 /boot fully correctly, for
> things where it writes to the FS (grubenv updates for "savedefault" for
> example).  But, there's other issues with GRUB2's FS handling anyway, so
> this case is probably far down the list.
> I think that having RAID 1 for /boot and/or /boot/efi can be helpful
> (and I've set it up, definitely not saying "don't do that"), but has to
> be handled with care and possibly (probably?) would need manual
> intervention to get booting again after a drive failure or replacement.
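For anyone following along, the setup Simon describes above can be sketched roughly as follows.  This is an untested outline, not a verified procedure: the device names (/dev/sda1, /dev/sdb1, /dev/md0) are placeholders, and the partitions would already need to be typed as EFI System Partitions.

```shell
# Create a RAID1 array for /boot/efi using metadata format 1.0.  That
# format stores the md superblock at the END of each member device, so
# the EFI firmware (which knows nothing about md RAID) sees each member
# as an ordinary FAT filesystem.  Device names here are placeholders.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1

# Format the array as FAT32, as required for an EFI System Partition.
mkfs.vfat -F 32 /dev/md0

# Mount it where the EFI System Partition normally lives.
mount /dev/md0 /boot/efi
```

As noted above, the caveat is that nothing outside Linux (the firmware, another OS) may ever write to either member, or the mirror silently diverges and the md array becomes corrupt.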