Hi
Continuing this thread, and focusing on RAID1.
I got an HPE ProLiant Gen10+ that has hardware RAID support (I can turn it off if I want).
What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all.
I am planning two groupings of RAID1 (it has 4 bays).
There is also an internal USB boot port.
So I am really a newbie at working with RAID. From this thread it sounds like I want /boot and /boot/efi on that USB boot device.
I suggest using the USB device only to boot the installation medium, and not for anything used by the OS.
Will it work to put / on the first RAID group? What happens if the first drive fails and is replaced with a new blank drive? Will the config in /boot figure this out, or does the RAID hardware completely mask the two drives so that the system runs on the good one while the new one is being rebuilt?
I guess the best thing would be to use Linux Software RAID and create a small RAID1 device (MD0) for /boot and another one (MD1) for /boot/efi, both at the beginning of disks 0 and 1. The remaining space on disks 0 and 1 becomes another RAID1 device (MD2), and disks 2 and 3 together form one more RAID1 device (MD3). Formatting can be done like this:
MD0 has filesystem for /boot
MD1 has filesystem for /boot/efi
MD2 is used as LVM PV
MD3 is used as LVM PV
All other filesystems like / or /var or /home... will be created on LVM Logical Volumes to give you full flexibility to manage storage.
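To illustrate, the manual equivalent of that layout would look roughly like the commands below. The installer can create all of this for you; this is only a sketch which assumes the four disks show up as /dev/sda-/dev/sdd and that disks 0 and 1 each carry three partitions (ESP, /boot, rest). Device names, sizes and VG/LV names are examples only, adjust them for your setup.

  # /boot/efi on MD1: metadata 1.0 puts the md superblock at the end of the
  # members, so the firmware still sees a plain FAT filesystem on each disk
  mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
  mkfs.vfat -F 32 /dev/md1

  # /boot on MD0
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.xfs /dev/md0

  # Remaining space on disks 0/1 (MD2) and all of disks 2/3 (MD3) become LVM PVs
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  pvcreate /dev/md2 /dev/md3
  vgcreate vg_system /dev/md2
  vgcreate vg_data   /dev/md3

  # Logical volumes for /, /var, /home, ... (sizes are placeholders)
  lvcreate -L 30G  -n root vg_system
  lvcreate -L 20G  -n var  vg_system
  lvcreate -L 200G -n home vg_data
  mkfs.xfs /dev/vg_system/root
  mkfs.xfs /dev/vg_system/var
  mkfs.xfs /dev/vg_data/home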
Regards, Simon
I also don't see how to build that boot USB stick. I will have the install ISO in the USB boot port and the 4 drives set up with hardware RAID. How does the installer figure things out? I am missing some important piece here.
Oh, HP does list Red Hat support for this unit.
Thanks for all the help.
Bob
On 1/6/23 11:45, Chris Adams wrote:
Once upon a time, Simon Matter <simon.matter@invoca.ch> said:
Are you sure that's still true? I've done it that way in the past, but it seems that at least with EL8 you can put /boot/efi on md RAID1 with metadata format 1.0. That way the EFI firmware will see it as two independent FAT filesystems. The only thing you have to be sure of is that nothing ever writes to these filesystems when Linux is not running, otherwise your /boot/efi md RAID will become corrupt.
Can someone who has this running confirm that it works?
Yes, that's even how RHEL/Fedora set it up currently I believe. But like you say, it only works as long as there's no other OS on the system and the UEFI firmware itself is never used to change anything on the FS. It's not entirely clear that most UEFI firmwares would handle a drive failure correctly either (since it's outside the scope of UEFI), so IIRC there's been some consideration in Fedora of dropping this support.
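For what it's worth, you can check whether an existing ESP array really uses the 1.0 superblock with something like the following (the md and partition names here are just placeholders for whatever your system uses):

  cat /proc/mdstat            # active arrays and their members
  mdadm --detail /dev/md1     # should report "Version : 1.0"
  mdadm --examine /dev/sda1   # per-member view; 1.0 stores the superblock at
                              # the end, leaving the FAT header at the start intact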
And... I'm not sure if GRUB2 handles RAID 1 /boot fully correctly, for things where it writes to the FS (grubenv updates for "savedefault" for example). But, there's other issues with GRUB2's FS handling anyway, so this case is probably far down the list.
I think that having RAID 1 for /boot and/or /boot/efi can be helpful (and I've set it up, definitely not saying "don't do that"), but has to be handled with care and possibly (probably?) would need manual intervention to get booting again after a drive failure or replacement.
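As a very rough sketch of that manual intervention, assuming the layout Simon described, that the failed member was /dev/sdb, and that the replacement disk is blank, recovery could look something like this:

  # Copy the GPT from the surviving disk to the new one, then randomize GUIDs
  sgdisk -R /dev/sdb /dev/sda   # note: destination first, source second
  sgdisk -G /dev/sdb

  # Add the new partitions back into the arrays and let them resync
  mdadm --manage /dev/md1 --add /dev/sdb1   # /boot/efi
  mdadm --manage /dev/md0 --add /dev/sdb2   # /boot
  mdadm --manage /dev/md2 --add /dev/sdb3   # LVM PV
  cat /proc/mdstat                          # watch the rebuild progress

  # The firmware may also need a boot entry pointing at the new disk's ESP
  efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (sdb)" -l '\EFI\centos\shimx64.efi'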