I have found a:
HPE 873830-S01 ProLiant MicroServer Gen10
for <$300 without drives. If I can believe the seller, it has an AMD Opteron X3216 dual-core 1.6GHz and 8GB of RAM installed.
It has 4 3.5" bays. and 1? "Media" bay?
https://www.servertechsupply.com/873830-s01/
This could well be acceptable. Got to find out the power draw. Looks like ~40W.
Any input on issues of OS install? Do I go with separate OS and data RAID1 sets?
Also, HPE bundles ClearOS. I ran ClearOS 6 for years before going with QNAP turnkey. Perhaps current ClearOS is better, but it does not handle multi-domain email as I need. Or it did not. So I am going to install my own CentOS variant and iRedMail...
thanks
On 1/5/23 13:08, Jon LaBadie wrote:
On Thu, Jan 05, 2023 at 08:18:08AM -0500, Robert Moskowitz wrote:
Proliant gen8 does NOT have UEFI.
So I think this means I better move up to the gen10...
I'm pleased with my Gen 10+ (Plus). Pricier than I thought you'd want.
I like the "no power supply", just an external brick. Quiet. I put in a PCI card to use two NVMe sticks, one for the system and one for /home. You can also boot from an internal USB port, like a thumb drive permanently installed.
Carriers for 2.5-inch SSD drives work fine, so with few fans and no drive motors, it could be quite low power.
Mine acts as an email server, a local caching DNS server (dnsmasq), and an Amanda backup server.
Jon
Hi
Any input on issues of OS install? Do I go with separate OS and data RAID1 sets?
I usually do
[ 1(+n) RAID1 ]->[ LVM ]->[ XFS ]
then you can use LVM to manage different filesystems as required.
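A minimal sketch of that stack, assuming two blank partitions /dev/sda1 and /dev/sdb1 and a volume group called vg0 (all names are illustrative, not a tested recipe):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md0             # put LVM on top of the mirror
vgcreate vg0 /dev/md0
lvcreate -n root -L 20G vg0   # carve out filesystems as required
mkfs.xfs /dev/vg0/root        # can be grown later with lvextend + xfs_growfs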
/boot and/or /boot/efi should be on its own RAID1 with an old metadata version, but I'm not up to date on exactly what the situation is with EL9.
Simon
At Fri, 6 Jan 2023 08:39:22 +0100 CentOS mailing list centos@centos.org wrote:
/boot and/or /boot/efi should be on its own RAID1 with an old metadata version, but I'm not up to date on exactly what the situation is with EL9.
It depends on the version of Grub. Grub V1 needs /boot to be RAID1 with old metadata (metadata at the *end* of the partition, so Grub just sees a plain ext2/3/4 filesystem to find vmlinuz and initrd). Note: /boot/efi, or the grub filesystem that Grub2 seems to want, cannot be RAID, but you should duplicate the partitions across all of the physical disks in the RAID set and arrange some other way of "mirroring" them (e.g. rsync or some such -- it does not need to be continuous, since these filesystems don't change continuously). I believe Grub V2 understands RAID and LVM, so having a separate /boot RAID set might not be needed. Things like /boot/efi and grub's own filesystem still need to exist outside of the RAID set and will need "manual" mirroring.
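For the Grub V1 case, the old-format mirror can be created explicitly; a short sketch, with /dev/sda1 and /dev/sdb1 as assumed /boot partitions:

# --metadata=0.90 puts the RAID superblock at the *end* of the partition,
# so Grub V1 just sees what looks like a plain ext2/3/4 filesystem
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0    # mounted as /boot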
Well, I just ordered a ProLiant gen10+ microserver, as the gen10 for half the price was the $0.59 hamburger (we advertise it, but you can't order it).
I also ordered four Seagate 4TB Terascale drives (it seems nothing smaller is around).
So in about two weeks I will have it all together and will see what happens when I do the install.
:)
Supposedly there is an internal boot USB port in the gen10+.
At Fri, 6 Jan 2023 08:39:22 +0100 CentOS mailing list centos@centos.org wrote:
It depends on the version of Grub. Grub V1 needs /boot to be RAID1 with old metadata (metadata at the *end* of the partition, so Grub just sees a plain ext2/3/4 filesystem to find vmlinuz and initrd). Note: /boot/efi, or the grub filesystem that Grub2 seems to want, cannot be RAID, but you should duplicate the partitions across all of the physical disks in the RAID set and arrange some other way of "mirroring" them (e.g. rsync or some such -- it does not need to be continuous, since these filesystems don't change continuously). I believe Grub V2 understands RAID and LVM, so having a separate /boot RAID set might not be needed. Things like /boot/efi and grub's own filesystem still need to exist outside of the RAID set and will need "manual" mirroring.
Are you sure that's still true? I've done it that way in the past, but it seems at least with EL8 you can put /boot/efi on md RAID1 with metadata format 1.0. That way the EFI firmware will see it as two independent FAT filesystems. The only thing you have to be sure of is that nothing ever writes to these filesystems when Linux is not running, otherwise your /boot/efi md RAID will become corrupt.
Can someone who has this running confirm that it works?
Thanks, Simon
Once upon a time, Simon Matter simon.matter@invoca.ch said:
Are you sure that's still true? I've done it that way in the past, but it seems at least with EL8 you can put /boot/efi on md RAID1 with metadata format 1.0. That way the EFI firmware will see it as two independent FAT filesystems. The only thing you have to be sure of is that nothing ever writes to these filesystems when Linux is not running, otherwise your /boot/efi md RAID will become corrupt.
Can someone who has this running confirm that it works?
Yes, that's even how RHEL/Fedora sets it up currently, I believe. But like you say, it only works as long as there's no other OS on the system and the UEFI firmware itself is never used to change anything on the FS. It's not entirely clear that most UEFI firmwares would handle a drive failure correctly either (since it's outside the scope of UEFI), so IIRC there's been some consideration in Fedora of dropping this support.
And... I'm not sure if GRUB2 handles RAID 1 /boot fully correctly, for things where it writes to the FS (grubenv updates for "savedefault" for example). But, there's other issues with GRUB2's FS handling anyway, so this case is probably far down the list.
I think that having RAID 1 for /boot and/or /boot/efi can be helpful (and I've set it up, definitely not saying "don't do that"), but has to be handled with care and possibly (probably?) would need manual intervention to get booting again after a drive failure or replacement.
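For anyone wanting to try it, a hedged sketch of the setup under discussion (partition names assumed; check what your distribution's installer actually lays down before copying this):

# metadata 1.0 also lives at the end of the device, so the UEFI firmware
# sees each member as an ordinary FAT filesystem
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
mkfs.vfat -F32 /dev/md1    # mounted as /boot/efi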
Continuing this thread, and focusing on RAID1.
I got an HPE ProLiant gen10+ that has hardware RAID support (I can turn it off if I want).
I am planning two groupings of RAID1 (it has 4 bays).
There is also an internal USB boot port.
So I am really a newbie in working with RAID. From this thread it sounds like I want /boot and /boot/efi on that USB boot device.
Will it work to put / on the first RAID group? What happens if the first drive fails and is replaced with a new blank drive? Will the config in /boot figure this out, or does the RAID hardware completely mask the two drives, so the system runs on the good one while the new one is being replicated?
I also don't see how to build that boot USB stick. I will have the install ISO in the boot USB port and the four drives set up with hardware RAID. How are things figured out? I am missing some important piece here.
Oh, HP does list Red Hat support for this unit.
Thanks for all the help.
Bob
Hi
I got an HPE ProLiant gen10+ that has hardware RAID support (I can turn it off if I want).
What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all.
I am planning two groupings of RAID1 (it has 4 bays).
There is also an internal USB boot port.
So I am really a newbie in working with RAID. From this thread it sounds like I want /boot and /boot/efi on that USB boot device.
I suggest using the USB device only to boot the installation medium, not using it for anything needed by the running OS.
Will it work to put / on the first RAID group? What happens if the first drive fails and is replaced with a new blank drive? Will the config in /boot figure this out, or does the RAID hardware completely mask the two drives, so the system runs on the good one while the new one is being replicated?
I guess the best thing would be to use Linux Software RAID and create a small RAID1 device (MD0) for /boot and another one for /boot/efi (MD1), both at the beginning of disks 0 and 1. The remaining space on disks 0 and 1 is created as another MD device (MD2). Disks 2 and 3 are also created as one RAID1 device (MD3). Formatting can be done like this:
MD0 has the filesystem for /boot
MD1 has the filesystem for /boot/efi
MD2 is used as an LVM PV
MD3 is used as an LVM PV
All other filesystems like / or /var or /home... will be created on LVM Logical Volumes to give you full flexibility to manage storage.
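A sketch of that layout in mdadm terms, assuming the four disks are /dev/sda through /dev/sdd, disks 0 and 1 carry three partitions each, and disks 2 and 3 carry one (purely illustrative):

mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1   # /boot/efi
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2                  # /boot
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3                  # LVM PV
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1                  # LVM PV

After replacing a failed member you would partition the new disk the same way, re-add its partitions, and let md resync, e.g. mdadm --manage /dev/md2 --add /dev/sdb3.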
Regards, Simon
On Mon, Jan 09, 2023 at 07:32:02AM +0100, Simon Matter wrote:
I got an HPE ProLiant gen10+ that has hardware RAID support (I can turn it off if I want).
What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all.
Yes, it is the S100i SR. For HW RAID there is an E208i-p card available that can use the single PCI slot.
jl
Official drives should be here Friday, so I am trying to get ready.
On 1/9/23 01:32, Simon Matter wrote:
What exact model of RAID controller is this? If it's an S100i SR Gen10 then it's not hardware RAID at all.
Yes, I found the information:
============================
HPE Smart Array Gen10 Controllers Data Sheet
Software RAID
· HPE Smart Array S100i SR Gen10 Software RAID
Notes:
- HPE Smart Array S100i SR Gen10 SW RAID will operate in UEFI mode only. For legacy support an additional controller will be needed
- The S100i only supports Windows. For Linux users, HPE offers a solution that uses in-distro open-source software to create a two-disk RAID 1 boot volume. For more information visit: https://downloads.linux.hpe.com/SDR/project/lsrrb/
====================
I have yet to look at this URL.
I am trying to grok what you are saying here. Are MD0-3 the physical disks or partitions?
All the drives I am getting are 4TB, as that is the smallest enterprise-quality HDD I could find! Quite overkill for me, $75 each.
I guess the best thing would be to use Linux Software RAID and create a small RAID1 device (MD0) for /boot and another one for /boot/efi (MD1),
Here it sounds like MD0 and MD1 are partitions, not physical drives?
both at the beginning of disks 0 and 1. The remaining space on disks 0 and 1 is created as another MD device (MD2). Disks 2 and 3 are also created as one RAID1 device (MD3). Formatting can be done like this:
MD0 has the filesystem for /boot
MD1 has the filesystem for /boot/efi
MD2 is used as an LVM PV
MD3 is used as an LVM PV
Now it really seems like the MDn are partitions, with MD0-2 on disks 1&2 and MD3 on disks 3&4?
All other filesystems like / or /var or /home... will be created on LVM Logical Volumes to give you full flexibility to manage storage.
Given that iRedMail puts the whole mail store under /var/vmail, /var goes on disks 3&4.
/home will be little stuff. iRedMail components put their configs and data (like the domain and user SQL database) all over the place. Disks 1&2 will be basically empty. Wish I could have found high-quality 1TB drives for less...
thanks
On 1/10/23 20:20, Robert Moskowitz wrote:
- The S100i only supports Windows. For Linux users, HPE offers a solution that uses in-distro open-source software to create a two-disk RAID 1 boot volume. For more information visit: https://downloads.linux.hpe.com/SDR/project/lsrrb/
====================
I have yet to look at this URL.
This guide seems to answer MOST of my questions.
I am trying to grok what you are saying here. Are MD0-3 the physical disks or partitions?
I see from your response to another poster that you ARE talking about RAID on individual partitions. So I can better think about your approach now.
thanks
On 1/10/23 20:20, Robert Moskowitz wrote:
- The S100i only supports Windows. For Linux users, HPE offers a solution that uses in-distro open-source software to create a two-disk RAID 1 boot volume. For more information visit: https://downloads.linux.hpe.com/SDR/project/lsrrb/
====================
I have yet to look at this URL.
This guide seems to answer MOST of my questions.
I didn't know about this guide from HPE. What I'm not sure about is how well it is supported now for newer distributions like EL9. I could be wrong, but I think EL9 already supports setting this up out of the box without any additional fiddling.
In my case years ago with EL7 I simply put /boot/efi on a partition on disk 0 and another one on disk 1, like so:
/dev/nvme0n1p1  200M  12M  189M  6%  /boot/efi
/dev/nvme1n1p1  200M  12M  189M  6%  /boot/efi.backup
To keep the filesystems in sync I've added a hook to our package update mechanism which simply calls this bash function:
EFISRC="/boot/efi"
EFIDEST="${EFISRC}.backup"

efisync() {
    if [ -d "${EFISRC}/EFI" -a -d "${EFIDEST}/EFI" ]; then
        rsync --archive --delete --verbose "${EFISRC}/EFI" "${EFIDEST}/"
    fi
}
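If someone wanted the same sync to run on a schedule rather than from a package-manager hook, a cron entry calling a wrapper script would do; a sketch (the script path is hypothetical):

# /etc/cron.d/efisync -- nightly sync of the backup ESP at 03:00
0 3 * * * root /usr/local/sbin/efisync.sh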
If I had to install this today I'd like to put it on RAID as discussed.
Regards, Simon
Hi All, very interesting thread. I add my 2 cents point of view, for free, to all of you...
A lot of satisfaction with the HP ProLiant MicroServer line, from the first Gen6 (AMD Neo) to the 1-year-old MicroServer Gen10 X3216 (CentOS 6/7/8), so I think yours is the right choice!
In /boot/efi/ (mounted from the first partition of the first GPT disk) you only have the grub2 EFI binary, not the vmlinuz kernel, the initrd image, or the grub.cfg itself...
To be more precise, a grub.cfg file exists there, but it's only a static file which has an entry to find the right one using the UUID fingerprint:
cat \EFI\ubuntu\grub.cfg
search.fs_uuid d9f44ffb-3cb8-4783-8928-0123e5d8a149 root
set prefix=($root)'/@/boot/grub'
configfile $prefix/grub.cfg
Using an md1 software RAID mirror for this FAT32 (ESP) partition is not safe IF you use it outside of the Linux environment (because the mirror will become corrupted the first time another OS writes to this partition).
It's better to set up a separate /boot partition (yes, here an md1 Linux software RAID mirror is OK) which the grub2 bootloader can manage correctly (be sure grub2 can access the modules it needs to understand and manage this LVM/RAID: mdraid09, mdraid1x, lvm.mod) [1] [2]:
insmod raid
# and load the related `mdraid' module: `mdraid09' for RAID arrays with
# version 0.9 metadata, `mdraid1x' for arrays with version 1.x metadata
insmod mdraid09
set root=(md0p1)
# or the following for an unpartitioned RAID array
set root=(md0)
IMHO installing from scratch is the easiest path, with the installer setting everything up correctly, building the right initramfs, and putting the correct entries in grub.cfg for the modules needed to manage RAID/LVM... To be honest, I don't know how the anaconda installer manages the /dev/sda1 ESP/FAT32/EFI partition (I'd like it to clone this EFI partition to the 2nd disk, but I think it will leave the /dev/sdb1 partition empty).
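If the installer does leave /dev/sdb1 empty, the populated ESP can be cloned by hand afterwards; a hedged sketch (device names and the shim path are assumptions for a CentOS-style install):

# copy the populated ESP to the second disk's identically sized EFI partition
dd if=/dev/sda1 of=/dev/sdb1 bs=1M
# register the copy with the firmware so the box can boot from either disk
efibootmgr --create --disk /dev/sdb --part 1 --label "CentOS (backup)" --loader '\EFI\centos\shimx64.efi'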
To understand better how GRUB2 works, I've looked here: [3] [4] [5]
Happy hacking
Fleur
[1] https://unix.stackexchange.com/questions/187236/grub2-lvm2-raid1-boot
[2] https://wiki.gentoo.org/wiki/GRUB/Advanced_storage
[3] https://www.gnu.org/software/grub/manual/grub/grub.html
[4] https://documentation.suse.com/sled/15-SP4/html/SLED-all/cha-grub2.html
[5] https://wiki.archlinux.org/title/GRUB