Hi All;
I have a new server we're setting up that supports EFI or Legacy mode in the BIOS.
I am a solid database guy, but my SA skills are limited to what I need to get by.
1) I used EFI because I wanted to create a RAID 10 array with six 4TB drives, and apparently I cannot set up GPT partitions via parted in legacy mode (at least that's what I've read; is this true?)
2) I installed the OS on 2 500GB drives. I used to do all my installs with software RAID (mirrored) without LVM as follows:
- create 2 RAID partitions (one on each drive) for swap, /boot, and /
- create a RAID1 device for each set of partitions above
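For reference, that layout would be built roughly like this with mdadm (the device names here are only examples, not my actual drives):

  # assumed: partitions 1-3 on each disk were created with the Linux RAID type
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # swap
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /boot
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3   # /
  mkswap /dev/md0
  mkfs.ext4 /dev/md1
  mkfs.ext4 /dev/md2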
The installer would not let me proceed without a /boot/efi partition. I tried to create a RAID partition on each drive for this and build a /boot/efi RAID device, but when I do it this way in the installer I no longer see "EFI System Partition" as an option for the filesystem type, so this did not work either.
I ended up doing hardware RAID for the OS drives and software RAID for the six 4TB data drives. It works, but I prefer to do software RAID for everything so we can have standard methods of monitoring for bad drives.
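By standard monitoring I mean the usual mdadm tooling, roughly along these lines (the mail address and device name below are only placeholders):

  # quick health check of the software RAID arrays
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # run the mdadm monitor as a daemon and mail an alert when an array degrades
  mdadm --monitor --scan --daemonise --mail=root@localhost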
Is there a way to set up software RAID with EFI?
Do I need to add a /boot/efi partition to only one of the 2 OS drives? If so, how do I recover if we lose the drive with the /boot/efi partition?
Is it required to use LVM to do this?
Thanks in advance
On 10.05.2014 18:36, CS_DBA wrote:
- I used EFI because I wanted to create a RAID 10 array with six 4TB drives, and apparently I cannot set up GPT partitions via parted in legacy mode (at least that's what I've read; is this true?)
When you say legacy mode, do you mean BIOS or the CSM ("Compatibility Support Module") of the UEFI firmware?
BIOS cannot boot from a GPT partition, but the CSM mode of the UEFI firmware should be able to. You really want to go with plain UEFI, though, if your system supports it.
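A quick way to check which mode a running system actually booted in (the /sys path below is standard on Linux):

  # the directory only exists when the kernel was started via UEFI
  [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via BIOS/legacy"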
Is there a way to set up software RAID with EFI?
No. The UEFI firmware needs access to the EFI system partition (/boot/efi) before it can boot the OS, so anything that requires the OS to already be running (like a software RAID) cannot work. What you can do is create the partition on both disks, point the installer at only the first disk, and then later copy the files over to the partition on the other disk, so that if the first disk dies you can still boot using the second one. The partition can be tiny (just a couple of megabytes), should be the first partition on the disk (though I think this is not strictly necessary), should be formatted as FAT32, and should be given a type GUID of "C12A7328-F81F-11D2-BA4B-00A0C93EC93B", which means "EFI System partition", so that the UEFI firmware can find it.
You *should* be able to create the other partitions (including /boot) as software RAID, though I've not done this myself yet, so I'm not 100% certain on that.
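As a rough sketch, assuming the installer created an ESP on both disks and only populated the first one (/dev/sdb1 is a placeholder for the second disk's ESP, and the loader path assumes the EFI/redhat directory a CentOS install creates):

  # copy the boot files from the populated ESP to the one on the second disk
  mount /dev/sdb1 /mnt
  cp -a /boot/efi/. /mnt/
  umount /mnt

  # register the copy with the firmware so the machine can still boot
  # from the second disk if the first one dies
  efibootmgr --create --disk /dev/sdb --part 1 \
    --label "CentOS (disk 2)" --loader '\EFI\redhat\grub.efi'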
On 10.05.2014 19:17, Dennis Jacobfeuerborn wrote:
Just to give you an idea of what "a couple of megabytes" means, this is what is stored on my EFI partition right now:
[root@nexus EFI]# du -csh /boot/efi/EFI/*
658K    /boot/efi/EFI/Boot
7,8M    /boot/efi/EFI/fedora
244K    /boot/efi/EFI/fedora15
18M     /boot/efi/EFI/Microsoft
247K    /boot/efi/EFI/redhat
27M     total
That's with four different OS installations, so for a non-dual-boot system something like 50-100MB should be plenty.
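Creating such a partition by hand could look roughly like this (the device name is a placeholder; EF00 is sgdisk's shorthand for the "EFI System" type GUID mentioned above):

  # ~100MB first partition, typed as "EFI System" and formatted FAT32
  sgdisk --new=1:0:+100M --typecode=1:EF00 /dev/sdX
  mkfs.vfat -F 32 -n EFI /dev/sdX1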
Regards, Dennis
On 05/10/2014 07:22 PM, Dennis Jacobfeuerborn wrote:
Is there any guide/tutorial/step-by-step on how to install CentOS 6.5 with GPT and UEFI? I cannot seem to locate one.
Hey there,
And why not use HW RAID and monitoring tools for it? What RAID card are you using that cannot be monitored?
Eliezer