I have been trying to install CentOS 4.1 on a new server today. The installation went fine but the system doesn't boot. It doesn't even get to the Grub screen. The system has a 3ware 9500 series controller and 12 250GB SATA disks which are configured for RAID5 with one hot spare. I searched on the web and noticed that this is a Grub (and Lilo) limitation. Is there any way around it other than setting up a smaller RAID device?
-akop
On Friday 02 September 2005 22:26, Akop Pogosian wrote:
> I have been trying to install CentOS 4.1 on a new server today. The installation went fine but the system doesn't boot. It doesn't even get to the Grub screen. [...] Is there any way around it other than setting up a smaller RAID device?
Grub has a 2TB limitation - always did... even the latest 1.90 release still has that issue. The only reasonable ways to get around that are to split your array into <2TB pieces or to get another disk for booting...
Lilo has the same issue - so you're not gonna be very happy.
Peter.
On Sat, 3 Sep 2005 at 12:07am, Peter Arremann wrote
> On Friday 02 September 2005 22:26, Akop Pogosian wrote:
>> I have been trying to install CentOS 4.1 on a new server today. The installation went fine but the system doesn't boot. [...] Is there any way around it other than setting up a smaller RAID device?
> Grub has a 2TB limitation - always did... even the latest 1.90 release still has that issue. The only reasonable ways to get around that are to split your array into <2TB pieces or to get another disk for booting...
Splitting the array will only work if by that you mean having multiple arrays (showing up as separate "disks"), one of which is <2TiB. The problem isn't that grub has a problem with >2TiB disks directly, it's that you can't use an msdos disk label on a disk >2TiB -- you must use gpt. And neither grub nor lilo understand gpt disk labels.
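To make that concrete, here's a quick check with parted (a hedged sketch; /dev/sda stands in for whatever device node the 3ware unit gets, and mklabel is destructive, so only run it on an empty disk):

    parted /dev/sda print          # reports the disk label type (msdos or gpt)
    parted /dev/sda mklabel gpt    # what a >2TiB disk requires -- and what grub/lilo can't read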
On Sat, 2005-09-03 at 05:31, Joshua Baker-LePain wrote:
>> Grub has a 2TB limitation - always did... even the latest 1.90 release still has that issue. The only reasonable ways to get around that are to split your array into <2TB pieces or to get another disk for booting...
> Splitting the array will only work if by that you mean having multiple arrays (showing up as separate "disks"), one of which is <2TiB. The problem isn't that grub has a problem with >2TiB disks directly, it's that you can't use an msdos disk label on a disk >2TiB -- you must use gpt. And neither grub nor lilo understand gpt disk labels.
Can you boot a system like this from a CD? If so, are there any instructions for making one that would have the usual contents of /boot, including the potentially custom initrd image? Seems like a useful thing to be able to do even on smaller systems.
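For what it's worth, an untested sketch with isolinux on CentOS 4 might look like this (the isolinux.bin path, kernel/initrd names and root= device are all assumptions to adapt):

    mkdir -p iso/isolinux
    cp /usr/lib/syslinux/isolinux.bin iso/isolinux/          # path varies by distribution
    cp /boot/vmlinuz-2.6.9-11.EL iso/isolinux/vmlinuz        # your kernel version
    cp /boot/initrd-2.6.9-11.EL.img iso/isolinux/initrd.img  # your (possibly custom) initrd
    cat > iso/isolinux/isolinux.cfg <<'EOF'
    default linux
    label linux
      kernel vmlinuz
      append initrd=initrd.img root=/dev/sda2
    EOF
    mkisofs -o boot.iso -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table iso/

You'd have to regenerate and reburn the image after every kernel update, of course.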
Wouldn't it be much easier to use an IDE Flash Module for that? You can get a 128 MB IDE Flash module that plugs directly into the ATA connector on the mainboard for as little as 30 € (which is definitely not an issue if you have disk arrays greater than 2 TB...). You could then use this thing for /boot and your bootloader. As it is presented to the system as a "normal" IDE drive, you don't have to trick the OS.
You can also buy two CF-to-ATA adapters, mount them in the case, plug them into different ATA ports, make them a RAID-1 device-mapper device and configure your BIOS to try to boot from either one (i.e. whichever is still alive). This way you even get redundancy for your bootloader in case of CF card failure (which is imho quite unlikely...).
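A minimal sketch of that mirror, using mdadm (md software RAID, standing in here for device-mapper since that's what the distribution tools set up); /dev/hde1 and /dev/hdg1 are assumed names for the two CF adapters:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1
    mkfs.ext3 /dev/md0     # filesystem choice is just an example
    mount /dev/md0 /boot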
Ah and btw... if you're insane enough, you can put 2 GB CF cards into those adapters and install a rescue system - just in case your disk array dies...
Or you could just use one adapter, accessible from the machine's front panel. You could then boot anything you want (e.g. your restore system, your weekend game server or whatever) just by rebooting the machine with a different CF card.
So far this is all theory-only, but I don't see any problems with it (nevertheless your mileage with the "don't try to boot from broken IDE drives" configuration will vary).
Regards, Andreas
On Saturday, 03.09.2005, at 11:27 -0500, Les Mikesell wrote:
> Can you boot a system like this from a CD? If so, are there any instructions for making one that would have the usual contents of /boot, including the potentially custom initrd image? Seems like a useful thing to be able to do even on smaller systems.
On Sunday 04 September 2005 09:47, Andreas Rogge wrote:
> Wouldn't it be much easier to use an IDE Flash Module for that? You can get a 128 MB IDE Flash module that plugs directly into the ATA connector on the mainboard for as little as 30 € (which is definitely not an issue if you have disk arrays greater than 2 TB...).
Not a good idea... CF isn't meant for constant re-writes... you'd have to run a special filesystem like JFFS, which none of the major Linux distributions know how to install onto... CF-sized harddisks aren't much better - they aren't meant to run 24/7 and so on...
If you can at all afford it, you should always have a set of mirrored OS disks and then do RAID 0+1 or 5, depending on your space/speed/redundancy needs, for the data.
Peter.
On Sun, 2005-09-04 at 11:01, Peter Arremann wrote:
> On Sunday 04 September 2005 09:47, Andreas Rogge wrote:
>> Wouldn't it be much easier to use an IDE Flash Module for that? You can get a 128 MB IDE Flash module that plugs directly into the ATA connector on the mainboard for as little as 30 € (which is definitely not an issue if you have disk arrays greater than 2 TB...).
> Not a good idea... CF isn't meant for constant re-writes... you'd have to run a special filesystem like JFFS, which none of the major Linux distributions know how to install onto... CF-sized harddisks aren't much better - they aren't meant to run 24/7 and so on...
First, a USB flash drive should be inexpensive and sufficient. Almost everything these days should boot from USB. As for re-writing, how often do you write anything to /boot? However, like a boot ISO, the main problem is just that the scripts to set it up aren't included. The mkbootdisk script might do just about the right thing for USB, though.
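Roughly, an untested sketch of that with GRUB legacy (/dev/sdb is assumed to be the stick, and grub.conf's root (hdX,Y) line will likely need adjusting when booting from it):

    mkfs.ext3 /dev/sdb1                                # the stick's first partition
    mkdir -p /mnt/usb && mount /dev/sdb1 /mnt/usb
    mkdir -p /mnt/usb/boot
    cp -a /boot/. /mnt/usb/boot/                       # kernels, initrds, grub files
    grub-install --root-directory=/mnt/usb /dev/sdb    # GRUB to the stick's MBR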
> If you can at all afford it, you should always have a set of mirrored OS disks and then do RAID 0+1 or 5, depending on your space/speed/redundancy needs, for the data.
If you have more than 2TB you might not miss the space, but it seems like a waste to dedicate a whole pair of hard drives to being able to boot, which you might only do once a year or so. I'd consider the ability to generate a bootable ISO to be a good thing in any case, now that floppies no longer work.
Les Mikesell wrote:
> Almost everything these days should boot from USB. As for re-writing, how often do you write anything to /boot?
I think the problem is not so much explicitly WRITING to the CF device, but unless you have it mounted with the 'noatime' option it will get written to every time you access a file on that filesystem. I'm not sure if that would eliminate all writes to the filesystem, either.
Just my $.02
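For example, an /etc/fstab line like this (the device name is an assumption) would suppress the atime writes; noauto goes one step further and leaves /boot unmounted until it's actually needed:

    /dev/hde1   /boot   ext3   noatime,noauto   0 0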
On Sun, 2005-09-04 at 16:11, Jay Leafey wrote:
> Les Mikesell wrote:
>> Almost everything these days should boot from USB. As for re-writing, how often do you write anything to /boot?
> I think the problem is not so much explicitly WRITING to the CF device, but unless you have it mounted with the 'noatime' option it will get written to every time you access a file on that filesystem. I'm not sure if that would eliminate all writes to the filesystem, either.
/boot doesn't have to be mounted unless you are updating it.
On Sun, 2005-09-04 at 17:51, Peter Arremann wrote:
> On Sunday 04 September 2005 18:33, Les Mikesell wrote:
>> /boot doesn't have to be mounted unless you are updating it.
Correct - but do you really want to boot a server off a USB memory stick?
I'd be a little worried that it might be accidentally removed by the time it was needed again. On the other hand I wouldn't have much of a problem with booting from a CD if I had a well-tested script to create images as updates are needed.
On Sunday 04 September 2005 20:04, Les Mikesell wrote:
> I'd be a little worried that it might be accidentally removed by the time it was needed again. On the other hand I wouldn't have much of a problem with booting from a CD if I had a well-tested script to create images as updates are needed.
I was more thinking of accidentally breaking the connector on the USB drive...
But I agree - in fact there was just some discussion about that on the Fedora mailing list... Unfortunately not much positive came of it...
Peter.
On Sat, Sep 03, 2005 at 12:07:21AM -0400, Peter Arremann wrote:
> On Friday 02 September 2005 22:26, Akop Pogosian wrote:
>> I have been trying to install CentOS 4.1 on a new server today. The installation went fine but the system doesn't boot. [...] Is there any way around it other than setting up a smaller RAID device?
> Grub has a 2TB limitation - always did... even the latest 1.90 release still has that issue. The only reasonable ways to get around that are to split your array into <2TB pieces or to get another disk for booting...
> Lilo has the same issue - so you're not gonna be very happy.
> Peter.
Oh, I see. Thanks all for the replies. I guess I'll just take two 250GB disks out of the RAID5 and create a mirror out of them for the system disk, which means we're going to lose ~250GB of storage capacity :/
-akop
On Sun, 2005-09-04 at 03:01 -0700, Akop Pogosian wrote:
> Oh, I see. Thanks all for the replies. I guess I'll just take two 250GB disks out of the RAID5 and create a mirror out of them for the system disk, which means we're going to lose ~250GB of storage capacity :/
This is what I always do for a server. RAID-5 is not a good volume for swap, /tmp or /var. I typically create a "system" RAID-1 (or RAID-10) volume, and then a "data" RAID-5 volume.
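On this thread's 12-port 9500, a hedged sketch of that split with 3ware's tw_cli utility might look like this (port numbers are assumptions; verify the syntax against your controller's documentation):

    tw_cli /c0 add type=raid1 disk=0:1                   # "system" unit: 2 disks
    tw_cli /c0 add type=raid5 disk=2:3:4:5:6:7:8:9:10    # "data" unit: 9 disks
    tw_cli /c0 add type=spare disk=11                    # hot spare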
Akop Pogosian wrote:
> I have been trying to install CentOS 4.1 on a new server today. The installation went fine but the system doesn't boot. [...] Is there any way around it other than setting up a smaller RAID device?
Just create a small 100-200MB partition for /boot and it will work fine. The root filesystem can be on the larger partition without any problems; as long as /boot is on a <2TB partition, it will be happy.
Cheers,
On Sat, 3 Sep 2005 at 8:28am, Chris Mauritz wrote
> Akop Pogosian wrote:
>> I have been trying to install CentOS 4.1 on a new server today. The installation went fine but the system doesn't boot. [...] Is there any way around it other than setting up a smaller RAID device?
> Just create a small 100-200MB partition for /boot and it will work fine. The root filesystem can be on the larger partition without any problems; as long as /boot is on a <2TB partition, it will be happy.
IME, that won't work. >2TiB disks must have gpt disk labels, and neither grub nor lilo understand those.