-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak
Sent: Friday, May 11, 2007 3:27 PM
To: CentOS mailing list
Subject: Re: [CentOS] Re: Anaconda doesn't support raid10
Chris Croome wrote:
Hi
On Fri 11-May-2007 at 09:38:22AM -0400, Ross S. W. Walker wrote:
The whole "use all four disks for /boot" idea makes no sense if two disks belonging to the same mirror of the LVM go down. Please stop this nonsense about surviving everything to no benefit. You could have three disks fail and still have a working /boot -- for what?
I think the idea of the 4-partition RAID1 was more of: what else is he going to do with the 200MB at the beginning of each disk, which he has because of partition symmetry across the drives?

Makes sense to just duplicate the partition setup from one disk to the others. Now, with GRUB and a working /boot on each disk, the order of the drives is no longer important: he can take all 4 out, play 4-disk monty, slap them back in, and the system should come up without a problem.
FWIW, this is what I did with the last server I built, which had 4x500GB drives -- a RAID1 /boot across all 4 drives. The trick is to edit your grub.conf so that you can boot off any drive, and to run grub-install on all 4 drives. You also have to remember to manually edit your grub.conf after each kernel upgrade to add the 3 extra disks:
title CentOS (2.6.18-8.1.1.el5xen) Disk 0
        root (hd0,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img

title CentOS (2.6.18-8.1.1.el5xen) Disk 1
        root (hd1,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img

title CentOS (2.6.18-8.1.1.el5xen) Disk 2
        root (hd2,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img

title CentOS (2.6.18-8.1.1.el5xen) Disk 3
        root (hd3,0)
        kernel /xen.gz-2.6.18-8.1.1.el5
        module /vmlinuz-2.6.18-8.1.1.el5xen ro root=/dev/VolGroup00/Root
        module /initrd-2.6.18-8.1.1.el5xen.img
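The grub-install step mentioned above can be sketched roughly like this (the device names sda..sdd are an assumption -- substitute the actual members of your /boot mirror):

```shell
# Install the GRUB boot loader into the MBR of every member of the
# /boot mirror, so the BIOS can boot from whichever disk it picks.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$disk"
done
```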
If I had read this thread before I set up this machine I'd have used RAID 6 for the rest of the space, but instead I used RAID 5 with a hot spare, with LVM on top of that.
Before the machine was moved to the colo I tried pulling disks out while it was running, and this worked without a problem, which was nice :-)
Chris
Chris,
I didn't have to install grub on any of the other volumes, and the server seemed to do well after drive failure (I pulled out drives 1 and 3). In my opinion, neither RAID5 nor RAID6 makes sense with 4 drives, as you will get the same amount of space with RAID10 but much better performance and availability (although RAID6 is supposed to withstand 2 drive failures, in my tests it has not done well, and neither has RAID5). As you might have read in the thread, if you want to put the / volume on the RAID10, it will not be possible during the install, but you can set up 2 RAID1 volumes and do an LVM stripe across them, which should yield comparable performance.
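The two-RAID1-plus-LVM-stripe setup can be sketched as below. This is a rough outline, not the exact commands from the install; the partition names (sd[a-d]3), volume group name, and sizes are illustrative assumptions:

```shell
# Build two RAID1 pairs from the large partitions on the four disks.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3

# Put both mirrors into one volume group.
pvcreate /dev/md1 /dev/md2
vgcreate VolGroup00 /dev/md1 /dev/md2

# -i 2 stripes the logical volume across both physical volumes,
# giving RAID0-over-RAID1 behaviour, i.e. roughly RAID10.
lvcreate -i 2 -I 64 -L 20G -n Root VolGroup00
```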
I think if you set up the 4-disk RAID1 at install time, GRUB gets installed on each disk as part of the install process.

You'd only need to do it yourself if you put in a new disk to replace a failed one; then just run grub-install on it.
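For a replacement disk, the procedure might look something like this (here sdb is assumed to be the new disk and sda a surviving member, and md0 the /boot mirror -- all illustrative):

```shell
# Clone the partition table from a surviving disk onto the new one.
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new disk's boot partition back into the /boot mirror;
# the kernel will resync it automatically.
mdadm /dev/md0 --add /dev/sdb1

# Make the new disk bootable again.
grub-install /dev/sdb
```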
-Ross