Since the built-in kernel doesn't have the raid10 module for some reason, I would like to compile a custom kernel that does and install with it. How would I go about doing this?
Russ
On Fri, 2007-05-04 at 16:46 -0400, Ruslan Sivak wrote:
Since the built-in kernel doesn't have the raid10 module for some reason, I would like to compile a custom kernel that does and install with it. How would I go about doing this?
http://wiki.centos.org/I_need_the_Kernel_Source
http://wiki.centos.org/HowTos/Custom_Kernel
-- Daniel
On 5/4/07, Ruslan Sivak rsivak@istandfor.com wrote:
Since the built-in kernel doesn't have the raid10 module for some reason, I would like to compile a custom kernel that does and install with it. How would I go about doing this?
Short answer: you don't. Doing this would require rebuilding the install ISO, unless you can build it as a module and load that via a driver disk.
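For what it's worth, if you did end up building the raid10 personality by hand for a driver disk, something along these lines might do it. This is only a rough sketch: it assumes a prepared kernel source tree that matches the running kernel (see the wiki pages on getting the kernel source), and the path shown is an example, not verified.

cd /usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.i686   # example path only; use your unpacked source tree
# set CONFIG_MD_RAID10=m in .config first, then:
make oldconfig && make modules_prepare
make M=drivers/md modules                                  # build just the md modules against this tree
cp drivers/md/raid10.ko /lib/modules/`uname -r`/kernel/drivers/md/
depmod -a
modprobe raid10          # may refuse to load if the tree doesn't match the running kernel exactly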
Jim Perrin wrote:
On 5/4/07, Ruslan Sivak rsivak@istandfor.com wrote:
Since the built-in kernel doesn't have the raid10 module for some reason, I would like to compile a custom kernel that does and install with it. How would I go about doing this?
Short answer: you don't. Doing this would require rebuilding the install ISO, unless you can build it as a module and load that via a driver disk.
Yes, I would be interested in doing that: building the md raid10 personality as a module. How would I go about doing this? Toby Bluhm mentioned in a separate thread that he had that module on SL4.4:
" You have enlightened me to the raid10 module:
[root@tikal ~]# locate raid10
/usr/src/kernels/2.6.9-42.0.3.EL-smp-i686/include/config/md/raid10
/usr/src/kernels/2.6.9-42.0.3.EL-smp-i686/include/config/md/raid10/module.h
/usr/src/kernels/2.6.9-42.0.3.EL-smp-i686/include/linux/raid/raid10.h
/usr/src/kernels/2.6.9-42.0.10.EL-i686/include/config/md/raid10
/usr/src/kernels/2.6.9-42.0.10.EL-i686/include/config/md/raid10/module.h
/usr/src/kernels/2.6.9-42.0.10.EL-i686/include/linux/raid/raid10.h
/usr/src/kernels/2.6.9-42.0.3.EL-i686/include/config/md/raid10
/usr/src/kernels/2.6.9-42.0.3.EL-i686/include/config/md/raid10/module.h
/usr/src/kernels/2.6.9-42.0.3.EL-i686/include/linux/raid/raid10.h
/lib/modules/2.6.9-42.0.3.EL/kernel/drivers/md/raid10.ko
/lib/modules/2.6.9-42.0.10.EL/kernel/drivers/md/raid10.ko
/lib/modules/2.6.9-42.0.3.ELsmp/kernel/drivers/md/raid10.ko
[root@tikal ~]# modprobe raid10
[root@tikal ~]# lsmod | grep raid
raid10                 23233  0
raid1                  20033  1 "
On 5/4/07, Ruslan Sivak rsivak@istandfor.com wrote:
Yes, I would be interested in doing that: building the md raid10 personality as a module. How would I go about doing this? Toby Bluhm mentioned in a separate thread that he had that module on SL4.4:
[root@tikal ~]# modprobe raid10
[root@tikal ~]# lsmod | grep raid
raid10                 23233  0
raid1                  20033  1
As David Miller noted in this thread, CentOS includes the raid10 module.
$ locate raid10.ko
/lib/modules/2.6.18-8.1.1.el5/kernel/drivers/md/raid10.ko
/lib/modules/2.6.18-8.1.1.el5.centos.plus.1/kernel/drivers/md/raid10.ko
/lib/modules/2.6.18-8.el5/kernel/drivers/md/raid10.ko
# modprobe raid10
# modprobe raid1
# lsmod | grep raid
raid1                  55745  0
raid10                 55873  0
So, they are there.
Akemi
Akemi Yagi wrote:
On 5/4/07, Ruslan Sivak rsivak@istandfor.com wrote:
Yes, I would be interested in doing that: building the md raid10 personality as a module. How would I go about doing this? Toby Bluhm mentioned in a separate thread that he had that module on SL4.4:
[root@tikal ~]# modprobe raid10
[root@tikal ~]# lsmod | grep raid
raid10                 23233  0
raid1                  20033  1
As David Miller noted in this thread, CentOS includes the raid10 module.
$ locate raid10.ko
/lib/modules/2.6.18-8.1.1.el5/kernel/drivers/md/raid10.ko
/lib/modules/2.6.18-8.1.1.el5.centos.plus.1/kernel/drivers/md/raid10.ko
/lib/modules/2.6.18-8.el5/kernel/drivers/md/raid10.ko
# modprobe raid10
# modprobe raid1
# lsmod | grep raid
raid1                  55745  0
raid10                 55873  0
So, they are there.
Akemi
Interesting. I don't believe they were there for me in a fully booted system. I guess I would have to reinstall and find out. Is there a way to load the module right at the start of the install, when I drop to a shell?
Russ
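(Side note: one quick way to find out is from the installer itself. A sketch, assuming the CentOS 5 installer, which normally offers a shell on the second virtual console; whether raid10.ko is actually in the installer's module set is exactly what this would tell you.)

# switch to the installer shell with Ctrl+Alt+F2 (Alt+F2 in text mode), then:
modprobe raid10          # succeeds only if the installer image ships raid10.ko
lsmod | grep raid        # confirm what loaded
cat /proc/mdstat         # shows which raid personalities are registered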
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ruslan Sivak
Sent: Saturday, May 05, 2007 12:20 PM
To: CentOS mailing list
Subject: Re: [CentOS] Installing from a custom kernel
Akemi Yagi wrote:
On 5/4/07, Ruslan Sivak rsivak@istandfor.com wrote:
Yes, I would be interested in doing that: building the md raid10 personality as a module. How would I go about doing this? Toby Bluhm mentioned in a separate thread that he had that module on SL4.4:
[root@tikal ~]# modprobe raid10
[root@tikal ~]# lsmod | grep raid
raid10                 23233  0
raid1                  20033  1
As David Miller noted in this thread, CentOS includes the raid10 module.
$ locate raid10.ko
/lib/modules/2.6.18-8.1.1.el5/kernel/drivers/md/raid10.ko
/lib/modules/2.6.18-8.1.1.el5.centos.plus.1/kernel/drivers/md/raid10.ko
/lib/modules/2.6.18-8.el5/kernel/drivers/md/raid10.ko
# modprobe raid10
# modprobe raid1
# lsmod | grep raid
raid1                  55745  0
raid10                 55873  0
So, they are there.
Akemi
Interesting. I don't believe they were there for me in a fully booted system. I guess I would have to reinstall and find out. Is there a way to load the module right at the start of the install, when I drop to a shell?
I believe anaconda only supports installation on raid0 and raid1.
What you can do in your setup, since it is only a 4-disk setup, is the following (a rough command sketch follows these steps):
create a 128MB raid partition at the start of each of the 4 drives, assemble them into a /dev/md0 raid1 with 2 spares, and create an ext3 filesystem for /boot on it
create a raid partition from the remaining space on each drive, then create /dev/md1 from two of them and /dev/md2 from the other two
make /dev/md1 and /dev/md2 LVM physical volumes
create a volume group from /dev/md1 and /dev/md2, called, say, CentOS
create a 'root' logical volume of, say, 16G with an interleave of 2, and a 'swap' logical volume of, say, 4G with an interleave of 2
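Roughly, in commands (a sketch only; it assumes the four disks are sda through sdd and that the partitions were created as above, with sdX1 being the small 128MB partition and sdX2 the remainder, both set to type fd):

mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=2 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
mkfs.ext3 /dev/md0                        # /boot
pvcreate /dev/md1 /dev/md2
vgcreate CentOS /dev/md1 /dev/md2
lvcreate -i 2 -L 16G -n root CentOS       # striped across both PVs
lvcreate -i 2 -L 4G -n swap CentOS
mkfs.ext3 /dev/CentOS/root
mkswap /dev/CentOS/swap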
There you go. It might not be the clean raid10 you envisioned, but it will work the same, and you may find that using LVM for the striping has some advantages:
1) Recognized by most/all grubs in 2.6
2) will allow migration of data to/from other volume groups
3) it's resizeable and reconfigurable
Once RAID10 is set up and your data is on board, there isn't much reconfiguration that can be done.
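To make points 2 and 3 concrete, here is a sketch of the kind of reconfiguration LVM allows later on (it assumes the CentOS volume group above and, for the migration step, a hypothetical extra array /dev/md3):

lvextend -L +8G /dev/CentOS/root      # grow the root LV by 8G
resize2fs /dev/CentOS/root            # then grow the ext3 filesystem to match
pvcreate /dev/md3
vgextend CentOS /dev/md3              # add the new array to the volume group
pvmove /dev/md1                       # migrate all extents off /dev/md1
vgreduce CentOS /dev/md1              # then drop it from the group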
-Ross
I believe anaconda only supports installation on raid0 and raid1.
What you can do in your setup, since it is only a 4-disk setup, is the following:
create a 128MB raid partition at the start of each of the 4 drives, assemble them into a /dev/md0 raid1 with 2 spares, and create an ext3 filesystem for /boot on it
create a raid partition from the remaining space on each drive, then create /dev/md1 from two of them and /dev/md2 from the other two
make /dev/md1 and /dev/md2 LVM physical volumes
create a volume group from /dev/md1 and /dev/md2, called, say, CentOS
create a 'root' logical volume of, say, 16G with an interleave of 2, and a 'swap' logical volume of, say, 4G with an interleave of 2
One problem: unless grub in RHEL5/CentOS 5 has LVM support, /boot must stay outside LVM, on its own raid1 array.
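A related note, sketched under the assumption that /boot sits on the /dev/md0 raid1 from the layout above (device names are examples): the installer only writes grub's boot code to one disk's MBR, so it is worth installing it on each raid1 member so the box still boots if the first drive dies.

grub
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb     # temporarily map the second disk as hd0
grub> root (hd0,0)
grub> setup (hd0)
grub> quit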