[CentOS] kickstart raid disk partitioning

Roberto Nunnari roberto.nunnari at supsi.ch
Fri Nov 19 10:00:39 UTC 2010


Rudi Ahlers ha scritto:
> On Fri, Nov 19, 2010 at 11:32 AM, Roberto Nunnari
> <roberto.nunnari at supsi.ch> wrote:
>> Digimer ha scritto:
>>> On 11/18/2010 01:11 PM, Roberto Nunnari wrote:
>>>> Hello.
>>>>
>>>> A couple of years ago I installed two file-servers
>>>> using kickstart. The server has two 1TB sata disks
>>>> with two software raid1 partitions as follows:
>>>>
>>>> # cat /proc/mdstat
>>>> Personalities : [raid1]
>>>> md1 : active raid1 sdb4[1] sda4[0]
>>>>         933448704 blocks [2/2] [UU]
>>>> md0 : active raid1 sdb1[1] sda2[2](F)
>>>>         40957568 blocks [2/1] [_U]
>>>>
>>>>
>>>> Now the drives are starting to be failing and next week I'll
>>>> backup /homes, reinstall OS with kickstart, and finally
>>>> restore /homes.
>>>>
>>>> There's a problem with how the kickstart process partitions
>>>> the disks, though. As you may have noticed above, md0 is made
>>>> up of sdb1 and sda2.
>>>>
>>>> Could anybody help me understand how to make the partitions
>>>> on the two drives identical still using kickstart?
>>>>
>>>> Here's the relevant part from the kickstart file:
>>>>
>>>> zerombr yes
>>>> clearpart --all --initlabel
>>>> bootloader --location=mbr
>>>> part /boot --fstype ext3 --size 250 --asprimary --ondisk sda
>>>> part swap --size 2048 --asprimary --ondisk sda
>>>> part raid.01 --size 40000 --asprimary --ondisk sda
>>>> part raid.03 --size 1 --grow --asprimary --ondisk sda
>>>> part /boot2 --fstype ext3 --size 250 --asprimary --ondisk sdb
>>>> part swap --size 2048 --asprimary --ondisk sdb
>>>> part raid.02 --size 40000 --asprimary --ondisk sdb
>>>> part raid.04 --size 1 --grow --asprimary --ondisk sdb
>>>> raid / --level=1 --device=md0 --fstype ext3 raid.01 raid.02
>>>> raid /home --level=1 --device=md1 --fstype ext3 raid.03 raid.04
>>>>
>>>> ..but here's the produced partitioning on the two drives:
>>>>
>>>> # parted /dev/sda print
>>>> Disk geometry for /dev/sda: 0.000-953869.710 megabytes
>>>> Disk label type: msdos
>>>> Minor    Start       End     Type      Filesystem  Flags
>>>> 1          0.031    251.015  primary   ext3        boot
>>>> 2        251.016  40248.786  primary   ext3        raid
>>>> 3      40248.787  42296.132  primary   linux-swap
>>>> 4      42296.133 953867.219  primary   ext3        raid
>>>>
>>>> # parted /dev/sdb print
>>>> Disk geometry for /dev/sdb: 0.000-953869.710 megabytes
>>>> Disk label type: msdos
>>>> Minor    Start       End     Type      Filesystem  Flags
>>>> 1          0.031  39997.771  primary   ext3        boot, raid
>>>> 2      39997.771  42045.117  primary   linux-swap
>>>> 3      42045.117  42296.132  primary   ext3
>>>> 4      42296.133 953867.219  primary   ext3        raid
>>>>
>>>>
>>>> I'm not asking because I'm picky, but just because, it would
>>>> have made my life easier to fix bad blocks on disks by
>>>> dd from good block on disk1 to bad block on disk2, and as
>>>> next week I'll reinstall, I'd prefer to do it the right way.
>>>>
>>>> Some more bits about my environment:
>>>>
>>>> # cat /etc/redhat-release
>>>> CentOS release 4.8 (Final)
>>>>
>>>> # uname -rms
>>>> Linux 2.6.9-89.0.18.ELsmp i686
>>>>
>>>> Thank you and best regards.
>>>> Robi
>>> I've got a fairly simple script in a kickstart file I use[1] that
>>> handles RAID 1 and RAID 5 partitioning. Perhaps it would help? Here is
>>> the relevant snippet:
>>>
>>> zerombr
>>> clearpart --all --initlabel --drives=sda,sdb
>>> ignoredisk --only-use=sda,sdb
>>> bootloader  --location=mbr --driveorder=sda,sdb --append="crashkernel=auto"
>>>
>>> # /boot
>>> part raid.01 --ondisk=sda --asprimary --size=256
>>> part raid.02 --ondisk=sdb --asprimary --size=256
>>> # /
>>> part raid.11 --ondisk=sda --asprimary --size=40960
>>> part raid.12 --ondisk=sdb --asprimary --size=40960
>>> # <swap>
>>> part raid.21 --ondisk=sda --asprimary --size=4096
>>> part raid.22 --ondisk=sdb --asprimary --size=4096
>>>
>>> # Format /boot and /.
>>> raid /boot --fstype=ext3 --level=1 --device=md0 raid.01 raid.02
>>> raid /     --fstype=ext3 --level=1 --device=md1 raid.11 raid.12
>>> raid swap  --fstype=swap --level=1 --device=md2 raid.21 raid.22
>>>
>>> The kickstart script above is specifically for RHEL 6, but it came
>>> nearly unadapted from an older CentOS 4 kickstart script. The only line
>>> that might be an issue is: "crashkernel=auto".
>>>
>>> hth,
>>>
>>> Digimer
>>>
>>> 1. http://wiki.alteeve.com/files/an-cluster/ks/generic_server_rhel6.ks
>> Thank you for your reply.
>>
>> Does that kickstart effectly produces a partitioning that is
>> exactly the same on both disks? Because that is the problem
>> I'm facing: the partitioning produced by the kickstart
>> is different on the two drives.
>>
>> Also, why did you put /boot and swap in raid? Was it for
>> obtaining identical partitioning on both drives?
>> For swap, the kernel already does performance optimization
>> when swap partitions are on different drives, and /boot..
>> I always tended to keep /boot be as simple as possible, to avoid
>> any problem during boot.. but maybe, these days with initramdisk
>> there's no more need for that..
>>
>> Best regards.
>> Robi
>> _______________________________________________
>> CentOS mailing list
>> CentOS at centos.org
>> http://lists.centos.org/mailman/listinfo/centos
>>
> 
> Well, if the first drive fails, where you put /boot then you won't be
> able to boot-up the 2nd HDD :)
> 
> So put /boot on a RAID1 partition so that it gets mirrored on both
> drives for better redundancy

hehe.. that's right, but if you look at my partitioning,
there's a /boot2 partition on the second drive where I
keep a copy of /boot.. even if master boot record is
gone with /boot, with a grub cd of floppy I can always
boot my system.

What about my original question about kickstart and
raid partitioning?

Thank you.
Robi



More information about the CentOS mailing list