I think I have solved my issue and would like input from anyone who has done this, to catch pitfalls or errors, or to tell me if I am just wrong.
CentOS 5.x, software RAID, 250GB drives.
Two drives in a mirror, one spare, all the same size. The mirror holds two RAID devices: one for /boot (about 100MB) and one that fills the rest of the disk and contains the LVM volumes.
I was thinking of taking out the spare and adding a 500GB drive. I would run "sfdisk -d /dev/sda | sfdisk /dev/sdc" (cloning drive a's partition table onto drive c), then add drive c to the RAID array as the spare. I would then pull out drive b and let drive c sync with drive a.
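If it helps to see that swap spelled out as commands, it might look roughly like this. The partition names (sdc1/sdc2) and which md device they belong to are assumptions based on the layout described above; check /proc/mdstat against your own setup first:

```shell
# Clone drive a's partition table onto the new 500GB drive c
sfdisk -d /dev/sda | sfdisk /dev/sdc

# Add drive c's partitions to both arrays
# (assumes sdc1 -> md0 /boot mirror, sdc2 -> md1 LVM mirror)
mdadm /dev/md0 --add /dev/sdc1
mdadm /dev/md1 --add /dev/sdc2

# Fail and remove drive b so the arrays rebuild onto drive c
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# Watch the resync and wait for it to finish before touching anything else
cat /proc/mdstat
```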
At this point drive c has 250GB worth of partitioned space in RAID/LVM.
I would then add a 500GB drive as drive b and repeat the above, pulling out drive a.
Now I have two 500gb drives (b and c) with 250gb worth of partitions mirrored.
I am thinking I would next leave md0 (the /boot array) alone and concentrate on md1, where the whole system lives.
# mdadm --grow /dev/md1 --size=max
I believe this will grow md1 to fill the 500GB of the drive. I would then wrestle with expanding the LVM volumes that fill md1 as I wish.
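For the "wrestle with the LVMs" part, a rough sketch, assuming the grow actually succeeds and using the CentOS 5 default volume group and logical volume names (VolGroup00/LogVol00 -- substitute your own, and an ext3 filesystem is assumed):

```shell
# Tell LVM that the physical volume sitting on md1 has grown
pvresize /dev/md1

# Grow a logical volume into the new free space,
# then grow the filesystem on top of it to match
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00
```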
After that, I would add drive 'a' 500gb to the mix by cloning the partitions and then adding as a spare.
So... does this sound like the right way to upgrade from 250GB to 500GB drives on a software RAID 1?
Or, is it back to the drawing board?
Hi,
On Thu, Jul 2, 2009 at 12:52, Bob Hoffman <bob@bobhoffman.com> wrote:
> # mdadm --grow /dev/md1 --size=max
If /dev/md1 is made out of /dev/sda2 and /dev/sdb2, that will not work, as those partitions will still be the same size as they were before...
I think it would be easier to just create new partitions /dev/sda3 and /dev/sdb3, then create a new RAID1 /dev/md2, pvcreate it, and then use vgextend to add /dev/md2 to the same volume group that already contains /dev/md1.
Does that make sense to you?
Cheers, Filipe