I have a server with a stock install scheme with the exception that /boot and / are on two md raid 1 mirrors over two discs.
I need to remove those two large discs and replace them with smaller discs.
Given the lvm-over-md, big-to-small scenario, does anyone know of an automated app that can do this?
I don't want to spend the time to resize the ext3 partition, the vg, etc. and re-mirror manually...
Thanks! jlc
On 4/6/2010 1:54 PM, Joseph L. Casale wrote:
I have a server with a stock install scheme with the exception that /boot and / are on two md raid 1 mirrors over two discs.
I need to remove those two large discs and replace them with smaller discs.
Given the lvm-over-md, big-to-small scenario, does anyone know of an automated app that can do this?
I don't want to spend the time to resize the ext3 partition, the vg, etc. and re-mirror manually...
Even if there is a way to do a live migration, it's probably faster and safer to just build a new raid with or without lvm and copy the stuff over. You'll just have to reinstall grub on the new disks to make them boot.
Even if there is a way to do a live migration, it's probably faster and safer to just build a new raid with or without lvm and copy the stuff over. You'll just have to reinstall grub on the new disks to make them boot.
It needn't be live; I just haven't seen an app like clonezilla that can go big to small with lvm and md devices...
On that note, the data is all static so it's very safe:
1. I wonder if I actually could place the two new discs in
2. create two new md devices, md2/md3
3. pvmove VolGroup00 onto md3, then pvremove md1's PV; ignore the md1->md3 difference here, lvm should find the unchanged vg/lv.
4. copy md0's /boot into md2
5. edit fstab for the new /boot on md2 instead of md0
6. install grub
7. have the local guy pull the two old drives and send them back.
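Something like this maybe, untested, with the new discs assumed to show up as sdc/sdd and partitioned with a small first partition for the new /boot and the rest for the VG (all device names are guesses):

  # partition sdc/sdd first (type fd, linux raid autodetect), then:
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
  pvcreate /dev/md3
  vgextend VolGroup00 /dev/md3
  pvmove /dev/md1                      # push all extents off the old PV
  vgreduce VolGroup00 /dev/md1
  pvremove /dev/md1
  mkfs.ext3 /dev/md2
  mkdir -p /mnt/newboot                # temporary mount point, pick anything
  mount /dev/md2 /mnt/newboot
  cp -a /boot/. /mnt/newboot/

The pvmove only flies if the allocated extents actually fit on the smaller md3; if the LV is bigger than the new array I'm back to shrinking it first anyway.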
I don't have a concern about data, it has a long kickstart which actually sets everything up, it's just downtime I want to avoid.
Should work :)
On 4/6/2010 3:02 PM, Joseph L. Casale wrote:
Even if there is a way to do a live migration, it's probably faster and safer to just build a new raid with or without lvm and copy the stuff over. You'll just have to reinstall grub on the new disks to make them boot.
It needn't be live; I just haven't seen an app like clonezilla that can go big to small with lvm and md devices...
On that note, the data is all static so it's very safe:
- I wonder if I actually could place the two new discs in
- create two new md devices, md2/3
- pvmove VolGroup00 onto md3, then pvremove md1's PV; ignore the md1->md3 difference here, lvm should find the unchanged vg/lv.
- copy md0's /boot into md2
- edit fstab for the new /boot on md2 instead of md0
- install grub
- have the local guy pull the two old drives and send them back.
I don't have a concern about data, it has a long kickstart which actually sets everything up, it's just downtime I want to avoid.
If I were doing it, I'd forget lvm on the new drive and just make the md devices, mkfs them, mount them somewhere temporarily, copy stuff over with 'cp -a', 'tar | tar', 'dump | restore', 'rsync -av', etc., edit fstab to mount the new md devices for / and /boot, fix grub and swap the drives. If you have to worry about growing files, do an rsync once live, then go to single user mode and repeat (the second run will fix anything that changed and will go pretty quickly).
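Roughly, assuming the new drives come up as sdc/sdd and you go ext3 straight on md (untested, all names are guesses):

  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1   # new /boot
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2   # new /
  mkfs.ext3 /dev/md2
  mkfs.ext3 /dev/md3
  mkdir -p /mnt/new
  mount /dev/md3 /mnt/new
  mkdir /mnt/new/boot
  mount /dev/md2 /mnt/new/boot
  rsync -avx / /mnt/new/               # -x stays on this filesystem (skips /proc, /sys, /mnt/new)
  rsync -avx /boot/ /mnt/new/boot/
  # edit /mnt/new/etc/fstab so /dev/md3 is / and /dev/md2 is /boot,
  # then rerun the two rsyncs from single user mode before swapping drives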
If I were doing it, I'd forget lvm on the new drive and just make the md devices, mkfs them, mount them somewhere temporarily, copy stuff over with 'cp -a', 'tar | tar', 'dump | restor', 'rsync -av', etc., edit fstab to mount the new md devices for / and /boot, fix grub and swap the drives. If you have to worry about growing files, do an rsync once live, then go to single user mode and repeat (the second run will fix anything that changed and will go pretty quickly).
I'm sold, it really doesn't need lvm. I presume after editing fstab the nonexistent lvm config can be ignored? Never done that...
Thanks! jlc
Joseph L. Casale wrote:
If I were doing it, I'd forget lvm on the new drive and just make the md devices, mkfs them, mount them somewhere temporarily, copy stuff over with 'cp -a', 'tar | tar', 'dump | restore', 'rsync -av', etc., edit fstab to mount the new md devices for / and /boot, fix grub and swap the drives. If you have to worry about growing files, do an rsync once live, then go to single user mode and repeat (the second run will fix anything that changed and will go pretty quickly).
I'm sold, it really doesn't need lvm. I presume after editing fstab the nonexistent lvm config can be ignored? Never done that...
You can use the dump ... | restore thing with lvm; it doesn't care.
What _I_ do, anyways, is...
* build new storage however I like
* reboot to single user
* temp mount new filesystems as /new/.... (eg, / is /new, /var is /new/var, /new/home, etc)
* for each file system, dump -0uf - /dev/mapper/VolGroup..... | (cd /new/... ; restore -rvf - )
* manually fix up boot stuff, manually edit /new/etc/fstab
* umount new stuff, shut down, juggle disks, pray it works...
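Spelled out a bit, assuming the new / and /boot land on md3 and md2 and the old root LV is the stock VolGroup00/LogVol00 (adjust names to whatever yours actually are):

  mkdir -p /new
  mount /dev/md3 /new
  mkdir /new/boot
  mount /dev/md2 /new/boot
  dump -0uf - /dev/VolGroup00/LogVol00 | (cd /new ; restore -rvf - )
  dump -0uf - /dev/md0 | (cd /new/boot ; restore -rvf - )
  # then fix /new/etc/fstab and grub before shutting down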
On 4/6/2010 4:30 PM, Joseph L. Casale wrote:
If I were doing it, I'd forget lvm on the new drive and just make the md devices, mkfs them, mount them somewhere temporarily, copy stuff over with 'cp -a', 'tar | tar', 'dump | restore', 'rsync -av', etc., edit fstab to mount the new md devices for / and /boot, fix grub and swap the drives. If you have to worry about growing files, do an rsync once live, then go to single user mode and repeat (the second run will fix anything that changed and will go pretty quickly).
I'm sold, it really doesn't need lvm. I presume after editing fstab the nonexistent lvm config can be ignored? Never done that...
Not sure about that - I think all that matters is that the things in fstab can actually be mounted. There is some trick to installing grub on a disk that is going to be moved to a new position that I've forgotten, though. But you can boot the install disk in rescue mode to fix that if you get it wrong.
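The bit I half-remember is grub's device mapping: from the grub shell you can point (hd0) at whichever disk will be first once the old drives are pulled, and run setup against both new members, something like this (sdc/sdd assumed here, /boot on the first partition):

  grub                          # grub shell; device names below are guesses
  grub> device (hd0) /dev/sdc
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdd
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit

That way stage1 gets written to both new drives with references that match what the BIOS will see after the swap; and, as I said, rescue mode will bail you out if it's wrong.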