Thanks, Aleksandar, for your quick answer, but is there an easier way to do this?
Regards, Israel
Quoting israel.garcia@cimex.com.cu:
How can I build a RAID 1 out of all the partitions on sda using mdadm?
- Boot into rescue mode.
- Shrink each file system a bit, generally with the resize2fs utility. MD uses some space at the end of the partition for its metadata, and you don't want this to overwrite the end of your filesystem. Most HOWTOs on the net list this as one of the last steps (after the mirrors are created). However, if you do it as the last step, in rare cases it may lead to data loss. You probably want to be on the safe side and shrink the file systems first (this will ensure your data is safe). I'm not sure by how much you need to shrink; it should be somewhere in the documentation how many sectors at the end of the partition MD uses for its metadata...
- Create the mirror(s) using mdadm. Create each one using only one real partition and "missing" in place of the second partition (I wish the Anaconda installer allowed for something like this). Then use mdadm to add the second partition to the mirror. Something like:
# mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sda1 missing
# mdadm /dev/md0 -a /dev/sdb1
- Edit /etc/fstab file.
- You'll need to rebuild the initrd image(s) for the installed kernel(s). Just mount your filesystems somewhere, chroot into it, and run mkinitrd. Mkinitrd should be smart enough to figure out that it needs the MD device drivers by looking at the fstab file. Use the -v option to mkinitrd to make sure it included the needed drivers in the new initrd image.
- Configure the boot loader (LILO or GRUB, whichever you use) to reflect the changes. Make sure you install the boot loader into the MBR of both drives.
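The steps above can be sketched as a script. This is only a sketch under assumptions: /dev/sda1 holds the existing system, /dev/sdb1 is the matching partition on the new disk, the target resize value is just an example (compute yours first), the kernel version 2.6.9-34.EL stands in for whatever `uname -r` reports, and grub-install is one way to write the MBRs. DRY_RUN=1 (the default) only prints each command so you can review everything before running it for real from rescue mode.

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

# 1. Shrink the filesystem so the MD metadata won't overwrite its tail
#    (264960K is an example value for a 265041KB partition).
run resize2fs -p /dev/sda1 264960K

# 2. Create a degraded mirror: one real partition plus a "missing" slot.
run mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sda1 missing

# 3. Attach the second disk's partition; the kernel then resyncs the mirror.
run mdadm /dev/md0 -a /dev/sdb1

# 4. Rebuild the initrd from a chroot so the md/raid1 drivers are included
#    (-v lets you verify they really went in).
run mount /dev/md0 /mnt/sysimage
run chroot /mnt/sysimage mkinitrd -v -f /boot/initrd-2.6.9-34.EL.img 2.6.9-34.EL

# 5. Install the boot loader into the MBR of *both* drives, so the box
#    still boots if either disk dies.
run grub-install /dev/sda
run grub-install /dev/sdb
```

Running it once with the default DRY_RUN=1 gives you a checklist to compare against your own device names before anything is touched.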
-- See Ya' later, alligator! http://www.8-P.ca/
On Thu, 2006-04-13 at 08:29, israel.garcia@cimex.com.cu wrote:
Thanks, Aleksandar, for your quick answer, but is there an easier way to do this?
Back up the stuff you want to keep, re-install and set up the RAID you want in Disk Druid during the install, then put back anything you saved.
Quoting israel.garcia@cimex.com.cu:
Thanks, Aleksandar, for your quick answer, but is there an easier way to do this?
No, not really. As far as I know, there isn't any GUI interface that will do the steps behind the scenes. Anyhow, it isn't that complicated.
One thing I forgot to mention is to use fdisk to tag partitions as "Linux raid autodetect" (fd) when you are done.
The most troublesome part is finding out how much you need to shrink the file systems before building the mirrors. According to the md man page, the md superblock (or metadata) is 4KB long and its start position is aligned to 64KB. That means you will lose between 64 and 128KB of partition space when building the mirrors.
The simple way is to just assume the worst case and shrink your file systems to be 128KB smaller than the partition size. With the example below (partition size 265041KB), you would shrink it to 265041 - 128 = 264913KB, with something like "resize2fs -p /dev/sda1 264913K". You can always run resize2fs a second time after the mirrors are created to reclaim any unused space at the end of the metadevice (as if anybody will care about a couple of KB, but if it makes you happy, go for it). Just invoke resize2fs as before, but on the metadevice and without specifying a size ("resize2fs -p /dev/md0").
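That worst-case arithmetic is easy to sanity-check in the shell (the 265041KB size is the one from the fdisk listing below):

```shell
part_kb=265041                  # /dev/sda1 size in KB, from fdisk
target_kb=$(( part_kb - 128 ))  # worst case: up to 128KB lost to MD metadata
echo "shrink to ${target_kb}K"  # i.e. resize2fs -p /dev/sda1 264913K
```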
Or, if you are really into it, you can calculate exactly by how much you need to shrink the file system.
[root@wis165 ~]# fdisk /dev/sda
Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          33      265041   83  Linux
So sda1 is 265041KB in size. This gives 265041 / 64 = 4141.265625 64KB blocks. Round it *down* to 4141; never round up. You need to subtract 1 for the MD superblock, which gives 4140 usable 64KB blocks, so in this case the file system needs to be resized to 4140 * 64 = 264960KB (that is, shrunk by 81KB). So you would simply do "resize2fs -p /dev/sda1 264960K".
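The same exact calculation as shell arithmetic, so you can plug in your own partition size (a sketch; it assumes the metadata layout described above, a 4KB superblock starting on a 64KB boundary):

```shell
part_kb=265041              # partition size in KB, from fdisk
blocks=$(( part_kb / 64 ))  # whole 64KB blocks; integer division rounds DOWN
usable=$(( blocks - 1 ))    # reserve one 64KB block for the MD superblock
fs_kb=$(( usable * 64 ))    # target filesystem size in KB
echo "resize2fs -p /dev/sda1 ${fs_kb}K"
```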
Hmmm... Maybe I should publish a new HOWTO ;-)
Aleksandar Milivojevic spake the following on 4/13/2006 7:46 AM:
[quoted message snipped]
Or just set up identically sized partitions on the second drive, set up the RAID arrays with the "missing" parameter, rsync the data from one partition to the other, and edit fstab and grub.conf. Then reboot from the second drive. It should be a complete system, albeit slower because of the "failed" RAID arrays. Then you can use sfdisk to copy the partition table from the second drive to the first, and add the partitions from the first drive to the arrays. It is safer, since you never change the old drive until you are sure the second one is working, and you can always fall back or redo something up to the point where you use sfdisk. I did this on a Red Hat 9 system a couple of years ago, and did most of it while the system was "live", rsyncing data along the way until I had everything set up the way I wanted. I then booted from a rescue disk to finish up and switch the boot to the second drive, and after that it was all about rebuilding the arrays.
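That safer second-disk procedure might look roughly like this. It is a sketch under assumptions: a single root filesystem on /dev/sda1 being mirrored to /dev/sdb1, ext3, and GRUB; DRY_RUN=1 (the default) only prints the commands.

```shell
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

# Build the degraded array on the *new* disk only, then make a filesystem.
run mdadm --create /dev/md0 --run --level=1 --raid-devices=2 missing /dev/sdb1
run mke2fs -j /dev/md0

# Copy the live system across (repeat the rsync until the delta is tiny).
run mount /dev/md0 /mnt/newroot
run rsync -aHx --delete / /mnt/newroot/
# ...edit /mnt/newroot/etc/fstab and grub.conf by hand, reboot from sdb...

# Once the new disk boots cleanly, clone its partition table back to the
# old disk and add the old disk's partition to the array.
run sh -c 'sfdisk -d /dev/sdb | sfdisk /dev/sda'
run mdadm /dev/md0 -a /dev/sda1
```

The nice property is the ordering: the original disk is never modified until the last two commands, so everything before that point is reversible.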
An easy way to copy the partition structure from one disk to another is "sfdisk -d /dev/sda | sfdisk /dev/sdb", where /dev/sda is your in-production disk and /dev/sdb is your new disk. Remember to reboot after this, or use an fdisk option to re-read the partition table of /dev/sdb. After that, follow the steps Aleksandar described. I have done this procedure a few times and it is a time-consuming task. Try it first in a test environment (like VMware) before doing it yourself in production. Any doubts, ask us.
On 4/13/06, Scott Silva ssilva@sgvwater.com wrote:
[quoted message snipped]
-- Cleber P. de Souza