[CentOS] Re: Building software RAID mdmad adding a second disk
Cleber P. de Souza
cleberps at gmail.com
Mon Apr 17 04:33:03 UTC 2006
An easy way to copy the partition structure from one disk to another
is sfdisk -d /dev/sda | sfdisk /dev/sdb, where /dev/sda is your
current production disk and /dev/sdb is the new disk. Remember to
reboot afterwards, or have the kernel re-read the partition table of
/dev/sdb.
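As a sketch (device names are examples; double-check which disk is the
source and which is the target before running anything destructive):

```shell
# Dump the partition table of the in-production disk and write it to
# the new disk. WARNING: this overwrites /dev/sdb's partition table.
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Ask the kernel to re-read the new partition table, instead of
# rebooting (-R is sfdisk's re-read option).
sfdisk -R /dev/sdb
```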
After that, follow the steps Aleksandar described.
I have done this procedure a few times, and it is a time-consuming task.
Try it first in a test environment (such as VMware) before doing it
on the production system.
If you have any doubts, ask us.
On 4/13/06, Scott Silva <ssilva at sgvwater.com> wrote:
> Aleksandar Milivojevic spake the following on 4/13/2006 7:46 AM:
> > Quoting israel.garcia at cimex.com.cu:
> >
> >> Thanks, Aleksandar, for your quick answer, BUT is there another,
> >> easier way to do this?
> >
> > No, not really. As far as I know, there isn't any GUI interface that
> > will do the steps behind the scenes. Anyhow, it isn't that complicated.
> >
> > One thing I forgot to mention is to use fdisk to tag partitions as
> > "Linux raid autodetect" (fd) when you are done.
> >
> > The most troublesome part is to find out how much you need to shrink
> > file systems before building mirrors. According to md man page, md
> > superblock (or metadata) is 4KB long and its start position is 64KB
> > aligned. That means you will lose between 64 and 128KB of partition
> > space when building mirrors.
> >
> > The simple way is to just assume the worst case and shrink your file
> > systems to be 128KB smaller than the partition size. Assuming the example
> > below (partition size 265041KB), you would shrink it to 265041 - 128 =
> > 264913KB. Something like "resize2fs -p /dev/sda1 264913K". You can
> > always run resize2fs a second time after the mirrors are created to
> > reclaim any unused space at the end of the metadevice (as if anybody will
> > care about a couple of KB, but if it makes you happy, go for it). Just
> > invoke resize2fs as before, but this time do not specify a size
> > ("resize2fs -p /dev/sda1").
> >
> > Or if you are really into it, you can calculate exactly by how much you
> > need to shrink file system.
> >
> > [root at wis165 ~]# fdisk /dev/sda
> > Command (m for help): p
> >
> > Disk /dev/sda: 80.0 GB, 80026361856 bytes
> > 255 heads, 63 sectors/track, 9729 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >
> > Device     Boot   Start     End    Blocks  Id  System
> > /dev/sda1  *          1      33    265041  83  Linux
> >
> > So sda1 is 265041KB in size. This gives 265041 / 64 = 4141.265625
> > 64KB blocks. Round it *down* to 4141. Never round up. You need to
> > subtract 1 for the MD superblock, which gives 4140 usable 64KB blocks, so
> > in this case the file system needs to be resized to 4140 * 64 = 264960KB
> > (in this case the file system will be shrunk by 81KB). So you would
> > simply do "resize2fs -p /dev/sda1 264960K".
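The arithmetic above can be sketched as a small shell computation
(265041 is the partition size, in KB, from the fdisk output above):

```shell
#!/bin/sh
# Partition size in KB, from fdisk's "Blocks" column.
part_kb=265041

# Number of whole 64KB-aligned blocks; integer division rounds down.
blocks=$((part_kb / 64))

# Reserve one 64KB block for the md superblock; the rest is usable.
target_kb=$(( (blocks - 1) * 64 ))

# Size to pass to resize2fs, in KB.
echo "$target_kb"   # prints 264960
```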
> >
> > Hmmm... Maybe I should publish a new HOWTO ;-)
> >
> > --See Ya' later, alligator!
> > http://www.8-P.ca/
> Or just set up identical sized partitions on second drive, set up raid arrays
> with "missing" parameter, rsync data from one partition to the other, edit
> fstab and grub.conf.
> Reboot from the second drive. Should be a complete system, albeit slower
> because of the "failed" raid arrays. Then you can use sfdisk to copy partition
> data from second drive to first, and add the partitions from first drive to
> second.
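The degraded-mirror step above can be sketched with mdadm (device
names and partition numbers are illustrative):

```shell
# Create a RAID-1 array from the new disk's partition, with a
# "missing" slot reserved for the old disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# Later, once the system boots and runs from the array, add the old
# disk's partition and let the mirror resync.
mdadm --add /dev/md0 /dev/sda1
```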
> It is safer, as you never change the old drive until you are sure the second
> is working, and you can always fall back or redo something up to the point
> that you use sfdisk. I did this on a RedHat 9 system a couple of years ago,
> and did most of it while the system was "live", rsyncing data along the way
> until I had everything set up the way I wanted. I then booted from a rescue
> disk to finish up and change the boot to the second drive, and then it was all
> about rebuilding the arrays.
>
> --
>
> MailScanner is like deodorant...
> You hope everybody uses it, and
> you notice quickly if they don't!!!!
>
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos
>
--
Cleber P. de Souza