Hi All,
For quite some time I've used RAID 1 as a means of providing a rollback mechanism for an upgrade (which I learned from others long ago). Essentially, before an upgrade you split the mirrors and upgrade one side or the other. If your upgrade goes well you sync one way; if it does not, you sync the other (much hand waving and chanting going on, as it's more complicated than that, but that is the essence of the solution).
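For the curious, the mdadm end of that dance is roughly the following (a stripped-down sketch; the device names are just illustrative and I'm glossing over the filesystem and boot details):

# Before the upgrade: pull one half out of the mirror so it keeps
# a pristine copy of the pre-upgrade system.
mdadm --fail /dev/md0 /dev/sdb1
mdadm --remove /dev/md0 /dev/sdb1

# ...upgrade the side still in the array...

# Upgrade went well: re-add the detached disk and it resyncs from
# the upgraded side.
mdadm --add /dev/md0 /dev/sdb1

# Upgrade went badly: boot from the untouched half instead and
# resync in the other direction.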
Recently, I was asked to do the same thing but with a RAID 1+0 solution. It's easy enough to break the RAID 1 volumes underneath, but then how do I use the broken-off volumes to form the duplicate stripe? Pictures may help. We start off looking like:
/----------- Raid 0 Volume ----------\
|  [disk 0]<---R 1--->[disk 2]       |
|                                    |
|  [disk 1]<---R 1--->[disk 3]       |
\------------------------------------/
What we want to go to is:
/--- Raid 0 ---\      /--- Raid 0 ---\
|   [disk 0]   |      |   [disk 2]   |
|              |      |              |
|   [disk 1]   |      |   [disk 3]   |
\--------------/      \--------------/
   Old System            New System
Is this possible with the current set of mdadm tools?
Thanks...james
----- "James Olin Oden" james.oden@gmail.com escreveu:
> For quite some time I've used RAID 1 as a means of providing a rollback mechanism for an upgrade (which I learned from others long ago). [...]
Nice, I hadn't thought of this before... I'll give it a try :)
> Recently, I was asked to do the same thing but with a RAID 1+0 solution. [...] Is this possible with the current set of mdadm tools?
Hmm... I don't know if it's possible, but IMHO it would be much easier to do if you use RAID 0+1 instead of a 1+0 scheme :)
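Untested, but with 0+1 the split looks trivial, something like this (device names are just examples):

# Two RAID 0 stripes...
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

# ...mirrored on top. Failing /dev/md1 out of md2 later hands you a
# complete standalone stripe to upgrade against.
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1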
Antonio.
Antonio da Silva Martins Junior wrote:
----- "James Olin Oden" james.oden@gmail.com escreveu:
For quite some time I've used raid 1 as a means of providing a rollback mechanism for an upgrade (which I learned from others long ago). So essentially, before an upgrade you split the mirrors and upgrade one side or the other. If your upgrade goes well you sync one way, if your upgrade does not you sync the other (much
hand waving
and chanting going on, as its more complicated than that,
but that is the
essence of the solution).
Nice, I didn't think on this before... Will make a try :)
Recently, I was asked to do the same thing but with a raid 1+0 solution. Its easy, enough to break the raid 1 volumes underneath, but then how do I use the broke off volumes to form the
duplicate strip.
Pictures may help. We start off looking like:
/----------- Raid 0 Volume ----------\ | [disk 0]<---R 1--->[disk 2] | | | | [disk 1]<---R 1--->[disk 3] | --------------------------------------------/
What we want to go to is:
/--- Raid 0 ---\ /--- Raid 0 ---\ | [disk 0] | | [disk 2] | | | | | | [disk 1] | | [disk 3] | -----------------/ ------------------/ Old System New System
Is this possible with the current set of mdadm tools?
> Hmm... I don't know if it's possible, but IMHO it would be much easier to do if you use RAID 0+1 instead of a 1+0 scheme :)
I would start by looking at LVM: mirror the LV to another, break the mirror, upgrade the LV, and if that doesn't work, switch over to the working LV.
Or try a solution with snapshots...
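Roughly like this (untested, the names are placeholders, and I haven't checked how cleanly lvconvert hands the detached leg back):

# Mirror the LV, then drop the extra leg just before upgrading:
lvconvert -m1 vg1/rootlv
lvconvert -m0 vg1/rootlv

# Or snapshot it, and copy the data back out of the snapshot if the
# upgrade goes badly:
lvcreate -s -L 10G -n rootlv_pre /dev/vg1/rootlv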
-Ross
On 10/19/07, James Olin Oden james.oden@gmail.com wrote:
> For quite some time I've used RAID 1 as a means of providing a rollback mechanism for an upgrade (which I learned from others long ago). [...] Recently, I was asked to do the same thing but with a RAID 1+0 solution. [...] Is this possible with the current set of mdadm tools?
Hi All,
So after _much_ research I know how to do this. What makes the whole thing so difficult is that after you split the mirror, you have a physical volume carrying all the metadata for the volume group that lived on the original multi-device. In other words, you have metadata that conflicts with the original's regarding which multi-device it belongs to and regarding the UUIDs of all its components.
What I basically ended up doing, via various machinations, was to modify the LVM metadata so that the name of the volume group, its VG UUID, and its PV UUID were all changed.
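You can see the conflict for yourself right after the split; something like this should show both halves claiming the same PV UUID and volume group (exact output varies with your LVM version):

pvs -o pv_name,pv_uuid,vg_name
vgs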
So here is the basic process (for simplicity, I have an md device, md1, made up of two members, sda1 and sdb1, with a volume group vg1 on top of it and various logical volumes cut out of that):
* Make sure the mirror is not syncing; if it is, wait until it finishes. This is done by examining /proc/mdstat (see the note after the split commands below).
* Remount the filesystems on the volume group read-only (mount -o ro,remount ...).
* Split the mirror:
mdadm --fail /dev/md1 /dev/sdb1
mdadm --remove /dev/md1 /dev/sdb1
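(As an aside, for the first step above I just look for a resync/recovery line in /proc/mdstat; something like the following also works, though the exact output varies by mdadm version:)

cat /proc/mdstat
mdadm --detail /dev/md1 | grep -iE 'state|rebuild'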
* Create the new mirror:
mdadm --create --level=1 --raid-devices=2 --run /dev/md2 /dev/sdb1 missing
* Get a backup of the original volume group's metadata:
vgcfgbackup --file /tmp/vg1.meta /dev/vg1
* Convert the backup file to have a new volume group name, a new volume group UUID, a new physical volume UUID, and the new multi-device for the physical volume (a sketch of one way to script this is at the end of this message).
* Now recreate the physical volume metadata with the new UUID:
pvcreate --uuid bL1zwN-ILoA-2src-4S4e-8MFy-UgY3-8GS2wN -ff /dev/md2
Note: the --uuid option makes pvcreate non-destructive; it only replaces the metadata.
* Restore the volume group metadata using the transformed backup:
vgcfgrestore --file /tmp/vg1.meta /dev/vg2
* Make the new volume group active:
vgchange -a y /dev/vg2
At this point both volume groups are up and active, each holding an exact duplicate of the data. I have tested this with a simple case, but not with a volume group containing the root filesystem; I'm pretty sure it would work in that case too.
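For completeness, the conversion step above looks roughly like this. It's a sketch only: the OLD-*/NEW-* UUIDs are made-up placeholders, so generate real ones yourself, keep the new PV UUID in sync with the one you feed pvcreate, and sanity-check the edited file before restoring:

# Edit /tmp/vg1.meta in place: new VG name, new VG UUID, new PV UUID,
# and point the PV at the new multi-device.
sed -i -e 's/^vg1 {/vg2 {/' \
       -e 's/OLD-VG-UUID/NEW-VG-UUID/' \
       -e 's/OLD-PV-UUID/bL1zwN-ILoA-2src-4S4e-8MFy-UgY3-8GS2wN/' \
       -e 's|/dev/md1|/dev/md2|' /tmp/vg1.meta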
Thanks...james