Hi all!
One of these days I'm going to upgrade my CentOS 5.7 box (which is currently configured with RAID-1 on two drives) to 6.x.
I've been given to believe that I should be able to do a fresh install on top of the existing RAID setup, i.e., without having to re-create the RAID array, just reuse the existing one.
And somewhere in the last few days (on this list, I think) someone posted that Anaconda should detect the existing RAID array and allow you to reuse it for a new install.
So, just as an experiment, I booted up the 6.2 DVD (not live) and took it as far as the partitioning step. Was I unduly surprised when it DID NOT detect the existing RAID? Well, a little. More in the category of NOT ENTIRELY PLEASED, actually.
I'd appreciate it if someone who knows something about linux hardware raid, and its intersection with Anaconda, could drop a few clues in my direction.
thanks in advance!
On Thu, Jan 19, 2012 at 3:30 PM, fred smith fredex@fcshome.stoneham.ma.us wrote:
Hi all!
One of these days I'm going to upgrade my CentOS 5.7 box (which is currently configured with RAID-1 on two drives) to 6.x.
I've been given to believe that I should be able to do a fresh install on top of the existing RAID setup, i.e., without having to re-create the RAID array, just reuse the existing one.
And somewhere in the last few days (on this list, I think) someone posted that Anaconda should detect the existing RAID array and allow you to reuse it for a new install.
So, just as an experiment, I booted up the 6.2 DVD (not live) and took it as far as the partitioning step. Was I unduly surprised when it DID NOT detect the existing RAID? Well, a little. More in the category of NOT ENTIRELY PLEASED, actually.
I'd appreciate it if someone who knows something about linux hardware raid, and its intersection with Anaconda, could drop a few clues in my direction.
I tried to boot a 5.x system with several raid1 sets with a 6.0 live dvd and not only did it not detect/match up the mirrors, it renamed and broke them so they no longer worked on 5.x. I think there are major differences in the way the kernel handles things. I probably won't try to access existing arrays again unless someone else reports success, and even then I'll probably pull one of the drives of each set until I'm convinced it is safe to sync them back.
On 01/19/2012 01:53 PM, Les Mikesell wrote:
I tried to boot a 5.x system with several raid1 sets with a 6.0 live dvd and not only did it not detect/match up the mirrors, it renamed and broke them so they no longer worked on 5.x. I think there are major differences in the way the kernel handles things. I probably won't try to access existing arrays again unless someone else reports success, and even then I'll probably pull one of the drives of each set until I'm convinced it is safe to sync them back.
As far as I know, it's not a kernel version thing... If you attach disks with partitions using MD 0.90 metadata (which Anaconda will create) to another system, or boot live media in a system with such disks, the kernel may modify the minor number in the MD metadata. To fix that, you'll need to stop the volumes, then assemble them manually using the device name that you want and the --update=super-minor argument.
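For example, if an array that should be /dev/md0 came up as /dev/md127, the repair would look roughly like this (just a sketch; the md device and partition names are placeholders, so substitute your own):

    # stop the misnamed array
    mdadm --stop /dev/md127
    # reassemble it under the name you want, rewriting the preferred
    # minor stored in the 0.90 superblock at the same time
    mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdb1

Once the preferred minor in the metadata matches the device name again, autodetection on the old system should bring the array up under the same name on the next boot.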
On Sat, Jan 21, 2012 at 5:29 PM, Gordon Messmer yinyang@eburg.com wrote:
On 01/19/2012 01:53 PM, Les Mikesell wrote:
I tried to boot a 5.x system with several raid1 sets with a 6.0 live dvd and not only did it not detect/match up the mirrors, it renamed and broke them so they no longer worked on 5.x. I think there are major differences in the way the kernel handles things. I probably won't try to access existing arrays again unless someone else reports success, and even then I'll probably pull one of the drives of each set until I'm convinced it is safe to sync them back.
As far as I know, it's not a kernel version thing... If you attach disks with partitions using MD 0.90 metadata (which Anaconda will create) to another system, or boot live media in a system with such disks, the kernel may modify the minor number in the MD metadata. To fix that, you'll need to stop the volumes, then assemble them manually using the device name that you want and the --update=super-minor argument.
I have moved raid sets to other machines (of similar distribution revs) and never had a problem before. In fact I have fairly regularly split raid1 volumes to different machines and re-synced with new mirrors and never had any surprises before. The real issue with this box was that the disks are all swappable and I had used raid with autodetect specifically so I didn't have to track which disk was where. And after booting the live dvd, they became more or less randomly named md devices, with each disk of the set becoming its own md device instead of pairing. Recovering was fairly painful.
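To be clear, the split-and-resync I'm talking about is basically this (device names made up for illustration):

    # detach one half of the mirror
    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1
    # ...use that disk elsewhere, then later add a disk back and resync
    mdadm /dev/md0 --add /dev/sdb1
    # watch the rebuild progress
    cat /proc/mdstat

and that has always behaved predictably for me until this live-DVD episode.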
On 01/21/2012 05:52 PM, Les Mikesell wrote:
I have moved raid sets to other machines (of similar distribution revs) and never had a problem before.
I wouldn't expect you to. If you move volumes other than the boot volume, it doesn't matter that they get a new device name.
In fact I have fairly regularly split raid1 volumes to different machines and re-synced with new mirrors and never had any surprises before. The real issue with this box was that the disks are all swappable and I had used raid with autodetect specifically so I didn't have to track which disk was where. And after booting the live dvd, they became more or less randomly named md devices, with each disk of the set becoming its own md device instead of pairing. Recovering was fairly painful.
Well, you haven't given us enough information to really explain what you saw. What I'd expect is that your MD devices were moved to /dev/md126, /dev/md127, etc. Those names aren't random; they're sequential, reflecting the assigned device minor number starting at minor number 126. "dmesg" output might explain why the RAID sets weren't assembled... I've never seen that happen.
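If you do try it again, something along these lines would show what the kernel and mdadm made of the disks (rough sketch; adjust the device names to your layout):

    # which arrays were assembled, and under what names
    cat /proc/mdstat
    # the UUID and preferred minor stored in a member's superblock
    mdadm --examine /dev/sda1
    # kernel messages about md autodetection and assembly
    dmesg | grep -i md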
On Sat, Jan 21, 2012 at 9:49 PM, Gordon Messmer yinyang@eburg.com wrote:
On 01/21/2012 05:52 PM, Les Mikesell wrote:
I have moved raid sets to other machines (of similar distribution revs) and never had a problem before.
I wouldn't expect you to. If you move volumes other than the boot volume, it doesn't matter that they get a new device name.
In fact I have fairly regularly split raid1 volumes to different machines and re-synced with new mirrors and never had any surprises before. The real issue with this box was that the disks are all swappable and I had used raid with autodetect specifically so I didn't have to track which disk was where. And after booting the live dvd, they became more or less randomly named md devices, with each disk of the set becoming its own md device instead of pairing. Recovering was fairly painful.
Well, you haven't given us enough information to really explain what you saw. What I'd expect is that your MD devices were moved to /dev/md126, /dev/md127, etc. Those names aren't random; they're sequential, reflecting the assigned device minor number starting at minor number 126. "dmesg" output might explain why the RAID sets weren't assembled... I've never seen that happen.
Yes, they were renamed with those unexpected names. I didn't really spend much time figuring it out, since I thought things would work normally when rebooted with the existing 5.x system. They didn't - the new names stuck, including the ones given to the 'other half' of each mirror.
On 01/19/2012 10:30 PM, fred smith wrote:
Hi all!
One of these days I'm going to upgrade my CentOS 5.7 box (which is currently configured with RAID-1 on two drives) to 6.x.
I've been given to believe that I should be able to do a fresh install on top of the existing RAID setup, i.e., without having to re-create the RAID array, just reuse the existing one.
And somewhere in the last few days (on this list, I think) someone posted that Anaconda should detect the existing RAID array and allow you to reuse it for a new install.
So, just as an experiment, I booted up the 6.2 DVD (not live) and took it as far as the partitioning step. Was I unduly surprised when it DID NOT detect the existing RAID? Well, a little. More in the category of NOT ENTIRELY PLEASED, actually.
I'd appreciate it if someone who knows something about linux hardware raid, and its intersection with Anaconda, could drop a few clues in my direction.
thanks in advance!
Are you sure the LiveDVD has the mdadm service installed and running? LiveCD 5.3 was missing it; I have not looked at 6.x yet.
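From the booted live environment you could check with something like this (just a suggestion):

    # is the mdadm package even present?
    rpm -q mdadm
    # did anything get assembled automatically?
    cat /proc/mdstat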
On Fri, Jan 20, 2012 at 02:14:16AM +0100, Ljubomir Ljubojevic wrote:
On 01/19/2012 10:30 PM, fred smith wrote:
Hi all!
One of these days I'm going to upgrade my CentOS 5.7 box (which is currently configured with RAID-1 on two drives) to 6.x.
I've been given to believe that I should be able to do a fresh install on top of the existing RAID setup, i.e., without having to re-create the RAID array, just reuse the existing one.
And somewhere in the last few days (on this list, I think) someone posted that Anaconda should detect the existing RAID array and allow you to reuse it for a new install.
So, just as an experiment, I booted up the 6.2 DVD (not live) and took it as far as the partitioning step. Was I unduly surprised when it DID NOT detect the existing RAID? Well, a little. More in the category of NOT ENTIRELY PLEASED, actually.
I'd appreciate it if someone who knows something about linux hardware raid, and its intersection with Anaconda, could drop a few clues in my direction.
thanks in advance!
Are you sure the LiveDVD has the mdadm service installed and running? LiveCD 5.3 was missing it; I have not looked at 6.x yet.
This was NOT the live DVD:
CentOS-6.2-x86_64-bin-DVD1.iso
And in my original post, when I said "...knows something about linux hardware raid..." I had meant to say "software raid". DUH.
On 01/20/2012 03:57 AM, fred smith wrote:
This was NOT the live DVD:
Yeah, sorry, I misread it; I probably read it too fast, skipping parts. Yesterday there were 58 centos-users mails waiting for me, so...
On 01/19/2012 01:30 PM, fred smith wrote:
And somewhere in the last few days (on this list, I think) someone posted that Anaconda should detect the existing RAID array and allow you to reuse it for a new install.
So, just as an experiment, I booted up the 6.2 DVD (not live) and took it as far as the partitioning step. Was I unduly surprised when it DID NOT detect the existing RAID? Well, a little. More in the category of NOT ENTIRELY PLEASED, actually.
I'm probably the person that you're referring to. Anaconda has to be able to detect existing RAID sets in order to perform upgrades.
In order for Anaconda to present you with the existing sets, you must select an upgrade (in which case you won't see any partitioning stage) or you must use the graphical installer and select "Create custom layout". The text mode installer can no longer do this.
On Sat, Jan 21, 2012 at 03:20:00PM -0800, Gordon Messmer wrote:
On 01/19/2012 01:30 PM, fred smith wrote:
And somewhere in the last few days (on this list, I think) someone posted that Anaconda should detect the existing RAID array and allow you to reuse it for a new install.
So, just as an experiment, I booted up the 6.2 DVD (not live) and took it as far as the partitioning step. Was I unduly surprised when it DID NOT detect the existing RAID? Well, a little. More in the category of NOT ENTIRELY PLEASED, actually.
I'm probably the person that you're referring to. Anaconda has to be able to detect existing RAID sets in order to perform upgrades.
In order for Anaconda to present you with the existing sets, you must select an upgrade (in which case you won't see any partitioning stage) or you must use the graphical installer and select "Create custom layout". The text mode installer can no longer do this.
Ah. I did graphical, but I think I chose one of the 'reuse' options. I'll look into trying the custom layout soon.
thanks!