Hi Folks,
I've inherited an old RH7 system that I'd like to upgrade to CentOS 6.1 by wiping it clean and doing a fresh install. However, the system has a software raid setup that I wish to keep untouched, as it has data on it that I must keep. Or at the very least, TRY to keep. If all else fails, then so be it and I'll just recreate the thing. I do plan on backing up the data first in case of disasters, but I'm hoping I won't have to, considering there's some 500 GiB on it.
The previous owner sent me a breakdown of how they built the raid when it was first done. I've included an explanation below this message with the various command outputs. Apparently their reason for doing it the way they did was so they could easily add drives to the raid and grow everything equally. It just seems a bit convoluted to me.
Here's my problem: I have no idea what the necessary steps are to recreate it, or in what order. I presume it's pretty much the way they explained it to me:

- create partitions
- use mdadm to create the various md volumes
- use pvcreate to create the various physical volumes
- use lvcreate to create the two logical volumes
If that's the case, great. However, can I perform a complete system wipe, install CentOS 6.1, and re-attach the raid and mount the logical volumes without much trouble?
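(For what it's worth, my hope is that the re-attach amounts to something like the following on the new install. This is a guess on my part, assuming the md superblocks survive, CentOS 6 can read them, and the new install doesn't reuse the VolGroup00 name; /mnt/data is just a placeholder mount point.)

    # assemble every array that can be found from the on-disk superblocks
    mdadm --assemble --scan
    # record the arrays so they come back on reboot
    mdadm --detail --scan >> /etc/mdadm.conf
    # have LVM rediscover the volume group and activate it
    vgscan
    vgchange -ay VolGroup00
    # then mount the logical volumes wherever they belong
    mount /dev/VolGroup00/LogVol00 /mnt/data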
What follows is the current setup, or at least, the way it was originally configured. The system has 5 drives in it:
sda = main OS drive (80 GiB)
sdb, sdc, sdd, sde = raid drives, 500 GiB each
The raid setup, as it was explained to me, was done something like this:
First, the four drives were each partitioned into 10 equal-size partitions. fdisk shows me this:
fdisk -l /dev/sdb

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        6080    48837568+  83  Linux
/dev/sdb2            6081       12160    48837600   83  Linux
/dev/sdb3           12161       18240    48837600   83  Linux
/dev/sdb4           18241       60801   341871232+   5  Extended
/dev/sdb5           18241       24320    48837568+  83  Linux
/dev/sdb6           24321       30400    48837568+  83  Linux
/dev/sdb7           30401       36480    48837568+  83  Linux
/dev/sdb8           36481       42560    48837568+  83  Linux
/dev/sdb9           42561       48640    48837568+  83  Linux
/dev/sdb10          48641       54720    48837568+  83  Linux
/dev/sdb11          54721       60800    48837568+  83  Linux
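(If I ever do have to redo the partitioning, I assume something like this would replicate sdb's table onto the other three drives. An untested sketch on my part, using the old sfdisk syntax that ships with these EL-era systems:)

    # dump sdb's partition table, then replay it onto the other raid drives
    sfdisk -d /dev/sdb > sdb.layout
    sfdisk /dev/sdc < sdb.layout
    sfdisk /dev/sdd < sdb.layout
    sfdisk /dev/sde < sdb.layout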
Then they took each partition on one drive and linked it with the matching partition on the other three drives. So when I look at mdadm for each /dev/md[0-9] device, I see this:
mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Wed Aug 29 07:01:34 2007
     Raid Level : raid5
     Array Size : 146512128 (139.72 GiB 150.03 GB)
  Used Dev Size : 48837376 (46.57 GiB 50.01 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Jan 17 13:49:49 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : 43d48349:b58e26df:bb06081a:68db4903
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
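(So if I understand it right, each array would originally have been created with something like the following. This is my reconstruction from the --detail output above, not their actual commands; mdadm of that era defaulted to 0.90 superblocks:)

    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=256 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # ...and likewise for md1 through md9, using partitions 2, 3, and 5-11
    # (partition 4 is the extended container, so it has no array)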
And pvscan says:
pvscan
  PV /dev/md0   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md1   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md2   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md3   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md4   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md5   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md6   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md7   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md8   VG VolGroup00   lvm2 [139.72 GB / 0    free]
  PV /dev/md9   VG VolGroup00   lvm2 [139.72 GB / 139.72 GB free]
  Total: 10 [1.36 TB] / in use: 10 [1.36 TB] / in no VG: 0 [0   ]
(Evidently /dev/md9 isn't being used ... an emergency spare?) And from there, they created the two logical volumes, which lvscan says are:
lvscan
  ACTIVE            '/dev/VolGroup00/LogVol00' [1.09 TB] inherit
  ACTIVE            '/dev/VolGroup00/LogVol01' [139.72 GB] inherit
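(And presumably the LVM layer was built up roughly like this. Again, my reconstruction rather than their actual commands; the sizes are approximate, since the real ones would have been specified in extents:)

    pvcreate /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 \
             /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9
    vgcreate VolGroup00 /dev/md0 /dev/md1 /dev/md2 /dev/md3 /dev/md4 \
             /dev/md5 /dev/md6 /dev/md7 /dev/md8 /dev/md9
    # LogVol00 spans eight PVs' worth of space, LogVol01 one PV,
    # leaving md9's worth of space free
    lvcreate -L 1116G -n LogVol00 VolGroup00
    lvcreate -L 139G -n LogVol01 VolGroup00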
On 01/17/12 1:30 PM, Ashley M. Kirchner wrote:
> I've inherited an old RH7 system that I'd like to upgrade to CentOS 6.1
> by wiping it clean and doing a fresh install. However, the system has a
> software raid setup that I wish to keep untouched, as it has data on it
> that I must keep.
Frankly, I'd temporarily hang a 1 TB drive on that thing, format it as a simple volume, and back up your file systems to it; that raid is a *MESS*. It would make much more sense to have one partition on each physical disk be a member of a single MD raid5, then put that md in the volgroup, rather than having ten separate raid sets. I can only imagine they did it the way they did due to limitations of that ancient Linux kernel in RH Linux 7.x (early kernel 2.4, I believe).
But a newer Linux kernel should see those md volumes, and should be able to import the LVM VG on them, if you really want to keep it intact.
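If you do rebuild, something along these lines would be the simple version (a sketch, not exact commands; "datavg" and "data" are just names I made up, use whatever suits you):

    # one big partition per disk, a single raid5 across all four
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # one PV, one VG, one LV spanning the lot
    pvcreate /dev/md0
    vgcreate datavg /dev/md0
    lvcreate -l 100%FREE -n data datavg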
On 1/17/2012 2:51 PM, John R Pierce wrote:
> It would make much more sense to have one partition on each physical
> disk be a member of a single MD raid5, then put that md in the volgroup,
> rather than having ten separate raid sets. But a newer Linux kernel
> should see those md volumes, and should be able to import the LVM VG on
> them, if you really want to keep it intact.
While I absolutely agree that it's a mess, I'm sure there was a method to the madness. I think originally they were thinking of smaller volume sizes, raided in mirror mode, so that when a drive fails, only a small slice of data would be lost. This is all theoretical on my part.
The interesting thing is, there's a CentOS 5.7 (Final) system in the mix that's also set up the exact same way. The F7 is merely a mirror backup of the CentOS machine. (I realize now it's not an RH7 but Fedora 7, running kernel 2.6.23.17-88.fc7.)
Either way, I do think a rebuild is in order, if only to simplify the raid. I think what I'm going to do is rebuild the raid on the mirror system and upgrade it first, mirror the main system to it, then swap machines while I address the main one, which is running CentOS. Once that's back up and running, swap them back, because the CentOS system is actually the larger, more powerful one (8 cores versus 2 cores on the mirror system).
On 01/17/2012 01:30 PM, Ashley M. Kirchner wrote:
> I've inherited an old RH7 system that I'd like to upgrade to CentOS 6.1
> by wiping it clean and doing a fresh install. However, the system has a
> software raid setup that I wish to keep untouched as it has data on it
> that I must keep.
If you boot the CentOS installer, it should detect any existing RAID and LVM volumes. You'll be able to select individual filesystems to mount in the new system, and optionally format them. Assuming that your data is on a volume of its own, you can select the "system" filesystems and format only those.
You shouldn't have to manually recreate anything.
Just make sure you have a verified backup before you do anything!!
If it's not backed up data, it's not important data.
I don't remember what version of the ext filesystem was current in the RH7 days, but I would seriously consider dumping the raid and reloading it onto a newly formatted ext4 filesystem.
There may be good reasons (or bad ones) why you really can't wipe everything and reload. I'm just suggesting you think long and hard about it.
Also, the hard drives in a system that old have got to be really tired. Consider new drives. In fact, even a low-end new system will seriously outperform a system that old. Even a pure softraid raid1 would do so and be more reliable.
Good Luck
On 1/17/2012 9:16 PM, Raymond Lillard wrote:
> Just make sure you have a verified backup before you do anything!!
>
> If it's not backed up data, it's not important data.
This particular system is actually a mirror of a production server running CentOS 5.7 (Final), which is also configured the exact same way as far as the raid goes. The data is already on a different machine; the one I'll be working on is the backup to the main server. So even if I don't create a backup, it won't be a total loss, as I'd just have to mirror it from the main system again. That being said, I do plan on backing it up anyway, just to cover my ass. In a moment of stupidity, I could very easily reverse the rsync command and end up deleting everything from the main server instead of mirroring it. :)
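(To guard against exactly that, I'll probably dry-run it first, something like this. The hostname and paths here are made up for illustration:)

    # -n makes it a dry run: show what would be transferred or deleted,
    # without touching anything
    rsync -avn --delete mainserver:/export/data/ /export/data/
    # only once the dry run looks sane, run it for real
    rsync -av --delete mainserver:/export/data/ /export/data/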
> I don't remember what version of the ext filesystem was current in the
> RH7 days, but I would seriously consider dumping the raid and reloading
> it onto a newly formatted ext4 filesystem.
Agreed, but I'm also considering redoing the raid. I still can't come up with a good reason why it was created the way it was. I'm sure they had their reasons at the time.
On 01/19/2012 08:50 PM, Ashley M. Kirchner wrote:
> Agreed, but I'm also considering redoing the raid. I still can't come
> up with a good reason why it was created the way it was. I'm sure they
> had their reasons at the time.
You can create an mdadm "RAID 10 far" array and create separate partitions on top of it that will mimic the original/old raid. You can also first create partitions and then create a "RAID 10 far" array for each partition.
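Something like this, as a sketch (f2 is the "far" layout with 2 copies):

    mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1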
On 1/19/2012 5:01 PM, Ljubomir Ljubojevic wrote:
> You can create an mdadm "RAID 10 far" array and create separate
> partitions on top of it that will mimic the original/old raid.
To what point? I don't really care for how they were done. After I make sure everything is 100% mirrored, I'm just going to blow it away and start over.