Sorry, this is going to be a rather long post... Here's the situation: I have 4 IDE disks from an old Snap Server that fails to mount its RAID array. We believe there is a controller error on the SNAP, so we've put the disks in another box running CentOS 5 and can see them OK.
hda through hdd all look like this:
Disk /dev/hdd: 185.2 GB, 185283624960 bytes
255 heads, 63 sectors/track, 22526 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1           2       16041+  83  Linux
/dev/hdd2               3          70      546210   83  Linux
/dev/hdd3              71         138      546210    5  Extended
/dev/hdd4             139       21781   173844468   83  Linux
/dev/hdd5              71         104      273104+  83  Linux
/dev/hdd6             105         138      273104+  83  Linux
I can mount hdX1 and hdX2 (hdX2 is xfs) on all disks.
Now /etc/raidtab (taken from one of the hdX2 partitions) has the following entry, which I'd like to re-create on the CentOS box:
raiddev /dev/md0
        raid-level              5
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              64
        device                  /dev/hda4
        raid-disk               0
        device                  /dev/hdc4
        raid-disk               1
        device                  /dev/hde4
        raid-disk               2
        device                  /dev/hdg4
        raid-disk               3
I've also checked the superblocks:
/dev/hda4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a72e334f:a9ccba81:bba665d8:82a3c378
  Creation Time : Mon Jun 30 12:00:28 2003
     Raid Level : raid5
  Used Dev Size : 173844352 (165.79 GiB 178.02 GB)
     Array Size : 521533056 (497.37 GiB 534.05 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Tue May 19 21:40:43 2009
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
       Checksum : cbe14089 - correct
         Events : 0.22

         Layout : left-asymmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       3        4        0      active sync   /dev/hda4

   0     0       3        4        0      active sync   /dev/hda4
   1     1      22        4        1      faulty   /dev/hdc4
   2     2       0        0        2      faulty removed
   3     3      34        4        3      active sync

/dev/hdb4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a72e334f:a9ccba81:bba665d8:82a3c378
  Creation Time : Mon Jun 30 12:00:28 2003
     Raid Level : raid5
  Used Dev Size : 173844352 (165.79 GiB 178.02 GB)
     Array Size : 521533056 (497.37 GiB 534.05 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Sat Aug 2 19:26:28 2008
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : ca62ce2c - correct
         Events : 0.21

         Layout : left-asymmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1      22        4        1      active sync   /dev/hdc4

   0     0       3        4        0      active sync   /dev/hda4
   1     1      22        4        1      active sync   /dev/hdc4
   2     2       0        0        2      faulty removed
   3     3      34        4        3      active sync

/dev/hdc4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a72e334f:a9ccba81:bba665d8:82a3c378
  Creation Time : Mon Jun 30 12:00:28 2003
     Raid Level : raid5
  Used Dev Size : 173844352 (165.79 GiB 178.02 GB)
     Array Size : 521533056 (497.37 GiB 534.05 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sun Jul 22 22:33:00 2007
          State : active
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : c871f493 - correct
         Events : 0.18

         Layout : left-asymmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2      33        4        2      active sync

   0     0       3        4        0      active sync   /dev/hda4
   1     1      22        4        1      active sync   /dev/hdc4
   2     2      33        4        2      active sync
   3     3      34        4        3      active sync

/dev/hdd4:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : a72e334f:a9ccba81:bba665d8:82a3c378
  Creation Time : Mon Jun 30 12:00:28 2003
     Raid Level : raid5
  Used Dev Size : 173844352 (165.79 GiB 178.02 GB)
     Array Size : 521533056 (497.37 GiB 534.05 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Sat Aug 2 19:26:28 2008
          State : active
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : ca62ce3c - correct
         Events : 0.21

         Layout : left-asymmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3      34        4        3      active sync

   0     0       3        4        0      active sync   /dev/hda4
   1     1      22        4        1      active sync   /dev/hdc4
   2     2       0        0        2      faulty removed
   3     3      34        4        3      active sync
This is my first stab at re-assembling an array after moving the disks, so I thought I'd double-check how I should go about it. Can I simply do
mdadm --assemble --scan
? Or do I need to reset the superblocks and re-assemble in degraded mode somehow?
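For reference, the explicit command I had in mind would be something along these lines (the member list is my guess based on where the disks sit now, and the --force/--run options would only come into play if the array has to be started degraded):

mdadm --assemble --force --run /dev/md0 /dev/hda4 /dev/hdb4 /dev/hdc4 /dev/hdd4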
TIA
Dan
on 5-20-2009 8:32 AM Daniel Bird spake the following:
> [original post quoted in full; snipped]
Are you sure that all the disks are in the same position that they were in when removed from the Snap Server? I.e., hda on the Snap is still hda on the new controller. If so, you could try the assemble command. Otherwise you will have to do some creative work to get it going. But from the above superblocks, the array is very dirty, and some drives show a different array status, so you might have to force things along. If this data is critical, I would dd the drives onto other hardware and experiment there, or at least dd the drives to image files and try those. If you succeed, you can back up the data and restore it later. If you fail with the original drives, you might render the array unrecoverable.
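Something along these lines would do for the image copies; the destination path is just an example, and conv=noerror,sync keeps dd going past any unreadable sectors:

dd if=/dev/hda of=/mnt/backup/hda.img bs=1M conv=noerror,sync
dd if=/dev/hdb of=/mnt/backup/hdb.img bs=1M conv=noerror,sync
dd if=/dev/hdc of=/mnt/backup/hdc.img bs=1M conv=noerror,sync
dd if=/dev/hdd of=/mnt/backup/hdd.img bs=1M conv=noerror,sync

Then you can losetup each image and point mdadm at the loop devices instead of the real disks.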
> Are you sure that all the disks are in the same position that they were in when removed from the Snap Server?
The drives are in the same positions, i.e. hda is hda and hdb is hdb etc., but hdc and hdd were hde and hdg previously. Does that matter?
> If you fail with the original drives, you might render the array unrecoverable.
dd is a good idea. I'll dd these disks and try with the images.
Cheers,
on 5-20-2009 9:34 AM Daniel Bird spake the following:
>> Are you sure that all the disks are in the same position that they were in when removed from the Snap Server?
> The drives are in the same positions, i.e. hda is hda and hdb is hdb etc., but hdc and hdd were hde and hdg previously. Does that matter?
>> If you fail with the original drives, you might render the array unrecoverable.
> dd is a good idea. I'll dd these disks and try with the images.
If you DD the entire drive, here is a howto I found for mounting the partition you want from the image.
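The gist of it, as I recall: get the partition's start sector from fdisk, then hand that offset (in bytes) to losetup. For example (the 63-sector start below is only an illustration; use whatever fdisk reports for the partition you're after, and the paths are placeholders):

fdisk -lu /path/to/hda.img        # note the Start sector of the partition you want
losetup -o $((63 * 512)) /dev/loop0 /path/to/hda.img
mount -o ro /dev/loop0 /mnt/recovered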
>> dd is a good idea. I'll dd these disks and try with the images.
> If you DD the entire drive, here is a howto I found for mounting the partition you want from the image.
Thanks Scott, that was going to be my next question! ;-)
D
on 5-21-2009 1:18 AM Daniel Bird spake the following:
>>> dd is a good idea. I'll dd these disks and try with the images.
>> If you DD the entire drive, here is a howto I found for mounting the partition you want from the image.
> Thanks Scott, that was going to be my next question! ;-)
> D
You can also just dd the partition you want, if it's just for recovery; that makes it easier to loopback-mount several partitions. I used to do this occasionally when I took on some forensics work at the local PD, long before the certs were even heard of, much less available.
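For example, for one of the xfs partitions that already mounts (the paths here are just placeholders):

dd if=/dev/hda2 of=/mnt/backup/hda2.img bs=1M conv=noerror,sync
mount -o ro,loop /mnt/backup/hda2.img /mnt/hda2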