Andrew @ ATM Logic spake the following on 7/6/2007 4:39 AM:
Hi, Can you explain what you have tried till now? All I can say is that "man mdadm" should be sufficient.
-- Regards, Sudev Barar
These are a few of the commands I have run so far...
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5] [raid6]
md1 : active raid1 hdd1[2] hdc1[1] hda1[0]
      104320 blocks [3/3] [UUU]
(Just so you know... that's missing the "needed" md2)
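If the superblocks survived, mdadm can report what arrays the disks think they belong to (a diagnostic suggestion only, assuming a reasonably current mdadm):

mdadm --examine --scan

That should print one ARRAY line per array it finds; if md2's members show up there, the on-disk metadata is at least intact.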
lvm pvscan
- No matching physical volumes found
lvm lvscan
- No volume groups found
lvm vgscan
- Reading all physical volumes. This may take a while....
No volume groups found
lvm vgchange -ay
- No volume groups found
lvm vgdisplay
- No volume groups found
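Worth noting: if the LVM physical volume sits on top of md2 (which a stock CentOS layout would suggest), none of these lvm scans can find anything until md2 is actually assembled and running. Once the array is up, the usual sequence would be:

lvm pvscan
lvm vgchange -ay
lvm lvscan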
fdisk -l /dev/hda
Disk /dev/hda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   fd  Linux raid autodetect
/dev/hda2              14       19457   156183930   fd  Linux raid autodetect
fdisk -l /dev/hdc
Disk /dev/hdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1   *           1          13      104391   fd  Linux raid autodetect
/dev/hdc2              14       19457   156183930   fd  Linux raid autodetect
fdisk -l /dev/hdd
Disk /dev/hdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdd1   *           1          13      104391   fd  Linux raid autodetect
/dev/hdd2              14       19457   156183930   fd  Linux raid autodetect
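Since all three partition tables look intact, the next thing worth checking is the RAID superblock on each member partition (again assuming a current mdadm):

mdadm --examine /dev/hda2
mdadm --examine /dev/hdc2
mdadm --examine /dev/hdd2

Each should report the array UUID, RAID level, device count, and an event counter; a member whose Events value lags behind the others is the one the kernel will refuse to use.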
raidstart /dev/md1
- No errors, just returns to the command prompt
raidstart /dev/md2
- No errors, just returns to the command prompt
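A side note: raidstart is effectively deprecated in favor of mdadm and tends to fail silently when it can't do anything useful. The rough mdadm equivalent, assuming /etc/mdadm.conf describes the arrays, would be:

mdadm --assemble --scan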
So... then I tried to start fixing things...
mdadm --assemble -m 2 /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
I get:
mdadm: Bad super-minor number: /dev/md2
mdadm --assemble --run -m 2 /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
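"Invalid argument" from RUN_ARRAY means the kernel refused to start the assembled set, and the real reason usually ends up in the kernel log, so it's worth checking:

dmesg | tail -20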
Then... Looking at
cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5] [raid6]
md1 : active raid1 hdd1[2] hdc1[1] hda1[0]
      104320 blocks [3/3] [UUU]
md2 : inactive hdc2[1] hdd2[2]
      312367616 blocks

unused devices: <none>
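Notice md2 came up inactive with only hdc2 and hdd2 -- hda2 apparently got dropped. Comparing the event counters on the three members should show whether hda2 is merely stale (a diagnostic suggestion, assuming the 0.90 superblock format a 2007-era install would use):

mdadm --examine /dev/hda2 /dev/hdc2 /dev/hdd2 | grep -E 'Update Time|Events'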
Then... Trying to get a little more pushy...
mdadm --stop /dev/md2
mdadm --verbose --assemble --run -m 2 /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2
mdadm: looking for devices for /dev/md2
mdadm: /dev/hda2 is identified as a member of /dev/md2, slot 0
mdadm: /dev/hdc2 is identified as a member of /dev/md2, slot 1
mdadm: /dev/hdd2 is identified as a member of /dev/md2, slot 2
mdadm: added /dev/hda2 to /dev/md2 as 0
mdadm: added /dev/hdd2 to /dev/md2 as 2
mdadm: added /dev/hdc2 to /dev/md2 as 1
mdadm: failed to RUN_ARRAY /dev/md2: Invalid argument
Ok... So... that is where I quit... Any idea what kind of hope I should be holding out for?
I recently had luck with the following on an array that wouldn't start after the system was unplugged:

mdadm --assemble --run --force --update=summaries /dev/"raid array" /dev/"drive1" /dev/"drive2" /dev/"drive3"
Remember to replace the quoted parts with your actual entries.
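In this thread's case that would presumably be (double-check the device names first; note that --force can hide a genuinely failed disk, so make sure the hardware is healthy before using it):

mdadm --assemble --run --force --update=summaries /dev/md2 /dev/hda2 /dev/hdc2 /dev/hdd2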