On Tue, Sep 2, 2014 at 1:33 PM, m.roth@5-cent.us wrote:
I haven't used raw devices as members, so I'm not sure I understand the scenario. However, I thought that devices over 2TB would not auto-assemble, so you would have to manually add an ARRAY entry for /dev/md4 in /etc/mdadm.conf listing /dev/sdd and /dev/sdc for the system to recognize it at boot.
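For what it's worth, a minimal /etc/mdadm.conf entry might look something like the fragment below. The UUID here is a placeholder; `mdadm --detail --scan` will print the correct ARRAY line for a running array, which you can append to the file:

```
# /etc/mdadm.conf -- sketch only; substitute the real array UUID,
# e.g. from the output of: mdadm --detail --scan
DEVICE /dev/sdc /dev/sdd
ARRAY /dev/md4 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```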
Yeah. That was one thing I discovered. Silly me, assuming that mdadm would create an entry in /etc/mdadm.conf. And this is not something I do more than once or twice a year, and I haven't this year (we have a good number of Dells with a PERC 7, or then there's the JetStors....).
With devices < 2TB and MBRs, you don't need /etc/mdadm.conf - the kernel just figures it all out at boot time, regardless of disk location or detection order. I have sometimes set up single partitions as 'broken' RAIDs just to get that autodetect/mount effect on boxes where the disks are moved around a lot, because it worked long before distros started mounting by unique labels or UUIDs. And I miss it on big drives.
But sdd _should_ have the correct data - it just isn't being detected as a RAID member. I think with smaller devices - or at least devices with smaller partitions marked type FD (Linux raid autodetect) in the MBR - it would have worked automatically with the kernel autodetect.
Both had a GPT on them, just no partitions. And that's the thing that really puzzles me - why mdadm couldn't find the RAID info on /dev/sdd, which *had* been just fine.
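One way to see whether the md superblock on sdd survived at all is to ask mdadm directly (a sketch using the device names from this thread; run it read-only, it makes no changes):

```
# Look for an md superblock on the raw device (whole-disk member case);
# a healthy member prints metadata version, array UUID, and device role.
mdadm --examine /dev/sdd

# If the member was partition-based rather than whole-disk, check there too:
mdadm --examine /dev/sdd1
```

If --examine reports "No md superblock detected", the metadata itself is gone or overwritten, which would explain why assembly failed.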
Anyway, the upshot was that my manager was rather annoyed - I *should* have pulled sdc, put in a new one, and just let it go. I still think it would have failed, given mdadm's inability to find the info on sdd. We wound up just remaking the RAID and rebuilding the mirror over the weekend.
I think either adding the ARRAY entry in /etc/mdadm.conf and rebooting, or some invocation of mdadm, could have revived /dev/md4 with /dev/sdd (and the contents you wanted) active.
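Assuming the superblock on sdd was still intact, the invocation might have been something along these lines (a sketch, not tested against this particular failure; --force accepts a stale event count and can do harm on the wrong device, so start read-only):

```
# Try a normal assemble first, naming the surviving member explicitly:
mdadm --assemble /dev/md4 /dev/sdd

# If mdadm refuses (e.g. event-count mismatch), --force tells it to use
# the freshest superblock it finds; --readonly avoids any writes while
# you verify the contents:
mdadm --assemble --force --readonly /dev/md4 /dev/sdd
```

Once the degraded array is up and the data checks out, the replacement disk can be added back with `mdadm --manage /dev/md4 --add` and the mirror rebuilt.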