A user's system had a hard drive failure over the weekend. Linux RAID 6. I identified the drive and brought the system down (8 drives, and I didn't know the s/n of the bad one, so of course it turned out to be sitting somewhere in the box other than where I started looking...). Brought it back up, and the RAID wasn't working. I finally found that I had to do an mdadm --stop /dev/md0, then I could do an assemble, then I could add the new drive.
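For the record, the sequence that finally got it going was along these lines (the member list and the new drive's name are placeholders; use whatever your box actually calls them):

  mdadm --stop /dev/md0
  mdadm --assemble /dev/md0 /dev/sd[c-h]1
  mdadm --add /dev/md0 /dev/sdX1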
But: it's now

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sdg1[8](S) sdh1[7] sdf1[4] sde1[3] sdd1[2] sdc1[1]
      23441313792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/5] [_UUUU_U]
      bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
And I can't mount it (it's xfs, btw). *Should* I make it read-write, or is there something else I should do first?
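(If the answer is just "flip it out of auto-read-only", I'm assuming that would be

  mdadm --readwrite /dev/md0

but I'd like a sanity check before I poke it.)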
mark
On 1/22/19 2:26 PM, mark wrote:
> A user's system had a hard drive failure over the weekend. Linux RAID 6... But: it's now
>
> Personalities : [raid6] [raid5] [raid4]
> md0 : active (auto-read-only) raid5 sdg1[8](S) sdh1[7] sdf1[4] sde1[3] sdd1[2] sdc1[1]
>       23441313792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/5] [_UUUU_U]
That doesn't look like RAID 6. That looks like RAID 5 with a hot spare. There appear to be two drives missing. If you can locate one of those two drives and add it to the array, you should be able to start it and mount the filesystem. Otherwise you'll have to restore backups.
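Roughly, and with the device names as placeholders for whatever is actually in that box, I'd check each candidate's superblock and then try a forced assemble with the missing member included:

  mdadm --examine /dev/sdX1        (look at the "Device Role" and "Events" lines)
  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/sd[c-h]1 /dev/sdX1

If the event counts are close, md should start it degraded and you can mount the filesystem (read-only first if you're cautious).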
The drive that failed this weekend may work well enough to be added to the array temporarily if you need to refresh your backups.
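If you go that route, it's worth checking how stale that drive's superblock is before trusting it, e.g. (sdOLD1 being whatever the pulled drive shows up as when you reconnect it):

  mdadm --examine /dev/sdOLD1

and comparing its "Events" count against the other members. Even if it's behind, a forced assemble may well take it; I'd mount read-only and refresh the backups before letting it do anything more.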