[CentOS] Drive failed in 4-drive md RAID 10

Sun Sep 20 09:19:37 UTC 2020
Simon Matter <simon.matter at invoca.ch>

> --On Friday, September 18, 2020 10:53 PM +0200 Simon Matter
> <simon.matter at invoca.ch> wrote:
>
>> mdadm --remove /dev/md127 /dev/sdf1
>>
>> and then the same with --add should hot-remove and re-add the device.
>>
>> If it rebuilds fine it may again work for a long time.
>
> This worked like a charm. When I added it back, it told me it was
> "re-adding" the drive, so it recognized the drive I'd just removed. I
> checked /proc/mdstat and it showed rebuilding. It took about 90 minutes to
> finish and is now running fine.
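
For the archives, the full sequence looks roughly like this (a sketch,
assuming the array and member names from this thread, /dev/md127 and
/dev/sdf1; adjust to your setup):

  # mark the member faulty first, if md hasn't already done so
  mdadm --fail /dev/md127 /dev/sdf1
  # hot-remove the member from the array
  mdadm --remove /dev/md127 /dev/sdf1
  # add it back; mdadm finds the old superblock and re-adds it
  mdadm --add /dev/md127 /dev/sdf1
  # watch the rebuild progress
  cat /proc/mdstat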

I think it usually works like this:
When a drive has a bad sector, the sector is read from the other RAID
disk, but the failing disk is marked faulty. Then, during the rebuild,
the bad sector gets written again and the drive's firmware remaps it to
a spare sector. As a result, all is well again. Note that the firmware
can handle such cases differently depending on the drive model.
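
To verify that the remap actually happened, the SMART counters are
worth a look, something like this (assuming smartmontools is installed
and the disk is /dev/sdf as above):

  # Reallocated_Sector_Ct should increase once a sector is remapped,
  # and Current_Pending_Sector should drop back to 0
  smartctl -A /dev/sdf | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'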

Regards,
Simon
