[CentOS] odd mdadm behavior

Ljubomir Ljubojevic office at plnet.rs
Tue Dec 20 01:30:43 UTC 2011


On 12/19/2011 10:58 PM, Paul Heinlein wrote:
> On Mon, 19 Dec 2011, aurfalien at gmail.com wrote:
>
>>> I'm interested to know if you used mdadm to fail and remove the bad
>>> disk from the array when it first started acting up.
>>
>> No, I should have but left it alone.
>>
>> I know, my bad.
>
> I was merely interested.
>
> Recently I had a RAID-1 device get marked as bad, but I couldn't see
> any SMART errors. So I failed, removed, and then re-added the device.
> It worked for about a week, then it failed again, but this time the
> SMART errors were obvious.
>
> I'd ordered a new drive at the first failure, so I was ready when it
> failed the second time.
>
> I guess the point is that I've seen "bad" drives go "good" again, at
> least for short periods of time.
>

Les, I will reply to your mail here.

This would better explain my troubles. I've had to manually re-add the
drives a few times, but I had little experience and thought these things
just happen, so I never gave it much thought.
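For the record, the clean way to handle it (what Paul describes above)
is to fail, remove and re-add the member explicitly instead of letting
it limp along. A rough sketch, assuming the array is /dev/md0 and the
flaky member is /dev/sdb1 (those names are just examples for my setup):

  mdadm /dev/md0 --fail /dev/sdb1      # mark the member as faulty
  mdadm /dev/md0 --remove /dev/sdb1    # pull it out of the array
  # ... test or replace the disk, then put it (or its replacement) back:
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat                     # watch the resync progress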

I am still not 100% sure what actually happened, but my guess is that
since the boot partition was active only on one drive, and that drive
was the one causing problems, I took the path of least resistance and
just patched it up. I am going to use the CentOS 6 ability to boot from
a full RAID partition, and maybe even add an IDE DOM module for the
boot partition.
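For the boot-from-RAID part, my understanding (not yet tested on this
box) is to keep /boot on a RAID-1 md device with the superblock at the
end (0.90/1.0 metadata), so legacy grub still sees a plain filesystem,
and then install the bootloader on both members so either drive can
bring the system up. A sketch, assuming /dev/sda and /dev/sdb are the
two mirror members:

  grub
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit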

I am now in good shape and watching it like a hawk.


-- 

Ljubomir Ljubojevic
(Love is in the Air)
PL Computers
Serbia, Europe

Google is the Mother, Google is the Father, and traceroute is your
trusty Spiderman...
StarOS, Mikrotik and CentOS/RHEL/Linux consultant


