Hello,
My machine is running software RAID 5 on /dev/hde1, /dev/hdg1, and /dev/hdi1. I
noticed in the log that we have suddenly started getting messages such as:
Oct 14 01:27:16 localhost smartd[4801]: Device: /dev/hde, SMART Prefailure
Attribute: 8 Seek_Time_Performance changed from 247 to 246
Oct 14 01:57:15 localhost smartd[4801]: Device: /dev/hde, SMART Prefailure
Attribute: 8 Seek_Time_Performance changed from 246 to 245
Oct 14 02:27:15 localhost smartd[4801]: Device: /dev/hde, SMART Prefailure
Attribute: 8 Seek_Time_Performance changed from 245 to 246
Oct 14 02:27:15 localhost smartd[4801]: Device: /dev/hde, SMART Usage
Attribute: 195 Hardware_ECC_Recovered changed from 253 to 252
Oct 14 02:57:15 localhost smartd[4801]: Device: /dev/hde, SMART Prefailure
Attribute: 8 Seek_Time_Performance changed from 246 to 245
Oct 14 02:57:15 localhost smartd[4801]: Device: /dev/hde, SMART Usage
Attribute: 195 Hardware_ECC_Recovered changed from 252 to 253
Similar messages appear for /dev/hdi and /dev/hdg as well, but /dev/hde seems
to have the most, and they have been occurring there the longest:
[root@localhost log]# grep SMART messages messages.1 | grep hde | wc
178 3023 25944
[root@localhost log]# grep SMART messages messages.1 | grep hdg | wc
93 1578 13583
[root@localhost log]# grep SMART messages messages.1 | grep hdi | wc
95 1612 13865
This was not happening a week ago; the older log has no SMART messages at all:
[root@pathfinder log]# grep SMART messages.2
Checking /proc/mdstat indicates that all the drives are active, i.e. the RAID
is not running in degraded mode.
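(For reference, that check is nothing fancier than:

  cat /proc/mdstat
  mdadm --detail /dev/md0   # assuming the array is /dev/md0 and mdadm is installed

either of which should list the member disks and their state.)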
So, does this mean that my drives are suddenly failing?
Any comment or suggestion is greatly appreciated.
Thanks.
RDB
--
Reuben D. Budiardja
Department of Physics and Astronomy
The University of Tennessee, Knoxville, TN