Sorin Srbu wrote: <snip>
Anyway, I get a bad block message when running fsck, and am not sure whether this is an interface problem between the chair and the monitor or something with the tech preview.
<snip>
Having just lived through this issue, I recommend you run the extended (long) SMART test on all your drives and check the reports. The relevant package to install is smartmontools. It's worth investing a little time in setting up the package. I ended up with this incantation in /etc/smartd.conf:
/dev/hda -T normal -p -a -o on -S on -s (S/../.././02|L/../../6/03) -m root@localhost
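For reference, here's how I read that directive (based on the smartd.conf man page; double-check against the version you have installed):

# -T normal    stop monitoring the device if a mandatory SMART command fails
# -p           report changes in prefailure attribute values
# -a           monitor everything: health status, failures, error and self-test logs
# -o on        enable automatic offline data collection (runs every four hours)
# -S on        enable autosave of vendor-specific attributes
# -s (S/../.././02|L/../../6/03)
#              schedule a short self-test daily at 02:00 and a long test every
#              Saturday at 03:00 (fields are T/MM/DD/d/HH; d is the day of
#              week, 1=Monday .. 7=Sunday)
# -m root@localhost    mail warnings to root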
To execute the extended tests (doesn't mess with your data): # smartctl --test=long /dev/hda
To view the test results about 80 minutes later: # smartctl --log=selftest /dev/hda
and so on.
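If you just want the firmware's verdict and the raw numbers, two more stock smartctl flags are worth knowing:

# smartctl --health /dev/hda        (overall PASSED/FAILED verdict; same as -H)
# smartctl --attributes /dev/hda    (full vendor attribute table; same as -A)

Keep in mind the health verdict only flips to FAILED when the drive itself believes failure is imminent, so the attribute table is usually the more informative of the two.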
On 1/28/2011 10:02 AM, cpolish@surewest.net wrote:
<snip>
Are there guidelines about what reasonable results look like, or will the 'SMART Health Status' tell you enough after the tests run?
Les Mikesell wrote:
Are there guidelines about what reasonable results look like, or will the 'SMART Health Status' tell you enough after the tests run?
In a recent study [1] of a large population of hard drives, these assertions stood out:
[A]fter their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors.
Drives with one or more reallocations do fail more often than those with none. The average impact on AFR [annualized failure rate] appears to be between a factor of 3-6x.
After their first reallocation, drives are over 14 times more likely to fail within 60 days than drives without reallocation counts, making the critical threshold for this parameter also one.
After the first offline reallocation, drives have over 21 times higher chances of failure within 60 days than drives without offline reallocations...
The critical threshold for probational counts is also one: after the first event, drives are 16 times more likely to fail within 60 days than drives with zero probational counts.
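To tie that back to smartctl: my rough mapping of the paper's counters onto the usual SMART attribute IDs (vendors differ, so treat this as approximate):

# smartctl -A /dev/hda
#   ID   5  Reallocated_Sector_Ct   ~ the paper's "reallocations"
#   ID 197  Current_Pending_Sector  ~ the paper's "probational counts"
#   ID 198  Offline_Uncorrectable   ~ related to the paper's "offline" events

Note that the -a in the smartd.conf line quoted above already implies -C 197 -U 198, so smartd will warn you as soon as either of those counters goes nonzero. Going by the study's thresholds, a nonzero raw value on any of these is reason to start planning a replacement.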
[1] Eduardo Pinheiro, Wolf-Dietrich Weber, and Luiz A. Barroso (Google Inc.), "Failure Trends in a Large Disk Drive Population," 5th USENIX Conference on File and Storage Technologies (FAST '07), 2007.
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of cpolish@surewest.net
Sent: Friday, January 28, 2011 5:02 PM
To: CentOS mailing list
Subject: Re: [CentOS] Ext4 on CentOS 5.5 x64
<snip>
Good info, thanks!