On Fri, 2008-08-22 at 09:33 -0700, nate wrote:
> William L. Maltby wrote:
>> ?? Uncertain about "spares has been exhausted".

> I don't recall where I read it, and I suppose it may be misinformation, but it made sense at the time. The idea is that disks are not made to hold EXACTLY the number of blocks the specs call for. There are some extra blocks that the disk "hides" from the disk controller. The disk automatically re-maps failing sectors to these hidden blocks (making them visible again). By
That is correct. Back in the old days, we had access to a "spares" cylinder and could manually maintain the alternate sectors table. We could wipe it, add sectors, etc.

As technology progressed, that capability disappeared and the drive electronics and PROMs began taking care of it.
What I don't know (extreme lack of sufficient interest to find out, so far) is whether the self-monitoring tools report a sector when a *read* results in either a hard or soft failure, and whether the drive tries to reassign at that time. My local evidence suggests that the report is made at read time but that a spare is not assigned then, because the same three sectors kept reporting over and over.
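FWIW, smartmontools can show that distinction directly: a sector that fails on read is counted as "pending", and it only moves to the reallocated count once the drive gets a chance to rewrite it. A quick check (assuming the drive is /dev/sda and smartmontools is installed) would be something like:

    # Pending sectors are flagged when a read fails; reallocation
    # normally happens on the next write to the affected sector.
    smartctl -A /dev/sda | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector'

If the pending count sits at a non-zero value while the reallocated count never moves, that matches what I was seeing: reported at read time, reassigned later.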
After running the repair software, the messages stopped, indicating that the bad sectors had been marked unusable and alternate sectors assigned.
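The "repair" doesn't have to be anything exotic, by the way. A non-destructive read-write pass with badblocks rewrites every block in place, which is what gives the drive its opening to reallocate. A sketch, assuming the suspect partition is /dev/sda3 (made-up name) and it's unmounted:

    # -n: non-destructive read-write test (reads each block, writes
    #     test patterns, then restores the original data)
    # -s: show progress, -v: verbose
    badblocks -nsv /dev/sda3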
> the time bad blocks start showing up at the OS level, these extra blocks are already full, an indication that there are far more bad blocks on the disk than just the ones you can see at the OS level.
Correct.
Now, I don't know (or care) if an alternate sector was assigned, just that the sector was flagged unusable. For my use (temporary - no permanent or critical data) this is fine. The last several mke2fs runs have produced the same number of usable blocks and i-nodes, so I see no evidence that the spares have run out.
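For anyone who wants more than matching block counts: dumpe2fs can print the bad-block list that mke2fs/e2fsck recorded in the file system, which tells you whether the file system is working around sectors the drive never remapped. Assuming the same made-up /dev/sda3:

    # Print only the blocks recorded in the file system's bad block
    # inode; empty output means none were recorded at mkfs/fsck time.
    dumpe2fs -b /dev/sda3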
> Note that mke2fs doesn't write over the entire disk; I doubt it even scans the entire disk.
Correct, unless the check is forced. I failed to note in my previous post that a *substantial* portion of the partition had been written (which I knew included the questionable sectors, based on manual math and the nature of file system usage).
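It's easy to see just how little mke2fs writes. A sketch using a sparse image file instead of a real disk (the path is made up):

    # Create a 1 GB sparse file: seek skips ahead without writing,
    # so no blocks are allocated yet.
    dd if=/dev/zero of=/tmp/sparse.img bs=1M count=0 seek=1024

    # Make an ext2 file system on it (-F forces mke2fs to work on
    # a regular file).
    mke2fs -F /tmp/sparse.img

    # Compare nominal size with blocks actually written:
    ls -lh /tmp/sparse.img   # 1 GB apparent size
    du -h /tmp/sparse.img    # only what mke2fs wrote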
> I've used a technology called thin provisioning, where only data that is written to disk is actually allocated on disk (e.g. you can create a 1TB volume; if you only write 1GB to it, it only uses 1GB, allowing you to oversubscribe the system and dynamically grow physical storage as needed). When allocating thinly provisioned volumes and formatting them with mke2fs, even on multi-hundred-gig volumes only a few megs are written to disk (perhaps a hundred megs).
Yep. Only a few copies of the superblock and the i-node tables are written by the file system make process. That's why it's important for file systems in critical applications to be created with the check forced. Folks should also keep in mind that the default check, read-only, is really not sufficient for critical situations. The full write/read check should be forced on *new* partitions/disks.
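Concretely (same made-up partition; a write test is only safe on a *new* or otherwise sacrificial file system):

    # Read-only bad block scan before creating the file system:
    mke2fs -c /dev/sda3

    # Give -c twice for the slower read-write surface test; this is
    # the one that actually forces writes across the partition:
    mke2fs -c -c /dev/sda3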
> nate
<snip sig stuff>