jack smith
AIUI ... a bad sector on an IDE hard drive gets labelled as unusable
in the drive's map and a substitute sector is found. The user
wouldn't know about it because this happens transparently.
If there are LOTS of bad sectors, couldn't we have a situation
where the drive's performance is poor but running drive diagnostics
gives no indication that anything is wrong?
I have some hard drives which are much slower than similar ones.
Could a very large number of mapped out bad sectors be a *likely*
explanation for this?
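One way to test this hypothesis directly (assuming smartmontools is available) is to read SMART attribute 5, Reallocated_Sector_Ct, which counts the sectors the drive has remapped. A minimal sketch, parsing output in the style of `smartctl -A` (the sample text and its values below are made up for illustration):

```python
# Sketch: pull the reallocated-sector count out of `smartctl -A` output.
# SAMPLE imitates smartmontools' attribute table; values are hypothetical.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   095   095   036    Pre-fail  Always       -       217
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""

def reallocated_count(smart_output: str) -> int:
    """Return the raw value of SMART attribute 5 (reallocated sectors)."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "5":
            return int(fields[-1])
    return 0  # attribute not reported

print(reallocated_count(SAMPLE))
```

On a real system you would feed this the output of `smartctl -A /dev/sda`. A raw value in the hundreds or thousands on the slow drives, but near zero on the fast ones, would support the remapping explanation; a zero everywhere would rule it out.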
BACKGROUND: The difference is most easily observable when I do an
online defrag of NTFS's own files (such as $MFT). The defragger
checks for and locks all data files before performing its defrag.
The speed at which it does this varies enormously between drives.
The difference seems to be an order of magnitude larger than any
differences that might be due to model, firmware level, type of
data, file system, etc.