Ben Bradley
I'll ask the question I can get an answer for first, and describe
the background/rant/how it happened further down.
I have WD400 and WD800 EIDE drives that scandisk has detected
hundreds of bad clusters on. Watching its operation (W98, running
scandisk in DOS mode), if it is unable to read a cluster within about
ten seconds, it marks it bad, which is fine. Unfortunately, there are
many clusters adjacent to the bad ones that scandisk passes as good,
even though each may take 1 to 5 seconds to read. When scandisk reads
through a good/unaffected area of the disk, it reads about 100
clusters per second (about 10 ms per cluster). I would like these
"adjacent blocks" that take substantially more than 10 ms to read
to also be marked bad. Is there a utility that will do this?
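I don't know of a scandisk option for this, but the idea can be sketched as a separate scan that times each cluster-sized read and flags the slow ones. This is a hedged sketch for a modern OS with raw device access; the device path, the 32 KB cluster size, and the 100 ms threshold are all assumptions, not anything scandisk itself exposes:

```python
import time

CLUSTER_SIZE = 32 * 1024   # assumed FAT cluster size; adjust to the real filesystem
THRESHOLD = 0.100          # flag reads slower than 100 ms (10x the ~10 ms healthy rate)

def find_slow_clusters(device_path, n_clusters):
    """Read each cluster once; return indices of clusters that read slowly."""
    slow = []
    with open(device_path, "rb", buffering=0) as dev:   # unbuffered, so timings are real
        for i in range(n_clusters):
            start = time.perf_counter()
            data = dev.read(CLUSTER_SIZE)
            elapsed = time.perf_counter() - start
            if not data:        # ran off the end of the device/file
                break
            if elapsed > THRESHOLD:
                slow.append(i)
    return slow
```

Running this against a raw device (e.g. a hypothetical `/dev/hdb` or `\\.\PhysicalDrive1`) needs administrator privileges, and the resulting indices would still have to be fed into the filesystem's bad-cluster list by some other tool.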
My reason for wanting to do this is so I can use the machine for
recording LPs and 78s and burning the recordings to CD-R. These are
basically real-time tasks (though of course heavily buffered on a
non-RT desktop OS), and I don't want a recording to fail because a
block on the hard disk takes several seconds to read or write.
So here's how it happened/the rant:
I have a Dell XPS T400 (Pentium II, 400MHz) system I bought cheap
without drives and added two older drives I had from upgrading other
machines. They are WD400 and WD800 EIDE drives about three to four
years old, with no bad areas (at least as seen by Windows 98's
scandisk). I've had this system up on a bench for a few months and the
drives have been doing fine according to the occasional scandisk run.
A couple of weeks ago I relocated the monitor (an IBM VGA, with the
slanted IBM PS/2 logo) next to the right side of the computer, cases
separated by about an inch. After moving the monitor, the computer
was on all the time and the monitor was on for most of this time.
Several days after moving the monitor I ran scandisk on one drive and
was surprised to see a LOT of bad blocks, and I ran it on the other
drive and saw many bad blocks on it as well.
I at first thought an electrical spike might have caused this, but
then I observed that it corresponded to moving the monitor next to the
machine. I moved the monitor away and ran scandisk on each drive a
couple more times, and a few more marginal "adjacent blocks" were
detected as bad each time, but these are a small number (maybe 10-20)
compared to the first time I saw the problem (hundreds). So as I see
it this was clearly caused by the proximity of the monitor (perhaps
the horizontal and/or vertical deflection yokes continuously
generating large alternating magnetic fields, or maybe the degaussing
coil when the monitor is turned on).
After some googling for disk utility programs, I downloaded and ran
diagnostic programs from Western Digital and Maxtor. Both programs run
fine on my other machine (a Dell XPS T450, W98), where both drives (an
IBM 28 gig, the original drive from 1999! and a new WD 120 gig) show
as perfect in all tests. On the machine in question they can read the
SMART data, but of course they find enough errors on a full disk scan
to say "This drive is failing."
I had one utility do a "write zeros" to Drive D: (the 80 gig), did
an FDISK to reestablish it as a DOS drive, then did a DOS/Windows
Format to clean it off and put a filesystem on it. Running scandisk on
this just rediscovers all the bad blocks it had found before (the B's
show up in the same places as before on this drive). Currently
scandisk shows 670 clusters found bad out of 427,000 examined so far,
of 2,441,533 total.
I've read about 'spare tracks' (drives internally have
substantially more storage than advertised, and use this space to
invisibly replace failing/marginal tracks), but I don't know of a way
to tell how many of these are actually being used in a 'perfect'
drive. I presume there are utilities to show total spare tracks and
how many are in use - what program does that? (Not that it would help
me here; this is just general interest.) Apparently the 'spare
tracks' on these two drives are all used up.
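On the general-interest question: modern drives report reallocation through SMART attributes rather than a spare-pool total - smartmontools' `smartctl -A` shows attribute 5 (Reallocated_Sector_Ct) and 196 (Reallocated_Event_Count), whose raw values count spares already consumed. As a hedged sketch, pulling those counts out of that output could look like this (the SAMPLE text mimics `smartctl -A` formatting, and its raw values are invented for illustration):

```python
# Invented sample in the shape of `smartctl -A` attribute output.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  12
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   12
"""

def reallocated_counts(smart_text):
    """Return {attribute_name: raw_value} for the reallocation attributes."""
    counts = {}
    for line in smart_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] in ("5", "196"):
            counts[fields[1]] = int(fields[-1])   # last column is the raw count
    return counts

print(reallocated_counts(SAMPLE))
# → {'Reallocated_Sector_Ct': 12, 'Reallocated_Event_Count': 12}
```

Note the drive still doesn't advertise the *total* spare pool, only how many spares have been used; exhaustion shows up indirectly when attribute 5's normalized VALUE falls toward its THRESH.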
These errors on the disk are apparently 'soft' in that they are
caused by bad data written to the disk, and not by bad media itself,
so if these areas could be rewritten, these spots could be fixed. I
looked up low-level formatting in hopes of doing that, and it's clear
that you can't low-level format modern drives.
And so, these drives were made pretty much FUBAR just by putting a
running CRT monitor next to the PC case for a few days. Is it common
knowledge that this can happen? I'm really surprised I haven't heard
about it. I didn't even think of possible drive damage when I put the
monitor there. Hard disk drives sit next to rotating fans and
switching power supplies all the time, and those things generate
magnetic fields, yet the platters are supposed to have high coercivity
and be difficult to erase. Perhaps long-term those fields DO cause
errors in hard disk drives, and no one has noticed or tied the errors
to adjacent devices creating magnetic fields.
I can already hear the responses "go buy a new computer, they're
cheap enough..."