In message <[email protected]>, Rick wrote:
> Fragmentation causes a performance impact, not a reliability impact
> (although recovering a corrupt drive is substantially harder if it's
> fragmented)
There are three aspects to how fragmentation and defragging bear on
reliability.
1) Recoverability
As DevilsPGD says, the less fragmented the file system, the better the
data recovery will be if you are forced to assume the linkage of
cluster chains/runs from a first-cluster starting point.
On a FATxx volume, losing both copies of the FAT will force you to
work from an assumption of unfragmented runs. You can deduce
break-points, but it's tedious guesswork at best. You get the best
results when the cluster size is larger than the file size ;-)
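To make that concrete, here's a rough Python sketch of what "assume
unfragmented runs" amounts to; the cluster size, data-area offset and
function name are invented for illustration, not taken from any real
recovery tool:

# Assumed layout values, for illustration only
CLUSTER_SIZE = 4096          # bytes per cluster (assumed)
DATA_AREA_OFFSET = 0x100000  # byte offset of cluster #2 in the raw image (assumed)

def recover_contiguous(image_path, start_cluster, file_size, out_path):
    """Copy file_size bytes starting at start_cluster, assuming the
    cluster chain is one unbroken run (no FAT left to follow)."""
    clusters_needed = -(-file_size // CLUSTER_SIZE)   # ceiling division
    with open(image_path, "rb") as img, open(out_path, "wb") as out:
        # FAT cluster numbering starts at 2, hence the -2 adjustment
        img.seek(DATA_AREA_OFFSET + (start_cluster - 2) * CLUSTER_SIZE)
        out.write(img.read(clusters_needed * CLUSTER_SIZE)[:file_size])

# If the file really was fragmented, everything past the first
# break-point comes out as someone else's data -- hence the guesswork.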
On an NTFS volume, there's no table of cluster addresses; instead, a
series of start and length entries define the cluster runs
(unfragmented chain segments) that comprise the file's data stream.
I don't know NTFS well enough to speculate on the likelihood of
knowing the start cluster for a file from the directory entry yet
losing the run-segment info, but another unwanted side-effect of
fragmentation suggests itself: the growing bulk of space needed to
hold all those extra (start, length) entries.
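As a toy illustration of that bookkeeping (the numbers are invented,
and real NTFS packs these pairs as variable-length data runs inside
the file's data attribute, so treat this purely as a sketch):

# One run describes the whole (unfragmented) 2,500-cluster stream...
unfragmented = [(10_000, 2_500)]
# ...while the same stream split into four fragments needs four entries
fragmented = [(10_000, 400), (31_072, 900), (52_208, 700), (80_511, 500)]

def stream_clusters(run_list):
    """Expand (start, length) runs into the ordered cluster numbers."""
    return [c for start, length in run_list
              for c in range(start, start + length)]

assert len(stream_clusters(unfragmented)) == len(stream_clusters(fragmented)) == 2_500
# Every extra fragment costs another (start, length) entry, and losing
# the run list loses the whole map of where the file lives.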
2) Corruption risk of fragmentation
The longer it takes to update the file system, the greater the risk of
corruption from events that might interrupt the process. Think of
this as "size on the dartboard of time", with crashes etc. as the
darts that get thrown at the board.
The worst-case scenario is a slowly-growing directory containing lots
of items that is frequently updated (and thus perhaps "always in use",
so it is never defragged). This would be like the whole of the "20"
on the dartboard, and it's quite easy to hit a "20".
Now if that long fragmented chain were defragged so that it could be
operated on more quickly, it would be like the "double-20" on the
dartboard; a far smaller target for the arrows (OK, darts) of chance.
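If you want the dartboard in numbers, here's a back-of-envelope
sketch; the crash rate and update times are made-up figures, chosen
only to show how the odds scale with the size of the target:

import math

CRASH_RATE_PER_HOUR = 1 / 720     # assume one random crash per month of uptime

def p_interrupted(update_seconds):
    """Chance a crash lands inside one update window (Poisson-ish arrivals)."""
    return 1 - math.exp(-CRASH_RATE_PER_HOUR * update_seconds / 3600)

slow = p_interrupted(2.0)   # crawling along a long fragmented directory chain
fast = p_interrupted(0.2)   # the same update after a defrag
print(f"per-update risk: {slow:.1e} vs {fast:.1e}")   # roughly a 10x smaller target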
3) Corruption risk of DEfragmentation
The defrag process is inherently risky, as it involves reading
potentially "everything" off the HD into RAM and then writing it back
somewhere else. That's a lot of disk-heating activity, and if RAM
throws an error once in every 100 000 operations, that's a lot of
corrupted file contents and clusters written back to the wrong places
on the disk. UGLY.
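Back-of-envelope, using that error rate and an invented volume size
(both assumptions, just to show the scale of the exposure):

VOLUME_BYTES = 200 * 1024**3      # assume a 200 GB volume gets fully shuffled
CLUSTER_BYTES = 4096
ERROR_RATE = 1 / 100_000          # assume one corrupted cluster per 100,000 moved

clusters_moved = VOLUME_BYTES // CLUSTER_BYTES
expected_bad = clusters_moved * ERROR_RATE
print(f"{clusters_moved:,} clusters moved -> ~{expected_bad:,.0f} silently corrupted")
# ~52 million clusters moved -> ~524 bad writes, each landing wherever
# the defragger decided that cluster now belongs.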
I see (3) as a far more significant risk than (2), and thus I see
defragging as like an hour-long workout in the gym; a great way to
make a healthy system fitter, but potentially lethal for the infirm.
So not only is defragging close to irrelevant in terms of reliability,
it is actively contra-indicated in unreliable systems.
Defrag is NOT a trouble-shooting tool !!
-------------------- ----- ---- --- -- - - - -
Hmmm... what was the *other* idea?