Previously Alexander Grigoriev said:
> A properly written video capture application can sustain even a 12 MB/s
> uncompressed stream, no matter how the disk is fragmented. By "properly
> written" I mean one which employs separate threads for capture and disk
> write, a sufficiently large memory buffer, and FILE_FLAG_NO_BUFFERING
> mode when the file is opened. I won't say I know any app that meets
> these conditions, but I haven't tried many.
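The capture/write split described above can be sketched generically: one thread produces fixed-size buffers, another drains them to disk through a bounded queue. This is a minimal illustration, not a real capture app; FILE_FLAG_NO_BUFFERING is Windows-specific and is not modeled here, and all names below are made up for the example.

```python
import threading, queue, tempfile, os

CHUNK = 32 * 1024   # bytes per buffer; matches the 32 kB cluster size used later
CHUNKS = 64         # how many buffers the fake capture source emits

def capture(q):
    """Stand-in for the capture thread: emits CHUNKS buffers, then a sentinel."""
    for _ in range(CHUNKS):
        q.put(b"\x00" * CHUNK)
    q.put(None)

def writer(q, path):
    """Writer thread: drains the queue to disk independently of the capture rate."""
    with open(path, "wb") as f:
        while (buf := q.get()) is not None:
            f.write(buf)

q = queue.Queue(maxsize=16)   # bounded in-memory buffer between the two threads
path = os.path.join(tempfile.gettempdir(), "capture_demo.bin")
t1 = threading.Thread(target=capture, args=(q,))
t2 = threading.Thread(target=writer, args=(q, path))
t1.start(); t2.start(); t1.join(); t2.join()
print(os.path.getsize(path) == CHUNK * CHUNKS)   # True: nothing was dropped
```

The bounded queue is the "memory buffer big enough" part: as long as the writer keeps up on average, short disk stalls are absorbed without blocking capture.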
This is definitely untrue. If the disk is heavily fragmented, you pay
the full seek time plus rotational latency for every disk cluster.
As an example, assume 32 kB clusters, an average random seek of 9 ms, and
a rotational latency of 4 ms (7200 rpm). These are realistic values. That
gives you 13 ms for every 32 kB, which equals a sustained rate of about
2.5 MB/sec.
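The arithmetic can be checked directly; the only assumption is the worst case stated above, namely that every cluster costs one full seek plus one rotational latency:

```python
cluster = 32 * 1024      # bytes per cluster
seek = 9e-3              # average random seek time, seconds
latency = 4e-3           # rotational latency at 7200 rpm (half a revolution), seconds

per_cluster = seek + latency       # 13 ms spent per 32 kB cluster
rate = cluster / per_cluster       # sustained bytes per second
print(round(rate / 1e6, 2))        # 2.52 (MB/s), matching the ~2.5 MB/s figure
```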
A little bit might be gained by access reordering, but that is an OS
task; the application cannot do it. Of course this is a pathological
example, but it clearly shows that your claim is wrong in general.
However, in the average case with a decent filesystem layer (I don't
know how well MS is doing, but on a long-used Linux system I see about
2% file fragmentation), you get very little fragmentation unless you
fill the drive to capacity or have unusual access patterns, such as
many sparse files that are updated within their sparse areas in small
blocks.
So yes, there is still room for defragging tools. And some people
may even have desperate need for them. (Actually, a backup-restore
cycle does the same...)
Arno