Arne Ludwig
I am writing large files onto an empty NTFS partition under Windows XP
SP1, and I see results in the Computer Management/Disk Defragmenter
display that seem a bit strange. The file is written using
CreateFile/WriteFile: 2GB on a 40GB disk with one primary partition,
produced by 1000 WriteFile calls of 2MB each. This is the only
activity on this disk, except for the presence of a contiguous swap
file right at the beginning of the disk.
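
For reference, here is a stripped-down version of the write loop (the
path is a placeholder and error handling is trimmed to the essentials):

#include <windows.h>
#include <stdio.h>

#define CHUNK (2 * 1024 * 1024)   /* 2MB per WriteFile call   */
#define CALLS 1000                /* 1000 calls -> ~2GB total */

int main(void)
{
    /* Placeholder path; the real file lives on the empty partition. */
    HANDLE h = CreateFileA("D:\\bigfile.dat", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    static char buf[CHUNK];       /* static: too big for the stack */
    DWORD written;
    for (int i = 0; i < CALLS; i++) {
        if (!WriteFile(h, buf, CHUNK, &written, NULL) ||
            written != CHUNK) {
            fprintf(stderr, "WriteFile failed at call %d\n", i);
            break;
        }
    }
    CloseHandle(h);
    return 0;
}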
Now what I end up with in the graphical display of dfrgntfs is most
often a file with 11 fragments scattered literally all over the 40GB
disk; visually, 4 separable chunks, followed by free space, then 2
more chunks, then a big free space, and then one big chunk at about
the 75% mark of the disk. (All chunks red, the swap file green, one
green line after the first red chunk; all readings from left to
right.)
Next I defragmented the disk, leaving me with one big blue chunk at
about the 25% mark of the disk. The green line is gone.
I deleted that file and wrote it again using the same method as
above. Result: one file with 9 fragments: four on the left as before,
one big chunk where the blue chunk was, a thin red line at the 75%
mark, and the green line after the first red chunk as before.
Delete and write again. Result: one file with 4 fragments: two big
red chunks in the middle, a thin green line on the left.
Again. Result: one file with 10 fragments: four small red chunks as
in the beginning, the thin green line after the first chunk as
before, two big red chunks close together at the 40% mark, and one
thin line at the 75% mark.
What is going on?
I know that logical disk blocks do not necessarily have anything to
do with physical location on the disk (what with cylinders and LBA
and all that), but is XP's NTFS that smart? And if so, why would the
layout be so non-reproducible, yet semi-reproducible in places (the
4 small chunks on the left)?
Strangely enough, with FILE_FLAG_NO_BUFFERING I get a fairly
consistent write speed even with the arbitrary fragmentation, but
will it stay that way once the disk gets full?
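
In case it matters, the unbuffered variant differs only in the open
flags and the buffer allocation. FILE_FLAG_NO_BUFFERING requires
sector-aligned buffers and sector-multiple write sizes; page-aligned
memory from VirtualAlloc covers both on a standard 512-byte-sector
disk:

#include <windows.h>

#define CHUNK (2 * 1024 * 1024)
#define CALLS 1000

int main(void)
{
    /* Same placeholder path as before. FILE_FLAG_NO_BUFFERING
       bypasses the cache manager, so the buffer must be
       sector-aligned and each write a multiple of the sector size;
       2MB of page-aligned memory from VirtualAlloc satisfies both. */
    HANDLE h = CreateFileA("D:\\bigfile.dat", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_NO_BUFFERING,
                           NULL);
    char *buf = VirtualAlloc(NULL, CHUNK, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    if (h == INVALID_HANDLE_VALUE || buf == NULL)
        return 1;

    DWORD written;
    for (int i = 0; i < CALLS; i++)
        if (!WriteFile(h, buf, CHUNK, &written, NULL))
            break;

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}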
Could somebody explain the block allocation policy NTFS uses when
writing files on XP SP1/2? How is the free-space list maintained?
That is, when I delete a big file and then allocate a file of the
same size, does it end up in the same place? Do I have to reformat
the disk to get contiguous files?
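
In case it's useful for anyone reproducing this, one way to get exact
fragment counts instead of eyeballing the dfrgntfs display is
FSCTL_GET_RETRIEVAL_POINTERS. A simplified sketch (fixed output
buffer, no ERROR_MORE_DATA retry for files with very many extents):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: extents <file>\n");
        return 1;
    }

    HANDLE h = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* Union keeps the output buffer properly aligned; 64KB holds
       roughly 4000 extents, far more than needed here. */
    static union {
        RETRIEVAL_POINTERS_BUFFER rp;
        char raw[64 * 1024];
    } out;
    STARTING_VCN_INPUT_BUFFER in;
    DWORD bytes;
    in.StartingVcn.QuadPart = 0;            /* start at VCN 0 */

    if (!DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                         &in, sizeof(in), &out, sizeof(out),
                         &bytes, NULL)) {
        fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }

    /* Each entry maps a run of VCNs to a starting LCN on disk;
       each discontinuity in LCN is one fragment boundary. */
    for (DWORD i = 0; i < out.rp.ExtentCount; i++)
        printf("extent %lu: LCN %I64d up to VCN %I64d\n", i,
               out.rp.Extents[i].Lcn.QuadPart,
               out.rp.Extents[i].NextVcn.QuadPart);
    printf("%lu extents total\n", out.rp.ExtentCount);

    CloseHandle(h);
    return 0;
}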
Thanks!