Gerry said:
John
We seem to be debating two points, namely what is fragmentation and what
a boot time defragmenter does. I agree that a non-contiguous file is
fragmented. It is problematic because, being non-contiguous, it causes
free disk space to be fragmented into smaller pockets across the volume.
This means that new files being written are more likely to immediately
fragment than they might if free space were contiguous. I sense you
grudgingly accept that the contents of a contiguous pagefile can still be
fragmented, but maintain that this is not what users generally mean when
they discuss what a Disk Defragmenter does.
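Gerry's point here can be illustrated with a toy block map (a sketch; the volume size, file names, and layout are invented for illustration, not taken from any real file system):

```python
# Toy disk: a list of block owners; None marks a free block.
# A fragmented file leaves free space split into small pockets,
# so the next large allocation cannot be laid down contiguously.

def free_runs(disk):
    """Return the lengths of contiguous runs of free blocks."""
    runs, run = [], 0
    for block in disk:
        if block is None:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    return runs

# A 12-block volume where file A sits in two separated fragments.
disk = ["A", "A", None, None, "A", "A", None, None, None, "B", None, None]
print(free_runs(disk))  # free space is split into 3 pockets: [2, 3, 2]

# A new 4-block file does not fit in any single pocket, so it must
# be written fragmented from the moment it is created.
need = 4
print(any(run >= need for run in free_runs(disk)))  # False
```

The same total amount of free space, held in one contiguous run, would have accommodated the new file without fragmenting it.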
Are you running a VAX/VMS machine? Internal pagefile fragmentation is of
little interest on Windows operating systems: the pagefile is not a
sequentially accessed file, so any effects caused by internal file
fragmentation will be minimal.
Of course, if you have a pagefile that is not fixed in size, i.e.
Windows managed, you will not have free space within the pagefile to
fragment. Fragmentation within the pagefile is probably only of academic
interest, as I suspect there is no utility available that is capable of
defragmenting the data. In any event, the data held within the pagefile
is constantly changing.
It is a commonly held view that defragmenting the pagefile is pointless
because of the constantly changing nature of the data held in the
pagefile.
That is *your* view, not one held by Microsoft. When dealing with
Windows operating systems no one, or hardly anyone, refers to pagefile
fragmentation as internal file fragmentation; it always means that the
file is not in one contiguous segment on the disk, which can have a
negative impact on virtual memory performance. Your view that
defragmenting the pagefile is a waste of time is based on internal file
fragmentation alone, not on Microsoft's definition of pagefile
fragmentation, and it is an erroneous statement when speaking of
pagefiles that are scattered across the disk in multiple segments.
You make the point "When the pagefile is allowed to
dynamically grow some of this fragmentation is normal and temporary
(until you reboot), if you occasionally have 2 or 3 segments due to
dynamic expansion you shouldn't worry too much about it." Several
points here. The pagefile can contract. I am unclear what you mean by
linking "temporary" to a reboot. Are you saying that the data in the
pagefile is abandoned on shutdown and new data is loaded the next time
the computer is booted, or something else?
The location of committed or reserved memory addresses in the pagefile
is identified by the page-table entry. When the computer is rebooted,
the page-table entries are cleared and the pagefile contents are no
longer valid; for all intents and purposes, to the Virtual Memory
Manager the pagefile is as good as empty when Windows is rebooted. New
table entries will be given to any new memory addresses that are backed
by the pagefile, and the old frames will simply be overwritten with new
ones.
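John's description can be modeled with a dictionary standing in for the page table (a minimal sketch; the names, slot layout, and addresses are invented for illustration and greatly simplify the real Virtual Memory Manager):

```python
# Minimal model: pagefile slots are reachable only through page-table
# entries (PTEs) mapping virtual pages to slots. Clearing the table at
# reboot makes the stale contents unreachable; the slots are reused.

pagefile = ["old-frame"] * 4          # stale data from the last session
page_table = {0x1000: 0, 0x2000: 1}   # virtual page -> pagefile slot

# Reboot: the page table is cleared. The stale frames still physically
# occupy disk space, but nothing will ever read them back.
page_table.clear()

# New session: fresh pages get fresh entries and overwrite old frames.
page_table[0x3000] = 0
pagefile[0] = "new-frame"

print(page_table)  # {12288: 0}
print(pagefile)    # ['new-frame', 'old-frame', 'old-frame', 'old-frame']
```

This is why, to the VMM, the pagefile is "as good as empty" after a reboot even though the old bytes are still on disk.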
You also choose the word normal, which is questionable given that the
user is given choices on how to manage the pagefile.
Why is it questionable? A system-managed pagefile can grow or shrink as
needed. When the file grows, if there is no adjacent free space for the
growth, the file will become fragmented on the disk; that is perfectly
normal. Whether or not performance is affected by this fragmentation
depends entirely on whether the whole file is used. The Virtual Memory
Manager will only use what pagefile space it needs; the rest is just
unused disk space, and just because a system-managed file at one time
grew to an unusually large size does not mean that it will always need
that much space. If you constantly have a fragmented pagefile then you
should take appropriate steps to prevent the fragmentation; if it is an
occasional occurrence with a system-managed pagefile it may not be that
big a deal, because the next time you reboot Windows it won't matter at
all: the minimum system-managed pagefile size may be all that the VMM
needs. Even if the file were in three segments, it does not necessarily
mean that all three segments will be used; there may only be need for
the first segment, which is the same as having one large static
pagefile. You could make the pagefile 4GB if you wanted to, and it
wouldn't affect performance at all to have that large a file; the
Virtual Memory Manager would only use what it needs out of the 4GB, and
the rest would simply be unused disk space, unavailable to all but the
VMM.
I have reservations regarding your comment about not worrying about two
or three segments. If they were at a fixed location, then fine. However,
you also use the expression "dynamic expansion". Where the free disk
space is less than 60%, this will eventually lead to accelerated
fragmentation as the free disk space fills over time. My preference is
to set a pagefile with the minimum and maximum the same. While there is
60% free space you can create a contiguous pagefile; once the free
space drops below 60% it becomes increasingly difficult to create a
single contiguous pagefile. If the amount of free disk space is
marginal, you can temporarily increase it (turning off System Restore
is one way) to enable a contiguous pagefile to be created, then turn
System Restore back on.
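The 60% figure here is Gerry's rule of thumb, not a Windows guarantee, but a quick check against it could be scripted before recreating the pagefile (a sketch; the threshold and the example path are assumptions):

```python
import shutil

def free_space_fraction(path):
    """Fraction of the volume at `path` that is currently free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def contiguous_pagefile_likely(path, threshold=0.60):
    # Per the 60% heuristic above: with at least this much free space,
    # a contiguous pagefile can usually still be created.
    return free_space_fraction(path) >= threshold

# Example; on Windows you would pass the volume root, e.g. "C:\\".
print(f"{free_space_fraction('/'):.0%} of the volume is free")
```

If the check fails, the temporary-space tricks mentioned above (such as briefly disabling System Restore) may tip the balance.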
So, if you are concerned about these two or three segments, why are you
saying that defragmenting the pagefile is a waste of time? Personally, I
always prefer and set a static pagefile on my machines, I always insist
on having it in one segment on the disk, and I make sure that it remains
in a contiguous location. But I also know that many people have a
system-managed pagefile that is sometimes in two or three segments on
the disk, that they seldom use all of the pagefile, and that for them it
isn't much of a performance problem. Some of these people are not
technically inclined and don't care to know the technical details of how
the pagefile works or what pagefile fragmentation is; they just want to
use their computers and not be bothered with these things. For them a
system-managed pagefile is often the best option, and the odd time that
the file might need additional room may be infrequent and not a
significant performance hit. I am just being pragmatic here; for some
folks this is not that big a deal.
I appreciate my views on managing the pagefile do not coincide with the
more commonly held preference to allow Windows to manage it.
Yes, and I hold the same view; nowhere in my previous posts did I
contradict that or indicate otherwise. I don't disagree with your
approach at all; I agree with you, and I think that having a properly
sized pagefile is preferable. But we do see posts here from people who
get Virtual Memory warning messages and are completely baffled by them;
for them a system-managed pagefile is preferable to a static pagefile
that is too small, since, as you well know, running out of pagefile
resources will cause the operating system to crash. While I agree with
you on the use of a static pagefile, I disagree with your statement that
defragmenting the pagefile is a waste of time; it isn't, if the pagefile
is fragmented and you use the file to a significant extent.
My approach
does away with the need for a third party defragmenter.
That doesn't mean that others don't have a use for one. Sometimes people
change their pagefile from system managed to static, or they increase
the size of the pagefile, and they end up with the file in several
segments. For them, using a utility like PageDefrag might be easier than
deleting and recreating the pagefile, and PageDefrag can do it in one
reboot instead of two.
Regards;
John