There are some items in that article that have been controversial on
this forum before:
"Have the initial size be at least 1.5 times bigger than the amount of
physical RAM. Do NOT make the Pagefile smaller than the amount of
physical RAM you've got installed on your system."
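That rule of thumb can be sketched in a few lines of Python. The function name and the rounding are my own assumptions, not from the article:

```python
# Sketch of the quoted article's rule of thumb: initial pagefile size
# at least 1.5x installed RAM, and never smaller than RAM itself.
# (The replies in this thread argue the rule is too generic.)

def article_initial_pagefile_mb(ram_mb):
    """Initial pagefile size per the quoted article, in MB."""
    return max(int(ram_mb * 1.5), ram_mb)

print(article_initial_pagefile_mb(512))  # 768
```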
I was advised in an earlier thread to set my initial pagefile size to
512MB, which is the size of my RAM. I was also advised to set my
maximum pagefile size to 1.5GB. I had it set a lot higher, but I was
running into NTFS corruption problems that went away when I lowered it
to the current values of 512MB/1.5GB.
With the 780-odd MB of memory allocated that you reported earlier
(though really you should consider the peak value, not the
momentary one as you did), a 512MB pagefile should work. You keep
overlooking that there is no single generic answer that fits all
systems; you have to actually look at YOUR system's usage. Don't
tell us what you have the pagefile set to, tell us what your PEAK
Commit Charge is.
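To make that concrete, here is a small sketch of sizing from the peak Commit Charge instead of from installed RAM. The 25% headroom factor and the helper name are my assumptions; the 780MB and 512MB figures are the ones from this thread:

```python
# Size the pagefile from observed usage (PEAK Commit Charge in Task
# Manager), not from a generic RAM-based rule. The headroom factor
# is an illustrative guess, not a measured value.

def suggested_min_pagefile_mb(peak_commit_mb, ram_mb, headroom=1.25):
    """Smallest sensible initial pagefile size, in MB: enough to back
    the peak commit charge (plus headroom) that RAM cannot hold."""
    needed = peak_commit_mb * headroom - ram_mb
    return max(int(needed), 0)

# 780 MB peak commit on a 512 MB machine, the numbers in this thread:
print(suggested_min_pagefile_mb(780, 512))  # 463 -> a 512MB pagefile fits
```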
Remember that if you set the pagefile too small, it won't just slow
things down; you will see a warning message.
You could continue to have the system set to something like a 512MB
minimum (which minimizes fragmentation: the file is contiguous if
the disk space is available, though even then access to it may be
fragmented, because that is how paging works; only what's needed is
read back) and a larger maximum. You'd want your minimum large
enough that your big jobs don't exceed it; the larger maximum is
just a failsafe should you do something very unusual. Keep in mind
that if you did such an unusual task and suddenly needed another GB
of virtual memory, you'd be sitting around for ages waiting for the
system to stop thrashing the HDD, swapping it all back and forth
between disk and real memory.
You don't ever want to run jobs like that. To give you an example,
I used to try to edit audio on a P2 box with 32MB in it; several
minutes would pass waiting for the swapping to finish. When a pair
of 128MB DIMMs was added to that box later, similar jobs took under
20 seconds.
"Make its initial size as big as the maximum size. Although this will
cause the Pagefile to occupy more HD space, we do not want it to start
off small, then having to constantly grow on the HD. Writing large
files (and the Pagefile is indeed large) to the HD will cause a lot of
disk activity that will cause performance degradation. Also, since the
Pagefile only grows in increments, you will probably cause Pagefile
fragmentation, adding more overhead to the already stressed HD."
You don't need to set the initial size the same as the max; just
set the initial large enough that you don't "expect" it ever to be
exceeded. For example, you could set a 1GB minimum and a 2GB
maximum. Whether it would fragment past 1GB makes little
difference, because you don't want to use the system for anything
that would actually use an additional 1GB of virtual memory; the
extra headroom is merely there for allocation purposes, not for
reading and writing data.
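That suggestion can be written down as a tiny helper. The 1GB figure comes from the paragraph above; doubling the minimum to get the failsafe maximum is my assumption:

```python
# The min/max suggestion above as a helper: pick a minimum your big
# jobs never exceed, and a larger maximum purely as a failsafe.
# Doubling the minimum for the maximum is an illustrative assumption.

def pagefile_bounds_mb(biggest_expected_job_mb):
    """Return (minimum, maximum) pagefile sizes in MB."""
    minimum = biggest_expected_job_mb
    maximum = 2 * minimum
    return minimum, maximum

print(pagefile_bounds_mb(1024))  # (1024, 2048)
```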
I was advised to make the maximum 1.5GB even though the initial was
512MB. I have more than ample disk space, so that is not an issue.
If it ain't broke don't fix it.