If the pagefile were used only for virtual memory, that would be the
end of it. A task load of 2G would need a 1.5G pagefile with 512M RAM
(assuming it could cope with that RAM:swap ratio), a 512M pagefile
with 1.5G RAM, and so on.
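To put that arithmetic in one place, here's a toy Python sketch; the
function name and the 2G commit figure are just the example above,
not anything Windows actually exposes:

    def pagefile_needed(commit_target_mb, ram_mb):
        # Total commit capacity is roughly RAM + pagefile, so
        # the pagefile covers whatever the RAM doesn't.
        return max(commit_target_mb - ram_mb, 0)

    for ram_mb in (512, 1024, 1536):
        print(ram_mb, "MB RAM ->",
              pagefile_needed(2048, ram_mb), "MB pagefile")
    # 512 MB RAM -> 1536 MB pagefile
    # 1024 MB RAM -> 1024 MB pagefile
    # 1536 MB RAM -> 512 MB pagefile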
It's the above logic that leads me to use a 512M pagefile for most XP
installations, irrespective of whether they have 128M, 256M, 512M or
more RAM. But I also take steps to ensure the page file is not being
used for other purposes, such as fast user switching or full RAM
crash dumps, as those scale up with RAM rather than down as swapping
would.
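The two scaling directions are easy to see side by side; a rough
sketch, where the 50M dump-header allowance is my own approximation
and both function names are made up:

    def pagefile_for_swapping_mb(commit_target_mb, ram_mb):
        # For a fixed workload, swap demand FALLS as RAM rises.
        return max(commit_target_mb - ram_mb, 0)

    def pagefile_for_full_dump_mb(ram_mb, header_mb=50):
        # A full RAM crash dump is written through the boot-volume
        # pagefile, so that file must hold all of RAM plus a small
        # header -- this RISES as RAM rises.
        return ram_mb + header_mb

    for ram_mb in (128, 512, 2048):
        print(ram_mb, pagefile_for_swapping_mb(2048, ram_mb),
              pagefile_for_full_dump_mb(ram_mb))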
Aside from fast user switching and full crash dumps, Vista may use
the pagefile as a dumping ground for other underfootware stuff. I
don't know the OS well enough to say whether this applies to
contenders such as background defrag, shadow copy, indexing,
thumbnailing, etc.
I have 2 Gig of RAM, and the point I was making is that the PF would
keep on growing even when not being used anywhere close to capacity.
It would also fragment very badly, and this did carry over across a
reboot.
If mine is left to manage itself, it creates a tiny PF initially,
then keeps adding chunks; these will all be fragmented and NOT be
gone after a reboot. The only way I found of combating this was
either to switch the PF off, reboot, and switch it back on again, or
to limit it manually, which I did.
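For what it's worth, the setting the System Properties dialog edits
lives in the registry, and you can at least inspect it from Python's
standard winreg module (Windows only; changing it needs admin rights
and a reboot, so the GUI is the safer route):

    import winreg

    KEY = (r"SYSTEM\CurrentControlSet\Control"
           r"\Session Manager\Memory Management")

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        paging_files, _ = winreg.QueryValueEx(k, "PagingFiles")

    # Each entry reads "path initial_MB max_MB"; a manual limit
    # as described above looks like "C:\pagefile.sys 512 512".
    for entry in paging_files:
        print(entry)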
It's hard to interpret the impact of fragmenting the page file. On
the face of it, fragmentation would slow things down if the page file
were read from one end to the other, but that's rarely how it would
be used.
Let's make a few assumptions (readers, please contest these if they
are wrong!):
- in-RAM material that has not changed:
  - is free to page out, as it doesn't need to be written back to HD
  - is cheap to reload from the original source file
- in-RAM material that has changed:
  - is costly to page out, as it has to be written back to HD
  - may be written to the pagefile rather than its original location
  - may ultimately need to be copied to its original location
If this is true, then paging can be expected to purge first the
material that does not need to be written back to disk, and only when
that's exhausted will it start on altered RAM contents that do have
to be written to disk before something else can be paged into that
RAM.
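If those assumptions hold, the eviction priority is just the trivial
ordering in this toy sketch; a model of the reasoning above, not a
documented description of the XP memory manager:

    def eviction_order(pages):
        # pages: list of (name, is_dirty); evict clean pages first,
        # since they can simply be dropped and re-read from their
        # source file, while dirty ones cost a pagefile write first.
        clean = [p for p in pages if not p[1]]
        dirty = [p for p in pages if p[1]]
        return clean + dirty

    pages = [("os_code", False), ("edited_doc", True), ("dll", False)]
    print([name for name, dirty in eviction_order(pages)])
    # ['os_code', 'dll', 'edited_doc']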
That implies page file access will be mixed with page-back reads from
unchanged but paged-out files, including original OS code that would
usually be at the "front" of the disk; if updated, it could be
anywhere, unless a defrag has relocated it.
It will also be mixed with write-paging back to temp files (if not
pagefile itself) that may be at the "far side" of the disk.
If both of the above are true, then the winning strategy might be to
fragment the page file into the file mass, especially if the paging
manager is smart enough to page using areas of the file that lie
closest to what is being paged.
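Whether any real pager works this way I can't say, but "smart enough"
would amount to something like this toy extent picker (all offsets
hypothetical):

    def nearest_extent(extent_offsets, head_position):
        # Pick the pagefile fragment whose disk offset is closest
        # to where the head already is, minimising seek distance.
        return min(extent_offsets,
                   key=lambda off: abs(off - head_position))

    extents = [1000, 480000, 900000]   # made-up fragment offsets
    print(nearest_extent(extents, head_position=470000))   # -> 480000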
--------------- ----- ---- --- -- - - -
Error Messages Are Your Friends