Al Dykes
And the CPU was designed to page, the motherboard chipsets were designed to page. Windows was designed to suit the hardware.
Fire up Task Manager and pick Select Columns from the View menu. You'll
be able to add counters for memory-related usage and see which programs
are doing what. There's a lot there I don't understand for Windows, but
to me the interesting figure is "PF Delta", which is how many times in
the update interval an application needed a page that wasn't already
mapped in.
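
If you'd rather watch that number from code than from Task Manager, the
Win32 psapi call GetProcessMemoryInfo reports a process's cumulative
page-fault count; sample it twice and subtract and you've got your own
PF Delta. A minimal C sketch (the one-second interval is just an
arbitrary choice of mine):

    /* Rough sketch of the counter Task Manager shows as "PF Delta":
     * GetProcessMemoryInfo (declared in psapi.h) reports a process's
     * cumulative page-fault count, so sampling it twice and subtracting
     * gives faults per interval.  Compile with e.g.:  cl pfdelta.c psapi.lib
     */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    static DWORD page_faults(void)
    {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        /* GetCurrentProcess() returns a pseudo-handle; no CloseHandle needed */
        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        return pmc.PageFaultCount;   /* cumulative since process start */
    }

    int main(void)
    {
        DWORD before, after;
        before = page_faults();
        Sleep(1000);                 /* the "update interval" */
        after = page_faults();
        printf("PF delta over 1 second: %lu\n",
               (unsigned long)(after - before));
        return 0;
    }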
If there is a page fault, it means that a page your application needs
isn't valid in the VM mapping tables, and the OS takes over and updates
the VM tables, bringing the page in if necessary. If you are short on
real memory, that may mean forcing a physical write of some other page
to make room, so there were two disk I/O ops instead of zero. Even if a
page is in memory, a PF takes CPU time away from useful work and slows
down your app. A "soft PF" means that the page was already in memory
and no I/O was necessary to resolve it; a "hard PF" means that I/O was
necessary. (Anyone who can correct my terminology for Windows, please
chip in.)
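
You can watch soft faults happen with a few lines of C. VirtualAlloc
commits pages lazily, so the first write to each page takes a page
fault; as long as memory isn't tight these are soft (demand-zero)
faults, with no disk I/O. A sketch, illustrative only:

    /* On Windows, VirtualAlloc commits pages lazily, so the first write
     * to each page takes a page fault -- a soft (demand-zero) one when
     * memory isn't tight, so no disk I/O.  Watching PageFaultCount around
     * the touch loop makes that visible.  Compile: cl softfault.c psapi.lib
     */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    static DWORD page_faults(void)
    {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        return pmc.PageFaultCount;     /* cumulative since process start */
    }

    int main(void)
    {
        SYSTEM_INFO si;
        SIZE_T bytes = 64 * 1024 * 1024;   /* 64 MB */
        SIZE_T i;
        char *buf;
        DWORD before, after;

        GetSystemInfo(&si);                /* si.dwPageSize: 4096 on x86/x64 */
        buf = VirtualAlloc(NULL, bytes, MEM_COMMIT | MEM_RESERVE,
                           PAGE_READWRITE);
        if (buf == NULL)
            return 1;

        before = page_faults();
        for (i = 0; i < bytes; i += si.dwPageSize)
            buf[i] = 1;                    /* first touch faults the page in */
        after = page_faults();

        printf("faults from touching %lu pages: %lu\n",
               (unsigned long)(bytes / si.dwPageSize),
               (unsigned long)(after - before));

        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }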
As someone else described, each program has a "working set", the
minimum number of pages it needs to do its job with essentially zero
page faults (except at startup). The total size of the program is
frequently many times the working set size. As long as the working sets
of all running processes add up to less than the total real memory,
you've got a system that is running efficiently.
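
Windows will report a process's working set through the same structure
as above, with one caveat: its counter means "pages resident in RAM
right now", which is close to, but not exactly, the textbook working
set (the pages touched over a recent time window). A quick sketch:

    /* The same PROCESS_MEMORY_COUNTERS structure reports the working set:
     * WorkingSetSize is what's resident now, PeakWorkingSetSize its
     * high-water mark for the life of the process.
     */
    #include <windows.h>
    #include <psapi.h>
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            printf("working set:      %lu KB\n",
                   (unsigned long)(pmc.WorkingSetSize / 1024));
            printf("peak working set: %lu KB\n",
                   (unsigned long)(pmc.PeakWorkingSetSize / 1024));
            printf("page faults:      %lu\n",
                   (unsigned long)pmc.PageFaultCount);
        }
        return 0;
    }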
I used to be able to quote microsecond figures for page fault handling
on certain mainframes. Soft faults were in microseconds, hard faults in
milliseconds. (They still are.) Back in the day, I knew that 25 soft
faults per second meant we either had to tune our application mix
(might be expensive) or buy another chunk of memory (expensive).
At least one major mainframe operating system that was current in the
late 70's was even more tightly coupled to the VM architecture than
Windows is. The hardware and OS managed a data page of the file
system the same way they handled a memory page. All of memory was one
big cache. TOPS-20: fast as h**l for its day.
Many PhD papers were written in the 60's and 70's about memory
management strategies for virtual systems, and there were loud
arguments at professional meetings about how they interacted with
different process scheduling algorithms. Something we take for granted
now.
Now I'll return your TV channel to the 21st century......