page file size

Al Dykes

And the CPU was designed to page, the motherboard chipsets were designed to page. Windows was designed to suit the hardware.

Fire up Task Manager and pick View > Select Columns. You'll be
able to add counters for memory-related usage and see which programs
are doing what. There's lots there I don't understand for Windows, but
to me the interesting data is "PF Delta", which is how many times in
the update interval an application needed a page that wasn't in its
working set.
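
(If you'd rather watch that counter from code, here's a rough sketch in Win32 C - the API calls are real, but the once-a-second polling loop is just for illustration - that prints the same sort of per-interval delta from the cumulative PageFaultCount:)

/* Rough sketch: poll this process's cumulative page-fault count and
   print the delta per interval, which is roughly what Task Manager's
   "PF Delta" column shows.  Compile with: cl pfdelta.c psapi.lib */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    DWORD prev = 0;

    for (;;) {
        if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
            printf("PF delta this interval: %lu (total %lu)\n",
                   pmc.PageFaultCount - prev, pmc.PageFaultCount);
            prev = pmc.PageFaultCount;
        }
        Sleep(1000);   /* sample once per second, like the default update interval */
    }
    return 0;
}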

If there is a page fault it means that a page your application needs
isn't in the VM mapping tables, and the OS takes over and updates the
VM tables, bringing a page in if necessary. If you are short on real
memory that may mean forcing a physical write of some other page to
make room, so there were two disk I/O ops instead of zero. Even if a
page is in memory, a PF takes CPU time away from useful work and slows
down your app. A "soft PF" means that the page was in memory and no
I/O was necessary to resolve it; a "hard PF" means that I/O was
necessary. (Anyone who can correct my terminology for Windows, please
chip in.)
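
(A quick way to see soft faults with your own eyes - a sketch, nothing production-grade: commit a block of memory and touch each page once. Each first touch is a demand-zero fault that gets resolved with no disk I/O:)

/* Sketch: commit a region and touch each page.  The first touch of each
   page is a demand-zero (soft) page fault - the page wasn't in the
   process's VM tables yet, but no disk I/O is needed to resolve it. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    const SIZE_T size = 64 * 1024 * 1024;          /* 64 MB */
    PROCESS_MEMORY_COUNTERS before, after;
    BYTE *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    SIZE_T i;

    if (!p) return 1;
    GetProcessMemoryInfo(GetCurrentProcess(), &before, sizeof(before));

    for (i = 0; i < size; i += 4096)               /* one write per 4 KB page */
        p[i] = 1;

    GetProcessMemoryInfo(GetCurrentProcess(), &after, sizeof(after));
    printf("Page faults taken: %lu (roughly one per page touched)\n",
           after.PageFaultCount - before.PageFaultCount);

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}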

As someone else described, each program has a "working set", the
minimum number of pages it needs to do its job with essentially zero
page faults (except for startup). The total size of the program is
frequently many times the working set size. As long as the total of
the working sets for all running processes is less than the total real
memory, you've got a system that is running efficiently.
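
(If you want to see your own working set number rather than trust Task Manager, a minimal sketch:)

/* Sketch: ask Windows how big this process's working set currently is.
   As long as the sum of these across all running processes fits in RAM,
   you should see very few hard faults. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        printf("Working set:      %lu KB\n", (unsigned long)(pmc.WorkingSetSize / 1024));
        printf("Peak working set: %lu KB\n", (unsigned long)(pmc.PeakWorkingSetSize / 1024));
    }
    return 0;
}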

I used to be able to quote microsecond figures for page fault handling
for certain mainframes. Soft faults were in microseconds, hard faults
in milliseconds. (They still are.) Back in the day, I knew that 25 soft
faults per second meant we either had to tune our application mix
(might be expensive) or buy another chunk of memory (expensive).
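
(To put rough numbers on why those rates mattered - the timings below are assumptions for illustration, not measurements:)

/* Back-of-the-envelope: time lost to page faults at a given fault rate.
   The per-fault costs here are illustrative assumptions - soft faults
   cost microseconds of CPU, hard faults cost a disk I/O in milliseconds. */
#include <stdio.h>

int main(void)
{
    const double soft_fault_us = 10.0;      /* assumed: ~10 us of kernel time */
    const double hard_fault_us = 8000.0;    /* assumed: ~8 ms of disk I/O     */
    const double faults_per_sec = 25.0;     /* the old "tune or buy RAM" threshold */

    printf("25 soft faults/sec ~ %.2f ms of CPU lost per second\n",
           faults_per_sec * soft_fault_us / 1000.0);
    printf("25 hard faults/sec ~ %.0f ms of wall time lost per second\n",
           faults_per_sec * hard_fault_us / 1000.0);
    return 0;
}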

At least one major mainframe operating system that was current in the
late 70's was even more tightly coupled to the VM architecture than
Windows is. The hardware and OS managed a data page of the file
system the same way it handled a memory page. All of memory was one
big cache. TOPS-20, fast as h**l for its day.
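
(The closest thing you can touch on Windows today is a memory-mapped file, where file pages really are handled by the same paging machinery as ordinary memory. A bare-bones sketch, assuming a file called example.dat exists in the current directory:)

/* Sketch: map a file so its pages are read in on demand by the same
   page-fault path that handles ordinary memory. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("example.dat", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) return 1;

    const char *data = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (data)
        printf("First byte of the file, read via a page fault: 0x%02x\n",
               (unsigned char)data[0]);

    UnmapViewOfFile(data);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}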

Many PhD papers were written in the 60's and 70's about memory
management strategies for virtual systems, and there were loud
arguments at professional meetings about how they worked with
different process scheduling algorithms. Something we take for granted
now.

Now I'll return your TV channel to the 21st century......
 

David Candy

Mike Brannigan is a no-one and his opinion is worthless. If you want to quote MS staff, at least choose one with a clue like Larry O (he would probably say the same thing as MB, but based on knowledge). But MB just parrots what he's been told to say. He is not a source for anything except how the newsgroups run (that's his job).

There was a man here a while ago complaining that IE had a hard-coded limit of 34 windows (or something like that) and no error messages. Classic symptoms of a person with too small a page file. I said increase the page file size. He said no and wouldn't believe me. After a few days he tried my suggestion and went away happy.

Larry is no page file expert (there aren't that many people working on the page file code at MS - the number of experts is very small) but knows a thing or two. I'll refer you to his articles. You need to read both.

http://weblogs.asp.net/larryosterman/archive/2004/03/18/92010.aspx
http://weblogs.asp.net/larryosterman/archive/2004/05/05/126532.aspx

So I don't expect you to quote the clerical staff again.

But let's play a game.
Q. How much memory can a program use?
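
(One way to peek at the numbers that question is driving at - a sketch using GlobalMemoryStatusEx, which reports a process's virtual address space alongside physical RAM and the commit limit:)

/* Sketch: print the limits Windows reports to this process.  Note which
   number is the biggest, and which ones the page file actually affects. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (GlobalMemoryStatusEx(&ms)) {
        printf("Physical RAM:               %llu MB\n", ms.ullTotalPhys     / (1024 * 1024));
        printf("Commit limit (RAM + swap):  %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
        printf("Virtual address space:      %llu MB\n", ms.ullTotalVirtual  / (1024 * 1024));
    }
    return 0;
}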
 

David Candy

One thing to note is that XP tries to page minimised apps completely out of memory (Raymond always mentions this). Leave an app minimised long enough and, as long as no code runs in it, it will consume 0 bytes of physical memory.
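
(You can do the same trim to your own process that minimising does, by hand - a sketch:)

/* Sketch: ask Windows to trim this process's working set to (almost)
   nothing, the same sort of trim that happens when a window is
   minimised.  The pages aren't lost - they're faulted back in (from RAM
   or the page file) the next time the code touches them. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    if (SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1))
        printf("Working set trimmed; watch the Mem Usage column in Task Manager.\n");
    return 0;
}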

I think part of the misunderstanding is that NT is designed to never need rebooting (not quite there yet). So MS think about how to manage memory forever, while some turn off their computers every day and think NT/XP is designed to manage their memory for only 8 hours. I've been up for 4 days (storms 4 days ago).

In designing servers one also has to consider disk space. It's really bad if the swap stops the company database from updating due to lack of space, or an Excel spreadsheet from saving (especially if temp files are involved). This is normally caused by a program that goes wild. But something to consider. It's mostly theoretical.

I know that 2 gig will always handle my workload (256 meg will also, but I have the rest for days like today when working with large video and graphic files) so I never need to think about it again. I last adjusted my swap file 18 months ago - the last time chkdsk deleted 90% of my files.

I was reading about working sets in some post today. There's a tool somewhere that lists each app's working set. 99.999999999% of apps had the standard setting of 2 meg. Only Office had changed the default (and they were being a bit greedy, I thought).
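
(If you can't find the tool, a rough homemade version is only a few lines - it needs enough rights to open other people's processes, and anything it can't open it just skips:)

/* Sketch of a homemade "list every app's working set" tool: enumerate
   process IDs and print each one's current working set size.
   Compile with: cl wslist.c psapi.lib */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    DWORD pids[1024], bytes;
    if (!EnumProcesses(pids, sizeof(pids), &bytes)) return 1;

    DWORD count = bytes / sizeof(DWORD);
    for (DWORD i = 0; i < count; i++) {
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                               FALSE, pids[i]);
        if (!h) continue;                          /* no permission - skip it */

        PROCESS_MEMORY_COUNTERS pmc;
        char name[MAX_PATH] = "<unknown>";
        GetModuleBaseNameA(h, NULL, name, sizeof(name));
        if (GetProcessMemoryInfo(h, &pmc, sizeof(pmc)))
            printf("%6lu  %-30s  working set %7lu KB\n",
                   pids[i], name, (unsigned long)(pmc.WorkingSetSize / 1024));
        CloseHandle(h);
    }
    return 0;
}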
 

David Candy

I wasted years of my life optimising the swap file on Win 3.1, changing sweep frequency, over commit, et al (still got the docs for it - saw it today in a search for something). Nothing I did made a bit of difference except benchmarks gave a 0.0000001% improvement.
 
