Page File size - I've been thinking.

  • Thread starter: Lord Turkey Cough

Lord Turkey Cough

Was wondering what to set that to. I set it a while back to 10 gig :O)
The reason being I have piles of spare disk space, so I thought "I might as
well make use of it!!"

Anyway, I returned to the issue recently as I was a bit concerned about
performance. More page file needed, I thought :O|

However, upon further thought, I think the big page file may be the problem,
not the solution.

I have 1.25 gig of memory and most of the time there seems to be a fair bit
free, currently 735 meg 'available'.

I now think I would be better off without a pagefile at all. I think page
files are only good for frequently used stuff, and I don't have enough of
it, so the system is now wasting time writing a load of 'crap' to the page
file which will probably never be needed again.

For example, I was cleaning out a load of stuff I recorded from TV, some big
files up to 3 gig. Now I think once it reads 'em in (so I can see what is
recorded) it is writing them back to the page file. A total waste of time.
I would go as far as to say it's more than halving my computer's speed on
many occasions (writes take a long time).

I'm gonna set it to zero. It's pointless having one with 1.2 gig of RAM.
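
A quick way to check whether the existing pagefile is actually being used,
rather than guessing, is to look at the commit figures. Below is a minimal
sketch assuming the third-party psutil package (not something mentioned in
the thread); Task Manager's Performance tab shows the same numbers:

    # Rough check of physical memory and pagefile usage before changing anything.
    # Assumes the third-party psutil package is installed (pip install psutil).
    import psutil

    ram = psutil.virtual_memory()
    swap = psutil.swap_memory()   # on Windows this roughly tracks pagefile usage

    print("Physical RAM : %d MB total, %d MB available"
          % (ram.total // 2**20, ram.available // 2**20))
    print("Pagefile/swap: %d MB total, %d MB in use"
          % (swap.total // 2**20, swap.used // 2**20))

    # If 'in use' stays near zero under a normal workload, a 10 gig pagefile is
    # doing nothing; if it is regularly large, removing the pagefile entirely
    # risks out-of-memory failures.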
 
Lord Turkey Cough said:
I'm gonna set it to zero. It's pointless having one with 1.2 gig of RAM.


OK, done it. Will let you know what effect it has :O)
 
Lord Turkey Cough said:
For example, I was cleaning out a load of stuff I recorded from TV, some big
files up to 3 gig. Now I think once it reads 'em in (so I can see what is
recorded) it is writing them back to the page file. A total waste of time.

I'm gonna set it to zero. It's pointless having one with 1.2 gig of RAM.

You don't need a 10GB pagefile, but in some cases you could need one even
with 1.25GB of memory. While the system will run (many) things without a
pagefile, eventually when you run something that needs more virtual memory
it may cause a problem. Try a 2GB pagefile.

When your system reads a big TV capture file, it is not writing any of that
out to the pagefile. The pagefile is only for things that need to remain in
memory (but you've run out of room), or additional address space reserved by
applications that "might" end up using that much memory even if they don't.

If you have excessive HDD activity, it is more likely due to filesystem
fragmentation, or some other OS setting you have changed from the defaults.
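
For reference, the pagefile size can be fixed from the Virtual memory dialog
under System Properties > Advanced > Performance Settings. The sketch below
scripts the same 2GB change through wmic from Python; the underlying classes
(Win32_ComputerSystem, Win32_PageFileSetting) are standard, but treat the
exact invocation as an assumption and check it on your own Windows version -
AutomaticManagedPagefile only exists on Vista and later, and the change needs
an elevated prompt plus a reboot.

    # Sketch (assumption-laden): fix the C: pagefile at 2048 MB by calling wmic.
    # Verify the wmic syntax on your own system before relying on it.
    import subprocess

    def run(cmd):
        print(">", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # Vista and later only: turn off "Automatically manage paging file size".
    run('wmic computersystem where name="%computername%" '
        'set AutomaticManagedPagefile=False')

    # Set initial and maximum size to the same value so the file never grows
    # or shrinks (which also keeps it from fragmenting).
    run('wmic pagefileset where name="C:\\\\pagefile.sys" '
        'set InitialSize=2048,MaximumSize=2048')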
 
| I'm gonna set it to zero. It's pointless having one with 1.2 gig of RAM.

It depends on the amount of RAM you have and the amount of memory you use.
Today's applications tend to need more. As computers get bigger in memory,
the software developers steal it away from you with more bloated programs.

If you can possibly increase your RAM, that would be best. If using XP, try
to reach 3 GB, but beyond that it's not much help. I don't know about Vista.
Linux can go up to 64 GB in the 32-bit version (with PAE).

I'm looking at a new desktop system for my Linux work and plan to make it a
swapless system with 8GB to 16GB of RAM. The intent is to avoid the I/O
activity of swapping.
 
32-bit Windows can address 4 GB of memory, so the most paging file it can use is 4 GB.
When WinNT was my main OS I tried running without a paging file, and it would cause a crash after running for a while. I have not tried WinXP without a paging file, so I don't know if it will work or not.
 
Mike said:
32-bit Windows can address 4 GB of memory, so the most paging file it can use is 4 GB.

The 4GB limit is tied to the physical address space. The pagefile doesn't
live there, so you can have more than 4GB of pagefile.
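
To put rough numbers on that (the 2 GB figure is just the size suggested
earlier in the thread):

    commit limit  =  physical RAM + total pagefile (roughly)
                  =  1.25 GB + 2 GB  =  3.25 GB

The pagefile raises the system-wide commit limit, not the per-process
address space: each 32-bit process still gets at most 2 GB of user-mode
virtual address space by default (3 GB with the /3GB boot switch), however
large the pagefile is.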
 
Lord Turkey Cough said:
OK, done it. Will let you know what effect it has :O)


Seems to be running fine, nice and smooth, no problems to report
whatsoever. I can't see the point of having a cache with over a gig of RAM;
it's counterproductive. A gig of data by definition cannot be frequently
used.
It just creates a lot of unnecessary disk activity, wasting time, energy,
disk space and bearings!!
Possibly the worst idea in the history of computing!!
 
Seems to be running fine, nice and smooth, no problems to report
whatsoever. I can't see the point of having a cache with over a gig of RAM;
it's counterproductive. A gig of data by definition cannot be frequently
used.

Well... in that case some people wouldn't buy that much
memory?

You can disable the pagefile and everything will run fine,
even a trivially small percent faster, but then odd problems
can develop later.

For example, I ran a gaming system fine with the pagefile disabled, then
during some game (I forget which one at this point) it would randomly either
freeze or kick me out to the desktop (I forget which). Lots of people
would've suggested power/cooling/drivers/etc, but since I had fair
confidence in the prior checks I'd done on the system, further investigation
and enabling the pagefile again resolved the problem.

It just creates a lot of unnecessary disk activity, wasting time, energy,
disk space and bearings!!
Possibly the worst idea in the history of computing!!

MS certainly doesn't want to rule out low-spec systems from running
Windows. Regardless, the scenario you posed previously about loading a video
file would not cause it to page out to virtual memory. If you were doing
something particular, say loading that into a video editing application
which then proceeded to allocate a very large chunk of memory for itself, in
that case you would see a small write to the pagefile when allocating, but
typically nothing is paged out yet unless you had actually run out of
physical memory for the task you were trying to do.
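
One way to see that distinction is to watch a process's committed size
versus its working set around a big allocation. A minimal sketch, again
assuming the third-party psutil package (on Windows, memory_info().vms
corresponds to pagefile-backed commit); both figures rise with the
allocation, but nothing need be written to the pagefile unless physical
memory actually runs short:

    # Sketch: compare working set vs. committed size around a large allocation.
    # Assumes the third-party psutil package is installed.
    import psutil

    proc = psutil.Process()          # the current Python process

    def show(label):
        info = proc.memory_info()
        print("%-12s working set %4d MB   committed %4d MB"
              % (label, info.rss // 2**20, info.vms // 2**20))

    show("before")
    buf = bytearray(256 * 1024 * 1024)   # allocate (and zero-fill) 256 MB
    show("after alloc")
    del buf
    show("after free")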

The disk space isn't very significant; it's not like anyone should be
trying to squeeze every last GB of space out of their drive, as using the
last portion is inherently slower and tends to end up more fragmented. The
energy is even less of a concern; it'd be a few mA difference in a system
using several amps per rail, and it's not as though the drive wouldn't have
been spinning anyway, since you are actively using the system.

If you want to talk about waste in HDD access, consider Vista, which
actively reads in (SuperFetches) files just "in case" you might want to use
them, quickly filling much of the memory. Unless the system is quite well
endowed (with otherwise more memory than would be needed), once you start
using applications that require some of this memory, the superfetched data
has to be discarded and re-read again before its next use. Like filling a
bucket, then dumping it out, then filling and dumping all over again. This
would be great if it were a 2-mile trip to the nearest well and you had
plenty of spare buckets. It's not so great when Vista is shipped with even
low-end PCs now.
 
A rule of thumb in Linux is to make your swap file no bigger than the
amount of physical memory in your computer. That is probably a decent
guideline to follow for Windows as well. I would set it to 1 gig and see if
you run into any problems. If you do, then increase it to 2 gig, but 1
should be plenty alongside your 1 gig of physical RAM.
 
Michael Everson said:
A rule of thumb in Linux is to make your swap file no bigger than the
amount of physical memory in your computer.

That is an old wives' tale.

Does it really make sense to have a 128 meg swapfile if you only have 128
meg, but have a 1 gig swapfile if you have 1 gig?

Let Windows manage the size... If you have lots of RAM you won't be hitting
it very often anyhow.
 
In message <xUCQi.54854$1y4.47899@pd7urf2no> "Noozer"
That is an old wives' tale.

It's a decent suggestion, IF the amount of RAM in your system is correct
to begin with.

Any/all of these "'x' times the amount of RAM" rules are totally useless if
the amount of RAM isn't sized properly for expected average and expected
peak loads.
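
As a toy illustration of that point (the helper and its 25% margin are made
up for the example, not a rule from the thread), a sizing estimate would
start from expected peak commit rather than from a RAM multiplier:

    # Toy sketch: size the pagefile from expected peak load, not a RAM multiple.
    # The 25% safety margin is an arbitrary assumption for the example.
    def suggested_pagefile_mb(ram_mb, expected_peak_commit_mb, margin=0.25):
        """Pagefile size in MB based on how far peak commit exceeds RAM."""
        shortfall = max(0, expected_peak_commit_mb - ram_mb)
        return int(shortfall * (1 + margin)) if shortfall else 0

    # 1280 MB RAM with a workload peaking around 1800 MB of commit -> 650 MB.
    print(suggested_pagefile_mb(1280, 1800))
    # Same RAM with a 900 MB peak -> 0 under this toy heuristic.
    print(suggested_pagefile_mb(1280, 900))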
 
Michael Everson said:
A rule of thumb in Linux is to make your swap file no bigger than the
amount of physical memory in your computer. That is probably a decent
guideline to follow for Windows as well. I would set it to 1 gig and see if
you run into any problems. If you do, then increase it to 2 gig, but 1
should be plenty alongside your 1 gig of physical RAM.

No, I don't really agree. I think 90% of the time it will just be swapping
a load of crap you will never need again to disk, and disk writes are slow.
I would say my computer is a lot more responsive since I ditched the page
file.

I don't wish to be rude, but the idea that you have over 1 gig of frequently
used data is ludicrous - cloud cuckoo land. If you are reading in files that
large you might as well read the original file.

If you read in 100 meg then you would have to write 100 meg to disk, which
is a much slower process than simply reading in the original file.

Pointless.
 
| I'm looking at a new desktop system for my Linux work and plan to make it
| a swapless system with 8GB to 16GB of RAM. The intent is to avoid the
| I/O activity of swapping.

What exactly will you be doing that requires / can use that much RAM?
 
Lord Turkey Cough said:
No, I don't really agree. I think 90% of the time it will just be swapping
a load of crap you will never need again to disk, and disk writes are slow.

If you read in 100 meg then you would have to write 100 meg to disk, which
is a much slower process than simply reading in the original file.

Contiguous swap file space versus a fragmented original 100MB file - that
could make loads of difference! I personally run with 1.5GB of RAM and the
swapfile disabled, as I work with loads of small files that are read in,
compiled, OBJ files created, linked etc etc. If the swapfile were busy at
the same time, performance would drop.

I would suggest running the simple Windows Task Manager. Leave it open for
ages on the Performance tab with the update speed (View menu) set to Low.
See how much memory you 'peak' at. If you don't get anywhere near (maybe
75%) full, then just turn off swapping. But if you run out of RAM, things
WILL fail/crash.
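
In the same spirit as watching Task Manager, a small polling script can
record the peak for you. A minimal sketch, once more assuming the
third-party psutil package; stop it with Ctrl+C when you are done:

    # Sketch: poll physical memory usage and report the peak, Task Manager style.
    import time
    import psutil

    peak_used_mb = 0
    try:
        while True:
            vm = psutil.virtual_memory()
            used_mb = (vm.total - vm.available) // 2**20
            peak_used_mb = max(peak_used_mb, used_mb)
            time.sleep(5)
    except KeyboardInterrupt:
        total_mb = psutil.virtual_memory().total // 2**20
        print("Peak physical memory used: %d of %d MB (%.0f%%)"
              % (peak_used_mb, total_mb, 100.0 * peak_used_mb / total_mb))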
 
.... snip ...

I'm looking at a new desktop system for my Linux work and plan to
make it a swapless system with 8GB to 16GB of RAM. The intent is
to avoid the I/O activity of swapping.

Why? If you have enough memory no swapping will take place. If
you don't, the swapping avoids crashing your software after it has
been grinding out an answer for the past 8 hours.
 
GT said:
I would suggest running the simple Windows Task Manager. Leave it open for
ages on the Performance tab with the update speed (View menu) set to Low.
See how much memory you 'peak' at. If you don't get anywhere near (maybe
75%) full, then just turn off swapping. But if you run out of RAM, things
WILL fail/crash.

Yes, I suppose they would. I think I might have had that once when I opened
up a window in an application, so that sounds plausible. Typically I have
around half a gig available. I have just put my machine up to what I would
call 'max' usage (4 poker applications, OE, several IE windows and a digital
TV application running) and I have 300 meg free. I would not normally run
with that kind of load, as it is quite a load on the CPU, especially the TV
app.
Anyway, I will keep an eye on things in Task Manager and see how I get on.
It was fine yesterday and has been fine so far today. Generally I would
prefer to run without a pagefile.
 
Sorry, your reply makes me laugh considering you are the one who asked for
advice on the matter and originally had a 10 gig swap file. If you don't
want to take advice that you have asked for, then don't ask for it :)
 
startrap said:
Don't disable the paging file unless you are really, really short of disk
space. If Windows needs the paging file and cannot find it, it may BSOD or
crash. I don't recall where I read it, but there is some fine distinction
between the 'swap' and 'paging' files despite the terms being used
interchangeably by almost everyone (including me!).

If you are so concerned about fragmentation of the paging file, create a
small dedicated partition for it and leave it there. I have an 8 GB
partition only for the page file on disk 1, while a partition on disk 0 has
my XP installation.

Fragmentation is not a concern because my defragger is automatic, and
enabled for partitions that see heavy disk I/O. Defragmentation occurs in
the background, transparently during idle, as required for the OS and 2
other partitions, thus freeing me from wasting time running defrags for the
3 partitions via a schedule or manually. Smart guy, whoever thought of
making a defragger automatic.


Can't say I bother with defragging at all; I never noticed any performance
difference after defragging (it actually seemed slower), so I just don't
bother anymore. I don't like the idea of a background defragger either; I
prefer my computer to be silent when idle, and constant disk activity would
drive me nuts.
 
If you only have 128 meg of RAM then you probably shouldn't be running much,
should you?


True, that's an extreme case, though I've seen plenty of low-end Dells/etc.
that shipped with WinXP, 256MB, and integrated video taking a little of that
away.

As for running much, though, a 128MB system run conservatively (without a
lot of unnecessary services or 3rd-party printer/scanner/etc junk running)
can do well enough at very light multitasking, including web, email and
office, providing the swapfile is enabled and set to a bit more than 256MB.
 