Intel takes back lead in U.S. retail

  • Thread starter: Yousuf Khan
Sorry, looked like a bunch of hearsay to me. Where's the real evidence
that 2G of RAM needs a 2G swap file?

It's posted at MSDN - I hardly think M$ would let it stay if it was BS.
You can probably find the full tech details at MSDN though I'm not sure
about non-registered access to docs.

The bottom line is that with a FAT32 file system, which has no sparse file
structure and therefore no possibility of direct access methods, it would be,
umm, highly inefficient to decide to page out a page at the 2G mark if the
pagefile is currently only, say, 512K - if every page-out required the file
to grow by a few hundred KB, it would be catastrophic... as we saw with Win98
and its growing & shrinking swap file. With NTFS's sparse file feature it
would be possible to do this more efficiently, but I'm not sure that Windows
has a good enough direct access method capability.
 
George Macdonald said:
It's posted at MSDN - I hardly think M$ would let it stay if it was
BS. You can probably find the full tech details at MSDN though I'm
not sure about non-registered access to docs.

The bottom line is that with a FAT32 file system, which has no sparse
file structure and therefore no possibility of direct access methods,
it would be, umm, highly inefficient to decide to page out a page at
the 2G mark if the pagefile is currently only, say, 512K - if every
page-out required the file to grow by a few hundred KB, it would be
catastrophic... as we saw with Win98 and its growing & shrinking swap
file. With NTFS's sparse file feature it would be possible to do this
more efficiently, but I'm not sure that Windows has a good enough
direct access method capability.

Have fun explaining how come this system with 1G of
physical ram has a much smaller swap file and others
with 256M have bigger swap files. All running XP.
 
<snip>
: Have fun explaining how come this system with 1G of
: physical ram has a much smaller swap file and others
: with 256M have bigger swap files. All running XP.

Can we all just please **ignore** the troll?? Geez!

j.
 
Have fun explaining how come this system with 1G of
physical ram has a much smaller swap file and others
with 256M have bigger swap files. All running XP.

From Windows XP Help: "The default size of the virtual memory pagefile
(appropriately named Pagefile.sys) created during installation is 1.5 times
the amount of RAM on your computer." Uhh, some variation is tolerated for
the ridiculously small... and at the upper end. My 1GB system has a
"Recommended" 1534MB, with a "Currently Allocated" of 1535MB.
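The quoted rule of thumb is easy to sanity-check against the figures above. A minimal sketch (the helper name `recommended_pagefile_mb` is invented for illustration; it is not a Windows API):

```python
# Hypothetical helper illustrating the "1.5 x RAM" default quoted
# from the XP Help text above.
def recommended_pagefile_mb(ram_mb: int) -> int:
    """Return 1.5 times physical RAM, the XP Help rule of thumb."""
    return ram_mb * 3 // 2

# 1 GB of RAM -> 1536 MB, close to the 1534/1535 MB figures reported above.
print(recommended_pagefile_mb(1024))   # 1536
print(recommended_pagefile_mb(256))    # 384
```

The reported 1534/1535 MB values suggest Windows rounds against the actual usable RAM rather than the nominal 1024 MB.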
 
From Windows XP Help: "The default size of the virtual memory
pagefile (appropriately named Pagefile.sys) created during
installation is 1.5 times the amount of RAM on your computer."

Just another example of the stupidity of much of the XP help.

Pity that it doesn't actually do that in practice.
Uhh, some variation is tolerated for the ridiculously small...
and at the upper end. My 1GB system has a "Recommended"
1534MB, with a "Currently Allocated" of 1535MB.

Mine is only half that with the same amount of physical ram.
 
The page file won't have the sparse feature enabled; it doesn't make sense for
this file size, while creating additional headaches for testing the VM
subsystem. Remember that VM has a very special I/O access path and
constraints, different from regular application I/O. Another reason for not
using a sparse file is that a crash dump cannot be written to a sparse page
file - everything should be preallocated.
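The preallocation point can be illustrated with an ordinary sparse file. A minimal sketch, assuming a filesystem with sparse-file support (NTFS, ext4, etc.): a hole created by seeking past end-of-file consumes little or no disk space, which is exactly what a preallocated pagefile deliberately avoids.

```python
import os
import tempfile

# Create a file with a ~16 MB hole: seek past the end, then write one byte.
# On sparse-capable filesystems the hole consumes no disk blocks.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 16 * 1024 * 1024, os.SEEK_SET)
    os.write(fd, b"\0")
    st = os.stat(path)
    print("apparent size:", st.st_size)     # 16 MB + 1 byte
    print("blocks on disk:", st.st_blocks)  # typically far fewer than the size implies
finally:
    os.close(fd)
    os.remove(path)
```

A crash dump written at crash time cannot afford to allocate those missing blocks on the fly, hence the preallocation requirement.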
 
Rod said:
Just another example of the stupidity of much of the XP help.

Yeah, I still don't buy it. The need for such a large swap file, that
is. I think there's some obsolete "rules of thumb" being
bandied-about.

I still have yet to see anything solid that says you need a swap file
as large as your main memory with a modern OS and file system (NTFS).

Certainly, there are people with 2G RAM on their XP (32-bit) machines,
and it's working.
 
The page file won't have the sparse feature enabled; it doesn't make sense
for this file size, while creating additional headaches for testing the VM
subsystem.

Yes, well "file size" was one of the issues. IOW if you have lots of
physical memory the pagefile should not have to get to a huge size... BUT
it could still be paging out pages at the 2GB address mark. Many progs
claim a huge virtual space but use it sparsely depending on current
instance dataset size & characteristics.
Remember that VM has a very special I/O access path and
constraints, different from regular application I/O. Another reason for not
using a sparse file is that a crash dump cannot be written to a sparse page
file - everything should be preallocated.

Again a Windows characteristic.
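The "claim a huge virtual space but use it sparsely" point above can be seen with an anonymous memory mapping. A minimal sketch (the 256 MB figure is arbitrary): reserving address space is cheap, and only the pages actually touched need physical or pagefile backing.

```python
import mmap

# Claim 256 MB of virtual address space with an anonymous mapping.
# Only pages actually touched need real backing, which is why a
# program's virtual size can dwarf its real footprint.
size = 256 * 1024 * 1024
m = mmap.mmap(-1, size)
m[0] = 1            # touch only the first page...
m[size - 1] = 1     # ...and the last; the rest stays unbacked
claimed = len(m)
print(claimed)      # 268435456
m.close()
```

This is the same trick large applications use, and it is why address-space usage and pagefile demand are largely independent quantities.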
 
chrisv said:
Rod Speed wrote
Yeah, I still don't buy it. The need for such a large swap file, that
is. I think there's some obsolete "rules of thumb" being bandied-about.
I still have yet to see anything solid that says you need a swap file as
large as your main memory with a modern OS and file system (NTFS).

Yeah, and it can't fly logically either. You can certainly see why a very
crudely implemented swap file might need to be as big as the virtual
memory the OS supports, but clearly the Win swap file can't be that
crude if it uses a swap file of 1.5 times the physical memory with
the smaller physical memory like 256M. And since it can obviously
handle a virtual memory much bigger than the physical RAM in that
case, why should it need a bigger swap file with, say, 1G of physical RAM?
Certainly, there are people with 2G RAM on
their XP (32-bit) machines, and it's working.

And plenty who don't have a swap file of 1.5 times the physical RAM with
1G of physical RAM working fine too, like the system I am posting from.
 
Yes, well "file size" was one of the issues. IOW if you have lots of
physical memory the pagefile should not have to get to a huge size...
BUT it could still be paging out pages at the 2GB address mark.
Many progs claim a huge virtual space but use it sparsely
depending on current instance dataset size & characteristics.

And Win must too, even if it does use a swap file that's 1.5 times the
physical RAM with the smaller physical RAM, say 256M of physical RAM.

And when that obviously works fine, why should
it not still work fine with 1G of physical RAM too?
Again a Windows characteristic.

But not necessarily relevant for normal use of a Win system.

The worst you lose is the ability to do a crash dump.

 
And Win must too, even if it does use a swap file that's 1.5 times the
physical RAM with the smaller physical RAM, say 256M of physical RAM.

And when that obviously works fine, why should
it not still work fine with 1G of physical RAM too?

I think the point isn't that it wouldn't work. Just that, apart from
the dump issue, in some situations it's possible for Windows to need
to page out more memory than the swap file can hold (if swap < physical
memory). When it cannot do so - for those of us with, say, 512MB of
swap vs 1GB of physical memory - it will raise an "Out of memory" error
for the application trying to get more memory.
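The "out of memory" scenario above follows from commit-limit arithmetic: Windows caps total committed memory at roughly physical RAM plus pagefile. A simplified model (the function `can_commit` is hypothetical and ignores kernel overhead):

```python
# Windows refuses new allocations once total *committed* memory would
# exceed (physical RAM + pagefile) -- the commit limit. A rough model:
def can_commit(request_mb, committed_mb, ram_mb, pagefile_mb):
    return committed_mb + request_mb <= ram_mb + pagefile_mb

# 1 GB RAM with a 512 MB pagefile: a ~1.3 GB total commit still fits...
print(can_commit(300, 1024, 1024, 512))   # True
# ...but pushing past the 1.5 GB limit fails with "Out of memory".
print(can_commit(600, 1024, 1024, 512))   # False
```

On this model a smaller pagefile lowers the ceiling but changes nothing else, which matches both sides of the argument here.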
 
I think the point isn't that it wouldn't work. Just that, apart from
the dump issue, in some situations it's possible for Windows to need
to page out more memory than the swap file can hold (if swap < physical
memory). When it cannot do so - for those of us with, say, 512MB of
swap vs 1GB of physical memory - it will raise an "Out of memory" error
for the application trying to get more memory.

Still can't see why, going from a system with 256M of physical
RAM to one with 1G of physical RAM with the same swap file,
that should happen any more often, except when it needs to do a dump.
 
Yeah, I still don't buy it. The need for such a large swap file, that
is. I think there's some obsolete "rules of thumb" being
bandied-about.

I still have yet to see anything solid that says you need a swap file
as large as your main memory with a modern OS and file system (NTFS).

Apparently WinXP takes no advantage of NTFS's sparse file for the pagefile;
unless you want to be growing a file and its structures by up to 2GB or so
on a page-out, there's no other way than to pre-allocate. Do you really
want that "growing" to happen while a real-time process is doing its thing?
Have you forgotten Win98?
Certainly, there are people with 2G RAM on their XP (32-bit) machines,
and it's working.

And what do you think their default pagefile size is? Mine is "1535MB" for
1GB physical memory; in the office I have a system with 2GB physical memory
where it's 2GB.
 
Certainly, there are people with 2G RAM on their XP (32-bit) machines,
and it's working.

Sorry, I meant to ask this earlier:

What's the ill effect of running with 2G on a 32 bit machine? It
sounds like there are some limitations that some folks believe are
serious, but does it actually affect the operation, speed,
reliability, etc?

I recall earlier Windows (98?) would actually slow down if you put too
much RAM on the motherboard.
 
Sorry, I meant to ask this earlier:

What's the ill effect of running with 2G on a 32 bit machine? It
sounds like there are some limitations that some folks believe are
serious, but does it actually affect the operation, speed,
reliability, etc?

I recall earlier Windows (98?) would actually slow down if you put too
much RAM on the motherboard.

The ill effects, and what started this whole discussion, is that most
current versions of Windows are 32-bit operating systems. 32-bit OSes
are limited to a maximum addressable memory of 4GB, but due to various
limitations you can only use about 2GB or 3GB of that under normal
circumstances. There are some ugly hacks around this (e.g. Intel's PAE)
that can allow you to use more memory, but they are just that: ugly
hacks.

Exactly where and when you run into these issues is what we're
debating. It certainly is possible to use 2GB of physical memory
under (32-bit) Windows XP, though in my mind you should definitely be
considering a 64-bit processor (basically ALL new processors are
64-bit) and a 64-bit OS at that stage. Others disagree.

Fortunately this discussion should become somewhat moot shortly.
Windows Vista will be 64-bit right from the get-go and Linux and other
*nix OSes have widely supported 64-bit x86 for a couple of years now.
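The 4GB ceiling described above is plain pointer arithmetic, and the 2GB/3GB figures come from the usual 32-bit Windows user/kernel address split (3GB with the /3GB boot switch). A quick check:

```python
# A 32-bit pointer can address 2**32 bytes = 4 GiB in total.
GiB = 2 ** 30
total = 2 ** 32
print(total // GiB)            # 4

# Under 32-bit Windows the default split leaves user processes 2 GiB,
# with the kernel mapped into the other 2 GiB; the /3GB boot switch
# shifts this to a 3 GiB / 1 GiB split.
user_default = total // 2
user_large = 3 * GiB
print(user_default // GiB, user_large // GiB)   # 2 3
```

So even with 4GB of physical RAM installed, a single 32-bit process sees at most 2-3 GiB of usable address space.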
 
<snip>

: Fortunately this discussion should become somewhat moot shortly.
: Windows Vista will be 64-bit right from the get-go and Linux and other
: *nix OSes have widely supported 64-bit x86 for a couple of years now.
:
Tony, I've always enjoyed and rather admired your posts. But with all due
respect, you (and others) talk about Windows Vista as if it were the second
coming of christ, for chrissakes (...pun intended). Have all of you
forgotten (or ignored) the fact that with Vista's newfangled DRM tied into
hardware, that we'll be pretty much at the mercy of the likes of RIAA, MPAA,
etc., etc.? Hell, if I understand it correctly, I may not even be able to
rip my purchased and fully-owned CDs to MP3 for use in my car stereo, as
I'm able to do now under Win2k and/or XP. I mean, geez!!!!

J.
 
talk about Windows Vista as if it were the second
coming of christ, for chrissakes (...pun intended). Have all of you
forgotten (or ignored) the fact that with Vista's newfangled DRM tied into
hardware, that we'll be pretty much at the mercy of the likes of RIAA, MPAA,
etc., etc.? Hell, if I understand it correctly, I may not even be able to
rip my purchased and fully-owned CDs to MP3 for use in my car stereo, as
I'm able to do now under Win2k and/or XP. I mean, geez!!!!

Most of the world either doesn't care or doesn't understand enough
to care, so most of them will eventually end up on Vista one way or
another. Linux, unfortunately, is at the moment still a few steps away
from being a sufficiently polished replacement for Windows. Almost
there but... :/
 
Sorry, I meant to ask this earlier:

What's the ill effect of running with 2G on a 32 bit machine? It
sounds like there are some limitations that some folks believe are
serious, but does it actually affect the operation, speed,
reliability, etc?

I recall earlier Windows (98?) would actually slow down if you put too
much RAM on the motherboard.

There's no ill effect from 2GB of memory... or more. The discussion is
about what size pagefile Windows pre-allocates by default.
 
<snip>

: Fortunately this discussion should become somewhat moot shortly.
: Windows Vista will be 64-bit right from the get-go and Linux and other
: *nix OSes have widely supported 64-bit x86 for a couple of years now.
:
Tony, I've always enjoyed and rather admired your posts. But with all due
respect, you (and others) talk about Windows Vista as if it were the second
coming of christ, for chrissakes (...pun intended). Have all of you
forgotten (or ignored) the fact that with Vista's newfangled DRM tied into
hardware, that we'll be pretty much at the mercy of the likes of RIAA, MPAA,
etc., etc.? Hell, if I understand it correctly, I may not even be able to
rip my purchased and fully-owned CDs to MP3 for use in my car stereo, as
I'm able to do now under Win2k and/or XP. I mean, geez!!!!

I haven't forgotten at all and I have no plans to upgrade to Vista.
However, Microsoft has a history of discontinuing sales of its old OS
VERY shortly after a new one arrives, particularly for consumer
machines. Official word from Microsoft is that XP will be available
for 12 months after Vista arrives, though I suspect that by this time
next year they will be strongly discouraging the sale of anything
other than Vista.

For most of us, the choice going forward is going to be to upgrade to
Vista, try to hold out on Win2K or XP (support for WinXP Pro at least
will be provided for some years to come) or switch to Linux or some
other alternative OS.
 