How much faster?


Ron Joiner

I have an A8N-SLI (A64 3500+) with 1 gig of Corsair RAM (2 sticks). Will
I get any speed benefit from adding another gig of RAM in games, digital
image processing, etc.?

Ron
 
Ron Joiner said:
I have an A8N-SLI (A64 3500+) with 1 gig of Corsair RAM (2 sticks). Will I
get any speed benefit from adding another gig of RAM in games, digital image
processing, etc.?

Ron

Maybe for digital image processing; it depends on the size of the images.
As a rule of thumb, if your HDD is thrashing and your page file usage goes up
substantially when you are using your PC, you will benefit from more RAM.
 
Ron Joiner said:
I have an A8N-SLI (A64 3500+) with 1 gig of Corsair RAM (2 sticks). Will
I get any speed benefit from adding another gig of RAM in games, digital
image processing, etc.?

Ron

It all depends on the programs/plugins you use and how you use them.
If you just do some image editing and enhancement, you're not likely
to see any benefit from more RAM. If you work with exceptionally large
scans (100 MB+) and memory-consuming plugins, then more RAM
is always better. Same with digital audio: if you use software samplers
with large samples, you can never have too much RAM.
I don't think you'll get much benefit in games with more than 1024 MB.

Nickeldome
 
Ron Joiner said:
I have an A8N-SLI (A64 3500+) with 1 gig of Corsair RAM (2 sticks). Will
I get any speed benefit from adding another gig of RAM in games, digital
image processing, etc.?

Ron

For games, no. The problem is that running four sticks will typically
require setting the command rate to 2T (see the AnandTech reviews of S939
boards for more info). You could manually set command rate to 1T and
run the memory at DDR400 with just the two sticks in dual channel
mode, and that would have 20% more memory bandwidth than your proposed
four stick configuration.

The thing is, most games will be comfortable with 1GB. At least,
I haven't seen any suggestions that gaming needs more than that.

If, say, we discuss a hypothetical Photoshop situation, where you
process monstrous images all the time, then even though the 2GB
runs a bit slower, there might be an advantage to using the 2GB.
If Photoshop has to use its scratch disks all the time, it can
be really slow. Keeping the undo buffers in RAM would be a lot
faster. If, on the other hand, your Photoshop adventures
are more modest in terms of bitmap size, then again, the extra
RAM would only be a liability.

Knowing the potential for the extra memory to slow the system
down, the next step is to profile what percentage of the RAM
is currently being used with your existing usage pattern.
If you were hitting swap consistently, you would already have
upgraded by now :-)

HTH,
Paul
 
You would speed up working with programs like Photoshop if you are working
with Large digital files.
 
Paul said:
For games, no. The problem is that running four sticks will typically
require setting the command rate to 2T (see the AnandTech reviews of S939
boards for more info). You could manually set command rate to 1T and
run the memory at DDR400 with just the two sticks in dual channel
mode, and that would have 20% more memory bandwidth than your proposed
four stick configuration.

The thing is, most games will be comfortable with 1GB. At least,
I haven't seen any suggestions that gaming needs more than that.

If, say, we discuss a hypothetical Photoshop situation, where you
process monstrous images all the time, then even though the 2GB
runs a bit slower, there might be an advantage to using the 2GB.
If Photoshop has to use its scratch disks all the time, it can
be really slow. Keeping the undo buffers in RAM would be a lot
faster. If, on the other hand, your Photoshop adventures
are more modest in terms of bitmap size, then again, the extra
RAM would only be a liability.

Knowing the potential for the extra memory to slow the system
down, the next step is to profile what percentage of the RAM
is currently being used with your existing usage pattern.
If you were hitting swap consistently, you would already have
upgraded by now :-)

HTH,
Paul
More ram might be slower - interesting! I do more gaming than image
processing but even when image processing I don't get too much HDD
thrashing.

Ron

--
And it really doesn't matter if
I'm wrong I'm right
Where I belong I'm right
Where I belong.

Lennon & McCartney
 
Ron,

Fire up Task Manager (CTRL ALT DEL, Task Manager, or right click on the task
bar and select Task Manager).

It will show you a lot of info about memory usage and other things. On the
Performance tab, there are many indicative counters as to Total Physical,
Available, System Cache, Total Commit Charge (memory in use), Limit (max
usable RAM), and Peak memory use.

If Available gets below, say, 100MB on a 1GB system, it is "busy" RAM-wise. If
Available is below, say, 25MB, it is very busy; if it gets near 4MB, things slow
down hugely as applications get their memory quotas trimmed back, they may get
rolled out, page file use escalates, etc., and the system runs like a
three-legged dog.

If, say, available RAM never goes below 50%, then the system is lightly
loaded. Adding more RAM will be money down the dunny.
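Those rules of thumb can be sketched as a small function. This is just an illustration: the cutoffs (100MB, 25MB, 4MB, 50%) come straight from the paragraphs above, but the function name and the "moderate" middle band are my own invention.

```python
def memory_pressure(available_mb, total_mb=1024):
    """Classify how 'busy' a system is, RAM-wise, from Available memory."""
    if available_mb <= 4:
        return "critical"        # apps get trimmed/rolled out, heavy paging
    if available_mb < 25:
        return "very busy"
    if available_mb < 100:
        return "busy"
    if available_mb >= total_mb * 0.5:
        return "lightly loaded"  # more RAM would be money down the dunny
    return "moderate"

print(memory_pressure(600))  # lightly loaded
print(memory_pressure(80))   # busy
```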

Another useful utility under Windows 2000, NT, and XP is the Profiler. This
is by default in the Administrative Tools program group. Clicking the + icon
enables you to add meters to the display - there are many of them, and there
is an Explain button on the Add New Counter form. By default it shows Page
Faults/second, CPU use, and memory use.

Lightly loaded systems do go faster with more ram, but the effect wears very
thin at 1GB.
 
More ram might be slower - interesting! I do more gaming than image
processing but even when image processing I don't get too much HDD
thrashing.

Ron

Yeah, I have read *many* threads about running more than 2 modules and/or 1
gig making things slower. There isn't much real data out there that I know
of, but I haven't checked Anand or Tom's about it.

Part of the issue is that, as you increase the amount of physical RAM on the
mainboard, your Windows page file grows proportionately, unless you manually
change it.

Another thing that CAN speed things up is to put the page file on a hard
drive that is separate from the boot drive (not just a logical partition on
the same disk). That way accesses to each can run simultaneously at times,
which reduces loading on the Windows drive. A small hard drive will do, just
something with the same spin rate. An additional logical partition can be
added to it so that the spare space is used for something rather than wasted.

Hope this helps a little bit. BTW, 2 modules of 1GB are better than 4
modules of 512MB performance-wise as well. This I get from following
posts for a few years. If I had the money, I'd have tested it myself.
 
If you want to tune up Windows, then set about it in a scientific way.
System tuning requires knowledge and hard performance statistics, with a set
objective.
Starting with the assumption that the swap file is the problem will most
likely lead you down an expensive and pointless path.
 
Hope this helps a little bit. BTW, 2 modules of 1GB are better than 4
modules of 512MB performance-wise as well. This I get from following
posts for a few years. If I had the money, I'd have tested it myself.

Actually, it is pretty hard to make that statement in all cases.
Try as I might, I cannot find really good 1GB modules. I think
the ones on corsairmicro.com are the best ones I've seen so far.
They are only PC3200, which, if you are an overclocker, is a bit
of a liability. So deciding what to do is still a hard choice
if you want 2GB total. If you can live with 1GB, you've got a world
of choices available to you (at least seven brands of CAS2 stuff,
and clock speeds to "infinity and beyond" for your 2x512MB).

If I had to build a computer with 2GB in it, I'd be spending
several hours reading S939 motherboard reviews, to get the
data needed to make the decision.

The last memory I bought, was 4x512MB, but the reason was
granularity. It allows me to move pairs of DIMMs between
dual channel motherboards, so if, on a given day, I need
2GB in one machine, it is easy to arrange. Otherwise, two
computers have 1GB each, dual channel.

Paul
 
Mercury said:
Ron,

Fire up Task Manager (CTRL ALT DEL, Task Manager, or right click on the task
bar and select Task Manager).

It will show you a lot of info about memory usage and other things. On the
Performance tab, there are many indicative counters as to Total Physical,
Available, System Cache, Total Commit Charge (memory in use), Limit (max
usable RAM), and Peak memory use.

Just to clarify:

Peak Commit Charge is the interesting one for deciding whether you could
benefit from more RAM or not.
If Available gets below, say, 100MB on a 1GB system, it is "busy" RAM-wise. If
Available is below, say, 25MB, it is very busy; if it gets near 4MB, things slow
down hugely as applications get their memory quotas trimmed back, they may get
rolled out, page file use escalates, etc., and the system runs like a
three-legged dog.

If, say, available RAM never goes below 50%, then the system is lightly
loaded. Adding more RAM will be money down the dunny.

I've got a GB of RAM, and I've just fired up Eclipse with two large
projects: Cocoon 2.1.x and 2.2 Trunk. I compiled both simultaneously (one
of them outside of Eclipse) and my peak commit charge was about 850MB -
so although the page file was at nearly 600MB, more RAM will probably
not benefit me.
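That decision rule can be written down as a one-liner. The 850MB/1GB numbers are from my measurement above; the helper name and the 100MB headroom figure are just illustrative assumptions, not a standard formula.

```python
def more_ram_likely_to_help(peak_commit_mb, physical_mb, headroom_mb=100):
    # If the most memory ever committed fits in physical RAM with some
    # headroom to spare, the page file traffic is mostly idle pages being
    # parked, not active thrashing - so extra RAM buys little.
    return peak_commit_mb + headroom_mb > physical_mb

print(more_ram_likely_to_help(850, 1024))  # False - my case above
```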

Ben
 
Ben said:
Just to clarify:

Peak Commit Charge is the interesting one for deciding whether you could
benefit from more RAM or not.



I've got a GB of RAM, and I've just fired up Eclipse with two large
projects: Cocoon 2.1.x and 2.2 Trunk. I compiled both simultaneously (one
of them outside of Eclipse) and my peak commit charge was about 850MB -
so although the page file was at nearly 600MB, more RAM will probably
not benefit me.

Ben
I don't really want to muddle the debate but would a 64 bit windows OS
manage memory in a better way? Would more ram be justifiable with a 64
bit OS?

Ron
 
Ron said:
I don't really want to muddle the debate but would a 64 bit windows OS
manage memory in a better way?

Not purely by virtue of being 64 bit, no.

A 32-bit address space gives you 4GB of RAM. There are ways that the OS
can extend this space further with various techniques, but any given
program is limited to 4GB.

There are other issues that can prevent you from seeing all of 4GB, such
as memory-mapped IO (remember the memory hole at 64MB for ISA?). So
anything from 3-4GB is a practical limit.

Current 64-bit x86 CPUs (AMD64) give you a physical address space of
40 bits (1 terabyte) and a virtual address space of 48 bits (256TB), so
a 64-bit OS can take advantage of this.
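The address-space figures are just powers of two, which is easy to check:

```python
GiB = 2 ** 30
TiB = 2 ** 40

print(2 ** 32 // GiB)  # 4    -> 32-bit addressing: 4GB
print(2 ** 40 // TiB)  # 1    -> 40-bit physical (AMD64): 1TB
print(2 ** 48 // TiB)  # 256  -> 48-bit virtual (AMD64): 256TB
```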
Would more ram be justifiable with a 64 bit OS?

Depends what you mean by justifiable. Regarding 64-bit desktop setups,
you're currently limited to 4GB, realistically. Have you seen the price of
2GB modules, or a board that supports more than 4 modules? Have you seen
an application that requires more than 4GB?

So let's assume we're talking about a server environment, with 4 DIMMs
per CPU and, say, 4 CPUs - now we're talking. But that's not really got
much to do with the OS.

Memory space is currently the major driving force for 64-bit - almost all
applications do not need to deal with numbers that are larger than 32
bits, so making the data bus wider is, for the most part, irrelevant. The
reason that the AMD64 performs better than a Barton is architectural
improvements, not bus width. Improvements such as sticking the memory
controller on the CPU to reduce latency, and tweaking the pipeline
to keep it more full, more often.

It's widening the address bus that makes 64bit a requirement, and right
now, there is no reason for more than 4GB on the desktop.

Things might be different next week, but for now 1GB is almost always
enough - my peak commit charge is now 920MB - perhaps it's closer than
we think. I'm running pretty hefty development stuff, Cocoon is a Java
based XML processing application, and memory requirements are fairly
hefty, so I'm not a typical desktop user and even I only use 1GB.
Typically the server running this stuff would be dedicated to it, and
not running it, whilst compiling it, whilst doing everything else.

I think the answer was no, sorry, got a bit carried away. :-p

Ben
 
Ron,

Fire up Task Manager (CTRL ALT DEL, Task Manager, or right click on the task
bar and select Task Manager).

It will show you a lot of info about memory usage and other things. On the
Performance tab, there are many indicative counters as to Total Physical,
Available, System Cache, Total Commit Charge (memory in use), Limit (max
usable RAM), and Peak memory use.

If Available gets below, say, 100MB on a 1GB system, it is "busy" RAM-wise. If
Available is below, say, 25MB, it is very busy; if it gets near 4MB, things slow
down hugely as applications get their memory quotas trimmed back, they may get
rolled out, page file use escalates, etc., and the system runs like a
three-legged dog.

If, say, available RAM never goes below 50%, then the system is lightly
loaded. Adding more RAM will be money down the dunny.

Another useful utility under windows 2000, NT, and XP is the Profiler.

Profiler? Are you talking about the Performance applet? If not,
where do I find the "Profiler?"
This
is by default in the Administrative Tools program group. Clicking the + icon
enables you to add meters to the display - there are many of them, and there
is an Explain button on the Add New Counter form. By default it shows Page
Faults/second, CPU use, and memory use.

Lightly loaded systems do go faster with more ram, but the effect wears very
thin at 1GB.

Ron
 
Ben said:
Not purely by virtue of being 64 bit, no.

A 32-bit address space gives you 4GB of RAM. There are ways that the OS
can extend this space further with various techniques, but any given
program is limited to 4GB.

There are other issues that can prevent you from seeing all of 4GB, such
as memory-mapped IO (remember the memory hole at 64MB for ISA?). So
anything from 3-4GB is a practical limit.

Current 64-bit x86 CPUs (AMD64) give you a physical address space of
40 bits (1 terabyte) and a virtual address space of 48 bits (256TB), so
a 64-bit OS can take advantage of this.


Depends what you mean by justifiable. Regarding 64-bit desktop setups,
you're currently limited to 4GB, realistically. Have you seen the price of
2GB modules, or a board that supports more than 4 modules? Have you seen
an application that requires more than 4GB?

So let's assume we're talking about a server environment, with 4 DIMMs
per CPU and, say, 4 CPUs - now we're talking. But that's not really got
much to do with the OS.

Memory space is currently the major driving force for 64-bit - almost all
applications do not need to deal with numbers that are larger than 32
bits, so making the data bus wider is, for the most part, irrelevant. The
reason that the AMD64 performs better than a Barton is architectural
improvements, not bus width. Improvements such as sticking the memory
controller on the CPU to reduce latency, and tweaking the pipeline
to keep it more full, more often.

It's widening the address bus that makes 64bit a requirement, and right
now, there is no reason for more than 4GB on the desktop.

Things might be different next week, but for now 1GB is almost always
enough - my peak commit charge is now 920MB - perhaps it's closer than
we think. I'm running pretty hefty development stuff, Cocoon is a Java
based XML processing application, and memory requirements are fairly
hefty, so I'm not a typical desktop user and even I only use 1GB.
Typically the server running this stuff would be dedicated to it, and
not running it, whilst compiling it, whilst doing everything else.

I think the answer was no, sorry, got a bit carried away. :-p

Ben
Thanks Ben. Very interesting and certainly food for thought.

--
And it really doesn't matter if
I'm wrong I'm right
Where I belong I'm right
Where I belong.

Lennon & McCartney
 