There is a fairly convincing line of argument that says these products do
not work. See:
http://www.techweb.com/winmag/columns/explorer/2000/12.htm
I can tell when RamIdle is running and when it is not. There is a
stark difference. It handles memory management in a way that is
superior to Windows' own.
A few contentions:
"But when you load more apps and data than your RAM can hold, Windows
makes judgments about what’s important and what’s not -- and what’s
likely to be needed again soon, and what’s not. Windows then writes
some of the less-vital contents of RAM to disk in the virtual memory
area. When something’s needed from the virtual memory area, Windows
loads it back into RAM, making room (if need be) by swapping something
else out of RAM onto the disk. It works pretty well, but (of course)
isn’t perfect. That’s where “memory optimizers” come in."
Exactly! Windows is a terrible judge of what gets swapped out and
when. That is one reason to use a memory optimizer.
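To make that judgment call concrete, here is a rough C sketch (all
sizes are made up for illustration) that commits more memory than a
typical Win98 box had, which is exactly the point where Windows has to
start deciding what stays in RAM and what gets written to the swap
file:

    /* Minimal sketch, hypothetical sizes: allocate more than physical
     * RAM and touch every page, so the OS must pick victims to page
     * out to disk. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK  (16 * 1024 * 1024)  /* 16 MB per allocation (assumed) */
    #define CHUNKS 16                  /* 256 MB total -- more than a
                                          typical Win98 machine had     */

    int main(void)
    {
        char *blocks[CHUNKS];
        int i;

        for (i = 0; i < CHUNKS; i++) {
            blocks[i] = malloc(CHUNK);
            if (blocks[i] == NULL) {
                printf("allocation %d failed\n", i);
                return 1;
            }
            /* Touch each 4 KB page so it is really backed by RAM or swap. */
            {
                size_t j;
                for (j = 0; j < CHUNK; j += 4096)
                    blocks[i][j] = 1;
            }
            printf("chunk %d committed\n", i);
        }
        /* By now Windows has had to decide which pages stay in RAM and
         * which go to the swap file -- the judgment the quote describes. */
        for (i = 0; i < CHUNKS; i++)
            free(blocks[i]);
        return 0;
    }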
"Memory optimizers operate on your pool of general memory. They have
nothing whatsoever to do with the fixed-size User and GDI memory areas
that are the real problem with resource memory leaks."
"After an application crashes, for example, it’s possible for some
general memory to be “orphaned” or marked as still in use, when it
actually is not. And some apps do leak small amounts of general memory
-- that is, they don’t properly clean up after themselves. In these
cases, and in instances where Win98 can’t recover the leaked general
memory on its own, running a memory optimizer may be able to recover
the leaked memory."
A memory leak occurs when a program dynamically allocates memory and
then fails to release it. The dynamically allocated block lives in main
memory, and it can be recovered. That is another reason to use a memory
optimizer.
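For anyone who hasn't seen one, here is a contrived C sketch of that
kind of leak (the function and the numbers are invented purely for
illustration):

    /* Hypothetical example of a leak: the buffer is allocated but the
     * function returns without freeing it, so the pointer is lost. */
    #include <stdlib.h>
    #include <string.h>

    static void log_message(const char *msg)
    {
        char *copy = malloc(strlen(msg) + 1);   /* heap (general) memory */
        if (copy == NULL)
            return;
        strcpy(copy, msg);
        /* ... use copy ... */
        /* Missing free(copy): every call leaks strlen(msg)+1 bytes. */
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 100000; i++)
            log_message("event");   /* leaked memory grows with each call */
        return 0;
    }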
"Now let’s look at loading an app from defragged RAM, bearing in mind
the speed difference between RAM and hard drives: If defragmented RAM
lets you avoid, say, 100 memory-access operations at 60ns each, you’ve
saved 6000ns, or -- gosh! -- a whole six millionths of a second. To
say that is too small to notice is beyond understatement. It's so
small, it's irrelevant."
Some applications use data structures or write operations that need x
bytes of contiguous memory cells. That is the real reason defragmenting
RAM is worth doing, not the "access time" argument the author makes
above. When you start an app that relies on one of these structures or
methods, the contiguous space is already there and everything flows
straight into memory, without the pause for disk swapping that would
otherwise occur without the memory utility.
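As a rough illustration of the contiguity point (the sizes and data
structures here are hypothetical), compare a single large array, which
must land in one unbroken block, with a linked list of the same data,
which can be scattered across whatever free fragments exist:

    #include <stdio.h>
    #include <stdlib.h>

    struct node {                 /* a list node can live anywhere */
        double value;
        struct node *next;
    };

    int main(void)
    {
        struct node *head = NULL;
        long i;

        /* A single array is one contiguous request: all 8 MB must be
         * found in one unbroken run of address space. */
        double *samples = malloc(1024 * 1024 * sizeof(double));
        if (samples == NULL)
            printf("no single contiguous 8 MB block available\n");
        else
            free(samples);

        /* A linked list of the same data makes a million small requests,
         * each of which can be satisfied from scattered free space. */
        for (i = 0; i < 1024 * 1024; i++) {
            struct node *n = malloc(sizeof *n);
            if (n == NULL)
                break;
            n->value = 0.0;
            n->next = head;
            head = n;
        }
        while (head != NULL) {
            struct node *next = head->next;
            free(head);
            head = next;
        }
        return 0;
    }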
I realize Fred has a huge following, but I don't agree with his article
at all.