I was thinking the AMD 3200+ CPU would be a huge difference over the 2500+
CPU because the 3200+ says it uses a 400MHz bus while the 2500+ I believe
uses a 333MHz bus. How much difference does bus speed really make, and does
it matter what the CPU bus speed is and what the RAM speed is? Can I use a
2500+ CPU with 333MHz bus with a DDR PC3200 400MHz RAM? I was looking at a
motherboard that says it supports both 333MHz and 400MHz, though I'm not
sure if that applies to the CPU, the RAM, or both.
Please keep in mind that apart from obviously factual statements, the
following is merely my opinion, based both on my experience and on reading.
I'm planning to build a machine myself, and I've been doing research.
Bus speed, memory speed, and CPU speed all matter. There's no point having a
fast CPU if the bus is slow, and vice versa. On a high-speed machine, the RAM
must be high quality and matched, and the BIOS on your motherboard must be
able to "tune" the bus to the RAM. (AFAIK, all recent high-speed boards let
you do this.)
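To put rough numbers on that bus/RAM matching: DDR moves data on both clock
edges over a 64-bit (8-byte) bus, so peak bandwidth is easy to estimate. A
minimal Python sketch — the clock figures are the nominal ones for these
parts, and this is theoretical peak, not measured throughput:

```python
# Rough memory-bandwidth arithmetic for DDR SDRAM (nominal figures,
# not benchmarks). DDR transfers on both clock edges, so a 166 MHz
# clock gives an effective 333 MT/s; the data bus is 8 bytes wide.

def ddr_bandwidth_mb_s(clock_mhz, bus_bytes=8):
    """Peak theoretical bandwidth in MB/s for a DDR bus."""
    return clock_mhz * 2 * bus_bytes  # double data rate x bus width

fsb_333 = ddr_bandwidth_mb_s(166.5)  # Athlon XP 2500+ front-side bus
pc3200  = ddr_bandwidth_mb_s(200)    # DDR400 / PC3200 module

print(fsb_333)  # 2664.0 MB/s -- roughly PC2700's rating
print(pc3200)   # 3200 MB/s -- the "3200" in PC3200
```

So PC3200 RAM will work with a 333MHz-bus CPU, but the front-side bus can
only ask for data at its own rate; the extra headroom of the faster RAM
mostly goes unused unless you overclock the bus.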
Does the size of the cache make a big difference, or does it only matter for
intense action gaming?
The cache makes a difference to disk access times as well as to gaming. As
for the size and number of caches, the general rule is that larger is better
up to a point; beyond that, larger can actually slow things down. I don't
know what advantage a third level of cache has.
Disk drives also have on-board caches, and the general rule is the larger the
better, but you can trust the manufacturer on this: they want to sell you a
HD that edges out the competition. However, other drive characteristics also
affect total system speed: the spindle speed of the drive, and the data rate
are the two most important. AFAIK, only SATA drives have data rates that take
advantage of high bus speeds, so if you are using plain IDE/ATAPI drives, go
for the highest spindle speed you can afford if you want a higher-performance
system. SATA requires a different controller than IDE/ATAPI and boards with
SATA controllers are still relatively rare and pricey.
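One reason spindle speed matters so much on plain IDE drives: on average the
head has to wait half a revolution for the data to come around, so rotational
latency falls directly as RPM rises. A quick back-of-the-envelope Python
calculation (illustrative common spindle speeds, not measurements of any
particular drive):

```python
# Average rotational latency is half a revolution: the sector you want
# is, on average, halfway around the platter when the head arrives.

def avg_rotational_latency_ms(rpm):
    """Average rotational latency in milliseconds (half a revolution)."""
    return (60_000 / rpm) / 2  # 60,000 ms per minute / RPM / 2

print(avg_rotational_latency_ms(5400))   # ~5.56 ms
print(avg_rotational_latency_ms(7200))   # ~4.17 ms
print(avg_rotational_latency_ms(10000))  # 3.0 ms
```

That per-access wait dwarfs the difference between interface speeds for most
desktop workloads, which is why spindle speed is the number to watch on
IDE/ATAPI drives.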
I like to stick with AMD, so I'm just concerned with the L1/L2
cache. Intel likes to confuse things more with their L3 cache. I think
Intel designed it that way to confuse people into assuming theirs is better,
because they couldn't keep up with AMD for the fastest CPU at a given price.
AMD has always blown Intel out of the water on CPUs with similar
pricing. As one review put it: "The Athlon 64 3200+ competes head-to-head
with the standard 3.2GHz Pentium 4, but does fall short in a few
applications - but it's priced like a 3.0GHz P4."
I suspect that reviewer didn't take into account that a 64-bit CPU will slow
down when running 32-bit software, and does even worse running 16-bit
software (and a lot of current software still has 16-bit modules in it,
including (so I'm told) Windows 2000). The Pentiums are still 32-bit, so
they have a _software_ advantage.
Anyhow, the CPU speed is in many ways a misleading number. It's only one of
several factors that affect system speed. A 10-20% difference in CPU speed
will not be seen by the vast majority of users. In fact, if all you want to
do is web access, word processing, and a little scanning and photo
retouching, you won't find the speed difference between 1 GHz, 2GHz, and 3GHz
to amount to very much. It's _your_ speed that will govern the rate at which
you work, and your speed is much, much slower than the machine. All a fast
CPU means is that the machine will be waiting for you more often and for
longer. Only extreme gamers will find the differences worth the extra money,
but for them even a 2-3% edge is significant.
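The point that _your_ speed governs the machine can be put as a toy
Amdahl's-law-style calculation: a faster CPU only shrinks the fraction of a
session when the machine is actually busy. A Python sketch with made-up
numbers:

```python
# Toy Amdahl's-law arithmetic: user think/typing time is untouched by a
# faster CPU, so only the compute fraction shrinks. Figures are invented
# purely for illustration.

def session_time(user_seconds, compute_seconds, cpu_speedup):
    """Total session time when only the compute part gets faster."""
    return user_seconds + compute_seconds / cpu_speedup

base = session_time(50, 10, 1.0)  # 60 s session; machine busy 10 s
fast = session_time(50, 10, 3.0)  # a CPU three times faster

print(base / fast)  # overall speedup ~1.12x, nowhere near 3x
```

If the machine is only busy a sixth of the time, even tripling CPU speed
barely moves the needle — which is why casual users won't feel the difference
between 1, 2, and 3 GHz.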
The fact is that OS and software design have far more to do with system speed
than the CPU does - that's why Macs with 1 GHz CPUs have system speeds
generally in the same range as Intel machines with 2 GHz CPUs, and not much
slower than ones with 3 GHz CPUs. Windows really, really slows things down. The
hardware improvements (faster bus speeds, faster graphics/video boards) have
more than compensated for the bloating of Windows, but that is beginning to
change. IMO significant speed increases will come more from OS and software
design in future than from hardware improvements (hardware is beginning to
get close to the zone where quantum mechanical effects are significant, and
current designs just can't cope with QM effects.) More and more, we are
seeing that the software must be designed to take advantage of the hardware,
must in a sense be tuned to the hardware, to get the highest performance. The
days of patching a bunch of plain-vanilla parts together and getting a decent
machine are over, IMO.
I read Maximum PC, a rag devoted to extreme machines. (It's a fun magazine,
and I've learned a lot from it, despite its sometimes uncritical bias for
Intel and Windows.) Their test suite focuses on system speed, not the speeds
of any one component in the machine. It consists of a variety of software,
heavily biased to graphics-intensive games of course, but they want to test
the system as a whole. My opinions are based on their test results, which
often show that machines with CPUs of similar speed can vary greatly in the
speeds at which they run the same software. Because Maximum PC runs the same
software on many different machines, they have a pretty good idea of which
hardware components affect system speed and in what ways. Bottom line: match
the hardware components to each other if you want the best value for money.
HTH&GL