and i'm realizing that amd might be the best choice for now -- i like intel however, they
are just introducing the p4 (with the 1066 mhz fsb speed) so they're expensive and
possibly have issues that'll need to be cleared up...
To the best of my knowledge the only 1066MT/s bus speed P4's available
now are the P4 Extremely Expensive Edition chips, which really just
aren't worth it IMO.
The P4 600 series is the chip to get from Intel if you ask me. Both
that and AMD's Athlon64 line are very good choices; you should be OK
with either one of them.
i'd read that there is 1 speed for the pci bus, and the agp is 2* the pci bus (this is an
older article) i don't know whether isa had its own bus speed?
That's a bit dated and not entirely accurate. The PCI bus, for the
most part, runs at 33MHz and is 32 bits wide. There are also 66MHz
and 64-bit versions of PCI, but these aren't widely used in desktop
computers.
AGP started out as a way to get the VERY bandwidth-hungry video cards
off of the shared PCI bus. The original AGP spec was basically a
dedicated 66MHz/32-bit PCI bus with only a few changes to make it
specific to graphics. Since then we've seen AGP 2x, AGP 4x and now
AGP 8x, each time doubling the effective transfer rate of the bus.
ISA bus mostly ran at 8MHz. Originally it was only 8-bits wide, then
it grew to 16-bits wide and even 32-bits wide in the form of EISA. It
was designed (using the term loosely) for the original IBM PC way back
in the dark ages of computing. It's also a complete and utter piece
of crap, and not just because the technology is dated. Fortunately
ISA is mostly gone from modern PCs.
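If you want to put rough numbers on those buses, peak bandwidth is
just clock rate times bus width times transfers per clock. Here's a
quick back-of-the-envelope Python sketch using the figures above
(theoretical peaks only, real-world throughput is a fair bit lower):

def peak_mb_per_s(clock_mhz, width_bits, transfers_per_clock=1):
    # peak bandwidth = clock * width in bytes * transfers per clock
    return clock_mhz * (width_bits / 8) * transfers_per_clock

print("ISA (16-bit):", peak_mb_per_s(8, 16))      # ~16 MB/s
print("PCI (32-bit):", peak_mb_per_s(33, 32))     # ~133 MB/s
print("AGP 1x      :", peak_mb_per_s(66, 32, 1))  # ~266 MB/s
print("AGP 8x      :", peak_mb_per_s(66, 32, 8))  # ~2100 MB/s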
read that the raid levels are what are important...for example level 5 (block interleaved
distributed parity) is supposed to be the best
"The best" depends on the application. For servers where a large
quantity of data storage and high reliability are most important (i.e.
most servers), RAID 5 is probably the best. For servers where top speed
and reliability are important, RAID 0+1 (aka RAID 10) is probably
best.
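Just to give a feel for what "distributed parity" buys you: the parity
block is basically the XOR of the data blocks in a stripe, so if any
one drive dies, its block can be rebuilt from the survivors. A toy
Python sketch (hypothetical 3-drive stripe, nothing like real
controller firmware):

d0 = b"hello world "          # data block on drive 0
d1 = b"raid5 parity"          # data block on drive 1
parity = bytes(a ^ b for a, b in zip(d0, d1))   # parity block on drive 2

# pretend the drive holding d1 just died: rebuild its block
rebuilt = bytes(a ^ b for a, b in zip(d0, parity))
assert rebuilt == d1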
For desktops you're mostly looking at RAID 0 or RAID 1. Simple
explanation is that RAID 0 splits your data between your two drives.
This way, when you read a file, you get half of it from each drive,
thereby doubling the amount of data that can be read at a time. This
is good for performance, but the downside is reliability. With RAID 0
you cut your reliability down by more than half because if EITHER
drive in the array dies on you, you lose all your data. What's more,
if your RAID controller dies on you, then you tend to also be hosed.
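To put a rough number on that: a RAID 0 array only survives if BOTH
drives survive, so (using a made-up 5% chance of a drive dying over
some period, purely for illustration) the odds work out like this:

p_fail = 0.05                 # hypothetical per-drive failure chance
single_ok = 1 - p_fail        # 0.95   -> one drive on its own
raid0_ok = single_ok ** 2     # 0.9025 -> both drives must survive
print(single_ok, raid0_ok)    # the array roughly doubles your odds of losing everything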
RAID 1 is kind of the opposite. Here your data is copied in full to
both drives. When you write out a file, instead of just writing it
out once, the RAID controller writes it out twice, once to each drive.
This greatly increases the reliability since if either drive dies you
still have all your data. All you have to do is replace the bad hard
drive, rebuild the array and you're back up and running again. The
downside to RAID 1 is that it doesn't improve performance by much.
New RAID 1 controllers do have some smarts so that their read
performance is nearly as good as RAID 0, but the write performance
isn't helped at all (in fact, it would be slightly slower than a
single drive due to a small amount of extra overhead). Of course with
RAID 1 you also cut your storage capacity in half when compared to two
independent drives.
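If it helps to see the two layouts side by side, here's a toy Python
view of where the blocks of a file end up on the two drives (purely
illustrative, real controllers work on fixed-size stripes in
hardware/firmware):

blocks = ["B0", "B1", "B2", "B3"]

# RAID 0 (striping): blocks alternate between drives -> full capacity,
# reads can hit both drives at once, but either drive dying loses it all
raid0_a, raid0_b = blocks[0::2], blocks[1::2]   # ['B0','B2'] / ['B1','B3']

# RAID 1 (mirroring): every block written to both drives -> half the
# capacity, roughly single-drive write speed, survives either drive dying
raid1_a, raid1_b = list(blocks), list(blocks)

print(raid0_a, raid0_b)
print(raid1_a, raid1_b)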
Personally I wouldn't touch RAID 0 for anything even remotely
resembling important data; I've just seen far too many hard drives die
to trust it. However, for certain applications it does have its uses,
and RAID 1 can definitely be a good thing IMO.
isn't the time it takes memory data <-> cpu the same (same bus speed) but the total time
is reduced b/c the controller is in the cpu and likely *knows* what the cpu will need /
send reducing the time of cpu-controller communication....i hope this is the case
That's kind of along the right lines, though it's a bit more
complicated than that. What you're referring to would be more along
the lines of prefetching, which can be (and is) implemented with an
off-die memory controller as well.
First off, there are two measures of speed for getting data back and
forth. The first is bandwidth, i.e. the total amount of data you can
move in a given amount of time. In this situation the Athlon64 and P4 are
fairly similar since they generally both use dual-channel DDR400
memory.
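For reference, the peak number for dual-channel DDR400 works out the
same on either platform; quick math in Python (theoretical peak, you
never quite get there in practice):

transfers_per_sec = 400e6   # DDR400 = 400 million transfers/s per channel
bytes_per_transfer = 8      # each channel is 64 bits wide
channels = 2                # dual-channel
print(transfers_per_sec * bytes_per_transfer * channels / 1e9)  # ~6.4 GB/s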
The second measure of speed is latency, and this is where things get a
little trickier. Latency is the measure of time between when a piece
of data is requested and when it's received. Now most data that a
processor needs sits in the cache memory, right on the processor
itself, and can be accessed fairly quickly (though cache latency
definitely does exist and plays a role in performance), however
eventually the processor needs to go to main memory to get some chunk
of data. It sends out the request and then has to pretty much just
sit around, twiddling it's electronic thumbs, until that data arrives.
Usually with today's systems this takes around 50-100ns, which may
seem instantaneous to us mere mortals, but to a multi-GHz processor
that is a LONG time.
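To see why 50-100ns is such a long time to a processor, convert it
into clock cycles (assuming a ~3GHz chip, just as an example):

cpu_hz = 3.0e9                     # assume a 3GHz processor
for latency_ns in (50, 100):
    cycles = latency_ns * 1e-9 * cpu_hz
    print(latency_ns, "ns =", int(cycles), "cycles")  # 150 and 300 cycles of waiting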
Now, with the P4, when it needs data it first has to send out a
message over its processor bus to the chipset. The chipset then has
to translate that request into a message that it can pass along to the
memory over the memory bus. The memory chips answer that request and
send the data back to the chipset. The chipset then again translates
this data back to the protocol for the processor bus and sends it back
to the CPU.
With the Athlon64 they eliminate the middle-man. The data request
goes directly out of the processor and onto the memory bus and then
comes directly back into the processor. The result is that the
round-trip time is reduced by about 30%, which is HUGE.
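Using the ballpark figures from above (say a 90ns round trip on the
chipset-based design, which is just a number picked from that 50-100ns
range for illustration), the savings look like this:

p4_trip_ns = 90                  # hypothetical: CPU -> chipset -> memory and back
a64_trip_ns = p4_trip_ns * 0.7   # on-die controller shaves roughly 30% off
print(p4_trip_ns, "->", a64_trip_ns, "ns")   # 90 -> 63.0 ns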
except for my house
(256 kb cache ram)
LOL, I guess I should have specified a "new chip" rather than just a
chip in general!
the ones i read about (nforce3) hold max 3 gbs...
The memory limits are determined by both the memory controller
(chipset in the case of the P4, CPU in the case of the Athlon64) and
the motherboard. For most single-processor systems these days 2-4GB
is the normal maximum.
all the reviews i've read state it has onBoard video chip...
Hmm... maybe I'm confusing the boards. Asus's site seemed a bit vague
about the specifications. Just be sure to check the specs closely
before you buy. Some stores do misprint info, and occasionally the
specs may change without much notice.