amd vs. intel

  • Thread starter: Tanya
hi and thanks again for answering!
[...below...]

<snip clock speed stuff>


<major snip>
i wouldn't even know what to use dual monitors for (this is where the human bottleneck is
having to look at 2 monitors simultaneously <ASAP> <lol>)

You have two eyes don't you? ;-)

Seriously, I use two monitors both at work and home. Actually I have
three monitors in front of me now. One of the monitors is connected to
a RISC box, but I'd find room for a few more if I could. Big desktops
are very nice. Be warned though, once you try it you won't go back!

And I thought that I was the only one who advocates dual monitors, I have
been telling my friends for a long time the virtues of dual monitors. At
one time you could buy a couple of 19" monitors for about the same cost of
a good LCD screen. What's even better is GNU/Linux's use of virtual
desktops; now every time I use a Windows box I get frustrated because I
have to minimize everything I use. It's so nice to just click or use the
mouse wheel to change desktops. Each desktop is used for a different
purpose: one is sound, one is browser, one is system stats, one is email.
It's really nice.

Gnu_Raiz
 
Keith R. Williams said:
Perhaps one has to go to socket-940 (registered memory) to get to 8GB.
The Asus socket-939 boards apparently do 4GB.

http://usa.asus.com/prog/spec.asp?m=A8V-E Deluxe&langs=09

All socket 939 motherboards I've seen have 4 DDR sockets (2x2), and
the biggest "unbuffered" (normal) memory I've seen is 1 GB, hence 4 GB
max. Well, really 4 GB per CPU, but 939 doesn't support more than one
CPU either.

If you look at the Socket 754 motherboards you'll see that many of
them have 3 sockets, but note that in many circumstances it'll reduce
the memory speed if you have anything in the third one (to PC2700 or
PC1600!). I'd suspect that these limitations are why they don't bother
putting 6 DDR sockets on the 939 motherboards, although it's not
completely impossible that AMD removed that for the 2-channel
versions.

There ARE 2 GB memory sticks, but they're registered and hence
require a 940 socket processor (or for Intel a server chipset which
requires registered memory). ASUS has two socket 940 motherboards (1
CPU), both have 4 sockets and list 8 GB as the limit as expected.

This could be because they don't see a market for it (yet), OR it
could be that it REQUIRES too many chips using the currently available
memory densities, and hence needs the buffering that registered memory
sticks have (to avoid too high capacitance, probably)...

If it's the latter it might be doable once denser memory modules
become available, but that will also depend on whether the Athlon64
memory controller supports that memory layout (with denser memory it
might not "look" like the current registered 2 GB modules, so that
doesn't prove it will be supported).

AFAIK this is the reason why the Intel 440BX chipset couldn't handle
512MB sticks (built using newer, denser memory chips).

Example:
http://www.crucial.com/store/listmodule.asp?module=DDR+PC3200&Attrib=Package&cat=RAM
http://www.crucial.com/store/listmodule.asp?module=DDR2+PC2-3200&Attrib=Package&cat=RAM
http://www.corsairmicro.com/corsair/servers.html
http://www.corsairmicro.com/corsair/valueselect.html
http://www.corsairmicro.com/corsair/xms.html
http://www.corsairmicro.com/corsair/xms2.html

Kingston doesn't include the search term in the URL, but it looks
similar. Some unregistered "2GB Kit", i.e. 2x1024MB, but the only 2GB
memory sticks are registered (DDR, no 2GB DDR2 yet that I could find).
 
hi and thanks again for answering!
[...below...]

keith wrote:
<snip clock speed stuff>


<major snip>
i wouldn't even know what to use dual monitors for (this is where the human bottleneck is
having to look at 2 monitors simultaneously <ASAP> <lol>)

You have two eyes don't you? ;-)

Seriously, I use two monitors both at work and home. Actually I have
three monitors in front of me now. One of the monitors is connected to
a RISC box, but I'd find room for a few more if I could. Big desktops
are very nice. Be warned though, once you try it you won't go back!
<snip>
^
+--- Good idea!

And I thought that I was the only one who advocates dual monitors, I have
been telling my friends for a long time the virtues of dual monitors. At
one time you could buy a couple of 19" monitors for about the same cost of
a good LCD screen. What's even better is GNU/Linux's use of virtual
desktops; now every time I use a Windows box I get frustrated because I
have to minimize everything I use. It's so nice to just click or use the
mouse wheel to change desktops. Each desktop is used for a different
purpose: one is sound, one is browser, one is system stats, one is email.
It's really nice.

Even WinBoxes like dual monitors. My work laptop is W2K and it's been
fine, though I have to have a separate card in the dock to get it to play
nice. I wouldn't need the card under Win98 or XP, but the add-in
card is the less painful option.
 
All socket 939 motherboards I've seen have 4 DDR sockets (2x2), and
the biggest "unbuffered" (normal) memory I've seen is 1 GB, hence 4 GB
max. Well, really 4 GB per CPU, but 939 doesn't support more than one
CPU either.

If you look at the Socket 754 motherboards you'll see that many of
them have 3 sockets, but note that in many circumstances it'll reduce
the memory speed if you have anything in the third one (to PC2700 or
PC1600!). I'd suspect that these limitations are why they don't bother
putting 6 DDR sockets on the 939 motherboards, although it's not
completely impossible that AMD removed that for the 2-channel
versions.

The dual channel S939 CPUs do not support 4 ranks of memory without
dropping the clock speed too, though some of the review sites claim to have
made that work by setting memory timings manually. There's also another
issue here with the 1/2T timing spec, where 2T waits an extra clock cycle
from Chip Select before a command is sent; it compensates for increased bus
load but has quite a drastic effect on max memory bandwidth. Memory for
the Athlon64s has to be selected with great care.
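
To put rough numbers on that, here's a back-of-the-envelope sketch in
Python (the peak figure is just the spec number for dual-channel DDR400;
the 10% penalty for 2T is a purely illustrative assumption, since the
real cost depends on the access pattern):

# Peak theoretical bandwidth of dual-channel DDR400 (PC3200).
clock_hz = 200e6         # DDR400 uses a 200 MHz memory clock
transfers_per_clock = 2  # double data rate
bus_bytes = 8            # 64-bit channel
channels = 2
peak = clock_hz * transfers_per_clock * bus_bytes * channels
print(f"peak: {peak / 1e9:.1f} GB/s")               # 6.4 GB/s

# 2T holds each address/command on the bus for two clocks instead of
# one, so command-heavy access patterns lose throughput. Assume (for
# illustration only) a 10% effective loss:
print(f"with assumed 2T penalty: {peak * 0.9 / 1e9:.2f} GB/s")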
 
hi and thanks again, Keith
[...below...]

Keith R. Williams said:
hi and thanks again for answering!
[...below...]

The classical definition of the front-side bus is the bus from the
processor to the north-bridge (chipset). It became known as the FSB
when the L2 cache was removed from the processor bus and moved to the
"back-side" of the processor.

Since the AMD64 processors have integrated memory controllers the
concept of the "north-bridge" and "front-side bus" is a little muddled.
There really isn't a northbridge or FSB, as such. The AMD 64
processors have the memory bus tied directly to the processor, where
Intel and earlier AMD processors have the memory bus tied to the
chipset (northbridge-half) and the front-side bus connects the
northbridge to the processor. In an Intel system memory traffic traverses
the FSB, thus its performance is important. In an AMD (uni-processor)
system it's less important since it isn't in the memory path. AMD64
multi-processors do have to use the bus(ses) (hypertransport) to access
memory on the other processor(s), so hypertransport isn't a slouch.

The main point here being that AMD64 processors have a memory latency
advantage because they don't have the extra trip over the FSB and
through the northbridge. System memory is hooked directly to the
processor.


George MacDonald tells us that PCI-E cards are cheaper (i.e. AGP has
already seen the other end of the bathtub). I don't do 3D games so
don't much care about either. ;-)

actually i heard this somewhere else as well (that agp was >$)
You have two eyes don't you? ;-)

(between the eyes is missing:)
Seriously, I use two monitors both at work and home. Actually I have
three monitors in front of me now. One of the monitors is connected to
a RISC box, but I'd find room for a few more if I could. Big desktops
are very nice. Be warned though, once you try it you won't go back!


Sure, but none will protect you against the biggest source of data
loss; the loose nut behind the keyboard. RAID only protects against
one source of data loss - the hard disk itself.

however, say that the kb user is *conscious*: if there was a hd (mechanical) failure wouldn't
raid (level 4 for ex) be useful?
The request doesn't have to go from the processor, over the FSB,
through the northbridge, to the DRAM, and back. The FSB and
northbridge are eliminated.


AMD's FSB is *infinitely* fast. ;-)


How about: http://www.tyan.com/products/html/matrix.html

Is 32GB enough? ;-) Seriously, that's for a (serious) 4-processor
board, but others are 8GB per processor too. 2GB per stick, four
slots...

actually i read that > 4 gbs is not supported unless one has windows xp with 64 bit support.
(and in any case even 3 gb's would be great!)
Ah, there's the $64,000 question. Asus has a good reputation and I've
built several systems with them. I prefer Tyan these days, but don't
pretend they're the only manufacturer out there.

you posted a link -- other post with an asus board that looks great HOWEVER, it does not have
the nforce4 chipset and users' comments mentioned this (i have to look for one (preferably asus)
with the nforce4 chipset unless they don't make this?)

It's like the "23 jewel" watch a friend once showed me. It was a
standard 17 jewel swiss movement with six jewels taped inside the back
cover.

Of course more transistors isn't better. They cost (small) money to
make and they dissipate power. If they're used for something useful
they may be interesting though. Using your example above, it seems
that the P4 has ~20E6 transistors taped inside the case. ...maybe
that's what they really meant by having "taped out". ;-)

i guess intel cannot win can it?
<lol>
thanks, Keith!
 
hi George,
thanks for the reply...
[...below...]

George said:
On Thu, 03 Mar 2005 21:27:35 -0500, Tanya <[email protected]>


Well, the Hypertransport speed is not really comparable to Intel's FSB in
terms of the data carried on them in a uniprocessor system. In an Intel system, the
FSB is only a "bus" in that it can have more than one CPU on it in a
multiprocessor system and it connects the CPU(s) to the MCH (Memory
Controller HUB... what we used to call the North Bridge); the MCH in turn
connects to the memory channel, to fast I/O devices, namely PCI Express and
to the ICH (IO Controller Hub) which handles all the other I/O devices.

Since the AMD Athlon64 systems have the Memory Controller on the CPU die,
CPU <-> Memory transfers do not have to travel on the Hypertransport Bus
*but* all I/O device <-> memory transfers do. Hypertransport is really two
buses twinned together, one for the up-stream and the other for the
down-stream; with current clock speeds, their aggregate bandwidth is close
to the same as Intel's FSB.
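
A sketch of those aggregate numbers (assuming a 1000 MHz, 16-bit-per-
direction Hypertransport link as on Socket 939, and an 800 MT/s, 64-bit
Intel FSB; link widths and clocks vary by platform):

# Hypertransport: 16 bits each way, double-pumped at 1000 MHz.
ht_clock_hz = 1000e6
ht_bytes = 2                                  # 16-bit link
ht_one_way = ht_clock_hz * 2 * ht_bytes       # double data rate
ht_aggregate = ht_one_way * 2                 # up-stream + down-stream
print(f"HT: {ht_one_way / 1e9:.1f} GB/s each way, "
      f"{ht_aggregate / 1e9:.1f} GB/s aggregate")   # 4.0 and 8.0

# Intel FSB: 64 bits wide, quad-pumped 200 MHz = 800 MT/s, shared
# between both directions.
fsb = 800e6 * 8
print(f"FSB: {fsb / 1e9:.1f} GB/s total")           # 6.4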

Take a look at the Data Sheets from both Intel and AMD - you'll find some
diagrams which illustrate how the data lanes fan out system-wise better
than any description I can give.

i have seen quite a few (data sheets) now and i am convinced that amd is a better
performer (i asked however the retailer where i want to purcha$e parts) and they
state that intel is better for raw data processing (dataBases, video...) and amd is
better for gaming which unfortunately i do not do...


There are pros & cons to both ways of doing things but the big gain for AMD
is in the low latency access to main memory since addresses/data don't have
to cross external clock domains through an MCH. Intel tries to get around
this with their Hyper-Threading and by aggressive prefetching of data from
memory to the CPU's (usually larger) L2 Cache. The bottom line is that in
overall system performance you're not going to see a lot of difference -
some apps will favor one approach and others the alternative method. If you
get a decent system with either CPU -- I usually buy one notch down from
the leading/bleeding edge -- you're not going to fret over the difference,
which in most cases will be barely measurable.

that is what i chose (the p4 520 2.8 ghz) (it is certainly not their best) and i
want to choose the equivalent (wrt amd -- i.e. not their (amd's) best but a good
cpu) not to compare w/ intel but to compare with their (amd's) other cpus.

thanks,
sincerely
Tanya
 
hi Keith,
thanks for replying...
[...below...]

keith wrote:
Yes. The P4 has an interesting Icache, but it's tiny. You really have
to crawl through each cache to see what's going on.

i cannot even find reference to the i and d cache for the p4 so i guess they're really
small?
:)
Like processor
clock frequency, just the size alone doesn't mean much.

(in case you don't see the other post -- i read that windowsxp -- 64 bit support will
support > 4gb however xp will not at this time...)
Perhaps one has to go to socket-940 (registered memory) to get to 8GB.
The Asus socket-939 boards apparently do 4GB.

registered is very expen$ive isn't it?
also read that if the board supports this type of ram (registered), one HAS TO use it
(this is not rambus memory is it)


do you know of an asus board with nforce4 chipset? (i find nforce3)
and what cpu?
Look at it another way; to make up for the longer latency of an
external memory controller Intel had to add more cache. You can try to
hide latency, but that's all you can do - try.


With an external controller the data has to make four hops rather than
two.

i am convinced...

thanks,
sincerely
Tanya
 
hi Torbjorn,
[...below...]

Torbjorn said:
All socket 939 motherboards I've seen have 4 DDR sockets (2x2), and
the biggest "unbuffered" (normal) memory I've seen is 1 GB, hence 4 GB
max. Well, really 4 GB per CPU, but 939 doesn't support more than one
CPU either.

If you look at the Socket 754 motherboards you'll see that many of
them have 3 sockets, but note that in many circumstances it'll reduce
the memory speed if you have anything in the third one (to PC2700 or
PC1600!). I'd suspect that these limitations are why they don't bother
putting 6 DDR sockets on the 939 motherboards, although it's not
completely impossible that AMD removed that for the 2-channel
versions.

There ARE 2 GB memory sticks, but they're registered and hence
require a 940 socket processor (or for Intel a server chipset which
requires registered memory). ASUS has two socket 940 motherboards (1
CPU), both have 4 sockets and list 8 GB as the limit as expected.

This could be because they don't see a market for it (yet), OR it
could be that it REQUIRES too many chips using the currently available
memory densities, and hence needs the buffering that registered memory
sticks have (to avoid too high capacitance, probably)...

If it's the latter it might be doable once denser memory modules
become available, but that will also depend on whether the Athlon64
memory controller supports that memory layout (with denser memory it
might not "look" like the current registered 2 GB modules, so that
doesn't prove it will be supported).

AFAIK this is the reason why the Intel 440BX chipset couldn't handle
512MB sticks (built using newer, denser memory chips).

what about rambus ram?
(i did read that current os (except for linux) does not support > 4 gb but the 64-bit os
will in the future) -- currently my *fastest* pc has 128 MEGAbytes so even ONE gb with a p4
or an amd athlon 64 will be noticed very easily!
thanks,
sincerely
Tanya
 
George said:
The dual channel S939 CPUs do not support 4 ranks of memory without
dropping the clock speed too, though some of the review sites claim to have
made that work by setting memory timings manually. There's also another
issue here with the 1/2T timing spec, where 2T waits an extra clock cycle
from Chip Select before a command is sent; it compensates for increased bus
load but has quite a drastic effect on max memory bandwidth. Memory for
the Athlon64s has to be selected with great care.

ok i don't understand this: can i not use the same ram (type size) in a p4 as
in an amd athlon 64 (being that i currently am using 128 mb's of pc100(?) as
the "fastest")
(for ex if i want to 'build' an intel - based AND an amd athlon 64-based pc i
could get a $ deal if i order the same mem for each.)
thanks!
 
Tanya said:
hi and thanks again, Keith
[...below...]

keith wrote:

hi and thanks again for answering!
[...below...]

keith wrote:

[SNIP]
Sure, but none will protect you against the biggest source of data
loss; the loose nut behind the keyboard. RAID only protects against
one source of data loss - the hard disk itself.

An old mantra that was tossed around in the late 80's was that
RAID can protect your data from hard drive failures but only
backups can protect your data from your users. Still valid today.
however, say that the kb user is *conscious*: if there was a hd (mechanical) failure wouldn't
raid (level 4 for ex) be useful?

"Level 4" ???? Common RAID "levels" include 0,1,0+1,1+0,3,5 and
JBOD. There is also a new and rare 1.5 that I don't expect to
gather a large - if any - following. And, of course, 0 has no
redundancy and hence offers no protection at all.

Don't take AMD's word for it. That's like asking a barber if you
need a haircut. There are lots of independent reviews out there
- most of which confirm AMD's claims.

[SNIP]
The original Opteron specs proclaimed support for 4 GB DIMMs,
even though no such DIMMs existed at the time.

There are a few vendors, Polywell and HP for example, that will
sell you Opty boxes with 4 DIMM slots per cpu and will happily
stuff a 4 GB DIMM into each slot for you. You do, however, have
to settle for PC2100 if you want 4 GB DIMMs.

As well, you need to keep in mind that motherboard manufacturers
and system builders very often specify what CPUs and what RAM
types are supported based on what CPUs and DIMMs were available
at the time they published the specs for the board or computer.
For example, the Tyan S288x motherboards supposedly only support
DIMMs up to 2 GB, but I have seen eight 4 GB DIMMs successfully
demoed in an S2881 based system.

Also: HP sells an Opty server, a quad if I recall, that has 8 DIMM
slots per processor. Each CPU and its DIMMs are on a daughtercard.

Sun seems to be restricting itself to using 1 GB and 2 GB DIMMs.
actually i read that > 4 gbs is not supported unless one has windows xp with 64 bit support.
(and in any case even 3 gb's would be great!)

Forgot about Linux, have you? So far Linux is light-years ahead
of Microsoft in supporting x86-64 processors.

However, in the Windows world there have also been versions of NT
Server, W2K Server, and W2K3 Server for a dozen years now that
support more than 4 GB. The limit with NT4 was 32 GB but I
believe the limit with W2K3 is 64 GB.

[snip]
i guess intel cannot win can it?

Sure they can. In the AMD64 vs (P4 and Xeon) battle, AMD easily
wins on quality, but quality is not everything. Intel has
marketing, a large loyal user base, entrenched relationships with
manufacturers (can you say "Dell"?), etc., ...

And wasn't Mao Tse-Tung the one who said "Quantity has a quality
all of its own." ?
 
and i'm realizing that amd might be the best choice for now -- i like intel however, they
are just introducing the p4 (with the 1066 mhz fsb speed) so they're expensive and
possibly have issues that'll need to be cleared up...

To the best of my knowledge the only 1066MT/s bus speed P4's available
now are the P4 Extremely Expensive Edition chips, which really just
aren't worth it IMO.

The P4 600 series is the chip to get from Intel if you ask me. Both
that and AMD's Athlon64 line are very good choices, you should be ok
with either one of them.
i'd read that there is 1 speed for the pci bus, and the agp is 2* the pci bus (this is an
older article) i don't know whether isa had its own bus speed?

That's a bit dated and not entirely accurate. PCI bus, for the most
part, works at 33MHz and is 32-bits wide. There are also 66MHz and
64-bit wide versions of PCI, but these aren't widely used in desktop
computers.

AGP started out as a way to get the VERY bandwidth-hungry video cards
off of the shared PCI bus. The original AGP spec was basically a
dedicated 66MHz/32-bit PCI bus only with a few changes to make it
specific for graphics. Since then we've seen AGP 2x, AGP 4x and now
AGP 8x, each time doubling the effective clock rate of the bus.

ISA bus mostly ran at 8MHz. Originally it was only 8-bits wide, then
it grew to 16-bits wide and even 32-bits wide in the form of EISA. It
was designed (using the term loosely) for the original IBM PC way back
in the dark ages of computing. It's also a complete and utter piece
of crap, and not just because the technology is dated. Fortunately
ISA is mostly gone from modern PCs.
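
The resulting peak numbers are easy to work out (a sketch; these are
nominal spec figures - real ISA transfers took several cycles each, so
its practical throughput was even lower):

# Peak theoretical bandwidth = clock * transfers/clock * bus width.
buses = {
    # name: (clock_hz, transfers_per_clock, bytes_wide)
    "ISA (16-bit)": (8e6,  1, 2),
    "PCI (33 MHz)": (33e6, 1, 4),
    "AGP 1x":       (66e6, 1, 4),
    "AGP 8x":       (66e6, 8, 4),  # effective rate doubled three times
}
for name, (clock, rate, width) in buses.items():
    print(f"{name:13s} {clock * rate * width / 1e6:7.0f} MB/s")
# ISA ~16, PCI ~133, AGP 1x ~266, AGP 8x ~2133 MB/s
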
read that the raid levels are what are important...for example level 5 (block interleaved
distributed parity) is supposed to be the best

"The best" depends on the application. For servers where a large
quantity of data storage and high reliability are most important (ie
most servers), RAID5 is probably the best. For servers where top speed
and reliability are important, RAID 0+1 (aka RAID 10) is probably
best.

For desktops you're mostly looking at RAID 0 or RAID 1. Simple
explanation is that RAID 0 splits your data between your two drives.
This way, when you read a file, you get half of it from each drive,
thereby doubling the amount of data that can be read at a time. This
is good for performance, but the downside is reliability. With RAID 0
you cut your reliability down by more than half because if EITHER
drive in the array dies on you, you lose all your data. What's more,
if your RAID controller dies on you then you tend to also be hosed.

RAID 1 is kind of the opposite. Here your data is copied in full to
both drives. When you write out a file, instead of just writing it
out once, the RAID controller writes it out twice, once to each drive.
This greatly increases the reliability since if either drive dies you
still have all your data. All you have to do is replace the bad hard
drive, rebuild the array and you're back up and running again. The
downside to RAID 1 is that it doesn't improve performance by much.
New RAID 1 controllers do have some smarts so that their read
performance is nearly as good as RAID 0, but the write performance
isn't helped at all (in fact, it would be slightly slower than a
single drive due to a small amount of extra overhead). Of course with
RAID 1 you also cut your storage capacity in half when compared to two
independent drives.

Personally I wouldn't touch RAID 0 for anything even remotely
resembling important data; I've seen just far too many hard drives die
to trust it. However for certain applications it does have its uses,
and RAID 1 can definitely be a good thing IMO.
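
Here's a little sketch of that capacity/reliability trade-off
(illustrative only: the per-drive failure probability is a made-up
number, and real-world drive failures are not independent):

# Two-drive RAID 0 vs RAID 1, given some per-drive chance of failure
# over the array's service life (0.05 is purely illustrative).
p_fail = 0.05
drive_gb = 250

# RAID 0: capacity adds, but EITHER drive failing loses everything.
raid0_capacity = 2 * drive_gb
raid0_loss = 1 - (1 - p_fail) ** 2            # ~0.0975

# RAID 1: capacity halves, data lost only if BOTH drives fail.
raid1_capacity = drive_gb
raid1_loss = p_fail ** 2                      # 0.0025

print(f"RAID 0: {raid0_capacity} GB, P(data loss) = {raid0_loss:.4f}")
print(f"RAID 1: {raid1_capacity} GB, P(data loss) = {raid1_loss:.4f}")
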
isn't the time it takes memory data <-> cpu the same (same bus speed) but the total time
is reduced b/c the controller is in the cpu and likely *knows* what the cpu will need /
send reducing the time of cpu-controller communication....i hope this is the case:)

That's kind of along the right lines, though it's a bit more
complicated than that. What you're referring to would be more along
the lines of prefetching, which can be (and is) implemented with an
off-die memory controller as well.

First off, there are two measures of speed for getting data back and
forth. The first is bandwidth, i.e. the total amount of data you can
send in one block of time. In this situation the Athlon64 and P4 are
fairly similar since they generally both use dual-channel DDR400
memory.

The second measure of speed is latency, and this is where things get a
little trickier. Latency is the measure of time between when a piece
of data is requested and when it's received. Now most data that a
processor needs sits in the cache memory, right on the processor
itself, and can be accessed fairly quickly (though cache latency
definitely does exist and plays a role in performance), however
eventually the processor needs to go to main memory to get some chunk
of data. It sends out the request and then has to pretty much just
sit around, twiddling its electronic thumbs, until that data arrives.
Usually with today's systems this takes around 50-100ns, which may
seem instantaneous to us mere mortals, but to a multi-GHz processor
that is a LONG time.

Now, with the P4, when it needs data it first has to send out a
message over its processor bus to the chipset. The chipset then has
to translate that request into a message that it can pass along to the
memory over the memory bus. The memory chips answer that request and
send the data back to the chipset. The chipset then again translates
this data back to the protocol for the processor bus and sends it back
to the CPU.

With the Athlon64 they eliminate the middle-man. The data request
goes directly out of the processor and onto the memory bus and then
comes directly back into the processor. The result is that the
round-trip time is reduced by about 30%, which is HUGE.
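
Roughly, in numbers (a sketch with assumed per-hop latencies; real
figures vary a lot with chipset and memory timings, and these made-up
values are only tuned to illustrate the ~30% figure above):

# Assumed (illustrative) latencies in nanoseconds.
fsb_hop = 10      # CPU <-> northbridge transfer, per direction
translate = 5     # chipset protocol translation, per direction
dram = 50         # the DRAM access itself

# P4-style: CPU -> FSB -> northbridge -> DRAM -> northbridge -> CPU.
p4_round_trip = 2 * (fsb_hop + translate) + dram     # 80 ns

# Athlon64-style: CPU -> DRAM -> CPU, controller on the die.
a64_round_trip = dram + 6      # small on-die controller overhead

print(f"P4-ish:  {p4_round_trip} ns")
print(f"A64-ish: {a64_round_trip} ns")
print(f"reduction: {1 - a64_round_trip / p4_round_trip:.0%}")  # 30%
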
except for my house:)
(256 kb cache ram)

LOL, I guess I should have specified a "new chip" rather than just a
chip in general!
the ones i read about (nforce3) hold max 3 gbs...

The memory limits are determined by both the memory controller
(chipset in the case of the P4, CPU in the case of the Athlon64) and
the motherboard. For most single-processor systems these days 2-4GB
is the normal maximum.
all the reviews i've read state it has onBoard video chip...

Hmm... maybe I'm confusing the boards. Asus site seemed a bit odd as
to the specifications. Just be sure to check the specs closely before
you buy. Some stores do mis-print info and occasionally the specs may
change without much notice.
 
(between the eyes is missing:)

Ah, you have no nose. I hope you have perfect vision, or you're going
to be in real trouble. ;-)
however, say that the kb user is *conscious*: if there was a hd (mechanical) failure wouldn't
raid (level 4 for ex) be useful?

Sure, RAID (except for RAID0) will help one recover after a *disk*
hardware failure. While this isn't rare it's not the most common cause
of data loss. It is usually a massive loss when it happens though. A
decent backup strategy would be better, in most cases. Ok, no one uses
such a strategy... ;-)

The disadvantage of RAID is that it takes CPU cycles somewhere. The
more complicated the RAID system the more cycles. Software RAID (like
what you're normally going to find on PC motherboards) takes those
cycles from the main processor. RAID 0 and RAID 1 use relatively few
cycles, so that's what you're going to get (perhaps RAID10) in PC class
systems.

RAID isn't the solution to all problems.

actually i read that > 4 gbs is not supported unless one has windows xp with 64 bit support.
(and in any case even 3 gb's would be great!)

Even 4GB isn't supported under XP. IIRC it'll support "only" 3GB
because it uses 1GB virtual address space for itself and I/O. Since
virtual address range = real address range on an x86, this 1GB must be
subtracted from the maximum real memory.
you posted a link -- other post with an asus board that looks great HOWEVER, it does not have
the nforce4 chipset and users' comments mentioned this (i have to look for one (preferably asus)
with the nforce4 chipset unless they don't make this?)

I believe the nForce4 is relatively new, so Asus may not have theirs
out yet. I'm not one to say whether it's worth waiting for though.
i guess intel cannot win can it?
<lol>

"We" like it that way. ;-)
 
hi Keith,
thanks for replying...
[...below...]

keith wrote:
Yes. The P4 has an interesting Icache, but it's tiny. You really have
to crawl through each cache to see what's going on.

i cannot even find reference to the i and d cache for the p4 so i guess they're really
small?
:)

From: ftp://download.intel.com/design/Pentium4/datashts/29864312.pdf

The 130nm P4 has an 8K D-cache and a 12K I-cache. The 90nm P4s have a
16K D-cache, and I don't recall how big the I-cache is.

The P4's I-cache is interesting because it's after the instruction
decoder so once a loop has been decoded it doesn't have to be decoded
again. The theory is that this improves performance. Decoded
instructions tend to be bigger than the undecoded instructions and
compiler optimizations may thwart this strategy though. It's
interesting, though I'm not sure it's a clear win. No one else has
gone here, so it looks like others agree.
(in case you don't see the other post -- i read that windowsxp -- 64 bit support will
support > 4gb however xp will not at this time...)

I'm 90% sure the real limit is 3GB (and the other 10% 3.5GB ;-). You
are correct though, a 32bit processor can only address 2^32 bytes of
memory, which happens to be 4GB. This isn't strictly true, since Intel
has address extensions (PAE) that will allow 36bit addressing (64GB),
but Windows doesn't use PAE either.
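
The arithmetic behind those limits (the 1GB-for-the-kernel split is the
usual 32-bit Windows layout referred to above; note PAE widens physical
addresses, not a process's virtual space):

# 32-bit addressing limit.
four_gib = 2 ** 32
print(four_gib // 2 ** 30, "GiB addressable")          # 4

# 32-bit Windows reserves part of the 4 GiB virtual space for the
# kernel and I/O mappings, leaving roughly 3 GiB for real memory.
reserved = 1 * 2 ** 30
print((four_gib - reserved) // 2 ** 30, "GiB usable")  # 3

# PAE extends *physical* addresses to 36 bits.
print(2 ** 36 // 2 ** 30, "GiB physical with PAE")     # 64
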
registered is very expen$ive isn't it?

Somewhat, but not all that much more these days. Crucial has 512MB
registered (CL=3 ECC) for $120 a stick vs. $94 for unbuffered (CL=3
ECC). OTOH, if you go with non-ECC the unbuffered price drops to $69
(yikes! ECC has gotten expensive). So, I guess it's about 2:1
difference between registered ECC and unbuffered nonECC. I guess that
makes it expensive.
also read that if the board supports this type of ram (registered),
one HAS TO use it

Yes, at least Socket-940. Some chipsets were selectable, the AMD64
processors are not. The selection is done when you order (socket 939
vs. socket 940).
(this is not rambus memory is it)

No. That's another kettle of over-ripe fish.
do you know of an asus board with nforce4 chipset? (i find nforce3)
and what cpu?

No, but I'm sure others will chime in here.
(also places $ell cpu-board combos -- are these the <a> way to go?)

I pretty much deal only with NewEgg (and Crucial for memory) these
days.

<snip>
 
Tanya said:
hi and thanks again, Keith
[...below...]

keith wrote:

hi and thanks again for answering!
[...below...]

keith wrote:
[SNIP]
i got a question here (i cannot find the article though -- i read that there are several
raid levels and some are great in keeping *copies* soToSpeak of the drive's data...)

Sure, but none will protect you against the biggest source of data
loss; the loose nut behind the keyboard. RAID only protects against
one source of data loss - the hard disk itself.

An old mantra that was tossed around in the late 80's was that
RAID can protect your data from hard drive failures but only
backups can protect your data from your users. Still valid today.
however, say that the kb user is *conscious*: if there was a hd (mechanical) failure wouldn't
raid (level 4 for ex) be useful?

"Level 4" ????

Sure. You don't think they skipped from three to five without thinking of
something silly in between?

http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel4-c.html
Common RAID "levels" include 0,1,0+1,1+0,3,5 and
JBOD. There is also a new and rare 1.5 that I don't expect to gather a
large - if any - following. And, of course, 0 has no redundancy and
hence offers no protection at all.

RAID 1+0 and 0+1 are redundant. ;-) They're also known as RAID10 and
RAID01 (also redundant ;).
 
ok i don't understand this: can i not use the same ram (type size) in a p4 as
in an amd athlon 64 (being that i currently am using 128 mb's of pc100(?) as
the "fastest")
(for ex if i want to 'build' an intel - based AND an amd athlon 64-based pc i
could get a $ deal if i order the same mem for each.)
thanks!

You *can* get the same memory... but depending on the choice of P4
mbrd/chipset you may not want to. There are some P4 mbrds which take
DDR-SDRAM but DDR2-SDRAM is the trend there.
 
hi George,
thanks for the reply...
[...below...]

George Macdonald wrote:
Take a look at the Data Sheets from both Intel and AMD - you'll find some
diagrams which illustrate how the data lanes fan out system-wise better
than any description I can give.

i have seen quite a few (data sheets) now and i am convinced that amd is a better
performer (i asked however the retailer where i want to purcha$e parts) and they
state that intel is better for raw data processing (dataBases, video...) and amd is
better for gaming which unfortunately i do not do...

I have to wonder why he said "raw data processing" - it doesn't really
describe any particular sub-set of computing. It's true that the Athlon64s
are the current favorites of gamers; the P4s score better at things like
video, and to a certain extent audio, processing... i.e. data streaming
applications. In between there's all the general processing, including
database, which tends to have a mix of some streamable data and a lot of
random accesses. Here the difference is less marked but AMD still scores
better from what I observe.
that is what i chose (the p4 520 2.8 ghz) (it is certainly not their best) and i
want to choose the equivalent (wrt amd -- i.e. not their (amd's) best but a good
cpu) not to compare w/ intel but to compare with their (amd's) other cpus.

Considering the cost of the rest of the system, I'd try to bump that a
little, to say a P4 630 3.0GHz - the extra cache, EM64T and enhanced power
management/control are worth it IMO. Take a look at
http://www.techreport.com/reviews/2005q1/pentium4-600/index.x?pg=1 for a
reasonable comparison, though all benchmarks should be taken with a pinch
of salt. I got an Athlon64 3500+ 90nm(Winchester) in November and I still
think it's the sweet spot there; the 90nm core runs significantly cooler
than the 130nm Newcastle. Availability on 3500+(90nm) is spotty though and
synchronizing with the purchase of the right mbrd, which is also spotty,
can be frustrating. To save a few $$ and still get 90nm, and a 1000MHz
Hypertransport, there's also the 3200+.
 
keith said:
Tanya said:
hi and thanks again, Keith
[...below...]

keith wrote:



hi and thanks again for answering!
[...below...]

keith wrote:



On Mon, 28 Feb 2005 21:39:21 -0500, Tanya wrote:
[SNIP]


i got a question here (i cannot find the article though -- i read that there are several
raid levels and some are great in keeping *copies* soToSpeak of the drive's data...)

Sure, but none will protect you against the biggest source of data
loss; the loose nut behind the keyboard. RAID only protects against
one source of data loss - the hard disk itself.
An old mantra that was tossed around in the late 80's was that
RAID can protect your data from hard drive failures but only
backups can protect your data from your users. Still valid today.

however, say that the kb user is *conscious*: if there was a hd (mechanical) failure wouldn't
raid (level 4 for ex) be useful?

"Level 4" ????


Sure. You don't think they skipped from three to five without thinking of
something silly in between?

Each drive has a gerbil in an exercise wheel to keep the drives
going if the power dies ?

Sounds like what I've always thought of as 3 is technically 4.
What he describes as 3 is a nightmare.
RAID 1+0 and 0+1 are redundant. ;-) They're also known as RAID10 and
RAID01 (also redundant ;).

If you compare the drives in a 0+1 against drives containing the
same data from a 1+0, then superficially you will have two
identical sets of drives. However 1+0 and 0+1 differ in how the
drives are arranged on the controller(s) to achieve the best
performance.

For example, If there are six drives in the array then you will
typically use 0+1 by putting a three drive stripe set on each
channel of a two-channel SCSI controller. 1+0 with 6 drives and
only two SCSI channels doesn't perform nearly as well.

Similarly, if you have 4 IDE drives on two IDE ports that are
controlled by a cheap HighPoint or Promise controller, then you
will usually find that your performance is better if the striping
is done across the master/slave pair on each controller, with
each IDE port mirroring the other (RAID 1+0).
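
To make the two layouts concrete, here's a sketch of how the same six
drives might be grouped (hypothetical drive names; the point is the
channel assignment described above):

# Six drives on two channels (A and B), three drives per channel.
channel_a = ["A1", "A2", "A3"]
channel_b = ["B1", "B2", "B3"]

# RAID 0+1: stripe within each channel, then mirror stripe vs stripe.
stripes = [channel_a, channel_b]         # stripes[0] mirrors stripes[1]

# RAID 1+0: mirror pairs across channels, then stripe over the pairs.
pairs = list(zip(channel_a, channel_b))  # [("A1","B1"), ("A2","B2"), ...]

print("0+1 mirrored stripes:", stripes)
print("1+0 striped mirrors: ", pairs)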
 
It is my opinion that if you get a chip from both companies with
the same clock speed and L2 cache: Intel is generally faster, but AMD
is usually more stable (don't know why, just is)... You also gotta
look at features; Intel boasts hyperthreading on the new P4's (making
one CPU almost act as if it were two) and AMD boasts it has the only
64bit processor. Finally, cost-wise AMD processors are usually
cheaper, but Intel's MoBo's are usually cheaper :evil: which is kind
of ironic. I still prefer Intel though.
 
Fantabulum said:
It is my opinion that if you get a chip from both companies
with the same clock speed and L2 cache: Intel is generally faster,
but AMD is usually more stable (don't know why, just is)...

??? IMHO most people would say it the other way around.

For the same GHz clock, AMD K7/8 beat Intel P7 (Pentium4)
hands down. On common software, the current Intel P7 hardware
has much lower (30%) Instructions-per-Clock than AMD.
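
In other words performance scales as clock times IPC, so equal clocks
don't mean equal speed (toy numbers, using the 30% IPC gap above):

# performance ~ clock frequency * instructions per clock (IPC)
clock_ghz = 2.4          # same clock for both chips
amd_ipc = 1.0            # normalized
intel_ipc = 0.7          # ~30% lower IPC, per the post

print(f"AMD:   {clock_ghz * amd_ipc:.2f}")
print(f"Intel: {clock_ghz * intel_ipc:.2f}")
# At equal clock the AMD chip comes out ~1.4x faster (1.0 / 0.7).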

AMD has a slight stigma of instability, mostly from being
used in lower-end machines with poorer power supplies, memory
and motherboards. If Intel machines were built with those
components, they'd probably have _worse_ reliability.

-- Robert
 