4-way Opteron vs. Xeon-IBM X3 architecture

  • Thread starter: Yousuf Khan
FWIW, eight 2 GB PC3200 ECC Reg DIMMs (total of 16 GB) from
either Crucial or Corsair on a Tyan S2880, S2882, S2885, or S2895
seems to work every time.

Ah good - just to be sure... that's two channels of 4x dual-rank DIMMs each?
I haven't even looked at a Tyan/Opteron board but that's what I have a
picture of.
I did, however, have issues when I tested a mix of Crucial and
Corsair DIMMs. MemTest would run OK for a while and then the
systems would crash, and the length of the pre-crash interval
seemed to be random. Having each bank of four DIMM slots filled
with matching DIMMs seemed to be the only reliable way to go -
two matching pairs in each bank of four slots did not seem to be
good enough.

I had cases a while back where memtest86+ would pass after several hours
with an Athlon64 and 1T command rate; installing WinXP would get flakey
though, and that's when I decided to try the suspected flakey config with 1T
on the Prime95 Torture Test, and it failed in a few minutes. I set 2T to
install WinXP, which worked fine, then set 1T again; it booted fine
into WinXP and then ran Prime95.
 
I e-mailed and snail-mailed Tyan about this back in July or
August and I have yet to get a reply. Forget about phoning -
with my hearing I can't handle badly accented English
face-to-face, let alone over the phone.

I emailed Tyan about my SATA problem and got a reply rather quickly.
Didn't fix anything, but got a reply. ;-/ Maybe try again?
Crucial and Corsair were more helpful - or at least tried to be.
Other than suggesting that I try different BIOS versions they
didn't have much to offer - but at least they replied.

I have also discussed this in other forums but have only gotten very
limited confirmation. There just aren't enough people in those forums
who have Opty dualies and lots of 2 GB DIMMs to play with. (Far more
often than not I don't have that kind of stuff to experiment with.)

At $650/stick, I guess maybe.
Also: this seems to be a 2 GB DIMM issue - I have successfully done
mix'n'match with 1 GB PC3200 ECC Reg from a wide variety of
manufacturers - especially including Crucial, Corsair, Mushkin, and
Kingston. They might not play together if I throw them in at random,
but they do if I stick with singles or matched pairs.

Is it a number of ranks issue that's only seen at 2GB?
This is not really that big a deal for me - knowing that there is an
easily avoidable problem is sufficient. It might be a different story
if the RAM manufacturers stopped selling at more or less the same prices.

Sure. How many times do you really need to mix DRAM? At this level
(cost) one can afford to be a little careful. This system (Tyan
S2875S) has 1.5GB, two 256MB and two 512MB sticks, all Crucial. No, I
can't afford 2GB sticks either. ;-)
 
Looking at PC3200 (ECC Reg vs. unbuffered non-ECC, per GB)...

For 512 MB DIMMs, about 40% more.
For 1 GB DIMMs, about 55% more.

For 2 GB DIMMs there is no direct comparison.
PC3200 ECC Reg from Crucial is about $650 - which is about 115%
more *per GB* than 1 GB unbuffered non-ECC DIMMs.
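
To put numbers on the per-GB comparison, a quick Python sketch - the $650
is the figure above; the unbuffered price is back-derived from the quoted
~115% premium, so treat it as an assumption:

    # Rough $/GB comparison, using the prices discussed above.
    reg_2gb_price = 650.0    # 2 GB PC3200 ECC Reg (Crucial), per stick
    unbuf_1gb_price = 151.0  # 1 GB unbuffered non-ECC, assumed from the ~115% premium

    reg_per_gb = reg_2gb_price / 2
    unbuf_per_gb = unbuf_1gb_price / 1
    premium = (reg_per_gb - unbuf_per_gb) / unbuf_per_gb * 100
    print(round(reg_per_gb), round(unbuf_per_gb), round(premium))  # 325 151 115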

The RAM cost per desktop is typically about $60 or $120.
The RAM cost for a 2P server will seldom be less than $1K and can
easily hit $5K.

I know (BTDT, got the bill), I was making the point that "server" memory
is always more expensive. ;-)
And where do you "measure" ? At the memory manufacturer ?
Server/desktop/laptop manufacturer ? Consumer ?

Sure, there's that too, though at least on the first order they should be
the same as long as one doesn't measure apples to orangutans.
 
My "agenda" is only to report facts and data. The facts on the ground
is that I have not seen any references that has shown that you can
reliably add more memory to the Opteron system @ 400 Mb/s.

More than what?

When you cobble together a composed misquote, you distort my "facts" to
fit something you want to argue with.
The point here is that the issue concerns both speed AND capacity.
The references given suggest that higher speeds are possible, but
none shows that higher speeds are possible with full capacity.

Depends what you mean by "full capacity" - if you mean eight quad-rank
DIMMs then the memory mfrs don't even make PC3200 in that form factor; if
you look at the mbrds available to us, eight slots per CPU is not something
I've seen. OTOH right in this thread is a "reference"
(e-mail address removed) apparently demonstrating PC3200
operation with 72 devices per channel.
As you may suspect, I read plenty about memory systems, and I would
vigorously challenge your memory in regards to the "common knowledge"
here. I do not believe it's common knowledge for anyone to have
demonstrated running 64 DDR(1) SDRAM devices reliably @ 400 Mb/s.
If you can, please do cite a concrete reference URL. What you remember
having seen may not actually be what it was.

See above.
The issue that prohibits making the higher speed version with x4 devices
is that you have to hang 16 of them (18 with ECC) on the same address
and command busses per rank. That's a rather heavy electrical load to
run @ 200 MHz. So, no, you can't just look at "enthusiast memory" built
with x8 parts and automatically assume that x4 parts will work just the
same at the same data rate, because you're going to need even faster
parts to meet the same timing.
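
For the record, the device-count arithmetic behind that, as a small Python
sketch (standard 64-bit data bus, 72 bits with ECC):

    # Devices per rank on a DDR channel: every device in a rank also
    # loads the shared address/command bus, so x4 parts double the load.
    def devices_per_rank(device_width, ecc=False):
        data_bits = 72 if ecc else 64
        return data_bits // device_width

    for w in (8, 4):
        print(f"x{w}: {devices_per_rank(w)} devices/rank "
              f"({devices_per_rank(w, ecc=True)} with ECC)")
    # x8: 8 devices/rank (9 with ECC)
    # x4: 16 devices/rank (18 with ECC)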

The memory mfrs are producing PC3200 2GB RDIMMs with 36 devices. <shrug>
Because you cited some rather nebulous references in regards to memory
from the enthusiast market and assumed that it would work in the server
world. I was simply pointing out that's not going to work here because
of configuration differences and electrical loading considerations.

AMD has specs. They can be exceeded and have been regularly for Athlon64
unbuffered operation; it does seem that the specs for Opteron registered
could be and in fact are easily exceeded.
Which is what is shipping in HP's Opteron server, and guaranteed to
work by HP. That guarantee provides the effective upper limit to
the maximum memory capacity of an Opteron server as of today. That
limit cannot be exceeded or changed arbitrarily. This, I believe, was
the crux of the contentious point. . .

The original claim was that anything "> half-loaded" required a decrease in
clock speed; even AMD specs go above half-loaded.
32 devices, not counting ECC; 36 counting ECC.

The limit is 2 ranks of 18 x4 DDR(1) devices running @ 400 Mb/s, and
that's the same number I am seeing over and over again. Not just HP,
but Tyan as well. So the limitation isn't just HP Opterons, or even
Opterons of any brand of servers, but DDR(1) SDRAM memory controllers
@ 400 Mb/s. The Opteron just happens to have a DDR(1) SDRAM memory
controller that has to follow the same constraints as everyone else.
If you claim that the limit can be exceeded, please show me where
you're getting your impression from, because I'd certainly like to
see where someone is getting a fully loaded DDR(1) memory system to
run @ 400 Mb/s. **

Look up the specs again - looks like AMD is saying 3 ranks of 18 x4
devices.
** By fully loaded, I mean 4 ranks of 16 x4 devices, for a total of
64 devices (not counting ECC) per channel. With 1 Gbit devices, you'll
get 8 GB of memory per channel, and 16 GB of memory per Opteron
processor. In a 4P config, that would push your total memory
capacity to 64 GB instead of the current limit of 32 GB @ 400 Mb/s.
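
The capacity arithmetic for that fully loaded case, sketched in Python
(ECC devices excluded from capacity; dual-channel controller per Opteron):

    # 4 ranks x 16 x4 data devices/rank, 1 Gbit parts, 2 channels/CPU, 4 CPUs.
    GB_PER_GBIT = 1 / 8
    per_channel_gb = 4 * 16 * 1 * GB_PER_GBIT   # ranks * devices * Gbit/device
    per_cpu_gb = per_channel_gb * 2             # two channels per Opteron
    total_4p_gb = per_cpu_gb * 4                # four processors
    print(per_channel_gb, per_cpu_gb, total_4p_gb)  # 8.0 16.0 64.0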



The difference is that CPU's and GPU's are single vendor to single customer
parts. That is, Intel can change the specs of these devices to whatever
it wants, whenever it wants, as long as its customers don't mind
following the new spec. Same with AMD, NVIDIA, ATI, etc. DRAM doesn't
work that way. The dynamics of the commodity market means that the parts
are supposed to be completely interchangeable. So the "interchangeable"
aspect of things greatly limits the standards definition process.

Times have changed - we can now tweak the voltages of about every part of
the system from software. We now have gone as far as CPUs with variable
voltage specs. For memory there's up to .5V boost for some mbrds in .05V
increments; saying you shouldn't do that because it violates some old spec
is nuts. Would I personally run server memory at 3V - no! Would I go up
to 2.75V?... if it gave me a worthwhile performance advantage, probably.
For example, Samsung can probably crank out much faster DDR parts because
it has excellent process tech, but some of the less-well funded fabs
can barely meet the spec, and they would be hard pressed to meet these
push spec parts, so they would be resistant to changes in the JEDEC
standards definition.

The limitation of the JEDEC standard means that the faster guys can't
really run ahead of the slow guys, although they're finding some ways
around that with the push spec parts designed for the "enthusiast
market". So, no, you can't just take advantage of opportunities
made available with faster process technology to make your own faster
DRAM parts. You have to wait until a sufficient number of DRAM manufacturers
can agree with you on the new addendum to the spec, and a sufficient
number of design houses (Intel, IBM, Sun, AMD, etc etc) agree to the
same set of proposed addenda to the spec, before the standard can be
created, and you can sell your part as (JEDEC) DDRx xyz MHz, and customers
can be reasonably certain that your DDRx xyz MHz parts can operate
interchangeably with parts from Infineon, Samsung, Micron, Elpida,
etc.

Every DIMM mfr is selling unbuffered boutique parts rated at PC4000/DDR500
(2.8V seems the norm) - where they get the chips from matters not...
whether it violates some enshrined, 3-year-old JEDEC document is of no
importance to the people selling or buying.
Oh you mean like FBDIMMs... with AMBs which "will brown a burger better
than a George Foreman Grill can" [quote from this NG]?:-)

FBD's have been in development for more than 2 years, and they're still in
development/testing/tweaking.

Yeah and the point is that Intel dumped a buncha $$ into Micron to get it
kick-started... thus the hint of irony in your remark about Micron.;-)
They'll enable servers with incredible amounts of memory, and the power
headache that comes with it. The AMB is just part of the problem. With
16 devices per FBD, you can get the ratio of AMB device power to DRAM
device power down to 15~20%.
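
A rough sense of where that 15~20% band comes from, as a sketch - the ~4W
AMB figure shows up later in this thread, and the per-device DRAM power
here is purely an assumed number for illustration:

    # AMB-to-DRAM power ratio for a fully populated FBDIMM.
    amb_w = 4.0          # fully-on AMB, figure cited later in the thread
    dram_device_w = 1.4  # assumed active power per DRAM device
    devices = 16         # devices per FBD, as stated above
    print(f"{amb_w / (devices * dram_device_w):.0%}")  # ~18%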

But the AMB power usage must depend on its position in the channel... so
are they going to be able to lose the heatpipes on them?:-)
Surely the Primary is always going to get hammered.
 
Depends what you mean by "full capacity" - if you mean eight quad-rank
DIMMs then the memory mfrs don't even make PC3200 in that form factor; if
you look at the mbrds available to us, eight slots per CPU is not something
I've seen. OTOH right in this thread is a "reference"
(e-mail address removed) apparently demonstrating PC3200
operation with 72 devices per channel.

I've already defined what I meant by "full capacity": 4 ranks of
DRAM devices, 18 devices per rank, 72 devices per channel running
at 400 Mb/s.

As to the reference, the message header points right back in this thread,
and there's nothing in that post that shows 72 devices per channel @
400 Mb/s.


See above.

There's nothing in the referenced post.


The memory mfrs are producing PC3200 2GB RDIMMs with 36 devices. <shrug>

You need them to make PC3200 4 GB RDIMMs with 36 devices and work in the
2 slot boards to get to 8 GB per channel and 16 GB per CPU. Once you have
that, then you can raise the memory capacity of the 4P Opteron box to
64 GB. Until then, the limit for the 4P Opteron box remains as 32 GB of
DDR(1) @ 400 Mb/s.
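
The arithmetic behind that requirement, sketched out - a 36-device RDIMM
is 2 ranks of 18 devices, of which 32 carry data and 4 are ECC:

    # Device density needed for a 4 GB, 36-device RDIMM.
    data_devices = 32
    dimm_gb = 4
    print(dimm_gb * 8 / data_devices)     # 1.0 -> needs 1 Gbit DRAMs

    # Two such DIMMs per channel on a 2-slot board:
    per_channel_gb = 2 * dimm_gb          # 8 GB/channel
    per_cpu_gb = per_channel_gb * 2       # 16 GB/CPU (two channels)
    print(per_channel_gb, per_cpu_gb, per_cpu_gb * 4)   # 8 16 64 (4P box)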


AMD has specs. They can be exceeded and have been regularly for Athlon64
unbuffered operation; it does seem that the specs for Opteron registered
could be and in fact are easily exceeded.

No one running a 4P server and using it commercially will run the box with
a memory system configuration that exceeds spec.


The original claim was that anything "> half-loaded" required a decrease in
clock speed; even AMD specs go above half-loaded.

That claim is supported by the specification of a validated, shipping
system. Your argument that the server folks can run with more memory
in the memory system by pushing the configuration beyond the validated
spec - does not hold water.

Times have changed - we can now tweak the voltages of about every part of
the system from software. We now have gone as far as CPUs with variable
voltage specs. For memory there's up to .5V boost for some mbrds in .05V
increments; saying you shouldn't do that because it violates some old spec
is nuts. Would I personally run server memory at 3V - no! Would I go up
to 2.75V?... if it gave me a worthwhile performance advantage, probably.

I have never heard of any IT manager sitting around tuning the voltage on
the CPU or memory of his/her latest 4P Opteron/Xeon box. This scenario
sounds rather far fetched.

Every DIMM mfr is selling unbuffered boutique parts rated at PC4000/DDR500
(2.8V seems the norm) - where they get the chips from matters not...
whether it violates some enshrined, 3-year-old JEDEC document is of no
importance to the people selling or buying.

No one buys these parts to put in a 4P Opteron/Xeon server, and the
sentiment does not apply.

But the AMB power usage must depend on its position in the channel... so
are they going to be able to lose the heatpipes on them?:-)
Surely the Primary is always going to get hammered.

Heatpipes? From what I've seen, it's just a small heatsink. I think
it's about 4W per fully-on AMB, and you can selectively turn off
parts of it to save power.
 
I've already defined what I meant by "full capacity": 4 ranks of
DRAM devices, 18 devices per rank, 72 devices per channel running
at 400 Mb/s.

As to the reference, the message header points right back in this thread,
and there's nothing in that post that shows 72 devices per channel @
400 Mb/s.

What is this - myopia?... denial? Rob has 2x Crucial 2GB RDIMMs on each
channel. I'd have thought you'd be able to figure it out. Are you even
reading what I've said?... that the reference was to a post in this
thread... or are you just shooting from the hip?
There's nothing in the referenced post.

I think you can do better than that.
You need them to make PC3200 4 GB RDIMMs with 36 devices and work in the
2 slot boards to get to 8 GB per channel and 16 GB per CPU. Once you have
that, then you can raise the memory capacity of the 4P Opteron box to
64 GB. Until then, the limit for the 4P Opteron box remains as 32 GB of
DDR(1) @ 400 Mb/s.

Ah, it's denial.
No one running a 4P server and using it commercially will run the box with
a memory system configuration that exceeds spec.

If the spec is 3-years old and the devices and assemblies have improved,
there's no reason not to. To deliberately ignore technology progress is
perverse.
That claim is supported by the specification of a validated, shipping
system. Your argument that the server folks can run with more memory
in the memory system by pushing the configuration beyond the validated
spec - does not hold water.

No pushing, or water, is required. If mfrs want to derate AMD's specs
that's up to them.
No one buys these parts to put in a 4P Opteron/Xeon server, and the
sentiment does not apply.

There you go again trying to suggest that I said something I didn't. I
thought it seemed perfectly obvious here and in my previous mention of
those PC4000 DIMMs that the fact that such devices are available means
that higher performing devices can be used for servers. I'm getting tired
of repeating myself here.
Heatpipes? From what I've seen, it's just a small heatsink. I think
it's about 4W per fully-on AMB, and you can selectively turn off
parts of it to save power.

This
http://www.tecchannel.de/_misc/img/detailoriginal.cfm?pk=346982&fk=432957&id=il-74145445969594731
is what I'd call a heatpipe heatsink and it is *not* small in relation to
the part. Other sources have commented on the heat problem and people
don't normally put fans over devices which are not hot - you'll need to do
more than think... like maybe burn your finger on the thing.
 
What is this - myopia?... denial? Rob has 2x Crucial 2GB RDIMMs on each
channel. I'd have thought you'd be able to figure it out. Are you even
reading what I've said?... that the reference was to a post in this
thread... or are you just shooting from the hip?

Rob has 2 2-GB RDIMMs per channel. This gets him 4 GB per channel and
8 GB per CPU.

This is the same configuration as the 32 GB per 4P Opteron box @ 400 Mb/s
configuration that you believed to be limited.

I had thought that Rob's difficulty in configuring even this setup with
mixed RDIMMs would go some ways in convincing you that you don't just
throw 8 GB of DDR(1) memory per channel into an Opteron box and expect
it to work @ 400 Mb/s - apparently I was incorrect in expecting you to see
that the two configurations are equivalent.
Ah, it's denial.

It's simple mathematics. You stated that more DDR(1) memory can be crammed
into the 4P Opteron server @ 400 Mb/s. You need 2 4-GB DIMMs with 36 1 Gb
DRAM devices per DIMM to get that done.
If the spec is 3-years old and the devices and assemblies have improved,
there's no reason not to. To deliberately ignore technology progress is
perverse.

Reliability considerations trump everything in the traditional medium/
heavy duty server segment. You don't throw parts that have not gone through
many device-years of qualification into the nearest server box.
No pushing, or water, is required. If mfrs want to derate AMD's specs
that's up to them.

Which means 32 GB in the 4P Opteron box is as good as it gets (for now,
until DDR2 parts arrive), and your previous assertions in regards to
HP Opteron's memory capacity @ 400 Mb/s are not valid.
There you go again trying to suggest that I said something I didn't. I
thought it seemed perfectly obvious here and in my previous mention of
those PC4000 DIMMs that the fact that such devices are available means
that higher performing devices can be used for servers. I'm getting tired
of repeating myself here.

The over spec'ed parts are never going to end up in a server. Power
increases in proportion to the square of the voltage. In big servers packed
with DRAM devices, you're not going to want to increase memory capacity
by cranking up the voltage to get the performance, even assuming you
can get such a memory system to qualify.

You derided FBD's for the power consumption of the AMB, yet somehow
power increase to shoehorn overspec'ed memory into a server box is
acceptable?

It's not going to happen, and your repeatedly bringing
the overspec'ed parts into a discussion about server memory is
distracting to say the least.
This
http://www.tecchannel.de/_misc/img/detailoriginal.cfm?pk=346982&fk=432957&id=il-74145445969594731
is what I'd call a heatpipe heatsink and it is *not* small in relation to
the part. Other sources have commented on the heat problem and people
don't normally put fans over devices which are not hot - you'll need to do
more than think... like maybe burn your finger on the thing.

"Heatpipe" has specific connectation, and it does not apply here. That is
a small clip on heatsink.

The reason that fan is there is because the MB is sitting out in the open,
and there's no airflow over the AMB. In a real system, air would flow over
the FBD's. You can see in the picture that the FBD's are aligned with the
direction of the heatsink fins on the CPU.
 
David said:
Rob has 2 2-GB RDIMMs per channel. This gets him 4 GB per channel and
8 GB per CPU.

And if you want to play that game, the MSI K8N Master2 FAR is a
dual Socket 940 board with six DIMM slots - all on CPU0.

I did not personally evaluate that board, but I did consider it
for a while - seriously enough to ask both Crucial and Corsair if
it would work with 12 GB of PC3200. Crucial said yes and Corsair
said no. Corsair would only guarantee compatibility with their
512 MB and 1 GB DIMMs.

As well, there is the IWill DK88 Opty dualie that has 8 DIMM
slots per processor. However, IWill only promises PC2700 speeds.
 
Rob has 2 2-GB RDIMMs per channel. This gets him 4 GB per channel and
8 GB per CPU.

This is the same configuration as the 32 GB per 4P Opteron box @ 400 Mb/s
configuration that you believed to be limited.

No it is NOT - it is 4 full ranks of 18x 512Mb devices each, which has
repeatedly been claimed to not work at PC3200 by Kanter... & you, just 4
paras above.
I had thought that Rob's difficulty in configuring even this setup with
mixed RDIMMs would go some ways in convincing you that you don't just
throw 8 GB of DDR(1) memory per channel into an Opteron box and expect
it to work @ 400 Mb/s - apparently I was incorrect in expecting you to see
that the two configurations are equivalent.

This is unbelievable - you've been harping on about 64/72 devices per
channel and when presented with the concrete evidence that it does work at
PC3200, you wriggle with some imagined claim of mine... some ethereal 8GB
you dreamt up. The only thing "incorrect" is your persistent changing of
the "target".

As for mixing different mfr DIMMs, I don't see any "difficulties" here at
all - just another imagined "fact"; that is well known to be a less than
optimal way of going about getting speed and capacity - it has been, to a
certain extent, since PC-133 SDRAM, and with DDR-SDRAM it's a very risky
proposition and not something I'd expect to work well. The way I read it,
Rob was experimenting just to see, but was happy to use a single memory mfr
per mbrd.
It's simple mathematics. You stated that more DDR(1) memory can be crammed
into the 4P Opteron server @ 400 Mb/s. You need 2 4-GB DIMMs with 36 1 Gb
DRAM devices per DIMM to get that done.

No it's not mathematics - it's arithmetic... even simpler: Micron does not
have DDR400 1Gb devices... yet; the 2GB RDIMMs in question have 18x 512Mb
devices per side... two double sided DIMMs... four full ranks per
channel... 72 devices. It couldn't be clearer.
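
Spelled out, using the rank and device counts just described for those
2 GB RDIMMs:

    # Rob's setup: two double-sided 2 GB PC3200 RDIMMs per channel,
    # each side one rank of 18 x4 512 Mbit parts (16 data + 2 ECC).
    ranks = 2 * 2                            # 2 ranks/DIMM x 2 DIMMs
    devices = ranks * 18                     # devices per channel
    data_gb = ranks * 16 * 512 / 8 / 1024    # data devices only
    print(ranks, devices, data_gb)           # 4 72 4.0
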
Reliability considerations trump everything in the traditional medium/
heavy duty server segment. You don't throw parts that have not gone through
many device-years of qualification into the nearest server box.

Oh damn the traditional - the results are in.
Which means 32 GB in the 4P Opteron box is as good as it gets (for now,
until DDR2 parts arrive), and your previous assertions in regards to
HP Opteron's memory capacity @ 400 Mb/s are not valid.

The 32GB exceeds AMD's specs... if you'd only look. To go higher you need
1Gb parts with DDR400 rating or quad-rank RDIMMs which are only qualified
up to DDR333... that I've seen.
The over spec'ed parts are never going to end up in a server. Power
increases in proportion to the square of the voltage. In big servers packed
with DRAM devices, you're not going to want to increase memory capacity
by cranking up the voltage to get the performance, even assuming you
can get such a memory system to qualify.

They're only over-spec'd by your 3-year-old rulebook. In fact they work
fine by all accounts in desktops and workstations and the mfr makes the
specs.
You derided FBD's for the power consumption of the AMB, yet somehow
power increase to shoehorn overspec'ed memory into a server box is
acceptable?

You read what I said as derision - your choice... reading my irony vs.
your derision on Micron as criticism of FBDIMM is err, contorted logic from
my POV. I was originally optimistic about FBDIMMs but when I see what's
available this late in the game, I'm beginning to wonder about them. It
seems that Intel, in its desperation to catch up, might be jumping the gun.

As for power problems, .2V extra is hardly comparable at this stage of the
DDR1 game and it's not a given that it would be required at PC3200... just
the better qualified devices. Hell, power supplies can be out by that much
anyway.
It's not going to happen, and your repeatedly bringing
the overspec'ed parts into a discussion about server memory is
distracting to say the least.

I hope it's as distracting as your persistent changing of the framework and
"target" of the discussion.
"Heatpipe" has specific connectation, and it does not apply here. That is
a small clip on heatsink.

More "Wang's rules".Ô_ô I do not know what connectation means but if it
has pipes (capillary tubes actually) filled with a liquid/vapor phase
coolant inside, it's a heatpipe. You can apply some personal
"connectation" for your own view of things - you cannot impose it on other
people.
 
And if you want to play that game, the MSI K8N Master2 FAR is a
dual Socket 940 board with six DIMM slots - all on CPU0.
I did not personally evaluate that board, but I did consider it
for a while - seriously enough to ask both Crucial and Corsair if
it would work with 12 GB of PC3200. Crucial said yes and Corsair
said no. Corsair would only guarantee compatibility with their
512 MB and 1 GB DIMMs.

I'm not sure this advances the case for the Opteron's memory
capacity @ 400 Mb/s. Forgetting for a moment that there are only
6 DIMM slots, all on one CPU of a dual CPU board, and freely assuming
that 12 total DIMM slots for a 2P board is just a matter of
system board real estate, you have Corsair saying that 6 GB per
CPU is the limit they'll support on that board while Crucial
claims that they will support 12 GB per CPU. I think this would've
been stronger evidence if you had the personal experience of
testing this configuration and been able to report or deny any issues
in running 12 GB per CPU just as you did for the 8 GB configuration
on your present setup.

Regardless, the 12 GB per CPU appears to be the maximum amount of
memory that AMD's Opteron is spec'ed to support @ 400 Mb/s - for DDR(1).
It's still quite far away from the 32 GB per CPU (128 GB per 4P box)
that George *seems* to have thought possible.

If I may post portions from this thread that I responded to. . .

http://groups.google.com/group/comp...chips/msg/7e6cef82ad46a6f1?dmode=source&hl=en

DK> http://h18004.www1.hp.com/products/servers/proliantdl585/specifications.html
DK> Upto 32GB DDR400, 48GB @ DDR333 and 128GB @ DDR266.

DK> Now, to me that looks an awful like they are decreasing memory speeds
DK> as capacity goes up. In fact, since they only sell the system with
DK> DDR333 and DDR400, it looks to me like if you want 128GB, you have to
DK> downclock the memory.

GM> They're specs fer chrissakes - experience has shown differently, if you'd
GM> only look around.

DK> Experience? What experience? Sadly, I don't have dozens of these guys
DK> running around in my house. Can you show systems with benchmarks that
DK> use 128GB of memory running at full speed?

GM> The 128GB is not the issue - you said "> half-loaded" so what's important
GM> is how you get to 128GB and AMD's specs talk of number of ranks of memory.
GM> There are certainly Web sites who have tried successfully to run at DDR400
GM> with a full rank count on the memory channels - sorry but I did not
GM> bookmark them. Obviously the number of CPUs matters as well with AMD...
GM> how you divide your 128GB up.

It seemed to me that George had the notion that 128 GB @ 400 Mb/s was
possible in a 4P configuration, but it was difficult to tell since he
replied to a specific point with a generic counter statement.

After all the discussion, it seems that we're now getting
up to 12 GB per CPU @ 400 Mb/s for DDR(1) Opterons, still a ways to go
to figure out how to get to 128 GB in the 4P box @ 400 Mb/s - repeaters?
That would certainly kill the latency here.
As well, there is the IWill DK88 Opty dualie that has 8 DIMM
slots per processor. However, IWill only promises PC2700 speeds.

That is the theme of the discussion here. Capacity and speed
trade-offs. You can stuff the memory system full of DRAM devices
and run them at a lower data rate, but at 400 Mb/s, you have to
accept substantially lower capacity. HP is limiting the capacity
to 4 GB per channel and 8 GB per CPU. IMO, it seems to be a
reasonable configuration for an Opteron server box.
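
For reference, the per-channel capacities implied by the HP DL585 tiers
quoted earlier, by my arithmetic (4 CPUs x 2 channels):

    # HP DL585 tiers: total capacity vs. memory speed, per channel.
    channels = 4 * 2
    for total_gb, speed in ((32, "DDR400"), (48, "DDR333"), (128, "DDR266")):
        print(speed, total_gb / channels, "GB/channel")
    # DDR400 4.0, DDR333 6.0, DDR266 16.0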
 