While it may suit your agendum, that is not a quote of what I said.
My "agenda" is only to report facts and data. The facts on the ground
is that I have not seen any references that has shown that you can
reliably add more memory to the Opteron system @ 400 Mb/s.
The point here is that the issue concerns both speed AND capacity.
The references given suggest that higher speeds are possible, but
none shows that higher speeds are possible with full capacity.
I did not give any "concrete reference" URL... not even to THG - that was a
casual mention of one I remembered among several I've seen and I do *not*
keep bookmarks for such things... as common as they are. You need to get
out more - the "common knowledge" is in discussion groups all over the
place - I visit several Web Forums to gather info and troubleshoot, and I've
seen such discussions of memory configs in almost all of them.
As you may suspect, I read plenty about memory systems, and I would
vigorously challenge your memory in regard to the "common knowledge"
here. I do not believe it is common knowledge that anyone has
demonstrated running 64 DDR(1) SDRAM devices reliably @ 400 Mb/s.
If you can, please do cite a concrete reference URL. What you remember
having seen may not be what was actually there.
With the right devices, and with registering, the server market should be
able to do better - that's all I'm saying. There's nothing about an x4
device that would prohibit making higher-speed versions in x4 form.
The issue that prohibits making the higher speed version with x4 devices
is that you have to hang 16 of them (18 with ECC) on the same address
and command busses per rank. That's a rather heavy electrical load to
run @ 200 MHz. So, no, you can't just look at "enthusiast memory" built
with x8 parts and automatically assume that x4 parts will work just the
same at the same data rate, because you're going to need even faster
parts to meet the same timing.
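A quick back-of-the-envelope version of that arithmetic (Python, purely
illustrative): divide the channel's data width by the device width to see
how many devices end up sharing one rank's address/command bus.

    # Devices per rank = channel data width / device width.
    def devices_per_rank(bus_bits, device_bits):
        return bus_bits // device_bits

    for width in (4, 8):
        print("x%d: %2d per rank (%2d with ECC)"
              % (width, devices_per_rank(64, width), devices_per_rank(72, width)))

    # x4: 16 per rank (18 with ECC) -- registered server DIMMs
    # x8:  8 per rank ( 9 with ECC) -- typical unbuffered "enthusiast" DIMMs

Twice the devices per rank means roughly twice the load on the shared
address/command bus, which is why the x8-based comparison doesn't carry over.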
Ah, so the servers don't "go for speed"? What I'm saying is that with the
devices available now I don't see why they couldn't do better. You know
damned well that UDIMMs are 8 devices per rank, so why do you have "to ask
the next question"?
Because you cited some rather nebulous references regarding memory
from the enthusiast market and assumed that it would work in the server
world. I was simply pointing out that it's not going to work here because
of configuration differences and electrical loading considerations.
Ah now we have it: "conservative"
... that's my complaint. ;-)
Which is what is shipping in HP's Opteron server, and guaranteed to
work by HP. That guarantee provides the effective upper limit on
the maximum memory capacity of an Opteron server as of today. That
limit cannot be exceeded or changed arbitrarily. This, I believe, was
the crux of the contentious point...
No, the topic was *not* limited to a 4P Opteron box, not that it makes much
difference with Opteron anyway. I'm not sure how you're counting your "32
devices", but HP only makes the rules for its own boxes... not for Opteron
in general.
The 32 devices is not counting ECC; counting ECC, it's 36.
The limit is 2 ranks of 18 x4 DDR(1) devices running @ 400 Mb/s, and
that's the same number I am seeing over and over again. Not just HP,
but Tyan as well. So the limitation isn't just HP Opterons, or even
Opteron servers of any brand, but DDR(1) SDRAM memory controllers
@ 400 Mb/s. The Opteron just happens to have a DDR(1) SDRAM memory
controller that has to follow the same constraints as everyone else.
If you claim that the limit can be exceeded, please show me where
you're getting your impression from, because I'd certainly like to
see where someone is getting a fully loaded DDR(1) memory system to
run @ 400 Mb/s. **
** By fully loaded, I mean 4 ranks of 16 x4 devices, for a total of
64 devices (not counting ECC) per channel. With 1 Gbit devices, you'll
get 8 GB of memory per channel, and 16 GB of memory per Opteron
processor. In a 4P config, that would push your total memory
capacity to 64 GB instead of the current limit of 32 GB @ 400 Mb/s.
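For what it's worth, the same capacity arithmetic worked out in Python (a
sketch using only the numbers already quoted above: 1 Gbit x4 devices, ECC
not counted, two channels per Opteron):

    GB_PER_DEVICE = 1.0 / 8        # a 1 Gbit device holds 1/8 GB
    DEVICES_PER_RANK = 64 // 4     # 16 x4 devices fill a 64-bit channel
    CHANNELS_PER_CPU = 2           # Opteron's on-die dual-channel controller

    def channel_gb(ranks):
        return ranks * DEVICES_PER_RANK * GB_PER_DEVICE

    print(channel_gb(2) * CHANNELS_PER_CPU * 4)  # 2 ranks/channel @ 400 Mb/s: 32.0 GB in a 4P box
    print(channel_gb(4) * CHANNELS_PER_CPU * 4)  # 4 ranks/channel, "fully loaded": 64.0 GB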
So a recommended 0.1V difference for DDR400 makes it overclocked and a
different "set of specs"? <boggle> As I've already tried to point out, the
"baseline specs" for DDR are >3 years old - if you're going to deny the
ability to push as the silicon opportunity presents itself, I have to ask
why? We "allow" CPUs, GPUs, etc. to increase voltage and speed over a
silicon design lifecycle... why not memory?
The difference is that CPUs and GPUs are single-vendor-to-single-customer
parts. That is, Intel can change the specs of these devices to whatever
it wants, whenever it wants, as long as its customers don't mind
following the new spec. Same with AMD, NVIDIA, ATI, etc. DRAM doesn't
work that way. The dynamics of the commodity market mean that the parts
are supposed to be completely interchangeable. So the "interchangeable"
aspect of things greatly limits the standards definition process.
For example, Samsung can probably crank out much faster DDR parts because
it has excellent process tech, but some of the less well-funded fabs
can barely meet the spec as it is, and they would be hard pressed to build
these push-spec parts, so they would be resistant to changes in the JEDEC
standards definition.
The limitation of the JEDEC standard means that the faster guys can't
really run ahead of the slow guys, although they're finding some ways
around that with the push-spec parts designed for the "enthusiast
market". So, no, you can't just take advantage of opportunities
made available by faster process technology to make your own faster
DRAM parts. You have to wait until a sufficient number of DRAM
manufacturers agree with you on the new addendum to the spec, and a
sufficient number of design houses (Intel, IBM, Sun, AMD, etc.) agree
to the same proposed addendum, before the standard can be created and
you can sell your part as (JEDEC) DDRx xyz MHz, with customers
reasonably certain that your DDRx xyz MHz parts will operate
interchangeably with parts from Infineon, Samsung, Micron, Elpida,
etc.
Oh you mean like FBDIMMs... with AMBs which "will brown a burger better
than a George Foreman Grill can" [quote from this NG]?
FBDs have been in development for more than 2 years, and they're still in
development/testing/tweaking.
They'll enable servers with incredible amounts of memory, and the power
headache that comes with it. The AMB is just part of the problem. With
16 devices per FBD, you can get the ratio of AMB device power to DRAM
device power down to 15~20%.
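To make that 15~20% concrete, a rough sketch; the wattages below are my own
assumptions (AMB on the order of 4 W, an active DDR2 device around 1.5 W),
not figures from this thread:

    AMB_W   = 4.0     # assumed AMB power -- my number, not from this thread
    DRAM_W  = 1.5     # assumed power of one active DDR2 device -- also assumed
    DEVICES = 16      # fully populated FB-DIMM, as above

    print("AMB : DRAM power per DIMM ~ %.0f%%" % (100 * AMB_W / (DEVICES * DRAM_W)))  # ~17%

With fewer devices per FBD, the AMB's share of the DIMM power only gets worse.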
As we've discussed, the practical limit for DDR(1) @ 400 Mb/s is 32 devices
per channel, and for DDR2 devices it's 64 devices @ 400 Mb/s per
channel. Each channel will cost you about 100 control/data pins.
FBDs can get you 256 devices per channel with far fewer pins. You can
basically hang 10x more DRAM bits per pin. Now imagine a memory system with
10x more devices, and the amount of power that memory system can consume.
The AMB is a (relatively) small problem.
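A quick devices-per-pin comparison using the counts above; the ~70-pin
figure for an FBD channel is an assumption on my part, the rest come
straight from the paragraph:

    channels = {
        "DDR(1) @ 400 Mb/s": (32, 100),   # devices per channel, approx. pins
        "DDR2 @ 400 Mb/s":   (64, 100),
        "FBD":               (256, 70),   # pin count assumed, not from this post
    }

    for name, (devices, pins) in channels.items():
        print("%-18s %.2f devices per pin" % (name, devices / pins))

    # FBD comes out at roughly 10x the DDR(1) figure -- hence "10x more DRAM
    # bits per pin", and hence the power headache of a 10x larger memory system.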