The practical consequences of FB-DIMM


Robert Myers

http://www.computerweekly.com/Articles/2005/06/14/210381/Bigadvancesinserverpowerinsixmonths.htm

I could have titled the post "Big advances in server power in six
months," although that should really mean "Big advances in [Intel]
server power in six months," since that's what the article is about,
and those wanting a significant advance in server power right now
would probably be looking at Opteron. The article claims that the big
advances are sufficiently significant that buyers should consider
waiting.

In any case, the big advance would appear to be dual core xeon (the
big advance is that they share cache?) and FB-DIMM. Somewhere in the
same time frame (I don't keep track of AMD), AMD should be close to
quad core chips, so CPU count can't be the big deal.

Aside from the possible advantages to a shared cache, the big
difference I can see is FB-DIMM, for which the difference is lower pin
count, which means more memory capacity. The article mentions MySQL
and SQL Server 2005 as likely to benefit from expanded memory
capacity.

If I try to parse through the market hype, I come up with this: you
really need more memory to take advantage of more cores, and to add
more memory, you need something like FB-DIMM. Is that really a big
advance in server power?
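
For the sake of argument, here is a crude back-of-envelope version of
the pin-count claim. Every number below is assumed, just to show the
shape of the argument: the saving, if any, is at the memory
controller, where a narrow serial link to the buffer replaces a wide
parallel channel, so a fixed pin budget buys more channels, and each
FB-DIMM channel can carry more modules than a registered-DDR2 channel.

# Back-of-envelope only: all figures are assumed round numbers, not
# the pin counts of any particular chipset.
DDR2_SIGNALS_PER_CHANNEL = 120  # wide parallel data/strobe/addr bus (assumed)
FBD_SIGNALS_PER_CHANNEL = 70    # narrow serial lanes to the first buffer (assumed)
DDR2_DIMMS_PER_CHANNEL = 4      # loading-limited registered DDR2 (assumed)
FBD_DIMMS_PER_CHANNEL = 8       # FB-DIMM daisy chain allows up to 8
GB_PER_DIMM = 4                 # assumed module size
PIN_BUDGET = 480                # signal pins the controller can spare (assumed)

for name, signals, dimms in [
        ("registered DDR2", DDR2_SIGNALS_PER_CHANNEL, DDR2_DIMMS_PER_CHANNEL),
        ("FB-DIMM", FBD_SIGNALS_PER_CHANNEL, FBD_DIMMS_PER_CHANNEL)]:
    channels = PIN_BUDGET // signals
    capacity = channels * dimms * GB_PER_DIMM
    print(f"{name}: {channels} channels x {dimms} DIMMs = {capacity} GB")

With those made-up numbers, the same pin budget reaches roughly three
times the memory, which is the whole of the capacity argument.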

RM
 
Robert said:
http://www.computerweekly.com/Articles/2005/06/14/210381/Bigadvancesinserverpowerinsixmonths.htm

I could have titled the post "Big advances in server power in six
months," although that should really mean "Big advances in [Intel]
server power in six months," since that's what the article is about,
and those wanting a significant advance in server power right now
would probably be looking at Opteron. The article claims that the big
advances are sufficiently significant that buyers should consider
waiting.

Ooh, that's not going to please Intel very much. Its server sales are
already showing signs of shakiness, as reports out of Taiwan seem to
indicate that the majority of server motherboards coming out nowadays
are for Opteron, except at Supermicro. If people are being told to
wait on Xeon, then that sales number is going to look even shakier.

AnandTech: Industry Update - Q2-2005: Chipset wars, AMD's growing
market share and more...
http://www.anandtech.com/tradeshows/showdoc.aspx?i=2444
In any case, the big advance would appear to be dual core xeon (the
big advance is that they share cache?) and FB-DIMM. Somewhere in the
same time frame (I don't keep track of AMD), AMD should be close to
quad core chips, so CPU count can't be the big deal.

They don't mention whether the new DC Xeons will have a shared cache,
unless they're based on the Yonah Pentium M. I doubt they will be
Yonah; more likely they'll still be based on the NetBurst
architecture, stretched out another couple of quarters.

AMD isn't expecting to have quad-cores till 2007.

FB-DIMM would be the next evolution of registered DIMMs. It'll help
increase the density of memory modules on a server, at the expense of a
little latency.
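
A rough sketch of the latency side of that trade (the per-hop delay
below is an assumed figure, not a measured one): FB-DIMMs sit on a
daisy chain, so a read from a far slot passes through every buffer in
between.

# Illustration only: assumed latencies, not figures for any real part.
DRAM_ACCESS_NS = 45.0   # access time on the module itself (assumed)
AMB_HOP_NS = 4.0        # extra delay per buffer traversed (assumed)

for slot in range(1, 9):  # up to 8 FB-DIMMs on one channel
    print(f"DIMM in slot {slot}: ~{DRAM_ACCESS_NS + slot * AMB_HOP_NS:.0f} ns")

So the farther down the chain the data lives, the more of those small
buffer delays pile up, which is where the "little latency" comes from.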
If I try to parse through the market hype, I come up with this: you
really need more memory to take advantage of more cores, and to add
more memory, you need something like FB-DIMM. Is that really a big
advance in server power?

Sure, why not? You can never have too much memory.

Yousuf Khan
 
Yousuf Khan wrote:

Sure, why not? You can never have too much memory.

Well, sure. The question is: will it (more memory capacity), or
should it, drive purchase decisions for any significant fraction of
buyers? You have to believe that server capacity would be limited by
a memory footprint of 8 GB for a significant number of buyers for
FB-DIMM to matter all that much. The real question is: is there
something else I'm missing?

RM
 
If you're doing server consolidation, then the critical technologies
are SAN, CPU-based virtualization, and (related to virtualization) more
memory. That's the only way to make one server take over the duties
from several little ones.

Also, the more memory you have, the more likely it is that you can
replace one of your big RISC servers with a little x86 box. The RISC
boxes typically have twice as many memory slots as x86 boxes. That
doesn't necessarily mean the RISC boxes have very high-performance
memory compared to x86 boxes; some applications just require a lot of
memory, and the speed of that memory is not as important.

Yousuf Khan
 
Yousuf Khan said:
If you're doing server consolidation, then the critical technologies
are SAN, CPU-based virtualization, and (related to virtualization) more
memory. That's the only way to make one server take over the duties
from several little ones.

Also, the more memory you have, the more likely it is that you can
replace one of your big RISC servers with a little x86 box. The RISC
boxes typically have twice as many memory slots as x86 boxes. That
doesn't necessarily mean the RISC boxes have very high-performance
memory compared to x86 boxes; some applications just require a lot of
memory, and the speed of that memory is not as important.
Ah. The key is virtualization. I was envisioning databases somehow
magically needing the larger memory footprint because of more
throughput (the article sort of implies that with the SQL references).
The real story is that, if you try to jam multiple servers into one
box, it's a fair bet you'll need more memory than a single x86
platform can currently supply.

The unstated option here is that buyers who either had to buy IBM or
make do with a bunch of x86 boxes can now clean up all that cable
clutter and still buy from Dell. It would be impolitic for an industry
rag to be so blunt, I suppose.

RM
 
Robert said:
Ah. The key is virtualization. I was envisioning databases somehow
magically needing the larger memory footprint because of more
throughput (the article sort of implies that with the SQL references).
The real story is that, if you try to jam multiple servers into one
box, it's a fair bet you'll need more memory than a single x86
platform can currently supply.

Well, it does come down to requiring memory for databases in the end.
Except in this case it's not for a single database but for multiple
databases.
The unstated option here is that buyers who either had to buy IBM or
make do with a bunch of x86 boxes can now clean up all that cable
clutter and still buy from Dell. It would be impolitic for an industry
rag to be so blunt, I suppose.

Or they could still buy their x86 gear from IBM. Dell would
not be a big player in the server consolidation game, with only a 4-way
Xeon box as their biggest asset. When people talk about server
consolidation they're usually talking about folding 5, or 6, ... or a
dozen boxes into one. A 4-way Xeon might be able to replace four 1-way
servers, or two 2-way servers, even with virtualization. To generate
the economies of scale, you have to be able to replace many little
boxes, with X processors among them, with a single box of Y
processors, where Y is less than X. So, for example, it might be
easier to replace sixteen 2-way boxes with a single 8-way box.
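
Crude consolidation arithmetic, with utilization figures assumed
purely for illustration (real sizing needs measured peak loads): the
trick only works because the little boxes sit mostly idle.

# Assumed figures for illustration; not a sizing recommendation.
small_boxes = 16        # existing servers
cpus_per_small_box = 2
avg_utilization = 0.15  # little boxes mostly idle (assumed)
big_box_cpus = 8
headroom = 0.70         # don't plan past 70% load on the big box (assumed)

cpu_demand = small_boxes * cpus_per_small_box * avg_utilization
load = cpu_demand / big_box_cpus
print(f"Aggregate demand: {cpu_demand:.1f} CPUs' worth of work")
print(f"Load on an 8-way: {load:.0%} (budget {headroom:.0%})")

With those numbers, sixteen 2-way boxes put about 4.8 CPUs' worth of
work on the 8-way, around 60% load, which is why Y can be so much
smaller than X.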

Yousuf Khan
 
Robert Myers said:
Ah. The key is virtualization. I was envisioning databases somehow
magically needing the larger memory footprint because of more
throughput (the article sort of implies that with the SQL references).
The real story is that, if you try to jam multiple servers into one
box, it's a fair bet you'll need more memory than a single x86
platform can currently supply.

The unstated option here is that buyers who either had to buy IBM or
make do with a bunch of x86 boxes can now clean up all that cable
clutter and still buy from Dell. It would be impolitic for an industry
rag to be so blunt, I suppose.
Where have you been? IBM has been selling server consolidation on all
kinds of platforms for a long time. You can even take an iSeries and
stuff it full of attached Intel processors and go to town. And you can
put a lot of memory on an Intel box without FB-DIMM, which is good
because it isn't out yet.

Try to get caught up a little, eh?

del
 
del said:
Where have you been? IBM has been selling server consolidation on all
kinds of platforms for a long time. You can even take an iSeries and
stuff it full of attached Intel processors and go to town. And you can
put a lot of memory on an Intel box without FB-DIMM, which is good
because it isn't out yet.

Try to get caught up a little, eh?
It isn't IBM I'm trying to catch up on. IBM has always blazed its own
trail. It's the rest of the world that uses x86, and, in particular,
how Intel's processors will be competitively placed.

RM
 
Robert said:
http://www.computerweekly.com/Articles/2005/06/14/210381/Bigadvancesinserverpowerinsixmonths.htm

I could have titled the post "Big advances in server power in six
months," although that should really mean "Big advances in [Intel]
server power in six months," since that's what the article is about,
and those wanting a significant advance in server power right now
would probably be looking at Opteron. The article claims that the big
advances are sufficiently significant that buyers should consider
waiting.

I could be wrong, but I thought the lower pin count (thus the potential
for more RAM) was only half the benefit of FB-DIMM.

The other half, and just as significant for Intel, is that it allows an
on-die memory controller in the CPU that's not married to a specific
memory type. Behind the buffer the DIMM can use whatever memory type it
wants. This would, in theory, make switches from things like DDR2 to
DDR3 completely transparent to the rest of the system.

At least that's what I thought I read somewhere.
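
If that reading is right, the abstraction would look roughly like the
sketch below. This is a conceptual model only, not how the silicon is
actually organized, and the class names are made up: the point is just
that the host side speaks one fixed channel protocol to the buffer,
and only the buffer knows what kind of DRAM sits behind it.

# Conceptual sketch only; names and structure are illustrative.
class DDR2Dram:
    def read(self, addr): return f"DDR2 burst at {addr:#x}"

class DDR3Dram:
    def read(self, addr): return f"DDR3 burst at {addr:#x}"

class AdvancedMemoryBuffer:
    """Translates fixed channel commands into whatever the DRAM speaks."""
    def __init__(self, dram): self.dram = dram
    def channel_read(self, addr): return self.dram.read(addr)

class MemoryController:
    """Host side: only ever speaks the buffer's channel protocol."""
    def __init__(self, amb): self.amb = amb
    def read(self, addr): return self.amb.channel_read(addr)

# Swapping DDR2 for DDR3 changes nothing on the controller side:
for dram in (DDR2Dram(), DDR3Dram()):
    print(MemoryController(AdvancedMemoryBuffer(dram)).read(0x1000))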
 
Robert Myers said:
It isn't IBM I'm trying to catch up on. IBM has always blazed its own
trail. It's the rest of the world that uses x86, and, in particular,
how Intel's processors will be competitively placed.


They seem to be competitively placed in IBM servers at present ;-)
Not to mention Dell servers, which will certainly move upscale.

Then there are all the smaller server companies, chipping away
at the margins of the business. One or another of them might make
a good move into 1U or blade 4-16 processor land, particularly
when there are 4 processors per chip and then 8 ...

i.e. I see some potential "interesting" moves in that market space
over the next 2-3 years.

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
 
Hank said:
They seem to be competitively placed in IBM servers at present ;-)
Not to mention Dell servers, which will certainly move upscale.

Then there are all the smaller server companies, chipping away
at the margins of the business. One or another of them might make
a good move into 1U or blade 4-16 processor land, particularly
when there are 4 processors per chip and then 8 ...

i.e. I see some potential "interesting" moves in that market space
over the next 2-3 years.

I'm sure there will be. Intel *seems* to have been caught a little
flat-footed by AMD, but Intel doesn't seem particularly exercised. Not
only that, but this news article doesn't say: if you might need to
replace equipment now, consider one of those spiffy new Opteron boxes.
It says: if you don't need to replace right away, wait a little because
there's really neat stuff coming from Intel.

What is that neat stuff? Dual-core chips? We've got twofers of some
kind from Intel already, don't we? Okay, so maybe chips that share
cache between two cores will be much more impressive (do we know that
they will share cache?) than the dual core chips Intel is shipping now.

What does that leave? FB-DIMM, apparently. Slightly higher memory
bandwidth, much lower pin count, more memory on a standard motherboard.
With virtualization technology, that means relatively inexpensive
server boards to consolidate multiple smaller servers.

As Del points out, IBM already has all that capability, but you do
have to pay for an iSeries.

Or maybe there's something about the upcoming intel technology that I
just missed completely.

RM
 
gaf1234567890 said:
I could be wrong, but I thought the lower pin count (thus the potential
for more RAM) was only half the benefit of FB-DIMM.

The other half, and just as significant for Intel, is that it allows an
on-die memory controller in the CPU that's not married to a specific
memory type. Behind the buffer the DIMM can use whatever memory type it
wants. This would, in theory, make switches from things like DDR2 to
DDR3 completely transparent to the rest of the system.

At least that's what I thought I read somewhere.

That feature is there too. But I'd suspect that should only be
considered a side benefit; the main benefit is the extra RAM that can
be installed in a system.

Yousuf Khan
 
Robert said:
http://www.computerweekly.com/Articles/2005/06/14/210381/Bigadvancesinserverpowerinsixmonths.htm

I could have titled the post "Big advances in server power in six
months," although that should really mean "Big advances in [Intel]
server power in six months," since that's what the article is about,
and those wanting a significant advance in server power right now
would probably be looking at Opteron. The article claims that the big
advances are sufficiently significant that buyers should consider
waiting.

gaf1234567890 said:
I could be wrong, but I thought the lower pin count (thus the potential
for more RAM) was only half the benefit of FB-DIMM.

The other half, and just as significant for Intel, is that it allows an
on-die memory controller in the CPU that's not married to a specific
memory type. Behind the buffer the DIMM can use whatever memory type it
wants. This would, in theory, make switches from things like DDR2 to
DDR3 completely transparent to the rest of the system.

At least that's what I thought I read somewhere.

Lower pin count? Assuming that was even true, why would that enable more RAM?

fwiw, fbdimms use a 240-pin connector, and over 100 of those are signal
pins for the memory interconnect (and that's not including sideband
signals).

The only reason fbdimms *could* increase memory capacity is due to the
buffering that the AMBs provide - both at the system interconnect level where
every dimm rides on its own point-to-point segment, and at the DRAM interface
- both at the expense of additional latency.

However, in the first incarnation of its fbdimm-enabled Xeon chipsets,
Intel didn't bother to implement a wide enough chip-select field to
allow fbdimms to actually increase capacity over what could be provided
with registered DDR2 dimms.

No gain on the play.
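
To put rough numbers on that point (all of them assumed for
illustration, not the actual chipset's limits): if the chipset can
only address as many ranks per channel as a registered-DDR2 channel
could already hold, the extra FB-DIMM slots buy nothing.

# Assumed figures for illustration only, not a chipset specification.
GB_PER_RANK = 2
RANKS_PER_DIMM = 2
ADDRESSABLE_RANKS = 8   # what the chip-select field can reach (assumed)

slots_per_channel = {"registered DDR2": 4, "FB-DIMM": 8}

for tech, slots in slots_per_channel.items():
    installed = slots * RANKS_PER_DIMM
    usable = min(installed, ADDRESSABLE_RANKS)
    print(f"{tech}: {usable * GB_PER_RANK} GB usable per channel "
          f"({installed} ranks installed, {usable} addressable)")

Same 16 GB per channel either way, hence no gain on the play.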

/daytripper (Irony is so ironical...)
 