Why no 256-bit memory interfaces for Athlon 64?

  • Thread starter: pigdos

If the Xbox 360 and numerous video cards can implement 256-bit memory
interfaces, why couldn't AMD? I realize it would require a new socket
architecture, but 128 additional pins couldn't be that much trouble, could
it?
 
pigdos said:
If the Xbox 360 and numerous video cards can implement 256-bit memory
interfaces, why couldn't AMD? I realize it would require a new socket
architecture, but 128 additional pins couldn't be that much trouble, could
it?

Yes, it could. That'd be a lot of pins, plus a lot of extra grounds as
returns, and a lot of noise.
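
As a rough illustration of the pin budget involved (a back-of-the-envelope
sketch in Python; the one-ground-per-two-signals ratio is an assumption for
illustration, not a real package spec):

    # Widening a 128-bit data path to 256-bit adds data pins plus
    # return (ground) pins to keep signal integrity manageable.
    extra_data_pins = 256 - 128              # 128 new DQ pins
    extra_grounds = extra_data_pins // 2     # assumed 1 ground per 2 signals
    print(extra_data_pins + extra_grounds)   # ~192 additional pins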
 
pigdos said:
If the Xbox 360 and numerous video cards can implement 256-bit memory
interfaces, why couldn't AMD? I realize it would require a new socket
architecture, but 128 additional pins couldn't be that much trouble, could
it?

Why waste the pins when the memory controller isn't that wide? And
what consumer wants to buy 4 DIMMs for their home PC?

DK
 
pigdos said:
If the Xbox 360 and numerous video cards can implement 256-bit memory
interfaces, why couldn't AMD? I realize it would require a new socket
architecture, but 128 additional pins couldn't be that much trouble, could
it?

AMD?? Have you counted the number of pins on the memory data path of
*Intel* chips these days?
 
If the Xbox 360 and numerous video cards can implement 256-bit memory
interfaces, why couldn't AMD? I realize it would require a new socket
architecture, but 128 additional pins couldn't be that much trouble, could
it?

Why should they? Athlons are not at all limited by memory bandwidth.
What was the gain in the transition from DDR to DDR2? Next to nothing.
And every extra pin costs extra. AMD has better uses for the money.

NNN
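
For a rough sense of the numbers (a quick Python sketch; the DDR400 and
DDR2-667 speed grades are just illustrative examples):

    # Peak theoretical bandwidth = bus width in bytes * transfer rate.
    def bandwidth_gb_s(width_bits, mt_per_s):
        return width_bits / 8 * mt_per_s / 1000

    print(bandwidth_gb_s(128, 400))  # dual-channel DDR400:   6.4 GB/s
    print(bandwidth_gb_s(128, 667))  # dual-channel DDR2-667: ~10.7 GB/s
    print(bandwidth_gb_s(256, 400))  # hypothetical 256-bit bus: 12.8 GB/s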
 
Couldn't AMD also charge the consumer more money for those extra pins? Maybe
call it the AMD Athlon 64 X2 Extreme edition?
 
Of course I realize it's extra trouble, but why couldn't AMD charge a
premium price for Athlon 64's so equipped?
 
Excellent point. I remember buying my 30-pin SIMMs in fours, so his argument
isn't one...

I think it's a question of cost and convenience. For example, just
because I remember TVs that didn't have remote controls, does that
mean I should be prepared to accept a new TV that doesn't have one?

I also remember early memory boards that used socketed DIP chips, and
I recall the original IBM AT which used piggy-backed DRAMs ...

- Franc Zabkar
 
I think it's a question of cost and convenience. For example, just
because I remember TVs that didn't have remote controls, does that
mean I should be prepared to accept a new TV that doesn't have one?

I also remember early memory boards that used socketed DIP chips, and
I recall the original IBM AT which used piggy-backed DRAMs ...

The point is moot because how many "consumers" actually buy memory
chips??
 
Franc, I vaguely remember seeing some of these piggy-backed DRAMs. How would
they work, though? It would seem to me that both chips would be trying to
output data simultaneously (on a read) on the same data lines (because both
piggy-backed DRAMs would be receiving the same inputs), or inputting the same
data on a write.

At my first job in IT, I was configuring two ISA, 16-bit, expansion
memory cards for an IBM AT. You had to flip DIP switches to indicate how much
memory was installed in the IBM AT (including other memory expansion cards),
whether you wanted to backfill the IBM AT to 640KB, whether you wanted the
memory configured as expanded or extended, and what the starting address of
the memory expansion card would be. This was all so that we could install
OS/2. I think they were AST products, but I'm not sure now.
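
A hypothetical sketch of the sort of decode those switches fed (the
switch-to-address mapping here is invented for illustration; it's not
AST's actual scheme):

    # Hypothetical DIP-switch decode for an ISA memory expansion card:
    # four switches pick the card's starting address in 128 KB steps
    # above the 1 MB line (extended memory).
    def card_base_address(switches):
        # switches: list of 4 ints (0 = off, 1 = on), SW1..SW4
        step = sum(bit << i for i, bit in enumerate(switches))
        return 0x100000 + step * 0x20000  # 1 MB + n * 128 KB

    print(hex(card_base_address([1, 0, 1, 0])))  # 0x1a0000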
 
pigdos said:
Franc, I vaguely remember seeing some of these piggy-backed DRAMs. How would
they work, though? It would seem to me that both chips would be trying to
output data simultaneously (on a read) on the same data lines (because both
piggy-backed DRAMs would be receiving the same inputs), or inputting the same
data on a write.

At my first job in IT, I was configuring two ISA, 16-bit, expansion
memory cards for an IBM AT. You had to flip DIP switches to indicate how much
memory was installed in the IBM AT (including other memory expansion cards),
whether you wanted to backfill the IBM AT to 640KB, whether you wanted the
memory configured as expanded or extended, and what the starting address of
the memory expansion card would be. This was all so that we could install
OS/2. I think they were AST products, but I'm not sure now.
They were two flavors of chips with slightly different pinouts (18-pin
modules), so only one got activated at a time.
 
Dave said:
On Fri, 27 Oct 2006 18:39:03 +1000, Franc Zabkar

The point is moot because how many "consumers" actually buy memory
chips??

No, it's just as relevant, since Gateway, Dell, HP, IBM, etc. will all
have to buy twice the number of DIMMs that they are using today. That
will actually increase the average cost by even more, since for every
dollar of MFG costs, there's often 20-40 cents of markup associated,
and even more for servers.

The bottom line is that it makes no sense to have wider memory
interfaces, because:

1. It costs more
2. AMD doesn't need the bandwidth that badly, or they'd move to new
memory technology quicker
3. AMD would have to redesign the memory controller
4. AMD would probably have to redesign their socket

DK
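
To put numbers on the markup point above (a sketch; the per-DIMM cost and
the 30-cent markup are assumed figures, the latter just the midpoint of
the 20-40 cent range):

    # Going from 2 DIMMs to 4 doubles the memory line item, and OEM
    # markup multiplies the manufacturing-cost increase at the register.
    dimm_cost = 50.0           # assumed manufacturing cost per DIMM
    markup = 0.30              # ~20-40 cents per dollar, midpoint
    extra_mfg = 2 * dimm_cost  # two extra DIMMs
    print(extra_mfg * (1 + markup))  # ~$130 more at retail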
 
Franc, I vaguely remember seeing some of these piggy-backed DRAMs. How would
they work, though? It would seem to me that both chips would be trying to
output data simultaneously (on a read) on the same data lines (because both
piggy-backed DRAMs would be receiving the same inputs), or inputting the same
data on a write.

My Personal Computer AT Tech Ref manual shows an array of 128Kx1 DRAMs
(no part number). The pinout is as follows (supply pins not
identified):

1 Data in
2 WE*
3 RAS 1*
4 RAS 0*
5 A0
6 A2
7 A1
8
9 A7
10 A5
11 A4
12 A3
13 A6
14 Data out
15 CAS*
16

It looks like one 64Kx1 DRAM may respond to RAS0 while the other may
respond to RAS1.

- Franc Zabkar
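
A minimal sketch of why the two stacked dies don't fight over the shared
data line (assuming, as the pinout suggests, that each die sees its own RAS
and a DRAM only responds to CAS after its own RAS was strobed):

    # Two 64Kx1 dies share address, WE, CAS, and data pins; only RAS
    # differs.  A die that hasn't had its RAS asserted ignores CAS and
    # keeps its output tri-stated, so at most one die drives data out.
    class StackedDie:
        def __init__(self):
            self.selected = False
            self.cells = {}

        def ras(self, asserted):
            self.selected = asserted

        def cas_read(self, addr):
            # None stands in for a tri-stated (Hi-Z) output
            return self.cells.get(addr, 0) if self.selected else None

    low, high = StackedDie(), StackedDie()
    high.cells[0x1234] = 1
    low.ras(False); high.ras(True)  # strobe RAS1* only
    print(low.cas_read(0x1234), high.cas_read(0x1234))  # None 1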
 
The bottom line is that it makes no sense to have wider memory
interfaces, because:

1. It costs more
2. AMD doesn't need the bandwidth that badly, or they'd move to new
memory technology quicker
3. AMD would have to redesign the memory controller
4. AMD would probably have to redesign their socket


...but your original point was, and I quote:

"And what consumer wants to buy 4 DIMMs for their home PC?"

which was the point I was arguing against (well, "arguing against" is a
bit strong; I just remember buying dimms 4 at a time when I had a 486).
 
...but your original point was, and I quote:

"And what consumer wants to buy 4 DIMMs for their home PC?"

which was the point I was arguing against (well, "arguing against" is a
bit strong; I just remember buying dimms 4 at a time when I had a 486).

Should've been SIMMs...
 
Dave has an excellent point as well. Back when 486s required four 30-pin
SIMMs, memory was $45.00 a meg (and that's not adjusting for inflation).
Just for the record, 8 MB would have cost $360.00 back then.
 
Think there'd be a specification for 128-bit DIMMs at some point in the
future, since DDR2 is still 64-bit and requires 2 DIMMs for dual channel.
 
Think there'd be a specification for 128-bit DIMMs at some point in the
future, since DDR2 is still 64-bit and requires 2 DIMMs for dual channel.

Pins == $$

No advantage, no reason. The folks who do this stuff for a living
know what they're doing.
 