Oh, now we get into that slippery word "bank".
How many are "typical" on, say, a 128MB SIMM or a 512MB DIMM?
There used to be just one bank of DRAM arrays inside each DRAM
chip, so when you lined them up in parallel, you had "1 bank" of
DRAM chips. The terminology ran into problems when SDRAM came along,
with 2 or 4 banks per chip. So now when you line up a
bunch of SDRAM chips, you have 1 "bank" of DRAM chips, and you
have 4 "banks" of DRAM arrays inside of that 1 "bank" of DRAM
chips. It gets to be a bit dizzying after you read some
chipset/motherboard manuals.
We now refer to the "bunch of chips lined up to behave as a
single bank of memory" as a "rank" of DRAM chips, and each "rank"
has 2, 4, 8 or 32 banks of DRAM arrays inside of it.
Each SIMM can have 1 or 2 ranks, but since you only have FPM or
EDO on SIMMs, each rank only has 1 bank.
Each DIMM can have 1 or 2 ranks. Each SDRAM/DDR device has 4
banks, so each rank has 4 banks, and a DIMM has 4 or 8 banks in total.
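To keep the arithmetic straight, here's a trivial C sketch (my own
illustration, using the typical numbers from above, not anything out
of a datasheet):

#include <stdio.h>

int main(void)
{
    /* Illustrative values from the discussion above: a 2-rank
     * DIMM built from SDRAM/DDR devices with 4 banks each. */
    int ranks_per_dimm = 2;   /* 1 or 2 on typical DIMMs */
    int banks_per_rank = 4;   /* 4 for SDRAM/DDR devices */

    printf("independent banks on this DIMM: %d\n",
           ranks_per_dimm * banks_per_rank);
    return 0;
}

Run it and you get 8, which is why a 2-rank SDRAM DIMM gives the
memory controller 8 independent banks to schedule across.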
How are the banks laid out in RAM? Sequentially (e.g. bank 1 = 0-32MB,
bank 4 = 96-128MB), or interleaved on physical addresses? What about
multiple SIMMs/DIMMs?
Depends on the chipset. You can take a look at Intel's chipset
manuals, and some of them tell you how the physical address bits
are mapped to the DRAM address bits.
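For a concrete (but entirely made-up) example of what such a mapping
looks like, here's a C sketch that pulls column/bank/row/rank fields
out of a 32-bit physical address. The bit positions and field widths
are invented for illustration; the real ones vary from chipset to
chipset, which is exactly why you need the manual:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical mapping, illustration only: bits 0-2 are the byte
 * within the burst, column bits sit low, then bank, then row, with
 * the rank bit at the very top (see below for why). */
typedef struct {
    unsigned column, bank, row, rank;
} dram_addr;

static dram_addr decode(uint32_t phys)
{
    dram_addr d;
    d.column = (phys >> 3)  & 0x1FF;  /* 9 column bits */
    d.bank   = (phys >> 12) & 0x3;    /* 2 bank bits   */
    d.row    = (phys >> 14) & 0x1FFF; /* 13 row bits   */
    d.rank   = (phys >> 27) & 0x1;    /* 1 rank bit    */
    return d;
}

int main(void)
{
    dram_addr d = decode(0x08C0A148u);
    printf("rank %u row %u bank %u col %u\n",
           d.rank, d.row, d.bank, d.column);
    return 0;
}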
Rank IDs are usually mapped to the very high range of the physical
address. It's a very ineffective way to utilize bank/rank parallelism,
but the problem is that the number of ranks is variable depending on
what the end user sticks in their machine. That's why the rank ID is
mapped to the high range.
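Continuing the made-up mapping from the sketch above, you can see the
consequence: sequential addresses cycle through the banks almost
immediately, but the second rank is never touched until the address
stream crosses the rank boundary:

#include <stdint.h>
#include <stdio.h>

/* Same hypothetical map as above: bank bits at 12-13, rank bit
 * at 27. Sequential 4KB steps hit all four banks right away, but
 * rank 1 only appears once addresses pass the 128MB mark. */
int main(void)
{
    uint32_t phys;
    for (phys = 0; phys < 0x8000u; phys += 0x1000u)
        printf("addr 0x%08X -> rank %u bank %u\n",
               (unsigned)phys, (unsigned)((phys >> 27) & 1),
               (unsigned)((phys >> 12) & 3));
    printf("rank 1 first appears at addr 0x%08X\n",
           (unsigned)0x08000000u);
    return 0;
}

Putting the rank bit down low instead would interleave ranks on
consecutive lines, but the decode would break as soon as the user
installed a different mix of DIMMs.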
They did, and I think Samsung has shown such a device operating at
1.6 Gbps, which is quite good, and CAS latency is down to ~20ns or so...
Not that it matters any longer.
Whoa! Don't more open banks mean more heat?
Yes.
Small wonder they needed heatsinks.
They need heat sinks/heat spreaders for a slightly different reason
than CPUs do.