A8V Bank Interleaving & Node Interleaving

azukre

Hi,

I'm using an A8V motherboard with an FX-53 and Corsair CMX512-3200C2
memory. When I ran into the "Bank Interleaving" and "Node Interleaving"
settings, I wasn't sure whether to leave them enabled or disabled.

Are there any advantages to enabling these two options? It would be
great if someone could explain this in plain English =p, or point me
to a site where I can read about it.

Thank you for your time.
 
azukre said:
Are there any advantages to enabling these two options?

Since the memory controller is inside the processor, your best bet is
to examine the processor documentation. Many of the settings you will
find on Athlon64 motherboards are copied right out of the document
below, without the motherboard maker having to explain them; that is
possible because the BIOS module that controls memory can be supplied
by AMD, for example.

http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/26094.PDF

It looks to me like a node refers to a processor. Say you had a Tyan
quad Opteron board. There would be four processors on it, and hence
four memory controllers. "Node Interleave" in that case means the
memory attached to all four processors looks like one big memory, with
memory locations spread across all of them at some granularity. That
means, perhaps, that if you coded up a "for" loop in C and read 16KB
of contiguous memory, 4KB of the accesses would be fielded by the
memory controller on each processor, and 12KB of the necessary data
would travel as packets across Hypertransport to your processor. The
off-node accesses, of course, are slower than the local ones.
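
To make that concrete, here is a rough C sketch of the address math.
The 4KB chunk size and the four-node count are just assumptions for
illustration; the real granularity and address bits are spelled out
in the AMD document above.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 4-node system with node interleave enabled.
       Assume a 4KB interleave granularity, so each successive 4KB
       chunk of the physical address space belongs to the next node. */
    #define NODES       4
    #define CHUNK_SHIFT 12  /* 4KB = 2^12 */

    static int node_for_address(uint64_t paddr)
    {
        return (int)((paddr >> CHUNK_SHIFT) % NODES);
    }

    int main(void)
    {
        /* Walk 16KB of contiguous addresses: one 4KB chunk lands on
           each node, so a CPU on node 0 pulls the other 12KB across
           Hypertransport. */
        for (uint64_t addr = 0; addr < 16 * 1024; addr += 4096)
            printf("addr 0x%05llx -> node %d\n",
                   (unsigned long long)addr, node_for_address(addr));
        return 0;
    }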

In other words, as far as I can tell, "node interleave" only makes sense
on a multiprocessor board populated with Opteron processors (the ones
with up to three Hypertransport interfaces, for interconnecting
processors).

Bank interleave is a little more familiar, as it involves chopping up
a local chunk of memory space into accesses spread across multiple
memory chips and DIMMs. For example, DDR DRAM chips currently have
four banks internally, a double sided DIMM has two ranks, and with
four DIMMs in a dual channel configuration you have two DIMMs per
channel to interleave as well. That means with four 512MB double sided
DIMMs, there are 16 banks that can be open at any one time. By trying
to keep these banks open, the average time needed to access the
memory is reduced.
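
As a back-of-the-envelope sketch of how sequential accesses could
rotate through those 16 banks, here is a toy decode in C. The bit
layout (4KB per bank slice, with the bank bits sitting right above
the offset) is made up for illustration; a real controller chooses
the bank/rank/DIMM bits based on the actual DIMM geometry.

    #include <stdio.h>
    #include <stdint.h>

    /* 4 internal banks x 2 ranks x 2 DIMMs per channel = 16 banks. */
    #define BANK_SHIFT  12  /* assume a 4KB slice per bank */
    #define TOTAL_BANKS 16

    int main(void)
    {
        /* Walk 64KB: consecutive 4KB slices cycle through all 16
           banks, so each bank's row can stay open between hits. */
        for (uint64_t addr = 0; addr < 64 * 1024; addr += 4096) {
            int bank = (int)((addr >> BANK_SHIFT) % TOTAL_BANKS);
            printf("addr 0x%05llx -> dimm %d, rank %d, bank %d\n",
                   (unsigned long long)addr,
                   bank / 8, (bank / 4) % 2, bank % 4);
        }
        return 0;
    }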

For this to work with minimal silicon complexity, all DIMMs have to
be identical. This is why, on some motherboards, mixing 2x1024 with
2x512 DIMMs causes a 20% memory bandwidth loss compared to using four
identical DIMMs. With identical DIMMs, bank interleave can be engaged
across all DIMMs. While it is possible to craft interleave patterns
without that degree of symmetry, it would seem the silicon designers
have better things to do with their time. (That is, you could
interleave the chips on either side of a double sided DIMM for a
moderate performance increase, and not bother to interleave between
DIMMs. You just don't get as much of a performance gain.)

Best guess,
Paul
 