Athlon 64's: a shared memory bus?

  • Thread starter: pigdos
Um, the AMD A64 documentation also refers to a North Bridge, but I suppose
your majesty is the final authority on the architecture of the A64, not AMD:

AMD Functional Data Sheet,
939-Pin Package

2.5 Northbridge
The Northbridge logic in the processor refers to the HyperTransport™ technology interface, the memory controller, and their respective interfaces to the CPU core. These interfaces are described in more detail in the following sections.

2.5.1 HyperTransport™ Technology Overview
The processor includes a 16-bit HyperTransport™ technology interface designed to be capable of operating up to 2000 mega-transfers per second (MT/s), resulting in a bandwidth of up to 8 Gbytes/s (4 Gbytes/s in each direction). The processor supports HyperTransport™ synchronous clocking mode. Refer to the HyperTransport™ I/O Link Specification (www.hypertransport.org) for details of link operation.

2.5.1.1 Link Initialization
The HyperTransport™ technology interface of the processor can be operated as a single 16-bit link. The HyperTransport™ I/O Link Specification details the negotiation that occurs at power-on to determine the widths and rates that will be used with the link. Refer also to the BIOS and Kernel Developer's Guide for the AMD Athlon™ 64 and AMD Opteron™ Processors, order# 26094, for information about link initialization and setup of routing tables.

The unused L0_CTLIN_H/L[1] pins must be terminated as follows:
  • L0_CTLIN_H[1] must be pulled High.
  • L0_CTLIN_L[1] must be pulled Low.
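For what it's worth, the bandwidth figures in that quote do check out; a quick sanity check (my arithmetic, not AMD's):

```python
# HyperTransport link arithmetic for the figures quoted above.
link_width_bits = 16               # 16-bit link in each direction
transfers_per_sec = 2_000_000_000  # 2000 MT/s

bytes_per_transfer = link_width_bits // 8               # 2 bytes per transfer
per_direction = bytes_per_transfer * transfers_per_sec  # 4 GB/s each way
aggregate = 2 * per_direction                           # 8 GB/s total

assert per_direction == 4_000_000_000 and aggregate == 8_000_000_000
```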

No mention of GART or TLB's for AGP read accesses either...

Wrong document, sunshine. I already told you which one but you seem to have a
R&C problem. Hint: it's mentioned above in your err, unquoted umm, quote.
 
Neither does it refer to DMA, so I guess you were wrong as well, your Royal
Highness.

So AGP 3.0 doesn't do DiME and (according to you) doesn't do DMA but it's
err...
Are there any AGP video cards that use isochronous xfers?

Why don't you read us the AGP 3.0 specs and tell what the requirements are
for using the AGP 3.0 moniker on a video card product?
On another note, do any of the modern Pentium-4 chipsets feature DDR or
DDR-II memory architectures with xfer speeds on par with the A64's? I had
thought Pentium-4's could perform dual-channel, so wouldn't that be the
equivalent of a 128-bit wide memory bus?

What does that have to do with anything? The near equivalence, between
Intel and AMD methods, of peak memory bandwidth is not in question - given
the same clock speeds and bus width it would require considerable
incompetence on the part of one for it not to be. What is in question, is
your contention that AMD's configuration accumulates "propagation delays"
through some imagined "nice, big stall".

What is more relevant in any comparison is the internal bus width and clock
speed of the Intel MCH chip between the memory channel buffers and the
AGP/PCIe-x16 interface. We know that HyperTransport has a bus width of
16-bits both up and down and a transfer rate of 2GT/s for 4GB/s in both
directions simultaneously... at least I, and others here, knew it -
apparently, to you, that was a revelation. I don't recall reading anywhere
what internal data width or multiplier the Intel MCH might use over the
base system clock of 200MHz (for a 800MT/s FSB) for its internal logic but
if you have that info then please tell.
I'm still waiting for proof that A64's have dedicated hardware (LOL) to
handle GART and TLB's for AGP memory xfers, your majesty.

Oh it's there - keep looking.
 
Um, the AMD A64 documentation also refers to a North Bridge, but I suppose
your majesty is the final authority on the architecture of the A64, not AMD:
AMD Functional Data Sheet,

The original meaning of "North Bridge" and "South Bridge" had to do
with different ends of a PCI bus back in the day that the two chips
used PCI to communicate (think early Pentium chipsets). This original
definition stopped being applicable back in the late 1990's.
Gradually the exact role that various chips play in a computer has
evolved to the point that no current system has anything that really
matches up with what a "North Bridge" was back in the i430FX chipset
days.

Intel stopped calling their chipsets north and south bridges when they
ceased to match the original definition, hence we got their Memory
Controller Hub (MCH) and I/O Controller Hub (ICH). VIA kept the north
and south bridge notation despite the fact that everything PCI related
had moved into the "south" bridge (and hence calling the two chips a
matched set of bridges didn't really make sense). SiS was moving
towards all-in-one chips, so it didn't really matter, and nVidia entered
the market with their Media and Communication Processor (MCP) and
Integrated Graphics Processor (IGP) chips.

So where did AMD fall with all of this? Well they've moved things
around even more. In an Athlon64 they've moved the memory controller
onto the processor, the processor bus controller onto one chip and the
PCI controller onto another chip. So really the traditional job of
the north bridge chip has been split between two or three different
chips. If you feel like saying that the north bridge is in the CPU
then that is fine, part of it is. If you prefer to call the IGP or
PCI-X/AGP Tunnel (AMD's terms) the north bridge, then that works
too because part of it is there too. Or you could even call the MCP
or I/O Hub the north bridge (and south bridge too for that matter)
because that's where the PCI bridge now resides.


Personally I just say that there is *NO* chip that really should be
called either a "north bridge" or "south bridge" in a modern system.
The terms are rather dated and really don't describe what's going on
inside of them anymore. However if you have a definition of North
bridge that you like (there are lots to choose from), then you're quite
welcome to stick with it.
 
: Neither does it refer to DMA, so I guess you were wrong as well,
<snip>

Hey Pig, will you PLEASE stop the idiotic top-posting? You're making this
rather hilarious exchange between yourself and GM / KW almost impossible
to follow. Suggestion: Buy yourself a clue and bail out now...you're
**way** outgunned here. LOL!

J.
 
AGP reads/writes are different than DMA because the device initiates the
transaction, not the CPU. DMA transactions are set up by the CPU, NOT the
device.

What is the title of the document that refers to A64's having any hardware
dedicated to AGP transactions? I couldn't find it anywhere on AMD's website.
The documents I did find made absolutely no mention of AGP whatsoever...
 
Um, as a matter of fact, AGP 3.0 is only backward compatible in the sense of
using AGP 2.0 xfers. AGP 3.0 xfers don't use DMA and limit AGP accesses to:

2.3.1 AGP Transaction Requests
In the past, AGP transactions could be generated using two modes: PIPE and SBA. The core-logic had to support both modes, while the AGP Master could optionally use either. AGP3.0 does not support using PIPE mode to generate an AGP request. This leaves only the SBA scheme, which is no longer optional. When operating in AGP3.0 mode, the PIPE# signal pin on the AGP connector is given a new function, DBI_HI. A "universal" implementation must multiplex PIPE# and DBI_HI onto the same signal pin and select the right function based on the signaling mode of operation.

I'm only referring to the AGP 3.0 spec. It doesn't use DMA and nowhere
mentions DMA because it's not DMA. If AGP xfers are merely DMA then why do
we need an additional specification for AGP? Why not just use DMA?

It's nice how you never bothered to prove your assertion that A64's have any
hardware dedicated to AGP
xfers. Is it because you can't?

If we have a TLB miss on an AGP access, the A64 on-board memory controller
wouldn't be as efficient as the latest PIV implementation, since the MCH
has direct access to the memory bus.
If you can't understand how having another device between the target and
source makes a difference then there's not much I can do for you...

Of course the stall won't occur because of the hypertransport bus so
obviously I was wrong. At least I can admit it, your majesty.

According to Intel's documentation the 875P is the latest chipset that
supports AGP. It can handle an 800MHz FSB at 64 bits and dual-channel
DDR400, so I don't see why it would be any slower than the A64's on-board
memory controller; the latency to/from the MCH would be just as bad/good as
with the A64's memory controller, but there would be that much less
propagation delay. If you think data is instantaneously available (i.e. in
zero time) from the memory bus to the north bridge (yes, AMD's
documentation refers to it as this) you're on drugs.
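Since peak bandwidth parity keeps coming up: a quick check of the numbers being thrown around (my arithmetic, using the widths and rates mentioned in the thread):

```python
# Peak bandwidths for the configurations mentioned above.
fsb_bw = (64 // 8) * 800_000_000           # 875P front-side bus: 64-bit @ 800 MT/s
ddr400_dual = 2 * (64 // 8) * 400_000_000  # dual-channel DDR400: 2 x 64-bit @ 400 MT/s

# Both come to 6.4 GB/s peak, so the FSB can just keep up with the memory.
# The A64 runs the same dual-channel DDR400, so peak bandwidth is a wash;
# the disagreement here is about latency, not bandwidth.
assert fsb_bw == ddr400_dual == 6_400_000_000
```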
 
George McDonald doesn't impress me at all. He hasn't bothered to prove half
the shit he spews out...
 
I see your point. It seems like the A64 on-board northbridge only handles
the hypertransport interface though.
 
AGP reads/writes are different than DMA because the device initiates the
transaction, not the CPU. DMA transactions are set up by the CPU, NOT the
device.

DMA is when *any* I/O device accesses memory directly (hint: "Direct
Memory Access"). It doesn't matter if it's a stupid controller that can
only access sequential memory locations after being set up by the
processor, a scatter-gather controller, or even a peripheral processor.
It's *STILL* DMA.
What is the title of the document that refers to A64's having any
hardware dedicated to AGP transactions? I couldn't find it anywhere on
AMD's website. The documents I did find made absolutely no mention of
AGP whatsoever...

You are indeed a moron. Of *course* AMD processors can accept memory
accesses from the I/O channel. *NO* DMA would work otherwise. Why would
you think "dedicated hardware" would be necessary? ...you don't think!
 
The difference in a northbridge approach would be that in AGP xfers I have
a. to xfer an address then my data (on a write) to the A64 memory controller
b. and on reads a GTLB miss would result in us having to read the GART even
before we try to fetch the data the AGP read is interested in.

How can this be more efficient than having a north bridge or MCH handle
this? It's producing more traffic to more devices for the same data.
 
AGP reads/writes are different than DMA because the device initiates the
transaction, not the CPU. DMA transactions are set up by the CPU, NOT the
device.

What is the title of the document that refers to A64's having any hardware
dedicated to AGP transactions? I couldn't find it anywhere on AMD's website.
The documents I did find made absolutely no mention of AGP whatsoever...

I have already told you - you just won't listen. <shrug>

Now go away and *READ*.
 
The difference in a northbridge approach would be that in AGP xfers I have
a. to xfer an address then my data (on a write) to the A64 memory controller
b. and on reads a GTLB miss would result in us having to read the GART even
before we try to fetch the data the AGP read is interested in.

How can this be more efficient than having a north bridge or MCH handle
this? It's producing more traffic to more devices for the same data.

Please explain how it can be less efficient to have the GTLB miss happen in
the faster on-die GART logic of the CPU, with the GART lookup going straight
to the memory controller... just as it does on a "north bridge" system,
which still doesn't exist in current chipsets/systems.

BTW all this *is* history - there is no chipset GART logic in a PCI-E
system - linear translation to potentially non-contiguous physical system
memory addresses must now be done in the GPU interface, right on the
graphics card.
 
TOP POST by ME: You are just too ignorant to argue with any longer. You
refuse to learn and ignore references to authoritative documents; you have
no conception of what DMA means and it is futile to try any further logical
discussion.
 