pigdos said:
I've always been curious about this because these devices have to bridge
multiple types of buses.
Because of DDR wouldn't the internal clock speed have to be at least double
the FSB base clock speed?
In the case of north bridges w/8x AGP interfaces it would seem to me that
the core clock speed would have to be at least 533 MHz to be able to keep up
w/a 2.133 GB/s data xfer rate.
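For reference, a rough back-of-the-envelope sketch of where that 533 MHz figure comes from, and how it shrinks with a wider internal path (the 32-bit AGP width, 66.67 MHz base clock and the internal widths below are assumptions for illustration, not taken from any particular datasheet):

```python
# Back-of-the-envelope check of the AGP 8x figure (assumptions: 32-bit AGP
# data path, 66.67 MHz base clock, eight transfers per base clock).

agp_width_bytes = 4          # 32-bit AGP data path
agp_base_mhz    = 66.67      # AGP base clock
agp_pump        = 8          # "8x": eight transfers per base clock

mt_per_s  = agp_base_mhz * agp_pump         # ~533 MT/s
bandwidth = mt_per_s * agp_width_bytes      # ~2133 MB/s = 2.133 GB/s
print(f"AGP 8x: {bandwidth:.0f} MB/s")

# Core clock needed to keep up at one transfer per cycle, for a few
# hypothetical internal datapath widths:
for width_bytes in (4, 8, 16):
    print(f"{width_bytes * 8:3d}-bit path -> {bandwidth / width_bytes:.0f} MHz")
```

A 32-bit internal path would indeed need ~533 MHz to keep pace, but doubling the width halves the required clock, which is where the later width-vs-frequency discussion comes in.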
George said:
With e.g. an Intel MCH, assuming an internal width of 64 bits, same as the
FSB, it'd have to run at 1066MHz to keep up with the latest FSB rates to
avoid adding latency... which would also match a dual channel DDR2 memory
controller at 533MT/s.
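That match works out numerically; a quick sketch, assuming a 64-bit FSB at 1066 MT/s and two 64-bit DDR2 channels at 533 MT/s:

```python
# Why a 64-bit internal path at ~1066 MHz lines up with both sides
# (assumed figures: 64-bit FSB at 1066 MT/s, dual-channel 64-bit DDR2-533).

fsb_bw = 1066e6 * 8        # 64-bit FSB, 8 bytes per transfer -> ~8.5 GB/s
mem_bw = 2 * 533e6 * 8     # two 64-bit channels at 533 MT/s  -> ~8.5 GB/s
print(fsb_bw / 1e9, mem_bw / 1e9)   # both ~8.5 GB/s
```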
Since the FSB interface and memory controller are
allowed to run non-clock locked, and considering strategies like read
around write etc, I'm not sure how that works internally... buffering?
Maybe one of the hardware guys can comment further on chips which handle
multiple time domains.
pigdos said:
For every given address generated (for a fetch) something like 4x (probably
more) that amount of data is fetched (at least from memory, to fill a cache
line) on an L1/L2 miss, so I don't see how this could be true. Since the
address bus is unidirectional there is no bus turnaround time either, and I
don't think other devices share these address lines anymore (unlike, say,
ISA).
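Putting rough numbers on that point: one address kicks off a whole cache-line burst, so the address bus needs far fewer transactions per second than the data bus has beats. A sketch with assumed figures (64-byte lines, 64-bit data path, 1066 MT/s data rate):

```python
# One address starts a whole cache-line burst, so the required address rate
# is only a fraction of the data beat rate (assumed: 64-byte lines,
# 64-bit data path, 1066 MT/s data rate).

line_bytes     = 64
beat_bytes     = 8                          # 64-bit data path
beats_per_line = line_bytes // beat_bytes   # 8 data beats per address
data_rate      = 1066e6                     # data beats per second

address_rate = data_rate / beats_per_line
print(f"{address_rate/1e6:.0f} M addresses/s vs {data_rate/1e6:.0f} M data beats/s")
# -> ~133 M addresses/s against ~1066 M data beats/s
```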
David Kanter said:
Not really, if you think about what 'quad pumping' or 'double pumping'
is, then it should become clear that the issue is not the data bus, but
the addressing bus.
You're right that running the core clock at a non-integer multiple of
an I/O will increase latency due to strange gearing ratios (i.e. it's
simple to run at 100MHz and support 2.5GHz, 2GHz, 2.3GHz...running at
115MHz and supporting those would be ugly).
Obviously there's quite a bit of buffering going on, and it grows as
the bandwidth of the IO grows.
Certain parts of the chip run asynchronously, and you hope to hell that
the frequencies line up nicely as I said above.
If you think about the size of current chipsets, I/O controllers, etc.
etc. you will realize that a die that size at 2GHz would dissipate
vastly more heat than is reasonable. That alone should tell you that
the frequency is substantially lower than what you are guessing so far.
DK
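The gearing-ratio point is easy to see with a couple of divisions; a quick illustration using the frequencies from the examples above (nothing chip-specific):

```python
# Multiplier ratios between a reference clock and a few core clocks.
# A 100 MHz reference gives clean integer multipliers; 115 MHz mostly doesn't.

def multipliers(ref_mhz, cores_ghz=(2.0, 2.3, 2.5)):
    return {f"{c} GHz": round(c * 1000 / ref_mhz, 3) for c in cores_ghz}

print(multipliers(100))  # {'2.0 GHz': 20.0, '2.3 GHz': 23.0, '2.5 GHz': 25.0}
print(multipliers(115))  # {'2.0 GHz': 17.391, '2.3 GHz': 20.0, '2.5 GHz': 21.739}
```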
Del said:
The frequencies don't have to line up. FIFOs and synchronizers in the
appropriate places take care of it. Multiple clock domains are quite
common these days. If they can all be driven off a common refclk like
62.5MHz, that is nice, but multiple oscillators and PLLs are no big deal.
The issue of power vs. frequency is not so clear cut as you might think.
You have to handle the data rates in any case, so the datapath for the
low-frequency version has to be much wider, which means more circuits,
more fan-out, etc.
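A toy model of the FIFO-between-clock-domains idea Del describes, purely illustrative (real designs use dual-clock FIFOs with gray-coded pointers and synchronizer flops, and the rates here are made up):

```python
# Toy model: a small FIFO bridges a producer (e.g. the FSB interface) and a
# consumer (e.g. the memory controller) running at unrelated rates, so
# neither side has to be clock-locked to the other. Rates are beats per tick.

from collections import deque

def simulate(prod_rate, cons_rate, ticks, depth=16):
    fifo = deque()
    prod_credit = cons_credit = 0.0
    stalls = peak = 0
    for _ in range(ticks):
        prod_credit += prod_rate
        cons_credit += cons_rate
        while prod_credit >= 1.0:          # producer pushes one beat
            prod_credit -= 1.0
            if len(fifo) < depth:
                fifo.append(1)
            else:
                stalls += 1                # back-pressure: producer must wait
        peak = max(peak, len(fifo))
        while cons_credit >= 1.0:          # consumer pops one beat if available
            cons_credit -= 1.0
            if fifo:
                fifo.popleft()
    return peak, stalls

print(simulate(0.533, 0.533, 10_000))  # matched rates: tiny occupancy, no stalls
print(simulate(0.600, 0.533, 10_000))  # producer faster: FIFO fills, stalls appear
```

The second call is the catch: buffering only hides short-term mismatches, so a sustained rate difference still needs flow control no matter how the clocks are related.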
Aw....Del, you're no fun! I was hoping to use this as a thought
exercise to try and get him to figure out how it worked.
Sure, but power is quadratic WRT frequency and only linear WRT
capacitance.
krw said:
Oh, good grief! You want to try again?!!!
In a northbridge design the address lines would run one way -- from the CPU
to the N. Bridge.
Perhaps in a cache-less system one could have processors existing blissfully
unaware of the rest of the universe...
Or perhaps one with a single CPU and no DMA.
Um, nothing writes to memory unless the CPU initiates it. DMA xfers are not
initiated without CPU intervention (at least to set up the starting and
ending addresses). I'm referring here to a single CPU situation w/a N.
Bridge.
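For what it's worth, a bare-bones sketch of that setup sequence; the register names and interface are invented purely to illustrate the CPU-programs-first, device-masters-later order:

```python
# Hypothetical DMA setup: the device can't transfer anything until the CPU
# has programmed source, destination and length and set the start bit.
# Register names and this interface are made up for illustration only.

class ToyDMAEngine:
    def __init__(self):
        self.src = self.dst = self.length = 0
        self.busy = False

    def mmio_write(self, reg, value):   # stands in for CPU writes over the bus
        setattr(self, reg, value)

    def start(self):
        if self.length == 0:
            raise RuntimeError("descriptor not programmed by the CPU yet")
        self.busy = True                # only now does the engine master the bus

dma = ToyDMAEngine()
dma.mmio_write("src", 0x1000_0000)      # starting address, set up by the CPU
dma.mmio_write("dst", 0x2000_0000)
dma.mmio_write("length", 4096)          # transfer length, also CPU-set
dma.start()
```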