What core speed were/are north bridges and/or MCH's clocked at?

  • Thread starter: pigdos
Indeed, I think everyone could agree to that.

Even getting back to his main point (what frequency the chipset runs at
internally), he really needs to rethink what is necessary. Now that he
knows that, as a rough approximation, the NB runs at the speed of the
addressing pins on the FSB, he should try to figure out how that can work
and why it is done that way.

You do know that the address bus on AGTL+ is DDR?
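
To put numbers on that, assuming the 200MHz BCLK discussed later in the
thread: a double-pumped address bus presents a new address on each clock
edge, which is where the 400MT/s address rate comes from. A trivial C
check of the arithmetic:

#include <stdio.h>

int main(void)
{
    const double bclk_mhz = 200.0;   /* base clock, per the thread */
    /* AGTL+ address pins are double-pumped; data pins are quad-pumped. */
    printf("Address rate: %.0f MT/s (2 x BCLK)\n", 2.0 * bclk_mhz);
    printf("Data rate:    %.0f MT/s (4 x BCLK)\n", 4.0 * bclk_mhz);
    return 0;
}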
 
Um daytripper, how do you figure a hard drive unilaterally writes to main
memory? Do you honestly think devices just independently begin writing to
main memory (aside from maybe NICs) without any CPU intervention at all?
Have you ever read up on what exactly is involved in setting up a DMA xfer?
There is also the concept of uncacheable memory addresses; maybe you should
read up on that as well.
 
pigdos said:
Um daytripper, how do you figure a hard drive unilaterally writes to main
memory? Do you honestly think devices just independently begin writing to
main memory (aside from maybe NICs) without any CPU intervention at all?
Have you ever read up on what exactly is involved in setting up a DMA xfer?

Gosh...
The CPU writes into some controller register the address range it might
use. That's all. The device is then allowed to start the transfer at will
(for example, the disk might start transferring the data after 15ms -- by
that time a typical OS will have switched tasks about 15 times, the whole
CPU cache will probably have been swapped a few times, and the CPU will
have executed about 30 million instructions).
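
For concreteness, here is a minimal sketch of that setup step against a
purely hypothetical disk controller (the register layout and names are
invented for illustration): the CPU writes a buffer address and length,
sets a go bit, and moves on; the device bus-masters data into RAM whenever
it is ready.

#include <stdint.h>

/* Hypothetical bus-mastering disk controller, memory-mapped. */
struct dma_regs {
    volatile uint32_t buf_addr;   /* physical address of the target buffer */
    volatile uint32_t buf_len;    /* transfer length in bytes */
    volatile uint32_t control;    /* bit 0 = start transfer */
};

static void start_disk_read(struct dma_regs *regs, uint32_t phys, uint32_t len)
{
    regs->buf_addr = phys;    /* tell the device where to write */
    regs->buf_len  = len;     /* and how much */
    regs->control |= 1u;      /* kick it off; the CPU is now free to run
                                 other tasks until the completion IRQ */
}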
The data the disk writes into memory must stay in sync with the CPU
caches, and how do you think the CPU is informed about the data being
written?
No, no cache-flush instruction or the like is executed when the disk
finishes the transfer, as performance would be terrible.

Hint: google for memory snoops.

There are only two viable options: either the address bus is bidirectional
(Pentiums & Cores and such) or there is an additional snoop access bus (K7).
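
As a toy model of what that snoop logic does (not any real chipset's
implementation; the cache geometry here is invented), every write address
a bus master drives is compared against the cache tags, and a hit forces
the line to be invalidated, or written back first if dirty, so the CPU can
never read stale data:

#include <stdint.h>
#include <stdbool.h>

#define LINES      512
#define LINE_SHIFT 5                 /* 32-byte cache lines */

struct line { uint32_t tag; bool valid; bool dirty; };
static struct line cache[LINES];

/* Called for every write address observed on the bus. */
static void snoop_write(uint32_t addr)
{
    uint32_t idx = (addr >> LINE_SHIFT) % LINES;
    uint32_t tag = addr >> LINE_SHIFT;
    if (cache[idx].valid && cache[idx].tag == tag) {
        /* real hardware would write the line back first if dirty */
        cache[idx].valid = false;    /* memory now holds the truth */
    }
}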
There is also the concept of uncacheable memory addresses; maybe you should
read up on that as well.

Take your own advice.

The areas where disk transfers occur are cacheable in any half-baked OS!

Uncacheable areas are typically controller memory and/or I/O ports mapped
into the CPU's physical address range. An uncacheable memory area is
needed when memory accesses must be strictly controlled (for example, when
accessing some address triggers an action in the device).
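
A sketch of why such a region must be uncacheable, with an invented
device: say reading a status register pops the next byte from the device's
FIFO, i.e. the read itself has a side effect. If the CPU cached that
address, a second read could be satisfied from cache and the side effect
lost. The volatile qualifier keeps the compiler from eliding the access;
the uncacheable mapping (MTRRs/page attributes, set up by the OS) keeps
the cache out of the way. The address here is made up:

#include <stdint.h>

#define DEV_FIFO ((volatile uint8_t *)0xFEC00000u)  /* hypothetical MMIO */

static uint8_t read_next_byte(void)
{
    return *DEV_FIFO;   /* every read must reach the device, not the cache */
}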


rgds
 
pigdos said:
Um daytripper, how do you figure a hard drive unilaterally
writes to main memory? Do you honestly think devices just
independently begin writing to main memory (aside from
maybe NICs) without any CPU intervention at all?

The existence of CPU intervention isn't proof that such
intervention is required by the PCI bus-mastering spec.

Depending on how the OS wishes to use resources and the
intelligence of those peripherals, many things are possible.
Video card GPUs set up and run almost all the DMA.
High-performance ethernet does likewise (except zero-copy).

-- Robert
 
Um daytripper, how do you figure a hard drive unilaterally writes to main
memory? Do you honestly think devices just independently begin writing to
main memory (aside from maybe NICs) without any CPU intervention at all?
Have you ever read up on what exactly is involved in setting up a DMA xfer?
There is also the concept of uncacheable memory addresses; maybe you should
read up on that as well.

I dub thee PigIgnorant.
 
Um daytripper, how do you figure a hard drive unilaterally writes to main
memory? Do you honestly think devices just independently begin writing to
main memory (aside from maybe NICs) without any CPU intervention at all?
Have you ever read up on what exactly is involved in setting up a DMA xfer?
There is also the concept of uncacheable memory addresses; maybe you should
read up on that as well.

Ok, game over. I've given you *way* more time than you are clearly worth.

While I don't use killfiles, I suspect our sun will burn out before I try to
help you again...

/daytripper
 
No internal multiplier? Then I'm missing something here: unless there are
128-bit or dual 64-bit paths internally, it doesn't seem to jibe that data
is arriving and leaving at 6.4GB/s but is clocked internally at 200MHz.
Even so, it doesn't seem to make sense to clock FSB addresses at 400MT/s
(again for a 200MHz BCLK) and then use a 200MHz clock for the internal
logic.

There is no internal multiplier for the BCLK within the MCH core. No need for
one. The FSB is quad-pumped, and the DIMM bus is double-pumped and
double-width. Data moves through the chip in 32 byte wide form, so the BCLK is
the right frequency for the core...
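
A quick back-of-the-envelope check of those numbers, using the 200MHz
BCLK from earlier in the thread: all three paths come out to the same
6.4GB/s, which is why no internal multiplier is needed.

#include <stdio.h>

int main(void)
{
    const double bclk = 200e6;   /* 200 MHz base clock */
    printf("FSB:   %.1f GB/s (4 x BCLK x  8 bytes)\n", bclk * 4 *  8 / 1e9);
    printf("DIMMs: %.1f GB/s (2 x BCLK x 16 bytes)\n", bclk * 2 * 16 / 1e9);
    printf("Core:  %.1f GB/s (1 x BCLK x 32 bytes)\n", bclk * 1 * 32 / 1e9);
    return 0;
}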

Cheers

/daytripper
 
W.r.t. AGP writes: these are done entirely in the AGP aperture range,
which isn't cacheable (and the AGP 3.0 spec specifically recommends making
these areas uncacheable).

I see your point on ethernet. But why would any device (aside from NICs)
begin writing to memory *independently*, without any CPU intervention? Even
in the case of NIC xfers, we can make the memory buffer uncacheable (which
would make a LOT of sense). I'm talking single-CPU setups here, no
dual-core.
 
If the CPU sets up the DMA xfer, then it would KNOW which memory addresses
are about to be invalidated, wouldn't it? I guess the problem is that it
doesn't know when they will be invalidated?
 
Give me an example of a device (aside from a NIC or AGP card) that writes to
memory without the CPU setting up the xfer in advance. Proof?
 
pigdos said:
I see your point on ethernet. But why would any device
(aside from NICs) begin writing to memory *independently*,
without any CPU intervention? Even in the case of NIC xfers,
we can make the memory buffer uncacheable (which would make a
LOT of sense). I'm talking single-CPU setups here, no dual-core.

Devices write to main memory to offload I/O tasks from the CPU.

A device can be set to do so independently of the CPU to offload
the interrupt/setup load from the CPU. This is especially
important when the CPU is loaded and would suffer high latency in
servicing the interrupt, which would jeopardize the device buffer.

NICs are the obvious example, but other I/O cards (RS-422)
could qualify. Even disks in some (database?) applications.
Or imagine a DVD player that operates with minimum CPU
intervention: the drive drops data into RAM, and the GPU processes it.

-- Robert
 
pigdos said:
If the CPU sets up the DMA xfer, then it would KNOW which
memory addresses are about to be invalidated, wouldn't it? I guess
the problem is that it doesn't know when they will be invalidated?

Timing is one thing, but the CPU could invalidate immediately
if only it could recognize the incoming addresses. It can't.

When the CPU sets up transfers, it does nothing that the
cache hardware can recognize as potentially invalidating
lines. Just a series of OUT instructions to control ports.
The cache hardware has no idea which one is the address and
which is the length. And these will vary for different controllers.
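
To illustrate (with an entirely made-up port map; outb() is the standard
glibc sys/io.h helper and needs ioperm() privileges to actually run): the
whole setup is a handful of byte-wide port writes, and nothing in them
tells the cache hardware which bytes form a memory address.

#include <sys/io.h>
#include <stdint.h>

static void setup_transfer(uint32_t phys, uint16_t len)
{
    outb(phys         & 0xff, 0x100);  /* address, low byte  */
    outb((phys >>  8) & 0xff, 0x101);  /* address, mid byte  */
    outb((phys >> 16) & 0xff, 0x102);  /* address, high byte */
    outb(len          & 0xff, 0x103);  /* length, low byte   */
    outb(len >> 8,            0x104);  /* length, high byte  */
    /* To the cache these are opaque port writes; only the driver knows
       which ones carried an address. Hence hardware snooping. */
}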

MOESI [qv] and cache snooping are the chosen solution, and
very handy for modern systems, almost all of which are SMP.

-- Robert
 