YKhan said:On the server front, they can connect things like commodity PCI-e
Infiniband cards directly to the CPU for HPC clusters.
Robert said:LOM..."Landed-on-motherboard" more likely.
Two thoughts occur: first, that motherboards are about to get a fair bit
cheaper; and second, that overclocking is about to get more complex.
Sounds like motherboards are basically going to get turned into sockets and
a few things that wouldn't work inside the CPU. I'm sure I've read somewhere
that wireless and sound will need to remain separate due to the way they
work.
Poor VIA, SIS, and ULI - they will be relegated to making commodity
south bridges, or fighting mighty Intel for a piece of the Pentium chipset
market. Nvidia and ATI have at least something to fall back on -
graphics. High-end GPUs will stay separate from CPUs at least for quite
a while. OTOH, a low-end GPU may find its way into the south bridge,
making it a bit less of a cheap commodity.
Considering the cooling requirements of even a low-end GPU (cooling fins
coming out all over the place), it's unlikely that they'll try to
integrate the GPU with the southbridge. The video chip overheats and you
lose connection to your hard drives?
Yousuf Khan
A low-end GPU like the X300 can make do with a passive heatsink, and quite
a few north bridges now need a fan even without graphics. So they'll slap a
fan on the south bridge/GPU combo, and if a fan is not enough, a BIG fan
will do. After all, they'll need to sell something, and the market for
cheap integrated chipsets will always be there. It looks like nobody at
Intel is afraid of losing the connection to RAM because the integrated
Extreme Graphics could overheat ;-)
Yousuf said:One thing nobody has mentioned yet is the sheer irony of this situation.
Intel created PCI-e as a competitor to Hypertransport, because they
refused to adhere to a standard that AMD came up with. AMD gave the
green light to PCI-e without even a fight, knowing full well that PCI-e
and HT would be compatible with each other (just slightly different
physical layers), and now it may come up with the first PCI-e integrated
into the CPU.
Yousuf Khan
YKhan said:Perhaps this will remind you?
Approval near on Intel PC-overhaul plan | CNET News.com
http://news.com.com/2100-1001-270823.html
Yousuf Khan
Del said:Yes, it says Intel got PCI-e adopted. Hypertransport is a totally
different thing, capable of driving only a few inches. It is a FSB. Why the
doof that wrote the article even mentioned it isn't clear.
Because there was a time when HT was proposed as the next generation
PCI. It was initially going to allow PCI to get faster by simply
splitting each PCI slot into its own PCI bus, with each of the PCI buses
connected over HT. Then eventually they were talking about HT gaining
its own slot connector and people using HT directly.
Both of those scenarios actually did come true, in a way. HT has become
a very popular underlying layer for PCI, PCI-X and even PCI-E. There is
also a slot connector standard for HT called HTX, but it's not
necessarily all that popular.
Tony said:There's no spec that shows exactly how far each could be driven, but I
suspect that you'll find Hypertransport and PCI-Express could achieve
comparable distances for similar data rates. My idea of "a few
inches" in computer designs is 2-3", and there are definitely HT
setups running at high data rates that go further than that (I would
guess that the furthest I've seen would be about 12" for a 16-bit,
2000MT/s link).
To the best of my knowledge there is only ONE HTX add-in card, an
Infiniband card from Pathscale. This card was recently used to set
some world records for low-latency communication in clusters.
The slot is actually VERY similar to PCI-Express (same physical
connectors) and the specs are designed to make it easy to have both
PCI-E and HTX on the same board.
Really when you get right down to it, Hypertransport and PCI-Express
started out with rather different goals but the end result is
surprisingly similar. I guess there really are only so many ways to
skin a cat.
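To put Tony's "comparable distances for similar data rates" in concrete terms, here is a quick back-of-the-envelope bandwidth calculation for the two links discussed above. The 16-bit/2000 MT/s HT figure is from Tony's post; the PCI-Express x8 comparison (2.5 GT/s lanes with 8b/10b encoding) is my own illustrative assumption, not a claim from the thread.

```python
# Rough, pre-overhead bandwidth arithmetic for the links mentioned above.

def link_bandwidth_gbps(width_bits, transfers_per_sec):
    """Raw bandwidth in gigabytes per second, per direction."""
    return width_bits * transfers_per_sec / 8 / 1e9

# A 16-bit HyperTransport link at 2000 MT/s, per direction:
ht = link_bandwidth_gbps(16, 2000e6)   # 4.0 GB/s

# A first-generation PCI-Express x8 link for comparison (assumed figures):
# 8 lanes * 2.5 Gbit/s per lane * 8/10 encoding efficiency.
pcie_x8 = 8 * 2.5e9 * (8 / 10) / 8 / 1e9   # 2.0 GB/s

print(ht, pcie_x8)
```

The point is only that the raw numbers land in the same ballpark, which is consistent with Tony's observation that the two ended up surprisingly similar.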
Tony said:To the best of my knowledge there is only ONE HTX add-in card, an
Infiniband card from Pathscale. [...]
Del said:HT can go maybe a foot, if you are really lucky. Work out the skew
budgets: at 2000 MT/s, the board is allocated less than 100 ps, as I recall.
PCI-E, on the other hand, can go several meters.
Totally different approaches.
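Del's skew-budget argument can be sketched numerically. The 100 ps board allocation is his recollection; the ~170 ps/inch FR-4 propagation delay is my own assumed typical value, not something from the HT spec.

```python
# Back-of-the-envelope skew arithmetic for a parallel link at 2000 MT/s.

BIT_TIME_PS = 1e12 / 2000e6        # 2000 MT/s -> 500 ps per transfer
SKEW_BUDGET_PS = 100               # board-level allocation Del recalls
FR4_DELAY_PS_PER_INCH = 170        # typical stripline on FR-4 (assumed)

# Fraction of the bit time the board may consume as lane-to-lane skew:
fraction = SKEW_BUDGET_PS / BIT_TIME_PS                    # 0.2

# Maximum trace-length *mismatch* between lanes that fits the budget:
max_mismatch_in = SKEW_BUDGET_PS / FR4_DELAY_PS_PER_INCH   # ~0.6 inch

print(fraction, max_mismatch_in)
```

With every lane needing to match within roughly half an inch over the whole route, it is easy to see why a foot of reach is already pushing it, while a self-clocked serial lane like PCI-E has no such lane-matching constraint.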
Yousuf said:Which is why PCI-e never got adopted as a CPU to CPU interconnect.
Yousuf Khan
David said:One never knows what the future holds. Anyway, it's pretty obvious
that parallel transmission (read HT) is the way of the past. If you
look at any high performance interconnect, they are all serial. Talk
to the Rambus guys, they know what they are doing...
HT was never envisioned to replace PCI-X, PCI or anything else.
Yousuf, you should at least try to distinguish yourself from AMD PR
personnel...
David Kanter said:One never knows what the future holds. [...]
Now, as to whether serial connections between CPUs are a good idea, I am
not entirely sure; I suspect Del is far more qualified to discuss that
topic than I am. Generally, serial connections can be driven far
faster, but there is slightly longer latency due to SERDES.
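The bandwidth-versus-latency tradeoff mentioned here can be illustrated with a toy model. Every number below is an illustrative assumption of mine (link widths, rates, and the ~5 ns SERDES penalty), chosen only to make the fixed-overhead effect visible.

```python
# Toy model: serial lanes clock faster, but SERDES adds fixed latency.

def transfer_ns(payload_bits, width_bits, gtps, serdes_ns=0.0):
    """Time to move a payload: wire time plus fixed SERDES overhead (ns)."""
    return payload_bits / width_bits / gtps + serdes_ns

PACKET = 64 * 8  # a 64-byte cache line, in bits

# Wide parallel link: 16 bits/transfer at 2 GT/s -> 32 Gbit/s, no SERDES.
parallel = transfer_ns(PACKET, width_bits=16, gtps=2.0)              # 16 ns

# Serial link with the same raw bandwidth: 4 lanes at 8 GT/s, plus an
# assumed ~5 ns serialize/deserialize penalty per hop.
serial = transfer_ns(PACKET, width_bits=4, gtps=8.0, serdes_ns=5.0)  # 21 ns

print(parallel, serial)
```

At equal raw bandwidth the serial link pays the SERDES penalty on every hop, which is exactly why the answer differs for latency-sensitive CPU-to-CPU traffic versus bandwidth-oriented I/O.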