Fortune "Most Admired" American semiconductor manufacturers: Intel vs. AMD

  • Thread starter: Stacey
The reason why they don't want a piece of that pie is because it's a
money-losing pie.

And just look at what a small piece of the market Intel-made MBs have.





To reply by email, remove the XYZ.

Lumber Cartel (tinlc) #2063. Spam this account at your own risk.

This sig censored by the Office of Home and Land Insecurity....
 
George said:
It's not clear to me how long DDR-II will survive
before it is succeeded by say the FB-DIMM interface. If you have the
memory controller interface on the CPU die, you're always going to be
hoping that your CPU generations correspond pretty closely with memory
interface standard generations.

That could be a problem. Look at how the memory configs changed during the
P4's life, from RDRAM/SDRAM to DDR333 and then dual-channel DDR.
 
I don't know about that, the "nasty" stuff I've seen in chipsets has
ALWAYS been the drivers. Actual hardware problems with the chipsets
have been quite rare in my experience, it almost always comes down to
shitty drivers. The memory controller and interchip communication
don't need any drivers, but the functions left to the chipset
vendors still need drivers all the same.

You mean the software drivers? It has been my impression that the reason
that we've had so many (different) iterations of drivers from different
mfrs, like VIA, was that their hardware was different (not necessarily
wrong) from Intel's on things like bus arbitration, buffering, timing etc.
All the different drivers did was diddle the control registers in a special
way to try to accommodate some add-in device which was overloading some
sub-part of the system, e.g. hogging a bus, filling buffers etc. As an
example, VIA's "problems" would seem to have been with their bus
arbitration scheme, which was allowing long PCI burst transfers to be
interrupted and which then had to be restarted over again - they were able
to mitigate the problem with a driver but it didn't cure the fundamental
problem.
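
As a toy illustration of why that arbitration behavior hurts (the overhead
figure, the window size and the function name below are all invented for the
example, not VIA's actual parameters): if a preempted burst must restart from
word zero, the driver's only safe mitigation is to cap bursts at or below the
preemption window, paying extra re-arbitration overhead per burst.

```c
/* Toy model: the arbiter preempts any burst longer than `window` words,
   and a preempted burst restarts from word zero.  A driver that caps
   bursts at `cap <= window` words guarantees forward progress at the
   cost of one re-arbitration per burst.  Numbers are illustrative. */
enum { ARB_OVERHEAD = 8 };   /* cycles to re-win the bus per burst */

static long cycles_to_transfer(long n, long window, long cap)
{
    if (cap > window)
        return -1;  /* every long burst is preempted and restarted: no progress */
    long bursts = (n + cap - 1) / cap;   /* ceil(n / cap) */
    return bursts * ARB_OVERHEAD + n;    /* per-burst overhead + data cycles */
}
```

With a 32-word window, capping at 32 moves 1024 words in 1280 cycles; halving
the cap to 16 costs 1536; leaving bursts uncapped at 64 never completes, which
is the "mitigate but not cure" situation described above.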

The nasty stuff I'm talking about is what one usually finds in a North
Bridge or MCH - the assignment of the memory bus to devices, memory
transaction priorities, the PC address routing to various memory types
according to their mappings - some of that stuff looks awful nasty to me to
get optimized.
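
A minimal sketch of that address-routing step, assuming a made-up range table
(real north-bridge mappings are far hairier, with holes, remapping and
overlapping windows; the ranges and target names here are illustrative only):

```c
#include <stdint.h>
#include <stddef.h>

/* Decode a physical address against a table of range registers and pick
   the destination, roughly as a north bridge (or the K8's on-die logic)
   must do for every transaction. */
enum target { T_DRAM, T_AGP, T_MMIO, T_NONE };

struct range { uint32_t base, limit; enum target dest; };

static const struct range decode_map[] = {
    { 0x00000000u, 0xBFFFFFFFu, T_DRAM },  /* 3 GB of system DRAM   */
    { 0xC0000000u, 0xCFFFFFFFu, T_AGP  },  /* AGP aperture          */
    { 0xD0000000u, 0xFFFFFFFFu, T_MMIO },  /* memory-mapped I/O/ROM */
};

static enum target route(uint32_t addr)
{
    for (size_t i = 0; i < sizeof decode_map / sizeof decode_map[0]; i++)
        if (addr >= decode_map[i].base && addr <= decode_map[i].limit)
            return decode_map[i].dest;
    return T_NONE;  /* unmapped: a real bridge would abort the cycle */
}
```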
Hopefully AMD will have thought far enough ahead that they'll
accommodate new memory technologies with existing sockets. It's not
like memory technology changes very often (DDR has been around for
nearly 4 years now), basically once they have DDR-2 support they
should be set for the life of the K8 design.

Maybe I'm mistaken but I don't see how they can do DDR-II with the same
socket as DDR - they're certainly going to need a new memory controller
with different signalling and the chances of pin-count being different are
umm, good. PCI-Express is *supposed* to be compatible with PCI at *some*
level but I wonder if AMD may have to rework the memory transaction logic
in their current CPUs. It's not clear to me how long DDR-II will survive
before it is succeeded by say the FB-DIMM interface. If you have the
memory controller interface on the CPU die, you're always going to be
hoping that your CPU generations correspond pretty closely with memory
interface standard generations.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 

Thus spake someone who never tried to get the KT133A chipset to work
with multiple DMA transfers and Creative Labs SB cards. 8>.

Ohh I've had more than my share of issues with Creative Labs cards,
and they almost always come down to the fact that Creative writes
absolutely abysmal drivers for their sound cards! Not that hardware
problems are non-existent, just that the vast majority of problems
end up pointing to the drivers, not the hardware itself.

Besides which much DMA nastiness is still in the hands of chipset
vendors with AMD K8 chipsets, not on the processor.
 
You mean the software drivers?

Yup, them's the beasts of which I speak!
It has been my impression that the reason
that we've had so many (different) iterations of drivers from different
mfrs, like VIA, was that their hardware was different (not necessarily
wrong) from Intel's on things like bus arbitration, buffering, timing etc.
All the different drivers did was diddle the control registers in a special
way to try to accommodate some add-in device which was overloading some
sub-part of the system, e.g. hogging a bus, filling buffers etc. As an
example, VIA's "problems" would seem to have been with their bus
arbitration scheme, which was allowing long PCI burst transfers to be
interrupted and which then had to be restarted over again - they were able
to mitigate the problem with a driver but it didn't cure the fundamental
problem.

Occasionally it's just a bit of bit twiddling of the hardware, but
more often than not it seems to be just crazy odd-ball incompatibilities
with different versions/patches of the operating system and how the
driver is telling the OS to talk to the hardware. I've mostly given
up trying to figure out just WHY it is that these things don't work
and simply gone the easy road of not using VIA chipsets.

Either way, all that PCI bus, DMA, bus arbitration, buffering, etc. is
all still largely in the hands of the chipset vendors.
The nasty stuff I'm talking about is what one usually finds in a North
Bridge or MCH - the assignment of the memory bus to devices, memory
transaction priorities, the PC address routing to various memory types
according to their mappings - some of that stuff looks awful nasty to me to
get optimized.

Sure, there are some tricky points with regards to the memory
controller, but even VIA seemed to get most of those sorted out as far
back as their KT266A chipset. Ever since then their memory
controllers have been fairly trouble free and within the same ballpark
as what Intel and nVidia were putting out. They're still working on
getting the latency down to the nVidia/Intel level (I've mentioned it
before and I'll say it again, nVidia and Intel both did a real bang-up
job on memory latency with their nForce2 and i865/i875 chipsets
respectively), but at least they're pretty close.

Now, don't get me wrong, I think that integrating the memory
controller is a GREAT idea, and one that Intel will eventually have to
adopt as well. One area of nastiness that it definitely DOES remove
from the chipset vendors hands is ECC. EVERY Athlon64 system out
there should always support ECC memory, no matter how badly VIA
manages to screw things up.
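
For the ECC point, here is roughly the principle the on-die controller has to
implement, shown as a toy Hamming(12,8) single-error-correcting code over one
byte. This is only the idea: real memory controllers use wider SECDED codes
over 64-bit words, and the function names here are invented.

```c
#include <stdint.h>

/* Encode an 8-bit value into a 12-bit Hamming(12,8) codeword.
   Bit positions are 1-based; parity bits sit at the power-of-two
   positions 1, 2, 4 and 8, data bits fill the rest. */
static uint16_t hamming_encode(uint8_t data)
{
    uint16_t code = 0;
    int d = 0;
    for (int pos = 1; pos <= 12; pos++) {
        if ((pos & (pos - 1)) == 0)       /* power of two: parity slot */
            continue;
        if (data & (1 << d))
            code |= (uint16_t)(1 << (pos - 1));
        d++;
    }
    /* Each parity bit covers the positions whose index has that bit set;
       set it so the covered bits XOR to zero. */
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((pos & p) && (code & (1 << (pos - 1))))
                parity ^= 1;
        if (parity)
            code |= (uint16_t)(1 << (p - 1));
    }
    return code;
}

/* Correct a single-bit error (if any) and return the data byte. */
static uint8_t hamming_decode(uint16_t code)
{
    int syndrome = 0;
    for (int pos = 1; pos <= 12; pos++)
        if (code & (1 << (pos - 1)))
            syndrome ^= pos;              /* XOR of set bit positions */
    if (syndrome)                         /* nonzero: that position flipped */
        code ^= (uint16_t)(1 << (syndrome - 1));
    uint8_t data = 0;
    int d = 0;
    for (int pos = 1; pos <= 12; pos++) {
        if ((pos & (pos - 1)) == 0)
            continue;
        if (code & (1 << (pos - 1)))
            data |= (uint8_t)(1 << d);
        d++;
    }
    return data;
}
```

Any single flipped bit in the 12-bit word, data or parity, is located by the
syndrome and corrected, which is exactly the guarantee a chipset vendor can no
longer break once the logic lives on the CPU die.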

However there is still a lot of nastiness left to the chipset vendors,
and a lot of the major reliability problems are in those areas.
Really the memory controller is the only part of the chipset that has
been brought on-die (assuming that you're not doing I/O directly off
hypertransport links, a la Cray Red Storm). There are still plenty of
ways for VIA to screw things up IMO, and that's the main reason why
I've been somewhat hesitant to recommend Athlon64 systems to most
people (I'm still waiting for nVidia to get their nForce3 250 chipset
out).
Maybe I'm mistaken but I don't see how they can do DDR-II with the same
socket as DDR - they're certainly going to need a new memory controller
with different signalling and the chances of pin-count being different are
umm, good.

Sure is a good thing that AMD has a new socket planned anyway then,
isn't it? :>

There are also always a handful of unused pins in any socket design.
Current Athlon64/Opteron chips might not support DDR2, but I suspect
that future ones will, especially for Socket 939. Socket 754 might
never support DDR2, but then again, it's likely to be phased out
towards the end of the year.
PCI-Express is *supposed* to be compatible with PCI at *some*
level but I wonder if AMD may have to rework the memory transaction logic
in their current CPUs.

PCI-Express is a total non-issue. All you need is a simple
HT-to-PCI-Express bridge. nVidia and VIA have already sampled theirs and
AMD will probably have a bridge of their own at some point in the near
future. Definitely zero chance of a socket change being required
here.

HyperTransport 2.0, if it's ever implemented in the K8 chips, will
make it cheaper/easier, but it's certainly not necessary.
It's not clear to me how long DDR-II will survive
before it is succeeded by say the FB-DIMM interface. If you have the
memory controller interface on the CPU die, you're always going to be
hoping that your CPU generations correspond pretty closely with memory
interface standard generations.

I don't think it's really that tricky. New processors are coming out
every year or so, even if they're only slight updates to existing
chips. It only takes a little bit of foresight to prevent
incompatibilities with upcoming memory technologies.
 
The reason why they don't want a piece of that pie is because it's a
money-losing pie. Intel does it because, as you mention, it helps
sell processors where Intel makes back the money to support their
losses on chipsets and motherboards. AMD, being a smaller company,
would probably not sell enough extra processors to make it all that
worth while, at least at the consumer level.

And this highlights one of the essential differences between the
companies. Intel is unwilling to have chipset availability/quality
issues gate the introduction and sales of new, high-margin CPUs, so
they put a high priority onto making them available when necessary.
It's just another cost of the CPU business, for them.

Sure, they screw it up sometimes, but it's under their control, and
they can also prioritize the solutions. They always get it right at
some point. Once they get a chipset tweaked for a certain generation
of CPU, it's hard to find a better one.

It helps that they have the money to put into it, of course. This
makes it one of these critical mass issues that AMD may never be able
to compete on. I wouldn't be surprised if their chipset design staff
is larger than AMD's CPU design staff.


Neil Maxwell - I don't speak for my employer
 
Stacey said:
There is no "Whole solution". With Intel you can buy a board/CPU made and
-supported- by the same company. With AMD you have to buy a board from one
place, with a chipset made by someone else and then a CPU from a different
company. It's too easy for any of them to blame the other and corporate
buyers know it 'cause they probably do it themselves!

Big companies adopting Opteron may change that; before, the Athlon/Duron
and, before them, the K6 were available in consumer-market machines, but
nothing high-end except in the gamer sense of high-end.

These days, you can buy an IBM dual Opteron system right now, and while I'm
not sure they're shipping yet, both Sun and HP have definite plans. I don't
know who's making the chipsets for those (AMD itself? I may have an IBM
server to play with in a few months to answer that...) but in the end
acceptance by the enterprise-class vendors matters MORE than who's making
the chipset and motherboard.
 
On Mon, 01 Mar 2004 23:38:11 -0500, George Macdonald wrote:


Occasionally it's just a bit of bit twiddling of the hardware, but
more often than not it seems to be just crazy odd-ball incompatibilities
with different versions/patches of the operating system and how the
driver is telling the OS to talk to the hardware. I've mostly given
up trying to figure out just WHY it is that these things don't work
and simply gone the easy road of not using VIA chipsets.

Well it was often due to VIA chipsets' apparent inability to cope with bus
greedy drivers produced by several mfrs to bloat benchmark results - IOW
timing and arbitration problems. Those mfrs' attitude was: if it works
with an Intel chipset, it should work with any chipset... and if it
doesn't, it's not our fault. Obviously there are different approaches to
get to PCI standard conforming hardware; it's also possible that Intel has
implemented something which goes beyond the PCI standard... but which just
happens to not be published. :-P
Either way, all that PCI bus, DMA, bus arbitration, buffering, etc. is
all still largely in the hands of the chipset vendors.

All DMA has to now pass through the CPU on-die memory controller (as well
as the chipset at the other end of the HT channel) and its arbitration
logic, where everything to do with memory transaction ordering & priorities
has to be resolved just as in a North Bridge. Obviously the new
chipset/PCI orderings, priorities, transaction retry etc. have to agree
with the CPU's memory transaction decisions. From the CPU itself, as with a
traditional North Bridge, addresses from the CPU have to be routed in the
CPU on-die logic to the correct destination according to physical mappings
in MRRs... MRRs which are more akin to current North Bridge ones and I'd
think separate from the usual CPU MRRs.
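
A crude sketch of the priority-resolution piece of that logic. The requestor
set, names and priority order below are invented for illustration; real
arbiters also enforce ordering rules, retries and anti-starvation, all of
which this ignores.

```c
/* Toy fixed-priority memory arbiter: among pending requestors, grant
   the highest-priority one.  Lower enum value = higher priority. */
enum requestor { RQ_REFRESH, RQ_CPU, RQ_DMA, RQ_COUNT };

static int arbitrate(const int pending[RQ_COUNT])
{
    for (int r = 0; r < RQ_COUNT; r++)
        if (pending[r])
            return r;   /* grant the highest-priority pending request */
    return -1;          /* nothing pending: bus idle */
}
```

Even this trivial scheme shows the coordination problem: the chipset at the
far end of the HT link has to make retry/ordering choices that agree with
whatever policy this on-die logic implements.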
Sure, there are some tricky points with regards to the memory
controller, but even VIA seemed to get most of those sorted out as far
back as their KT266A chipset. Ever since then their memory
controllers have been fairly trouble free and within the same ballpark
as what Intel and nVidia were putting out. They're still working on
getting the latency down to the nVidia/Intel level (I've mentioned it
before and I'll say it again, nVidia and Intel both did a real bang-up
job on memory latency with their nForce2 and i865/i875 chipsets
respectively), but at least they're pretty close.

However there is still a lot of nastiness left to the chipset vendors,
and a lot of the major reliability problems are in those areas.
Really the memory controller is the only part of the chipset that has
been brought on-die (assuming that you're not doing I/O directly off
hypertransport links, a la Cray Red Storm). There are still plenty of
ways for VIA to screw things up IMO, and that's the main reason why
I've been somewhat hesitant to recommend Athlon64 systems to most
people (I'm still waiting for nVidia to get their nForce3 250 chipset
out).

I'm not talking about just the memory controller which talks to the memory
array - you have much more than that in the on-die CPU memory transaction
logic. Those are the nasty bits I'm talking about and as you say there are
still ways to bugger it up at the chipset level; the job of the chipset
vendor is nevertheless a much simpler one I believe.

PCI-Express is a total non-issue. All you need is a simple HT -
PCI-Express bridge. nVidia and VIA have already sampled theirs and
AMD will probably have a bridge of their own at some point in the near
future. Definitely zero chance of a socket change being required
here.

Your confidence is admirable.:-)
I don't think it's really that tricky. New processors are coming out
every year or so, even if they're only slight updates to existing
chips. It only takes a little bit of foresight to prevent
incompatibilities with upcoming memory technologies.

All I'm saying is that that part of the CPU die, the memory controller,
still has to be reworked. It may not be a big deal but it's something
where AMD could potentially stub their toe in the market.

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
 
Well it was often due to VIA chipsets' apparent inability to cope with bus
greedy drivers produced by several mfrs to bloat benchmark results - IOW
timing and arbitration problems. Those mfrs' attitude was: if it works
with an Intel chipset, it should work with any chipset... and if it
doesn't, it's not our fault.

I've always attributed the problems Creative Labs has with their
drivers more to incompetence than malice. I just haven't seen any
evidence at all that Creative is capable of writing drivers that work
properly and it ends up being almost pure luck that they (sometimes)
work on Intel chipsets :>
Obviously there are different approaches to
get to PCI standard conforming hardware; it's also possible that Intel has
implemented something which goes beyond the PCI standard... but which just
happens to not be published. :-P

The standard always only tells a small part of the story. If drivers
were written well, the small differences could be easily handled. As
mentioned above, I have seen no evidence that Creative Labs has EVER
managed to write drivers properly though.

When you take Creative Labs out of the picture, basically all of the
bus arbitration and DMA master problems pretty much disappear on VIA
chipsets. Then it just comes down to their crummy IDE drivers and
odd-ball problems related to OS version, patches and driver revisions.
All DMA has to now pass through the CPU on-die memory controller (as well
as the chipset at the other end of the HT channel) and its arbitration
logic, where everything to do with memory transaction ordering & priorities
has to be resolved just as in a North Bridge. Obviously the new
chipset/PCI orderings, priorities, transaction retry etc. have to agree
with the CPU's memory transaction decisions. From the CPU itself, as with a
traditional North Bridge, addresses from the CPU have to be routed in the
CPU on-die logic to the correct destination according to physical mappings
in MRRs... MRRs which are more akin to current North Bridge ones and I'd
think separate from the usual CPU MRRs.

A small chunk of things has been moved on-die, but I'd still say that
the bulk of it is in the hands of the chipset vendors.
Your confidence is admirable.:-)

It's not that high, I have only very limited confidence that VIA (at
least) will implement PCI-Express PROPERLY, but they will at least
implement it! :>

This could be a bit of a test for nVidia, it's the first totally new
bus that they've had to implement. It'll be interesting to see how
they handle it as compared to VIA and Intel.

In any case, as I mentioned, the samples are already out there, so
it's clear that the plan is definitely to do PCI-Express over
relatively current hypertransport (HT will be bumped up to 1.0GHz, but
otherwise unchanged in the near future).
All I'm saying is that that part of the CPU die, the memory controller,
still has to be reworked. It may not be a big deal but it's something
where AMD could potentially stub their toe in the market.

I'll grant that there is a potential for problems, but I really don't
see it as a big stumbling block, more just like a small snag in the
carpet. The memory market isn't really the rapid-changing technology
field that some people have made it out to be. Other than the brief
fling with RDRAM the memory market has been VERY stable. A few clock
speed bumps every now and then, but still it's mostly just been SDRAM
for about 4 years and now DDR for about 4 years. Starting in 2005
we'll switch to DDR2 and it will probably hang around for a little
while (perhaps not 4 years, but certainly a year or two).
 