The death of non-x86 is now at hand?

  • Thread starter: Yousuf Khan
Who's limiting it to just processes of relevance to computing? Isn't
a straight line plot how *everything* works?! :>

Well, if you work with some of the people I've worked with, you *can*
draw a straight line through anything, and I mean anything. No need
even to confuse things by running a correlation coefficient or
anything.

Notice that the y-axis in the law I named after you is logarithmic, so as
to fit most predictions for the IT industry. If you like, I'll be
happy to credit you with a Generalized Hill's Law, in which the y axis
is allowed to take on any mapping you want. Then *all* monotonic
processes map into a Generalized Hill Graph (tm).

Coping with non-monotonic functions is beyond the scope of this post.
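(The joke above can be made literal with a few lines of Python; everything here is illustrative, not part of anyone's actual "law". Given any monotonic series, you can always pick a y-axis mapping that turns it into a perfectly straight line.)

```python
# Tongue-in-cheek "Generalized Hill's Law": any monotonic series can be
# plotted as a straight line if you're allowed to choose the y-axis mapping.

def straightening_map(ys):
    """Return a y-axis mapping (as a dict) that sends each observed
    monotonic value onto the straight line y = x."""
    return {y: i for i, y in enumerate(ys)}

data = [1, 2, 4, 8, 16, 32]          # monotonic (here: exponential growth)
remap = straightening_map(data)
print([remap[y] for y in data])      # -> [0, 1, 2, 3, 4, 5]: a straight line
```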

RM
 
Tony Hill said:
x86 isn't doing all that bad in the embedded market actually. I've
used it before in an embedded system design for the simple reason that
software development was MUCH easier. Now admittedly we weren't all
that constrained in terms of size or power; the box was going to sit
on a machine that was half the size of an American football field.
Still, the real strength of x86 was that it was EASY to develop
software for it. We could do essentially all the development and
testing on a plain old desktop system running Linux. We could even
get an OS image all done up on a desktop as well. This all
ended up being VERY handy since we didn't receive the hardware until
just over a week before the final product had to be shipped out.

You can start to see this sort of thing happen for a lot of embedded
projects. The ease of development for embedded x86 systems often
outweighs any potential loss in the performance/watt measure. I don't
expect to see any x86 chips in smoke detectors any time soon, but
things like set-top boxes and many industrial processes can benefit a
lot here.

I guess the embedded market is a little wider than I was thinking when I
wrote that. True, it could do very well in the set top box type market. But
when it comes to small embedded apps, Cell Phones/PDA/PDA-Phone etc. x86 is
going to have a hard time competing with the likes of ARM and MIPS. Heck,
x86 is even having a hard time in the console gaming market. X-Box has no
foothold to speak of in Japan because it's so large... Rumors are MS is
going to switch to a RISC chip for X-Box 2.

Carlo
 
Tony Hill said:
I seem to remember that AMD actually competing very effectively in the
late 386s days with their 40MHz 386DX chip while Intel was just
starting out with the (at the time) very expensive 486s. And back in
the 286 days there were quite a number of competitors (including AMD
back then).

In the 286 days, there were two non-Intel 286 processors, one from AMD, and
one from Harris. I think both of them were endorsed by Intel as
second-sources to its own 286s. AMD did push the speed of their 286 out to
16MHz, higher than Intel's 12MHz maximum, but that seems to be the only real
difference between them at that time.

It was with the 386 processor generation that AMD and Intel first began
having their legal troubles with each other. AMD began bringing out its own
386s towards the end of the 386 generation and the beginning of the 486
generation, which was the very late '80s, 1989 as a matter of fact. So for the
majority of the '80s, it really had no competition; even its competitors
were simply partners which produced the same chips as Intel with their full
permission.

Also very early on, Intel had some competition in the form of the NEC V20
and V30 processors which were 8086-compatibles, but a separate design. They
never really took off greatly. IBM definitely only bought from Intel and its
official second sources at the time, so it wasn't going to buy any NEC
work-alikes.
I'd say that it's a bit of both there, particularly if you look at
4-way servers. The Opteron seems to totally smack the XeonMP
around any time you start playing with 4P systems. On 2P systems the
shared bandwidth of the Xeon doesn't seem to hurt as much, though the
Opteron does almost always win here as well.

The only way they seem to know how to make Xeon win here is to create a
custom chipset that turns Xeon into a ccNUMA architecture. Opteron already
does this out of the box without anything special required.
The glue around the Itanium is currently allowing it to perform a lot
better in very large servers than anything we've seen from the Xeon.
Of course, we haven't really had a chance to see what the Opteron can
really do in large servers since no one has made anything more than a
4P system.

Yes, again, with Itanium, they need special chipsets to take it out of the
shared-memory domain and into the ccNUMA domain. If people are going to be
spending money to develop chipsets around Itanium just to make it perform
well, then spending that money on something else that already performs well
out of the box may net you better results.

I guess once Cray Strider systems become available, we'll know how
well an external interface performs for Opteron then.
Hehe, I'd like to see that Toyota notebook, complete with nondescript
styling and a boring paint job :> Actually a Centrino Toyota notebook
might just work: "sure it doesn't look very exciting, but it's
extremely reliable and gets excellent mileage (low power
consumption)".

I think Intel is pretty well positioned in the laptop market for the
time being. AMD/Acer might have a bit of a win on their hands with
the Ferrari notebook, but really Intel has a great base of technology
in their Pentium-M and i855 chipset.

AMD also recently started sponsoring the Ducati motorcycle racing team. Yet
another avenue of marketing available for them now.
AMD does have some options here,
particularly if they can do something with the AthlonXP-M line on a
90nm fab process. If they could combine some of the features of the
Athlon64/Opteron and the very low power consumption of the AthlonXP-M
(that chip is actually in the same basic power range as the
Pentium-M), they could have a decent competitor. I'm just not sure
that AMD has the resources to develop two completely separate cores
like Intel does (err, I guess Intel develops 3 cores).

I think power-savings is important, but I don't think anybody cares if it
gets to an extremely high level. I doubt anyone would notice too much
difference between a 5 hour battery life vs. a 7 hour one.
Of course, VIA could start eating into the low-end here if they can
follow through on their plans effectively. Their chips are getting
some pretty impressive power consumption numbers and, perhaps more
importantly, combining that with VERY low costs. VIA has yet to get
the marketing going well, but the opportunity is there. VIA could
potentially start leading a low-cost notebook revolution in much the
same way that the K6 did on the desktop. I'm sure there are a lot of
people who would be willing to sacrifice some performance for a $500
laptop instead of a $1000 one. Intel's Celeron-M seems to be a
non-starter so far (though it's still early), while the Celeron Mobile
consumes a fair chunk of power while offering terrible performance.

Are you talking about VIA chipsets, or the VIA processor?

Yousuf Khan
 
Yousuf Khan said:
I dabbled in 6502 assembly

my first bitbanging
When I then got a PC, I was amazed to find that instead of machine
language monitors they had those highly convenient assemblers,
which allowed you to create machine language offline and run it
only once you were completely done! Wow, now that was convenience.
:-)

You missed Turbo Assembler then; it was a huge leap from the C64 cartridge
monitor. Going for x86 on my brand new 386 with Borland Tasm was easy too:
different mnemonics, but assembler is assembler.

mmm LDA STA days ...

Pozdrawiam.
 
In the 286 days, there were two non-Intel 286 processors, one from AMD, and
one from Harris. I think both of them were endorsed by Intel as
second-sources to its own 286s. AMD did push the speed of their 286 out to
16Mhz, higher than Intel's 12Mhz maximum, but that seems to be the only real
difference between them at that time.

And Harris got theirs all the way up to 25 MHz, for all the good it
did them. I think I still have one of these in the dusty old parts
bin, along with crystal sockets and such.

They were exact copies of the Intel chips, so the clock rate was all
there was to it.



Neil Maxwell - I don't speak for my employer
 
In comp.sys.ibm.pc.hardware.chips Yousuf Khan said:
Also very early on, Intel had some competition in the form of the NEC V20
and V30 processors which were 8086-compatibles, but a separate design.

The best $8 upgrade I ever saw, by the late 1980s, and not a bad $20 upgrade
in the mid 1980s. A 4.77MHz V20 was a noticeable speed bump over a stock
8088.
They never really took off greatly.

Got used in a few clones, and it was a reasonably popular hobbyist upgrade
since they were socket-compatible.
I think power-savings is important, but I don't think anybody cares if it
gets to an extremely high level. I doubt anyone would notice too much
difference between a 5 hour battery life vs. a 7 hour one.

With realistic workloads, 5 hours off one battery would be an improvement,
even over many Pentium-M laptops.
Are you talking about VIA chipsets, or the VIA processor?

I imagine he's talking about the latest C3-derived chips.

What I'd love to see is what Intel could come up with for _really_ low power
processors; the Pentium-M runs around 12W for the 1.2GHz model, or 7W for
the 900MHz model. The two lowest power Intel chips I could find were the
embedded ULV Celeron 400 which runs on 4.2W, and the embedded ULV P166MMX
which runs on 4.1W. Now the latter is probably too slow to do non-embedded
work these days, but the ULV C400 might well make an intriguing subnotebook
chip.

What does a P-M 900MHz cost? How about the ULV C400? And which process was
the C400 made on...?
 
RusH said:
You missed Turbo Assembler then; it was a huge leap from the C64 cartridge
monitor. Going for x86 on my brand new 386 with Borland Tasm was easy too:
different mnemonics, but assembler is assembler.

I assume the Turbo Assembler you're talking about was one for C64, then? The
only Turbo Assembler I'm familiar with was Borland's Tasm for PCs.

Yousuf Khan
 
Tony Hill said:
x86 isn't doing all that bad in the embedded market actually.
I've used it before in an embedded system design for the simple
reason that software development was MUCH easier. Now admittedly
we weren't all that constrained in terms of size or power; the
box was going to sit on a machine that was half the size of an
American football field. Still, the real strength of x86 was that
it was EASY to develop software for it. We could do essentially
all the development and testing on a plain old desktop system
running Linux. We could even get an OS image all done up
on a desktop as well. This all ended up being VERY handy since we
didn't receive the hardware until just over a week before the
final product had to be shipped out.

You can start to see this sort of thing happen for a lot of
embedded projects. The ease of development for embedded x86
systems often outweighs any potential loss in the
performance/watt measure. I don't expect to see any x86 chips in
smoke detectors any time soon, but things like set-top boxes and
many industrial processes can benefit a lot here.

You seem to mistake the real reason why it all went so well for you. It
wasn't some magical x86 power that made your project easy, it was a good
development base built around Linux (gcc and all). With uClinux you can
develop on any supported system for any other supported system. ARM is
particularly strong in this area, but there are others like MIPS or
M68k. Emulators and compilers have reached the point where you can expect
the same level of confidence while debugging as if it were your actual
target hardware.

Pozdrawiam.
 
Yousuf Khan said:
I assume the Turbo Assembler you're talking about was one for C64,
then? The only Turbo Assembler I'm familiar with was Borland's
Tasm for PCs.

Exactly, there was Turbo Assembler for C64. At first I was so amazed
that you could use those magical thingies called labels and constants.
Assembler was never simpler before.

http://www.c64.ch/programming/ta-docs.php


Pozdrawiam.
 
I guess the embedded market is a little wider than I was thinking when I
wrote that. True, it could do very well in the set top box type market. But
when it comes to small embedded apps, Cell Phones/PDA/PDA-Phone etc. x86 is
going to have a hard time competing with the likes of ARM and MIPS. Heck,

For the time being x86 isn't really the best option for the really low
power (< 2W or so) devices. Many embedded applications fall into
this category and many do not. It's not just set-top boxes though,
there are lots of industrial applications where a 2-5W processor is no
problem. Set-top boxes are just one of the most common uses of x86
CPUs in an embedded application that you're likely to see.
x86 is even having a hard time in the console gaming market. X-Box has no
foothold to speak of in Japan because it's so large... Rumors are MS is
going to switch to a RISC chip for X-Box 2.

I would not really call a gaming console an embedded application.
Even though on the surface it might seem to be rather similar to a
set-top box, I would say that they are quite different. Reason being
a set-top box is really intended only to run a fairly small set of
applications that are bundled with the box when you purchase it. A
game console is designed to run software purchased later. I'm sure
that this is by no means a true distinction of what is/is not an
embedded application (I'm quite certain that a lot of people wouldn't
consider either set-top boxes or gaming consoles to be "embedded"
applications), but at least for this particular situation it would be
how I would differentiate them.

As for the X-Box2 (or X-Box Next as it's sometimes being called), it's
no rumor, MS is going to be using a PowerPC chip from IBM. I don't
think that physical size has anything to do with this change though,
or even power consumption. The 733MHz Celeron core used on the
original X-Box probably only gobbled up 15W or less, not really that
significant when you consider there are now 50W+ processors in
notebooks. Besides, I expect that the PPC chip in the next X-Box will
probably consume more power than the x86 chip in the current one.
 
Notice that the y-axis in the law I named after you is logarithmic, so as
to fit most predictions for the IT industry. If you like, I'll be
happy to credit you with a Generalized Hill's Law, in which the y axis
is allowed to take on any mapping you want. Then *all* monotonic
processes map into a Generalized Hill Graph (tm).

Hmm... Generalized Hill's Law eh? I like the ring of that, it could
definitely work! :>
 
The only way they seem to know how to make Xeon win here is to create a
custom chipset that turns Xeon into a ccNUMA architecture. Opteron already
does this out of the box without anything special required.

Unfortunately for AMD, I'm not sure that this alone is a big enough
reason for the design to succeed. Simply being better and cheaper has
never been enough when it comes to computers.
Yes, again, with Itanium, they need special chipsets to take it out of the
shared-memory domain and into the ccNUMA domain. If people are going to be
spending money to develop chipsets around Itanium just to make it perform
well, then spending that money on something else that already performs well
out of the box may net you better results.

I guess once Cray Strider systems become available, we'll know how
well an external interface performs for Opteron then.

I'm not sure that we'll really get to see anything too interesting
come out of this Cray system. Sure, it'll make for some decent HPC
numbers, but it doesn't look to me like Cray is going to be selling
these systems in competition with Dell and HP's regular server
line-up.
AMD also recently started sponsoring the Ducati motorcycle racing team. Yet
another avenue of marketing available for them now.

Yes, but I doubt that these are really big volume sellers. Nice
marketing, sure, but they aren't going to do much to shake up the
market. Intel's Centrino marketing campaign, on the other hand, has
shaken up the notebook market.
I think power-savings is important, but I don't think anybody cares if it
gets to an extremely high level. I doubt anyone would notice too much
difference between a 5 hour battery life vs. a 7 hour one.

Well we aren't really at 5 hour battery life for most Pentium-M
notebooks yet, so there's still a ways to go. Also a lower powered
CPU allows the use of a larger screen or a faster hard disk without
requiring more battery power.
Are you talking about VIA chipsets, or the VIA processor?

VIA processors. Unfortunately the two tend to go together, and it's
likely that the (generally crummy) VIA chipsets are preventing the VIA
processors from gaining much ground.

VIA's processors might not win any number crunching contests, but they
seem to offer reasonably compelling performance at the low-end for a
VERY low cost (most of their chips seem to sell for about $20-$25 in
quantity) and low power consumption. Their roadmap has the
performance going up by a decent amount in the near future without a
large increase in power consumption or cost.
 
Tony Hill said:
Unfortunately for AMD, I'm not sure that this alone is a big enough
reason for the design to succeed. Simply being better and cheaper has
never been enough when it comes to computers.

It also has something to do with having ready-made infrastructure available.
I think this is really the big factor behind the success, that AMD has
already done most of the groundwork ahead of time for systems development.
I'm not sure that we'll really get to see anything too interesting
come out of this Cray system. Sure, it'll make for some decent HPC
numbers, but it doesn't look to me like Cray is going to be selling
these systems in competition with Dell and HP's regular server
line-up.

Cray won't be competing against the two or four processor servers obviously,
but it's going to compete against HP and Sun and IBM's 64-processor units.

Yousuf Khan
 
Tony Hill said:
Well we aren't really at 5 hour battery life for most Pentium-M
notebooks yet, so there's still a ways to go. Also a lower powered
CPU allows the use of a larger screen or a faster hard disk without
requiring more battery power.

Tony, my casual observations of laptops tell me that the display is
the power hog, followed by the CPU and then the hard disk. How far
off am I?

VIA's processors might not win any number crunching contests, but they
seem to offer reasonably compelling performance at the low-end for a
VERY low cost (most of their chips seem to sell for about $20-$25 in
quantity) and low power consumption. Their roadmap has the
performance going up by a decent amount in the near future without a
large increase in power consumption or cost.

Via CPUs might be optimum smarts/power chips for portables, as
modified by a low cost constraint. If only the darned TFT display
didn't consume so much power...
 
Tony, my casual observations of laptops tell me that the display is
the power hog, followed by the CPU and then the hard disk. How far
off am I?
Slide 19 of

http://nesl.ee.ucla.edu/courses/ee202a/2002f/lectures/L07_4pp.pdf

shows the display consuming 36% of the power and CPU/memory only 21%
(fall 2002). Funny thing is, I've had my legs made overly warm even
by my Centrino laptop, with which I am very happy, but I can't ever
remember being made uncomfortable by the heat of the display. :-P.

Via CPUs might be optimum smarts/power chips for portables, as
modified by a low cost constraint. If only the darned TFT display
didn't consume so much power...
and weren't so expensive, making it kind of strange to look for a
sub-$100 processor to pair up with one. :-P.

RM
 
Nate said:
What I'd love to see is what Intel could come up with for _really_ low power
processors; the Pentium-M runs around 12W for the 1.2GHz model, or 7W for
the 900MHz model. The two lowest power Intel chips I could find were the
embedded ULV Celeron 400 which runs on 4.2W, and the embedded ULV P166MMX
which runs on 4.1W. Now the latter is probably too slow to do non-embedded
work these days, but the ULV C400 might well make an intriguing subnotebook
chip.

What does a P-M 900MHz cost? How about the ULV C400? And which process was
the C400 made on...?

Both ULV Celerons in the µFC-BGA package (400 MHz and 650 MHz) were
designed with a 130 nm process.

http://intel.com/design/intarch/datashts/27380402.pdf

In a laptop, what fraction of the battery consumption is the CPU
responsible for? How much energy can a battery store?
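(A back-of-the-envelope sketch of those two questions; every figure below is an illustrative assumption, not a measurement. A typical laptop pack of that era stored roughly 40-50 Wh, and runtime is just stored energy divided by average system draw.)

```python
# Rough laptop battery-life estimate.
# All component draws and the pack capacity are illustrative assumptions.

def runtime_hours(battery_wh, system_watts):
    """Hours of runtime = stored energy / average draw."""
    return battery_wh / system_watts

BATTERY_WH = 48.0          # assumed ~48 Wh pack (e.g. 10.8 V x 4.4 Ah)

# Assumed average component draws for a light-load Pentium-M system:
draws = {
    "display": 8.0,        # TFT in reduced-power mode
    "cpu": 6.0,            # Pentium-M under light load
    "disk+chipset+rest": 6.0,
}
total = sum(draws.values())            # 20.0 W
cpu_fraction = draws["cpu"] / total    # CPU's share of the battery drain

print(f"Total draw: {total:.1f} W, CPU share: {cpu_fraction:.0%}")
print(f"Estimated runtime: {runtime_hours(BATTERY_WH, total):.1f} h")
```

Under these assumptions the CPU is responsible for roughly a third of the drain, and the whole system runs a little under two and a half hours.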
 
Robert Myers said:
Slide 19 of

http://nesl.ee.ucla.edu/courses/ee202a/2002f/lectures/L07_4pp.pdf

shows the display consuming 36% of the power and CPU/memory only 21%
(fall 2002). Funny thing is, I've had my legs made overly warm even
by my Centrino laptop, with which I am very happy, but I can't ever
remember being made uncomfortable by the heat of the display. :-P.

Probably because the display is efficiently converting most of its
electricity into light, whereas the CPU is inefficiently radiating some of
its energy off into space.

Yousuf Khan
 
Tony, my casual observations of laptops tell me that the display is
the power hog, followed by the CPU and then the hard disk. How far
off am I?

Depends on the chip you're using. The "Mobile" Pentium4 processors can
consume up to 70W when going full out, and even for fairly limited
processing they'll use 30-40W. Most laptop displays use a maximum of
about 20W, and usually they have a reduced power mode when running off
batteries.

On the other hand, the Pentium-M uses only 25W TDP, and will often be
using only 5-10W during light processing.
Via CPUs might be optimum smarts/power chips for portables, as
modified by a low cost constraint. If only the darned TFT display
didn't consume so much power...

Well unfortunately finding specifications for this is a bit tough, but
even 15" desktop TFT screens only consume 25-30W of power. Presumably
a laptop TFT, with their reduced power operating mode, would be
noticeably less than that.

I would throw out a guess of 10-15W for a 14" or 15" TFT screen
running off batteries, maybe even less. Certainly the processor and
the screen are well within the same ballpark. Compare that to the
Mobile Celeron with a TDP of 35W and very few power saving features
and the VIA's ~10W CPU starts to look fairly reasonable, especially
when it only costs $25.
 
Tony said:
Depends on the chip you're using. The "Mobile" Pentium4 processors
can consume up to 70W when going full out, and even for fairly
limited processing they'll use 30-40W. Most laptop displays use a
maximum of about 20W, and usually they have a reduced power mode
when running off batteries.

On the other hand, the Pentium-M uses only 25W TDP, and will often
be using only 5-10W during light processing.

Only the latest models have a TDP spec of 25 W:

1.70 GHz @ 1.484 V 24.5 W
1.60 GHz @ 1.484 V 24.5 W
1.50 GHz @ 1.484 V 24.5 W
1.40 GHz @ 1.484 V 22 W
1.30 GHz @ 1.388 V 22 W
1.20 GHz @ 1.180 V 12 W
1.10 GHz @ 1.180 V 12 W
1.00 GHz @ 1.004 V 7 W
900 MHz @ 1.004 V 7 W
600 MHz @ 0.956 V 6 W
600 MHz @ 0.844 V 4 W

http://intel.com/design/mobile/datashts/25261202.pdf
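(To put that TDP table in perspective, here's a hedged sketch of how much the SKU choice alone swings worst-case runtime. The TDP figures are from the table above; the battery capacity and the non-CPU platform draw are assumptions, not datasheet values.)

```python
# Pentium-M TDP figures from the table above (MHz -> watts).
# Battery capacity and non-CPU platform draw are assumed for illustration.

tdp = {1700: 24.5, 1600: 24.5, 1500: 24.5, 1400: 22.0, 1300: 22.0,
       1200: 12.0, 1100: 12.0, 1000: 7.0, 900: 7.0}

BATTERY_WH = 48.0     # assumed pack capacity
PLATFORM_W = 14.0     # assumed display + disk + chipset draw

def worst_case_hours(mhz):
    """Runtime with the CPU pinned at its TDP."""
    return BATTERY_WH / (tdp[mhz] + PLATFORM_W)

for mhz in (1700, 1200, 900):
    print(f"{mhz} MHz: {worst_case_hours(mhz):.2f} h")
```

Even in this crude model the 900 MHz part nearly doubles worst-case runtime over the 1.7 GHz part, which is why the low-voltage SKUs matter more than their clock deficit suggests.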
 