ULV Celeron M vs. VIA CPUs

George Macdonald

While this is probably not that important to most people here, it has
been asked about in the past; I also thought it interesting that Intel
would commission the Tolly Group, which usually specializes in network
equipment evaluations, to do this comparison. There's a .PDF available
here: <http://www.tollygroup.com/DocDetail.aspx?DocNumber=205107>
which I assume is publicly available; I have a subscription to Tolly's
e-mail newsletter but I don't think that gives any special access.

At any rate, the ULV Celeron outclasses all 3 VIA CPUs, which only
have 64KB of L2 cache each vs. 512KB for the Celeron M, and the VIA CPUs
failed to run the SysMark 2004 and WebMark 2004 benchmarks "due to
architecture reasons", though they don't seem to say what those are.
VIA's Nehemiah also failed to complete SPECint_base2000 - again, no
stated reason. <shrug> Anybody here know what the reasons for failure
are?
 
While this is probably not that important to most people here, it has
been asked about in the past; I also thought it interesting that Intel
would commission the Tolly Group, which usually specializes in network
equipment evaluations, to do this comparison. There's a .PDF available
here: <http://www.tollygroup.com/DocDetail.aspx?DocNumber=205107>
which I assume is publicly available; I have a subscription to Tolly's
e-mail newsletter but I don't think that gives any special access.

At any rate, the ULV Celeron outclasses all 3 VIA CPUs, which only
have 64KB of L2 cache each vs. 512KB for the Celeron M, and the VIA CPUs
failed to run the SysMark 2004 and WebMark 2004 benchmarks "due to
architecture reasons", though they don't seem to say what those are.
VIA's Nehemiah also failed to complete SPECint_base2000 - again, no
stated reason. <shrug> Anybody here know what the reasons for failure
are?

Who cares about most people, anyway. They're just not that important,
when you get right down to it. ;-).

If you scale up by a factor of 15 to bring the 7 watts up to Itanium
levels, you get an impressive SPECfp score of 6000. In kvetching
about IBM's Blue Gene, I wished that they had used a low-power x86
chip.
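As a rough sanity check on that back-of-the-envelope scaling, here's a
minimal sketch in Python. The ~7 W figure and the factor of 15 are the
post's own assumptions, the implied per-chip SPECfp (~400) just falls
out of them, and linear scaling is of course a gross idealization:

# Naive linear scaling of the ULV Celeron M numbers quoted above.
# All figures are assumptions from the post, not measured data.
ulv_power_w = 7.0               # rough ULV Celeron M power, per the post
scale = 15                      # factor used to reach Itanium-class power
specfp_per_chip = 6000 / scale  # implied per-chip SPECfp score (~400)

scaled_power_w = ulv_power_w * scale
scaled_specfp = specfp_per_chip * scale

print(f"{scale} x {ulv_power_w:.0f} W = {scaled_power_w:.0f} W "
      f"-> naive SPECfp ~ {scaled_specfp:.0f}")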

Don't know about Nehemiah's architectural oddities.

For all the world, it looks like the Celeron M 600MHz would compare
favorably to the Tualatin Celeron 1.3GHz:

http://www.digit-life.com/articles2/roundupmobo/via-c3-nehemiah.html

So, I wonder how much these things cost? Everything soldered down,
$100 ITX sounds completely doable for a fanless PC. How come nobody's
selling one?

RM
 
George said:
At any rate, the ULV Celeron outclasses all 3 VIA CPUs, which only
have 64KB of L2 cache each vs. 512KB for the Celeron M, and the VIA CPUs
failed to run the SysMark 2004 and WebMark 2004 benchmarks "due to
architecture reasons", though they don't seem to say what those are.
VIA's Nehemiah also failed to complete SPECint_base2000 - again, no
stated reason. <shrug> Anybody here know what the reasons for failure
are?

With such low performance, it wouldn't surprise me if the
"architecture reasons" were simply that the applications they were
testing refused to load up at all.

Interestingly, I was listening to a podcast on ZDNet the other day, and
they were interviewing a Windows XP Embedded programmer. He said his
application runs on VIA chips because they don't want any moving
mechanical parts in their computer systems whatsoever, not even hard
drives or fans. One interesting thing he noted about the VIA chips is
that they seem to spread their workload out much more smoothly. He
compared it against an old P3, and said that the CPU meter
will show long idle periods followed by huge spikes on the P3, but for some
reason the VIA C3 seems to have a constant amount of workload with hardly
any peaks. Perhaps the instruction timings are different on the C3,
and therefore programs which rely on instruction timing don't get their
expected values on C3s?

Yousuf Khan
 
Interestingly, I was listening to a podcast on ZDNet the other day, and
they were interviewing a Windows XP Embedded programmer. He said his
application runs on VIA chips because they don't want any moving
mechanical parts in their computer systems whatsoever, not even hard
drives or fans.

Not at all surprising; the failure rate on those components tends to
be significantly higher than for solid-state components, especially if
you're operating in some less-than-ideal environments.
One interesting thing he noted about the VIA chips is
that they seem to spread their workload out much more smoothly. He
compared it against an old P3, and said that the CPU meter
will show long idle periods followed by huge spikes on the P3, but for some
reason the VIA C3 seems to have a constant amount of workload with hardly
any peaks. Perhaps the instruction timings are different on the C3,
and therefore programs which rely on instruction timing don't get their
expected values on C3s?

The VIA C3 is a mostly (completely?) in-order processor, so while this
hurts its performance a bit, it does tend to make the chip a bit more
predictable and steady. With the P3 you're looking at an out-of-order
processor. This may result in more idle periods where the processor
is waiting for data from main memory and then periods of higher peaks
where the chip is able to run more instructions at once.

Of course the other, and much simpler, explanation is simply that the
P3 does all its work much more quickly than the C3. Where the C3 is
constantly having to work at a steady pace just to keep up, the PIII
can simply zip through the work and then sit around waiting for the
next batch to come through.
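Just to illustrate that second idea with a toy sketch (all numbers are
made up for illustration, not measurements of either chip): assume the
same fixed batch of work arrives every 100 ms, one core retires 1 unit
of work per ms and the other 5 units per ms. The slow core stays busy
for most of the interval; the fast core finishes quickly and idles,
which is exactly the spiky pattern a CPU meter would show.

# Toy duty-cycle comparison: slow-but-steady core vs. fast-but-bursty core.
# All numbers are invented purely for illustration.
work_per_interval = 80.0    # units of work arriving every interval
interval_ms = 100.0

for name, rate in [("slow core", 1.0), ("fast core", 5.0)]:  # units/ms
    busy_ms = work_per_interval / rate
    utilization = 100.0 * busy_ms / interval_ms
    print(f"{name}: busy {busy_ms:.0f} ms of {interval_ms:.0f} ms "
          f"-> {utilization:.0f}% utilization, idle the rest")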

Either way, it really shouldn't matter much since it's really the
maximum power consumption that counts more than anything else. It's
not like WinXP Embedded is a Real-Time OS, so it doesn't make much
sense to put timing constraints on your processing. The difference
between the chips is a bit interesting, but probably not all that
important.
 
Is nobody selling one, or is nobody selling one in the U.S. of A? ;-)

Morris Minors never sold here. Sold very well indeed overseas. For
instance. ;-)
I haven't looked into the details of the "third-world" low-priced
computer efforts. Why would anybody be doing such a thing as a
humanitarian/social enterprise when commercial interests have it all
taken care of?

In any case, I think all such efforts will be overtaken by events, as
fans and moving parts generally disappear from computers and they
become truly portable (I hesitate to say wearable, but it's probably
appropriate).

I don't really understand what's holding up events. The chips we're
used to talking about here are dinosaurs, except for server
applications and conceivably for hard core gamers and hackers. Even
for the latter group, I think that boxes that sit on the desk or on
the floor are going to be regarded as uncool.

RM
 
Tony said:
Either way, it really shouldn't matter much since it's really the
maximum power consumption that counts more than anything else. It's
not like WinXP Embedded is a Real-Time OS, so it doesn't make much
sense to put timing constraints on your processing. The difference
between the chips is a bit interesting, but probably not all that
important.

No, but it does impact the peak power consumption requirements. With a
peaky CPU, you have to design for worst-case heat conditions, which
would be much higher than for a steady-state CPU.

Yousuf Khan
 
No, but it does impact the peak power consumption requirements. With a
peaky CPU, you have to design for worst-case heat conditions, which
would be much higher than for a steady-state CPU.

I don't think the issue is so much a "peaky" CPU as a CPU
that isn't well matched to the application.
 
No, but it does impact the peak power consumption requirements. With a
peaky CPU, you have to design for worst-case heat conditions, which
would be much higher than for a steady-state CPU.

Actually no, there's no reason to believe that a CPU that operates at
a steady level will have any lower (or higher) peak power consumption
than one that does all its work in a short period of time and then
sits idle. That might be the case in certain specific situations,
but to say that as a general rule would be a gross oversimplification.
 
Actually no, there's no reason to believe that a CPU that operates at
a steady level will have any lower (or higher) peak power consumption
than one that does all its work in a short period of time and then
sits idle.

Sorry Tony, but you don't make sense here. If all the work for the day is
done in one second, then the processor must be able to dissipate the energy
for the day's work in one second. The *peak* power here corresponds to the
whole day's energy delivered in a second rather than spread over a day.
That might be the case in certain specific situations,
but to say that as a general rule would be a gross oversimplification.

No, that is the *general* solution. It's always better to average power
over a longer time. Add to this the fact that a processor that needs this
peak performance may also need more advanced technology, and things get
worse rapidly.
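A quick worked example of the arithmetic (hypothetical numbers, just to
show the point): the same total daily energy delivered in a shorter
window means proportionally higher peak power, which is what the heatsink
and power delivery have to be sized for.

# Same daily energy budget, different duty cycles -> very different peak power.
# Numbers are hypothetical, purely to illustrate the arithmetic.
daily_energy_j = 7.0 * 86400        # e.g. a 7 W average sustained over a day

for label, active_s in [("spread over the day", 86400.0),
                        ("compressed into 1 hour", 3600.0),
                        ("compressed into 1 second", 1.0)]:
    peak_w = daily_energy_j / active_s
    print(f"{label}: peak dissipation ~ {peak_w:,.0f} W")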
 
Keith said:
I don't think the issue is so much a "peaky" CPU as a CPU
that isn't well matched to the application.

What do you mean? Give an example.

Yousuf Khan
 
What do you mean? Give an example.

The CPU isn't "peaky"; rather, the application doesn't need the full horsepower
of the CPU, so it wastes more power. A high-performance CPU will use
more power per op than a turkey. If the additional compute power isn't
needed, that CPU is a poor choice for the application. Embedded folks
*should* understand this.

It has nothing to do with the processor being "peaky"; rather, the processor
is wasting power because it was poorly chosen.
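To put some entirely hypothetical numbers on the "power per op" point:
even though the faster chip finishes sooner, its energy per operation
can still be worse, and that's what matters when the extra throughput
isn't needed. Neither row below is real silicon.

# Hypothetical energy-per-operation comparison; illustrative numbers only.
# (chip, active power in W, throughput in millions of ops per second)
chips = [("low-power 'turkey'", 7.0, 600.0),
         ("high-performance chip", 30.0, 1500.0)]

for name, power_w, mops in chips:
    nj_per_op = power_w / mops * 1000.0   # nanojoules per operation
    print(f"{name}: {nj_per_op:.1f} nJ/op")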
 