PII vs PIII

  • Thread starter: Gregory L. Hansen
Because there is no reason for a 10 second delay. It wasn't due to
having a single CPU unless the task that was running was improperly
assigned a high or time-critical priority. Tasks that are actually
time-critical, like video capture or CDRW/DVD burning, don't even come
near the delay you mention. If that specific software is THAT bad,
you need to either reassign its priority, or replace it if possible.

Well, if I need to replace the software, I'm all for it.

That means I'll throw out the Linux kernel, X windows, and a variety of
others. I'm sure there are better alternatives.

You haven't followed the LKML lately, have you? The issue of
interactivity and responsiveness on single-CPU machines is something that
they're trying really, really hard to get to an acceptable level. Now
Linux is better at this than any other desktop OS I've tried, but it still
has problems.

Up until now, people have had to make do with hacks, tweaks, and
"optimizations" (like adjusting the priority of X) which have notable
detrimental side-effects as well. They're working really, really hard to
try and make that unnecessary.

For me, on my dual-CPU machines, they're not necessary; the machine
doesn't get unresponsive. It's really that simple.

steve
 
Steve Wolfe said:
I have a dual Pentium 133 that is still *very* useable as a desktop, be
it under Linux or NT. A Pentium 233 would not be as usable.

For the last time, Lane, "usable" does not mean "will finish first".

If you haven't figured that out already, you're a blind idiot. If you
HAVE figured it out already, you're a weak-minded, dishonest idiot for
pretending not to.

steve

So you're going to take better than a 50 percent hit on just about every
application just so you can run something in the background a little
smoother. Why on earth would anybody believe what you're saying?

And you're calling me an idiot.

Lane
 
Steve Wolfe and SIOL simply speculate why a dual processor
machine would be faster. Without numbers, their reasoning
is only speculation. This machine happens to be a 486DX2-66.
It has been more responsive than many 200 MHz Pentium
machines. Explain that? After all that is how I *feel* -
therefore it must be true!

Let's say we are compiling for 2 to 5 minutes. One 300 MHz
processor ends up with most of the load of a single thread
process. Now we press a key. Keypress is a high priority
task. In a dual processor system, one processor (either one)
must stop what it is doing and respond to that higher priority
task now. Either processor will take the same time to perform
that response. Response takes longer (compared to a faster
single processor system) because each dual processor is
slower. But one higher speed CPU would execute the same number of
instructions to stop compiling and respond to that higher
priority task. Which is going to respond to that high
priority task quicker? The faster processor in the single CPU
system. Slower processors in a dual system simply take longer
to respond to that interrupt.

In a dual processor system, same number of instructions must
be processed (either in CPU doing the compiling or in other
more idle CPU). But since each CPU is slower, it takes longer
to respond to that human action - a keystroke. Single
processor system using faster processor will respond faster to
interrupt of that higher priority task.
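The arithmetic behind this argument can be sketched in a few lines. This is a toy model of the claim only, under its own simplifying assumptions: a fixed number of instructions must run to preempt the current task and service a keypress, and a CPU retires a fixed number of instructions per clock. The instruction count and IPC are hypothetical figures, not measurements.

```python
# Toy model of the single-vs-dual response-time argument.
# HANDLER_INSTRUCTIONS is a hypothetical cost of "stop compiling and
# service the keypress"; real costs vary with cache and OS state.

HANDLER_INSTRUCTIONS = 30_000

def response_us(clock_mhz, instructions=HANDLER_INSTRUCTIONS, ipc=1.0):
    """Microseconds to run the handler on one CPU at the given clock."""
    return instructions / (clock_mhz * ipc)  # cycles / (cycles per us)

single_600 = response_us(600)  # one fast CPU preempts, then services
dual_300 = response_us(300)    # either slower CPU services the keypress

print(f"600 MHz single: {single_600:.0f} us")  # -> 50 us
print(f"300 MHz dual:   {dual_300:.0f} us")    # -> 100 us
```

Under this model the 600 MHz CPU indeed services the interrupt in half the time, which is the whole argument. Note what the model leaves out: on the dual system an idle CPU may pick up the keypress without preempting anything at all.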

Steve and SIOL cannot argue with any of this because it uses
the same type of speculative reasoning that they use. IOW the
only thing we can say for certain is that Steve and SIOL
*feel* their dual processor systems are faster - without doing
a double blind test - without any numbers - and without any
good solid facts to prove their claim.

We can say they *feel* the dual processor system is faster.
But that is all we can say based upon facts they have
provided. They speculated. When a number is provided - ie a
10 second response - well that number says something else is
wrong - as Kony noted. A claim that dual processors increase
response results in a completely different conclusion as soon
as a number is provided - ie 10 seconds. Just demonstrating
again why claims must include solid facts such as numbers.

Junk science is used to claim, for example, that a dual 300
MHz system will be more responsive compared to a single 600
MHz system. And since neither Steve nor SIOL have included
other critical information, then we really don't know about
other essential features that more affect overall system
performance. For all we know, the motherboard in that dual
processor machine was better designed than motherboard in a
faster single processor system. Where are essential
specifications such as bus speeds, what peripherals are on
which type of bus, delays intentionally installed to
compensate for design weaknesses, memory timing and type,
amount of cache - primary and secondary, video memory, video
processor, etc. Just more missing facts that make
'speculation' of which is faster simply 'subjective'. Just
another reason why no numbers means SIOL and Steve Wolfe
cannot make fully honest claims. Why they instead use junk
science reasoning.

I asked for a simple 'best fact' to support his claim.
Instead Steve again posted a claim that he previously posted a
claim. Again he did not post that one best fact - which is
consistent among junk science posters. They fear to post
something that can be carefully scrutinized. Steve failed to
cite a good honest number to support his *feelings* - in
response to a post that asks only for that one best fact. As
noted previously, this deceptiveness is what a junk scientist
does. Avoids posting scientific facts and numbers because
what he perceives alone is proof enough.

Above is posted a fundamental theory that says Steve and
SIOL are deceiving to us. However unlike their junk science
reasoning, I will not claim that the above theory proves they
are wrong. I will simply say that the above theory
demonstrates their ideas can be flawed AND that we need
numbers. Posted is the theory that agrees with what I have
seen - the faster single processor system is more responsive
AND executes everything faster.

Look at how many times they have posted - and not one
numerical fact to demonstrate their claims. They demonstrate
junk science reasoning. My experience 'proves' they are
wrong. Stalemate. But then I am not trying to make a claim
without numbers and science facts.

I really don't care whether the dual processor system is
more responsive. I do care when junk science reasoning is
being promoted as a replacement for honesty - a claim based on
scientific principles - including numbers. They repeatedly
post a claim without numbers - or anything else to demonstrate
their claim. Lane Lewis has quite correctly challenged
their 'facts' as only speculation based upon human perception.
 
w_tom said:
Steve Wolfe and SIOL simply speculate why a dual processor
machine would be faster.

No. "SIOL" is only saying that at today's CPU speeds, SIOL's hunger
for raw speed was satisfied long ago and he is going after responsiveness
rather than pure speed these days.
Without numbers, their reasoning
is only speculation. This machine happens to be a 486DX2-66.
It has been more responsive than many 200 MHz Pentium
machines. Explain that? After all that is how I *feel* -
therefore it must be true!

If it works that way under XP or Linux & XWindows & KDE 3.1.4 and lets you
get the job done without waiting for the machine, then certainly :o)

Let's say we are compiling for 2 to 5 minutes. One 300 MHz
processor ends up with most of the load of a single thread
process. Now we press a key. Keypress is a high priority
task. In a dual processor system, one processor (either one)
must stop what it is doing and respond to that higher priority
task now. Either processor will take the same time to perform
that response. Response takes longer (compared to a faster
single processor system) because each dual processor is
slower. But one higher speed CPU would execute the same number of
instructions to stop compiling and respond to that higher
priority task. Which is going to respond to that high
priority task quicker? The faster processor in the single CPU
system. Slower processors in a dual system simply take longer
to respond to that interrupt.

1. Ever heard of CPU affinity? When you start new tasks, the kernel gets to
decide what priority new jobs get and which CPU gets to run them. It
doesn't need to be a random choice.

2. With a two or more CPU setup, one CPU can get all those keypresses under
the standard 18 ms interrupt while the other hangs a "do not disturb" sign and churns
away at max speed.

3. When you have many tasks and many of them demand real-time reaction, the CPU
has to switch between tasks very frequently. A task switch costs microseconds.
When the frequency of task switches rises, a considerable % of CPU power goes just
to task switching. In the 2.6 kernel, IIRC, there is a setting for the preferred kernel
clock for task switching. A higher clock means smoother multitasking, a lower one
means smaller CPU overhead.
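Point 3 is easy to put into numbers. A minimal back-of-envelope sketch, assuming a hypothetical fixed per-switch cost (real costs vary with cache and TLB state):

```python
# Fraction of one CPU-second consumed purely by task switching,
# as the switch frequency rises. cost_us is an assumed figure.

def switch_overhead(switches_per_sec, cost_us=5.0):
    """Fraction of CPU time spent on context switches."""
    return switches_per_sec * cost_us / 1_000_000

for hz in (100, 1_000, 10_000, 100_000):
    print(f"{hz:>7} switches/s -> {switch_overhead(hz):.2%} overhead")
```

At 100 switches/s the overhead is negligible, but at 100,000 switches/s half the CPU goes to switching alone, which is the trade-off behind the kernel-clock setting mentioned above.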


For the bazillionth time, responsiveness is not strictly the same as speed. I
don't really care about the machine's response to a keypress as long as it is
under 100 ms or so. Same goes for mouse movement etc. There is some time
allowed for the operation. If the operation is done within this time, the machine
feels responsive, otherwise not.
It doesn't make particular sense to strive for a response any faster
than that, since the user won't notice anyway, but it certainly should stay
inside some time limits...

In a dual processor system, same number of instructions must
be processed (either in CPU doing the compiling or in other
more idle CPU). But since each CPU is slower, it takes longer
to respond to that human action - a keystroke. Single
processor system using faster processor will respond faster to
interrupt of that higher priority task.

You don't seem to understand modern microprocessors. These are not the same
as microcontrollers. One instruction on a microcontroller takes a small,
defined number of clocks. One instruction on an Athlon, P3, P4 etc. can take
anywhere from the theoretical minimum number (usually one) to several hundred
clock cycles, heavily depending on circumstances.

CPU speed is not nearly like water in a bucket - you can't just add and
subtract it so easily.
The problem with response time is NOT CPU speed. If it had been, even a 486
would be more than enough to have a responsive system - just as you have
observed before.
The problem is that the CPU can't always drop what it's doing just to react to your
keypress or move a mouse pointer, open/close a window etc. Sometimes it can't
because it's executing a task that just has to be done right now (like
preparing a DMA transaction for CDR writing etc.) and sometimes it can't
respond to many requests without crapping itself just with task switching.

A dual CPU system can be much slower (in terms of absolute speed - I'm not
talking about responsiveness here) or it can be much faster (more than 2x)
than a uni-CPU system.

A uni-CPU system will be faster for tasks that are not multithreaded, or tasks that are
neatly multithreaded so that they can be executed in linear order or at
least in bigger chunks and cannot be executed in parallel. Since they tend
to use latest-generation RAM, they will also excel with very memory-intensive
applications. Those are the situations where duallies can't take
advantage of the extra CPU and where this CPU even slows them down (cache
synchronisation - "snooping" etc.)

But in situations that demand parallel execution of many threads and tasks,
a duallie can be much faster, despite all.
If the number of tasks is not too high and the L2 of both CPUs is big enough not to
get thrashed, a dual CPU machine can beat a single by more than 100%. The smaller
task-switch frequency will also bring its benefits.
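The two regimes described here are roughly what Amdahl's law predicts: a mostly-serial workload barely benefits from a second CPU, while a well-parallelized one approaches 2x. A minimal sketch (superlinear gains, such as avoiding cache thrash or interrupt floods, fall outside this simple model):

```python
# Amdahl's law: speedup from n CPUs given the fraction of the work
# that can run in parallel. Fractions below are illustrative.

def amdahl_speedup(parallel_fraction, n_cpus=2):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cpus)

print(f"10% parallel: {amdahl_speedup(0.10):.2f}x")  # near-serial task
print(f"95% parallel: {amdahl_speedup(0.95):.2f}x")  # nicely threaded
```

A 10%-parallel job gains almost nothing from the second CPU, while a 95%-parallel one gets close to the 2x ceiling; beating 2x requires effects this formula doesn't model.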

Steve and SIOL cannot argue with any of this because it uses
the same type of speculative reasoning that they use. IOW the
only thing we can say for certain is that Steve and SIOL
*feel* their dual processor systems are faster - without doing
a double blind test - without any numbers - and without any
good solid facts to prove their claim.

No, I "feel" my machine as more responsive. Not strictly "faster".

Just as I said, I really couldn't care less if kernel compilation takes two
or five minutes on my machine.
Or how long Gentoo's "emerge -u world" takes - as long as it is inside some
acceptable limits AND as long as I don't see it.

Do I really need numbers to see, for example, that on my newest
notebook, a Toshiba Satellite 5200-801 (2 GHz P4, 512 MB DDR, GeForce4 Go etc.),
I can't watch a movie, listen to music or work under KDE smoothly while
Gentoo does its updates?
Junk science is used to claim, for example, that a dual 300
MHz system will be more responsive compared to a single 600
MHz system.

But I'm not trying to be a scientist here. I was just talking about my
experiences. I don't want to invest money into making science out of this.
I had a Tualatin [email protected] as the office machine that was meant for
tasks from office work to CD burning and printing; besides that it served as a
firewall and router.

Now the Tualatin board is in the drawer and in its place works a dual P3-1Gig.
It works better. I can burn CDs on all three units and experience full
bandwidth under samba or ADSL and even print, and never ruin a single
CD in the process.

What instrumentation do I need to "prove this scientifically" ?


And since neither Steve nor SIOL have included
other critical information, then we really don't know about
other essential features that more affect overall system
performance. For all we know, the motherboard in that dual
processor machine was better designed than motherboard in a
faster single processor system.

Motherboard differences can't bring or take that much from performance. But
just to keep you happy, here it goes:

The uni CPU was a 1.3 GHz Tualatin, overclocked from 100 MHz FSB to 133 MHz FSB.
The board was QDI's 10T with a VIA chipset. It had 1.5 GB SDRAM. I used an
extra IDE card (Promise Ultra 100 Tx2) to connect four 180 GB IDE disks
(IIRC Maxtor). The graphics card is IIRC an nVidia GeForce4 MX400.
And 3 TEAC CD-W540E units. There used to be four, but one died recently.
Oh, yes, the machine has had a couple of Ethernet cards in it (I think 3), since
it also works as a firewall and router.


When I changed it to a duallie, I pretty much changed only the board and CPUs.
Everything else is the same. Even the RAM.
The dual system runs at its native speed of 133 MHz FSB.

Above is posted a fundamental theory that says Steve and
SIOL are deceiving to us.

Sure. And all the dual boards sold are just part of a dirty plot to **** up
the consumer.
However unlike their junk science
reasoning, I will not claim that the above theory proves they
are wrong. I will simply say that the above theory
demonstrates their ideas can be flawed AND that we need
numbers.

But you have numbers. Just look at the benchmarks.
We are just saying that benchmarks don't cover the feeling of
responsiveness.
You have to experience it for yourself, then decide.
If your tasks don't need it, you won't understand why people are buying this
stuff.
If your tasks need it, you won't use a single CPU machine ever again.
You'll even start looking for a dual CPU notebook...
I really don't care whether the dual processor system is
more responsive. I do care when junk science reasoning is
being promoted as a replacement for honesty - a claim based on
scientific principles - including numbers. They repeatedly
post a claim without numbers - or anything else to demonstrate
their claim. Lane Lewis is has quite correctly challenged
their 'facts' as only speculation based upon human perception.

So, what should I do? All I care about is the performance under load that I'm
experiencing.
I don't care for standard benchmarks. 3DMarks etc. don't mean crap for the
role this machine plays.

Should I mount back the Tualatin with its board and roast a couple of
hundred CDs under full load just to
"scientifically prove a point"? No, thanks.

Why bother ? I'm not trying to sell you anything...
 
IOW the
only thing we can say for certain is that Steve and SIOL
*feel* their dual processor systems are faster - without doing
a double blind test - without any numbers - and without any
good solid facts to prove their claim.

You know what? You win. You and Lane both. Really.

You ask me for numbers, I give them to you, you either don't believe them
or ignore them. I ask you for numbers, you come up with *none*. Then you
whine like a little girl that *I* didn't give you any numbers.

You ask for reasons, I give them, you ignore it. I ask you for reasons,
you give none.

You come up with supposed direct quotes which never existed, then call
others "junk scientist".

You're right, I can't argue against you. Because you're an idiot.

steve
 
A dual CPU system can be much slower (in terms of absolute speed - I'm not
talking about responsiveness here) or it can be much faster (more than 2x)
than a uni-CPU system.

I'm sure that they will cry that a more than 2x speedup isn't possible.
Here's an application where it is.

Take a cheap gigabit NIC, throw it in a single machine, and try sending
data at full gigabit speeds to an application on the box for processing.

With a cheap gigabit NIC (no interrupt coalescing), the machine will be
brought to its knees by the sheer flood of interrupts. Really. And if you
don't believe it, go talk with the clustering guys about why gigE is rarely
used in serious clusters.

So, because of the flood of interrupts, the CPU can only do a very small
amount of processing with the app receiving the data - say, one-tenth to
one-fifth of its normal ability.

Now, use an SMP machine. One CPU gets hammered while servicing the
gigabit NIC, the other CPU's free to run the app receiving the data, so you
get next to its full ability for processing data - say, 80%.

In this case, the jump (from 20% to 80%) represents a 4x speedup. The
catch is that the CPUs in the SMP system don't magically outperform those
of the UP machine, but rather, one CPU takes the whipping and allows the
other to actually perform to its ability.
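The arithmetic of that jump is worth making explicit. Using the post's own illustrative 20%/80% figures (not measurements), the application-visible speedup exceeds 2x even though no CPU got faster:

```python
# Application throughput under an interrupt flood, per the example:
# on a uniprocessor the app shares the CPU with the IRQ storm; on SMP
# one CPU absorbs the NIC interrupts and the other runs the app.

up_app_capacity = 0.20   # uniprocessor: app gets ~20% of one CPU
smp_app_capacity = 0.80  # SMP: second CPU is ~80% free for the app

speedup = smp_app_capacity / up_app_capacity
print(f"app throughput speedup: {speedup:.0f}x")
```

The second CPU contributes at most "1 CPU's worth" of raw cycles, but by isolating the app from the interrupt load it more than doubles the useful throughput.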

Oh, drat. Now I'll just be called a "Junk Scientist" again, because my
views didn't coincide with tom's.

steve
 
If you haven't figured that out already, you're a blind idiot.
So you're going to take better than a 50 percent hit on just about every
application just so you can run something in the background a little
smoother. Why on earth would anybody believe what you're saying?

And you're calling me an idiot.

Yes, I am. If you're saying that I take a hit greater than 50% on every
application, you're an idiot.

Here you go, Lane, your big chance: Prove it. Give me some numbers.
Give me some benchmarks in real-world situations. I double-dog-dare you.

steve
 
SIOL - I was probably building computer systems (at
component level with soldering iron) before you were even
born. Apparently you don't even understand the priority
system of task execution. High priority task gets processed
immediately at the expense of a compiler program. And no
processor puts up a "do not disturb" sign. Task is only
executed, at most, for a prescribed amount of time - and then
another task is taken up. Or current task is immediately
interrupted to perform a higher priority task. No task can
"camp out" on a microprocessor as in SIOL's Point #2. There
is even this little thing called time slicing. Also nowhere
was mention of "random choice". Where out of the blue did
"random choice" come from?

Because you don't fully understand how a preemptive
multitasking OS works, then you think a dual processor system
should be more responsive. If multiprocessor systems worked
as you described, then yes, the multiprocessor system would be
more responsive. But preemptive MT does not work as
described. Processors are constantly taking up new tasks even
when the current task is not completed. High priority or real
time tasks - that make for system interface responsiveness -
are processed immediately. Difference: a faster processor
means that real time task will be picked up and completed
quicker - which is why a 600 MHz processor will finish
processing a real time task faster than two 300 MHz
processors. It is why that 600 MHz machine can be more
responsive.

But again, this is speculation. You have not provided
numbers for your claims AND not even provided a research
study. Therein lies the problem. Whether dual processor
system is more or less responsive is irrelevant. You have
again only provided speculative theories; some not even based
in how preemptive MT works. Then you claim those speculative
theories prove your personal, subjective, observation. You
have only an opinion which you misrepresent as scientific
fact. It is what we call junk science reasoning.

I am not saying the single high speed CPU is more or less
responsive than slower, multiprocessor system. I am saying
you do not have numbers or even a study to make your claims.
Even worse, because you could not support your claims with
numbers, then you posted insults at Lane Lewis, et al.
Personal insults are a symptom of junk science reasoning.

Your only proof is your emotional opinion of how you 'feel'
the dual processor system works. And you did not even
demonstrate that both single and dual processor systems are
equivalently designed - have same memory capacity - same bus
speeds - same video subsystem. Just more reasons why what you
'felt' is actually nothing more than speculation.

BTW, experience with current technology preemptive
multitasking OSes would indicate that 486 CPU cannot run XP.
IOW understanding how preemptive MT OSes works was not
demonstrated, AND experience is lacking. SIOL is also not
familiar with hardware required for an XP system. That again
is my point. Insufficient background (and numbers) to make
those claims. Conclusions are based on junk science
reasoning. Reasoning only good enough to express a personal
opinion - a relationship unique to that one person's machines.
 
Steve said:
I'm sure that they will cry that a more than 2x speedup isn't possible.
Here's an application where it is.

Take a cheap gigabit NIC, throw it in a single machine, and try sending
data at full gigabit speeds to an application on the box for processing.

With a cheap gigabit NIC (no interrupt coalescing), the machine will be
brought to its knees by the sheer flood of interrupts. Really. And if you
don't believe it, go talk with the clustering guys about why gigE is rarely
used in serious clusters.

So, because of the flood of interrupts, the CPU can only do a very small
amount of processing with the app receiving the data - say, one-tenth to
one-fifth of its normal ability.

Now, use an SMP machine. One CPU gets hammered while servicing the
gigabit NIC, the other CPU's free to run the app receiving the data, so you
get next to its full ability for processing data - say, 80%.

Funny you should mention this, Steve. I was about to post very similar
information about an hour ago, but decided against prolonging this
thread. However, here it is, just to prove your point.

Earlier today I tried a rather informal test on my dual 1GHz P-III
system with a GigE card in it. I hammered it with one gigabyte of data
coming in over the network and monitored the server with xosview.
During data transfer one CPU or the other was at greater than 90% CPU
utilization, usually 100%. (This is "system" time, not user
applications.) Just from servicing interrupts from the gigabit card.
Without a second CPU, the machine would have been crippled by the
interrupt flood.

Same thing happens in the other direction, too. Transmitting that data
causes CPU utilization to go through the roof.

Of course, this is just one example. I'm sure somebody will cry: "but
those are rare circumstances that'll lead to that situation". Sure.
But there are situations where the same theory applies. When you
consider all such situations, they are surprisingly common.
 
Steve Wolfe said:
Yes, I am. If you're saying that I take a hit greater than 50% on every
application, you're an idiot.

Here you go, Lane, your big chance: Prove it. Give me some numbers.
Give me some benchmarks in real-world situations. I double-dog-dare you.

steve

Nice try.
I of course never said that. What's your interpretation of "just about"?
Does that mean "every"?

It's time to end this when someone double-dog-dares me.

Lane
 
I'm sure that they will cry that a more than 2x speedup isn't possible.
Here's an application where it is.

Take a cheap gigabit NIC, throw it in a single machine, and try sending
data at full gigabit speeds to an application on the box for processing.

With a cheap gigabit NIC (no interrupt coalescing), the machine will be
brought to its knees by the sheer flood of interrupts. Really. And if you
don't believe it, go talk with the clustering guys about why gigE is rarely
used in serious clusters.

So, because of the flood of interrupts, the CPU can only do a very small
amount of processing with the app receiving the data - say, one-tenth to
one-fifth of its normal ability.

Now, use an SMP machine. One CPU gets hammered while servicing the
gigabit NIC, the other CPU's free to run the app receiving the data, so you
get next to its full ability for processing data - say, 80%.

In this case, the jump (from 20% to 80%) represents a 4x speedup. The
catch is that the CPUs in the SMP system don't magically outperform those
of the UP machine, but rather, one CPU takes the whipping and allows the
other to actually perform to its ability.

Oh, drat. Now I'll just be called a "Junk Scientist" again, because my
views didn't coincide with tom's.

steve

That might be a rather unique situation though; if I needed constant
high bandwidth on a LAN I'd get a more expensive Gbit adapter sooner
than a 2nd CPU/motherboard.


Dave
 
SIOL - I was probably building computer systems (at
component level with soldering iron) before you were even
born.

And that gives you some kind of seniority?
Just out of curiosity - what have you actually built by yourself?


Apparently you don't even understand the priority
system of task execution.

If there are not enough CPU cycles and programs demand impossible
combinations of CPU loads, then no prioritisation in the world can help you.

High priority task gets processed
immediately at the expense of a compiler program.

What a load of crap. When tasks get executed, the compiler program that
compiled them no longer has any say in what gets executed and how.

And no
processor puts up a "do not disturb" sign.

Even interrupts have their priority. Not every interrupt gets to be serviced
at the moment of arrival.
Thus the parallel - "do not disturb". I still think it's quite appropriate...


Task is only
executed, at most, for a prescribed amount of time - and then
another task is taken up. Or current task is immediately
interrupted to perform a higher priority task. No task can
"camp out" on a microprocessor as in SIOL's Point #2.

On Linux, AFAIK the kernel has these rights. It can reserve (due to I/O etc.) CPU
power at will/as needed.
On 2.6 this is somewhat remedied, I believe. This was the whole point of
having RTOS kernels - "normal" ones weren't deterministic enough...

There
is even this little thing called time slicing. Also nowhere
was mention of "random choice". Where out of the blue did
"random choice" come from?

I meant to say that a program can have a say about which CPU(s) it gets
executed on.
It need not strictly be the kernel's choice - which could be seen from an outside
observer's perspective as pseudorandom.
That is, not knowing the variables that affected the kernel's decision, it
would seem random...

BTW: What about time slicing? The first time I encountered it was when
fiddling with my first Sinclair QL; IIRC it was something like 1989. Sinclair
was so proud of its multitasking capabilities...
So, what is so magic about time slicing, or job scheduling as Sinclair liked
to put it?

Because you don't fully understand how a preemptive
multitasking OS works, then you think a dual processor system
should be more responsive. If multiprocessor systems worked
as you described, then yes, the multiprocessor system would be
more responsive. But preemptive MT does not work as
described. Processors are constantly taking up new tasks even
when the current task is not completed.

What is new about that? Sure they are. They are executing job after job
until the next scheduler interrupt. So what?


High priority or real
time tasks - that make for system interface responsiveness -
are processed immediately.

O.K. Let's say I do:

-burn a CDR
-watch a DivX
-do some window manipulations (resizing etc) with my mouse

To the system, all these tasks are high priority. So how does it decide what
to service first: the interrupt for filling the next few sectors to the CDR, dealing
with mouseclicks, or preparing the next few frames of MPEG4 for display?

Let's leave aside the extra bazillion processes with lower priority for a
moment and concentrate just on the "big ones..."

Difference: a faster processor
means that real time task will be picked up and completed
quicker - which is why a 600 MHz processor will finish

I don't fully understand this. A real time task can be processed in time or
not in time.
What do you mean to say, that with a fast P4 you can watch The Matrix in 15 min?

Or that the 600 MHz CPU will have extra time "to spare" in each timeslice?

Even if so (not necessarily - your OS can't always make sensible use of
this), a 600 MHz CPU does not switch between tasks 2x faster than a 300 MHz one. It
is 2x faster only with the data in cache. But in real life its L1 and L2
caches are full of tasks that compete for execution. And a task switch costs
quite some external accesses.
And DRAM doesn't get twice as fast every 18 months like CPUs allegedly do.
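That last point is easy to model. A minimal sketch, assuming a task switch splits into in-cache work (which scales with the clock) and DRAM accesses (which don't); the cycle and latency figures are hypothetical:

```python
# Why doubling the clock doesn't halve task-switch time: part of the
# cost is fixed DRAM latency (cache refills, TLB walks).

CORE_CYCLES = 2_000    # in-cache work per switch (scales with clock)
MEM_ACCESSES = 100     # refills that must go to DRAM
DRAM_NS = 60.0         # fixed DRAM latency per access, in ns

def switch_ns(clock_mhz):
    core_ns = CORE_CYCLES / clock_mhz * 1_000  # cycles -> nanoseconds
    return core_ns + MEM_ACCESSES * DRAM_NS

t300, t600 = switch_ns(300), switch_ns(600)
print(f"300 MHz: {t300:.0f} ns, 600 MHz: {t600:.0f} ns, "
      f"ratio: {t300/t600:.2f}x (not 2x)")
```

With these assumed figures the 600 MHz CPU switches only about 1.4x faster, not 2x: the DRAM-bound portion of the switch is a fixed floor that faster clocks cannot shrink.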

But again, this is speculation. You have not provided
numbers for your claims AND not even provided a research
study. Therein lies the problem. Whether dual processor
system is more or less responsive is irrelevant. You have
again only provided speculative theories; some not even based
in how preemptive MT works. Then you claim those speculative
theories prove your personal, subjective, observation. You
have only an opinion which you misrepresent as scientific
fact. It is what we call junk science reasoning.

I say that THIS WORKS FOR ME! Really, that is all I'm saying.
Just today I have burned some 250+ CDRs and printed some 400+ CD stickers.
Not ONE CD lost - on three burners, and with nice Ethernet traffic.
All this while doing other things on this machine. What more do I need?
Dhrystones, SiSoft Sandra etc.?
Why? I have tried a uni CPU board in this machine and I have tried dual. It
works better.
It works as well as I need it to! That is all that counts with real time
machines!

I don't run from numbers but getting them will cost me downtime and money.
For what ?
To get a Quake framerate ?

Thing does what it is supposed to do and all I'm telling you is that for me
dual CPU has its value.

I am not saying the single high speed CPU is more or less
responsive than slower, multiprocessor system. I am saying
you do not have numbers or even a study to make your claims.

Sure I do. Here are some numbers:

With the uni CPU Tualatin 1.7GHz I couldn't burn ONE CDR without ruining it. Now
I can burn a CDR ALWAYS.

Difference in performance is 100%- and let's not talk about a ratio...

How's that for a study ?

Even worse, because you could not support your claims with
numbers, then you posted insults at Lane Lewis, et al.
Personal insults are a symptom of junk science reasoning.

What insults? Can you find at least one?
How about his/her insults?
Your only proof is your emotional opinion of how you 'feel'
the dual processor system works. And you did not even
demonstrate that both single and dual processor systems are
equivalently designed - have same memory capacity - same bus
speeds - same video subsystem. Just more reasons why what you
'felt' is actually nothing more than speculation.

Some more crap from you. I DID say that the only thing I have exchanged was
just a board & CPU.
And I DID give a detailed explanation of the configurations.

Take another look and tell me what else do you want to know about
configurations ?

BTW, experience with current technology preemptive
multitasking OSes would indicate that 486 CPU cannot run XP.
IOW an understanding of how preemptive MT OSes work was not
demonstrated, AND experience is lacking. SIOL is also not
familiar with hardware required for an XP system.

"SIOL" was just putting someone else's claims about a 486 being responsive,
and therefore fast, to the test.
Your attention to details is a bit shallow.

That again
is my point. Insufficient background (and numbers) to make
those claims. Conclusions are based on junk science
reasoning. Reasoning only good enough to express a personal
opinion - a relationship unique to that one person's machines.

We can't all be blessed with your deeper understanding of the universe and
everything in it...
 
John-Paul Stewart said:
Steve Wolfe wrote:

Funny you should mention this, Steve. I was about to post very similar
information about an hour ago, but decided against prolonging this
thread. However, here it is, just to prove your point.

Earlier today I tried a rather informal test on my dual 1GHz P-III
system with a GigE card in it. I hammered it with one gigabyte of data
coming in over the network and monitored the server with xosview.
During data transfer one CPU or the other was at greater than 90% CPU
utilization, usually 100%. (This is "system" time, not user
applications.) Just from servicing interrupts from the gigabit card.
Without a second CPU, the machine would have been crippled by the
interrupt flood.

Same thing happens in the other direction, too. Transmitting that data
causes CPU utilization to go through the roof.

Of course, this is just one example. I'm sure somebody will cry: "but
those are rare circumstances that'll lead to that situation". Sure.
But there are situations where the same theory applies. When you
consider all such situations, they are surprisingly common.
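A quick back-of-envelope calculation (not from the post; it assumes standard 1500-byte Ethernet payloads, the usual framing overhead, and one interrupt per received frame with no coalescing) shows why a gigabit NIC can flood a CPU with interrupts:

```python
# Interrupt load from a gigabit NIC, assuming one interrupt per frame
# and no interrupt coalescing. 1538 bytes = 1500-byte payload plus
# Ethernet overhead (preamble, header, CRC, inter-frame gap).
LINE_RATE_BITS = 1_000_000_000   # 1 Gbit/s
FRAME_BYTES = 1538

frames_per_sec = LINE_RATE_BITS / (FRAME_BYTES * 8)
print(f"~{frames_per_sec:,.0f} interrupts per second at line rate")
```

That works out to roughly 80,000 interrupts per second, which is consistent with the near-100% "system" time seen in xosview above.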

I was lurking on comp.sys.sinclair and one guy who is doing firmware for a
TCP/IP layer on an Ethernet interface for the QL ran into an interesting
problem. IIRC the interface has sufficient buffer memory, so the CPU doesn't
have to poll it too often.

This is good since he uses the QL's interrupt, which gets executed every 20
ms. In terms of 100 Mbit Ethernet time, 20 ms is a long time to wait. But
since he had a deep enough buffer, he did not worry.

But then he ran into a problem: the Ethernet card (normally) understands only
Ethernet frames. To the card, TCP/IP is just data. But the TCP/IP protocol
demands an acknowledgement after each received frame. There can of course be
some window between the last received and last acknowledged frame, but in
practice it has been shown that no sender is willing to send frame after
frame and blindly wait more than 20 ms for the first acknowledgement to come.

So he had an implementation that ran perfectly correctly, it was just a
little slow. It could transfer some 50 packets per second with 1.5 kB each =
some 70 kB/s ;o)

Since the core of TCP/IP has to be done by the CPU, I was kind of expecting
this. It's bad enough on 100 Mbit, and it's got to be a killer on 1 Gbit. I
was constantly hoping that some standard solution might emerge- like an
Ethernet chip also understanding IP, for example- and taking care of
elementary but labor-intensive stuff...
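The throughput ceiling SIOL describes can be sanity-checked with a little arithmetic. The 20 ms poll interval and 1.5 kB frame size come from the post; the strict stop-and-wait (one frame per poll) model is an assumption:

```python
# Stop-and-wait model of the QL Ethernet stack: ACKs are only generated
# on a 20 ms polling interrupt, so at worst one frame completes per poll.
POLL_INTERVAL_S = 0.020    # QL interrupt fires every 20 ms
FRAME_PAYLOAD_B = 1500     # ~1.5 kB of data per frame

frames_per_sec = 1 / POLL_INTERVAL_S
throughput_kb_s = frames_per_sec * FRAME_PAYLOAD_B / 1000
print(f"{frames_per_sec:.0f} frames/s -> ~{throughput_kb_s:.0f} kB/s")
```

This gives 50 frames/s and about 75 kB/s, in the same ballpark as the ~70 kB/s reported.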
 
That might be a rather unique situation though, if I needed constant
high-bandwidth on a LAN I'd get a more expensive Gbit adapter as soon
as a 2nd CPU/motherboard.

Ah, but therein lies the tradeoff: The adaptors which reduce interrupt
floods, via interrupt coalescing, do it at the expense of increased
latency. In some applications, that's alright. In others, it's a killer.

Of course, you can go even higher in cost to use switches which support
jumbo frames, but then you're speaking of such high dollar amounts that you
might as well just use SMP machines to begin with. : )

steve
 
Since the core of TCP/IP has to be done by the CPU, I was kind of expecting
this. It's bad enough on 100 Mbit, and it's got to be a killer on 1 Gbit. I
was constantly hoping that some standard solution might emerge- like an
Ethernet chip also understanding IP, for example- and taking care of
elementary but labor-intensive stuff...

Between interrupt coalescing, jumbo frames, and SMP systems, there are
plenty of ways to make things work well. The interrupt coalescing will do
the trick, but does cost latency. Many good drivers allow for adjustments
of just when the coalescing kicks in. With SMP, of course, you can handle
interrupts without the problems, and jumbo frames let you do 64K packets,
which cuts your interrupt number by a factor of something like 40. However,
SMP systems and/or jumbo frames do cost more money, and interrupt coalescing
costs latency. But if you have the money to throw at it, the problem can be
solved easily. : ) All in all, people who have the hardware to throw at it
and do the right kernel tuning do get a full gigabit/second. Actually
*doing* something with that data is another question, processing and/or
storing 100+ megabytes/second is nothing to sneeze at.


steve
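Steve's factor-of-40 figure checks out if you take the 64K packets at face value; this is a rough sketch (note that common jumbo-frame MTUs are 9000 bytes, which would give a factor of about 6 instead):

```python
# Interrupt reduction from larger frames at a fixed data rate: the
# number of frames (and thus interrupts) scales inversely with frame size.
STD_FRAME_B = 1500          # standard Ethernet payload
SUPER_FRAME_B = 64 * 1024   # the 64K packets mentioned in the post

reduction = SUPER_FRAME_B / STD_FRAME_B
print(f"Interrupt count cut by a factor of ~{reduction:.0f}")
```

The result, a factor of roughly 44, agrees with the "something like 40" above.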
 
Nice try.
I of course never said that. What's your interpretation of "just about" ?
Does that mean "every" ?

It's time to end this when someone double-dog-dares-me.

Of course it is, this time *you* were asked to back up your statements.

steve
 