Hi Dick,
answering your last two posts....
First - yes, you are right - when there is always traffic, then the difference between sync and async is almost nothing.
But this is neither the case in all applications, nor is it stated in the thread starter's post.
It is like telling somebody "use snow chains, they are always better" without knowing where or when he drives.
Your "3 to 4 times better performance" - maybe I know what you mean.
Assume that the async event is fired by an interrupt. So how (in simple words) is this implemented?
You wait for something.
The system remembers the fact that you wait - then your thread is marked as "needs no CPU time".
When an interrupt occurs, the system checks its list of "waiters", and if a thread is found waiting for that specific thing,
this thread is marked as "needs CPU soon" (or is even scheduled immediately).
Anyhow, the waiting thread gets control - or, a little more precisely, the system prepares the execution environment for
that specific thread and a "thread switch" occurs.
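A minimal sketch of this mechanism in C (Win32) - hDataReady and producer are names I made up, and the second thread only plays the role of the interrupt source:

#include <windows.h>
#include <stdio.h>

/* Hypothetical event that stands in for "the thing we wait for". */
static HANDLE hDataReady;

/* This thread plays the role of the interrupt source. */
static DWORD WINAPI producer(LPVOID arg)
{
    (void)arg;
    Sleep(1000);           /* for a while there is nothing to deliver      */
    SetEvent(hDataReady);  /* "interrupt": the waiter becomes ready to run */
    return 0;
}

int main(void)
{
    hDataReady = CreateEvent(NULL, FALSE, FALSE, NULL); /* auto-reset, not signaled */
    HANDLE hProducer = CreateThread(NULL, 0, producer, NULL, 0, NULL);

    /* The waiting thread is taken off the CPU here ("needs no CPU time")
       and is scheduled again only when the event is signaled.            */
    WaitForSingleObject(hDataReady, INFINITE);
    printf("woken up - now consume the data\n");

    WaitForSingleObject(hProducer, INFINITE);
    CloseHandle(hProducer);
    CloseHandle(hDataReady);
    return 0;
}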
And you are right - all these things take "a lot of" time: searching for "waiters", context switching, and so on.
On older CPUs it sometimes took more time to switch the task than to do the work in the task.
OS designs also did their part to slow down context switching.
Interrupt handling had the same problem - it was time consuming - and "high speed systems" were therefore often designed in a
"polling style".
You wrote a book about serial communication - therefore I use the UART as an example, assuming that you know this piece of hardware.
Older UARTs (like the USART in the 8051) did not have a buffer. So you had to take a lot of effort (especially with slow CPUs) to communicate
at a higher bandwidth (baud rate). And if you really needed performance, polling was the solution, because interrupt handling was
significantly slower.
But newer UARTs have a buffer of some bytes - and those bytes make the difference.
Now you can reach the same speed with interrupts - if you do it right.
And here we are - back to the 3 to 4 times more performance.
When (even with buffered UARTs) you use an interrupt for every single byte, the performance is hardly better than with unbuffered
UARTs. But if you do it like it should be done - you win.
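To make that concrete, here is a sketch of "doing it right" at the interrupt level. UART_STATUS, UART_DATA, UART_RX_READY and ring_put() are invented names (no real chip, no real driver) - the point is only the loop: one interrupt, and the handler empties the whole RX FIFO before it returns.

/* Assumed, made-up register layout - replace with the real chip's registers. */
#define UART_STATUS   (*(volatile unsigned char *)0x40001000u) /* status register             */
#define UART_DATA     (*(volatile unsigned char *)0x40001004u) /* receive data register       */
#define UART_RX_READY 0x01u                                    /* "byte available" status bit */

extern void ring_put(unsigned char byte);  /* application ring buffer, assumed to exist */

void uart_rx_isr(void)
{
    /* Reading exactly one byte per interrupt gives away the advantage of the FIFO.
       Instead, keep reading while the FIFO still holds data.                       */
    while (UART_STATUS & UART_RX_READY) {
        ring_put(UART_DATA);
    }
}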
And how (always in async) should it be done?
You wait for the event (handle the interrupt) - and since the (time consuming) task switch has already taken place, you collect all you can
get in your handler.
So if I "WaitForObject" and something happens, I don't consume just one byte (or whatever it is) - I consume, and before I wait
again I check if there is maybe more to consume (see the sketch after the "Simple"/"Modern" comparison below).
By the way - modern systems implement "WaitFor..." in such a manner - this means an "extension" of the simple way I described above!
Simple:
You wait for something (call WaitFor...)
The system remembers the fact that you wait - then your thread is marked as "needs no CPU time".
When an interrupt occurs, the system checks.....
"Modern":
You wait for something (call WaitFor...)
At the entrance of WaitFor.... a check occurs whether there is already something ready to use.
In this case no task switch occurs - the function returns immediately to the caller.
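In Win32 terms, the "wait - read all available - wait again" pattern for a serial port could look roughly like this. hComm is assumed to be an already opened and configured COM port handle, and process_bytes() is a placeholder for the application - a sketch, not production code:

#include <windows.h>

void rx_loop(HANDLE hComm, void (*process_bytes)(const char *, DWORD))
{
    /* Non-blocking reads: with these timeouts ReadFile returns at once
       with whatever is already in the driver's buffer (possibly nothing). */
    COMMTIMEOUTS to = {0};
    to.ReadIntervalTimeout = MAXDWORD;
    SetCommTimeouts(hComm, &to);

    SetCommMask(hComm, EV_RXCHAR);

    char  buf[256];
    DWORD evt, got;

    for (;;) {
        /* Block until the driver reports received characters. */
        if (!WaitCommEvent(hComm, &evt, NULL))
            break;

        /* Drain everything that is available before waiting again. */
        do {
            if (!ReadFile(hComm, buf, sizeof(buf), &got, NULL))
                break;
            if (got > 0)
                process_bytes(buf, got);
        } while (got > 0);
    }
}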
So I don't believe that you can get 3 to 4 times better performance with polling.
Especially if the developer does his async part well (wait - "read" all available - wait again), you will IMHO be only
a few percent faster.
And the more things that are going on, the smaller your "profit" gets.
For example - we have an app which has to:
listen on a UDP socket, read a GPS device, read a handwriting recognition device (serial), read a barcode reader,
and check for feedback from an external navigation application.
This would mean (in your approach) one loop for the UDP, one for the GPS, one for the pen, one for the barcode reader,
and last but not least one for the navi - 5 permanent loops - and the app itself should also run (store things in a DB, display
information on a form, call web services......).
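Instead of 5 polling loops, one thread can block on all five sources at once. A hedged sketch with WaitForMultipleObjects - every name below is a placeholder; the event handles would come from overlapped I/O on the serial devices and from WSAEventSelect (FD_READ) on the UDP socket:

#include <windows.h>

enum { SRC_UDP, SRC_GPS, SRC_PEN, SRC_BARCODE, SRC_NAVI, SRC_COUNT };

/* Assumed to be created and associated with the devices elsewhere. */
extern HANDLE source_events[SRC_COUNT];
extern void   handle_udp(void);
extern void   handle_gps(void);
extern void   handle_pen(void);
extern void   handle_barcode(void);
extern void   handle_navi(void);

void dispatch_loop(void)
{
    for (;;) {
        /* One blocking call instead of five permanent loops: the thread
           sleeps until any of the sources has something to deliver.     */
        DWORD r = WaitForMultipleObjects(SRC_COUNT, source_events,
                                         FALSE, INFINITE);
        switch (r) {
        case WAIT_OBJECT_0 + SRC_UDP:     handle_udp();     break;
        case WAIT_OBJECT_0 + SRC_GPS:     handle_gps();     break;
        case WAIT_OBJECT_0 + SRC_PEN:     handle_pen();     break;
        case WAIT_OBJECT_0 + SRC_BARCODE: handle_barcode(); break;
        case WAIT_OBJECT_0 + SRC_NAVI:    handle_navi();    break;
        default: return;                  /* error or abandoned handle */
        }
    }
}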
And this app shows another thing - barcodes come in only 0.0....% of the runtime.
Handwriting also comes in only 0.0....% of the runtime.
For UDP, it may take 5 or more minutes until one piece of information comes in.
And GPS sends its sentences also only once per second.
So in special cases your polling could bring slightly better performance - assuming permanent data.
But like Charles, I'm very interested in benchmarks that show 3 to 4 times better performance.
I still agree with your statement that
using "blocking calls" is for sure (in most cases) the worst choice.
Regards
Manfred