John said:
Hi
Is there a sleep function in VB.NET which, when used, gives time to other
apps so they stay responsive while this app is sleeping?
System.Threading.Thread.Sleep(Milliseconds).
or
Imports System.Threading.Thread
....
....
Sleep(milliseconds)
However, I'm curious about your usage. This is a deep subject but I
will try to summarize. I apologize for the length of this message,
but this is one of my favorite topics.
Windows is a pre-emptive OS, and every process/thread gets a quantum of
time to do its work. All things being equal (same thread priority, no
waitable kernel objects), the OS thread scheduler will allocate "equal"
time (called a QUANTUM) to all equal-priority threads.
A quantum is around 10-15 msecs depending on the CPU. 15 ms is pretty
typical on today's dual-core machines, and all current Windows versions
are NT-based (the old 95 line had a 10-13 ms quantum). It is called a
quantum because it is the minimum sleep, and every sleep rounds up to a
multiple of it. Assuming 15 ms is the quantum for your machine:
Sleep(1) to Sleep(15) will always sleep 15 ms.
Sleep(16) will sleep two quantums or 30 ms
Sleep(31) will sleep 3 quantums or 45 ms
and so on. 1 quantum = 15 ms
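If you want to see that rounding on your own machine, a quick throwaway
test like this will show it (just a sketch I'm making up here; note that
another app raising the system timer resolution can change the numbers):

Imports System.Diagnostics
Imports System.Threading

Module SleepQuantumTest
    Sub Main()
        Dim tests() As Integer = {1, 5, 16, 31}
        For Each ms As Integer In tests
            Dim sw As Stopwatch = Stopwatch.StartNew()
            Thread.Sleep(ms)
            sw.Stop()
            ' On a 15 ms quantum machine, Sleep(1..15) typically reports ~15 ms.
            Console.WriteLine("Sleep({0}) took about {1} ms", ms, sw.ElapsedMilliseconds)
        Next
    End Sub
End Module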
However,
Sleep(0)
is a special sleep which acts more like a POKE or YIELD (in RTOS terms)
to wake other threads in the system, but Windows will not wait a full
quantum before waking the calling thread and restarting it. So this can
be a problem if you use Sleep(0) the wrong way. I will explain why.
If you think your process is "hogging" the system, the reasons are
generally:
- If the process is a GUI applet, it may not be processing the
message queue fast enough. However, this generally slows your own
process down, not others. Calling My.Application.DoEvents() will
solve that problem (see the short sketch after this list).
- The typical main reason is that the process/thread context switching
is too high.
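For the GUI case, pumping the message queue inside the long loop is
usually all it takes. A rough sketch, where the loop and data routines
are placeholders for your own code:

' Runs on the UI thread of a Windows Forms app.
Sub DoWorkOnUiThread()
    While I.Have.Data
        work_on_data()
        ' Let the form process pending paint/input messages so it stays responsive.
        ' Note DoEvents re-enters your message loop, so guard against re-entrancy.
        My.Application.DoEvents()
    End While
End Sub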
Now, about the main reason. Context switching (CS) is a very expensive
Windows and CPU operation.
A CS is when Windows:
- Stops the current thread X in the CPU (preemption),
- Swaps out all its code, stacks and memory usage, and stores it,
- Swaps in the next thread Y's code, stack and memory usage from storage,
- Starts thread Y.
This is done, again when all things are equal, every QUANTUM for all
threads.
So a process thread will HOG the CPU when its CS rate is too high - PERIOD.
Lower the CS, and the machine will begin to run smoothly.
You can see the context-switch count in Task Manager for any process.
For individual threads, use a Performance Counter.
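If you want to watch the machine-wide rate programmatically, something
along these lines works (the counter names here assume an English
Windows install):

Imports System.Diagnostics
Imports System.Threading

Module ContextSwitchWatch
    Sub Main()
        ' "System \ Context Switches/sec" is the machine-wide counter.
        Using cs As New PerformanceCounter("System", "Context Switches/sec")
            cs.NextValue()              ' first sample is always 0, prime it
            For i As Integer = 1 To 5
                Thread.Sleep(1000)
                Console.WriteLine("Context switches/sec: {0:N0}", cs.NextValue())
            Next
        End Using
    End Sub
End Module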
Let's use this example.
Let's suppose you have a LOOP that does a lot of work for you, reading
and processing data blocks:
Sub DoWork1()
    While I.Have.Data
        work_on_data()
    End While
End Sub
You realize this:
"Wow! DoWork1() is necessary, but it's really dragging the
system down. The CPU is high and other processes are
not responsive. Need to improve."
The classic natural approach is to do what you asked and put in some
sleep time slicing:
Sub DoWork1(ms As Integer)
    While I.Have.Data
        work_on_data()
        Sleep(ms)
    End While
End Sub
Now, without the slice, Windows will automatically preempt by doing a
context switch somewhere in the above code, and it will do so at the
spot where the clock cycles of the compiled opcodes add up to a quantum
- this is called a natural time slice.
By adding the Sleep() slice, you begin altering the natural time
slicing.
So what value do you use?
If ms = 0, this MAY BE a mistake because it will increase the context
switching. The CPU usage might be higher than without it.
You can use 1 quantum (1 to 15) or more; this is where you begin to be
friendly, but without a doubt DoWork1() will take even longer. You will
be friendly with the system, but the time to completion may be beyond
an acceptable level.
So this is a fine-tuning process, and there are various fine-tuning
methods. Remember, the idea is to reduce the expensive context-switching
overhead. One idea is to only sleep every N records:
Sub DoWork1(slicer As Integer)
    Dim x As Integer = slicer
    While I.Have.Data
        work_on_data()
        If slicer > 0 AndAlso x = 0 Then
            Sleep(1)        ' yield at least one quantum
            x = slicer      ' start counter again
            Continue While
        End If
        x = x - 1
    End While
End Sub
But again, how many records you process before you sleep is a fine-tuning
exercise. Sometimes this will let you get away with the POKE, Sleep(0),
if you only do it occasionally, not all the time.
Also, sometimes you can get hardware-interrupt-based context switching
depending on what you are doing in the loop. Reading and writing to
DISK, or anything else that has hardware interrupts, including moving
the mouse around, will force a pre-emptive context switch to process
the interrupt.
Kernel Objects and Event-Driven Processes
Whenever possible, the ideal solution is to use EVENTS or Kernel
Objects. This is ideal because the Windows scheduler will put the
thread to sleep and will only wake it up when the event is signaled.
This is one of the exceptions to the "All things being equal" rule. A
thread waiting on an event is optimal, performance-wise.
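Here is a rough sketch of what that looks like in VB.NET, using an
AutoResetEvent (the names are made up for illustration):

Imports System.Threading

Module EventWaitSketch
    Private ReadOnly DataReady As New AutoResetEvent(False)

    ' Consumer thread: burns no CPU while blocked in WaitOne();
    ' the scheduler parks it until the event is signaled.
    Sub Consumer()
        Do
            DataReady.WaitOne()
            work_on_data()
        Loop
    End Sub

    ' Called by whatever produces the data (another thread, an I/O callback, etc.)
    Sub SignalDataArrived()
        DataReady.Set()
    End Sub

    Sub work_on_data()
        ' placeholder for the real processing
    End Sub
End Module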
But depending on what you are doing, using events is not always
possible. To illustrate with the above example, suppose you are
polling, waiting for data so you can process it.
If you create a timer to poll for data and its frequency is too high,
it can be wasteful; and if it processes data far too often, that can
bog down the machine as well.
If you had a way to get a SIGNAL when the data arrives, that would be
ideal, and that is often what to look for to improve performance and
cooperation with the other processes running.
But even then, if you did have a SIGNAL, you might not want to process
the data right away. Your design can wait until you have some threshold
X amount of data before you process it. So the signal might start a
timer to build up the data queue, and you only process when X is
reached or the timer expires. In our work, this is a common technique
to improve performance and queue processing.
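To sketch that batching idea (the threshold, timer interval, and data
type here are made-up placeholders, not a definitive implementation):

Imports System.Collections.Generic
Imports System.Threading

Module BatchSketch
    Private ReadOnly Pending As New Queue(Of Byte())
    Private ReadOnly SyncRoot As New Object()
    Private Const Threshold As Integer = 100    ' made-up batch size
    ' Flush whatever is queued every 500 ms even if the threshold was not reached.
    Private ReadOnly FlushTimer As New Timer(AddressOf OnFlushTimer, Nothing, 500, 500)

    ' Call this from your "data arrived" signal.
    Sub OnDataArrived(block As Byte())
        Dim flushNow As Boolean
        SyncLock SyncRoot
            Pending.Enqueue(block)
            flushNow = (Pending.Count >= Threshold)
        End SyncLock
        If flushNow Then ProcessBatch()
    End Sub

    Private Sub OnFlushTimer(state As Object)
        ProcessBatch()
    End Sub

    Private Sub ProcessBatch()
        Dim batch As Byte()()
        SyncLock SyncRoot
            batch = Pending.ToArray()
            Pending.Clear()
        End SyncLock
        For Each block As Byte() In batch
            ' work_on_data(block) goes here
        Next
    End Sub
End Module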
Of course, this is all implementation-specific, so it all depends on
what you are doing. But in general, if you want to be cooperative with
other processes and are asking about Sleep(), consider the concepts I
touched on above, because a Sleep is not always the solution. The
overall thing to look for is to reduce the context switching without
negatively slowing down your own work.
--