C#, Threads, Events, and DataGrids/DataSets

I am trying to run a thread off of a form, and every once in a while the thread will raise an event for the form to read. When the form gets the event, the form will place the event into a dataset and display it on a datagrid that is on the form. The problem is that the thread will slowly take over all of the processor time. After about 8 events, the form will not even respond anymore. Here is the guts of my test code:

// Class and event for Thread
using System;

namespace ThreadTestStuff
{
    public delegate void TestEventHandler(object sender, int count);

    public class TestThread
    {
        public event TestEventHandler TestEvent;
        public bool stopRunning = false;

        public TestThread()
        {
        }

        public void RunningThread()
        {
            int xyz = 0;
            while (!stopRunning)
            {
                xyz += 1;
                Console.WriteLine("Count: " + xyz.ToString());
                if (xyz % 1000 == 0)
                    TestEvent(this, xyz);
            }
        }
    }
}

// Form that calls the test thread
// Data set only has (int count) and (string desc) in it

using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;
using System.Threading;
using ThreadTestStuff;

namespace ThreadTest
{
    public class ThreadTestForm : System.Windows.Forms.Form
    {
        private ThreadTest.TestSet testSet1;
        private System.Windows.Forms.DataGrid TestDG;
        private System.Windows.Forms.Button StartThreadButton;
        private Thread localThread;
        private TestThread localTestThread;

        private System.ComponentModel.Container components = null;

        public ThreadTestForm()
        {
            InitializeComponent();
        }

        protected override void Dispose( bool disposing )
        {
            localTestThread.stopRunning = true;
            localThread.Abort();
            if( disposing )
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose( disposing );
        }

        #region Windows Form Designer generated code
        private void InitializeComponent()
        {
            this.testSet1 = new ThreadTest.TestSet();
            this.TestDG = new System.Windows.Forms.DataGrid();
            this.StartThreadButton = new System.Windows.Forms.Button();
            ((System.ComponentModel.ISupportInitialize)(this.testSet1)).BeginInit();
            ((System.ComponentModel.ISupportInitialize)(this.TestDG)).BeginInit();
            this.SuspendLayout();
            //
            // testSet1
            //
            this.testSet1.DataSetName = "TestSet";
            this.testSet1.Locale = new System.Globalization.CultureInfo("en-US");
            //
            // TestDG
            //
            this.TestDG.DataMember = "";
            this.TestDG.DataSource = this.testSet1.TestTable;
            this.TestDG.HeaderForeColor = System.Drawing.SystemColors.ControlText;
            this.TestDG.Location = new System.Drawing.Point(16, 24);
            this.TestDG.Name = "TestDG";
            this.TestDG.Size = new System.Drawing.Size(320, 144);
            this.TestDG.TabIndex = 0;
            //
            // StartThreadButton
            //
            this.StartThreadButton.Location = new System.Drawing.Point(224, 184);
            this.StartThreadButton.Name = "StartThreadButton";
            this.StartThreadButton.Size = new System.Drawing.Size(120, 32);
            this.StartThreadButton.TabIndex = 1;
            this.StartThreadButton.Text = "Start Thread";
            this.StartThreadButton.Click += new System.EventHandler(this.StartThreadButton_Click);
            //
            // ThreadTestForm
            //
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(376, 253);
            this.Controls.Add(this.StartThreadButton);
            this.Controls.Add(this.TestDG);
            this.Name = "ThreadTestForm";
            this.Text = "Thread Test Form";
            ((System.ComponentModel.ISupportInitialize)(this.testSet1)).EndInit();
            ((System.ComponentModel.ISupportInitialize)(this.TestDG)).EndInit();
            this.ResumeLayout(false);
        }
        #endregion

        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.Run(new ThreadTestForm());
        }

        private void EventHappend(object sender, int count)
        {
            localThread.Interrupt();
            testSet1.TestTable.AddTestTableRow(count, "Hello There");
            // MessageBox.Show(localThread.ThreadState.ToString());
        }

        private void StartThreadButton_Click(object sender, System.EventArgs e)
        {
            localTestThread = new TestThread();
            localTestThread.TestEvent += new TestEventHandler(this.EventHappend);
            localThread = new Thread(new ThreadStart(localTestThread.RunningThread));
            localThread.Start();
            localThread.IsBackground = true;
        }
    }
}

Can anyone help?
Thanks,
Dennis Owens
 
Dennis Owens said:
I am trying to run a thread off of a form, and every once in a while
the thread will raise an event for the form to read. When the form
gets the event, the form will place the event into a dataset and
display it on a datagrid that is on the form. The problem is that the
thread will slowly take over all of the processor time. After about 8
events, the form will not even respond anymore. Here is the guts of my
test code.

I'm not surprised - you've got 8 threads in a tight loop. That's bound
to take over the processor! However, you've got a few other nasties
going on...

Firstly, you're accessing stopRunning in a non-thread-safe way. You
should either declare it as being volatile, or wrap any access to it in
a lock.

Secondly, you should never update the GUI from a non-UI thread, as you
currently are doing. You should use Control.Invoke to invoke a delegate
on the UI thread.

Thirdly, why are you calling localThread.Interrupt() from your event?
At that time, you're actually running *in* the thread you're trying to
interrupt!
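[Editor's note: a minimal sketch of the Control.Invoke approach suggested above, written against the posted code. The InvokeRequired check and the re-invocation pattern are the standard WinForms idiom; the handler body is taken from the posted EventHappend, with the Interrupt call removed.]

```csharp
// Marshal the worker-thread event onto the UI thread before touching
// the dataset bound to the grid.
private void EventHappend(object sender, int count)
{
    if (this.InvokeRequired)
    {
        // We are on the worker thread: re-invoke this same handler
        // on the UI thread and return.
        this.Invoke(new TestEventHandler(EventHappend),
                    new object[] { sender, count });
        return;
    }
    // Now safely on the UI thread.
    testSet1.TestTable.AddTestTableRow(count, "Hello There");
}
```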
 
The interrupt was just a wild guess to try to stop it from sucking up all the processor time. I forgot I left it in there. The only line in that method should be the one adding the row to the dataset. As for your second point, is this what is causing the thread to take over? Well, I will look up Control.Invoke and see if I can figure it out. I still don't see the eight threads, just the form and the test thread.

Thanks, Dennis Owens
 
Hi,
stopRunning doesn't need to be declared volatile and it should not be used
with locks.
Native integer assignment is an atomic operation, as is native integer
promotion (even though the latter doesn't even count here).
No compiler would optimize away loading of a loop variable when there is a
nontrivial loop body (like non-inlined method calls inside the loop).
And you don't need release/acquire semantics for a single variable with atomic
assignment.
Using a lock for accessing it would be an unnecessary performance hit, if not
to say a mistake.

I suppose that he misinterpreted the meaning of Thread.Interrupt, and you are
correct with the rest of your comments.

-Valery.

See my blog at:
http://www.harper.no/valery
 
You have a tight loop inside your thread method - of course it eats all
processor resources!
And btw, setting a thread as background doesn't affect thread
priority/scheduling. It only says that the .NET process can terminate even if
there are some background threads still running (I just think that you got
that wrong too).

And do as Jon said - use Invoke when you update controls from non-UI
threads.

-Valery

See my blog at:
http://www.harper.no/valery


Dennis Owens said:
The interrupt was just a wild guess to try to stop it from sucking up
all the processor time. I forgot I left it in there. The only line in that
method should be the one adding the row to the dataset. As for your second
point, is this what is causing the thread to take over? Well, I will look up
Control.Invoke and see if I can figure it out. I still don't see the eight
threads, just the form and the test thread.
 
OK, here is a simple question: how should this simple example be written?

Thanks Dennis Owens
 
btw (to avoid being misinterpreted) I didn't say that using a bool flag as a
thread event is good design :-). He should use a kernel object like an event
for signaling exit.

-Valery.

See my blog at:
http://www.harper.no/valery

 
Valery Pryamikov said:
stopRunning doesn't need to be declared volatile and it should not be used
with locks.
Native integer assignment is an atomic operation, as is native integer
promotion (even though the latter doesn't even count here).

Being atomic has nothing to do with it. The memory model does not
guarantee that the running thread will *ever* see writes from another
thread unless a memory fence is involved.

You can argue about whether or not it'll actually happen, but I prefer
to work from guarantees when it comes to multi-threading - because
sooner or later, some architecture will come along and destroy all
assumptions apart from the guarantees.
 
Dennis Owens said:
The interrupt was just a wild guess to try to stop it from sucking
up all the processor time. I forgot I left it in there. The only line
in that method should be the one adding the row to the dataset. As for
your second point, is this what is causing the thread to take over?
Well, I will look up Control.Invoke and see if I can figure it out. I
still don't see the eight threads, just the form and the test thread.

Sorry, I thought you'd meant there were 8 clicks, not 8 events -
misread. Yup, there'll only be the one extra thread. It will still be
in a tight loop though, which is going to stuff you to some extent
*whatever* you do.
 
Valery Pryamikov said:
btw (to avoid being misinterpreted) I didn't say that using a bool flag as a
thread event is good design :-). He should use a kernel object like an event
for signaling exit.

On the contrary, I'd say using a boolean (but using it properly) is a
perfectly reasonable way of exiting the thread. How would your code
with an event work? It's basically going to end up doing something
*equivalent* to just checking a flag, assuming that the thread wants to
keep doing work until it's told to stop.

Using a boolean is simple (when done right) and allows clean exit
(unlike, say, aborting the thread). Sure, it requires a memory barrier
in order to guarantee that the thread sees the appropriate change in
value, but those are very cheap in the grand scheme of things. What
benefit is there in doing anything else?
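[Editor's note: a sketch of the "boolean done right" Jon describes, applied to the posted TestThread. The volatile keyword supplies the visibility guarantee under discussion; the Thread.Sleep is an added illustration for the separate tight-loop problem, not part of Jon's point.]

```csharp
using System.Threading;

public class TestThread
{
    // volatile: every read/write goes through a memory barrier, so the
    // worker is guaranteed to observe the UI thread's write.
    public volatile bool stopRunning = false;

    public void RunningThread()
    {
        int xyz = 0;
        while (!stopRunning)   // volatile read picks up the stop request
        {
            xyz += 1;
            Thread.Sleep(1);   // yield, so the loop doesn't hog the CPU
        }
    }
}
```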
 
Jon, you are wrong. Atomic assignment has everything to do with it - read the
section on the .NET memory model in the .NET specs...
And btw, regardless of processor and memory architecture there is always a
guarantee that memory writes from one thread will be seen by any other
thread. It is only the order of read-to-write or write-to-read that is not
guaranteed and can require a memory barrier depending on the memory and
processor architecture (read-modify-write order is not guaranteed on any
memory and processor architecture).
In this sample stopRunning doesn't require such a barrier (believe me, I have
spent quite some time learning and working with multithreaded programming).

-Valery

See my blog at:
http://www.harper.no/valery
 
A kernel object event is a different thing than a C# delegate event.
If you are interested in how Win32 events work, read for example
Jeff Richter's book.

-Valery (Windows SDK MVP since 1999).

See my blog at:
http://www.harper.no/valery
 
Valery Pryamikov said:
Jon, you are wrong. Atomic assignment has everything to do with it - read the
section on the .NET memory model in the .NET specs...

I have - and while atomicity is necessary, it's not sufficient.
And btw, regardless of processor and memory architecture there is always a
guarantee that memory writes from one thread will be seen by any other
thread.

Not if the JIT compiler decides to keep the writes "local", not
flushing them back to the main processor memory. It could keep the
value of the variable within a register, for instance, and only write
it back at the end of a method. Similarly, the reading thread could
keep the value of the variable within a register and only read it from
memory once, at the start of the method. Both of those are possible
(though unlikely) under the .NET memory model.

The Java memory model makes all of this somewhat clearer, IMO - while
it's obviously not a good idea to code to the Java memory model when
working in .NET, it gives a good feeling of just how horrible things
can end up.
It is only the order of read-to-write or write-to-read that is not
guaranteed and can require a memory barrier depending on the memory and
processor architecture (read-modify-write order is not guaranteed on any
memory and processor architecture).

So where is the guarantee that writes are seen immediately? If they're
not seen immediately, where's the guarantee that they're seen by any
specific time without any memory barriers? If there's no guarantee of
them being seen by any specific time, what's to stop an implementation
from (say) caching the flag in a register and never (within an infinite
loop) going back to main memory?
In this sample stopRunning doesn't require such a barrier (believe me, I have
spent quite some time learning and working with multithreaded programming).

A lot of people do, and a lot of people will never get burned by it.
The same is true of the double checked locking algorithm - that doesn't
mean it's correct. Just because something has always worked for you
doesn't mean it complies with the specifications.
 
Valery Pryamikov said:
kernel object event is a different thing than C# delegate event.

Yes, I wasn't talking about C# delegate events either.
If you are interested in how Win32 events work, read for example
Jeff Richter's book.

I know a bit about how Win32 events work. I don't see how they're
relevant in this case. Could you provide some code using events which
is a) correct, and b) simpler or "better" in some other way than using
a flag?
 
Jon, for God's sake, go read some docs and try to reflect on them!
The JIT compiler just compiles IL to x86 (or whatever platform it is developed
for).
Some optimizing compilers could optimize away loading a variable into a
register for loops with a trivial loop body as an optimization technique, but
no compiler will do it for a non-trivial loop body (there are books on
compiler optimization theory if you are interested). Volatile has several
meanings (overloaded semantics) and one of these meanings is to signal the
optimizing compiler that it can't use that particular type of optimization.
However, any non-inlined method call is one of the criteria that rules out
that optimization too (non-trivial loop body) and a delegate call is never
inlined (not speaking about other things).
The other meaning of volatile is defined by the .NET spec, where it adds
release/acquire semantics for variable reads/writes. This is important only
for non-trivial memory objects with non-atomic assignment and consistency
requirements. For example, if we speak of some class that has some instance
fields, then volatile could be important to guarantee that all memory writes
from the class constructor will be completed before a memory read on 'this'.
But with the usage pattern that we discuss here it could never cause that
problem. I can even give you an exact proof of this fact based both on the
x86 and .NET memory models (but I would rather not do it, to avoid wasting
the time of the newsgroup readers on something they probably aren't
interested in reading anyhow).
As I already said - regardless of processor architecture and memory model,
it is always guaranteed that a memory write from one thread will be seen by
all other threads. This is a general rule of computing. Period. Order isn't
guaranteed, but visibility is!

Jon, I've spent several years of my life learning and programming symmetric
multiprocessing and I'm not going to argue here with you about things that
you apparently don't know well. You can have your last word if you want,
however be warned that trying to argue about something that you are not
really familiar with could just harm your reputation as a specialist.

-Valery

See my blog at:
http://www.harper.no/valery
 
Any multithreading sample could be used for demonstrating this (literally
tons of them).
You simply create one or more events in your program and use
WaitForSingleObject/WaitForMultipleObjects(Ex) from your thread.
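[Editor's note: a sketch of the kernel-event approach in .NET terms. ManualResetEvent wraps a Win32 event object, and WaitOne plays the role of WaitForSingleObject; the Stop method and the 10 ms timeout are illustrative choices, not from the posted code.]

```csharp
using System.Threading;

public class TestThread
{
    // Unsignaled until someone asks the thread to stop.
    private ManualResetEvent stopEvent = new ManualResetEvent(false);

    public void RunningThread()
    {
        int xyz = 0;
        // WaitOne returns true once the event is signaled; the timeout
        // doubles as a throttle so the loop isn't tight.
        while (!stopEvent.WaitOne(10, false))
        {
            xyz += 1;
        }
    }

    // Callable from any thread; kernel events handle visibility themselves.
    public void Stop() { stopEvent.Set(); }
}
```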

-Valery.

See my blog at:
http://www.harper.no/valery
 
The same is true of the double checked locking algorithm - that doesn't
mean it's correct. Just because something has always worked for you
doesn't mean it complies with the specifications.
I learned about problems related to the singleton double-checked locking
initialization pattern many years ago (literally in the last century), and
even have a couple of Petri net diagrams with a proof of this problem still
lying in my desk....

-Valery.

See my blog at:
http://www.harper.no/valery
 
Valery Pryamikov said:
Jon, for God's sake, go read some docs and try to reflect on them!

Um, I *have* read the specification. I have seen no guarantee of the
type you're implying exists.
JIT compiler just compiles IL to x86 (or whatever platform it is developed
for).
Yup.

Some optimizing compilers could optimize away loading a variable into a
register for loops with a trivial loop body as an optimization technique, but
no compiler will do it for a non-trivial loop body (there are books on
compiler optimization theory if you are interested).

They could, however. That's the point - they could, and on some
architectures they may do. Yes, it won't be a problem on x86, but I
don't believe the spec guarantees it won't be *in general*.
Volatile has several meanings
(overloaded semantics) and one of these meanings is to signal the optimizing
compiler that it can't use that particular type of optimization.

No, it's more than that. Volatile in the .NET CLI has a very clear
meaning, to do with memory barriers. A volatile read/write affects more
than just the variable being read/written - it affects the whole
"stream" of memory accesses.
However, any
non-inlined method call is one of the criteria that rules out that
optimization too (non-trivial loop body) and a delegate call is never inlined
(not speaking about other things).

I don't see where inlining is actually relevant here.
The other meaning of volatile is defined by the .NET spec, where it adds
release/acquire semantics for variable reads/writes.

That's the meaning I'm talking about, seeing as I'm talking about the
specification.
This is important only
for non-trivial memory objects with non-atomic assignment and consistency
requirements. For example, if we speak of some class that has some instance
fields, then volatile could be important to guarantee that all memory writes
from the class constructor will be completed before a memory read on 'this'.
But with the usage pattern that we discuss here it could never cause that
problem. I can even give you an exact proof of this fact based both on the
x86 and .NET memory models (but I would rather not do it, to avoid wasting
the time of the newsgroup readers on something they probably aren't
interested in reading anyhow).

Any proof *cannot* be based on the x86 memory model, as the .NET memory
model doesn't refer to the x86 memory model at all, and is indeed much
weaker than it.
As I already said - regardless of processor architecture and memory model,
it is always guaranteed that a memory write from one thread will be seen by
all other threads. This is a general rule of computing. Period.

Not really. The general rule (to my mind) is that there is some way of
enforcing visibility, but that's not necessarily what happens
immediately.
Order isn't guaranteed, but visibility is!

Nope. It really isn't - not without memory barriers. If you're saying
that there isn't a single memory model which pretty much explicitly
states that the code posted might not work, I refer you to the Java
memory model. Choice quotes are:

<quote>
Best practice is that if a variable is ever to be assigned by one
thread and used or assigned by another, then all accesses to that
variable should be enclosed in synchronized methods or synchronized
statements.
</quote>

<quote>
Each thread has a working memory, in which it may keep copies of the
values of variables from the main memory that is shared between all
threads. To access a shared variable, a thread usually first obtains a
lock and flushes its working memory. This guarantees that shared values
will thereafter be loaded from the shared main memory to the thread's
working memory. When a thread unlocks a lock it guarantees the values
it holds in its working memory will be written back to the main memory.
</quote>

Now, as I said before, the Java memory model isn't quite the same as
the CLI memory model, but both are relatively weak in terms of the
guarantees they give.

Here's another quote, this time from .NET - the Thread.MemoryBarrier
method documentation:

<quote>
Synchronizes memory. In effect, flushes the contents of cache memory to
main memory, for the processor executing the current thread.
</quote>

Now, that suggests that there is the idea of a "cache" and "main
memory" and that they won't necessarily be in sync. Some kind of
flushing may be required in some situations. Where is the *guarantee*
that such a flush occurs in the posted code?
Jon, I've spent several years of my life learning and programming symmetric
multiprocessing and I'm not going to argue here with you about things that
you apparently don't know well. You can have your last word if you want,
however be warned that trying to argue about something that you are not
really familiar with could just harm your reputation as a specialist.

If it really is guaranteed, why not just post the relevant parts of
the CLI specification? If I really am wrong, I'd *really* like to be
proven wrong. For one thing, it would make my life easier when writing
similar code!

I'm certainly *not* an expert in the x86 memory model, or indeed any
specific processor's memory model. I wouldn't even say I'm an *expert*
in the CLI memory model, although I know more about that than about any
specific processor model.

I'm not trying to argue that under the current .NET implementation, the
posted code won't always work. It may well work for all future
implementations on every architecture, too - but I don't believe it's
guaranteed to. All I'm after is some evidence of that guarantee, or an
acknowledgement that it doesn't exist.
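[Editor's note: for completeness, the explicit-fence alternative Jon's quote of Thread.MemoryBarrier points at. A sketch only; the RequestStop method name is illustrative, and this assumes Thread.MemoryBarrier as documented in the quote above.]

```csharp
using System.Threading;

public class TestThread
{
    private bool stopRunning = false;

    // Called from the UI thread.
    public void RequestStop()
    {
        stopRunning = true;
        Thread.MemoryBarrier();       // publish the write to main memory
    }

    public void RunningThread()
    {
        while (true)
        {
            Thread.MemoryBarrier();   // force a fresh read of stopRunning
            if (stopRunning)
                break;
            Thread.Sleep(1);          // keep the loop from running tight
        }
    }
}
```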
 
Valery Pryamikov said:
Any multithreading sample could be used for demonstrating this (literally
tons of them).
You simply create one or more events in your program and use
WaitForSingleObject/WaitForMultipleObjects(Ex) from your thread.

But the thread doesn't *want* to wait. An event would be fine to use in
a queuing system, where it was being given extra work by another
thread. I would usually use an event (or actually just pulse a monitor,
which is very similar) to control the "more work or a signal to stop
has come in" but still use a flag to show the difference between more
work being present and a request to stop.

The code posted, however, didn't rely on another thread giving it any
more work - it doesn't want to *wait* for a signal from another thread,
it just wants to notice when such a signal has been provided, at some
future (but not too distant future) time.
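[Editor's note: a sketch of the hybrid Jon describes here - a monitor pulse for "more work or a signal to stop", plus a flag to tell the two apart. The Worker class and its queue are illustrative, not from the posted code.]

```csharp
using System;
using System.Collections;
using System.Threading;

public class Worker
{
    private Queue work = new Queue();
    private bool stopRequested = false;

    // Called by producers on any thread.
    public void Add(object item)
    {
        lock (work)
        {
            work.Enqueue(item);
            Monitor.Pulse(work);      // wake the worker: more work
        }
    }

    public void Stop()
    {
        lock (work)
        {
            stopRequested = true;
            Monitor.Pulse(work);      // wake the worker: time to stop
        }
    }

    // The worker thread's main loop.
    public void Run()
    {
        while (true)
        {
            object item;
            lock (work)
            {
                while (work.Count == 0 && !stopRequested)
                    Monitor.Wait(work);   // releases the lock while waiting
                if (work.Count == 0)
                    return;               // stop requested, queue drained
                item = work.Dequeue();
            }
            Console.WriteLine(item);      // process outside the lock
        }
    }
}
```

Because every access to the flag happens inside the lock, the memory-visibility question debated above does not arise in this version.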
 
I learned about problems related to the singleton double-checked locking
initialization pattern many years ago (literally in the last century), and
even have a couple of Petri net diagrams with a proof of this problem still
lying in my desk....

While I don't have the Petri Nets diagrams, I too learned about the
problems with it (in the Java memory model at least) quite a while ago.

My point is that people use things which work for them, and then they
assume that they're guaranteed to work. Just because using a simple
flag with no locking or memory barriers worked for you every time you
used it doesn't mean it's guaranteed to work. (Were you even working in
the .NET memory model then? It may have been guaranteed to work in the
memory model you were using, but that doesn't mean it's guaranteed to
work in the .NET memory model.)
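[Editor's note: for readers unfamiliar with the double-checked locking pattern both posters refer to, a sketch of the problematic form and a simple alternative. The Singleton class is illustrative.]

```csharp
public sealed class Singleton
{
    private static Singleton instance;
    private static readonly object padlock = new object();

    private Singleton() { }

    // The classic double-checked locking pattern. Under a weak memory
    // model, a thread may observe a non-null 'instance' before the
    // writes made by the constructor are visible to it.
    public static Singleton DoubleCheckedInstance
    {
        get
        {
            if (instance == null)             // first, unlocked check
            {
                lock (padlock)
                {
                    if (instance == null)     // second, locked check
                        instance = new Singleton();
                }
            }
            return instance;
        }
    }

    // The simpler alternative: the CLR guarantees that static
    // initialization is thread-safe.
    private static readonly Singleton safeInstance = new Singleton();
    public static Singleton Instance
    {
        get { return safeInstance; }
    }
}
```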
 