My understanding is that this attribute implements a per-method lock.
While a thread is executing code inside a specific synchronized method, all
other threads calling that same method will need to wait until the current thread exits.
Other methods in the class (even those that are locked in identical
fashion) are not affected by the lock.
Is this correct?
Here's an example that outlines my thinking:
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Text;

namespace Namespace1
{
    public class Class1
    {
        // External threads repeatedly call Method1, Method2, Method1, etc.

        [MethodImplAttribute( MethodImplOptions.Synchronized )]
        public void Method1()
        {
            // Thread #2 is currently executing here; when finished it
            // will move on and wait for thread #1 to exit Method2.
        }

        [MethodImplAttribute( MethodImplOptions.Synchronized )]
        public void Method2()
        {
            // Thread #1 is currently executing here; when finished it
            // will move on and wait for thread #2 to exit Method1.
        }
    }
}
Josip Medved said:
... so, [MethodImplAttribute(MethodImplOptions.Synchronized)] is just
some special magic which accomplishes the same behind the scenes ... ?
No, for an instance method it does something like this:
public void Method1() {
    lock (this) {
        // some code
    }
}
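For completeness, the lock target differs between instance and static methods: the runtime takes the lock on `this` for instance methods and on the method's `Type` object for static methods. A minimal sketch of the rough equivalence (the `Account` class name is made up for illustration):

```csharp
using System;
using System.Runtime.CompilerServices;

public class Account   // hypothetical class, for illustration only
{
    // [MethodImplAttribute(MethodImplOptions.Synchronized)] on an
    // instance method behaves roughly like:
    public void InstanceMethod()
    {
        lock (this)                 // the lock target is the instance
        {
            // some code
        }
    }

    // ...and on a static method it behaves roughly like:
    public static void StaticMethod()
    {
        lock (typeof(Account))      // the lock target is the Type object
        {
            // some code
        }
    }
}
```

Because the lock is on a publicly reachable object (`this` or the `Type`), outside code can accidentally take the same lock, which is one reason a private lock object is usually preferred.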
Interestingly, after adding this attribute to all of my public methods that
perform data access, all of my System.Data.SqlClient concurrency-related
exceptions magically went away.
That is because if you add this to all your methods there is no real
concurrency.
What are the practical drawbacks of using this attribute on a per-method
basis?
Once one method is entered, all the other methods need to wait. Even methods
that could return a result without interfering with other parts of the code
are kept waiting. No matter how many threads you have, only one is allowed
access at any point in time.
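This mutual exclusion is easy to observe. The sketch below (class and method names are invented for illustration) starts one thread in a slow synchronized method, then calls a different synchronized method on the same instance; the second call blocks until the first releases the instance lock:

```csharp
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;
using System.Threading;

public class Demo
{
    [MethodImplAttribute( MethodImplOptions.Synchronized )]
    public void SlowMethod()
    {
        Thread.Sleep(500);   // holds the instance lock for half a second
    }

    [MethodImplAttribute( MethodImplOptions.Synchronized )]
    public void FastMethod()
    {
        // Does no work, but still has to acquire the same lock.
    }

    public static long MeasureBlocking()
    {
        var demo = new Demo();
        var worker = new Thread(demo.SlowMethod);
        worker.Start();
        Thread.Sleep(50);            // give SlowMethod time to take the lock

        var sw = Stopwatch.StartNew();
        demo.FastMethod();           // blocks until SlowMethod exits
        sw.Stop();
        worker.Join();
        return sw.ElapsedMilliseconds;
    }

    public static void Main()
    {
        // Typically reports a few hundred milliseconds, not ~0.
        Console.WriteLine("FastMethod waited " + MeasureBlocking() + " ms");
    }
}
```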
Often this is too restrictive. The most usual scenario (at least for me)
is having few writes of data and many reads. In that case a ReaderWriterLock
gives you a real performance boost (a plain lock does really badly here).
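As an example of the read-mostly pattern, the sketch below uses ReaderWriterLockSlim (the newer replacement for ReaderWriterLock; the idea is the same) so that concurrent readers do not serialize on each other. The `Cache` class is a made-up illustration:

```csharp
using System.Collections.Generic;
using System.Threading;

public class Cache   // hypothetical read-mostly cache, for illustration
{
    private readonly Dictionary<string, string> _data =
        new Dictionary<string, string>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    // Any number of threads may hold the read lock at the same time.
    public string Get(string key)
    {
        _lock.EnterReadLock();
        try
        {
            string value;
            return _data.TryGetValue(key, out value) ? value : null;
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }

    // A writer gets exclusive access; readers wait only while a write runs.
    public void Set(string key, string value)
    {
        _lock.EnterWriteLock();
        try
        {
            _data[key] = value;
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
}
```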
Also, having multiple lock objects, if your class does some things that
are not closely related, gives a nice boost.
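The multiple-lock idea can be sketched like this (the `Statistics` class and its members are invented for illustration): unrelated operations each guard their own state with a separate private lock object, so they never block each other.

```csharp
using System.Collections.Generic;

public class Statistics   // hypothetical class, for illustration only
{
    private readonly object _countLock = new object();
    private readonly object _logLock = new object();
    private int _count;
    private readonly List<string> _log = new List<string>();

    // Counter updates and log appends touch unrelated state, so they
    // use separate lock objects and do not contend with each other.
    public void Increment()
    {
        lock (_countLock) { _count++; }
    }

    public int Count
    {
        get { lock (_countLock) { return _count; } }
    }

    public void Append(string line)
    {
        lock (_logLock) { _log.Add(line); }
    }

    public int LogSize
    {
        get { lock (_logLock) { return _log.Count; } }
    }
}
```

With [MethodImplOptions.Synchronized] or a single `lock (this)`, both operations would share one lock and serialize unnecessarily.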
However, to give real advice on what to use, one always needs to see the code
and how that code is to be used, and thus I gave the advice to just put lock
everywhere and MEASURE where you have a performance problem. There is no
sense (IMHO) in making a perfect architecture to squeeze out every possible
nanosecond. If your program is slow, just optimize the few methods that do
have a problem and you will see a huge performance boost (the 80/20 rule
applies). Time spent optimizing something that your customer will not feel
is wasted time, better spent on testing and debugging. There are not many
applications where every processor clock is so precious that everything
needs to be perfect (and I have my doubts that anybody writes those
applications in C#).