I want to add a little bit to what Frans said ...
Academically speaking ... yes, it should work. But here is what happens: if you
work with multiple SqlConnection objects, even if you close them, then to keep
things transactionally consistent - and because of connection pooling - the
physical connection remains open, and it keeps the resources locked for up to
the transaction timeout, which is 1 minute by default.
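Just so we're talking about the same thing, here's a bare-bones sketch of the pattern I mean (the connection string and the Customers table are made up - substitute your own):

using System;
using System.Data.SqlClient;
using System.Transactions;

class PoolingDemo
{
    // Hypothetical connection string - substitute your own.
    const string ConnStr =
        "Data Source=.;Initial Catalog=Test;Integrated Security=SSPI;";

    static void Main()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection conn1 = new SqlConnection(ConnStr))
            {
                conn1.Open(); // auto-enlists in the ambient transaction
                SqlCommand cmd = new SqlCommand(
                    "SELECT * FROM Customers", conn1);
                cmd.ExecuteReader().Close();
            } // Close() hands conn1 back to the pool, but the physical
              // connection stays enlisted - and its locks stay held - until
              // the scope completes or the transaction times out
              // (1 minute by default).

            using (SqlConnection conn2 = new SqlConnection(ConnStr))
            {
                // Depending on the pooling logic, this may hand back the
                // already-enlisted physical connection, or enlist a second
                // one - and on SQL Server 2005 a second enlistment is what
                // triggers promotion to MSDTC.
                conn2.Open();
                SqlCommand cmd = new SqlCommand(
                    "UPDATE Customers SET Name = 'x' WHERE Id = 1", conn2);
                cmd.ExecuteNonQuery();
            }

            scope.Complete(); // without this, everything rolls back at Dispose
        }
    }
}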
Now, if you are using SQL Server 2000, your transaction gets promoted to MSDTC
even if there is only one single SqlConnection/database connection within your
transaction scope. So you end up locking rows at the serializable isolation
level (the System.Transactions default), which leads to lower performance. But
at least this will work, albeit at much-reduced performance, so you are much
better off implementing your own transaction logic, much like what I recommend
at the end of this message anyway.
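Side note: both the serializable default and the 1-minute timeout can be overridden through TransactionOptions. A quick sketch with illustrative values - this eases the isolation-level pain, but it does nothing about promotion itself:

using System;
using System.Transactions;

class IsolationDemo
{
    static void Main()
    {
        // TransactionScope defaults to IsolationLevel.Serializable and a
        // 1-minute timeout; both can be overridden here.
        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = IsolationLevel.ReadCommitted;
        options.Timeout = TimeSpan.FromSeconds(30); // illustrative value

        using (TransactionScope scope =
            new TransactionScope(TransactionScopeOption.Required, options))
        {
            // ... your data access goes here ...
            scope.Complete();
        }
    }
}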
But if you are using SQL Server 2005, your transaction gets promoted ONLY WHEN
a second RM (resource manager) shows up in the picture within the same
transaction scope. So essentially what will happen here is:
User1: Select <--- one connection, no promotion, read-committed isolation
level
User2: Select <--- two connections
User1: Modify (disconnected, in memory)
User2: Insert <--- two or three connections (depending upon the pooling
logic); this insert runs into the command timeout, promotion occurs -
serializable isolation level, and MSDTC shows up.
User1: Update <--- three or four connections, and a deadlock occurs, followed
by deadlock resolution: User2's insert will time out, and User1's update will
be made the deadlock victim, since connection 3 came in and "attempted to
lock" before connection 4. Of course, #3 couldn't lock because either #1 or #2
had already locked the resources.
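If you want to watch this happen, here's a rough two-thread sketch of the interleaving (the connection string and the Orders table are made up, and exactly who becomes the victim depends on timing and pooling):

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Transactions;

class DeadlockDemo
{
    // Hypothetical connection string - substitute your own.
    const string ConnStr =
        "Data Source=.;Initial Catalog=Test;Integrated Security=SSPI;";

    static void Main()
    {
        Thread user1 = new Thread(Worker);
        Thread user2 = new Thread(Worker);
        user1.Start();
        user2.Start();
        user1.Join();
        user2.Join();
    }

    static void Worker()
    {
        try
        {
            using (TransactionScope scope = new TransactionScope())
            using (SqlConnection conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                // Both users read first. Under the serializable default,
                // each select takes shared range locks that are held until
                // the transaction ends.
                SqlCommand select = new SqlCommand(
                    "SELECT Qty FROM Orders WHERE Id = 1", conn);
                select.ExecuteScalar();

                Thread.Sleep(2000); // give the other user time to select too

                // Now both try to write. Each needs an exclusive lock that
                // the other's shared lock blocks: a classic conversion
                // deadlock.
                SqlCommand update = new SqlCommand(
                    "UPDATE Orders SET Qty = Qty + 1 WHERE Id = 1", conn);
                update.ExecuteNonQuery();

                scope.Complete();
            }
        }
        catch (SqlException ex)
        {
            // SQL Server picks one thread as the deadlock victim (error
            // 1205) and rolls it back; the other one then goes through.
            Console.WriteLine(ex.Message);
        }
    }
}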
So the above is not a good implementation ... right? Especially because, in
the event of a conflict, both parties fail, you get much-reduced performance,
and, frankly, unpredictable behavior.
Thus, if you are wrapping multiple business objects within one single
transaction, and you are relying on SqlConnection's auto-enlistment logic to
tie all of that together, you need to be aware of what is going on underneath.
Simply wrapping stuff in a TransactionScope and hoping for the magic to happen
... umm ... won't happen. At least not yet.
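One cheap way to stay aware of what is going on underneath is to check whether the ambient transaction has been promoted. A small sketch:

using System;
using System.Transactions;

class PromotionCheck
{
    static void Main()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // ... open your connections and do your work here ...

            // DistributedIdentifier stays Guid.Empty as long as the
            // transaction is still a lightweight local one; once MSDTC
            // takes over, it gets a real value.
            Guid id = Transaction.Current
                .TransactionInformation.DistributedIdentifier;

            Console.WriteLine(id == Guid.Empty
                ? "Still local - no promotion."
                : "Promoted to MSDTC: " + id);

            scope.Complete();
        }
    }
}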
So what is the solution? The solution is to implement your own RMs if you want
the TransactionScope paradigm to work. In your own RMs you can implement
whatever logic you want, and if I were you, I'd put standard ADO.NET
concurrency checks in there. This would be a volatile RM, since you are
talking about in-memory data: your business object, DataSet, or DataTable. The
database itself, being the durable resource, can be relied on using a simple
SqlTransaction, where you would need to break away from the current
TransactionScope and prevent SqlConnection's auto-enlistment in the process.
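To give you the ten-cent version anyway, here is what the skeleton of such a volatile RM looks like - the class name is made up and the concurrency check is stubbed out, so treat this as a sketch, not a full implementation:

using System;
using System.Transactions;

// A volatile RM guarding in-memory state - a business object, DataSet,
// or DataTable.
class BusinessObjectRM : IEnlistmentNotification
{
    // Hook this instance into the ambient transaction.
    public void Join()
    {
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        // Phase 1: this is where your standard ADO.NET-style concurrency
        // check would go - compare the original values you read against
        // what is there now.
        bool concurrencyCheckPassed = true; // stubbed out

        if (concurrencyCheckPassed)
            preparingEnlistment.Prepared();      // vote to commit
        else
            preparingEnlistment.ForceRollback(); // vote to abort
    }

    public void Commit(Enlistment enlistment)
    {
        // Phase 2: make the in-memory change permanent.
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        // Undo the in-memory change.
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}

You'd new one of these up inside your TransactionScope and call Join() before touching the in-memory data. As for keeping the database work out of the ambient transaction: either put Enlist=false in the connection string, or open the connection inside a new TransactionScope(TransactionScopeOption.Suppress) block, and then run a plain SqlTransaction yourself.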
Don't ask for the full code sample, LOL ... well, it's just a question of
time. I would love to write one up especially for you, but I am writing a few
samples for a number of websites/magazines, so keep watching.
So, what do you do for now? Don't bite into Sys.Tx unless you fully understand
it. Chapter 11 of my book (Frans was a reviewer on it) explains transactions,
including Sys.Tx, in depth. I should have an article published on codeguru.com
and developer.com in a day or two, explaining Sys.Tx and writing your own
volatile enlistment RM. But I must encourage you to understand how Sys.Tx
works behind the scenes before blindly wrapping your entire logic in
TransactionScopes. If it were that easy, someone would make a GUI tool written
in VB6 to do it, and we'd be out of a job.
Wow, long answer.
- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx