ADO.NET 2.0 TransactionScope object question: across business objects


Guest

I am new to .NET Framework 2.0. I need to make my application transactional at the business-object level. That is, I have a controller class that creates and calls several business objects. These business objects call data access objects and update some database tables. I need to make the whole thing transactional across these business objects.

I've heard people say that in ADO.NET 2.0 you can "wrap everything in a TransactionScope object, and it takes care of everything else for you". Can I simply use TransactionScope in my controller class to wrap these business objects up and make them transactional across all of them?

Thanks a lot.
 
BF - "Wrap everything in a TransactionScope..." is a pretty big statement
and a bit of an oversimplification ;-). The TransactionScope is an
amazing object and quite powerful, but there's nothing magic about it,
although it admittedly has that mystique. Right now SQL Server, Oracle,
and MSMQ are supported, so when 'everything' is referred to, it means
everything with respect to those sources. So depending on how you define
'everything', the answer could be yes or no. In general, though, I'd say that
'everything' is probably a no. If you remove a bunch of items from a
collection that's wrapped in a TransactionScope, then no, it's not able to
automatically 'commit' or 'rollback' those changes at the object level. If
you write out a text file to a certain point, it can't unwrite the changes
that you just wrote. So in that respect, 'everything' isn't handled for you.

With that said (and I apologize if I'm pointing out the obvious; I've just
come across many misconceptions about what all it can do and wanted to make
sure I was clear there), it does handle most everything with respect to
database transactions for you. So if you update data in MyDataBase on
SqlServerInstance2, then delete a few records in MySecondDB on
SqlServerInstance1, then read a bunch of messages from MyPublicQueue3, then
add a bunch of acknowledgements to MSMQ4, and finally write confirmations to
MyOracleDb5, and the fourth step fails and you want to roll everything back,
it'll do it for you (as well as commit if there weren't any problems). So
yes, from a basic single-source transaction to a full-scale distributed
transaction, the TransactionScope will handle it for you.
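To make that concrete, here's a minimal sketch of the pattern. The connection strings, database names, and SQL statements are made up for illustration; the point is that both connections enlist in the one ambient transaction, and nothing commits unless Complete() is reached:

```csharp
using System;
using System.Transactions;
using System.Data.SqlClient;

class DistributedSketch
{
    static void Main()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            // Hypothetical first database (auto-enlists in the ambient transaction).
            using (SqlConnection conn1 = new SqlConnection(
                "Data Source=SqlServerInstance2;Initial Catalog=MyDataBase;Integrated Security=true"))
            {
                conn1.Open();
                new SqlCommand("UPDATE Orders SET Status = 'Shipped'", conn1).ExecuteNonQuery();
            }

            // A second connection to a different instance; the transaction
            // is promoted to a distributed transaction under the covers.
            using (SqlConnection conn2 = new SqlConnection(
                "Data Source=SqlServerInstance1;Initial Catalog=MySecondDB;Integrated Security=true"))
            {
                conn2.Open();
                new SqlCommand("DELETE FROM AuditLog WHERE AgeDays > 30", conn2).ExecuteNonQuery();
            }

            // If any step above throws, Complete() is never reached and
            // everything rolls back when the scope is disposed.
            scope.Complete();
        }
    }
}
```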

If you can tell me a little more about the scenario, though, I can probably
be of more assistance.

Cheers,

Bill
 
My case only involves database access. But it's a multi-tier application. I
need to make transactions across different business objects. All these
business objects will call some data access objects to update database. I
have a controller class to create and access these business objects. And I
need to make it transactional across business objects.

Thanks a lot.
 
But does TransactionScope work in this case? I don't have user interactions
in these business objects. But each business object will call data access
objects, and they will create different SqlConnection objects.

Thanks a lot.
 
BF said:
But does TransactionScope work in this case? I don't have user
interactions in these business objects. But each business object
will call data access objects, and they will create different
SqlConnection objects.

If there's no user interaction, that should indeed work. Everything
should be controlled by the same non-ADO.NET transaction (which
effectively uses a COM+ transaction).
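A minimal sketch of that arrangement (class names, connection strings, and SQL are hypothetical): the controller owns the TransactionScope, and each business object opens its own SqlConnection, which auto-enlists in the same ambient transaction:

```csharp
using System;
using System.Transactions;
using System.Data.SqlClient;

// Hypothetical business object: opens its own connection, which
// auto-enlists in Transaction.Current when opened inside a scope.
class OrderBusinessObject
{
    public void SaveOrder()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Shop;Integrated Security=true"))
        {
            conn.Open();
            new SqlCommand("INSERT INTO Orders DEFAULT VALUES", conn).ExecuteNonQuery();
        }
    }
}

// A second business object with its own, separate SqlConnection.
class InventoryBusinessObject
{
    public void ReserveStock()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Shop;Integrated Security=true"))
        {
            conn.Open(); // different connection, same ambient transaction
            new SqlCommand("UPDATE Stock SET Reserved = Reserved + 1", conn).ExecuteNonQuery();
        }
    }
}

// The controller wraps both calls in one scope.
class OrderController
{
    public void PlaceOrder()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            new OrderBusinessObject().SaveOrder();
            new InventoryBusinessObject().ReserveStock();
            scope.Complete(); // commit only if both steps succeeded
        }
        // Leaving the scope without Complete() rolls both steps back.
    }
}
```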

Frans
Thanks a lot.


 
I want to add a little bit to what Frans said.

Academically speaking, yes, it should work. But here's what happens: if you
work on multiple SqlConnection objects, then even if you close them, the
physical connections remain open (to keep things transactionally consistent,
and due to connection pooling) and keep resources locked for up to the
transaction timeout, which is 1 minute by default.

Now if you are using SQL Server 2000, your transaction gets promoted even if
there is only a single SqlConnection/database connection within your
transaction scope. So you end up locking rows at the Serializable isolation
level, which leads to lower performance. At least this will work, albeit at
much reduced performance, so you are better off implementing your own
transaction handling, much like what I recommend at the end of this message
anyway.
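For what it's worth, the Serializable default and the 1-minute timeout mentioned above can both be overridden explicitly via TransactionOptions; a short sketch:

```csharp
using System;
using System.Transactions;

class OptionsSketch
{
    static void Main()
    {
        TransactionOptions options = new TransactionOptions();
        options.IsolationLevel = IsolationLevel.ReadCommitted; // instead of the Serializable default
        options.Timeout = TimeSpan.FromSeconds(30);            // instead of the 1-minute default

        using (TransactionScope scope = new TransactionScope(
            TransactionScopeOption.Required, options))
        {
            // ... database work would go here ...
            scope.Complete();
        }
    }
}
```

This only addresses the isolation level and timeout, not the promotion behavior described below.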

But if you are using SQL Server 2005, your transaction gets promoted ONLY
WHEN a second RM enters the picture within the same transaction scope. So
essentially what will happen here is:

User1: Select <-- one connection, no promotion, ReadCommitted isolation
level
User2: Select <-- two connections
User1: Modify (disconnected)
User2: Insert <-- two or three connections (depending on pooling logic);
this insert runs into the command timeout; promotion occurs; Serializable
isolation level; MSDTC shows up.
User1: Update <-- three or four connections, and a deadlock occurs, followed
by deadlock resolution: User2's insert will time out, and User1's update
will be made the deadlock victim, since connection 3 came in and "attempted
to lock" before connection 4. Of course, #3 couldn't lock because either 1
or 2 had already locked the resources.

So the above is not a good implementation, right? Especially because in the
event of a conflict both parties fail, you get much reduced performance, and
frankly unpredictable behavior.

Thus, if you are wrapping multiple business objects within one single
transaction, and you are relying on SqlConnection's logic to tie all that
together, you need to be aware of what is going on underneath. Simply
wrapping stuff in a TransactionScope and hoping for the magic to happen ...
umm ... won't happen :). At least not yet.

So what is the solution? The solution is to implement your own RMs if you
want the TransactionScope paradigm to work. In your own RMs you can
implement whatever logic you want, and if I were you, I'd put standard
ADO.NET concurrency checks in there. This would be a volatile RM, since you
are talking about in-memory data: your business object, dataset, or
datatable. The database itself, being the durable resource, can be relied on
using a simple SqlTransaction, where you would need to break away from the
current transaction scope and prevent SqlConnection's auto-enlistment in the
process.
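For reference, a volatile RM is a class implementing System.Transactions.IEnlistmentNotification that enlists via Transaction.Current.EnlistVolatile. The sketch below uses a made-up in-memory value as the "resource"; the concurrency checks mentioned above would go in Prepare:

```csharp
using System;
using System.Transactions;

// Sketch of a volatile resource manager for in-memory state.
class InMemoryRm : IEnlistmentNotification
{
    private int committedValue;
    private int pendingValue;

    public int Value { get { return committedValue; } }

    public void SetValue(int value)
    {
        pendingValue = value;
        // Enlist as a volatile RM in the ambient transaction.
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        // Concurrency/validation checks would go here; vote to commit.
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        committedValue = pendingValue; // make the change durable in memory
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        pendingValue = committedValue; // discard the uncommitted change
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}

class Demo
{
    static void Main()
    {
        InMemoryRm rm = new InMemoryRm();
        using (TransactionScope scope = new TransactionScope())
        {
            rm.SetValue(42);
            scope.Complete(); // Commit() fires on the RM after this
        }
        Console.WriteLine(rm.Value);
    }
}
```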

Don't ask for a code sample LOL :) - it's just a question of time. I would
love to write one up especially for you, but I am writing a few samples for
a number of websites/magazines, so keep watching.

So, what do you do for now? Don't bite into Sys.Tx unless you fully
understand it. Chapter 11 of my book (Frans was a reviewer on it) explains
transactions, including Sys.Tx, in depth. I should have an article published
on codeguru.com and developer.com in a day or two explaining Sys.Tx and
writing your own volatile-enlistment RM. But I must encourage you to
understand how Sys.Tx works behind the scenes before blindly wrapping your
entire logic in TransactionScopes. If it were that easy, someone would make
a GUI tool written in VB6 to do it and we'd be out of a job :).

Wow long answer. :-)

- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx
 
Thanks for your reply, Sahil. You really thought it all through. But what is
an RM? I am not familiar with these abbreviations.

And in my case, I use object-oriented design. I have a data access class for
each database table, and an entity class mapping to each database table.
Then at the business logic level, I use these data access classes and entity
classes to access or update the database tables. So if I want to make
updates to 2 or more tables transactional, I need to put the transaction
control at the business logic level. And we should be able to make database
operations transactional at the business logic level. This is a basic
architecture issue. It will make our code more shareable, among lots of
other benefits. I don't think I need to list all of the benefits here.

But if you can explain what you mean by "The solution is to implement your
own RMs", I will understand your solution better.

Thanks a lot.
 
Hey dude :), my latest article is online now -
http://codebetter.com/blogs/sahil.malik/archive/2005/11/18/134844.aspx . I
suggest you read it.

Okay, regarding your question:

"So if I want to make it transactional in updating 2 or more tables, I need
to put the transaction control at the business logic level."

First of all:
A) Are you using SQL Server 2005?
B) Are you using distributed database transactions, or transactions
involving multiple SqlConnection objects?

If A and B are true - bad idea.
If A is true but B is false - good idea.
If A is false, and B is either true or false - good idea.

You are looking to leverage Sys.Tx for a better architecture - that is a
step in the right direction. But before you can leverage it for your
architecture, you do need to understand it thoroughly. An RM is a Resource
Manager - the dude that works with a transaction manager so that the
resources it manages are worked with in a transactional manner.

I'd suggest giving Sys.Tx a good read before settling on an architecture.

- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx
 