tricky locking problem with ado.net sql2000

wrxguru

Hi, I have an application where a group of tables are updated. One of
the tables (product) holds the latest product number. To find the
latest product # I just use a SqlDataReader, and when I find the latest #
I add one to it. I then insert the new row via a DataAdapter/DataSet. I then
go to execute the SqlDataReader again to find the next latest number and
find that I am locked out of the whole table no matter what isolation
level I use.

All of these updates are in a transaction because I want to back them
out if I strike any problems. Is it possible to read table rows
already updated/inserted in the same transaction?

regards Robin
 
Why not let the server do this for you? That's what identity columns are
for.
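
For illustration, a minimal sketch of that approach (the Product table, the
Name column, and the connection string are assumptions, not the poster's real
schema): an IDENTITY column lets SQL Server hand out the next number, and
SCOPE_IDENTITY() returns the value it just assigned.

using System;
using System.Data;
using System.Data.SqlClient;

class IdentityDemo
{
    static void Main()
    {
        // Assumed schema:
        //   CREATE TABLE Product (ProductID int IDENTITY(1,1) PRIMARY KEY, Name varchar(50))
        using (SqlConnection con = new SqlConnection("server=.;database=Shop;Integrated Security=SSPI"))
        {
            con.Open();
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO Product (Name) VALUES (@name); " +
                "SELECT CAST(SCOPE_IDENTITY() AS int)", con);
            cmd.Parameters.Add("@name", SqlDbType.VarChar, 50).Value = "Widget";
            int newProductId = (int)cmd.ExecuteScalar();   // the number the server just assigned
            Console.WriteLine(newProductId);
        }
    }
}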

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________

wrxguru said:
Hi, I have an application where a group of tables are updated. One of
the tables (product) holds the latest product number. To find the
latest product # I just use a SqlDataReader, and when I find the latest #
I add one to it. I then insert the new row via a DataAdapter/DataSet. I then
go to execute the SqlDataReader again to find the next latest number and
find that I am locked out of the whole table no matter what isolation
level I use.

All of these updates are in a transaction because I want to back them
out if I strike any problems. Is it possible to read table rows
already updated/inserted in the same transaction?

regards Robin
 
The server is designed to serialize the increment so that no other
transaction interferes. It was built with a boat-load of factors in mind
and can address this problem far better than hand-rolled key generation can.
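
Since the original post mentions inserting through a DataAdapter/DataSet, here
is one rough sketch of how the server-assigned number can flow back into the
DataSet automatically after Update(); the Product/ProductID/Name names and the
open connection con are assumptions:

// Assumes: using System.Data; using System.Data.SqlClient; and an open SqlConnection con.
SqlDataAdapter da = new SqlDataAdapter("SELECT ProductID, Name FROM Product", con);

SqlCommand ins = new SqlCommand(
    "INSERT INTO Product (Name) VALUES (@Name); " +
    "SELECT ProductID, Name FROM Product WHERE ProductID = SCOPE_IDENTITY()", con);
ins.Parameters.Add("@Name", SqlDbType.VarChar, 50, "Name");
// Copy the row returned by the batch (including the new ProductID) back into the DataSet row.
ins.UpdatedRowSource = UpdateRowSource.FirstReturnedRecord;
da.InsertCommand = ins;

DataSet ds = new DataSet();
da.Fill(ds, "Product");
ds.Tables["Product"].Rows.Add(new object[] { null, "Widget" });
da.Update(ds, "Product");   // afterwards the new row's ProductID holds the server-assigned value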

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________

wrxguru said:
Thanks for the reply, Bill, but how will they help in this situation?

Robin
 
I think I do understand identity columns and their usage; however, my
question remains: if I wish to see what has just been inserted into the
database within a transaction, am I able to do this within the same
transaction?

Robin
 
wrxguru said:
I think I do understand identity columns and their usage; however, my
question remains: if I wish to see what has just been inserted into the
database within a transaction, am I able to do this within the same
transaction?

Yes. You can always see your own changes. However, you can only see
them from inside the same connection; if you are using a second
connection, you cannot see the changes.

In Query Analyzer, each window is a separate connection. If in one window
you run

CREATE TABLE T (I int)
go
begin transaction
insert into T (I) values (1)

and then

select * from T

you will see the row.

If you run

select * from T

from another window, it will block until you commit or roll back your
transaction.
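
Roughly the same demo in ADO.NET, as a sketch (connString is a placeholder).
One detail worth noting: once BeginTransaction has been called on a
connection, every command executed on that connection must be handed the
SqlTransaction object, or SqlClient throws.

// Assumes: using System; using System.Data.SqlClient;
using (SqlConnection con = new SqlConnection(connString))
{
    con.Open();
    SqlTransaction tx = con.BeginTransaction();

    // Insert inside the transaction.
    new SqlCommand("INSERT INTO T (I) VALUES (1)", con, tx).ExecuteNonQuery();

    // Same connection, same transaction: the uncommitted row is visible here.
    SqlCommand select = new SqlCommand("SELECT I FROM T", con, tx);
    using (SqlDataReader rdr = select.ExecuteReader())
    {
        while (rdr.Read())
            Console.WriteLine(rdr.GetInt32(0));
    }   // close the reader before issuing more commands on this connection

    tx.Commit();   // or tx.Rollback() to back everything out
}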

BTW, this is only the first problem you will encounter when generating your
own incrementing keys, so you really should use an identity column.

David
 
Yes, you should be able to. Does your transaction update several tables? If
not, transactions are not needed.

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________

wrxguru said:
I think I do understand identity columns and their usage; however, my
question remains: if I wish to see what has just been inserted into the
database within a transaction, am I able to do this within the same
transaction?

Robin
 
Yes, my update spans several tables, and the problem is that I have inserted
records into the product table, but when I use another connection (via
a SqlDataReader, as I have general functions to look up, for example, the
latest item id) to read the product table, I get locked out
because the transaction has the whole table locked no matter what
isolation level I use. I thought another connection would at least be able
to read the table if the isolation level was read uncommitted?

robin
 
wrxguru said:
Yes, my update spans several tables, and the problem is that I have inserted
records into the product table, but when I use another connection (via
a SqlDataReader, as I have general functions to look up, for example, the
latest item id) to read the product table, I get locked out
because the transaction has the whole table locked no matter what
isolation level I use. I thought another connection would at least be able
to read the table if the isolation level was read uncommitted?

Looks like your real problem is that you are using multiple connections.
You need to restructure your DAL methods so they can all share a connection
for the duration of your transaction. You can read through the locks with
certain settings on your reading transaction, but you really don't want to.
What if another connection had made uncommitted changes to the data you are
querying?
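
A rough sketch of that restructuring; GetLatestProductId and the Product table
are made-up stand-ins for the poster's generic lookup functions. The idea is
that every DAL helper takes the caller's connection and transaction instead of
opening its own:

// Assumes: using System; using System.Data.SqlClient;
// Hypothetical lookup helper: it runs on the caller's connection/transaction,
// so it can see rows inserted earlier in that same transaction.
static int GetLatestProductId(SqlConnection con, SqlTransaction tx)
{
    SqlCommand cmd = new SqlCommand("SELECT MAX(ProductID) FROM Product", con, tx);
    object result = cmd.ExecuteScalar();
    return (result == DBNull.Value) ? 0 : (int)result;
}

// Caller owns the connection and the transaction:
//   SqlTransaction tx = con.BeginTransaction();
//   ... inserts executed with (con, tx) ...
//   int latest = GetLatestProductId(con, tx);   // sees the uncommitted inserts
//   tx.Commit();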

David
 
Just my opinion:
In most normal circumstances I would disagree with this.

Currently our commands will wait thirty seconds to time out, so the blocking
transaction must have been active for at least that long. That is an
outrageous amount of time to have a transaction open in production, and
you should take a hard look at whatever you are doing that is taking so long.
Make your update logic as tight as possible, open the connection as late as
possible, close it as soon as you can, and keep your transactions as atomic
as feasible.

If your scenario is not a "normal" production scenario, that is, if you are
doing admin work or bulk-copying large amounts of data in pre-production,
take a look at the NOLOCK T-SQL hint. You can also use the ReadUncommitted
isolation level, but not the way you tried it. ReadUncommitted does not
change the locks your writing transaction takes; what makes it special is
that a transaction reading at that level ignores locks placed by other
transactions. So if I lock a table and update values in one transaction, I
can, from a different connection, start a ReadUncommitted transaction and
look at the values updated by the first transaction. This is not something
I would use in production.
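
Roughly what that looks like from the reading side, as a sketch (Product and
connString are placeholders; as noted above, not something to reach for in
production):

// Assumes: using System.Data; using System.Data.SqlClient;
// The dirty-read setting goes on the second (reading) connection, not on the writer.
using (SqlConnection con2 = new SqlConnection(connString))
{
    con2.Open();
    SqlTransaction tx2 = con2.BeginTransaction(IsolationLevel.ReadUncommitted);
    SqlCommand cmd = new SqlCommand("SELECT * FROM Product", con2, tx2);
    // Per-query alternative without changing the isolation level:
    //   SELECT * FROM Product WITH (NOLOCK)
    using (SqlDataReader rdr = cmd.ExecuteReader())
    {
        while (rdr.Read())
        {
            // Rows come back even though another transaction still holds locks on them.
        }
    }
    tx2.Commit();
}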

Happy programming.
 
Angel Saenz-Badillos said:
Just my opinion:
In most normal circumstances I would disagree with this.

Currently our commands will wait thirty seconds to time out, so the blocking
transaction must have been active for at least that long. That is an
outrageous amount of time to have a transaction open in production, and
you should take a hard look at whatever you are doing that is taking so long.
Make your update logic as tight as possible, open the connection as late as
possible, close it as soon as you can, and keep your transactions as atomic
as feasible.

The poster is using two different connections on the same thread. First he
begins a transaction and changes data on one connection. Then he tries to
read the changed table from another connection, but on the _same_ thread.

like this:

SqlTransaction tx = con1.BeginTransaction();
new SqlCommand("UPDATE T SET I = I + 1", con1, tx).ExecuteNonQuery();
new SqlCommand("SELECT * FROM T", con2).ExecuteReader();   // blocks waiting on con1's locks
tx.Commit();

The query times out because it's waiting on a transaction that won't be
committed until after the query completes!

David
 