Hello, and thanks for your reply.
There is no single answer. It depends on your business requirements, the type
of concurrency chosen (optimistic or pessimistic), and the database (Oracle,
MS SQL Server, etc.).
I'm on SQL Server 2000, and I need a pessimistic approach because the field I
update is critical (it is the customer's available credit, which I must
decrement or increment whenever I provide a service).
But from your description .
If you have a lot of threads that run this transaction, I don't see any
reason to use optimistic concurrency.
In a pessimistic concurrency scheme in SQL Server, an isolation level of
Read Committed is enough for what you are doing;
That's good. So I don't need a serializable transaction?
just read the row while placing an
update lock on it (the UPDLOCK locking hint in SQL Server) and then call the
update. Or you can use the Repeatable Read isolation level and not bother
with locking hints. The first approach is more scalable.
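If I understand the suggestion, the first approach would look something like this sketch (the table and column names `Customers`, `AvailableCredit`, and `CustomerId` are invented for illustration; substitute your own schema):

```sql
-- Sketch of the suggested Read Committed + UPDLOCK pattern.
DECLARE @credit money;

BEGIN TRANSACTION;

-- UPDLOCK takes an update lock on the row at read time, so a second
-- transaction reading the same row blocks here rather than later.
SELECT @credit = AvailableCredit
FROM   Customers WITH (UPDLOCK)
WHERE  CustomerId = @customerId;

UPDATE Customers
SET    AvailableCredit = AvailableCredit - @amount
WHERE  CustomerId = @customerId;

COMMIT TRANSACTION;
```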
I was trying to use ADO.NET's transaction support, so I defined a Repeatable
Read transaction, started it, read the value, updated the row, and committed
the transaction. This seems OK: I believe I have data integrity and my field
is updated without error. But with only 5 concurrent requests (I ran 5
threads to see how the solution behaves), I get a lot of deadlocks reported
by SQL Server. Why? My transaction only reads a row, updates it, inserts a
new row into another table, and commits. I can understand that the second
and subsequent transactions wait for the first to commit, and I could
understand the occasional timeout report (though not with only 5 concurrent
requests and such a quick transaction), but I cannot understand why a
deadlock occurs.
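For reference, the T-SQL equivalent of what my ADO.NET code executes is roughly the following (again, the table and column names are invented for illustration):

```sql
-- Rough T-SQL equivalent of the ADO.NET transaction described above.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

DECLARE @credit money;

BEGIN TRANSACTION;

-- Under Repeatable Read, the shared lock taken by this read is held
-- until the transaction commits.
SELECT @credit = AvailableCredit
FROM   Customers
WHERE  CustomerId = @customerId;

UPDATE Customers
SET    AvailableCredit = @credit - @amount
WHERE  CustomerId = @customerId;

-- "ServiceLog" stands in for the second table my transaction inserts into.
INSERT INTO ServiceLog (CustomerId, Amount)
VALUES (@customerId, @amount);

COMMIT TRANSACTION;
```

Running 5 threads against this pattern is what produces the deadlocks I described.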
Is there any way I can make this work without deadlocks?
Thanks.