Lock Table

Geoff Murley

In a SQL Server 2000 database I have a table which is
shared by many users. I have a VB.NET procedure which
contains many calls to stored procedures and ADO.NET
commands to clear and update this shared table. What I
would like to do at the start of this lengthy procedure
is to place an exclusive lock on the table until the whole
procedure has finished running all its separate update
statements; then I will unlock it. If another user runs
the same procedure while the table is locked by the first,
I would like a message displayed saying "The table is in
use, please try again later." Is there an easy way to
achieve all of this? It would be difficult to put all the
update commands in the same transaction, as different
connections are used.
 
Not sure why different connections are used. Is your data distributed over
multiple databases? If so, consider setting up a linked server. Then you can
put it all in one transaction.

---

Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 
The reason I have different connections is that I have
a DataReader using one connection, and within its While Read()
loop I have various update commands firing which use a
different connection to the same database. When I tried
using the same connection for both the DataReader and the
update commands, I got an error message saying that the
same connection could not be used.
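
(Roughly, that pattern looks like the sketch below; the connection string, table and column names are made up purely for illustration. On SQL Server 2000 with ADO.NET 1.x a single connection cannot serve an open DataReader and other commands at the same time, which is why the second connection is needed.)

using System.Data;
using System.Data.SqlClient;

// Hypothetical connection string and table, for illustration only.
string connStr = "Server=.;Database=MyDb;Integrated Security=SSPI;";

SqlConnection readConn = new SqlConnection(connStr);    // feeds the DataReader
SqlConnection writeConn = new SqlConnection(connStr);   // carries the updates
readConn.Open();
writeConn.Open();

SqlCommand readCmd = new SqlCommand(
    "SELECT Id FROM MyTable WHERE Processed = 0", readConn);
SqlCommand updateCmd = new SqlCommand(
    "UPDATE MyTable SET Processed = 1 WHERE Id = @Id", writeConn);
updateCmd.Parameters.Add("@Id", SqlDbType.Int);

SqlDataReader reader = readCmd.ExecuteReader();
while (reader.Read())
{
    // The open reader ties up readConn, so each update goes through writeConn.
    updateCmd.Parameters["@Id"].Value = reader.GetInt32(0);
    updateCmd.ExecuteNonQuery();
}
reader.Close();
readConn.Close();
writeConn.Close();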

I was just wondering whether in VB.NET there is a
statement that could be placed at the start of my
procedure saying Lock(myTable) exclusively,
and at the end of my procedure saying UnLock(myTable),
or must I use transactions?

If an update statement were issued against a table already
locked exclusively by a transaction, would this cause an
exception to be raised (and if so, which type?) that I
could handle by showing a message box telling the user to
try again later?
 
Depending on how often you run this procedure, you might be building an
application that simply won't scale--at least not very far. Sure, you can
lock a table by creating a Transaction with Repeatable Read attributes and
executing a SELECT from within the transaction that encompasses all rows.
This does not prevent others from seeing the data in the table--they are
simply prevented from changing it.
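
(For illustration, a rough ADO.NET sketch of that approach; the connection string and table name are hypothetical. The SELECT issued under Repeatable Read holds its shared locks until the transaction ends, so other sessions can still read the rows, but their updates will block. Note that Repeatable Read by itself does not stop new rows from being inserted.)

using System.Data;
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=SSPI;");
conn.Open();

SqlTransaction tx = conn.BeginTransaction(IsolationLevel.RepeatableRead);
try
{
    // Touch every row; the shared locks taken here are held until Commit/Rollback,
    // so other connections can SELECT these rows but cannot UPDATE or DELETE them.
    SqlCommand freeze = new SqlCommand("SELECT KeyCol FROM MyTable", conn, tx);
    SqlDataReader r = freeze.ExecuteReader();
    while (r.Read()) { /* just read every row */ }
    r.Close();

    // ... run the long series of updates here, on this same connection and transaction ...

    tx.Commit();
}
catch
{
    tx.Rollback();
    throw;
}
finally
{
    conn.Close();
}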

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
That's exactly what I am looking for.

Here's my situation.

I'm transferring data from two DB2 tables to SQL Server.

The second table has a 1:n relation to the first on a primary key.

So, while I'm transferring the child table, I wouldn't want anyone to be
adding additional records, to maintain relational integrity.

I would want to /freeze/ the existing data, then delete it, then allow
additional data entry on the DB2 side.




--
incognito...updated almost daily
http://kentpsychedelic.blogspot.com

Texeme Textcasting Technology
http://texeme.com
 
William said:
Depending on how often you run this procedure, you might be building an
application that simply won't scale--at least not very far. Sure, you can
lock a table by creating a Transaction with Repeatable Read attributes and
executing a SELECT from within the transaction that encompasses all rows.
This does not prevent others from seeing the data in the table--they are
simply prevented from changing it.

But can they add additional rows (INSERT), or are they just prevented from
changing existing ones (UPDATE)?

--
incognito...updated almost daily
http://kentpsychedelic.blogspot.com

Texeme Textcasting Technology
http://texeme.com
 
I think you're worrying about something that won't happen. If the PK on the
target table is autogenerated (an Identity column), then as new rows are added
by your import routine, rows added by other threads won't get the same ID
values--that's guaranteed by SQL Server. Another approach is to use a GUID as
the PK. That way there can never be a collision.
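
(A small sketch of both options; the table and column names are hypothetical. SCOPE_IDENTITY() hands each caller back the value generated for its own insert, and a client-generated GUID key never needs a round trip at all.)

using System;
using System.Data;
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=SSPI;");
conn.Open();

// Identity PK: let SQL Server assign the key and read back the value for *this* insert.
SqlCommand insert = new SqlCommand(
    "INSERT INTO ImportTarget (Payload) VALUES (@Payload); " +
    "SELECT CAST(SCOPE_IDENTITY() AS int);", conn);
insert.Parameters.Add("@Payload", SqlDbType.VarChar, 50).Value = "row from DB2";
int newId = (int)insert.ExecuteScalar();   // other threads' inserts get different IDs

// GUID PK: generate the key on the client, so two importers can never collide.
Guid newKey = Guid.NewGuid();

conn.Close();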

hth

--
____________________________________
William (Bill) Vaughn
Author, Mentor, Consultant
Microsoft MVP
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Here's the conclusion I came to.

The source machine should only add records inside a transaction block,
and I should only add records to the target inside a transaction block.

So, for two tables with a 1:n relationship sharing a PK, the person
adding data on the source machine should not be able to commit a new
record to the parent table until the child records are added.

Then I implemented the same logic in my C# program.

I set up a SqlTransaction. I insert the parent record in the target,
then I add the child records. I don't commit until all the child
records are added. This ensures that someone reading the parent
records will not get an incomplete set of child records.

My code looks like:

// _connBlock wraps connection creation; makeSQLConnection() returns an open SqlConnection.
sql400conn _connBlock = new sql400conn();
SqlConnection sqlconn = _connBlock.makeSQLConnection();

// Stored procedure that inserts the parent row and returns its new key.
SqlCommand cmd = new SqlCommand("TriggerQueue_INSERT0", sqlconn);
cmd.CommandType = CommandType.StoredProcedure;

cmd.Parameters.Add("@TriggerID", SqlDbType.Int, 4);
cmd.Parameters.Add("@TriggerQueueID", SqlDbType.Int, 4);
cmd.Parameters["@TriggerQueueID"].Direction = ParameterDirection.Output;

SqlTransaction trans = null;

// Read the queue rows from the AS/400 source table.
iDB2DataReader iread = getAS400Table("MANLIB.TRGQUEL1");

while (iread.Read())
{
    cmd.Parameters["@TriggerID"].Value = Convert.ToInt32(iread["TQTRIGID"].ToString());

    try
    {
        // Each source row is processed inside its own transaction on the target.
        trans = sqlconn.BeginTransaction(IsolationLevel.ReadCommitted);
        cmd.Transaction = trans;
        int confirm = cmd.ExecuteNonQuery();

        updateSQLTriggerQueueParameter(
            Convert.ToInt32(iread["TQQUEID"].ToString()),
            (int)cmd.Parameters["@TriggerQueueID"].Value);

        trans.Commit();
        trans = null;   // committed; nothing left to roll back

        // Only remove the source row once the target commit has succeeded.
        deleteAS400Queue(Convert.ToInt32(iread["TQQUEID"].ToString()));
    }
    catch (Exception e)
    {
        Debug.WriteLine(e.ToString());
        if (trans != null)
            trans.Rollback();   // roll back only if the transaction is still open
    }
}

cmd.Dispose();
iread.Close();
sqlconn.Close();



--
incognito...updated almost daily
http://kentpsychedelic.blogspot.com

Texeme Textcasting Technology
http://texeme.com
 