When using a transaction, the stored procedure takes 10 times longer -- why?

Chung Tsen

Using C#.NET and SQL Server 2000

When we execute a stored procedure with 14 updates / 8 inserts, wrapped in a
SQLTransaction from within our C# application, it takes about 10 minutes to
execute.

If we take out the SQLTransaction, it takes about 20~30 seconds to
execute.

The stored procedure is very simple: just a series of
updates/selects/inserts, one after the next.

Here is a small snippet of the code with the transaction:

cn.Open();
SqlTransaction SQLTrans = cn.BeginTransaction();
SqlCommand dc = new SqlCommand(sCmdText, cn);
dc.CommandType = CommandType.Text;
dc.Transaction = SQLTrans;
return dc.ExecuteNonQuery();
cn.Close();

We know transactions require the server to do more work in case of failure,
but it shouldn't be this much slower, right? If anyone could tell us why it
becomes so much slower when using SQLTransaction, and suggest possible
solutions, it'd be a great help!

Thanks in advance,

Chung
 
You're right, on a system with only one user, transactions
may actually increase the speed of the application, since
the transaction information is only written to disk (the
transaction log file) when the commit takes place, rather
than on each individual insert/update.

However, I suspect that the problem is being caused by the
isolation level of the transaction. When a transaction is
referred to as Isolated it means that it does not depend
on other transactions - it behaves as if it is the only
transaction being carried out (whereas other transactions
may actually be executing concurrently).

To achieve isolation, data is locked from the time it is
modified to when the transaction is committed or rolled
back. Hence, if you have many concurrent executions of
your stored procedure from multiple users, each one will
be serialised to some degree with respect to the others,
and each will therefore take longer. For example, if each
set of operations normally takes 30 seconds and there are
20 users trying to do this concurrently, then the last one
will take up to 30x20=600 seconds or 10 minutes to
complete.

You have a number of options open to you to improve the
performance:

1) Remove the transaction. This will compromise data
integrity, since updates/inserts/selects from different
users will be interleaved, and you will also lose any
chance to roll back changes in the event of an error.

2) Change the isolation level (the default is Read
Committed) to Read Uncommitted. This keeps the ability to
roll back the transaction, but effectively tells SQL Server
to ignore locks on data, so it will still jeopardise data
integrity during concurrent transactions. You specify the
isolation level as a parameter to BeginTransaction() --
see the sketch at the end of this list.

3) Look at your queries and identify why they are taking
as long as 20-30 seconds in the first place (very long for
such simple operations). Perhaps you need more (or fewer)
indexes on critical tables. SQL Server has an Index
Tuning Wizard which will identify what indexes are
required on your tables for best performance. Judicious
tuning may get the time for the stored procedure down to a
fraction of a second, in which case, even with transactions
and isolation, you may still get faster overall
performance.
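
If it helps, here's a rough sketch of option 2 (not
tested), reusing the cn, sCmdText and SQLTrans names from
your snippet; the key is the BeginTransaction() overload
that takes an IsolationLevel:

SqlTransaction SQLTrans =
    cn.BeginTransaction(IsolationLevel.ReadUncommitted);
try
{
    SqlCommand dc = new SqlCommand(sCmdText, cn, SQLTrans);
    dc.CommandType = CommandType.Text;
    dc.ExecuteNonQuery();
    SQLTrans.Commit();   // changes become permanent here
}
catch
{
    SQLTrans.Rollback(); // undo the updates/inserts on error
    throw;
}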

Neil.
 
It appears from the code provided that the queries are SQL strings
including literal values -- no parameters. It may help performance
considerably to turn them into parameterized queries. It may help even more
to turn them into stored procedures.

--Bob
 
It appears from the code provided that the queries are SQL strings
including literal values -- no parameters. It may help performance
considerably to turn them into parameterized queries. It may help even more
to turn them into stored procedures.

Why do you get a performance gain by doing that?

Regards
Anders
 
Hi Anders,

Because SQL Server has to compile each unique command it receives. If a SQL
command changes every time you send it to the back end because it contains
different data values, then a *lot* more compiles take place. On the other
hand, if you send parameterized queries, the parameter names never change,
and SQL Server can substitute the parameter values without doing a
recompile.
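
For example, something along these lines -- just a sketch, with made-up table
and column names (Orders, Status, OrderId) and reusing cn/SQLTrans from the
earlier snippet:

SqlCommand cmd = new SqlCommand(
    "UPDATE Orders SET Status = @Status WHERE OrderId = @OrderId", cn);
cmd.Transaction = SQLTrans;
cmd.Parameters.Add("@Status", SqlDbType.VarChar, 20).Value = "Shipped";
cmd.Parameters.Add("@OrderId", SqlDbType.Int).Value = 42;
// The command text never changes from call to call, so SQL Server can reuse
// the cached plan; only the parameter values differ.
cmd.ExecuteNonQuery();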

In the case of stored procedures, not only is the interpretation of the SQL
command(s) and the associated execution plan precompiled, but so are all the
statements within the sp (with certain exceptions). And while
parameterized queries that are sent to SQL Server may "fall out of the
cache" depending on how many different commands SQL Server has to track, an
SP "stays" compiled until someone edits it.

--Bob
 
Chung,

Just a shot in the dark ... but I have seen mysterious stuff like this
happen if your statistics are bad. Try rebuilding the statistics on the
tables involved.

True, it shouldn't matter about the transaction, but maybe the transaction
causes the optimizer to take a different path to the solution for some
reason, and that path is working from bad assumptions.

--Bob
 
Nevermind about my reply regarding statistics -- I replied to the wrong
message.

--Bob
 