SP in Typed Dataset Timeout vs Management Studio Good Performance

  • Thread starter: Jason Wilson
Jason Wilson

I've never run into this before and have burned two days trying to
figure it out.

I have a relatively performant stored procedure that runs in less than
a second when called from Management Studio, but when run from a typed
dataset in an ASP.NET page it sits for 30 seconds and then throws a
SQL timeout exception.

SQL traces of both look exactly the same -- even with every option
SQL Profiler has to offer turned on.

I am using the same account in both instances and as best I can tell
the connection properties are set the same.

There are numerous writes to the tables in the query, but the reads
are all dirty (NOLOCK hint). SP_WHO2 doesn't show any blocking
issues, and server resource utilization is low. Besides, if it were a
utilization or contention issue, it would affect both instances.

I am really at a loss...
 
Jason said:
I have a relatively performant stored procedure that runs in less than
a second when called from Management Studio, but when run from a typed
dataset in an ASP.NET page it sits for 30 seconds and then throws a
SQL timeout exception.

Check the protocols installed on the server. In general, you are going
to get better performance from TCP/IP in most "my web server is on one
box and SQL is on another" scenarios. In many instances, simply opening
a protocol on SQL Server solves the problem.
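In case it helps, forcing TCP/IP is mostly a client-side change (the
server and database names below are placeholders, not from the thread):

```sql
-- The client setting: prefix the server name with tcp: in the ADO.NET
-- connection string to force the TCP/IP protocol rather than letting
-- the client fall back to named pipes, e.g.
--
--   Server=tcp:MySqlBox,1433;Database=Interface_Support;Integrated Security=SSPI;
--
-- On the server side you can verify which transport each session
-- actually negotiated (SQL Server 2005 and later):
SELECT session_id, net_transport, auth_scheme
FROM sys.dm_exec_connections;
```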

If this is SQL Server 2005 or greater, turning on the SQL Browser
service can help a ton. In some instances, nothing works without it, so
you may already have this on. Some of the service packs for SQL Server
disable this service if you have it running when you apply the service
pack.

The reason this problem happens, when it is what I have described
above, is that the web server is taking more time negotiating the
connection than it has to negotiate AND run the command. You can
increase the connection and command timeouts (the 30-second wait
matches ADO.NET's default command timeout) and "solve" the lack-of-data
problem, but you have not solved the "slow" problem.

That is where I would look first.

Peace and Grace,

--
Gregory A. Beamer (MVP)

Twitter: @gbworld
Blog: http://gregorybeamer.spaces.live.com

*******************************************
| Think outside the box! |
*******************************************
 

I think it must be statistics related. Updating the statistics tends to
clear up the problem -- not sure why it runs fast in SSMS and not
in .NET. I'm not sure that this solution isn't a red herring though.
 
Exactly the same issue we are facing. In Management studio it executes in
less than a second (over internet) and from our .NET code, it throws up a
timeout error (local network).

Please let me know if you come across a solution that works. Thanks.
 
Sriram said:
Exactly the same issue we are facing. In Management studio it executes in
less than a second (over internet) and from our .NET code, it throws up a
timeout error (local network).

Please let me know if you come across a solution that works. Thanks.

This is not an uncommon situation, and the seemingly bizarre difference
usually has its explanation.

To start with, in SSMS run the command "SET ARITHABORT OFF", and then
run your stored procedure again. Chances are that it will now run
slowly.

It is not that this command itself has any importance, but different
settings of ARITHABORT result in different cache entries. .Net
applications usually run with ARITHABORT OFF, while SSMS runs with it
ON.
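A quick way to see this in action (a sketch; "yourprocedure" is a
stand-in for your own proc name) is to compare the SET options stored
with each cached plan:

```sql
-- Each combination of SET options gets its own cache entry, so the
-- .NET app (ARITHABORT OFF) and SSMS (ARITHABORT ON) can each hold a
-- different plan for the same procedure. Inspect the set_options
-- attribute of the cached plans (SQL Server 2005 and later):
SELECT st.text, pa.value AS set_options
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE '%yourprocedure%';   -- replace with your proc name
```

Two rows with different set_options values for the same procedure is
exactly the "fast in SSMS, slow from .NET" pattern.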

Each time SQL Server runs a stored procedure for which there is no
cached plan, the optimizer has to build a plan, and it sniffs the
current input parameters to determine what is a good plan. If these
parameters are atypical, this can lead to bad performance for further
executions.

Assuming that the execution in SSMS was slow with ARITHABORT OFF, view
the execution plan, and open it as XML. At the bottom of the plan
you will find the sniffed parameter values. This can give you some idea
of where they come from. If you now say:

EXEC sp_recompile yourprocedure

I would expect the .Net application to run quickly.

I should add that there are more possible explanations, and if the
above does not work out, please give more details on what you actually
are doing, and also specify which version of SQL Server you are using.

--
Erland Sommarskog, SQL Server MVP, (e-mail address removed)

Links for SQL Server Books Online:
SQL 2008: http://msdn.microsoft.com/en-us/sqlserver/cc514207.aspx
SQL 2005: http://msdn.microsoft.com/en-us/sqlserver/bb895970.aspx
SQL 2000: http://www.microsoft.com/sql/prodinfo/previousversions/books.mspx
 
Jason said:
I think it must be statistics related. Updating the statistics tends to
clear up the problem -- not sure why it runs fast in SSMS and not
in .NET. I'm not sure that this solution isn't a red herring though.

As strange as it sounds, that is not that strange. While the SQL may be
the same, the method of getting the SQL to the engine can change
things. Not sure why. Glad you found a solution. Hope it is not a red
herring.

Peace and Grace,

--
Gregory A. Beamer (MVP)

Twitter: @gbworld
Blog: http://gregorybeamer.spaces.live.com

*******************************************
| Think outside the box! |
*******************************************
 

Well sp_Updatestats consistently corrects the issue, but unless I want
to run it several times a day I need to find another solution. I
am running VS2008 with .NET FW 3.5 and SQL Server 2005 Standard. One
of the tables queried sees many updates per day. The SP (below)
has a subtree cost under 5. I can get it under 1 with more robust
indexing, but I don't want to kill write performance:

USE [Interface_Support]
GO
/****** Object: StoredProcedure [dbo].[pr_SearchMessages] Script Date: 12/14/2009 08:16:04 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[pr_SearchMessages]
    @MessageType varchar(10) = '%'
    , @MessageSubType varchar(10) = '%'
    , @PatientLastName varchar(35) = '%'
    , @ExamCode varchar(10) = '%'
    , @PatientID int = 0
    , @ExamID int = 0
    , @Order varchar(50) = '%'
    , @DateRecorded varchar(20) = '1/1/2001'
    , @Org varchar(50) = '%'
    , @Event varchar(50) = '%'
AS
BEGIN -- Proc

    SET NOCOUNT ON;

    -- Set Date Range
    DECLARE @BEGIN datetime
    DECLARE @END datetime
    IF ISDATE(@DateRecorded) = 1 AND @DateRecorded <> '1/1/2001'
    BEGIN
        SET @DateRecorded = CONVERT(varchar(20), CAST(@DateRecorded AS datetime), 101)
        SET @BEGIN = CAST(@DateRecorded AS datetime)
        SET @END = CAST(@DateRecorded + ' 23:59' AS datetime)
    END
    ELSE
    BEGIN
        SET @BEGIN = GETDATE()
        SET @END = GETDATE()
        SET @DateRecorded = '*'
    END

    -- Conduct Query
    SELECT MessageLogID
        , DateRecorded
        , Interface
        , InterfaceDescription
        , OrganizationCode
        , OrganizationName
        , PatientID
        , PatientLastName + ', ' + PatientFirstName [Patient]
        , MessageType
        , MessageTypeCode
        , MessageSubType
        , MessageSubTypeCode
        , ExamIdentifierInternal [Accession]
        , ExamIdentifierExternal [Order]
        , ExamID
        , ExamCode
        , ExamName
        , EventType
        , EventNote
        , CASE
              WHEN [Message] IS NULL THEN NULL
              ELSE 'App_Themes/mail.png'
          END [Message]
    FROM vw_MessageLogExpanded WITH (NOLOCK)
    WHERE (
            (MessageType LIKE '%' + @MessageType + '%')
        AND (MessageSubType LIKE '%' + @MessageSubType + '%')
        AND (PatientLastName LIKE '%' + @PatientLastName + '%')
        AND (ExamCode LIKE '%' + @ExamCode + '%')
        AND (OrganizationCode LIKE '%' + @Org + '%')
        AND (EventType LIKE '%' + @Event + '%')
        AND (ExamIdentifierInternal LIKE '%' + @Order + '%'
             OR ExamIdentifierExternal LIKE '%' + @Order + '%')
          )
      AND (PatientID = @PatientID OR @PatientID = 0)
      AND (ExamID = @ExamID OR @ExamID = 0)
      AND (DateRecorded BETWEEN @BEGIN AND @END OR @DateRecorded = '*')
    ORDER BY MessageLogID

END -- Proc
 
Jason said:
Well sp_Updatestats consistently corrects the issue, but unless I want
to run it several times a day then I need to find another solution.

You need to find out why the stats are getting thrown out of whack. The
only thing I see in the sproc that could be affecting anything is the
dates, and that should not be an issue.

If necessary, you can add "hints" to the stats to help tune things. Here
is a short article on theory:
http://snurl.com/tooci

And another that describes stats (you probably know much of this, as you
are resetting the stats already):
http://snurl.com/toofl

Other options are setting up additional indexes on the table for the
query(s). This will not solve the stats issue, per se, but will make it
quicker to get an answer. If you are already "over indexed", you can set
up a view (esp. if you are not updating on the page) and index it. These
solutions may be just masking the problem, however.

If these don't work, you can turn off autostats and then manually create
the stats you need. This is not always a one time solution, nor is it
one I would suggest for the newb. It is better than autostats creating
bad stats on a daily basis, however.

One way to figure out the stats is to run DBCC SHOW_STATISTICS
(table_name, index_name) after you have recalculated and are sure they
are correct. You can then manually create the same stats.

DBCC SHOW_STATISTICS (table_name , index_name)

is very useful to see how the stats are degrading and may give you some
clues why they are degrading so you can fix them.
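A minimal sketch of that manual approach (the table, column, index, and
statistic names below are purely illustrative, not from the thread):

```sql
-- Inspect the current statistics on an index to see how they degrade:
DBCC SHOW_STATISTICS ('dbo.MessageLog', 'IX_MessageLog_DateRecorded');

-- Create a manual statistic with a full scan, and use NORECOMPUTE so
-- SQL Server does not auto-recalculate it from a small sample behind
-- your back:
CREATE STATISTICS st_DateRecorded
    ON dbo.MessageLog (DateRecorded)
    WITH FULLSCAN, NORECOMPUTE;

-- Refresh it on your own schedule (e.g. a nightly job) instead of
-- running sp_updatestats by hand several times a day:
UPDATE STATISTICS dbo.MessageLog st_DateRecorded WITH FULLSCAN;
```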

Sorry for being so general, but I see nothing in the stored proc that
jumped out at me at first glance.

Peace and Grace,

--
Gregory A. Beamer (MVP)

Twitter: @gbworld
Blog: http://gregorybeamer.spaces.live.com

*******************************************
| Think outside the box! |
*******************************************
 
Jason said:
Well sp_Updatestats consistently corrects the issue, but unless I want
to run it several times a day I need to find another solution. I
am running VS2008 with .NET FW 3.5 and SQL Server 2005 Standard. One
of the tables queried sees many updates per day. The SP (below)
has a subtree cost under 5. I can get it under 1 with more robust
indexing, but I don't want to kill write performance:

Having seen the procedure, I am not surprised that it runs slowly. I am
more surprised that it ever runs fast. A quick fix may be to add
OPTION (RECOMPILE) at the end, so that you get a new query plan
every time. With that query, there is no cached plan that will be
good for you.
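Concretely, that is a one-line change at the tail of the procedure's
query (sketch; only the end of the SELECT is shown):

```sql
    -- ...rest of the WHERE clause as in the proc above...
    AND (DateRecorded BETWEEN @BEGIN AND @END OR @DateRecorded = '*')
    ORDER BY MessageLogID
    OPTION (RECOMPILE)   -- compile a fresh plan for each execution,
                         -- so no sniffed plan is ever reused

END -- Proc
```

The trade-off is a compile on every call, which is usually acceptable
for a search screen like this one.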

You may also be interested in my article about dynamic search
conditions, where I discuss several options to slay this kind of beast:
http://www.sommarskog.se/dyn-search-2005.html



--
Erland Sommarskog, SQL Server MVP, (e-mail address removed)

 
As probably mentioned, google (errrr "bing") "Parameter Sniffing" "Sql
Server".

........

One workaround (OPTION (RECOMPILE)) was mentioned, but I didn't see
this other workaround:

create procedure dbo.uspThisRocks
( @SomeParameter int )
AS
begin
    -- copy the parameter into a local variable; the optimizer cannot
    -- sniff a local variable's value at compile time
    declare @SomeParameterLocal int
    select @SomeParameterLocal = @SomeParameter

    -- (SomeTable / SomeColumn are placeholders for your own query)
    select *
    from dbo.SomeTable
    where SomeColumn = @SomeParameterLocal
end
GO

Then refer to @SomeParameterLocal in the body of the proc instead of
the parameter name from the proc definition.


You should do a little bit of research (via the Google search mentioned
above) to figure out why this works.
 