henry.lee.jr
Quick question, all:
I have a bad habit where I still write my SQL command code like so:
string sql = Properties.Settings.Default.PROC_MyProc;
sql += " @Param1='" + param1 + "'";
sql += ", @Param2=" + param2.ToString();
sql += ", @Param3='" + param3 + "'";
sql += ", @Param4=" + param4.ToString();
if ([some condition]) { sql += ", @Param5='" + param5 + "'"; }
// Execute sql
I always have a library function that handles the special cases
(strings containing a single quote, and so on), but the main reason I
do this is supportability. I realize plenty of articles warn against
this old-fashioned approach, but (A) it has never once caused me a
problem, and (B) it may even sidestep some of the issues I describe below.
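For what it's worth, the escaping helper is roughly along these lines
(a simplified sketch; the real one covers a few more cases, and the
method name here is just illustrative):

static string SqlLiteral(string value)
{
    // Double any embedded single quotes, then wrap the value in quotes.
    // (Simplified: the real helper also deals with nulls, dates, etc.)
    if (value == null)
    {
        return "NULL";
    }
    return "'" + value.Replace("'", "''") + "'";
}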
I've had trouble with the Parameter objects in the past. In some
scenarios, when the parameter list got long, the command would just
fail with an exception. The best explanation my research turned up was
that with a high number of parameters, SQL Server and ADO.NET
sometimes "just couldn't handle it". We confirmed this by removing
some parameters, and to rule out a bad parameter at the end of the
list, we even shuffled the parameter order; it seemed to be the count
that mattered, not the content or the types. That concerned me quite a
bit (though perhaps more recent versions of SQL Server and ADO.NET
have addressed it). We're not talking hundreds of parameters either,
more like 20 or so. This was .NET 2.0 and SQL Server 2000.
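For context, the parameterized version that gave us trouble looked
more or less like this (a sketch from memory using System.Data.SqlClient,
with placeholder names rather than the exact code):

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("MyProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Param1", SqlDbType.VarChar, 50).Value = param1;
    cmd.Parameters.Add("@Param2", SqlDbType.Int).Value = param2;
    cmd.Parameters.Add("@Param3", SqlDbType.VarChar, 50).Value = param3;
    cmd.Parameters.Add("@Param4", SqlDbType.Int).Value = param4;
    // ...and so on for the remaining parameters
    conn.Open();
    cmd.ExecuteNonQuery();
}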
The other reason is so I can easily dump the exact SQL and run it in
Query Analyzer to see why it might be failing. My question is a simple
one: is there a way to see exactly what SQL the parameterized command
object is trying to execute, so you can test it against SQL Server
yourself? If not, I really dislike the black-box aspect, where you
can't see what M$ may be doing wrong behind the scenes.
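To make it concrete, something along these lines is what I'm hoping
for, whether built in or hand-rolled: a way to turn the command and
its parameters back into a statement I can paste into Query Analyzer.
This is only a hypothetical helper to show what I mean (it assumes
System.Data.SqlClient and System.Text, and glosses over dates, output
parameters, and other types):

static string DumpCommand(SqlCommand cmd)
{
    // Rebuild an EXEC statement from the command text and its parameters.
    StringBuilder sb = new StringBuilder();
    sb.Append("EXEC ").Append(cmd.CommandText);
    for (int i = 0; i < cmd.Parameters.Count; i++)
    {
        SqlParameter p = cmd.Parameters[i];
        sb.Append(i == 0 ? " " : ", ");
        sb.Append(p.ParameterName).Append("=");
        if (p.Value == null || p.Value == DBNull.Value)
        {
            sb.Append("NULL");
        }
        else if (p.Value is string)
        {
            // Quote and escape strings so the output runs as-is.
            sb.Append("'").Append(((string)p.Value).Replace("'", "''")).Append("'");
        }
        else
        {
            sb.Append(p.Value.ToString());
        }
    }
    return sb.ToString();
}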