LIKE and IN expressions in parameter queries


Rob Oldfield

Does anyone have any idea how to do this? I want to pass a select command a
parameter and just use that as a LIKE/IN comparison with my SQL data.

Thanks
 
Rob said:
Does anyone have any idea how to do this? I want to pass a select command a
parameter and just use that as a LIKE/IN comparison with my SQL data.

Thanks

You pass it in the same way that you would for any other INSERT or
UPDATE statement.

SELECT * FROM MYTABLE
WHERE ID IN ( @param1, @param2 )
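On the calling side, the parameters are attached to the command as usual. A minimal sketch, assuming System.Data.SqlClient; the table and values are illustrative:

```csharp
using System.Data.SqlClient;

static class InQueryExample
{
    // conn is assumed to be an already-open SqlConnection.
    public static SqlCommand BuildCommand(SqlConnection conn)
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT * FROM MYTABLE WHERE ID IN (@param1, @param2)", conn);
        // Each placeholder in the statement gets one matching parameter.
        cmd.Parameters.AddWithValue("@param1", "a");
        cmd.Parameters.AddWithValue("@param2", "b");
        return cmd;
    }
}
```

Then call ExecuteReader() on the returned command and read the rows as normal.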
 
John Bailo said:
You pass it in the same way that you would for any other INSERT or
UPDATE statement.

SELECT * FROM MYTABLE
WHERE ID IN ( @param1, @param2 )

Hmm. Yes, that works, but for INs that method means that I have to know how
many items I'm going to have in the list (though I guess it'd be possible to
use 'too many' in the setup and just set any redundant ones to an impossible
value). What I was looking for was more along the lines of select * from
mytable where id in (@listparams), so that I can set @listparams to be the
single variable 'a','b','c' or whatever. I've just tried that and it
doesn't work.

And on the LIKEs, if I enter a criterion of LIKE %@param1% (if that's turned
into a hyperlink, please ignore it) in Visual Studio then the SQL statement
comes out as LIKE N'%@p1% (ditto) and I don't even get asked for a
parameter value as it's just taking it as text.
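For what it's worth, the usual fix for the LIKE case is to leave the SQL text as LIKE @param1 and put the % wildcards into the parameter's *value* instead; a parameter marker can't appear inside a string literal, which is why the designer treats %@param1% as plain text. A minimal sketch, assuming System.Data.SqlClient; the table and column names (MYTABLE, NAME) are placeholders, not from the original post:

```csharp
using System.Data.SqlClient;

static class LikeHelper
{
    // Wrap the user's text in wildcards; this string becomes the *value*
    // of @param1, so the text is never spliced into the SQL statement.
    public static string ToPattern(string searchText)
    {
        return "%" + searchText + "%";
    }

    public static SqlCommand BuildCommand(SqlConnection conn, string searchText)
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT * FROM MYTABLE WHERE NAME LIKE @param1", conn);
        cmd.Parameters.AddWithValue("@param1", ToPattern(searchText));
        return cmd;
    }
}
```

Because the wildcards travel in the value, the statement stays fully parameterized.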
 
Rob said:
And on the LIKEs, if I enter a criterion of LIKE %@param1% (if that's
turned into a hyperlink, please ignore it) in Visual Studio then the SQL
statement
comes out as LIKE N'%@p1% (ditto) and I don't even get asked for a
parameter value as it's just taking it as text.

What you really want is a SQL statement builder class that:

(1) Takes in an array of values and a SQL parameter collection object.

(2) Uses StringBuilder and appends an equivalent number of @param names
(@param1, @param2, ...)

(3) Then, for each string, it .Add()s a parameter to the parameter
collection.

(4) Passes back the parameter collection.
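The steps above could be sketched roughly like this, assuming System.Data.SqlClient; the class and method names here are made up for illustration:

```csharp
using System.Data.SqlClient;
using System.Text;

static class InClauseBuilder
{
    // Appends @p0, @p1, ... to a StringBuilder and .Add()s one parameter
    // per value to the supplied collection; returns the finished clause.
    public static string BuildInClause(string[] values,
                                       SqlParameterCollection parameters)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < values.Length; i++)
        {
            if (i > 0) sb.Append(", ");
            string name = "@p" + i;
            sb.Append(name);
            parameters.AddWithValue(name, values[i]);
        }
        return sb.ToString();
    }
}
```

The caller then splices the result into the statement text, e.g. "SELECT * FROM MYTABLE WHERE ID IN (" + clause + ")" -- only the parameter *names* are ever pasted into the SQL, never the values themselves.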
 
John Bailo said:
What you really want is a SQL statement builder class that:

(1) Takes in an array of values and a SQL parameter collection object.

(2) Uses StringBuilder and appends an equivalent number of @param names
(@param1, @param2, ...)

(3) Then, for each string, it .Add()s a parameter to the parameter
collection.

(4) Passes back the parameter collection.

OK. I knew that I could just rebuild the SQL statement on the fly, but was
hoping to avoid it. Thanks for the advice.
 
Rob said:
OK. I knew that I could just rebuild the SQL statement on the fly, but
was
hoping to avoid it. Thanks for the advice.

Well an alternative would be to use a foreach loop instead of IN

So you would just use one @param per ExecuteQuery, take the results, and
put them into an array.

foreach (string s in myArrayOfLikeValues)
{
    // @param = s
    cmd.Parameters["@param"].Value = s;

    // ExecuteQuery, then take each record, or value, and add it to an array
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            results.Add(reader[0]);
    }
}
 
John Bailo said:
Well an alternative would be to use a foreach loop instead of IN

So you would just use one @param per ExecuteQuery, take the results, and
put them into an array.

foreach (string s in myArrayOfLikeValues)
{
    // @param = s
    cmd.Parameters["@param"].Value = s;

    // ExecuteQuery, then take each record, or value, and add it to an array
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            results.Add(reader[0]);
    }
}

That's certainly a potential solution if you don't care in the
slightest about performance - but generally people don't like making N
queries where 1 would do...
 
Interesting. Those 'More information' headings at the bottom - are they
supposed to be hyperlinks? And do you know where to?
 
Jon said:
That's certainly a potential solution if you don't care in the
slightest about performance - but generally people don't like making N
queries where 1 would do...

Yes, I see your point...but one could question that on the SQL side as
well. I mean, under the covers, what is the SQL Server executable
really doing when you issue a command like LIKE or IN? SQL Server is
all C++ code ultimately, and it's probably parsing the IN clause and
creating multiple calls to the database...so you're stuck with multiple
calls anyway. LIKE and IN are very slow, poorly executing SQL
statements at the outset, and whenever you can avoid SQL syntax that
uses IN you should.

So, in essence, by creating C# code you're doing multiple calls on the
client side. Also, you could thread these calls, or maybe batch them
up, or write a stored procedure that takes a delimited array as a value
(as your other post suggests -- although it seems very poorly worded).
 
John Bailo said:
Yes, I see your point...but one could question that on the SQL side as
well. I mean, under the covers, what is the SQL Server executable
really doing when you issue a command like LIKE or IN? SQL Server is
all C++ code ultimately, and it's probably parsing the IN clause and
creating multiple calls to the database...

I very, very much doubt that. Databases are far smarter than that - if
you give SQL Server (or any other reasonable database) a decent amount
of information, it's likely to do *much* better than keeping that
information to yourself and making several calls.

For instance, if it needs to do a table scan, it can do that scan
*once*, checking (very quickly using a hash, probably) for each row
whether or not it's in any of the "IN" parameters. That's far quicker
than doing N table scans, having to drag each row into cache each time.

Of course, you need to add to that the difference in network
performance, latency etc.

Additionally, in terms of semantics, it can be a real pain issuing
multiple calls - if you're trying to get the distinct values for
another column, or sorting, or anything like that, you basically end up
having to do it client-side.
so you're stuck with multiple
calls anyway. LIKE and IN are very slow, poorly executing SQL
statements at the outset, and whenever you can avoid SQL syntax that
uses IN you should.

Sure, but when you need it, you need it. It's like saying to avoid
joins where they're not needed - that doesn't mean you should fetch
whole tables and do the joins manually on the client side instead!
So, in essence, by creating C# code you're doing multiple calls on the
client side. Also, you could thread these calls, or maybe batch them
up, or write a stored procedure that takes a delimited array as a value
(as your other post suggests -- although it seems very poorly worded).

I'm not sure which post you're referring to, but I never suggested
taking a delimited array as a stored proc parameter. I would suggest
building the "IN" parameter in SQL:

... WHERE Foo IN (@param1, @param2, @param3, etc.)

- the bit in brackets needs to be built up dynamically, but that's far
from difficult.


I can see no advantages to issuing multiple calls in this situation
other than the *tiny* advantage of not having to build the string up
dynamically. I can see *lots* of advantages to using a dynamically
generated "IN" clause.
 
Jon said:
I can see no advantages to issuing multiple calls in this situation
other than the *tiny* advantage of not having to build the string up
dynamically. I can see *lots* of advantages to using a dynamically
generated "IN" clause.

I've been finding that running SQL statements on threads simultaneously
can have enormous speed advantages (much of my work is with DB2 on an
iSeries).

Using my method, I could thread the three LIKEs, not use the IN at
all, and run my statements simultaneously.
 
John Bailo said:
I've been finding that running SQL statements on threads simultaneously
can have enormous speed advantages (much of my work is with DB2 on an
iSeries).

Using my method, I could thread the three LIKEs, not use the IN at
all, and run my statements simultaneously.


Sorry to butt back in to a thread that's going way above my head, but why
not just run some testing? You each write the code your way, and pass the
result to each other so that both can be tested in different environments?
 
John Bailo said:
I've been finding that running SQL statements on threads simultaneously
can have enormous speed advantages (much of my work is with DB2 on an
iSeries).

Using my method, I could thread the three LIKEs, not use the IN at
all, and run my statements simultaneously.

And unless you've got 3 processors on the database and nothing else
using it, that could easily still run slower.
 
Rob said:
Sorry to butt back in to a thread that's going way above my head, but why
not just run some testing? You each write the code your way, and pass the
result to each other so that both can be tested in different environments?

Because that would -- like, work, dude.
 
Jon said:
And unless you've got 3 processors on the database and nothing else
using it, that could easily still run slower.

What I really need to do is go to an open source database like MySQL and
check out the parsing routines as they are mapped to the actual file
scans.

I've always been really /curious/ as to what the magic is of parsing SQL
syntax and turning it back into regular code...which is what I'm assuming
is all that a database *engine* does.

In some sense, since these days 99 percent of SQL code is written within a
scripting or bytecode language (C#, VBScript, Java), what we do is take C-type
code, wrap SQL statements into it, send it to a database, which then
parses the SQL code and executes C++ statements!

The real efficiency would be to get rid of the SQL interpreter entirely and
just run C# code against the data files. To me that would be the fastest.
 
John,

I had errors with my data connection, so I'm jumping into this thread, only giving a
reaction to your message.

I don't like (hate is the better word) the SQL expression language. However,
in my opinion you are, like a lot of people, trying to get performance from the
small amount of time needed for compiling that code.

Most of the time spent reading data from a server over a data line to a
client is not that compiling part, therefore I would not expect too much from
giving it directly in whatever other code.

Just my thought,

:-)

Cor
 
John Bailo said:
What I really need to do is go to an open source database like MySQL and
check out the parsing routines as they are mapped to the actual file
scans.

I've always been really /curious/ as to what the magic is of parsing SQL
syntax and turning it back into regular code...which is what I'm assuming
is all that a database *engine* does.

In some sense, since these days 99 percent of SQL code is written within a
scripting or bytecode language (C#, VBScript, Java), what we do is take C-type
code, wrap SQL statements into it, send it to a database, which then
parses the SQL code and executes C++ statements!

The real efficiency would be to get rid of the SQL interpreter entirely and
just run C# code against the data files. To me that would be the fastest.

No. Bear in mind that the C# or Java code usually isn't running on
the same machine as the database, so you've got a separation anyway -
given that separation, it makes sense to use something which is nearer
the metal. Much as I love C#, I wouldn't use it to try to implement a
database on the same scale as SQL Server or Oracle.
 
Cor said:
Most of the time spent reading data from a server over a data line to a
client is not that compiling part, therefore I would not expect too much from
giving it directly in whatever other code.

Good argument...however, I would turn it around on you.

The reason that it doesn't take much time is that -- as in this argument
-- we've conditioned ourselves to warp our code to encapsulate our
requests in SQL.

Whereas, much of the time, I just want to grab individual data bits on
threads and utilize them...not create a statement, then batch it up, then
get back a bunch of data, then parse it ( ds(0), ds(1), etc. ).

If I could go direct to a more flexible data store/object store that
more closely mapped to what I do in c# and java I think I would find
greater performance.

I would much prefer to make my business logic in an OO language as well.

I think SQL 2005 addresses these needs, but really any database that
lets you strip out the SQL interpreter would work as well.

From what I know, the really big business databases aren't relational;
they're big file systems that are manipulated by COBOL.
 
Jon said:
No. Bear in mind that the C# or Java code usually isn't running on
the same machine as the database, so you've got a separation anyway -
given that separation, it makes sense to use something which is nearer
the metal. Much as I love C#, I wouldn't use it to try to implement a
database on the same scale as SQL Server or Oracle.

But that's not what I said.

I'm not trying to replace the DBMS with another.

I want to eliminate it entirely, read my data directly as files into C#,
and manipulate it there.

What I need is a data server as simple as an HTTP server that can take raw
requests and let me manipulate the data in the middleware.
 