SqlDataAdapter vs Manual Data Binding - Performance Question

Aykut Canturk

As we all know, there are two ways to bind a GridView to a SQL table: using
a SqlDataAdapter, or doing it manually in code. What I want to know is
whether, on large tables, there is a performance difference. When binding in
code we don't specify any page size or anything, so all the data must be
retrieved from the SQL table into web-server memory, is that right? Maybe,
only maybe, the SqlDataAdapter takes a different approach to loading table
rows for performance reasons.
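
To make the contrast concrete, here is a minimal C# sketch of the two calls
I mean ("MyTable" and the connection string are placeholders, and the note
about the paging overload is my understanding, not something I've verified):

using System.Data;
using System.Data.SqlClient;

class FillSketch
{
    // Fill() pulls every row of the result set into web-server memory;
    // there is no lazy loading behind the scenes.
    static DataTable FillWholeTable(string connectionString)
    {
        var table = new DataTable();
        using (var adapter = new SqlDataAdapter(
            "SELECT ID, Name FROM MyTable", connectionString))
        {
            adapter.Fill(table);
        }
        return table;
    }

    // The paging overload keeps only one page in the DataSet, but as far
    // as I know the server still sends the preceding rows over the wire
    // before they are discarded, so it doesn't help much on large tables.
    static DataTable FillOnePage(string connectionString,
                                 int startRecord, int pageSize)
    {
        var set = new DataSet();
        using (var adapter = new SqlDataAdapter(
            "SELECT ID, Name FROM MyTable ORDER BY ID", connectionString))
        {
            adapter.Fill(set, startRecord, pageSize, "MyTable");
        }
        return set.Tables["MyTable"];
    }
}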

In VB6, with classic ADO, there were concepts like fetching some data first
and showing it to the user (in a grid, for example), then fetching the rest
as the user scrolled down. You could also see this in components: after the
data appeared on screen, watching the scrollbar you could tell that data was
still loading while you worked.

In classic ASP we did the paging ourselves, for example using IDs and a page
size in the query, but again we only loaded a limited amount of data into
memory. For example, something like the sketch below only ever pulls one
page across the wire.
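
A hedged sketch of that ID-based pattern in ADO.NET terms ("MyTable", the ID
column, and the parameter names are placeholders; TOP with a parameter needs
SQL Server 2005):

using System.Collections.Generic;
using System.Data.SqlClient;

class KeysetPagingSketch
{
    // Fetch the next pageSize rows after the last ID the user has seen;
    // only that one page travels from SQL Server to the web server.
    static List<string> GetPageAfter(string connectionString,
                                     int lastSeenId, int pageSize)
    {
        var names = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT TOP (@PageSize) ID, Name FROM MyTable " +
            "WHERE ID > @LastSeenId ORDER BY ID", connection))
        {
            command.Parameters.AddWithValue("@PageSize", pageSize);
            command.Parameters.AddWithValue("@LastSeenId", lastSeenId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(1));
            }
        }
        return names;
    }
}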

ADO.NET changed the concept, and I don't understand how memory never fills
up, or why there is no performance problem, when I open a DataSet in memory
from code. Especially since Microsoft encourages us to use multiple
DataSets, DataTables, disconnected recordsets, and so on; all of these eat
up memory and performance, right?

Any explanation or suggestion on which approach is better when working with
large tables and grids? Or any reliable article on the subject?

thanks.
 

Just one point about your sentence "When using code we don't specify any
page size or anything". I usually build my tables manually rather than
through binding, but I do specify a page size and row range in code, with a
query like this, for instance:

SELECT ID, Name
FROM (SELECT ROW_NUMBER() OVER (ORDER BY Name) AS RowNumber, *
      FROM MyTable) AS Result
WHERE RowNumber BETWEEN 10 AND 20

This way, you only return the rows you're interested in. Obviously, you
have to handle pagination yourself instead of relying on the framework, but
performance-wise I'm pretty much on par with standard GridView binding
(which, by the way, if correctly configured, will also limit the number of
rows you're working with at any one time). So memory and performance aren't
hampered.
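
If it helps, here is roughly how that query can be wired up from code (a
C# sketch; the parameter names and the page arithmetic are just one way to
do it, not the only one):

using System.Data;
using System.Data.SqlClient;

class PagedQuery
{
    // Load a single page of rows; only that page leaves SQL Server.
    static DataTable GetPage(string connectionString,
                             int pageIndex, int pageSize)
    {
        const string sql =
            "SELECT ID, Name FROM " +
            "(SELECT ROW_NUMBER() OVER (ORDER BY Name) AS RowNumber, * " +
            "FROM MyTable) AS Result " +
            "WHERE RowNumber BETWEEN @First AND @Last";

        var page = new DataTable();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            // Page 0 is rows 1..pageSize, page 1 is pageSize+1..2*pageSize.
            command.Parameters.AddWithValue("@First", pageIndex * pageSize + 1);
            command.Parameters.AddWithValue("@Last", (pageIndex + 1) * pageSize);
            connection.Open();
            page.Load(command.ExecuteReader());
        }
        return page;
    }
}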

PS: the ROW_NUMBER query above is for SQL Server 2005; it was far more
convoluted with SQL 2000.

Michel
 