Large Data Sets using DataReader


Charlie

I have a stored procedure that returns a result set containing the
first n records of a select statement. I use a DataReader to retrieve
the records and write them to a file. When the SP returns only 10,000
records, it processes them at a linear rate of 1000 records every
2 seconds. When I get 50,000 records back, the rate drops to 1000
records every 18 seconds. What puzzles me is that the first 1000
records are the same no matter how many come back in the complete
set. However, even though they're the same, they take 9 times longer
because there are 50,000 in total rather than 10,000. I thought a
DataReader, since it deals with one row at a time, would process at a
linear rate regardless of the total number of records retrieved. I've
narrowed the increasing time to the .GetString() method. It is NOT in
the .Read() method!

Can anyone explain this to me or tell me what I'm doing wrong?

Thanks in advance.

Charlie
 
What you're doing wrong? First, you're using a query interface to do bulk
operations; I would use BCP or DTS to handle bulk data. As for your
problem, I expect the GC is kicking in as the strings are destroyed.
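One way to test the garbage-collection theory, if you stay with a DataReader, is to avoid allocating a new string per row. A hypothetical sketch (this is not the BCP/DTS route Bill recommends, and the names mirror the placeholders above): open the reader with CommandBehavior.SequentialAccess and copy each value into a reused char buffer with GetChars instead of calling GetString.

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

static class BufferedExport
{
    static void WriteRows(SqlCommand cmd, TextWriter writer)
    {
        // Reused buffer, sized for the widest expected column value.
        char[] buffer = new char[8000];

        using (SqlDataReader reader =
               cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            while (reader.Read())
            {
                // Copy the column's characters into the buffer rather than
                // materializing a new string for every row.
                long count = reader.GetChars(0, 0, buffer, 0, buffer.Length);
                writer.Write(buffer, 0, (int)count);
                writer.WriteLine();
            }
        }
    }
}
```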

--
____________________________________
Bill Vaughn
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 