Charlie
I have a stored procedure that returns a dataset containing the
first n records of a SELECT statement. I use a DataReader to retrieve
the records and write them to a file. When the SP returns only 10,000
records, it processes them at a steady rate of 1,000
records every 2 seconds. When I get 50,000 records back, the
processing rate drops to 1,000 records every 18 seconds. What puzzles me
is that the first 1,000 records are the same no matter how many we get
back in the complete set. Yet even though they're the same, they
take 9 times longer simply because there are a total of 50,000 rather than
10,000. I thought a DataReader, since it deals with one row at a time,
would process at a constant rate regardless of the total number of
records retrieved. I've narrowed the growing time consumption
down to the .getString() method. It is NOT in
the .read() method!
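For reference, the loop is essentially the following (a simplified sketch; the procedure name, column index, connection string, and file name are placeholders, not my real code):

```
// Sketch of the DataReader loop described above (C#, ADO.NET).
// "usp_GetTopN", column 0, and "output.txt" are illustrative placeholders.
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;

class Dump
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("...connection string..."))
        using (SqlCommand cmd = new SqlCommand("usp_GetTopN", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            using (StreamWriter writer = new StreamWriter("output.txt"))
            {
                while (reader.Read())   // .read(): per-row cost stays flat
                {
                    // The slowdown shows up here, in GetString(),
                    // and it grows with the total size of the result set.
                    writer.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}
```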
Can anyone explain this to me or tell me what I'm doing wrong?
Thanks in advance.
Charlie