John Bailo
I wrote a webservice to output a report file. The fields of the report
are formatted based on information in an in-memory XmlDocument. As
each row of a SqlDataReader is looped through, a lookup is done and the
format information is retrieved.
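The core loop looks roughly like this (a simplified sketch -- the identifiers and the XPath shape are illustrative, not my actual code):

```csharp
// Sketch of the report loop: for each data row, look up format
// info in an in-memory XmlDocument via XPath, then write the line.
using (SqlDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        string fieldName = reader.GetString(0);
        string value = reader.GetString(1);

        // Per-row XPath lookup against the in-memory XmlDocument.
        XmlNode formatNode = formatDoc.SelectSingleNode(
            "//field[@name='" + fieldName + "']/format");

        // ApplyFormat stands in for whatever formatting the node drives.
        writer.WriteLine(ApplyFormat(value, formatNode));
    }
}
```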
The performance was extremely poor -- producing about 1000 rows per minute.
However, when I used tracing/logging, my results were inconclusive.
First of all, based on the size of the data and the size of the
XmlDocument, I would have expected the whole process to take < 1 ms per
record. I put a statement to record the time, to the millisecond, before
each call to the XmlDocument, and, in that routine, before and after each
XPath query. Then I put a statement after each line was written to the
text stream.
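The instrumentation is nothing fancy, just timestamps around each step, roughly like this (simplified; the xpath variable stands in for the real query):

```csharp
// Timing sketch: record a timestamp before and after each XPath query
// and log the elapsed time in milliseconds.
DateTime before = DateTime.Now;
XmlNode formatNode = formatDoc.SelectSingleNode(xpath);
DateTime after = DateTime.Now;
Trace.WriteLine("XPath lookup: "
    + (after - before).TotalMilliseconds + " ms");
```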
What was odd was that, while I could see milliseconds being chewed up in
the code, contributing to the poor performance, *where* the time was
chewed up was random! Sometimes the XmlDocument lookup took 0 ms,
sometimes 20-30 ms. Sometimes the clock would add milliseconds in the
loop that retrieved the record from the dataset.
Another thing that puzzled me: as the program ran, performance
*degraded* -- the whole loop and all of the individual steps ran slower
and slower!
To me, this indicates severe problems with Microsoft .NET's garbage
collection and memory management.