Lee Gillie
New to ADO.NET so seeking comments on strategy.
I need to scan data and build a "keyfile" from data I extract during
the scan. The actual fields vary by job; I create them dynamically from
a description, and there are typically 5-30 fields of integers and
strings. Small jobs may be a few hundred records, but I could have up
to 250,000. The machine this runs on is extremely fast, dual processor,
with great gobs of physical memory. I need to sort the data in
alternate orders and make passes over it, updating and filling in more
fields as I go.
My thought was to use a DataSet. I understand I can build tables
programmatically and populate them programmatically, a record at a
time, a field at a time. Then I can use a DataView to make passes in
alternate orderings, making changes and filling in empty fields.
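
Concretely, this is roughly the pattern I have in mind (just a sketch;
the column names and values are made up for illustration, since the
real columns come from the job description):

using System;
using System.Data;

class KeyfileSketch
{
    static void Main()
    {
        // Build the table dynamically from the job description.
        DataSet ds = new DataSet("Job");
        DataTable keyfile = ds.Tables.Add("Keyfile");
        keyfile.Columns.Add("RecordNo", typeof(int));
        keyfile.Columns.Add("CustomerKey", typeof(string));
        keyfile.Columns.Add("ZipCode", typeof(string));

        // Populate a record at a time, a field at a time, during the scan.
        DataRow row = keyfile.NewRow();
        row["RecordNo"] = 1;
        row["CustomerKey"] = "ABC123";
        keyfile.Rows.Add(row);

        // One pass in an alternate ordering, filling in an empty field.
        // (Only non-sort columns are changed here, so the view's order
        // stays stable while iterating.)
        DataView view = new DataView(keyfile);
        view.Sort = "CustomerKey ASC";
        for (int i = 0; i < view.Count; i++)
        {
            DataRowView rv = view[i];
            if (rv.Row["ZipCode"] == DBNull.Value)
                rv.Row["ZipCode"] = "99201";  // value computed in this pass
        }
    }
}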
After each pass I want to persist the major table to Jet, primarily for
problem analysis: as each pass completes, replace the table content on
disk with what I have in the in-memory DataSet.
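
I was picturing the persist step along these lines (again just a
sketch, not tested code: the .mdb path and table name are placeholders,
and it assumes the Keyfile table already exists on disk with a schema
matching the DataTable):

using System.Data;
using System.Data.OleDb;

class PersistSketch
{
    static void PersistPass(DataTable keyfile)
    {
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                         @"Data Source=C:\work\keyfile.mdb";
        using (OleDbConnection cn = new OleDbConnection(connStr))
        {
            cn.Open();

            // Replace the on-disk content wholesale.
            OleDbCommand clear = new OleDbCommand("DELETE FROM Keyfile", cn);
            clear.ExecuteNonQuery();

            // Let the command builder generate the INSERT, then mark
            // every row Added so Update() writes the whole table back.
            OleDbDataAdapter da =
                new OleDbDataAdapter("SELECT * FROM Keyfile", cn);
            OleDbCommandBuilder cb = new OleDbCommandBuilder(da);

            DataTable copy = keyfile.Copy();
            copy.AcceptChanges();             // SetAdded needs Unchanged rows
            foreach (DataRow r in copy.Rows)
                r.SetAdded();                 // force an INSERT per row
            da.Update(copy);
        }
    }
}

As I understand it, only the INSERT has to be generated for this, so the
Jet table shouldn't need a primary key; and since the command builder
works from whatever SELECT * returns, it should cope with the per-job
column differences.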
I have already implemented this talking directly to Jet, but have been
disappointed in the performance, even after some amount of tweaking.
It would seem the new approach should be a real screamer. Seeking
comments from those more experienced with ADO.NET on this approach.