MyndPhlyp
I'm going through one table serially and looking up additional values from
other tables. (Sounds pretty normal, right?) All the tables are indexed on
common columns. The secondary tables have a 0-n:1 relationship with the
primary table.
Right now, I'm using .Filter to get the limited Recordset on the secondary
tables with:
<primarykey> = <somevalue> AND <anothercolumn> = <someothervalue>
(.Filter has to be used rather than .Find because .Find only accepts a
single-column criterion, so the "AND" rules it out.)
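Here is a stripped-down sketch of what the loop does. The names
(rsPrimary, rsSecondary, tblSecondary, PrimaryKey, OtherCol) are
placeholders, not the real schema:

' Connection cn (ADODB.Connection) is assumed to be open already.
Dim rsSecondary As ADODB.Recordset
Dim lngKeyValue As Long, lngOtherValue As Long

Set rsSecondary = New ADODB.Recordset
rsSecondary.CursorLocation = adUseClient        ' client-side, static rowset
rsSecondary.Open "SELECT * FROM tblSecondary", cn, adOpenStatic, adLockReadOnly

' Inside the serial loop over the primary table, for each primary row:
lngKeyValue = rsPrimary!PrimaryKey              ' values from the current primary row
lngOtherValue = rsPrimary!OtherCol
rsSecondary.Filter = "PrimaryKey = " & lngKeyValue & _
                     " AND OtherCol = " & lngOtherValue
Do While Not rsSecondary.EOF
    ' ... pick up the additional values ...
    rsSecondary.MoveNext
Loop
rsSecondary.Filter = adFilterNone               ' clear before the next primary row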
The overhead is painful. Comparing the throughput of a serial read on the
primary table alone against the same read with the two secondary-table
lookups done via .Filter, performance degrades by roughly a factor of ten
(or so it seems). I've tried creating a compound index on the primary and
secondary columns, but got no relief.
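The compound index attempt was just the two lookup columns together,
something along these lines (index and table names are placeholders):

cn.Execute "CREATE INDEX idxLookup ON tblSecondary (PrimaryKey, OtherCol)"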
Is there a better way than .Filter to look up rows in the secondary
Recordsets?