Sean Nijenhuis
Hi, thanks for your time.
I'm currently working on a database that imports a rather large
single-field text file (400,000 records, 50 MB).
The file is a telephone accounts listing: number called, date/time,
duration, etc. However, the number/extension from which the calls are
made is listed only once, at the top of each "page".
What I'm doing is using ADO to look for that "first" line, pull the
telephone number out of it, and then write it to a field on each
subsequent record until the next "first" line. This works fine;
however, the database grows to 1.7 GB.
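
To make that concrete, here's a stripped-down sketch of the loop; the
table and field names (tblImport, RawLine, Extension) and the two
helper functions are placeholders rather than my actual code:

Sub StampExtensions()
    Dim rs As ADODB.Recordset
    Dim strExt As String

    Set rs = New ADODB.Recordset
    rs.Open "SELECT RawLine, Extension FROM tblImport", _
            CurrentProject.Connection, adOpenKeyset, adLockOptimistic

    Do Until rs.EOF
        If IsHeaderLine(rs!RawLine) Then
            ' a "first" line: remember its number for the rows below
            strExt = ExtractNumber(rs!RawLine)
        Else
            ' a detail line: stamp the remembered number onto it
            rs!Extension = strExt
            rs.Update        ' ADO needs no .Edit before assigning
        End If
        rs.MoveNext
    Loop

    rs.Close
    Set rs = Nothing
End Sub

Function IsHeaderLine(varLine As Variant) As Boolean
    ' Placeholder test; the real one checks the page-header layout.
    IsHeaderLine = (InStr(1, varLine & "", "Extension") = 1)
End Function

Function ExtractNumber(varLine As Variant) As String
    ' Placeholder parse; the real one pulls the number from the header.
    ExtractNumber = Trim$(Mid$(varLine & "", Len("Extension") + 1))
End Function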
Now sure, a compact and repair will bring it back down to 108 MB, but
here's the problem: I need to perform more calculations and identify
more fields before I can "leave" the database to do the compact and
repair.
Also, this is only the test data (50 MB); the actual file this needs to
be performed on is 178 MB, so I immediately hit the 2 GB limit.
1) Is there any neater way of identifying or analysing the records that
would prevent the bloat?
2) Can an update query be written in such a way that it updates the
current record with data from the previous record it has just updated?
(See the sketch below for the shape I have in mind.)
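
To show what I mean by (2), the closest shape I can think of is
something like this, assuming (hypothetically) an autonumber ID field,
a Yes/No IsHeader flag, and the parsed number stored in a HeaderNumber
field on the header rows; I suspect it would crawl over 400,000 rows:

UPDATE tblImport
SET Extension = DLookUp("HeaderNumber", "tblImport",
    "ID = " & DMax("ID", "tblImport",
                   "ID <= " & [ID] & " AND IsHeader = True"))
WHERE IsHeader = False;

This sidesteps relying on the order in which the update visits rows by
always pulling from the nearest header row above the current one.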
Any help will be much appreciated!
Thanks
Sean