Access to SQL -- synch process with ADO.NET

  • Thread starter: JM
I have an Access app with about 15k rows that will slowly grow (this is
about two years of data so far) indefinitely. The table structure is
horrible in the Access design (it can't be changed; it's a third-party
app), so on the SQL side the table structure is totally different. What
I need to do is pull out all these rows, compare them individually with
the rows in SQL, and then update or insert into the SQL Server table. I
am trying to figure out the best way to do this to make it as fast and
efficient as possible. I don't think DTS is an option: because the table
structures are so different, the DTS process would have to be really
complex, so I don't see the point. Here are the two scenarios I came up
with in a .NET environment.

1) ADO.NET all the way >>> Load the data from Access into a DataReader,
and load the data from SQL into a DataSet, using a DataAdapter to bind
it to the SQL Server table. Iterate through the Access reader and, for
each row, evaluate it against the DataSet table and either update it,
insert it, or leave it alone. Do this for every row in the DataReader,
then resynch the data back to SQL through the adapter.
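For illustration, here's a minimal sketch of option 1. The connection
strings, the table names (SourceTable, dbo.Target), and the columns (Id,
Field1, Field2) are hypothetical stand-ins, and error handling is omitted:

using System;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

class SyncOption1
{
    static void Main()
    {
        using (OleDbConnection accessCn = new OleDbConnection(
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\app\data.mdb"))
        using (SqlConnection sqlCn = new SqlConnection(
            "Server=.;Database=Target;Integrated Security=SSPI"))
        {
            accessCn.Open();
            sqlCn.Open();

            // Load the SQL Server side into a DataSet; the CommandBuilder
            // generates the insert/update commands the adapter will need.
            SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT Id, Field1, Field2 FROM dbo.Target", sqlCn);
            SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Target");
            DataTable table = ds.Tables["Target"];
            table.PrimaryKey = new DataColumn[] { table.Columns["Id"] };

            // Stream the Access rows and reconcile each one in memory.
            OleDbCommand accessCmd = new OleDbCommand(
                "SELECT Id, Field1, Field2 FROM SourceTable", accessCn);
            using (OleDbDataReader reader = accessCmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    DataRow row = table.Rows.Find(reader["Id"]);
                    if (row == null)
                    {
                        // New in Access: add to the DataTable as an insert.
                        table.Rows.Add(reader["Id"], reader["Field1"],
                                       reader["Field2"]);
                    }
                    else if (!row["Field1"].Equals(reader["Field1"]) ||
                             !row["Field2"].Equals(reader["Field2"]))
                    {
                        // Changed: overwrite the two fields.
                        row["Field1"] = reader["Field1"];
                        row["Field2"] = reader["Field2"];
                    }
                    // Otherwise leave the row alone.
                }
            }

            // Push all accumulated inserts/updates back in one pass.
            adapter.Update(ds, "Target");
        }
    }
}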

2) Individual SQL statements >>> Loop through the DataReader and, for
each row, run a SQL statement against SQL Server to see if the row
exists or needs to be updated, then run an insert or update statement
accordingly. Do this for each row in the Access DataReader.
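And a sketch of option 2, with the same hypothetical names as above. Note
that every Access row costs at least one round trip to the server:

using System;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

class SyncOption2
{
    static void Main()
    {
        using (OleDbConnection accessCn = new OleDbConnection(
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\app\data.mdb"))
        using (SqlConnection sqlCn = new SqlConnection(
            "Server=.;Database=Target;Integrated Security=SSPI"))
        {
            accessCn.Open();
            sqlCn.Open();

            // Two parameterized commands, built once and reused per row.
            SqlCommand update = new SqlCommand(
                "UPDATE dbo.Target SET Field1 = @F1, Field2 = @F2 " +
                "WHERE Id = @Id AND (Field1 <> @F1 OR Field2 <> @F2)", sqlCn);
            SqlCommand insert = new SqlCommand(
                "INSERT dbo.Target (Id, Field1, Field2) " +
                "SELECT @Id, @F1, @F2 WHERE NOT EXISTS " +
                "(SELECT 1 FROM dbo.Target WHERE Id = @Id)", sqlCn);
            foreach (SqlCommand cmd in new SqlCommand[] { update, insert })
            {
                cmd.Parameters.Add("@Id", SqlDbType.Int);
                cmd.Parameters.Add("@F1", SqlDbType.NVarChar, 50);
                cmd.Parameters.Add("@F2", SqlDbType.NVarChar, 50);
            }

            OleDbCommand accessCmd = new OleDbCommand(
                "SELECT Id, Field1, Field2 FROM SourceTable", accessCn);
            using (OleDbDataReader reader = accessCmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    foreach (SqlCommand cmd in new SqlCommand[] { update, insert })
                    {
                        cmd.Parameters["@Id"].Value = reader["Id"];
                        cmd.Parameters["@F1"].Value = reader["Field1"];
                        cmd.Parameters["@F2"].Value = reader["Field2"];
                    }
                    // If the update touched nothing, the row is either
                    // missing or unchanged; the guarded insert adds it
                    // only when it's missing.
                    if (update.ExecuteNonQuery() == 0)
                    {
                        insert.ExecuteNonQuery();
                    }
                }
            }
        }
    }
}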
 
How about:
Move all the data to SQL Server via DTS or BCP.
Use SQL Server stored procedure(s) to munge the data in place.
Move any needed data back to SQL Server via DTS or BCP.
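For illustration only (not necessarily how Bill would do it), once the
Access rows have been bulk-loaded into a hypothetical dbo.Staging table,
the "munge in place" step could be two set-based statements, shown here
fired as one batch from ADO.NET (all names hypothetical):

using System.Data.SqlClient;

class MungeInPlace
{
    static void Main()
    {
        string batch =
            // Update rows that already exist but differ.
            "UPDATE t SET t.Field1 = s.Field1, t.Field2 = s.Field2 " +
            "FROM dbo.Target t JOIN dbo.Staging s ON t.Id = s.Id " +
            "WHERE t.Field1 <> s.Field1 OR t.Field2 <> s.Field2; " +
            // Insert rows that don't exist in the target yet.
            "INSERT dbo.Target (Id, Field1, Field2) " +
            "SELECT s.Id, s.Field1, s.Field2 FROM dbo.Staging s " +
            "WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Id = s.Id);";

        using (SqlConnection cn = new SqlConnection(
            "Server=.;Database=Target;Integrated Security=SSPI"))
        {
            cn.Open();
            new SqlCommand(batch, cn).ExecuteNonQuery();
        }
    }
}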

hth

--
____________________________________
Bill Vaughn
MVP, hRD
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 
Thanks for the response, but I'm not sure I can get that to work. I
will have to think about how I would do this in SQL. I'll be handling
each row coming from Access individually, because for each row in
Access I have to do the following:

Check if the row exists in the SQL table based on the ID:
Yes: See if two fields have changed (business rules only allow two
fields to change); if they have, update those two fields with the new
values. Also, there is a string value in one other field that I keep in
a table of its own, so I have to get back the FK value (adding the
value to that table first if it doesn't exist). All of this has to be
done for each row I bring back from Access.
No: See if the string value from that field exists in the relation
table; if not, add it and get the FK value back, otherwise get the
existing FK value, and add the Access row to SQL as a new row.

Without using cursors, which I won't do, I'm not sure how I would do
this easily in sprocs. Do you still think sprocs are the best?
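For what it's worth, the lookup-table piece can also be done set-based
with no cursor. A rough sketch, assuming the Access rows sit in a
hypothetical dbo.Staging table and the string lives in a hypothetical
dbo.StringLookup(LookupId identity, Value) relation table; the two-field
update would work as in the earlier staging sketch:

using System.Data.SqlClient;

class ResolveLookups
{
    static void Main()
    {
        string batch =
            // 1) Seed any string values missing from the relation table.
            "INSERT dbo.StringLookup (Value) " +
            "SELECT DISTINCT s.StringValue FROM dbo.Staging s " +
            "WHERE NOT EXISTS (SELECT 1 FROM dbo.StringLookup l " +
            "WHERE l.Value = s.StringValue); " +
            // 2) Insert new rows, resolving each FK with a join on the
            //    string value instead of a per-row lookup.
            "INSERT dbo.Target (Id, Field1, Field2, LookupId) " +
            "SELECT s.Id, s.Field1, s.Field2, l.LookupId " +
            "FROM dbo.Staging s JOIN dbo.StringLookup l " +
            "ON l.Value = s.StringValue " +
            "WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.Id = s.Id);";

        using (SqlConnection cn = new SqlConnection(
            "Server=.;Database=Target;Integrated Security=SSPI"))
        {
            cn.Open();
            new SqlCommand(batch, cn).ExecuteNonQuery();
        }
    }
}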

Off topic:
Bill, I believe you are a SQL guru from what I have seen. I recently
learned that the query execution plans for sprocs are not compiled
before run time. I read in the Books Online article "Execution Plan
Caching and Reuse" that, if I understood it correctly, both sprocs and
dynamic SQL queries are compiled at run time, so the notion that
dynamic queries are slower than sprocs on the DB server is not as valid
an argument as it used to be. Can you confirm whether I interpreted
this correctly?
 
Bill's suggestion would undoubtedly be best, but since you can't
change the Access app's schema, a variation of 1 is probably the best
approach. I'd load both the Access row and the related SQL Server row
into a DataSet where you can examine each value, then post any changes
back to SQL Server (you can use a Command object to execute either a
stored procedure or direct SQL). If no changes are needed, fetch the
next row(s). Your processing would all be done in code, so you'd have
to check for concurrency violations if that's an issue.

As far as stored procedures vs. direct SQL, it's not all just about
plan caching. When you send a direct SQL statement, the server must
parse it for syntactical correctness and compile it into an internal
form (all those names become numbers), then create and cache a query
plan. For a sproc, the syntax checking and name resolution happen when
you write and save it, not on first execution. One other major reason
you should write sprocs is security -- you won't be the victim of a SQL
injection attack if you use parameterized stored procedures, avoid
executing direct SQL, and remove all permissions from the base tables.
A third benefit is encapsulation -- you can write your data logic once
and call it from multiple clients.
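A small sketch of the injection point (the proc and parameter names are
hypothetical): with a parameterized sproc call, the input travels as
data and is never spliced into the SQL text:

using System.Data;
using System.Data.SqlClient;

class SafeCall
{
    static void Main()
    {
        using (SqlConnection cn = new SqlConnection(
            "Server=.;Database=Target;Integrated Security=SSPI"))
        {
            cn.Open();
            // Parameterized sproc call: values are bound, not concatenated.
            SqlCommand cmd = new SqlCommand("dbo.UpdateTargetRow", cn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@Id", SqlDbType.Int).Value = 42;
            cmd.Parameters.Add("@Field1", SqlDbType.NVarChar, 50).Value =
                "anything'; DROP TABLE dbo.Target;--";  // arrives as data
            cmd.ExecuteNonQuery();

            // By contrast, building the statement by concatenating user
            // input into the SQL string is what invites injection.
        }
    }
}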

-- Mary
MCW Technologies
http://www.mcwtech.com
 
The query plan is recreated whenever the SP executes and there is no
cached plan. In other words, if there is a cached plan in memory from a
previous invocation, it's reused; otherwise the QP is regenerated and
cached for later use. This means the new QP is optimal for the set of
input parameters supplied at the time -- but not necessarily optimal
when another request for the SP comes in with a different set of parms.
If a QP is not used for a while, it can get tossed from the cache (if
space is needed). QPs are also tossed if you run sp_recompile, and a
fresh one is built on every execution if you use WITH RECOMPILE in the
SP. The trick is to design the SP and indexes so that, regardless of
the input parameters, the same (optimal) QP is used. Make sense?
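For reference, the two knobs mentioned above (the proc name is
hypothetical): sp_recompile tosses the cached plan so the next call
compiles fresh, while WITH RECOMPILE in the proc definition forces a
new plan on every execution:

using System.Data.SqlClient;

class RecompileKnobs
{
    static void Main()
    {
        using (SqlConnection cn = new SqlConnection(
            "Server=.;Database=Target;Integrated Security=SSPI"))
        {
            cn.Open();
            // Toss the cached plan; the next execution recompiles once.
            new SqlCommand("EXEC sp_recompile 'dbo.GetRowsByDate'", cn)
                .ExecuteNonQuery();
            // Or, in the proc definition itself, recompile every time:
            //   CREATE PROCEDURE dbo.GetRowsByDate @From datetime
            //   WITH RECOMPILE AS SELECT ...
        }
    }
}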

hth

--
____________________________________
Bill Vaughn
MVP, hRD
www.betav.com
Please reply only to the newsgroup so that others can benefit.
This posting is provided "AS IS" with no warranties, and confers no rights.
__________________________________
 