Philosophical question: how often to submit updates to database

Earl
A concept I've waffled on from time to time with ADO.NET: how often to
submit updates back to the database. For some of my CF apps, the *when* of
submitting the update is obvious -- as soon as you leave the record. For
other desktop apps, it's not so clear-cut. For example, I have a CRM app
where one customer record has dozens of fields, plus several related data
tables for sales, appointments, etc. So I wonder: submit an update every
time a single field changes, or only when a different customer is selected?
The downside of updating on every single change is the obvious
network/server overhead; the downside of waiting for a customer change is
what happens if the app gets closed or crashes. I'm not looking for a
*right* answer here, just some opinions. Thoughts, strategies, and ideas
from others would be appreciated.
 
Earl said:

My strategy: forget performance. Transaction scopes, local change batching,
and undo should be driven by usability: ease of use for the user, ease of
use for the programmer, and ease of use for the subsequent
maintainer-of-code. These are the things that matter. Performance will
sort itself out.

David
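The local batching David describes can be sketched in a few lines. This is an illustrative sketch only (shown in Python with sqlite3 because it runs anywhere; the `CustomerEditBuffer` name and schema are made up): edits accumulate locally and are flushed as a single transaction when the user leaves the record. In ADO.NET the analogous pieces are `DataSet.GetChanges` and `SqlDataAdapter.Update` inside a transaction.

```python
import sqlite3

class CustomerEditBuffer:
    """Hypothetical sketch: buffer field edits locally, flush them in
    one transaction when the user moves off the record."""

    def __init__(self, conn, customer_id):
        self.conn = conn
        self.customer_id = customer_id
        self.pending = {}  # column -> new value, not yet in the db

    def set_field(self, column, value):
        # Record a local edit; nothing hits the database yet.
        self.pending[column] = value

    def flush(self):
        # Submit all buffered edits as a single UPDATE in one transaction.
        if not self.pending:
            return 0
        cols = list(self.pending)
        assignments = ", ".join(f"{c} = ?" for c in cols)  # our own column names, not user input
        params = [self.pending[c] for c in cols] + [self.customer_id]
        with self.conn:  # one transaction for the whole batch
            self.conn.execute(
                f"UPDATE customers SET {assignments} WHERE id = ?", params)
        n = len(self.pending)
        self.pending.clear()
        return n

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', NULL)")

buf = CustomerEditBuffer(conn, 1)
buf.set_field("name", "Acme Corp")
buf.set_field("phone", "555-0100")
flushed = buf.flush()  # both edits land in one round trip
```

The trade-off from the original question is visible here: if the app dies before `flush()`, everything in `pending` is lost, which is why the flush trigger (leaving the record, a timer, an explicit save) is a usability decision rather than a performance one.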
 
Here is the bookish answer: as few network round trips as possible.

That's easier said than done, because the longer you stay disconnected, the
more stale your data gets, the more data you have to keep disconnected, the
more difficult your updates may become (because you've fallen out of date),
and the more your users are confused because they are working with stale
data.

What I generally do is look at the business case and figure out the longest
I can remain disconnected without bending in 60 different directions, and
then I do that. You have to strike the right balance between performance
and maintainable code -- and that is situation-specific.

Take your scenario, for instance: what happens after the drop-down is
selected? How much data are you changing? Is it okay to defer an update
because, in the event of a concurrency conflict, the user will not lose TOO
MUCH of his/her work by not having gone back to the db sooner?

And in certain scenarios you may still have a user disconnected for a LONG
time -- say, when editing data on a complex screen. What do you do then?
Hold an open connection with pessimistic locking? Heck no! That is where
you need to get creative with your optimistic locking.
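A common shape for that optimistic locking is a version column checked in the UPDATE's WHERE clause. The sketch below is illustrative only (Python/sqlite3, invented table and function names); in ADO.NET you would typically put the same version check in the adapter's `UpdateCommand` and handle `DBConcurrencyException` when `Update` finds zero affected rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
    id INTEGER PRIMARY KEY, name TEXT, version INTEGER NOT NULL)""")
conn.execute("INSERT INTO customers VALUES (1, 'Acme', 1)")

def read_customer(conn, cid):
    # Read the row along with the version we will later check against.
    return conn.execute(
        "SELECT name, version FROM customers WHERE id = ?", (cid,)).fetchone()

def update_customer(conn, cid, new_name, expected_version):
    """Succeed only if nobody else changed the row since we read it."""
    with conn:
        cur = conn.execute(
            "UPDATE customers SET name = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_name, cid, expected_version))
    return cur.rowcount == 1  # 0 rows touched => concurrency conflict

# Two "users" read the same row while disconnected...
name_a, ver_a = read_customer(conn, 1)
name_b, ver_b = read_customer(conn, 1)

ok_a = update_customer(conn, 1, "Acme Corp", ver_a)  # first writer wins
ok_b = update_customer(conn, 1, "Acme Ltd", ver_b)   # stale version, rejected
```

No connection or lock is held while the user edits; the conflict is detected only at write time, which is exactly what makes long disconnected edit sessions workable.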

This is a rather interesting problem to solve. I have discussed all of
these scenarios in Chapter 10 of my book, and I would highly recommend
checking it out.

- Sahil Malik [MVP]
ADO.NET 2.0 book -
http://codebetter.com/blogs/sahil.malik/archive/2005/05/13/63199.aspx
__________________________________________________________
 