Help with Master/Detail UI in ADPs


Patrick Jackman

In Access DAO/Jet applications, I handle Master/Detail relationships by
using unbound fields for the Master table and a subform bound to a local
temp table for the Detail rows with OK, Apply and Cancel buttons on these
forms. If the user clicks OK or Apply, I do concurrency checks then I write
from the unbound fields to the Master table on the server and from the local
temp table to the Detail table on the server for rows that are new, dirty or
deleted. If the user clicks Cancel, I just close the form.
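A minimal sketch of that Apply flow might look like the following; every table, field, and control name here is hypothetical, not taken from the actual app:

```vba
' Sketch of the OK/Apply flow described above (all names hypothetical).
Private Sub cmdApply_Click()
    Dim ws As DAO.Workspace
    Set ws = DBEngine.Workspaces(0)
    On Error GoTo RollbackEdit
    ws.BeginTrans
    ' Concurrency check: has the Master row changed since we loaded it?
    ' mdtLoaded holds the LastModified value read at form load.
    If DLookup("LastModified", "tblMaster", _
               "MasterID=" & Me.txtMasterID) <> mdtLoaded Then
        Err.Raise vbObjectError + 513, , "Row changed by another user."
    End If
    ' Write the unbound Master fields back to the server table.
    CurrentDb.Execute "UPDATE tblMaster SET CustName='" & _
        Replace(Me.txtCustName, "'", "''") & _
        "' WHERE MasterID=" & Me.txtMasterID, dbFailOnError
    ' Flush new rows from the local temp Detail table to the server;
    ' dirty and deleted rows would get similar statements.
    CurrentDb.Execute "INSERT INTO tblDetail SELECT * FROM tmpDetail " & _
        "WHERE RowState='New'", dbFailOnError
    ws.CommitTrans
    Exit Sub
RollbackEdit:
    ws.Rollback
    MsgBox Err.Description
End Sub
```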

In some applications I have up to 5 Detail subforms against a Master without
any issues. Using temp tables allows me to validate business rules between
the various M/D entities before saving any changes. I also use
multi-instance forms to allow users to have multiple views of M/D data open
concurrently and I keep track of what data in a local temp table belongs to
which form instance by loading the subform's hWnd along with its data.

Using local temp tables for Detail entities and local tables for lookups
allows me to have acceptable performance with 20 - 25 concurrent users
against a 700 MB backend. This approach is code intensive but I've had time
to streamline it over the last 12 years of working with Access full-time.

I would like to start using SQL Server 2000. If I connect with ODBC I can
continue with the same design approach using local temp Jet tables. But I
would like to consider using ADPs to avoid the reported performance penalty
of ODBC.

Is there a "best practice" for Master/Detail form design with ADPs?

I've tried several approaches without success:
1. Bind ADO adLockBatchOptimistic recordsets to the Master and Detail forms,
then set ActiveConnection = Nothing. When I reconnect and issue
UpdateBatch, the Master will update in certain situations but the Detail
never updates.

2. Create an in memory ADO recordset for the Detail, load it with data and
bind it to the Detail subform. The binding fails with #Error in each field
of the subform.
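For reference, the fabricated-recordset attempt in item 2 looks roughly like this (field names are illustrative, not from the actual app):

```vba
' Fabricated (in-memory) ADO recordset bound to a Detail subform.
' As noted above, in an ADP the subform can show #Error in every field
' when bound this way.
Dim rs As ADODB.Recordset
Set rs = New ADODB.Recordset
With rs.Fields                      ' define fields before Open
    .Append "DetailID", adInteger
    .Append "Qty", adInteger
    .Append "Price", adCurrency
End With
rs.CursorLocation = adUseClient
rs.LockType = adLockBatchOptimistic
rs.Open                             ' open with no connection: in-memory
rs.AddNew Array("DetailID", "Qty", "Price"), Array(1, 5, 9.95)
Set Me.subDetail.Form.Recordset = rs
```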

3. The "Access 2002 Enterprise Developer's Handbook" approach on p. 281,
"Using Transactions with Bound Forms". It works until I attempt to open a
second form instance on the same row while the first is still in a
transaction. And I've read posts here suggesting this is not a best
practice.

What is the preferred approach for dealing with this issue in ADPs? Has
anyone written about it from a real world perspective?

Patrick
 
Dear Patrick:

You have a very good design paradigm going there.

Here's an approach I prefer for ADPs with MSDE or SQL Server.

Make sure users log in, and keep track of who is posting records. Stamp the
user on all new records. Allow updates to take effect immediately.

Mark records as tentative (another column in the table) and put them
right into the database immediately. Having a record in the table, but
tentative, is really the same thing as having it in a local table,
except that if the user's computer breaks down, that user still has access to
those same records from any other system through his login. All the business
rules can be in effect immediately because the data is placed directly in
the public database, so if another user begins entering, say, an invoice
already entered tentatively, he is blocked immediately. That user will
know the invoice has been at least partially entered, and by whom.

When you query for reports, you must determine whether the tentative data
belongs on the report or not, and filter it out if not.

All forms are then bound (if desired).
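As a rough sketch of the scheme above (table and column names are illustrative only):

```vba
' Tentative-row scheme: rows live in the real table from the start,
' flagged by a Tentative column and stamped with the entering user.
Dim cn As ADODB.Connection
Set cn = CurrentProject.Connection

' New rows go in immediately, marked tentative:
cn.Execute "INSERT INTO tblInvoice (InvoiceNo, EnteredBy, Tentative) " & _
           "VALUES (1001, '" & CurrentUser() & "', 1)"

' On OK, promote this user's rows once validation passes:
cn.Execute "UPDATE tblInvoice SET Tentative = 0 " & _
           "WHERE Tentative = 1 AND EnteredBy = '" & CurrentUser() & "'"

' Reports simply filter:  SELECT ... FROM tblInvoice WHERE Tentative = 0
```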

Given decent server hardware, moving from Jet to MSDE, you'll find the
performance is unlikely to be an issue. 20 users may be a pretty full load
for Jet, but will be a light load for most applications with MSDE. This is
especially true with a "small" back end of 700 MB. If you have a gigabyte
of memory in the server, it will quickly cache all the repeatedly accessed
data, and you'll be flying.

Perhaps this approach will be attractive to you, and can be adapted to your
needs.

Tom Ellison
 
I'm curious what "acceptable" performance is. I've seen systems designed
this way with the same number of users, and I've never had acceptable
performance even with a local MDB file of 700 MB! I think you and your users
will be very impressed with the performance boost you'll get by going to a
true client/server architecture.

I agree with Tom's suggestions, but will add the following:

One of the advantages of using a client/server architecture is the
availability of the server to process transactions. The ADP/SQL Server (or
MSDE) combination will require a different style of coding in order to be
effective. A lot of the functionality you used to get from VBA will need to
be coded in T-SQL.

You are used to doing your transaction processing from the local workstation
using VBA code. In this situation you would post the data to the database,
possibly into a "staging" set of tables for processing, or even directly
into your transaction tables.

If you use the "staging" method, you could use your "OK" button to execute a
stored procedure that validates the data and transfers it to the transaction
tables.

If you choose to write directly to the transaction tables, you can use a
trigger to validate each row as it's entered, use the "tentative" or
"pending" flag as Tom suggested and execute a stored procedure against
the pending records for each transaction, or execute a batch update
periodically (even automatically via SQL Server Agent) throughout the day,
depending on your needs.
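The "OK" button in the staging method might look something like this; the stored procedure name and parameter are hypothetical:

```vba
' "OK" button handler: ask the server to validate the staged rows and
' move them to the transaction tables (sproc name is hypothetical).
Private Sub cmdOK_Click()
    Dim cmd As ADODB.Command
    Set cmd = New ADODB.Command
    Set cmd.ActiveConnection = CurrentProject.Connection
    cmd.CommandType = adCmdStoredProc
    cmd.CommandText = "usp_PromoteStagedOrder"
    cmd.Parameters.Append cmd.CreateParameter("@OrderID", adInteger, _
                                              adParamInput, , Me.txtOrderID)
    cmd.Execute   ' validation and the move both run server-side
End Sub
```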

One word of caution I give everyone new to ADP/SQL Server: watch your use of
' and ". VBA uses " for strings and SQL Server uses '. Getting the two
mixed up can be nasty to track down!
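A quick illustration of the point above: build T-SQL inside VBA with " as the VBA delimiter and ' as the SQL delimiter, doubling any single quotes inside the data.

```vba
' VBA delimits strings with ", T-SQL with '. Double any embedded
' single quotes so names like O'Brien don't break the statement.
Dim sName As String, sSQL As String
sName = "O'Brien"
sSQL = "SELECT * FROM tblCustomer WHERE LastName = '" & _
       Replace(sName, "'", "''") & "'"
' sSQL now contains:
'   SELECT * FROM tblCustomer WHERE LastName = 'O''Brien'
```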

Another suggestion would be to use temp tables; however, Access can
sometimes be flaky about using them.

Just my two cents worth here. I've used the ADP/SQL combination pretty much
exclusively since 2001 and I don't think I would ever go back to MDB files,
even for the smallest/simplest apps. Since Microsoft has made MSDE/SQL
Server Express Edition free, it just makes more sense to know that all of my
apps are scalable from both a data and a concurrent-user standpoint.

Good Luck!

Mark Shultz
Procurement Data Specialist
RAD, Inc. (dba Interstate All Battery Centers)
 
Thanks Mark.

Acceptable performance to me is a screen opening in under 2 seconds.

If I could choose between "staging" tables on the server and in-memory
tables on the client, I would prefer to bind subforms to disconnect-able
recordsets, or directly to business objects, but I haven't been successful
with this yet. I was successful a few years ago using Rockford Lhotka's
ODSOLEDB approach from his "Professional Visual Basic 6 Distributed Objects"
to bind a VB6 business object to an ADODB recordset. But there appear to be
limits on the kinds of recordsets that can be bound to ADP forms. Do you know
where this is documented?

In my present architecture, each child screen in a parent/child relationship
is self contained. It has public Create, Load, IsValid, ObjectIsDirty and
ApplyEdit methods that are called by equivalent methods in the parent
screen. The ApplyEdit methods have a DAO.Workspace parameter to manage the
entire entity's transaction.
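A skeleton of that child-screen contract, with signatures inferred from the description above and bodies omitted:

```vba
' Public interface of a self-contained child screen (class module).
' Signatures are inferred from the post; parameter names are guesses.
Public Sub Create()
End Sub

Public Sub Load(ByVal lngParentID As Long)
End Sub

Public Function IsValid() As Boolean
End Function

Public Function ObjectIsDirty() As Boolean
End Function

Public Sub ApplyEdit(ByVal ws As DAO.Workspace)
    ' The parent passes one Workspace so the entire entity
    ' commits or rolls back as a single transaction.
End Sub
```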

I'll watch out for mixing up 's and "s; thanks for the heads-up.

Patrick.
 
Thanks for the ideas Tom.

I think I would prefer temporary tables to avoid having to add filters
everywhere to exclude the "tentative" records. Temporary tables on the
server would need to hold the user's login along with the screen's hWnd.

If I were to have a screen with an unbound parent and 4 child screens
bound to temporary tables on the server, would this consume 4 database
connections?

Regarding MSDE, I thought it was performance limited to 5 concurrent users.
Is it commonly used with 20 users?

Patrick.
 
Dear Patrick:

If I had a nickel for every time . . .

Answering your last question first:

No, MSDE is not limited to 5 concurrent users; it is limited to 5 concurrent
processes. It is commonly used with 50 and even 100 users.

5 concurrent processes means 5 query threads (queries) running at any one
time. Suppose each user runs a query every 10 seconds (and that's a very
high average; I usually assume 30 seconds) and each query takes half a
second (not unlikely if properly optimized). At my usual 30-second figure,
even with 100 users the engine is busy with an average of under 2 threads at
any one time. It would be statistically rare even in this rather extreme
case to have 5 concurrent queries running. Even then, it doesn't bomb, it
throttles. Everybody goes to get a cup of coffee, and it will be back in a
minute.

The users whose connection to the server is idle at any moment don't count.
So, unless the usage is incredibly high, or the query demands are very
complex (or not optimized), your 20-some-user scenario is almost certainly no
strain. You'll want good server hardware and plenty of memory in it for
cache to improve this even more. You may get the average processing time
(not including the time to transmit results over the network, just the time
to prepare them in the server's memory) down well under the half second
estimate.

For an application such as you described, this can be the best bang for the
buck.

Don't quote me on it, but the new SQL Express does not appear to have that
limit or any throttle. Also, it has doubled the database size limit and has
some great new features, and improved performance. You might research that.
I know I'm looking forward to trying it when I get a chance. I still
haven't figured out what the licensing restrictions are for it, but I know
when I download it I'll have work for my lawyer to figure that out. Ouch!

More, inline, below.

Tom Ellison


Patrick Jackman said:
Thanks for the ideas Tom.

I think I would prefer temporary tables to avoid having to add filters
everywhere to exclude the "tentative" records. Temporary tables on the
server would need to hold the user's log in along with the screen's hWnd.

You can still create temp tables on the server. I use the computer network
name and the hWnd (window handle number) of the instance of Access, along
with a name for that "local" table, to create this. If there are multiple
instances of the application open on the same client, this still ensures
uniqueness.
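A one-line sketch of that naming scheme (the base table name is illustrative; brackets guard against hyphens in the machine name):

```vba
' Per-instance "temp" table on the server: computer name plus the
' form's hWnd guarantees uniqueness, even with several instances
' of the app open on one client.
Dim sTable As String
sTable = "tmp_" & Environ("COMPUTERNAME") & "_" & CStr(Me.hWnd)
' Clone the Detail table's structure with no rows (T-SQL):
CurrentProject.Connection.Execute _
    "SELECT * INTO [" & sTable & "] FROM tblDetail WHERE 1 = 0"
```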

Putting tentative records in the real table is superior for performance.
Deleting them from one place and inserting them into another is a lot more
work for the server than just changing a column. You may want to re-think
how much work is involved in implementing the Tentative column. I wouldn't
think that's much work at all.
If I were to have a screen with an unbound parent and 4 child screens
bound to temporary tables on the server, would this consume 4 database
connections?

Again, counting connections is not the point in any case.
Regarding MSDE, I thought it was performance limited to 5 concurrent
users.
Is it commonly used with 20 users?

Not so at all, as I explained.
 
I would look into dropping all DAO code in favor of ADO. This is because ADO
is more native to the way the ADP will access the SQL Server data. For
example, your connection can always be CurrentProject.Connection when
you're working with the same back-end database.

As far as binding a form to a recordset: I've not used this myself, but
I've read in a few places that forms and reports have a Recordset property
that allows you to bind the form directly to the recordset.

Here is a link to a Microsoft knowledge base post on that subject...
http://support.microsoft.com/kb/281998
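The KB article linked above covers the details; the basic shape is something like this (the table name is illustrative, and the article describes which cursor and lock settings produce an updatable binding in an ADP):

```vba
' Bind an ADP form to an ADO recordset via the form's Recordset
' property (per KB 281998; table name is illustrative).
Private Sub Form_Open(Cancel As Integer)
    Dim rs As ADODB.Recordset
    Set rs = New ADODB.Recordset
    rs.Open "SELECT * FROM tblCustomer", CurrentProject.Connection, _
            adOpenKeyset, adLockOptimistic
    Set Me.Recordset = rs   ' the form now displays and edits rs
End Sub
```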

The thing to remember here is you are moving from a workstation-based
architecture to a client/server architecture. The key is to make the
server do as much work as possible, that's what it's there for, and to
transmit the smallest amount of data over the network as possible. Let the
server handle the transactions. If you continue to process the majority
of your data logic on the workstation, you could end up with a performance
drag because large amounts of data must be pulled down to the client.

The other thing I'm noticing here is that you seem to want to keep
very tight control of the transaction, again from the workstation. SQL
Server is completely transaction based, so if you tell it you're starting a
transaction and for some reason it doesn't complete (workstation loses
connection, etc.) the ENTIRE transaction gets rolled back no matter what
happens to the workstation.

In order to get performance advantages out of using SQL Server, beyond
just getting a larger data store, you pretty much need to re-write your app
and its logic to use the new "server" layer that wasn't available before.

Just my two cents worth, of course.

Mark Shultz
Procurement Data Specialist
RAD, Inc. (dba Interstate All Battery Centers)
 
Thanks Tom, for making the distinction between processes and database
connections with regard to the "5" number. This will open up a world of new
possibilities for some of my clients.

And I'll have to look into SQL Express as well. The 2 GB limit of MSDE will
soon be a problem for one potential convert.

Excellent advice. Thanks again.

Patrick.
 
Thanks Mark. Yes, I would plan on dropping DAO use in ADPs.

The link about recordsets is quite informative.

Your comments about delegating transactions to SQL Server are much
appreciated. That hadn't occurred to me yet.

A re-write of my app is certainly required. I'm trying to get an overview
of "best practices" and you've been helpful. I ordered a copy of Mary
Chipman and Andy Baron's upsizing book yesterday and hope to get a few more
ideas from there. I've been reading Russell Sinclair's book and Paul
Litwin's Access 2002 Enterprise Developer's Handbook.

Unfortunately I have never found a book that deals adequately with how to
handle OK and Cancel processes on Access forms displaying a one-to-many
relationship where the many side is handled with a subform in datasheet or
continuous-forms view. It's quite astonishing to me, as this kind of
relationship exists throughout databases. I've come up with my own approach
for MDBs by using local tables, but local tables aren't available in ADPs.
Tom Ellison has suggested an approach and I've been hoping that others would
too.

Patrick.

 