The limits of Access?

Hi everyone,

I have a client who has been running an Access database for several years
now, and it has served them well. However, in the last 6 months or so their
business has increased more than tenfold (alright for some), and they are
now processing in the region of 2000 records a week, each with around 3-6
child records.

They are experiencing serious slowdowns and, in some cases, complete failures.

Is there something I can do to fix it, or have I found the limits of this
technology?

Thanks very much!
 
Hmm. That's around 0.5 million new related records each year (2000 parents
plus 3-6 children each, times 52 weeks, comes to roughly 570,000 rows).
That's starting to push Access.

More importantly, if the business is growing that fast, what do you expect
the situation to be in another 6 months? Do you need to use those 6 months
to get a heavier database in place, or will it be reaching breaking point if
you leave it till then to consider?

You may already be aware of Tony Toews' Access Performance FAQ at:
http://www.granite.ab.ca/access/performancefaq.htm
Many of the items he lists there are crucial, including Name AutoCorrect,
SubdatasheetName, and so on.
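Two of those can even be applied in code. A rough sketch, to be run in the
back end (it assumes a DAO reference; the error handling creates the
SubdatasheetName property on tables that do not have it yet):

    'Turn off Name AutoCorrect for this database:
    Application.SetOption "Track Name AutoCorrect Info", False

    'Set every local table's SubdatasheetName to [None] so Access
    'does not scan for related tables when opening them:
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Not (tdf.Name Like "MSys*") Then
            On Error Resume Next
            tdf.Properties("SubdatasheetName") = "[None]"
            If Err.Number <> 0 Then         'property not created yet
                Err.Clear
                tdf.Properties.Append tdf.CreateProperty( _
                    "SubdatasheetName", dbText, "[None]")
            End If
            On Error GoTo 0
        End If
    Next tdf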

Presumably you have already split the database if there are multiple users.
If not, this article introduces the idea:
http://allenbrowne.com/ser-01.html

Other suggestions:
- Use optimistic locking.
- Disable Record-level locking if practical.
- Identify particular processes/queries/reports/forms that are problematic,
and pinpoint what the bottleneck is by testing.
- Keep a log of when users sign in and out (see the sketch after this
list). Users/computers that are not signing out could be crashing, falling
off the network, or otherwise contributing to the problem.
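
For the sign-in log, here is a minimal sketch, assuming a back-end table
tblSessionLog with an AutoNumber SessionID plus UserName, ComputerName,
SignIn and SignOut fields (all names are illustrative):

    'In a standard module of the front end. Call LogSignIn from the
    'startup form's Open event, LogSignOut from its Close event.
    Private mlngSessionID As Long

    Public Sub LogSignIn()
        Dim rs As DAO.Recordset
        Set rs = CurrentDb.OpenRecordset("tblSessionLog", _
            dbOpenDynaset, dbAppendOnly)
        rs.AddNew
        rs!UserName = Environ("USERNAME")
        rs!ComputerName = Environ("COMPUTERNAME")
        rs!SignIn = Now()
        mlngSessionID = rs!SessionID    'Jet assigns the AutoNumber at AddNew
        rs.Update
        rs.Close
    End Sub

    Public Sub LogSignOut()
        CurrentDb.Execute "UPDATE tblSessionLog SET SignOut = Now() " & _
            "WHERE SessionID = " & mlngSessionID, dbFailOnError
    End Sub

Any session that never receives a SignOut stamp points at a machine that
crashed or dropped off the network.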
 
You have not hit the limit of the technology, but there are some things to
check.
First, the size limit for an Access database file is 2 GB. If you are
approaching that, then it may be time to consider upsizing to SQL Server.
Note that the limit is per mdb.
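A quick way to keep an eye on the size, runnable in the VBA Immediate
window (the path is illustrative):

    ? FileLen("\\Server\Share\MyData_be.mdb") / 1024 ^ 3 & " GB used of 2 GB"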
Be sure your database is split, that is, all data objects in the back end,
all other objects in the front end.
Be sure each user has their own copy of the front end database on their own
computer, not sharing a common front end on a server.
If everyone is sharing one mdb that contains all objects (not split), then
you really need to split this database. It is in extreme danger.
If everyone is sharing a front end on the server, install a copy on each
user's computer; the launcher sketch below shows one way to automate this.
In either of the above cases, you are doubling network traffic for each user.
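
One common way to get a front end onto each desktop, and keep it current,
is a tiny launcher database that refreshes the local copy before opening
it. A rough sketch (all paths are illustrative, and C:\MyApp is assumed to
exist):

    Public Sub UpdateAndRun()
        Const MASTER As String = "\\Server\Share\MyApp_fe.mdb"
        Const LOCAL As String = "C:\MyApp\MyApp_fe.mdb"
        Dim bCopy As Boolean

        If Dir(LOCAL) = "" Then
            bCopy = True                    'no local copy yet
        ElseIf FileDateTime(MASTER) > FileDateTime(LOCAL) Then
            bCopy = True                    'server copy is newer
        End If
        If bCopy Then FileCopy MASTER, LOCAL

        'Open the local copy in a new Access instance:
        Shell """" & SysCmd(acSysCmdAccessDir) & "msaccess.exe"" """ _
            & LOCAL & """", vbNormalFocus
    End Sub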
Also have a conversation with the network manager to ensure there is not a
network issue causing the slowdown.
Set each user's front end mdb to Compact On Close.
Have a procedure defined to regularly compact and repair the back end mdb.
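Both of those can be done in code. For example (the path is illustrative;
CompactDatabase needs exclusive access and writes to a new file, so compact
to a copy and then swap the files):

    'Run once in each front end to turn on Compact On Close:
    Application.SetOption "Auto Compact", True

    'For the back end, when no one is connected:
    DBEngine.CompactDatabase "\\Server\Share\MyData_be.mdb", _
                             "\\Server\Share\MyData_compacted.mdb"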
Can some of the older data be archived to another database to free up space?
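If so, an append query to the archive mdb followed by a delete does the
move. A sketch (table, field and path names are illustrative, and tblOrders
must already exist in Archive.mdb):

    Dim db As DAO.Database
    Set db = CurrentDb
    'Copy rows older than two years into the archive file...
    db.Execute "INSERT INTO tblOrders IN '\\Server\Share\Archive.mdb' " & _
        "SELECT * FROM tblOrders " & _
        "WHERE OrderDate < DateAdd('yyyy', -2, Date())", dbFailOnError
    '...then remove them from the live back end.
    db.Execute "DELETE FROM tblOrders " & _
        "WHERE OrderDate < DateAdd('yyyy', -2, Date())", dbFailOnError

In practice you would wrap the two statements in a transaction so a failure
cannot delete rows that were never copied.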

And last, if you have done everything correctly above and it is an mdb size
issue, consider splitting the back end into multiple mdbs. (Remember, the
2 GB limit is per mdb.) This is not a simple task, because there are
relationships to consider, but if it is possible to move some data to another
mdb, it would help with the size issue.
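
The front end does not care how many back ends there are: each linked table
simply points at a file. Repointing a table's link at a second back end is
a one-off DAO call (names are illustrative):

    Dim db As DAO.Database
    Set db = CurrentDb
    db.TableDefs("tblHistory").Connect = _
        ";DATABASE=\\Server\Share\MyData_be2.mdb"
    db.TableDefs("tblHistory").RefreshLink

Bear in mind that Jet cannot enforce a relationship between tables in
different files, which is one reason the relationships need careful thought.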

Good Luck
 
Hi Allen,

Thanks very much for the reply.

We expect it to carry on this way, and what I had left out is that they have
commissioned (and had written) a new application using SQL Server and .NET
to replace it - but they "can't find the time to test and implement" it, so
instead they are still coming to us for fixes for the old database.

I was hoping that the response I got here would validate my reply to them -
and it has.

We have, as you say, already split the database, and tried some of the other
suggestions you give, like optimistic locking. The other suggestions are
great - if they choose to persist then I will definitely give them a go!

Do you know of any articles which directly reference the limitations of
Access - i.e. how many records you can expect it to process, etc?

Thanks very much for the info :)

Dom
 
Dom, I doubt there are definitive articles dealing with where the limits are.

There are too many variables that keep changing, e.g. hardware (more RAM,
faster processors), network (10 Mb? 100 Mb? 1 Gb? collisions?), disks,
Unicode compression, and service packs. Versions are different. User-defined
functions executed in queries tend to be much slower now than they were 5
years ago. Access bugs. Security. Encryption. Developer skill. Optimization
techniques. Bottlenecks. There are just too many variables.
 