Tim
Hello everyone. I visit this board pretty regularly but have never
posted. My question is an application architecture question... so
hopefully it's okay to post in the dotnet general forum.
I have this project I'm proposing but want to get some opinions on a
good architecture. Some of the requirements are as follows:
1. 30+ tables and 25+ lookup tables
2. Small number of records added monthly (<100)
3. Fairly complex, CPU-intensive reports accessed monthly
4. Extremely complex, CPU- and bandwidth-intensive reports accessed a
few times a year (complex calculations involving tens of thousands of
records, multiple aggregations, outer joins -- queries referencing
dozens of other queries referencing 30+ tables. Record source of report
after calculations includes ~10,000 records).
5. One or two primary users for data entry and reporting, but future
needs may require reports accessed by dozens of clients.
The moderately-expensive monthly reports could be handled by a central
DB server and the network, but the super-expensive reports I think are
too much to ask of a central server and the company's network. Therefore,
I think it's necessary to store data locally on users' machines and run
the reports from there.
Since the reports are complex, I think a relational data store is
required on the client machines to generate them (rather than a saved
Dataset). I was thinking MSDE would work.
So my biggest question is this: What's the best way to keep a central
server in sync with client DBs without too much toll on the network and
central server? My two ideas so far are:
1) Build the application as if the client DB is the main database.
Don't worry about excessive traffic between the app and the DB because
it's on the same machine. Use SQL Server replication to sync the
central server to the client machines.
The downside to this design seems to be requiring replication to be set
up at the central server for each client machine. This might not be
feasible because I'd like people to be able to install the client
software without the server DB knowing about each installation.
2) Timestamp every single record saved in the client DB (and also every
record stored in the central DB). When the client app opens, check the
central DB for new records and download them. When the client app
closes, upload all new records to the central server.
The downside here might be that each time a client opens the app, a
whole bunch of queries are run on the server, checking timestamps in 30
or so tables. This could be a big hit. Also, timestamps on the client
machines need to be in sync with the timestamps on the server, so a
web service would be required to expose the system clock of a central
computer.
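To make idea 2 concrete, here's a rough sketch of the "on open, pull what's new" step. It uses Python with sqlite3 standing in for both the central DB and the local MSDE store, and every table/column name is invented for illustration. Note it uses a server-assigned change counter in the `modified` column instead of wall-clock timestamps, which is one way to sidestep the clock-sync problem entirely:

```python
# Sketch of idea 2: per-table pull of rows changed since the last sync.
# sqlite3 stands in for the central DB and the local store; all names
# (orders, sync_state, modified) are made up. "modified" is assumed to be
# a monotonically increasing counter assigned by the server, not a clock.
import sqlite3

def make_db():
    """A stand-in server DB with one synced table."""
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT, modified INTEGER)"
    )
    return db

def make_client():
    """A stand-in client DB: same schema plus a per-table sync watermark."""
    db = make_db()
    db.execute("CREATE TABLE sync_state (tbl TEXT PRIMARY KEY, val INTEGER)")
    db.execute("INSERT INTO sync_state VALUES ('orders', 0)")
    return db

def pull_new_rows(server, client, table="orders"):
    """On app open: download rows the server changed since our last sync."""
    (last,) = client.execute(
        "SELECT val FROM sync_state WHERE tbl = ?", (table,)
    ).fetchone()
    rows = server.execute(
        f"SELECT id, payload, modified FROM {table} WHERE modified > ?", (last,)
    ).fetchall()
    for row in rows:
        client.execute(f"INSERT OR REPLACE INTO {table} VALUES (?, ?, ?)", row)
    if rows:
        # advance the watermark to the newest change we have seen
        newest = max(r[2] for r in rows)
        client.execute(
            "UPDATE sync_state SET val = ? WHERE tbl = ?", (newest, table)
        )
    client.commit()
    return len(rows)
```

The "on close, upload" step would be the mirror image, with the server assigning the new `modified` values as rows arrive. In the real app this would be one such round trip per synced table, which is exactly the 30-query startup hit I'm worried about.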
Sorry for such a long post (especially for my first one), but I really
need some input on this. Is there an easier way to handle this
situation? I'm not in love with either of my ideas.
Any help is GREATLY appreciated.