Client / Server Design for System.Windows.Forms

Dave

I have been developing an MS Access ADP database with SQL Server 2000.
Originally, my customers needed a usable database to store roughly 60
MB of data as quickly as possible, otherwise I would have used
something else, like VB or C#. What sucks is that over time, they have
requested additional functionality and before I realized it, I suddenly
had a 300 MB database with over 30 tables, and tons and tons of views,
stored procedures, functions, triggers, etc... It could theoretically
store up to 1 GB of data within the next few years... All the
business logic has been implemented in T-SQL, and the Access client is
as dumb as possible.

I'm planning to replace the Access client with a C#
System.Windows.Forms client. There will only be a handful of users for
each instance of the application at best. They will always be
connected on a 100 Mb LAN. Most of the users will require very little
bandwidth except for a select few who will require the ability to run
large reports. The design of the database can be changed, as I will be
responsible for all aspects of the design and implementation of this
project in the long term. Being that I am the only programmer on this
project and it is complex, I'll need to make the most of my
development time. This could easily become my entire life, and I'd
rather that not happen.

The way I look at it, I've got a few choices for the basic design and
implementation of my System.Windows.Forms client. I could:

Use ADO.NET connections to SQL Server, storing a connection instance
and related data adapters / sets within each form they are used in.
Good side: it's a drop-in replacement requiring no changes to the
existing SQL database. Bad side: there would be thousands of
lines of T-SQL stored procedures and triggers to manage, and I don't
think I should migrate the business logic to the client side, given
the scenario where users are delayed in updating their clients to a
newer version that fixes bugs in the business logic, for example.
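A minimal sketch of that 2-tier option, assuming a hypothetical Customers table and connection string; each form owns its own connection and adapter, and all business logic stays in T-SQL:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

public class CustomersForm : Form
{
    // Hypothetical server, database, and table names for illustration.
    private SqlConnection conn =
        new SqlConnection("Server=myServer;Database=myDb;Integrated Security=SSPI;");
    private SqlDataAdapter adapter;
    private DataSet ds = new DataSet();

    public CustomersForm()
    {
        adapter = new SqlDataAdapter("SELECT * FROM Customers", conn);
        // The command builder generates INSERT/UPDATE/DELETE commands for
        // simple cases; triggers and stored procedures remain on the server.
        new SqlCommandBuilder(adapter);
        adapter.Fill(ds, "Customers");   // Fill opens and closes the connection
    }

    private void Save()
    {
        adapter.Update(ds, "Customers"); // pushes local changes back to SQL Server
    }
}
```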

--OR--

Use .NET Remoting within a custom Windows service or IIS to provide an
interface to query the database and return typed DataSets
that will be merged with the form's bound DataSet. Good side: if I use
IIS, I don't have to write authentication or permissions into the
service. Also, since I'm working on this long term, I can slowly
migrate the business logic into the service (and C#), eventually
eliminating all of T-SQL's awkward error handling...
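For comparison, the remoting option might look like the following sketch (hypothetical type and method names; the service type would be registered in a Windows service or IIS host):

```csharp
using System;
using System.Data;

// Shared interface, deployed to both client and server.
public interface IDataService
{
    DataSet GetCustomers();
}

// Server-side implementation; business logic can migrate here over time.
public class DataService : MarshalByRefObject, IDataService
{
    public DataSet GetCustomers()
    {
        DataSet ds = new DataSet();
        // ... fill from SQL Server with a SqlDataAdapter ...
        return ds;
    }
}

// In the host process, something like:
// RemotingConfiguration.RegisterWellKnownServiceType(
//     typeof(DataService), "DataService.rem", WellKnownObjectMode.SingleCall);
```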

Would any of you choose either of these design strategies, or something
else entirely? I'm not even expected to start work on it for months,
but I'm trying to get an early start and am still in the planning
stages.
 
Dave,
you seem to be debating either a 2-tier design
Winforms client -> Database
or a 3-tier one
Winforms client -> middle tier -> Database

In the second case the recommended technology choice would be
Winforms client -> IIS / ASMX web services -> Database

and not remoting.

If your concern around 2-tier is updating client apps, then you have 2 choices
- build an auto-updating client - not as complicated as it seems since there
are a couple of different samples e.g.
http://www.windowsforms.net/articles/appupdater.aspx
- run the app from a network share - i.e. don't deploy to user desktops

I don't think there is anything wrong with the 2-tier approach, since you
don't need to scale. You obviously want to maintain logical separation of UI
and business logic, but it doesn't follow that you need physical separation.
For most corporate apps, people will choose a 3-tier approach, but that is
going to be more expensive in dev costs and I don't think you will see the
benefits.

Niroo [MSFT]
 
Well, I've heard from various Google searches that one shouldn't return
DataSets over an ASMX web service. There are a number of different
reasons why not, and a lot of very vocal opposition against it, and
the one that appears to be the loudest is that non-.NET clients
wouldn't be able to make much sense of the diffgrams you get when a
DataSet is returned by an ASMX web service.

Since non-.NET applications will never need to access the database,
would it then be "okay" to return DataSets from an ASMX web service?
I've been programming long enough to know that certain design
strategies can look really cool at first but become a pain to manage
once the application reaches a certain size.

If I still shouldn't be returning DataSets even given that non-.NET
applications will never be accessing the service, how should the data
be returned? All those websites that bash returning DataSets from an
ASMX service offer few attractive alternatives. One alternative
would be to MANUALLY write classes corresponding to tables, with
properties within those classes that correspond to each table's
fields. But that alone would take many boring hours, not to mention
coding the logic necessary to insert, update, and delete child tables
and so forth. It strikes me as duplicating the logic already present
in typed DataSets, which are automatically generated by Visual Studio
based on the database schema. Why would I want to duplicate
functionality that's already present and robust?

So, that's why I was suggesting remoting.
 
Dave,
you are right in saying that using DataSets over ASMX web services results
in a non-interoperable solution, and it rules out non-.NET clients. The other
downside is that they can be quite sizeable, since the DataSet includes
metadata plus other stuff. So sending a single row will result in a sizeable
SOAP message.

If neither of these things is a major concern for you, then DataSets over
ASMX web services are fine. It's the solution that results in the smallest
number of lines of code. From the brief outline you've given of your app, it
looks like the pragmatic choice. You still need to be careful if you are
returning thousands of rows, since the amount of memory consumed can grow
significantly, but any reasonable size of data should work well. A little
prototyping would be prudent.
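The ASMX route under discussion can be sketched like this (hypothetical names throughout); the returned DataSet is serialized as a diffgram in the SOAP response:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

public class DataService : WebService
{
    [WebMethod]
    public DataSet GetCustomers()
    {
        // Hypothetical server, database, and table names.
        SqlConnection conn =
            new SqlConnection("Server=myServer;Database=myDb;Integrated Security=SSPI;");
        SqlDataAdapter adapter =
            new SqlDataAdapter("SELECT * FROM Customers", conn);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Customers");  // result is sent as a diffgram over SOAP
        return ds;
    }
}
```

Hosted in IIS, this gets Windows authentication for free, which is the point made above about not writing your own security.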

Returning DataSets over remoting is not really much different. It's still
.NET-to-.NET only, and since DataSets serialize to XML there is no perf
advantage.

ASMX is the best choice in most cases - you get to host in IIS, can use
Windows authentication for security and IIS is a very efficient listener (in
Windows 2003). Remoting is generally overused and should be considered the
exception rather than the default choice.

Regards
Niroo [MSFT]
 
It seems, then, that the simplest way to do this would not be very
efficient over ASMX web services.

It seems most logical to make a single DataSet whose schema maps to
all of the database tables and views, including the relationships
between them. Where the DataSet is instantiated and used, only those
tables that are needed will be filled.

But if I do that, the entire schema of all the tables and views will
be transferred with each SOAP request... If I just drag all the tables
and views from the Server Explorer into a new DataSet, the resulting
XSD file is 153 KB. Does that mean the SMALLEST amount of data returned
from any query is ~153 KB? Goodness! Even over a 100 Mb LAN, a slow
computer wouldn't be able to process the schema all that quickly.
(Incidentally, the .cs file generated for the DataSet is 3.2 MB!)

SO... The way around that, as I see it, would be to use a StringWriter
with DataSet.WriteXml to return JUST the data contained within the
DataSet, not the schema. The client can then use DataSet.ReadXml to
load that into its own DataSet.

What do you think of that?
 
I should clarify:

[WebMethod]
public string DoSomeQuery()
{
    // "out" is a C# keyword and can't be used as an identifier
    StringWriter sw = new StringWriter();
    MyDataSet ds = new MyDataSet();
    myAdapter.Fill(ds);
    ds.WriteXml(sw);    // data only; pass XmlWriteMode.WriteSchema to include the schema
    return sw.ToString();
}
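The matching read side on the client might look like the following sketch, assuming a generated proxy class (called MyService here) for the ASMX service above:

```csharp
using System.Data;
using System.IO;

public class Client
{
    public static MyDataSet Load()
    {
        // Hypothetical generated proxy for the web service above.
        MyService svc = new MyService();
        string xml = svc.DoSomeQuery();

        // The typed DataSet already carries the schema, so only the data
        // needs to be parsed out of the returned XML.
        MyDataSet ds = new MyDataSet();
        ds.ReadXml(new StringReader(xml), XmlReadMode.IgnoreSchema);
        return ds;
    }
}
```

Because the schema lives in the compiled typed DataSet on both ends, none of the 153 KB XSD travels over the wire.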
 