Newbie object design questions

CDX

OK, so I'm new to C#, but have some experience with objects. I'd be really grateful for any help with the questions below.


So I've got some objects I've created that calculate P&L numbers and some other objects that know how to summarize those numbers. What I'm having conceptual problems with is the following....

1. I would assume that I should populate these objects using some form of ADO call. Does it make sense for me to use a DataReader to place the data into new instances of my objects, or is there a different way this should be done? The way I've done it in other languages is to loop through each row returned from the SQL connection, create a new instance of my object (and populate it), then add that instance to some collection.
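
Roughly what I mean, sketched in C# (the table, the columns, and the PnlDetail class here are placeholders I made up for illustration):

using System.Collections;
using System.Data.SqlClient;

public class PnlDetail
{
    // Hypothetical domain object -- just enough to show the pattern.
    public string Account;
    public decimal Amount;
}

public class PnlLoader
{
    // Loop through each row, build an object, add it to a collection.
    public static ArrayList LoadDetails(string connectionString)
    {
        ArrayList details = new ArrayList();
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT Account, Amount FROM PnlDetail", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    PnlDetail d = new PnlDetail();
                    d.Account = reader.GetString(0);
                    d.Amount = reader.GetDecimal(1);
                    details.Add(d);
                }
            }
        }
        return details;
    }
}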

I haven't really seen any examples or tutorials using this technique. Mostly they just use adapters to load the data, but that data isn't really like the objects I'm defining; to me it looks more like pure data with functionality for sitting in a DataGrid.


2. With regard to DataGrids: my tendency would be to place my objects in some form of collection that could be handed to a DataGrid. I would obviously need to set the grid up to call the correct getters/setters on my instances. That way I have logic within my objects to handle formulas and even things like saving. Again, what I'm seeing so far appears to be just data in collections with functionality for DataGrids, not objects designed by the programmer to handle application logic.
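
For instance, something like this is what I'd naturally reach for (class and property names are invented; I don't know yet whether this is the blessed way):

using System.Collections;
using System.Windows.Forms;

// Hypothetical summary object: real properties, not just raw data.
public class PnlSummary
{
    private string desk;
    private decimal total;

    public PnlSummary(string desk, decimal total)
    {
        this.desk = desk;
        this.total = total;
    }

    // A grid bound to a list of these shows the public properties as columns.
    public string Desk { get { return desk; } }
    public decimal Total { get { return total; } set { total = value; } }
}

// Somewhere on the form:
// ArrayList summaries = new ArrayList();
// summaries.Add(new PnlSummary("Rates", 1250m));
// summaries.Add(new PnlSummary("Equities", -340m));
// dataGrid1.DataSource = summaries;   // dataGrid1 is a System.Windows.Forms.DataGrid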

So I guess another question I have is whether I'm just coming at C# from the wrong direction. Am I expecting to do things that are supposed to be done in other ways, and if so, is that the way I should write C# code?

Thanks to anyone who made it this far and especially anyone who answers
this.

Chip
 
Chip - you can use a Reader to populate your business objects, but you can use a DataTable to accomplish most common tasks.

If you use a DataTable/DataSet you can also bind your grid to it, and with the notion of RowState, that's a lot of wheel you won't have to reinvent.
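
Something along these lines (connection string, query, and table name are just placeholders):

using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

public class GridHelper
{
    // Fill a DataSet and bind the grid to it -- no row-by-row looping needed.
    public static DataSet BindPnl(DataGrid grid, string connectionString)
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT Account, Amount FROM PnlDetail", connectionString);
        DataSet ds = new DataSet();
        adapter.Fill(ds, "PnlDetail");
        grid.SetDataBinding(ds, "PnlDetail");
        return ds;
    }

    // RowState tracks what the user added, changed, or deleted;
    // the adapter pushes those changes back for you.
    // (Assumes the SELECT includes the table's primary key so the
    // CommandBuilder can generate the update commands.)
    public static void SavePnl(DataSet ds, string connectionString)
    {
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT Account, Amount FROM PnlDetail", connectionString);
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
        adapter.Update(ds, "PnlDetail");
    }
}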
 
W.G. Ryan eMVP said:
Chip - you can use a Reader to populate your business objects, but you can use a DataTable to accomplish most common tasks.

If you use a DataTable/DataSet you can also bind your grid to it, and with the notion of RowState, that's a lot of wheel you won't have to reinvent.
I'm not sure I understand. Do I create my own business objects and use
them somehow in a DataTable/DataSet, or can I not use DataTable/DataSet
if I have my own business objects?
 
Hey Chip,

Short answer:

Check out DataSets/DataTables as suggested to save time and effort.

Long answer:

Broad topic there. You probably won't find a consensus answer to this
question, but I think where many of us are headed--and I believe the
other response was pointing you in that direction too--is toward the use
of smart data containers that make binding to UI components simple, in
this case the ADO.NET DataSet. The alternative is the use of custom
business entities (with which you're obviously familiar).

In my current work I'm using the former approach after years of the
latter, because I believe that with this approach I can meet my
requirements without all of the additional effort required by
custom-coding everything. [This in turn leads toward a huge discussion
of business objects, object-relational mapping, and code generation
which has been done to death. I've used 'em all. Just search for any of
these phrases and you'll have plenty to read!]

I'd like to point you towards a good demo application, but really it
doesn't yet exist. I happen to like the basic architecture of the
QuickStore demo in Microsoft's UI Process Application Block, mainly for
its separation into logical layers (n-tier-type design; I wouldn't spend
a lot of time beyond that on the app because there's quite a lot there
to digest).

So look into DataSets, and possibly even (strongly) typed datasets. You
might then end up with business objects that reference/interact with
DataSets or business objects that inherit DataSets (composition vs.
inheritance). In either case the business objects can still manage your
business logic/rules. So you can still have a business object layer,
data access layer, etc.; it's just the means of data transport that will
change. Smart containers, data buckets, that's all.
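
Just to illustrate the two shapes I mean (schematic only, all names made up, and not a recommendation of one over the other):

using System.Data;

// Composition: the business object holds a DataSet and guards access to it.
public class PnlSummaryComposed
{
    private DataSet data = new DataSet("Pnl");

    // Hand this to the grid for binding.
    public DataSet Data { get { return data; } }

    public void ApplyBusinessRules()
    {
        // Business logic lives here and works against the contained tables/rows.
    }
}

// Inheritance: the business object *is* a DataSet (typed datasets make this nicer),
// so the object itself can be bound directly to the UI.
public class PnlSummaryInherited : DataSet
{
    public PnlSummaryInherited() : base("Pnl") { }

    public void ApplyBusinessRules()
    {
        // Same idea, different shape.
    }
}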

If you structure your project carefully, you'll end up with a nice,
layered design that offers the benefit of easy binding to any UI (Web,
Win, Web services) while keeping components loosely-coupled.

No more looping allowed! Let something else do the heavy lifting for you. ;)

- Mike
 
Mike,
Thanks for the thoughtful response. This is the sort of thing I was looking for. If I understand you correctly, then, you are saying that if I go with defining my own business objects independent of the DataSet model, I'll have to write a ton of interface code (to both the GUI and the database), thus reinventing the wheel.

So your recommendation is to use inheritance and/or containment to place
my business logic into the correct framework.

So in this little exploratory app I'm trying to build, I have "Detail"
models that hold P&L data at a very granular level. Then I have
"Summary" models which can hold an arbitrary number of the detail models
and summarize them up to different grouping levels. These "Summary"
models would essentially represent the rows in my dataset.

Since I don't yet know much about DataSets, especially the internals, I'm going to theorize about what I think I should do. I'd love to hear your input and recommendations. I think it will help me understand the language and its usage much better.

From my research, I see that the DataSet has a DataTableCollection, which holds DataTable instances. Each DataTable has a DataRowCollection, which holds DataRow instances. Finally, each DataRow has an ItemArray, which holds the individual pieces of data.

From what I see, I'd love to be able to have the ItemArray be an
instance of my "Summary" model.

So in order to use this as my row model, I believe I would have to do something with the DataColumnCollection, or more specifically, the DataColumn. It would make sense to me to have the DataColumn contain the message that must be sent to the row model in order to get the appropriate value. I can only assume that the current logic for obtaining the values of a row uses the ordinal of the DataColumn as an index into the ItemArray, so I would have to track down the piece of code that actually does this and override it somehow.

In terms of loading the data, it seems like the DataAdapter does the work of gluing the SQL data into the DataSet as well as setting up the DataColumns. So I guess I would subclass the DataAdapter to create objects from the raw data and place them into my modified DataSet, as well as altering the DataColumn (or my subclass of it) appropriately.


So Mike (and anyone else who made it this far), is this the approach that needs to be taken in order to combine the business logic? From rereading your message, it seems like I may have taken a much more comprehensive approach than is necessary. While I can definitely relate to reusing the architecture for data access and display, it seems to me like they could have made it a bit easier to glue in your own "Row" objects without having to do what I think needs to be done.

Thanks a lot for reading through and commenting.
Chip





 
Chip,

The DataSet is like an in-memory database--it can have one or more
tables, relationships, constraints, etc. In the typical Customer/Order
scenario these two tables might both live in the same dataset to take
advantage of these features.

Are Summary and Detail separate database tables? [They sound a bit like
an Order/OrderDetail structure.]

I recommend investigating typed datasets, if you haven't already. To do so, just add a new "Data Set" item to your project, then use Server Explorer to browse to a data source (for example, Servers | your server | SQL Servers | ...). Drag a table onto the designer pane, save, and then expand the dataset's node in your project file tree to see the generated class file wrapper. Instant data access class.
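
Once it's generated, using it feels something like this (I'm inventing an Orders table here; the generated class and row names will follow your own schema):

using System.Data.SqlClient;

public class TypedDataSetDemo
{
    public static void Show(string connectionString)
    {
        // OrdersDataSet and OrdersRow stand in for whatever the designer
        // generates from your table -- the names here are invented.
        OrdersDataSet ds = new OrdersDataSet();

        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT OrderID, CustomerID, Total FROM Orders", connectionString);
        adapter.Fill(ds, "Orders");

        // Typed rows and columns: no string indexing, no casting,
        // and the compiler checks the column names for you.
        foreach (OrdersDataSet.OrdersRow row in ds.Orders.Rows)
        {
            decimal total = row.Total;
        }
    }
}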

There are some choices to make here and there's definitely a learning
curve (and a conceptual shift, I believe) involved, but I've found some
nice benefits with this approach.

- Mike
 
Mike,
Believe me, I understand the need for a good UI and the need to implement it quickly and easily. As I've always understood it, one of the real strengths of object orientation is the ability to deal with data and logic as one. This implementation does not appear to encourage that type of development.

Yes, my objects are kind of a master/child relationship, except that the master objects are virtual; they are created on the fly as the data is grouped. This way I can create an arbitrary collection of "Child" records and summarize them. I have a Smalltalk application that does this very well. I can group, subgroup, etc. to any level and on any key (plus some).

As for a conceptual change, I would have thought that object-oriented design should be similar across these languages. And I think it is, because I can design the same objects in either, and I feel that my implementation has a solid grounding. However, this is a framework developed by MS, and it really doesn't seem to support object code very well. I can see how easy it is to create these data sets using the environment, but I'm still having problems getting over the fact that I cannot bind my business logic to my data.


 
Chip,

Creating a typed dataset generates a wrapper class file which you could
then modify with business logic. However, if your database schema
changed, you'd need to regenerate the dataset and any changes you made
would be lost. The solution to this problem would be to derive a class
from the typed dataset and add your business logic there, since you
would have easy access to all of the properties of the parent class.
Then if you needed to regenerate the parent you could do so without
losing the business logic contained within the child class.
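
In sketch form, assuming a generated typed dataset that I'll call OrdersDataSet (the names are invented):

// OrdersDataSet is the designer-generated typed dataset (the name is made up here).
// The business rules live in this derived class, in its own file, so
// regenerating the parent doesn't wipe them out.
public class OrdersBusinessData : OrdersDataSet
{
    public void EnforceRules()
    {
        foreach (OrdersDataSet.OrdersRow row in this.Orders.Rows)
        {
            if (row.Total < 0)
            {
                row.SetColumnError("Total", "Total cannot be negative.");
            }
        }
    }
}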

BUT, you mentioned the following:

Yes, my objects are kind of a master/child relationship, except that the master objects are virtual; they are created on the fly as the data is grouped.

In my current project I have a large yet relatively straightforward
database design, so instead of writing data wrapper classes I chose to
let Visual Studio do the heavy lifting (I still have to model the
business rules, and do so in a business object layer).

If your business entities don't correspond to concrete database tables,
then you need to assemble them in some way, as you've stated (you could
use stored procedures, but then you'd have a little logic here, a little
logic there, and everywhere). So you do have an extra step but I believe
you could still benefit from the approach I've described because you'll
have to choose a container for your data, which typically means using
collections or a "smart" container like a DataSet or DataTable.

BUT (another big but), there's this:

I can group, subgroup, etc. to any level and on any key (plus some).

If this--especially the "plus some"--means custom or
application-specific functionality, then code-your-own may be the way to
go. I guess it depends on how complicated the groupings are and whether
they can be handled by the built-in data objects.

Well, I believe it all depends on requirements. I think you'll be able
to write any type of classes you like (okay no multiple inheritance, but
interfaces help). I'm still coding for maximum flexibility and with an
eye toward the future, but based on current requirements my answer for
"How much is enough?" now is "Just enough." ;)

- Mike

"Everyone I know has a big but. C'mon Simone, let's talk about your big
but." - Pee-wee
 
Mike,
Can you explain a little bit about the "code-your-own" method you mention at the end of your response? I'll tell you what I'm up to and maybe you can comment a little more.

I'm a senior designer and developer for a hedge fund. I've written their entire application in Smalltalk, which is a great tool for the complex modeling of esoteric trading instruments. So for the last 7 years I've been doing some really good work with objects, but in Smalltalk. Now that I am looking around in the job market, I'm finding that I need more breadth in my programming languages. So I'm very comfortable with objects and system/application design, but I need to ramp up on C#.

Now I realize that generating interfaces using the available tools is a great way to ramp a system up, but I wonder whether shops that are doing serious C# development are using them or doing "code-your-own". If I were the project manager of a reasonably large system, I think I would balk at using those tools, because there is an obvious divergence from how I would go about designing the domain models. So one of my questions is: what am I more likely to find in a development shop that is writing a serious trading app? I'd like to spend my time learning the ins and outs of that instead of generating code.


Thanks,
Chip
 
If I may jump in on this discussion....

Mike, I'm very interested in your experience, because I'm starting down
the road of grappling with the same problems in C# after years of
working in C, C++, and Java, "rolling my own" business objects.

I see Microsoft pushing heavily toward DataSets as the glue between the
database and the UI, creating lots of powerful features in the UI that
connect to DataSets (some that seem to work and others that don't), and
making it easy to get DataSets out of databases and across the network.

However, in all of this they seem to have collapsed the n-tier
application down to two tiers: the database and the heavyweight UI*
with DataSets as the go-between. The business layer seems to be spread
throughout the UI. Not nice.

One MS speaker I heard recently actually almost admitted that MS is seeing Web Services as a big plus because the DataSet architecture it's been pushing up to now really is (hush, don't spread it around)... two-tier, and they were wondering how they were going to dig themselves out of that hole.

I'm watching all of this with great interest, probably writing far too
much code as I figure out how I can take advantage of all of this great
drag-n-drop technology without losing my business layer.

Sounds like you've been where I am and made a decision. I'd be
interested to hear about your experiences.

*No fair claiming that ASP.NET means a super-thin UI; it's still a heavyweight UI... it just happens to run on the IIS server rather than on the user's desktop. :)
 
Bruce,
This is what I'm wrestling with also. I'm coming from Smalltalk, which is poor on the UI but IMHO very strong at segregating the domain logic from the UI. Either I'm missing something or Microsoft has really hosed things with this design. I'm basically posting the same question in various places to see if I'm just missing something or if this is really "the way to do it".

Chip
 
Hi Bruce,

Here's my take on this . . .

I understand your (and Chip's) concerns, but I think what you're seeing
is the result of Microsoft marketing the ease-of-use of their products.
It's okay with me that they do, because they're a business trying to
demonstrate the value of their tools, and they will work as advertised.
[Just wanna give them a break, for once. ;)]

If we could only build two-tiered apps with this technology we'd all go
running to Java!

If you look at something like the QuickStore demo in MS's UI Process
Application Block, I think you'll see an example of how an application
could be structured. [I'm talking about the basic layers of an
application; I wouldn't take the sample too literally.]

Here are the basic layers of an application based on the demo (other
standard application services, such as security management, might be
built separately--possibly as Web services--and shared organization-wide):

- UI (this can be WebForms, WinForms, Web services)
- UI (process) controller(s)
- Business Entity
- Business Objects
- Data Access Layer

The Business Entity is the means of data transport and is passed among
the layers. In this case each entity is a typed dataset. It could also
be some other data structure, but typed datasets can be automatically
generated and are most easily bound to UI components. [In Chip's
scenario it's not as straightforward since entities don't have concrete
table mappings. I think there are workarounds, though.]

The business entity is just a data bucket--each layer gets handed the
bucket, adds to it, takes from it, or just passes it on to the next
responsible layer.

The UI's dumb and only knows how to talk to the UI Controller. The UI
Controller (you can have as many of these as you need; the demo has a
bunch) of course orchestrates the process of gathering data from the UI
and then passing the bucket to the business object layer.

The business objects can do their thing (enforcing rules, etc.) and can
then hand off the entity to the data layer or, alternatively, hand it
back to the controller to let it do it (based on design decisions).
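
Stripped to its bones, the hand-off looks something like this (all of the names are invented, and the entity here is just a plain DataSet rather than a typed one):

using System.Data;

// The entity (the "bucket") is just a DataSet here; in the demo it's a typed dataset.
public class OrderController                // UI process controller
{
    private OrderRules rules = new OrderRules();

    // The UI only ever talks to this method.
    public void Save(DataSet orderEntity)
    {
        rules.Validate(orderEntity);
        rules.Save(orderEntity);
    }
}

public class OrderRules                     // business object layer
{
    private OrderData data = new OrderData();

    public void Validate(DataSet orderEntity)
    {
        // Enforce business rules against the entity here.
    }

    public void Save(DataSet orderEntity)
    {
        data.Save(orderEntity);             // hand the bucket to the data layer
    }
}

public class OrderData                      // data access layer
{
    public void Save(DataSet orderEntity)
    {
        // A SqlDataAdapter.Update call (or similar) would live here.
    }
}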

And so on down through the layers. Basic validation can be handled in
each layer (different UIs have different validation methods, for
example), and if complex you might build a validation-specific layer as
well. It's the n-tier app, .NET-style. Each of these layers is a
separate project in your solution, and thus a distinct assembly when
compiled. When adding project references, each project only knows about
the other assemblies it needs to communicate with. For example, the UI
project isn't going to know anything about the data access layer.

This is all very specific and just a demonstration showing that you
don't have to build a two-tiered app in .NET. I'm sure with the
experience you guys have, none of these concepts is particularly
new--I'm just trying to show that with .NET you can get the job done. :)

- Mike
 
Chip,

Because Microsoft makes an effort to market their products from an
ease-of-use standpoint, I think in all fairness some of their
capabilities get lost in the message. So if there's any problem here it
may be more of a business issue than a technical one. And I think your
perception is shared by many coming from other technologies such as Java.

I've only read about Smalltalk but other than some language features
that you may not find in C# (similar to Java in design approach), in
general I think you'll be able to do what you need to do with the Framework.

As for the "complex modeling of esoteric trading instruments," I'm no
help there so hopefully someone here with .NET experience in your domain
can comment on that. ;)

I will say, though, that any limitations you encounter will not be
because Microsoft promotes drag-and-drop data components!

Okay, on to the real questions.

I believe there are two issues we have to deal with: where to model
business rules and how to manage data persistence.

Regarding business rules, I think your question is: does Microsoft
expect us to shovel everything into one tier?

I can safely state (though nobody appointed me spokesman ;) that most of
us developing larger applications are not using drag-and-drop data
access components. I still have distinct business and data-related
tiers, and I'm sure many others do as well.

So while modeling business rules may be the most important and
potentially complicated aspect of our projects, their destination is
pretty much a given based on proven practices.

I think persistence is tougher to address not because it's more complex,
but rather due to all the options we have available, among them:

- built-in objects vs. custom objects
- coding by hand vs. automation
- and among the automation approaches: code generation (compile-time)
vs. object-relational mapping (run-time)

Using the example of my current project (again), I chose a) automation,
using b) built-in objects (typed datasets), for c) a compile-time, code
generation data entity solution.

In contrast, my colleagues working on a different project chose a)
coding by hand, using b) custom objects (business domain objects), for
c) a run-time, object-relational mapping solution (uses reflection and
has a bunch of clever features). [As an aside, I built a small code
generator for their domain objects which makes almost the whole process
automated. From my standpoint, though, there are a lot of moving parts
which makes maintenance and accessibility to future developers more
difficult, a key consideration in my quest for a simpler solution with
fewer moving parts.]

So to sum up, I doubt you'll end up using drag-and-drop data access
components, and I hope I've been able to outline some of the issues I
think you will encounter as you begin to design your system.

If you'd like to research particular tools--some of which are
freeware--just search for "code generator" or "object-relational
mapping" in these newsgroups and you'll turn up the usual suspects (if I
mention them by name they'll appear as if conjured). ;)

I decided to go with typed datasets this time. If you want to find some
good discussions of this topic in the groups search for threads written
by Kathleen Dollard in the 2002-2003 timeframe.

Maybe others have some insights they can share, too.

- Mike

"Homer sleep now."
 
Hi CDX,

In regards to your question on design, did you check out the article I wrote on this very subject?

http://www.developersdex.com/gurus/articles/739.asp

If you wish, I could email you some code to get you started.

I also believe that using the reader in combination with stored procedures is the most efficient way to retrieve data.

Happy Coding,

Stefan
C# GURU
www.DotNETovation.com

"You always have to look beyond the horizon and can never be complacent
-- God forbid we become complacent."

Jozef Straus
 
Very interesting and educational. Mike, thank you very much.

A couple of questions, though, or rather a couple of sticking points, most likely the result of my lack of understanding.

First, one of my worries about this design is that since the Business
Entity, the typed data set, is automatically generated, doesn't that
set you up for churn problems? If you change your database schema,
doesn't that change the Business Entity, the typed datasets (which I
realize are classes auto-magically generated from the dataset schemas)?
Or do you have to take care to isolate the data sets used as the
Business Entity from the actual structure of the database and the Data
Access Layer has to mediate? The latter seems to me the only way to
make the thing resilient in the face of change, but then perhaps I
don't understand. :)

Second, is the Business Entity then just dumb data, which passes by the
Business Objects on its way to and from the database? If so, doesn't
this mean that it's not really protected... not really encapsulated,
because anyone along the way can alter the data in arbitrary ways
within the limits of what the dataset schema defines? This sounds to me
more like the old-style separation of data from code.

However, you made a remark to Chip earlier about inheriting from the
classes generated from the strongly-typed data set and implementing
business rules there, which makes more sense to me: under that design,
the business rules travel around with the data and protect it no matter
where the Business Entity goes.

The fog is beginning to lift.... :)
 
Bruce,

I think your analysis is spot-on.
First, one of my worries about this design is that since the Business
Entity, the typed data set, is automatically generated, doesn't that
set you up for churn problems? If you change your database schema,
doesn't that change the Business Entity, the typed datasets (which I
realize are classes auto-magically generated from the dataset schemas)?

Yes and yes. ;) I believe that's true of substantial changes within most
any architecture, but if your database design isn't straightforward
and/or changes frequently, extra work needs to be done.
Or do you have to take care to isolate the data sets used as the
Business Entity from the actual structure of the database and the Data
Access Layer has to mediate?

Can that be done effectively, from the standpoint of both costs and
benefits? I guess it really depends on the frequency of change--we do
what's required to meet project goals and requirements.
However, you made a remark to Chip earlier about inheriting from the
classes generated from the strongly-typed data set and implementing
business rules there, which makes more sense to me: under that design,
the business rules travel around with the data and protect it no matter
where the Business Entity goes.

Yep, that's the solution others have come up with (there are some good
posts from Kathleen Dollard on this topic from 2002-2003--interesting
discussion!).
The fog is beginning to lift....

Well once it clears and you've got this all figured out, please come
back and tell us so we'll know how it's done. ;)

- Mike
 
Stefan,
I'd love to read your article and look over some source code. Send it to
the email above without the NoSpam

Thanks for your input,
Chip
 
Or do you have to take care to isolate the data sets used as the Business Entity from the actual structure of the database and the Data Access Layer has to mediate?

Can that be done effectively, from the standpoint of both costs and benefits? I guess it really depends on the frequency of change--we do what's required to meet project goals and requirements.

One of the problems I've battled over the years is the tendency of
applications, which multiply like rabbits, to know too much about how
the data is structured, essentially casting your database schema in
stone. When you come along to change things in the database, kaboom! a
bunch of (usually business-critical) applications blow up because they
assumed such-and-so about the data.

Another problem is that things aren't necessarily stored in the
database in the same way that the business thinks of them. This can be
by accident (original programmers didn't understand the business well
enough) or by design (you can make more effective use of the database
by folding two business entities into one table, or having multiple
tables storing information about one business entity).

I agree that modern databases have come a long way in solving the first
problem. Changing the length of a field no longer trickles out to the
application and blows it up (particularly in .NET where you have
dynamically sized strings). You can add columns to a table without
affecting applications that don't care about those columns. To that
extent, it's a fair question to ask how much more engineering against
change is cost-effective.

However, as an example, what happens if I normalize a database table
into two tables (a lookup and what's left of the original table)? Oh, I
know: we're all supposed to design database schemas with everything
normalized from the outset... and we all do, right? :*) You have to
have some way of allowing old applications to see the data as if it
were still denormalized, or you have a lot of rewriting to do just for
an internal reorg of your data. Ugly. Or, even more ugly, you don't
normalize the data because it would cause too much code churn, so you
just live with it the way it is and it gets worse and worse....

The second problem is more difficult, I think: for business reasons,
technical reasons, or poor design, your database tables don't map
one-to-one onto business objects. Now what? For example, where I work,
a lift, or pallet is a stock item, but from the business's point of
view it's a special stock item with special rules and operations
associated with it. In object terms that sounds like a subclass to me,
but in the database it's just another row in the inventory master.
Mediating between these two points of view is what a business layer is
supposed to do, but I'm having trouble imagining how you do that if
your business entities are auto-generated from database tables.

As I said, my impression is that Microsoft is looking to Web Services
to save the day here: the business layer lives on the server side, and
delivers data to the client organized in business terms, no matter how
it happens to be stored in the database. The client can then accept the
data as an ADO.NET dataset, secure in the knowledge that the schema of
the dataset depends upon the WSDL (the Web Service contract), not the
organization of the raw data. Of course, the contract can always
change, but that's a business-level, logical change, not a data
reorganization.
 
Bruce,

All great points, IMO, and I agree.

I think the use of stored procedures as (yet another) layer of
abstraction can help with some of the issues you've mentioned. You can
generate typed datasets from stored procedures, thus avoiding the
one-to-one table mapping. Of course the usual drawback there is having
another separate collection of logic to maintain and it isn't the "big"
solution to these problems. SOA is. ;)
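
For example, something like this (the procedure name and parameter are made up):

using System.Data;
using System.Data.SqlClient;

public class PnlData
{
    // Fill a dataset from a stored procedure instead of a raw table,
    // so the entity's shape follows the proc rather than a one-to-one table mapping.
    public static DataSet GetSummary(string connectionString, int deskId)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        SqlCommand cmd = new SqlCommand("usp_GetPnlSummary", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@DeskId", SqlDbType.Int).Value = deskId;

        SqlDataAdapter adapter = new SqlDataAdapter(cmd);
        DataSet ds = new DataSet();
        adapter.Fill(ds, "PnlSummary");     // Fill opens and closes the connection itself
        return ds;
    }
}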

- Mike
 
Just to Chip in here :) on the topic of change. With my system, change is constant. It's not a matter of me not getting the design right in the first place (not that that never happens), but rather the fact that the system and the business are always expanding to encompass new features. There are new types of securities out there that traders want to trade, or we now want to track the carry costs of each position. My system has been designed to be flexible and to adapt quickly to these demands, but it has not (purposely) been designed to attempt to handle all possible situations.

So it is essential that I can keep the business logic away from the persistence and the UI. It just doesn't cut it having to deal with this other logic when dealing with the "business object". That is why I'm really taken aback that we even need to have this discussion. I consider this separation to be very elementary to good object design and cannot fathom why Microsoft would let something like this come to market. Let's assume that they agree and come up with better frameworks for this. What about all the code that's out there right now that worked around this problem (or, worse yet, ignored the separation)? That sort of change is going to be tough to swallow for a lot of systems.

Chip
 