Garrek
I'm curious about opinions on the use of Typed Datasets with a database
larger than the handful of tables provided in most samples.
Say I have a database with 100 tables. I'll break these 100 tables
into logical groups such as User tables, Customer tables, Order tables,
Product tables, Vendor tables, and others.
Is there a recommended approach among the following:
1. Create one massive Typed Dataset that represents all tables in your
system. Whether you need it just to store a new customer or to create a
complete chain of events from a customer ordering a new product from a
new vendor, you will always use this single Typed Dataset.
One of the possible issues with this scenario is that I may want to
read partial data. If I create a massive Typed Dataset with relations
between every table, I'll have to worry about turning some of them
'off' to push the data in correctly (see the first sketch after this
list). Sounds like a nightmare.
2. Create multiple Typed Datasets broken into the logical groups
stated above. For instance, if four tables describe a Customer, you
would create a single Typed Dataset of those four tables. One point of
confusion with this method, in my opinion, is deciding where to place
a table that links many Customers to an Order, given that in this
scenario the Order tables would live in a separate Typed Dataset (see
the second sketch after this list).
3. What about Typed Datasets for reporting? Or is the consensus in
this case to stick with an untyped DataSet? For instance, I want a
report of all orders complete with some min/max values, averages, etc.
I don't believe it would be appropriate to use a Typed Dataset created
by option #1 or #2 above; in my opinion those Typed Datasets are for
transactional purposes, not reporting (see the third sketch after this
list).
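To make the problem in option #1 concrete, here is roughly the
partial-fill dance I'm describing. This is only a sketch:
WholeDbDataSet stands in for whatever class the designer would
generate, and the connection, table, and column names are invented.

using System.Data;
using System.Data.SqlClient;

class PartialFillSketch
{
    // WholeDbDataSet is a hypothetical designer-generated class
    // covering all 100 tables, with relations between them.
    static WholeDbDataSet LoadOneCustomer(SqlConnection conn, int customerId)
    {
        WholeDbDataSet ds = new WholeDbDataSet();

        // Filling Customers alone violates every ForeignKeyConstraint
        // whose parent table we never load, so constraints come off first.
        ds.EnforceConstraints = false;

        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT * FROM Customers WHERE CustomerId = @id", conn);
        adapter.SelectCommand.Parameters.AddWithValue("@id", customerId);
        adapter.Fill(ds.Customers);

        // Setting EnforceConstraints back to true here would throw a
        // ConstraintException because the parent rows are still missing,
        // so the switch tends to stay off. That's the nightmare part.
        return ds;
    }
}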
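For option #2, one arrangement I've been toying with is to keep the
CustomerOrders link table in the Order group and let only plain key
values cross the boundary between Typed Datasets, instead of a
DataRelation. Again just a sketch with invented names (OrderDataSet,
CustomerOrders):

using System.Data;
using System.Data.SqlClient;

class LinkTableSketch
{
    // OrderDataSet is a hypothetical typed dataset for the Order group
    // that also carries the CustomerOrders link table; only the
    // CustomerId value crosses over from the Customer group.
    static OrderDataSet LoadOrdersForCustomer(SqlConnection conn, int customerId)
    {
        OrderDataSet ds = new OrderDataSet();

        SqlDataAdapter links = new SqlDataAdapter(
            "SELECT * FROM CustomerOrders WHERE CustomerId = @id", conn);
        links.SelectCommand.Parameters.AddWithValue("@id", customerId);
        links.Fill(ds.CustomerOrders);

        SqlDataAdapter orders = new SqlDataAdapter(
            "SELECT o.* FROM Orders o " +
            "JOIN CustomerOrders co ON co.OrderId = o.OrderId " +
            "WHERE co.CustomerId = @id", conn);
        orders.SelectCommand.Parameters.AddWithValue("@id", customerId);
        orders.Fill(ds.Orders);

        return ds;
    }
}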
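And for option #3, the untyped route I have in mind lets SQL do the
aggregation and catches the result in a plain DataTable. The query and
column names are made up for illustration:

using System.Data;
using System.Data.SqlClient;

class ReportSketch
{
    // No typed class at all: the database shapes the report and a
    // plain DataTable holds whatever columns come back.
    static DataTable OrderStats(SqlConnection conn)
    {
        DataTable report = new DataTable("OrderStats");
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT CustomerId, COUNT(*) AS OrderCount, " +
            "MIN(Total) AS MinTotal, MAX(Total) AS MaxTotal, " +
            "AVG(Total) AS AvgTotal " +
            "FROM Orders GROUP BY CustomerId", conn);
        adapter.Fill(report);
        return report;
    }
}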
Any feedback on this topic would be appreciated. I understand there is
no magic bullet. I've used Typed Datasets for a project and haven't
been as excited about their use as I had hoped. I'm wondering if
perhaps I did it improperly or... there is a better method. =)
Thanks.