Siegfried Heintze
Whenever I create a new database query, I have to answer several questions
that are even harder to ignore if you are using Visual Studio:
(1) Should I create a new table in an existing DataSet or in a new DataSet?
Is there any peril in having one table per DataSet and ending up with lots
of DataSets?
(2) Should I use a generic late-binding DataSet, or have Visual Studio
generate a custom early-binding (typed) DataSet class for me?
(3) Should I implement my one-to-many relationship in the SQL with an INNER
JOIN clause, or should I create two DataTables and join them in the DataSet
by dragging and dropping in the Visual Studio dataset designer?
(4) If I have a many-to-many relationship, is it possible to implement it
in the dataset designer instead of in the SQL with INNER JOIN statements?
What would be the advantage?
(5) Should I supply insert, delete, and update Command objects? What is the
advantage of assigning these Command objects to the data adapter instead of
just calling them directly as needed, if they are needed at all?
(5a) When you configure a new data adapter, you get the opportunity to let
Visual Studio automatically generate the INSERT, DELETE, and UPDATE SQL
statements and assign their respective Command objects to the adapter. I've
done it, and it even worked, but I did not feel like I understood it. How
does Visual Studio know how to generate the UPDATE, INSERT, and DELETE
statements when the SELECT contains a complex JOIN anyway?
(5b) When is Visual Studio unable to automatically generate the UPDATE,
DELETE, and INSERT statements and their respective Command objects? I
recently received a message from Visual Studio complaining that there was
not enough primary key information to generate them. That was fine, since I
could not imagine how Visual Studio would implement those statements
anyway. Was it because I was joining on fields that were secondary indices?
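For questions (5), (5a), and (5b), here is a minimal C# sketch of the two
approaches as I understand them: letting a SqlCommandBuilder derive the
commands from the SELECT versus assigning a command yourself. The table and
column names (Customers, CustomerID, Name) and the connection string are
made up for illustration, not from any real schema.

```csharp
using System.Data;
using System.Data.SqlClient;

class AdapterSketch
{
    static void Main()
    {
        // Hypothetical connection string and single-table query.
        var conn = new SqlConnection("...connection string...");
        var adapter = new SqlDataAdapter(
            "SELECT CustomerID, Name FROM Customers", conn);

        // Option A: a CommandBuilder derives INSERT/UPDATE/DELETE from the
        // SELECT. This only works for a single-table SELECT that includes
        // the primary key -- which would explain why a complex JOIN (or a
        // missing PK) defeats the automatic generation.
        var builder = new SqlCommandBuilder(adapter);

        // Option B: assign the command yourself. The adapter then applies
        // the appropriate command to each changed row when Update() runs,
        // which is the advantage over calling commands ad hoc.
        // adapter.UpdateCommand = new SqlCommand(
        //     "UPDATE Customers SET Name = @Name WHERE CustomerID = @CustomerID",
        //     conn);

        var ds = new DataSet();
        adapter.Fill(ds, "Customers");
        ds.Tables["Customers"].Rows[0]["Name"] = "New Name";
        adapter.Update(ds, "Customers"); // uses generated or assigned commands
    }
}
```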
I've successfully used the dataset designer to connect a foreign-key field
to the primary key of another table. (I'm not talking about the query
designer here -- this is connecting fields in the DataSet after they have
been read out of the database tables.) It worked, and I was impressed, but
I was not (and still am not) clear on the advantage. When I tried to do the
same thing with a many-to-many relationship, I could not figure out how to
do it.
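To make the designer question concrete, here is a sketch of what I believe
the designer generates under the hood: a DataRelation linking a child
table's foreign-key column to the parent's primary key. The table and
column names are again illustrative only.

```csharp
using System;
using System.Data;

class RelationSketch
{
    static void Main()
    {
        var ds = new DataSet();
        var customers = ds.Tables.Add("Customers");
        customers.Columns.Add("CustomerID", typeof(int));
        var orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderID", typeof(int));
        orders.Columns.Add("CustomerID", typeof(int));

        // One-to-many: each Customers row can navigate to its Orders rows
        // in memory via GetChildRows, without a SQL JOIN.
        ds.Relations.Add("CustomerOrders",
            customers.Columns["CustomerID"],
            orders.Columns["CustomerID"]);

        // A many-to-many has no single DataRelation equivalent; it would
        // presumably be modeled as two one-to-many relations through a
        // junction table, e.g. Customers -> OrderItems <- Products.
    }
}
```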
So I'd love some help on the perils and merits of each approach!
Thanks,
Siegfried