My understanding of the dataset

Hello

Let's say I have the following four tables:
1. Customer (100,000 rows)
2. Invoice (1,000,000 rows)
3. Detail (10,000,000 rows)
4. Product (100 rows)

Customer has a 1-M relationship with Invoice, and Invoice has a 1-M relationship with Detail. There is an M-1 relationship between Order and Product.

When I create a dataset that contains these four tables, use it on a Windows form and fill it, am I correct in assuming that all the data (roughly 11.1 million rows) will have come down the network pipe?

Someone told me that I was right, and that this is why subsequent operations are fast once the dataset gets filled: the data is available locally.

If that's true, I will have to put in some code so that the user has to enter some search criteria, and I retrieve only those customers that meet that criteria.
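
Something like the rough sketch below is what I have in mind (C# here; the connection string, column names and search text are only placeholders for my real schema): a parameterized query so that only the matching customers, and only the columns the form needs, get filled into the dataset.

using System.Data;
using System.Data.SqlClient;

// Sketch only: fill the Customer table with just the rows that match
// the user's search text, and just the columns the form displays.
private DataSet LoadCustomers(string connString, string searchText)
{
    DataSet ds = new DataSet();

    using (SqlConnection conn = new SqlConnection(connString))
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CustomerName, City " +
            "FROM Customer WHERE CustomerName LIKE @search", conn);
        cmd.Parameters.Add("@search", SqlDbType.VarChar, 50).Value =
            searchText + "%";

        SqlDataAdapter da = new SqlDataAdapter(cmd);
        da.Fill(ds, "Customer");   // Fill opens and closes the connection itself
    }

    return ds;
}

Is something like this the right direction?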

Thanks

Venkat
 
With that amount of data you shouldn't be loading it all into the dataset.

 
Hi Venkat,

Adrian is correct.

Here's the essential issue: ADO.NET provides disconnected datasets. If the
data is disconnected, it can only live in memory or in dynamically created
XML (and in memory for any practical purpose). If it's going into memory
then, yes, operations on it will be as fast as possible, but getting it all
into RAM is resource intensive.
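
To make that concrete (a tiny sketch; the dataset, table name and values are only examples): once the dataset has been filled, everything you do with it runs against the client's memory, and the XML form is only generated on demand.

using System.Data;

// ds is a DataSet you have already filled. These all run locally,
// without going back to the server:
DataRow[] matches = ds.Tables["Customer"].Select("City = 'London'");
string xml = ds.GetXml();                   // XML generated on the fly
ds.WriteXml(@"C:\temp\customers.xml");      // or written out to a file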

Try to find ways to limit the initial draw of data. This is more difficult
than many would have you believe, so I understand your problem.
Nevertheless, you have to narrow (fewer columns) and/or filter (fewer rows)
to the extent possible, or else it will take a very long time to get it all
into RAM. I have 1 GB of RAM, and loading a 25-column, 650,000-row table
takes minutes, not seconds.
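
As a rough illustration of filtering (C# here, the same idea works in VB.NET; table and column names are only guesses at your schema): don't pull the million Invoice rows up front. Fill them per customer, when the user actually drills into one.

using System.Data;
using System.Data.SqlClient;

// Sketch: add to the dataset only the invoices of the selected customer.
private void LoadInvoicesFor(DataSet ds, string connString, int customerId)
{
    using (SqlConnection conn = new SqlConnection(connString))
    {
        SqlCommand cmd = new SqlCommand(
            "SELECT InvoiceID, CustomerID, InvoiceDate, Total " +
            "FROM Invoice WHERE CustomerID = @id", conn);
        cmd.Parameters.Add("@id", SqlDbType.Int).Value = customerId;

        SqlDataAdapter da = new SqlDataAdapter(cmd);
        da.Fill(ds, "Invoice");   // only this customer's rows come down
    }
}

The same applies to Detail: load it per invoice, not per database.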

HTH,

Bernie Yaeger

 
Hi Venkat,

I think that if you are doing this in a multiuser environment, your
application will probably be busier correcting concurrency errors than
doing real processing.

.NET data access is disconnected data processing.
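
A rough sketch of what I mean (the adapter, dataset and table name are only examples, and the adapter is assumed to have its update commands configured, for instance with a SqlCommandBuilder): when you send the disconnected changes back, you have to handle the rows that another user changed in the meantime.

using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

// Sketch: push the changes in the Customer table back to the database.
private void SaveCustomers(SqlDataAdapter da, DataSet ds)
{
    try
    {
        da.Update(ds, "Customer");
    }
    catch (DBConcurrencyException ex)
    {
        // Another user changed or deleted this row after we read it.
        // ex.Row is the offending row; reload it, merge it, or give
        // the user the choice.
        MessageBox.Show("Customer " + ex.Row["CustomerID"] +
                        " was changed by another user.");
    }
}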

Just my thought,

Cor
 