Generally speaking, yes, though bigger and faster hard disks will also
help...
There's really no easy answer to this one - however, this wouldn't be a bad
place to start:
http://www.microsoft.com/downloads/details.aspx?FamilyID=e2c0585a-062...
This raises the obvious question: are your tables properly indexed? *Storing*
2 million records will not faze SQL Server one bit. However, if you're
trying to *fetch* 2 million records from your database to your webserver,
then you really need to rethink your application...
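As a purely hypothetical example (Orders, CustomerID etc. are placeholders -
substitute your own tables and columns), indexing the join key and the
columns your where clause filters on is usually the first thing to try:

    -- index the column you join on
    CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
    -- index the column(s) your search criteria filter on
    CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)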
Thanks for your reply! I will take a look at that tool you suggested.
There are two extremes to the result sets we expect from these tables.
1) Fetch just a few records out of the 2 million that satisfy a
certain criteria (where clause) after a join operation among large
tables, and display them in a DataGrid.
2) Get all the records from 2 tables of 2 million records each, joined
on their relationship, without any where clause, and populate the same
DataGrid.
It's basically a search module where the user is given 5 parameters to
fill in their search criteria. If the user wants to get all the records,
he doesn't provide any search criteria and just clicks Search. This
search needs to access these huge tables.
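Roughly, the query for the first scenario has this shape (the table,
column and parameter names below are just placeholders, not our real
schema, and each of the 5 criteria is optional):

    -- simplified sketch: join two large tables, apply whichever of the
    -- optional search parameters the user filled in, return a handful of rows
    SELECT TOP 10 a.KeyCol, a.Col1, b.Col2
    FROM TableA a
    JOIN TableB b ON b.AKey = a.KeyCol
    WHERE (@Param1 IS NULL OR a.Col1 = @Param1)
      AND (@Param2 IS NULL OR b.Col2 = @Param2)
    ORDER BY a.KeyCol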
In both cases there is obviously a join operation on 2 large tables.
It takes around 2-3 mins for such a query to run even in SQL Server
Query Analyzer or Enterprise Manager. How do we improve this in the
first place? If we just look at the first scenario for now, there isn't
a problem with loading a large amount of data into the webserver, since
we are only fetching a few records (10, for example) and only those
would be loaded into the webserver.
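So far we have only looked at the elapsed time; presumably the next step
is to check the I/O statistics and the execution plan for the join in
Query Analyzer, along these lines:

    -- show per-table reads and CPU/elapsed time in the Messages pane
    SET STATISTICS IO ON
    SET STATISTICS TIME ON
    -- (run the join query from above here, with the execution plan
    --  display turned on, to see which steps do table scans)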
Thanks for your time.
Sum.