SQLAdapter and scalability

  • Thread starter: Galin Iliev
Hello guys,
I am researching the best way to update SQL Server with a bunch of
information.
Let's suppose we have one table in a SQL database with 500 rows. We load it
into memory as a DataSet, add 500 more rows, change 250, and delete 100.
When you call SqlDataAdapter.Update(dataset), you will get 500 INSERT
queries (or stored procedure calls), 250 UPDATEs, and 100 DELETEs.
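
To make that command count concrete, here is a minimal sketch using an
in-memory DataTable (the table and column names are invented for
illustration). SqlDataAdapter.Update walks exactly these per-row states and
issues one command per changed row:

```csharp
using System;
using System.Data;

class ChangeCountDemo
{
    static void Main()
    {
        // Hypothetical in-memory table standing in for the 500-row SQL table.
        var table = new DataTable("Items");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));

        for (int i = 0; i < 500; i++)
            table.Rows.Add(i, "original " + i);
        table.AcceptChanges();               // rows now count as Unchanged

        for (int i = 500; i < 1000; i++)     // add 500 new rows
            table.Rows.Add(i, "new " + i);
        for (int i = 0; i < 250; i++)        // change 250 existing rows
            table.Rows[i]["Name"] = "changed " + i;
        for (int i = 250; i < 350; i++)      // delete 100 existing rows
            table.Rows[i].Delete();

        // SqlDataAdapter.Update inspects these per-row states, issuing one
        // INSERT/UPDATE/DELETE (or stored-proc call) for each changed row.
        int inserts = 0, updates = 0, deletes = 0;
        foreach (DataRow row in table.Rows)
        {
            switch (row.RowState)
            {
                case DataRowState.Added:    inserts++; break;
                case DataRowState.Modified: updates++; break;
                case DataRowState.Deleted:  deletes++; break;
            }
        }
        Console.WriteLine("{0} inserts, {1} updates, {2} deletes",
                          inserts, updates, deletes);
        // prints: 500 inserts, 250 updates, 100 deletes
    }
}
```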

The problem comes when 100 or 1000 users decide to do this at the same time.

The question is: is this the best way (according to Microsoft) to update SQL
Server with a bunch of information, or is there another approach? Maybe
something like packaging the information into one unit and transferring it
to the server?

In your opinion, which is the best way: using a central SQL server
- OR -
one central server and many local MSDE instances with replication?

I know it depends on the case, but let's talk a little bit
theoretically.

Thank you in advance

Galin Iliev [MCSD]
Senior Software Developer
Software Consulting Services
 
These are some good questions.

The most important thing to realize is that scalability is not performance.

Say you own a business in which you use donkeys to lug loads from point A to
point B. You could get an Arnold Schwarzenegger donkey - that's performance.
Or you could get 5 Spice Girls donkeys that work well with each other -
that's scalability. And 5 Spice Girls donkeys, or let's say 50, will
actually lift more load than one Arnold Schwarzenegger donkey ever will.

Also, scalable systems will in many cases end up costing less for the same
high throughput than a single high-performing system.

As far as pushing massive updates into SQL Server goes - ADO.NET provides
many facilities. For an ultra-scalable, high-performing system, I would:

a) Create a stateless data abstraction layer and wrap all SQL Server
interaction there.
b) Not use CommandBuilder; specify your own update/insert/delete commands.
c) Use a good concurrency management architecture that doesn't prove too
heavy on the system.
d) Look into SQLXML and passing XML to your stored procs. With Yukon you
will be able to pass in a DataSet directly. This prevents multiple calls to
a stored proc.
e) Design a chunky, not chatty, interface/architecture.
f) Design your application in such a way that every logical module has its
own connection string - that way you have the flexibility of offloading a
badly written app onto a different d/b.
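
Point (b) can be sketched like this - hand-wired commands in place of
CommandBuilder. The table ("Items") and its columns are invented for
illustration, and note that the bare WHERE Id = @Id clauses imply
last-writer-wins concurrency, so a real system would add the checks point
(c) calls for:

```csharp
using System.Data;
using System.Data.SqlClient;

class AdapterFactory
{
    // Hand-written adapter commands instead of CommandBuilder: you control
    // the SQL (or could swap in stored-proc calls), and nothing is derived
    // at runtime from schema metadata.
    public static SqlDataAdapter BuildItemsAdapter(SqlConnection conn)
    {
        var adapter = new SqlDataAdapter("SELECT Id, Name FROM Items", conn);

        var insert = new SqlCommand(
            "INSERT INTO Items (Id, Name) VALUES (@Id, @Name)", conn);
        insert.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
        insert.Parameters.Add("@Name", SqlDbType.NVarChar, 50, "Name");
        adapter.InsertCommand = insert;

        var update = new SqlCommand(
            "UPDATE Items SET Name = @Name WHERE Id = @Id", conn);
        update.Parameters.Add("@Name", SqlDbType.NVarChar, 50, "Name");
        update.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
        adapter.UpdateCommand = update;

        var delete = new SqlCommand("DELETE FROM Items WHERE Id = @Id", conn);
        delete.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
        adapter.DeleteCommand = delete;

        return adapter;
    }
}
```

Calling adapter.Update(dataset) then uses these three commands, with each
parameter bound to its source column, instead of SQL that CommandBuilder
generates on the fly.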
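
Points (d) and (e) together amount to this kind of sketch: serialize the
changes to XML and hand them to the server in ONE stored-proc call instead
of hundreds. The proc name "dbo.SaveItemsXml" and its @ItemsXml parameter
are hypothetical; server-side, such a proc would shred the XML (for example
with OPENXML) into set-based INSERT/UPDATE/DELETE statements:

```csharp
using System.Data;
using System.Data.SqlClient;

class ChunkySave
{
    // One chunky round trip: the whole change set travels as a single XML
    // parameter to a (hypothetical) stored procedure.
    public static SqlCommand BuildSaveCommand(DataSet changes,
                                              SqlConnection conn)
    {
        var cmd = new SqlCommand("dbo.SaveItemsXml", conn);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("@ItemsXml", SqlDbType.NText).Value =
            changes.GetXml();
        return cmd; // caller opens the connection and calls ExecuteNonQuery()
    }
}
```

In practice you would pass dataset.GetChanges() rather than the full
DataSet, and likely serialize with WriteXml(…, XmlWriteMode.DiffGram)
instead of GetXml(), since GetXml() alone does not carry row states or
deleted rows.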

Regarding a central SQL Server versus replicated MSDEs:

I like a central SQL Server for the following reasons:
a) It is easier to maintain.
b) You can cluster it, you can log-ship it, you can create failovers
easily, and of course there's replication - and don't forget that running
reports is MUCH easier when you don't have to wrestle with data in 600
databases.
c) You have one thing to back up.
d) You need one DBA, LOL.
e) You do not need to worry about your d/bs growing over 2 gigs (the MSDE
restriction).
f) You do not have to worry about synching databases.
g) Many other reasons.

The only advantage a distributed MSDE architecture would give you is that
people who connect only to the local MSDE d/b will not go down when the
central SQL Server goes down (well, you could architect the d/bs to do
that). So in a system spread all over the United States, for example, that
might be a good idea. (I've actually helped architect exactly that for
www.hanger.com.) But then you have the logistical problem of some database
or other always being down, and your 10-DBA team running around like
chickens with their heads cut off.

- Sahil Malik
You can reach me thru my blog at
http://www.dotnetjunkies.com/weblog/sahilmalik


