This is usually a good idea, but it depends on your goals.
Amount of data is actually an argument in favor of putting logic in stored
procedures. The fastest way to process data is to do it right at the
database. While T-SQL takes a performance hit from being an interpreted
language, that hit usually pales in comparison to the cost of shuffling data
over the network (including the waits for round trips).
Complexity of operation, on the other hand, is an argument in favor of
writing the logic in C#, or any programming language more convenient than
T-SQL (which is practically all of them). That said, SQL is capable of
complex set-based operations that would be much more tedious to write in C#,
so what counts as "complex" varies.
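For example, something like this (table and column names made up for
illustration) gives every preferred customer a 10% discount on their order
lines in one set-based statement; the C# equivalent would have to fetch the
rows, loop over them and write each one back:

    -- Illustrative schema: a single set-based statement reprices every
    -- order line belonging to a preferred customer.
    UPDATE ol
    SET    ol.UnitPrice = ol.UnitPrice * 0.9
    FROM   dbo.OrderLines AS ol
    JOIN   dbo.Orders     AS o ON o.OrderId    = ol.OrderId
    JOIN   dbo.Customers  AS c ON c.CustomerId = o.CustomerId
    WHERE  c.IsPreferred = 1;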
For business logic, "complex" usually means "contrived", from a programming
point of view: this action is allowed, but only if such-and-such condition is
satisfied, unless... This kind of logic needs maintainability and robustness
above everything else, which makes T-SQL a poor choice (as we'll get to in a moment).
That said, putting as much "trivial" consistency and validation in your
database as possible is still a good idea. Everything will need to pass it
before being stored, so any errors you can catch there are things you never
need to worry about. Inconsistent data is a huge pain to fix afterwards, if
it can be fixed at all. Anything you *can* chisel in stone regardless of the
variable business logic (like "the price of an item may not be negative")
*should* be chiseled in stone, in the form of constraints. Where to draw the
line is a design issue.
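For instance, the price rule above amounts to nothing more than a CHECK
constraint (the table name is illustrative):

    -- Reject negative prices at the door, no matter which code path writes them.
    ALTER TABLE dbo.Items
        ADD CONSTRAINT CK_Price_not_negative CHECK (Price >= 0);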
Even this is *in addition to* checking it at the business logic level, not
*instead of*, as your database usually can't tell you what was wrong with
the data on a logical level, and you (usually) can't subject your users to
an "INSERT statement violates constraint 'CK_Price_not_negative'" message.
This is also another argument against putting your business logic in stored
procedures: you'll have to devise a consistent, machine-readable way for
reporting errors back and stick to it. Error-handling strategies are
difficult enough to get right for applications written in one language, let
alone for those written in two.
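One possible convention, sketched here with made-up names and error numbers,
is to register application error messages once and always raise them by
number, so the C# side can catch the SqlException and map the number back to
a friendly message:

    -- Register the message once per server (numbers above 50000 are user-defined).
    EXEC sp_addmessage @msgnum = 50001, @severity = 16,
         @msgtext = N'The price of an item may not be negative.';
    GO

    CREATE PROCEDURE dbo.SetItemPrice
        @ItemId int,
        @Price  money
    AS
    BEGIN
        IF @Price < 0
        BEGIN
            RAISERROR(50001, 16, 1);  -- the application keys off error number 50001
            RETURN;
        END;

        UPDATE dbo.Items SET Price = @Price WHERE ItemId = @ItemId;
    END;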
Ease of development and quality of implementation are the most important
arguments against logic in stored procedures: T-SQL is slow and clumsy,
tends to produce reams of code that's hard to structure (passing information
between stored procedures is an art unto itself), is tricky to write
robustly (even with the new TRY...CATCH construct) and is hard to version. All this
is bad news for maintainability.
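To illustrate the robustness point: even a trivially "safe" update takes
roughly this much ceremony once TRY...CATCH and transaction handling are
involved (illustrative table; SQL Server 2005 or later):

    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Items SET Price = Price * 1.1 WHERE CategoryId = 42;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Roll back if a transaction is still open, whether committable or doomed.
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;

        -- Re-raise, or the caller never learns the update failed.
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH;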
A less mentioned drawback (because most people don't care about this
scenario) is that T-SQL (procedural SQL in general) isn't portable. If you
want to move to another RDBMS to store your data, good luck doing that if
you've constructed 2,000-line stored procedures that only work on SQL
Server. Not that porting your data to another RDBMS is trivial if you don't
do this, but at least it's a lot less impractical.
One argument that has to be mentioned as definitely in favor of T-SQL, on
the other hand, is flexibility. Executing a single ALTER PROCEDURE statement
to fix a problem is significantly faster than having to roll out a new
version of your application. Of course, this is again bad news for
maintainability, because tracking down a problem caused by some bozo making
a SELECT procedure do a stealth INSERT somewhere is no fun. Being able to do
this might save your bacon one day, but as a certain arachnid superhero is
wont to say, with great power comes great responsibility.
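The hot fix really is a single statement, along these lines (the procedure
and the bug are made up):

    -- Ship the missing status filter without redeploying the application.
    ALTER PROCEDURE dbo.GetOpenOrders
        @CustomerId int
    AS
        SELECT OrderId, OrderDate, Total
        FROM   dbo.Orders
        WHERE  CustomerId = @CustomerId
          AND  OrderStatus = 'Open';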
This depends on what you're doing more: retrieving data or processing data.
With a separate app server, processing data is offloaded from the database
server at the expense of extra data traffic. The amount of data retrieved
may or may not increase depending on how much logic is redistributed. If
you're using one "dumb" SQL server that has to pump boatloads of data to
many "smart" app servers, processing power will not be distributed evenly,
and your database (and network) will need scaling anyway.
While this bolsters point #2 (T-SQL is harder to maintain), you can't get
away with having no SQL developers either because, left to their own
devices, the C# developers are likely to construct databases and queries
that are grossly inefficient for large data sets. Not because they're dumb,
but because appreciating the performance differences between in-memory data
structures and on-disk tables takes some experience.
The whole point of SQL is to shield you from these concerns, so programmers
will write queries expecting that the database will somehow take care of
everything. That abstraction eventually breaks down once the data grows past
a critical size. Retooling the design after the fact can be costly and
incur downtime. Upgrading the server(s) is a simple and thought-free
solution, but not always a cost-effective or practical one.
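A typical example of the breakdown, against an illustrative Orders table:
both queries return the same rows, but the first wraps the column in a
function, so the server can't use an index on OrderDate and falls back to
scanning the whole table once it's large:

    -- Scans: the YEAR() call hides OrderDate from any index on it.
    SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = 2011;

    -- Seeks: the same filter expressed as a range the index can use.
    SELECT OrderId
    FROM   dbo.Orders
    WHERE  OrderDate >= '20110101' AND OrderDate < '20120101';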
That said, for most purposes it's sufficient to have one or two C#
programmers with experience who can advise the others on database structure
and potential performance issues, so the design is at worst "not optimal"
instead of "critically deficient". I think I can fairly confidently say
this, because I'm one of those programmers at my company.