I've created a data-access-layer component that has two parts: the base
(which requires only a connection string) and the subclass (which must
provide that connection string in whatever manner it sees fit, usually
transparently).
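In rough outline, the split looks something like this (a simplified
sketch; the names here are illustrative, not my actual API):

using System.Configuration;

public abstract class DbBase
{
    private readonly string connectionString;

    // The base needs nothing but a connection string.
    protected DbBase(string connectionString)
    {
        this.connectionString = connectionString;
    }

    protected string ConnectionString
    {
        get { return connectionString; }
    }
}

public class SqlDb : DbBase
{
    // The subclass supplies the connection string however it
    // sees fit; here, hypothetically, from the config file.
    public SqlDb()
        : base(ConfigurationManager.ConnectionStrings["Main"].ConnectionString)
    {
    }
}

Typical usage then looks like this: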
DataTable result = null;
SqlDb data = new SqlDb();

// Ad-hoc SQL text:
result = data.ExecuteTable("select * from table");
// data.ExecuteDataSet("...");
// data.ExecuteReader("...");
// data.ExecuteNonQuery("...");
// data.ExecuteScalar("...");

// Stored procedure with a parameter:
data.AddParameter("@Param1", SqlDbType.Int, ParameterDirection.Input, 4, param1Value);
data.CommandType = CommandType.StoredProcedure;
data.CommandText = "sp_Proc1";
result = data.ExecuteTable();
// data.ExecuteDataSet();
// data.ExecuteReader();
// data.ExecuteNonQuery();
// data.ExecuteScalar();
Internally, it actually manages opening and closing the connection itself
in one of two ways, selected by a ConnectionBehavior enum:

ConnectionBehavior.Always;
ConnectionBehavior.OnDemand;

Always means the connection stays open from the time the object is created
until it is disposed. This is not the default, and I've never had to use
this feature. OnDemand means the connection is opened as late as possible
and closed as early as possible, encapsulating the actual data execution
internally so you don't have to worry about it.
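Internally, OnDemand boils down to something like this (a simplified
sketch of the idea, not my actual code; connection and connectionBehavior
are assumed instance fields):

using System.Data;
using System.Data.SqlClient;

public DataTable ExecuteTable(string commandText)
{
    DataTable table = new DataTable();
    try
    {
        // Open as late as possible (no-op if Always already opened it).
        if (connection.State != ConnectionState.Open)
            connection.Open();

        using (SqlCommand command = new SqlCommand(commandText, connection))
        using (SqlDataAdapter adapter = new SqlDataAdapter(command))
        {
            adapter.Fill(table);
        }
    }
    finally
    {
        // Close as early as possible, but only for OnDemand.
        if (connectionBehavior == ConnectionBehavior.OnDemand)
            connection.Close();
    }
    return table;
}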
Is this the type of thing you are talking about? If so, it is easy to
create one. I've seen just about every publicly available data access
component/block, plus more complex alternatives such as ORM wrappers. Each
has strengths and weaknesses. The strength of mine is its ease of use; the
weakness is that in its current incarnation it is database (SQL Server)
dependent. The strength of the others I've seen is that they can handle any
database; the weakness is that many (not all) of the ones I've seen are not
as simple to use or implement as mine.
My new version, which targets C# 2.0, is about 1,500 lines of code fewer
(around 700 lines now), does the same thing, and natively supports Oracle,
OleDb, SQL Server, and ODBC connections. I am working on a "lite" version
that supports only the core execution types plus some basic transaction
support (including cross-object-instance transactions), and an "extended"
version that adds what is most useful about my current version: an internal
"plug-in" mechanism, so we can get some very advanced diagnostics at run
time when we need to debug (configurable via an XML doc, and it can be
turned on/off).
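The plug-in contract itself can be tiny; something along these lines
(hypothetical names, not the real interface):

using System;
using System.Data;

public interface IDalDiagnosticsPlugin
{
    // Called just before and just after each execution, so a
    // plugin can time it, dump results, build a representation
    // of the call, send mail, and so on.
    void BeforeExecute(IDbCommand command);
    void AfterExecute(IDbCommand command, object result, TimeSpan elapsed);
}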
The diagnostics plugins that I've written include: an execution timer that
reports how long the execution took, accurate to about 200ns on my 2GHz
P4; a complete dump of the first 10 (or more, configurable) rows of a
DataTable, or of all tables in a DataSet; a "representation" of the stored
procedure call that you can copy/paste to see exactly what the DAL executed
to get its results, complete with feedback on the return/output param if
there was one; and emails on SPs/SQL queries that failed for some reason
or another (such as SP/table not found, invalid parameter name, etc.).
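The "representation" essentially rebuilds an EXEC statement from the
command; a minimal sketch (hypothetical helper, with deliberately naive
quoting that is good enough for a copy/paste debugging aid):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Text;

static string Represent(SqlCommand cmd)
{
    StringBuilder sb = new StringBuilder("EXEC " + cmd.CommandText);
    for (int i = 0; i < cmd.Parameters.Count; i++)
    {
        SqlParameter p = cmd.Parameters[i];
        sb.Append(i == 0 ? " " : ", ");
        sb.Append(p.ParameterName).Append(" = ");
        // Quote everything; SQL Server converts implicitly,
        // which is fine for debugging purposes.
        sb.Append(p.Value == DBNull.Value ? "NULL" : "'" + p.Value + "'");
    }
    return sb.ToString();
}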
For readers, it is a little more complex. There is a configurable option
that causes the command to execute twice: the first time (internally) as a
DataSet so it can dump the results, and then again as a reader so you can
work with it. Keep in mind, this is only for debugging and not intended for
production use, but it is very useful for debugging. There is a second
(configurable) option that allows us to use a "disconnected" reader (don't
ask about the internals, as they are hard to explain; it's a custom reader,
er, actually a DataSet that "looks" like an IDataReader): it executes, the
results get logged, and the same instance is returned to you, except in
this case you are not holding a connection open as you would be with a
real reader.
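Conceptually it is close to what .NET 2.0's DataTableReader gives you:
fill a DataSet, log it while disconnected, then hand back something that
implements IDataReader (command is assumed to be a SqlCommand from
context, and the logging call is a hypothetical placeholder):

using System.Data;
using System.Data.SqlClient;

DataSet snapshot = new DataSet();
using (SqlDataAdapter adapter = new SqlDataAdapter(command))
{
    // Fill opens and closes the connection itself if needed.
    adapter.Fill(snapshot);
}

DumpForDiagnostics(snapshot);            // hypothetical logging hook

// DataTableReader "looks like" an IDataReader but holds no connection.
IDataReader reader = snapshot.CreateDataReader();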
The SP diagnostics can be "filtered" by SP name, by parameter criteria
(whether @Param1 is Int and less than 15 And/Or @Param2 is Varchar and
contains "ledger", and similar compound conditions), as well as by whether
a DataTable, DataSet, reader, or whatever is being executed.
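Under the hood, a filter like that is just a predicate over the command;
roughly (a hypothetical sketch of that one condition, assuming both
parameters are present):

using System.Data;
using System.Data.SqlClient;

static bool ShouldTrace(SqlCommand cmd)
{
    if (cmd.CommandType != CommandType.StoredProcedure) return false;
    SqlParameter p1 = cmd.Parameters["@Param1"];
    SqlParameter p2 = cmd.Parameters["@Param2"];
    // @Param1 is Int and < 15, OR @Param2 is Varchar containing "ledger".
    return (p1.SqlDbType == SqlDbType.Int && (int)p1.Value < 15)
        || (p2.SqlDbType == SqlDbType.VarChar
            && ((string)p2.Value).IndexOf("ledger") >= 0);
}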
All of our SPs have internal diagnostics in them as well, and when the DAL
encounters issues, the SP diagnostics tell us one thing, that everything
works fine; then when we use the DAL dump and the "representation" of the
SP command and params that were executed, we see a different story. In some
cases this has cut our debugging time from hours (which used to happen
frequently) to minutes.
Another plugin uses custom performance counters to track memory usage,
number of open connections, connections created per second, and so on.
We use it to track down memory leaks, though so far the component seems
pretty solid.
Another tells us when an execution took longer than n seconds to complete
(6 seconds by default); we then get an email with all the relevant
information.
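That one is nearly trivial with 2.0's Stopwatch (the mailer call below is
a hypothetical placeholder):

using System;
using System.Data;
using System.Diagnostics;

Stopwatch timer = Stopwatch.StartNew();
DataTable result = data.ExecuteTable();
timer.Stop();

// Anything over the configured threshold gets reported.
if (timer.Elapsed > TimeSpan.FromSeconds(6))
    MailDiagnostics(data, timer.Elapsed);  // hypothetical mailer hook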
Anyway, since I'm not trying to write a book here, I'll end this message,
but if you are interested, I can share my new C# 2.0 version with you (it
should work in 2003 since I'm not using any 2.0-specific features -- yet).
The diagnostics aren't there yet because I need to come up with a better
plugin architecture that doesn't load the plugins 15,000 times a minute
like the current one does, but nonetheless...
BTW: I've been using a similar variant of this data access component in an
enterprise application that receives about 15,000 requests per minute for
about 10 hours a day during peak times, and there have been no major
performance issues that we've needed to concern ourselves with.
Thanks,
Shawn