DAL

  • Thread starter Max André Bündchen

Max André Bündchen

I'm new to the .NET world and I have a multi-database system to develop. The system must work with SQL Server, Oracle and Firebird.

In .NET I could build a DAL with typed datasets (one set per database) and everything would work fine. However, I can't use typed datasets, for several project reasons. Instead I'd like to work with direct SQL and my own data access logic. That part is fine, because I can write a helper class using System.Data.Common from .NET 2.0.

In this scenario I have some questions. For example, when I perform an INSERT into a table whose id column is filled by the database (an auto-increment field), how do I get that new ID back to the application under concurrency? Typed datasets do this with proprietary syntax for each database (e.g., SCOPE_IDENTITY() for SQL Server), but with a DB-agnostic DAL I don't know how to accomplish it. Sure, I could write a stored procedure for that, but then I would have to maintain three sets of stored procedures, one for each table of each database, and that is a (big) problem in my case.
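One compromise short of stored procedures is to keep the INSERT itself generic and isolate only the "fetch the generated key" statement per provider. A rough sketch of such a helper (the class, the provider invariant names used as keys, and the sequence/generator placeholder are my own illustration, not anything from a library):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical helper: maps an ADO.NET provider invariant name to the
// statement that returns the key just generated for the current session.
// Each statement is session-scoped, so it behaves correctly under concurrency
// as long as it runs on the same connection that executed the INSERT.
public static class IdentityHelper
{
    static readonly Dictionary<string, string> map =
        new Dictionary<string, string>
    {
        // SQL Server: SCOPE_IDENTITY() is limited to the current scope
        { "System.Data.SqlClient", "SELECT SCOPE_IDENTITY()" },
        // Oracle: CURRVAL returns the value last fetched by this session;
        // {0} is the sequence name, supplied per table
        { "System.Data.OracleClient", "SELECT {0}.CURRVAL FROM DUAL" },
        // Firebird: GEN_ID(gen, 0) reads the generator without incrementing;
        // more commonly you fetch GEN_ID(gen, 1) *before* the INSERT instead
        { "FirebirdSql.Data.FirebirdClient", "SELECT GEN_ID({0}, 0) FROM RDB$DATABASE" }
    };

    public static string GetNewIdSql(string provider, string generator)
    {
        return string.Format(map[provider], generator);
    }
}
```

On Oracle and Firebird this is only safe if the SELECT runs on the same connection as the INSERT; an alternative is to fetch the next sequence/generator value first and pass it into the INSERT explicitly, which sidesteps the concurrency question entirely.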
 

Onwuka Emeka

There are a couple of object persistence frameworks, or object-relational
mappers, out there that can help make your application DB-agnostic by
abstracting the data access strategy to some degree and letting you
concentrate on the business logic of your application.
I recommend that you take a look at NHibernate.
It should solve most, if not all, of the problems you have listed regarding
direct SQL, concurrency, etc.

You can learn more about NHibernate from:
www.nhibernate.org/
www.theserverside.net/articles/showarticle.tss?id=NHibernate
http://onwuka.blogspot.com/2006_01_01_onwuka_archive.html (a quick start)
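To make that concrete: NHibernate has a "native" key generator that lets the configured dialect decide between an IDENTITY column, a sequence, or a generator at runtime, which addresses the auto-increment question directly. A minimal mapping sketch (the class, table and column names are invented for illustration):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Customer" table="CUSTOMERS">
    <id name="Id" column="ID">
      <!-- "native" picks IDENTITY on SQL Server, a sequence on Oracle,
           a generator on Firebird, based on the configured dialect -->
      <generator class="native" />
    </id>
    <property name="Name" column="NAME" />
  </class>
</hibernate-mapping>
```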
 

Max André Bündchen

I'm looking for an approach without an ORM like NHibernate, mainly because of performance and some complicated relationships in the
database that are difficult to implement in a mapper.

Thanks for the advice anyway.
 

Garth Wells

I've never implemented a system that supports multiple databases,
but this is a question I have seen and thought about quite a bit.
I've always thought that the safest approach would be to
determine the minimum ANSI compliance level common to the
databases and not use SQL that exceeds that level. And certainly
avoid any proprietary design options like auto-incrementing columns
(e.g., IDENTITY in SQL Server). And obviously you could not
use database-side objects like stored procedures, as all SQL
would pass through from the client.

This approach would force you to implement any data operations
that require SQL beyond the minimum ANSI compliance level as
procedural code in the application, but it would ensure
that you never send the database code it cannot
execute. The main benefit of this approach is that you would only
have to manage one database schema/script for all environments.
The biggest disadvantage to this approach is that you would be
using less-efficient data manipulation operations. I guess you
would have to weigh the pluses and minuses of each approach
and decide accordingly. From my perspective, managing three
database schemas/scripts would be a nightmare.
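In the spirit of this ANSI-only approach, one portable substitute for auto-increment columns is a plain key table driven through the provider-neutral DbConnection/DbCommand classes. A rough sketch (the NEXT_KEYS table and all names here are invented, not an established pattern from any particular library):

```csharp
using System;
using System.Data.Common;

// Hypothetical key allocation against a plain table:
//   CREATE TABLE NEXT_KEYS (TABLE_NAME VARCHAR(64), NEXT_ID INT)
// Only ANSI SQL is used, so the same code runs on all three databases.
public static class KeyAllocator
{
    public static int NextId(DbConnection conn, string tableName)
    {
        using (DbTransaction tx = conn.BeginTransaction())
        {
            DbCommand update = conn.CreateCommand();
            update.Transaction = tx;
            // The UPDATE takes the row lock first, serializing concurrent callers
            update.CommandText =
                "UPDATE NEXT_KEYS SET NEXT_ID = NEXT_ID + 1 WHERE TABLE_NAME = '" + tableName + "'";
            update.ExecuteNonQuery();

            DbCommand select = conn.CreateCommand();
            select.Transaction = tx;
            select.CommandText =
                "SELECT NEXT_ID FROM NEXT_KEYS WHERE TABLE_NAME = '" + tableName + "'";
            // Convert.ToInt32 rather than a cast: Oracle reports numbers as decimal
            int id = Convert.ToInt32(select.ExecuteScalar());

            tx.Commit();
            return id;
        }
    }
}
```

Note the table name is inlined rather than parameterized: parameter markers (@p vs :p vs ?) are themselves provider-specific, which is one more portability wrinkle a real helper would have to handle per provider.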



Matt Noonan

Garth said:
I've never implemented a system that supports multiple databases,
but this is a question I have seen and thought about quite a bit.
I've always thought that the safest approach would be to
determine the minimum ANSI-compliance level of each database
and not use SQL that exceeds this level. And certainly avoid
any proprietary design options like auto-incrementing columns
(e.g., IDENTITY in SQL Server). And obviously you could not
use database side objects like stored procedures, as all SQL
would pass-through from the client.

But this *can* be accomplished with a database-agnostic ORM and a little
help from the Enterprise Library, including full functionality with SQL
Server IDENTITY columns and Oracle SEQUENCEs. I don't have a Firebird
version (yet :). This is not to say there wouldn't be obstacles to overcome,
but I do believe it is achievable. You are correct that sacrifices will
probably have to be made, but I don't think the problem is with
functionality so much as with datatypes. (see next comment)

This approach would force you to do any data operations that
would require SQL that exceeded the minimum ANSI-compliance
level with procedural code in the application, but it would ensure
that you would never pass the database code that it could not
execute. The main benefit of this approach is that you would only
have to manage one database schema/script for all environments.

This is where the challenge lies, I believe. Reconciling datatypes between
different vendors can be a pain, even if you stick with the DbType
enumeration (as the EL does). Even sticking to Microsoft databases, there
are some datatypes to avoid (OLE in Access comes to mind). And Oracle has a
nasty tendency to report all number types as DECIMAL (even when the column
is truly an int).

This could possibly be overcome by creating different business logic layers,
one for each DB, and then abstracting them through either a web service or
perhaps a factory model. But then you reintroduce the problem of having to
manage the different schemas on each platform.
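The factory idea is roughly what .NET 2.0 already provides at the ADO.NET level via DbProviderFactories: the provider invariant name can come from configuration, and the rest of the DAL only ever sees the abstract Db* base classes. A minimal sketch (the connection strings would of course be your own):

```csharp
using System.Data.Common;

public static class Dal
{
    // The provider name would normally come from app.config, e.g.
    // "System.Data.SqlClient" or "System.Data.OracleClient"
    public static DbConnection Open(string provider, string connectionString)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(provider);
        DbConnection conn = factory.CreateConnection();
        conn.ConnectionString = connectionString;
        conn.Open();
        return conn;
    }
}
```

This only abstracts connection and command creation, not SQL dialects or datatypes; the Oracle-DECIMAL issue above is also why scalar results are safer read with Convert.ToInt32(...) than with a direct cast.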

The biggest disadvantage to this approach is that you would be
using less-efficient data manipulation operations. I guess you
would have to weigh the pluses and minuses of each approach
and decide accordingly. From my perspective, managing three
database schemas/scripts would be a nightmare.

Indeed. :)
 
