I'll concede that you are correct and that Mr. Beamer is referring to the
cumulative time savings over the life of the application being measured in
seconds, and not the specific time to retrieve a connection from the pool.
I mistakenly assumed this after reading a number of questionable posts he
has made recently, including the one on how failure to release connections
could result in "increased [SQL Server] licensing costs." I am most
likely completely in the dark about a recent change in SQL Server licensing
that allows one to use middleware to multiplex connections - that's what I
get for being an ostrich...
I should really document the problems with this behavior thoroughly and
generate a proof, but the writing required is at the doctoral-thesis level
and the supporting data runs to several GB. You also need at least 2
multi-CPU servers handy and several man-weeks to run the repros and perform
your own data collection and analysis before you can even begin to properly
understand the behaviors. In addition, you need to be well-versed in both
Oracle and SQL Server, including their locking characteristics at the
various isolation levels.
Since I'm just too bloody lazy to waste my time producing a document that
would likely be read by no more than a handful of people, and that would be
understood by fewer yet, I'll just point out 2 very basic scenarios that
make releasing the connection ASAP questionable as a general practice:
1. An object running within a distributed transaction requests a
connection. At that initial point, no Global Transaction Identifier is
associated with it, so the base pool services the request. A statement is
executed and the connection is released to the pool - but the question often
overlooked is: to which pool? Since the connection's attributes at this
point no longer match those of the base pool, a new pool is created. The
next request from an object participating in the same distributed
transaction may reuse the connection just returned, but only if its state
has already been updated to reflect its availability. Under light loading
this works most of the time, though occasionally you will find 2 or 3
connections within the pool associated with the distributed transaction -
even though the object can *never* have more than one connection active at
any given moment in time. A timing window where a connection request
arrives before connection 1 is marked as available is to be expected (OS
activity, context switching, etc.); but finding 3 active connections in a
single distributed transaction when every connection is *always* properly
released and disposed is the clue to the root cause of the case where
connections participating in distributed transactions are orphaned in an
"inactive" state and never completely released (even though the pools
themselves are eventually destroyed).
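The pool-fragmentation mechanics above can be sketched as a toy model
(plain Python, not ADO.NET code - every class and method name here is
invented for illustration), assuming pools are keyed by connection string
plus transaction identity and that the "available" flag on a released
connection is updated with some lag:

```python
# Toy model of connection pooling keyed by (connection string, transaction id).
# Shows how releasing a transaction-enlisted connection and immediately
# re-requesting one grows the transaction's pool whenever the availability
# flag has not yet been updated. Illustrative only, not ADO.NET APIs.

class Connection:
    _next_id = 0
    def __init__(self):
        Connection._next_id += 1
        self.id = Connection._next_id
        self.available = False  # not yet visible as free in its pool

class PoolManager:
    def __init__(self):
        self.pools = {}  # (conn_str, txn_id) -> list of Connections

    def get(self, conn_str, txn_id=None):
        pool = self.pools.setdefault((conn_str, txn_id), [])
        for conn in pool:
            if conn.available:          # reuse only visibly-free connections
                conn.available = False
                return conn
        conn = Connection()             # none visible: open a new one
        pool.append(conn)
        return conn

    def release(self, conn_str, txn_id, conn, state_updated=True):
        # In the real system the availability flag is updated asynchronously;
        # state_updated=False models a release whose bookkeeping lags behind.
        if state_updated:
            conn.available = True

mgr = PoolManager()
# An object in distributed transaction "TX1" acquires, uses, and releases a
# connection, but the release has not yet been reflected in the pool state.
c1 = mgr.get("Server=db;Db=app", "TX1")
mgr.release("Server=db;Db=app", "TX1", c1, state_updated=False)
# The next request from the same transaction cannot see c1 as available,
# so a second physical connection is created in the transaction's pool.
c2 = mgr.get("Server=db;Db=app", "TX1")
assert c1.id != c2.id
assert len(mgr.pools[("Server=db;Db=app", "TX1")]) == 2
```

The assertions at the end show the transaction's pool holding two physical
connections even though the object only ever used one at a time - the same
symptom described above, from nothing but release/request timing.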
Since a connection enlisted in a distributed transaction can only be
used by an object participating in the same distributed transaction, the
latency of returning the connection to the pool and retrieving it again can
simply be avoided. Releasing such a connection to "the pool" does not in
any way increase the number of connections available to other activities
outside the scope of the transaction; in fact, the added latency only
increases the likelihood that additional new connection requests will be
generated (only to be discarded after 3m to 7m 40s of inactivity - thus
beginning the vicious cycle of connection thrashing, but that's another
story, and a fairly complex one that only comes into play for applications
with high utilization rates).
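The hold-versus-release trade-off can be reduced to a standalone counting
model (again plain Python, with invented names, under the same assumption
that availability updates can lag): since a transaction-keyed pool helps no
other caller, every lagged release costs a brand-new physical connection,
while holding one connection for the transaction never does.

```python
# Standalone sketch contrasting "release after every statement" with
# "hold the connection for the life of the transaction". Illustrative
# model only, not ADO.NET behavior verbatim.

def run_transaction(statements, release_each_time, release_lags):
    """Return how many physical connections the transaction creates.

    release_lags[i] is True when the i-th release's availability update
    has not landed before the next request arrives."""
    created = 0
    available = []   # connections visibly free in this transaction's pool
    held = None      # token for the connection currently in hand
    for i in range(statements):
        if held is None:
            if available:
                held = available.pop()   # reuse a visibly-free connection
            else:
                created += 1             # open a new physical connection
                held = created
        # ... execute statement i on `held` ...
        if release_each_time:
            if not release_lags[i]:
                available.append(held)   # bookkeeping kept up this time
            held = None                  # lagged release: invisible to reuse
    return created

# Release between each of 4 statements; two releases lag behind.
churn = run_transaction(4, release_each_time=True,
                        release_lags=[True, False, True, False])
# Hold one connection for the whole transaction instead.
hold = run_transaction(4, release_each_time=False,
                       release_lags=[False] * 4)
assert churn == 3   # each lagged release forced a fresh connection
assert hold == 1    # one connection served all four statements
```

Under this model the release-per-statement style only breaks even when
every availability update wins the race; any lag converts straight into
extra connections, which is where the thrashing cycle begins.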
There are a number of observations I could make here based on 6 months
of personal experience (at 60-80 hours/week) researching this issue
in depth, and on feedback from Oracle Support / Bug Diagnosis & Escalation
(which combined racked up over a man-year of research, development,
optimization/tuning guidance, running my repros in Oracle's labs, and
interactive debugging on my system). However, it would take many pages and
diagrams to convey the necessary concepts, and anyone lacking a basic
understanding of the problem would still struggle to see when it is
appropriate to hold the connection active for the life of the public method
call and when it is safe to release it per the common recommendations.
2. Connection-based transactions that span multiple internal calls before a
commit is possible. I don't think there is a real need to go into details
about this, is there?
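For anyone who does want the detail anyway, a minimal sketch makes the
point - here using Python's sqlite3 standard library as a stand-in
database (the table and helper names are invented, and a pool's reset of
a returned connection is modeled as a rollback): a connection-based
transaction is bound to the physical connection, so that connection must
stay held across every internal call until the commit.

```python
# Scenario 2 in miniature: a local (connection-based) transaction spanning
# multiple internal calls. Nothing is durable until the commit after the
# last call, so the one connection is held throughout.
import sqlite3

def add_order(conn, order_id):
    conn.execute("INSERT INTO orders VALUES (?)", (order_id,))

def add_audit(conn, order_id):
    conn.execute("INSERT INTO audit VALUES (?)", (order_id,))

# isolation_level=None puts sqlite3 in autocommit mode so we control the
# transaction explicitly with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("CREATE TABLE audit (id INTEGER)")

# The transaction spans two internal calls on the one held connection.
conn.execute("BEGIN")
add_order(conn, 1)
add_audit(conn, 1)
conn.execute("COMMIT")
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1

# Had the connection been released to the pool between the calls, the
# pool's reset (modeled here as a rollback) would discard the uncommitted
# work from the first call.
conn.execute("BEGIN")
add_order(conn, 2)
conn.execute("ROLLBACK")   # what a release mid-transaction amounts to
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```

The second half shows the failure mode: release the connection before the
commit and the partial work from the earlier calls is simply gone.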
--
Michael D. Long
Angel Saenz-Badillos said:
I believe that Cowboy's reference to "seconds" in the cost of opening
connections from the pool applies to the lifetime of an application, not to
the cost of retrieving an individual connection from the pool. I agree with
the point that he is trying to make regarding opening a connection in each
method versus passing the connection: in an ASP.NET application, if you keep
the connection open unnecessarily you are stealing valuable resources from
the rest of your application, and this can easily add up to much more than
the almost negligible cost of retrieving a connection from the pool, IMO.
--
Angel Saenz-Badillos [MS] Managed Providers
This posting is provided "AS IS", with no warranties, and confers no
rights. Please do not send email directly to this alias.
This alias is for newsgroup purposes only.
I am now blogging about ADO.NET:
http://weblogs.asp.net/angelsb/
Kevin Spencer said:
Cowboy is an excellent and experienced programmer. I've known him for years.
What are your credentials?
--
HTH,
Kevin Spencer
.Net Developer
Microsoft MVP
I get paid good money to
solve puzzles for a living
"Cowboy (Gregory A. Beamer) - MVP" wrote in message
The few seconds you save pulling from pool
If you EVER observe a latency measured in SECONDS to retrieve a connection
from the pool, then you have a MAJOR PROBLEM. While I hate to be rude, do
you have any practical experience with coding and deploying real-world
n-tier applications?