Site Maintenance - HttpModule


Guest

All

Architecture:
An ASP.NET website serves as the interface for several scalable
applications.

Problem:
The ability to put the website into maintenance mode is required. The
maintenance mode needs to be selective: conditional on the originating
IP address, to allow internal personnel to view the site during a
deployment operation, and either the whole site or certain sections
may need to be put into maintenance independently. Also, we want to
minimise the risk of killing a user's session while they are in the
middle of a workflow operation.

Thoughts:
I note that people use HttpModules for this sort of thing, and it
looks fairly straightforward. To accommodate the last part of the
problem above, I suppose a change to the HttpModule's config could
redirect any user without an existing session to a maintenance page;
when there are no more sessions open, it is safe to deploy. However,
this means that we need to keep track of user sessions. Since there
appears to be no reliable way to do this automatically (please correct
me if I'm wrong), we would need to store session IDs and last request
times, and then compare against the session timeout value to clean up
expired sessions. All of this seems *relatively* straightforward
(again, please correct me if I'm wrong), but what concerns me is the
storing of this data. If we store it in a database, that means a
database connection for every request. If we store it in memory, it
feels as if it won't scale as the traffic increases.
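
To make this concrete, here is a rough sketch of the sort of module I
have in mind. The appSettings keys, the Maintenance.aspx page name and
the static dictionary are placeholders I've made up for illustration,
and the in-memory dictionary is exactly the storage I'm unsure about:

using System;
using System.Collections.Generic;
using System.Configuration;
using System.Web;

// Sketch only: redirects to a maintenance page when a config flag is set,
// except for whitelisted internal IPs and for users who already have a
// session (so in-flight workflows can finish). Register it under
// <httpModules> (or <system.webServer><modules> in IIS7 integrated mode).
public class MaintenanceModule : IHttpModule
{
    // Session ID -> last request time; used to judge when it is safe to deploy.
    private static readonly object SyncRoot = new object();
    private static readonly Dictionary<string, DateTime> ActiveSessions =
        new Dictionary<string, DateTime>();

    public void Init(HttpApplication application)
    {
        // PostAcquireRequestState fires after session state is available.
        application.PostAcquireRequestState += OnPostAcquireRequestState;
    }

    private static void OnPostAcquireRequestState(object sender, EventArgs e)
    {
        HttpContext ctx = ((HttpApplication)sender).Context;

        if (ctx.Session != null)
        {
            lock (SyncRoot)
            {
                // Record the session and prune any that have passed the timeout.
                ActiveSessions[ctx.Session.SessionID] = DateTime.UtcNow;
                var expired = new List<string>();
                foreach (var entry in ActiveSessions)
                {
                    if ((DateTime.UtcNow - entry.Value).TotalMinutes > ctx.Session.Timeout)
                        expired.Add(entry.Key);
                }
                foreach (string id in expired)
                    ActiveSessions.Remove(id);
            }
        }

        // Placeholder appSettings keys - not an existing convention.
        bool maintenanceOn = ConfigurationManager.AppSettings["MaintenanceMode"] == "true";
        string allowedIps = ConfigurationManager.AppSettings["MaintenanceAllowedIPs"] ?? "";

        if (!maintenanceOn)
            return;

        // Don't redirect requests for the maintenance page itself.
        if (ctx.Request.Path.EndsWith("Maintenance.aspx", StringComparison.OrdinalIgnoreCase))
            return;

        // Internal personnel, identified by originating IP, are let through.
        if (allowedIps.Contains(ctx.Request.UserHostAddress))
            return;

        // Users with an existing session are allowed to finish their workflow.
        if (ctx.Session != null && !ctx.Session.IsNewSession)
            return;

        ctx.Response.Redirect("~/Maintenance.aspx", true);
    }

    public void Dispose() { }
}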

Questions:
Does this make sense? Is what I want to do achievable? Is there
anything out there that already does this, before I start rolling my
own? Any other thoughts or observations would be most welcome.

Many thanks!
 
(e-mail address removed) wrote in @h8g2000yqm.googlegroups.com:
[..]Snip original architecture and problem statement[..]

I am running out of time this morning, so I will have to examine your
thoughts later. Here are a couple of things that can help.

1. Depending on how you are caching user information, you can often
"maintain" without killing the user, provided it is just a short
"update the code" operation, if you set up session state in SQL
Server. This slows session state down a bit (not badly), but it takes
the session out of the web server. You can also use an external
session state server (separate from all boxes in the web farm) to
accomplish the same thing, but that requires a separate box, which you
may or may not have.
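
For example, moving session state into SQL Server is just a web.config
change (the server name and timeout below are placeholders, and the
session database has to be prepared with aspnet_regsql first):

  <!-- web.config: session state held in SQL Server rather than in-proc -->
  <system.web>
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=SQLBOX01;Integrated Security=SSPI;"
                  timeout="20" />
  </system.web>

  <!-- or, for a dedicated state server box: -->
  <!--
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=statebox:42424"
                  timeout="20" />
  -->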

2. For updating a site, it is better to stage on the local server in
another folder. You then set up a separate "site" to run through and
sanity check the deploy. As long as both sites are set up equivalently
in IIS, you should have no issue, as you are on the same box. Once you
are comfortable with the changes, xcopy over to production, except
perhaps the web.config (if you have different versions).

NOTE that you can still munge up a user in a single web box scenario if
he hits the site right as you are copying bits over, so nothing is
foolproof. But, with most sites, a local copy of the bits takes a few
seconds, so the likelihood you hit someone in mid request is pretty
small.

A better option is to have a web farm for the site(s) you manage. You
still need an external session mechanism (SQL or session server), but
you can take one server down and upgrade it (even check the validity
of the code locally) and then bring it back up and start on the other
server(s). The user will never know any box went down if they are
working, as there is always a box serving them.

I looked briefly at your HTTP Module methodology. I will have to think
about it, but it seems it could work. The question is why you would
incur this much work if you can simply move state off the web boxes
(which can be done in the web.config).

Oh, one gotcha on a web farm: make sure you manually set the machine
keys in the web.config file, as two machines will otherwise generate
separate keys (ouch). I tend to do this out of habit, as it sets me up
for web farming from the start.
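
The entry looks something like this (the key values here are truncated
placeholders - generate real ones for your farm):

  <!-- web.config: identical keys on every box so viewstate and auth tickets
       validate regardless of which server handles the request -->
  <system.web>
    <machineKey validationKey="E3B1...replace-with-your-own-hex-key"
                decryptionKey="9F4C...replace-with-your-own-hex-key"
                validation="SHA1"
                decryption="AES" />
  </system.web>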



--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

Twitter: @gbworld
Blog: http://gregorybeamer.spaces.live.com

******************************************
| Think outside the box! |
******************************************
 
Gregory

Many thanks for your input. Some more comments inline below.
[..]Snip session state management information[..]
[..]Snip suggestion to stage the deploy in a local folder and xcopy to production[..]

This is pretty much how we have things set up currently. An "archive"
zip file is created by CCNET with each build, containing all files
required for a deployment, including a web.config file for each
environment. We have a Powershell script to handle the actual
deployment. The Powershell script deploys the correct web.config file
for the environment, and deploys either the latest "archive" file if
no version is specified, or alternatively a specific version. This all
works pretty well.
[..]Snip single-server deployment risk and web farm suggestion[..]

My concern is with the workflow operations that are available to the
users. If there's a website deployment, this could result in the site
being temporarily unavailable in the middle of a workflow operation.
Even if state is maintained, this might not be particularly useful
depending on the time it takes to deploy. Even if it's only moments,
an error page in the middle of a multi-stage process on a website
never fills me (personally) with confidence.

If there's a deployment to the application layer, things could
potentially be more problematic. At best the user would experience an
error state as described above, and at worst the system would be left
in an inconsistent state. I know, it shouldn't - but I'm not writing
*all* of the code <grin>.

In addition, the user base is currently very small. Although this
reduces the chances of a user experiencing a problem, it also means
that if a user *does* experience a problem, this is a much higher
percentage failure rate than if there were many more users.
[..]Snip HTTP Module comments and web farm machine key gotcha[..]

I suspect that at this stage the cost implication of adding an
additional web server is prohibitive - especially since, although this
would address the issue of a website deployment, it wouldn't
necessarily (as far as I understand it) address the issue of an
application layer deployment (by the way, some of the application
processing that goes on is asynchronous with the website requests).
Having said all of this, I don't see loads of people trying to figure
out how to solve the same problem, which may indicate that I'm either
trying to solve the wrong problem, or that there's a different way to
manage our environment.

Thanks again for your help.
 
(e-mail address removed) wrote in @g23g2000vbr.googlegroups.com:

Rather than type a lot of answers, here are the general rules summarized.

The quickest way to update an app on a single server, with minimal
downtime is as follows:

1. Ensure state is off the server (SQL Server or an external state
server). All user session information is then stored outside of the
app, so it survives when the app resets. I have not played extensively
with workflow on a web server, but the basic web mechanism is a
session cache on the server side (along with other caching), plus a
session cookie and viewstate on the client side. As long as you have
session on another server and are careful about caching in the app (or
set up persistence for when the app goes down), there is little danger
of impacting users.

2. The least impact is to copy the bits to the server in another
directory, test and then deploy by copying to production. I like to use
a staging directory that I can actually run for a last minute sanity
check. You then copy all but the configs over.

NOTE: There is some risk with this method.

If you add some safety by using a deployment package, you add time,
which means you can impact some users. It is safer, as you are less
likely to overwrite a production config file, but the deployment takes
longer.


Now, for framework deployment. I am assuming this is refactoring-type
work or, at a minimum, work that does not change the interface to the
UI layer (as you would have to deploy the web application for that).

There are a couple of ways to handle this with minimal downtime.

One is to simply xcopy the assemblies. As long as the interface has not
changed and you have not specified a particular version, the downtime
is the amount of time it takes to copy them over. One caveat here is
that the assemblies might be locked, which means a short recycle of
IIS. This is unlikely in most small apps, but there is a risk here.

Another way is to deploy these assemblies to the GAC. If you go this
route, set up the config with assembly versions. You then deploy the new
assemblies to the GAC on the server and update the config to point to
the new assemblies. You can still test in a staging app root by updating
it first. This is likely to be less dangerous, but there is a cost here:
all of your assemblies MUST be strongly named.

The GAC can be a pain to work with, but you take the pain for the added
safety.
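
As a rough illustration, the binding entry in web.config looks
something like this (the assembly name and public key token are made
up for the example):

  <!-- web.config: point the site at a specific strong-named assembly version.
       Name and publicKeyToken below are invented placeholders. -->
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="MyCompany.AppLayer"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0-1.9.9.9" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>

Drop the new version into the GAC, update newVersion, and since editing
web.config recycles the app domain, the new binding is picked up on the
next request.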

Hope this helps!


--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

Twitter: @gbworld
Blog: http://gregorybeamer.spaces.live.com

*******************************************
| Think outside the box! |
*******************************************
 