Robert Paris
I have a question about clustering a web application's different components. (Note: I am a developer, not a network admin, so please excuse my ignorance on certain issues.)
We have an application with a web server, app server, and database. The web server is mainly just the front-end interface to the app server, which does most of the work. There will be a lot of document upload/download and generation with this system, but not a whole ton of users. Given the low number of users, clustering for load is not important. However, since this app is critical to the business (it is what they use to DO their business), it MUST be up all the time, so we want a fail-over system.
There are a few restrictions, the main one being a limited budget, so the ideal cluster setup is out of the question. With that in mind, can you comment on how I can make my proposed setup below the *best* possible, without incurring the huge cost of adding tons of servers?
(Again, I know this isn't an ideal setup, but it's a starting point and takes into account cost)
                                      ----------------------------------
                                  _->| WebServer + Database + AppServer |
 -----------     -------------   /    ----------------------------------
| INTERNET |<-->| Dispatchers |-<
 -----------     -------------   \    ----------------------------------
                                  -->| WebServer + Database + AppServer |
                                      ----------------------------------
The "Dispatchers" box is actually more than one dispatcher: one for the web server, one for the database, and maybe one for the app server. The problem is that they don't have the money for more than three servers, which is why I chose this setup. So my questions:
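To make the idea concrete, here is a minimal sketch of what the web-tier dispatcher could look like as an HAProxy configuration doing pure failover (active/standby) rather than load balancing. This is just one possible implementation of the dispatcher role, not necessarily what I'll use; the IP addresses, port, and /health path are made up for illustration:

```
frontend www
    bind *:80
    mode http
    default_backend app_pair

backend app_pair
    mode http
    # Poll each box; only route to the standby if the primary's check fails
    option httpchk GET /health
    server primary 10.0.0.1:8080 check
    server standby 10.0.0.2:8080 check backup
```

The `backup` keyword is what turns this from load balancing into failover: all traffic goes to `primary` until its health check fails, then HAProxy switches to `standby`. The database dispatcher is the harder part, since the two database copies would also need to stay in sync.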
1. Is this even worth it? Or should I just give up on fail-over gains?
2. How could I improve the above setup?
3. If we added just one more server (for dispatcher or whatever) would that make a HUGE difference?
4. What problems could we expect?
Thanks to everyone for any help you can give!