K
Karch
Let's say I have a number of web servers that all feed data to a common SQL
Server database, which processes the data and sends it on to a master SQL Server. I am
looking at a solution that involves Service Broker at the higher levels and
that all works fine. My question is: what is the recommended way to get data
from the web server to the first-tier Service Broker? I've done a lot of
searching and reading and have not found any definitive answers. But my
ideas are:
1) Write directly to the database from the web server.
I don't really like this solution because it violates the concepts of OO design
and abstraction. It also doesn't give me any assurance that the data actually gets
moved, other than maybe some logging. And it shifts a lot of the processing
onto IIS, which I would like to keep as "light" as possible, simply servicing
requests.
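
To make this concrete, here is roughly what I had in mind for option 1 - the web
app executing the Service Broker enqueue directly over ADO.NET. This is only a
rough sketch; the service, contract, and message type names are made up, and it
assumes those objects already exist on the first-tier database:

using System;
using System.Data;
using System.Data.SqlClient;

// Rough sketch only - all Service Broker object names below are placeholders.
public static class BrokerSubmitter
{
    private const string EnqueueBatch = @"
        DECLARE @h UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @h
            FROM SERVICE [//WebTier/SubmitService]
            TO SERVICE   '//Tier1/ProcessService'
            ON CONTRACT  [//Contracts/SubmitData]
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @h
            MESSAGE TYPE [//Messages/SubmitData]
            (CAST(@payload AS VARBINARY(MAX)));";

    public static void Submit(string connectionString, string payloadXml)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(EnqueueBatch, conn))
        {
            cmd.Parameters.Add("@payload", SqlDbType.NVarChar, -1).Value = payloadXml;
            conn.Open();
            // Durable once the batch commits; delivery onward is then
            // Service Broker's job rather than IIS's.
            cmd.ExecuteNonQuery();
        }
    }
}

The upside is that the SEND commits transactionally, so I at least get durability;
the downside is exactly what I said above - IIS is now talking to the database
directly.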
2) Write to a local queue managed by a WCF service hosted in a
Windows Service.
This gives me some abstraction, but it also adds layers that may be
unnecessary. And, if I'm not mistaken, it forces me to use a local MSMQ queue if I
want to make sure the messages are durable. There is still the
question of how the messages get into that queue in the first place - it seems I
would still have to do it from the IIS-hosted web application, or use
memory-mapping or something. Maybe someone could elaborate on this one.
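
Just to clarify what I'm picturing for option 2, something like the following
one-way contract, hosted in the Windows Service and exposed over netMsmqBinding
against a local transactional MSMQ queue (all names here are placeholders). The
web app would call Submit and be done as soon as the message is durably queued;
the service would then forward it to the first-tier SQL Server:

using System.ServiceModel;

[ServiceContract]
public interface IDataSubmission
{
    // One-way: the IIS caller returns once the message is in the local queue.
    [OperationContract(IsOneWay = true)]
    void Submit(string payloadXml);
}

public class DataSubmissionService : IDataSubmission
{
    // Dequeued from MSMQ under a transaction and pushed into the first-tier
    // database, e.g. with the same SEND batch sketched under option 1.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void Submit(string payloadXml)
    {
        // forward payloadXml to the first-tier SQL Server here
    }
}

The queue address and netMsmqBinding settings would live in config on both
sides, which is part of the extra plumbing I was referring to.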
3) Install SQL Express on the web servers and use Service Broker all the way
up the line.
Installing SQL Express on the web servers seems like a bit of a hack to me, but
it would work. Also, does anyone know how this impacts licensing? Is it really,
truly free to install Express anywhere, even if it's 50-100 web
servers?
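
If it helps, the part of option 3 that worries me least is the plumbing itself -
each Express instance would just need a Broker endpoint and a route pointing at
the first-tier server, something like the following deployment-time helper (the
server name, port, and service name are made up):

using System.Data.SqlClient;

public static class ExpressBrokerSetup
{
    // Rough sketch only - run once against each local Express instance.
    private const string SetupBatch = @"
        IF NOT EXISTS (SELECT * FROM sys.endpoints WHERE name = 'BrokerEndpoint')
            CREATE ENDPOINT BrokerEndpoint
                STATE = STARTED
                AS TCP (LISTENER_PORT = 4022)
                FOR SERVICE_BROKER (AUTHENTICATION = WINDOWS);

        IF NOT EXISTS (SELECT * FROM sys.routes WHERE name = 'RouteToTier1')
            CREATE ROUTE RouteToTier1
                WITH SERVICE_NAME = '//Tier1/ProcessService',
                     ADDRESS = 'TCP://tier1-sql.example.local:4022';";

    public static void Configure(string localExpressConnectionString)
    {
        using (var conn = new SqlConnection(localExpressConnectionString))
        using (var cmd = new SqlCommand(SetupBatch, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

My hesitation is more about running 50-100 extra SQL instances, and the
licensing question, than about the configuration.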
Beyond the options above, what is the recommended way to traverse that first
segment into Service Broker when durable messages are needed? And if volatile
messages are acceptable, should it be done a different way? I feel pretty
comfortable with Service Broker once it has the initial data - my issue is
getting the data into the pipeline in the first place.
Thanks
K