[...]
5001- clients : switch to UDP
I'm skeptical that this is a good idea.
The only person I know first-hand who has written a large-scale
server architecture from scratch in .NET supports hundreds of thousands
of simultaneous clients with TCP using the async APIs (the first version).
There are plenty of examples using UDP out there. Documented examples.
UDP won't decrease network overhead. It probably will even increase it.
It will decrease network overhead. But that was not the point.
It for sure _will_ increase code complexity, along with design,
implementation, and maintenance costs.
It uses the same event-driven model as TCP without a thread per
connection, so no change for that part.
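To make the "event-driven model without a thread per connection" point concrete, here is a minimal sketch in Python's asyncio (illustrative only; the thread is about .NET, where the async socket APIs play the same role). One OS thread multiplexes all clients; each connection is a cheap coroutine, not a thread:

```python
import asyncio

async def handle_client(reader, writer):
    # One coroutine per client, all multiplexed on a single thread:
    # echo lines back until the client disconnects.
    while data := await reader.readline():
        writer.write(data)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def demo():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    # Connect one client and round-trip a line through the server.
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"hello\n")
    await writer.drain()
    reply = await reader.readline()
    writer.close()
    await writer.wait_closed()
    server.close()
    await server.wait_closed()
    return reply

reply = asyncio.run(demo())
```

The same structure applies whether the transport is TCP or UDP, which is why switching transports does not change this part of the design.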
Some reliability features missing in UDP may need to be added, but
that will be rather trivial compared to all the other challenges
in a solution of that magnitude.
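Whether "rather trivial" holds is debatable, but the kind of reliability feature meant here looks roughly like this: a stop-and-wait sender that numbers each datagram and retransmits until the peer acknowledges it. A hedged sketch; the names (`send_reliable`, the 4-byte ACK format) are invented for illustration, and a real implementation also needs duplicate suppression, ordering, and flow control:

```python
import socket
import struct
import threading

def send_reliable(sock, data, addr, seq, timeout=0.2, retries=5):
    # Prefix a sequence number, then retransmit until the peer ACKs it.
    packet = struct.pack("!I", seq) + data
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True              # delivered and acknowledged
        except socket.timeout:
            pass                         # presumed lost; retransmit
    return False

# Loopback demo: a receiver that ACKs whatever sequence number it sees.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

received = []
def receiver():
    pkt, addr = rx.recvfrom(2048)
    rx.sendto(pkt[:4], addr)             # echo the sequence number as ACK
    received.append(pkt[4:])

t = threading.Thread(target=receiver)
t.start()
ok = send_reliable(tx, b"payload", rx.getsockname(), seq=1)
t.join()
```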
It _might_ reduce local resource
usage, depending on platform, but at the same time will likely increase the
required numbers of re-transmits for any given datagram, due to the lack of
per-client buffering (again, depending on platform...it's theoretically
possible for a network driver to maintain per-client buffering even for
UDP, but I'm not aware of any such implementation).
UDP is good for certain things. But I'm aware of no valid evidence to
suggest it's the right approach for increasing the number of clients a
server can support.
????
Getting rid of the connection concept avoids problems with max
connections rather effectively.
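The "no connection concept" point can be shown in a few lines (a Python sketch, for illustration only): a single UDP socket serves every client, with peers distinguished purely by source address, so there is no accepted-connection object to exhaust:

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]

# Three "clients" send a datagram each to the same server socket.
clients = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in range(3)]
for i, c in enumerate(clients):
    c.sendto(b"ping %d" % i, ("127.0.0.1", port))

seen = set()
for _ in range(3):
    data, addr = server.recvfrom(2048)   # addr alone identifies the peer
    seen.add(addr)
    server.sendto(b"pong", addr)

for c in clients:
    c.close()
server.close()
```

The flip side, as noted above, is that everything TCP's connection gives you (ordering, retransmission, flow control, liveness) now has to be tracked per-address in application code.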
Frankly, the fact that HTTP is designed around TCP and that HTTP servers
are able to handle well above your "5000" number seems proof enough to me
that there's no need to change protocols just to support larger numbers of
clients. There _are_ web servers out there successfully dealing with far
larger numbers than that.
I think you have misunderstood how HTTP works. HTTP connects, interacts,
and disconnects. Using keep-alive enables multiple interactions between
connect and disconnect.
Therefore N users accessing the web server does not mean
N concurrent TCP connections.
You can serve maybe 25000 users with just 2500 TCP connections.
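The 25000-to-2500 ratio above works out if, say, one user in ten has a connection open at any instant; the 10% figure is an assumption for illustration, not from the original post:

```python
users = 25000
active_fraction = 0.10      # assumed: ~10% of users have a connection open
concurrent_connections = int(users * active_fraction)
```

With a persistent socket protocol there is no such duty cycle: every logged-in user holds a connection, so the server really does need N connections for N users.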
Furthermore, at least the most used web server (Apache) lets connections
wait in the backlog queue until it is ready to process the request. I
suspect that IIS/ASP.NET does something similar.
HTTP and a socket client/server solution are simply very different
in how they do things, so you cannot conclude much from HTTP.
Yes, you need a machine configured to handle the number of connections you
want to support. But ensuring that is way easier to do than writing a
reliable network I/O component on top of UDP.
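As a concrete example of that configuration step: each TCP connection consumes a file descriptor, so the process's descriptor limit must exceed the target client count. A Unix-only Python sketch (the 5000 figure is the thread's number; the 100-descriptor headroom is an assumption):

```python
import resource  # Unix-only stdlib module

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
needed = 5000 + 100          # target clients plus headroom (listener, logs, ...)

def big_enough(limit):
    return limit == resource.RLIM_INFINITY or limit >= needed

if not big_enough(soft) and big_enough(hard):
    # A process may raise its own soft limit up to the hard limit;
    # raising the hard limit itself requires root.
    resource.setrlimit(resource.RLIMIT_NOFILE, (needed, hard))
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
```

That is an afternoon of sysadmin work, versus reimplementing TCP's reliability machinery by hand.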
UDP is generally appropriate when your particular need doesn't require
reliable delivery of data; otherwise, TCP is the right solution. I will
grant that it's a common mistake to choose UDP because one thinks it's
faster, more efficient, etc., but in practice those things generally
turn out not to be true, nor worth the trade-off of an incorrect
UDP-based implementation that screws up your data.
What is needed for reliability depends on the problem domain.
And if there are high requirements, then there are well-tested
libraries available. I don't know if such exist for .NET though.
Arne