Issuing multiple Socket.BeginReceiveFrom() calls at one go


Jonas Hei

In all the samples that illustrate the use of Socket.BeginReceiveFrom(), a
single BeginReceiveFrom() call is issued, we wait for an incoming message,
and then either
(a) another BeginReceiveFrom() is issued from the AsyncCallback, or
(b) the AsyncCallback sets an event which signals the main thread
to continue and issue another BeginReceiveFrom().
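
Pattern (a) boils down to something like the following (the class and
method names here are just for illustration, not from any particular
sample):

using System;
using System.Net;
using System.Net.Sockets;

// One outstanding receive, re-armed from the callback.
class SingleReceiveListener
{
    private readonly Socket skt = new Socket(AddressFamily.InterNetwork,
        SocketType.Dgram, ProtocolType.Udp);
    private readonly byte[] buffer = new byte[4096];
    private EndPoint remoteEP = new IPEndPoint(IPAddress.Any, 0);

    public void Start(int port)
    {
        skt.Bind(new IPEndPoint(IPAddress.Any, port));
        PostReceive();                       // exactly one pending receive
    }

    private void PostReceive()
    {
        skt.BeginReceiveFrom(buffer, 0, buffer.Length, SocketFlags.None,
            ref remoteEP, new AsyncCallback(OnReceive), null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int read = skt.EndReceiveFrom(ar, ref remoteEP);
        // ...process buffer[0..read) here...
        PostReceive();                       // re-arm only after completion
    }
}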

My concern is that if we need to be highly scalable (say hundreds or
thousands of incoming UDP messages) then this approach isn't really going
to cut it.

I've thought of quite a few optimizations, such as allocating buffer
space at startup (to avoid troubling the GC with pinned memory), posting
each incoming message to a local queue and processing it later, and
playing with ThreadPool.SetMinThreads to set up optimal values.
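
For example, the startup tuning I have in mind is roughly this (the
thread counts and buffer sizes below are just placeholders, not
recommendations):

using System;
using System.Threading;

static class Tuning
{
    public static void Configure()
    {
        // Raise the pool's minimum so callbacks don't wait on the
        // thread-injection delay during a sudden burst of completions.
        int workerMin, ioMin;
        ThreadPool.GetMinThreads(out workerMin, out ioMin);
        ThreadPool.SetMinThreads(Math.Max(workerMin, 20), Math.Max(ioMin, 20));
    }

    // Pre-allocate all receive buffers once, so the arrays handed to
    // BeginReceiveFrom() (and pinned during the receive) are allocated
    // up front instead of churning the GC at runtime.
    public static byte[][] AllocateBuffers(int count, int size)
    {
        byte[][] buffers = new byte[count][];
        for (int i = 0; i < count; i++)
            buffers[i] = new byte[size];
        return buffers;
    }
}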

My question is:

Is it possible to issue multiple calls to Socket.BeginReceiveFrom() at
one go? If yes, what is the maximum number? I've tried this with 20 or
so and it seems to work fine. I've illustrated the technique in the code
below. Any comments?


using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

public class Communicator {
    Socket skt = new Socket(AddressFamily.InterNetwork,
        SocketType.Dgram, ProtocolType.Udp);
    private Thread listenerThread;
    private int initialListeners;
    private volatile bool bStop;

    public Communicator() { }

    public void Start(int initialNumberOfListeners) {
        initialListeners = initialNumberOfListeners;
        listenerThread = new Thread(new ThreadStart(Listen));
        listenerThread.Start();
    }

    private void Listen() {
        IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Any, 9020);
        skt.SetSocketOption(SocketOptionLevel.Socket,
            SocketOptionName.ReceiveBuffer, 65536);
        skt.Bind(localEndPoint);

        // Post several receives up front, each with its own state object
        // and buffer, so that multiple BeginReceiveFrom() calls are
        // outstanding at the same time.
        StateObject[] so2 = new StateObject[initialListeners];
        for (int i = 0; i < initialListeners; i++) {
            so2[i] = new StateObject(4096);
            so2[i].workSocket = skt;
            skt.BeginReceiveFrom(
                so2[i].buffer,
                0,
                so2[i].BufferSize,
                SocketFlags.None,
                ref so2[i].tempRemoteEP,
                new AsyncCallback(ReceiveFromCallback),
                so2[i]);
        }
    }

    public void Stop() {
        bStop = true;
        // Closing the socket makes pending receives complete with
        // ObjectDisposedException, which the callback swallows.
        skt.Close();
    }

    public void ReceiveFromCallback(IAsyncResult ar) {
        try {
            StateObject so = (StateObject)ar.AsyncState;
            Socket listenerSocket = so.workSocket;

            if (!bStop) {
                // Immediately post a replacement receive so the number of
                // outstanding BeginReceiveFrom() calls stays constant.
                StateObject newst = new StateObject(4096);
                newst.workSocket = listenerSocket;
                listenerSocket.BeginReceiveFrom(
                    newst.buffer,
                    0,
                    newst.BufferSize,
                    SocketFlags.None,
                    ref newst.tempRemoteEP,
                    new AsyncCallback(ReceiveFromCallback),
                    newst);
            }

            IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
            EndPoint tempRemoteEP = (EndPoint)sender;

            int read = listenerSocket.EndReceiveFrom(ar, ref tempRemoteEP);

            if (read > 0) {
                so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
                string strContent = so.sb.ToString();
                // post strContent to a Queue for later processing
            }
        }
        catch (ObjectDisposedException) { }
    }
}

public class StateObject {
    public Socket workSocket = null;
    public int BufferSize;
    public byte[] buffer = null;
    public StringBuilder sb = new StringBuilder();
    public EndPoint tempRemoteEP = (EndPoint)(new IPEndPoint(IPAddress.Any, 0));

    public StateObject(int buffersize) {
        BufferSize = buffersize;
        buffer = new byte[BufferSize];
    }
}
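
For the "post strContent to a Queue for later processing" part, I am
thinking of something along these lines (just a sketch; the IncomingQueue,
Post and Drain names are mine and not part of the code above):

using System.Collections;
using System.Threading;

public class IncomingQueue
{
    private readonly Queue queue = new Queue();          // guarded by lock
    private readonly AutoResetEvent signal = new AutoResetEvent(false);
    private Thread worker;

    public void Start()
    {
        worker = new Thread(new ThreadStart(Drain));
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from ReceiveFromCallback: cheap, so the I/O thread is
    // released quickly and can service the next completion.
    public void Post(string message)
    {
        lock (queue) { queue.Enqueue(message); }
        signal.Set();
    }

    private void Drain()
    {
        while (true)
        {
            signal.WaitOne();
            while (true)
            {
                string message;
                lock (queue)
                {
                    if (queue.Count == 0) break;
                    message = (string)queue.Dequeue();
                }
                // ...actual message processing happens here...
            }
        }
    }
}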
 
Hi

Because one socket has only one receive buffer, even if multiple threads
are waiting on that socket, only one thread at a time actually does the
receive work.
Also, if we create multiple threads to wait for data on the socket, it
becomes hard to control them in the program, because we cannot guarantee
which thread will be scheduled to do the receiving at any given moment.
Sometimes we may observe more than one thread working, but that is only
because, after finishing its receive, a thread may go on to do something
else, and if another UDP packet arrives at that moment a different thread
picks up the receive. In other words, for a single socket the data
receiving is effectively serialized.

So I think your code looks OK, but we would not post multiple receives at
startup.
We would just issue another BeginReceiveFrom() with ReceiveFromCallback
from inside ReceiveFromCallback, after the data has been received.

Best regards,

Peter Huang
Microsoft Online Partner Support

Get Secure! - www.microsoft.com/security
This posting is provided "AS IS" with no warranties, and confers no rights.
 
Sorry I don't exactly understand your suggestions. It would be nice if
you could elaborate a little.

Basically, by dispatching multiple Socket.BeginReceiveFrom() calls I am
trying to achieve better performance (or scalability, i.e. to increase
the sheer number of UDP messages my application can receive per second).

My reasoning was that if I issue just one Socket.BeginReceiveFrom() call
and only issue the next one after an incoming message has arrived, then,
with a large number of incoming messages, there is a chance that my
application would miss a few. I seem to remember reading somewhere that
Windows can only queue up to 5 messages in the receive buffer. Is that
correct?

If yes, then when various devices are bombarding my server with hundreds
(or thousands) of UDP messages per second, there is a risk that this
5-message buffer will fill up and some messages will be discarded by the
OS before my code can issue another Socket.BeginReceiveFrom() call.

Does that make any sense? Am I on the right track here?
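
In the meantime, here is the quick check I intend to run to see what
receive buffer size the OS actually grants (just a sketch; 65536 is the
value from my earlier code):

using System;
using System.Net;
using System.Net.Sockets;

class BufferCheck
{
    static void Main()
    {
        Socket skt = new Socket(AddressFamily.InterNetwork,
            SocketType.Dgram, ProtocolType.Udp);

        skt.SetSocketOption(SocketOptionLevel.Socket,
            SocketOptionName.ReceiveBuffer, 65536);

        // Read the option back to see the buffer size actually in effect.
        int granted = (int)skt.GetSocketOption(SocketOptionLevel.Socket,
            SocketOptionName.ReceiveBuffer);
        Console.WriteLine("Receive buffer in effect: {0} bytes", granted);

        // Datagrams that arrive while this buffer is full are dropped
        // by the OS, whether or not a BeginReceiveFrom() is pending.
        skt.Close();
    }
}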
 
Hi

I think there may be some confusion here. For TCP, a listener keeps a
fixed-length backlog of pending connections, e.g. 5; when the 6th
connection arrives while none of the 5 pending ones has been accepted
yet, the 6th is discarded.

For a UDP receiver, however, there is a receive buffer and it buffers the
incoming packets. Because UDP is read one packet at a time, even if there
is more than one packet in the buffer we can only read one per receive
call.
The reading process is roughly as follows.
e.g. There are 5 threads, A B C D E.
A begins to receive --> locks the buffer memory (meanwhile the other
threads cannot lock it) --> reads the memory --> unlocks (during this
process the other threads cannot access the buffer).

Best regards,

Peter Huang
Microsoft Online Partner Support

Get Secure! - www.microsoft.com/security
This posting is provided "AS IS" with no warranties, and confers no rights.
 