Dave Sexton
Hi William,
Thanks for your response.
| 1. How often will a blocking Send block, and for how long?
socket.SendTimeout (also paired ReceiveTimeout)
That doesn't answer "How often will a blocking Send block?", although it does answer, "for how long?".
| 2. I understand this depends on the size of the buffer, so how big is the kernel buffer?
socket.SendBufferSize (and ReceiveBufferSize)
I was looking for an actual value but I found it in the docs for TcpClient.SendBufferSize on MSDN: "The default value is 8192
bytes".
You said in another branch of this thread (a few times) that Send will block waiting for an ACK once a certain number of packets
have been sent without one, and that you believe that number to be 2. Did I understand that correctly?
Given that the IP header + TCP header is typically 40 bytes [http://en.wikipedia.org/wiki/Transmission_Control_Protocol], plus the
10 bytes of data being sent by Send in the example from the OP, the example was sending roughly 50 bytes per packet, with one packet
each iteration. Is that correct? (Does the send buffer size include the size of the headers as well?)
Therefore, the first Send was obviously not filling up the 8192-byte send buffer. The first Send returned immediately because it
did not wait for an ACK, even though it was the "blocking" type. The second Send failed because the RST was already in the stack by
the time it was executed. Good so far?
I revised your example yet again (the code is at the end of my post):
1. I set Socket.Blocking to false.
2. I changed the number of bytes sent per iteration to 1.
3. I hard-coded "1" as the length of bytes being sent in the Send method.
4. I disregarded the return value from Send.
5. I removed the 100-millisecond wait completely, and even removed the call to WriteLine, replacing it with code that compares the
SocketError to ConnectionReset and simply breaks when equal.
6. I increased the number of iterations to 100.
The second Send still fails with ConnectionReset. I thought for sure that I could get the example, somehow, to Send more than once
before failing, even if it would be only twice due to your wait-for-ACK explanation, but I could not. I assume that having the
server and client run on the same machine (and even within the same process) creates no latency in the server's response with RST,
so the second Send will always fail no matter what I do. I wonder if testing this code on separate machines would produce the
expected results: at least two Sends complete before a subsequent Send fails with ConnectionReset. Goran's illustration, along with
everyone's explanations, seems to indicate that it's possible and even likely in a truly distributed application.
| 3. Is the size of the buffer affected by the Nagle algorithm in any way?
Don't think so, but not sure.
Upon further reading it seems that Nagle affects the size of the packets. I guess if the buffer size is not affected then the Nagle
algorithm has to work within that constraint. I thought the Nagle algorithm might have been playing a part in the behavior of your
example, but Nagle would, if anything, prevent Send from sending immediately, which doesn't square with the second Send failing, so
I no longer believe that Nagle is playing any part.
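For anyone who wants to rule Nagle out explicitly rather than by reasoning, it can be disabled per socket with the NoDelay property
(which, as far as I can tell, only changes how small segments are coalesced, not the size of the send buffer). A one-line sketch
against the client socket s from the example:

s.NoDelay = true; // disable the Nagle algorithm for this socket only
// equivalently: s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);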
| 4. Does the size of the buffer fluctuate, or can it be changed programmatically?
See above.
Got it, thanks.
| 5. If a blocking Send isn't waiting for a response from the server why not just write the buffer directly into unmanaged memory
| (or pin a copy) and return immediately to the caller? i.e., why block at all?
It does not block if buffer space is available. If space is available, it copies the user buf and returns N. Non-blocking socket
mode gets a little more complex. It will copy up to the point it has space for and return N or something < N, then your code needs
to send the rest of the buf.
So a blocking Send will always return the number of bytes sent as long as SocketError is Success; otherwise it seems to return zero.
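And just to check that I follow the non-blocking case, this is how I picture the "send the rest of the buf" part; it is only my
sketch of the pattern, reusing the s and buf names from the example and the Send overload that reports a SocketError instead of
throwing:

int offset = 0;

while (offset < buf.Length)
{
    SocketError se;
    int sent = s.Send(buf, offset, buf.Length - offset, SocketFlags.None, out se);

    if (se == SocketError.WouldBlock)
    {
        // The kernel buffer is full right now; wait until the socket is writable again.
        s.Poll(-1, SelectMode.SelectWrite);
        continue;
    }

    if (se != SocketError.Success)
        break; // e.g. ConnectionReset once the server's RST has arrived

    offset += sent;
}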
| I take it that this is what BeginSend does?
BeginSend does not copy the user buffer, but keeps it pinned and the driver uses the user buffer directly. Another reason why
BeginSend can be more efficient as there is no buffer copy overhead. In a busy system, this can be a drain. Not sure if there is
ever a case where it does a copy and releases the user's buffer?
Interesting. So the buffer is effectively volatile while a send is pending and must be synchronized. This means that either a copy
of the buffer has to be made before calling BeginSend, a different buffer must be used for each call, or write access to the buffer
must be synchronized with all calls to BeginSend, although that last option does defeat the purpose of an asynchronous method.
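So, if I'm following, the safe patterns are either a private copy per call or a dedicated buffer per call. A rough sketch of the
copy approach (sharedBuffer and length are just placeholder names for illustration):

byte[] copy = new byte[length];
Buffer.BlockCopy(sharedBuffer, 0, copy, 0, length);

s.BeginSend(copy, 0, length, SocketFlags.None, delegate(IAsyncResult ar)
{
    SocketError se;
    int sent = s.EndSend(ar, out se);

    // 'copy' stays pinned until the send completes, but sharedBuffer can be
    // modified freely in the meantime.
}, null);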
So BeginSend returns immediately to the caller and never waits on an ACK. A non-blocking Send only buffers as much as it can at the
time it's called (and never waits on an ACK?), and a blocking Send will buffer everything before returning to the caller but
sometimes waits on an ACK first before returning.
Does EndSend behave like a blocking or non-blocking Send with respect to the return value and whether it waits on ACK?
| 6. The example in the OP attempts to send 10 bytes a few times, synchronously, and it seems that the second Send always failed
| after RST in my testing. Will increasing or decreasing the number of bytes sent in the first Send cause this behavior to change?
| In other words, if the first send no longer blocks (if it currently is blocking, depending on the size of the buffer and the
| number of bytes sent), is it possible that the second Send will not always fail because the time it has taken to normally Send
| has decreased even if the time it takes to receive the RST has remained the same?
Interestingly, if you set SendBufferSize to 0 in the code we are talking about, on my tests, the *first* send does throw the error.
So it would seem, it is blocking for the ACK because of this zero buffer. Try it out and see if you see the same.
I verified your results, but I also tried setting the SendBufferSize to 10 and the byte array to 11, and the second Send failed as
usual. This means that overflowing the buffer doesn't necessarily cause Send to wait for an ACK. Maybe a zero buffer does, but that
really doesn't prove anything.
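For reference, the zero-buffer variation you describe only needs one extra line in the client (presumably with Blocking left at
true so that Send waits rather than reporting WouldBlock):

s.SendBufferSize = 0; // no kernel-side send buffering
// With nowhere to queue the data, a blocking Send apparently cannot complete from a local
// copy, which would explain why the *first* Send surfaces the reset in your test.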
| I only ask the last question because it seems to me that this behavior is really unpredictable and that no real example can be
| written that will function identically on each individual computer. In other words, it's impossible to understand this behavior
| only through testing.
But I think we are talking about an error in "our" protocol so not sure this matters. The connection is implicitly shut down
half-way by the server. The server can send and the client can receive - all good. The client should not be sending anyway since
it should "know" the state of the protocol - hence the error in our protocol. The client only knows explicitly after it tries a
send and gets the ACK with the RST set.
Yes, somebody (I think Alan) mentioned that as well in a previous post. I just assumed that it would be valid, even if it would be
a poor design, to have the client of one's protocol figure out whether the server had shut down receiving for any reason by trying
to send data. I guess it would be better in that case to send a few bits to the client warning it of such an event. Even so, it's
nice to understand this aspect of TCP just to be well-rounded, and this topic still applies to the OP, if I understood it correctly
(Socket weirdness)!
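Something like the following is what I had in mind for the warning, on the server side just before the half-close (the 0xFF control
code is made up purely for illustration; it would have to be defined by "our" protocol):

const byte ServerStoppingReceives = 0xFF;            // hypothetical control code
socket.Send(new byte[] { ServerStoppingReceives });  // the server->client direction is still open
socket.Shutdown(SocketShutdown.Receive);             // now stop accepting data from the client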
My reasoning for that last statement ("it's impossible to understand this behavior only through testing") was simply to say that
it's difficult to infer the mechanisms used by TCP that cause the behavior of your example without the help of some shared knowledge.
| RTFM is acceptable. Just where is the manual, exactly? I'll check out TCP I as William recommended, but if there is a genuine
| manual that describes the protocol on the web somewhere I'd like to know.
You can also read the RFCs (e.g. 793, 3168)
ftp://ftp.rfc-editor.org/in-notes/rfc793.txt
Perfect! Thanks. (I have to admit that I didn't even think to look for an RFC. I feel ashamed.)
Revised code sample where the second call to Send always fails (at least when the client and server are executed within the same
process):
// Requires at the top of the file:
// using System; using System.Net; using System.Net.Sockets; using System.Threading;

private static readonly EventWaitHandle waitForAsyncSend = new EventWaitHandle(false, EventResetMode.ManualReset);

private static void SocketTest()
{
    // Server.
    TcpListener l = new TcpListener(IPAddress.Any, 9001);
    l.Start();

    new Thread(delegate()
    {
        using (Socket socket = l.AcceptSocket())
        {
            socket.Shutdown(SocketShutdown.Receive);
            WriteLine("Server shutdown receive.");

            waitForAsyncSend.WaitOne();

            // Expecting blocks of 1 byte each.
            WriteLine("Server about to poll for data");

            // Examine the first batch.
            if (socket.Poll(8000000, SelectMode.SelectRead))
            {
                byte[] buffer = new byte[1];

                try
                {
                    int read = socket.Receive(buffer);
                    WriteLine("Server read bytes: " + read);
                }
                catch (SocketException ex)
                {
                    if (ex.ErrorCode == 10053)   // WSAECONNABORTED
                    {
                        WriteLine("Server read error: " + ex.SocketErrorCode.ToString());
                    }
                    else
                        throw;
                }
            }

            WriteLine("Closing client connection");
        }

        WriteLine("Server stopping");
        l.Stop();
    }).Start();

    // Client.
    byte[] buf = new byte[1];

    using (Socket s = new Socket(AddressFamily.InterNetwork,
        SocketType.Stream, ProtocolType.Tcp))
    {
        s.Blocking = false;
        WriteLine("Blocking mode:{0}", s.Blocking);

        s.BeginConnect(IPAddress.Loopback, 9001, delegate(IAsyncResult result)
        {
            s.EndConnect(result);

            Thread.Sleep(1000);

            SocketError se = SocketError.Success;
            int i = 0;

            for (; i < 100; i++)
            {
                s.Send(buf, 0, 1, SocketFlags.None, out se);

                if (se == SocketError.ConnectionReset)
                    break;
            }

            WriteLine("Failed iteration: " + i);

            /*
            // Note the different results with async send.
            int read = 0;
            IAsyncResult ar = s.BeginSend(buf, 0, buf.Length, SocketFlags.None, out se, null, null);
            WriteLine("Non-blocking SocketError: " + se.ToString());

            if (ar != null)
                read = s.EndSend(ar); // ar is null.

            WriteLine("Non-blocking bytes written to kernel:{0}", read);
            */

            waitForAsyncSend.Set();
        }, null);

        waitForAsyncSend.WaitOne();
        Thread.Sleep(500);

        Console.WriteLine("Press 'Enter' to exit");
        Console.ReadLine();
    }
}

private static readonly object sync = new object();

private static void WriteLine(string message)
{
    lock (sync)
    {
        Console.WriteLine(message);
        Console.WriteLine();
    }
}

private static void WriteLine(string format, params object[] args)
{
    lock (sync)
    {
        Console.WriteLine(format, args);
        Console.WriteLine();
    }
}