TcpClient in .NET 2.0

Keith Langer

Maybe someone has come across this situation and has a way to handle
it.

If I have a TcpClient object and do a GetStream().Read on it after
setting a ReceiveTimeout, I'm getting different behavior in .NET 2.0
than in 1.0 when the method returns 0 bytes (which signifies a
timeout). In both cases I get a System.IO.IOException, but in the
2.0 framework the socket also disconnects when this occurs (in 1.0 it
did not disconnect). They have made some improvements in the
TcpClient class, such as exposing the underlying socket, so I'd prefer
to keep using TcpClient. Any idea how I can prevent this
disconnect? Right now I'm checking DataAvailable and using
Thread.Sleep as a workaround, but I'd prefer to do a blocking read
call.


Thanks,
Keith
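
For illustration, a rough C# sketch of the pattern described above (the
host, port, buffer size, and timeout are placeholders):

using System;
using System.IO;
using System.Net.Sockets;

class ReceiveTimeoutExample
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("example.com", 9000))
        {
            // Ask blocking reads to give up after 5 seconds.
            client.ReceiveTimeout = 5000;

            NetworkStream stream = client.GetStream();
            byte[] buffer = new byte[4096];
            try
            {
                // Blocks until data arrives or the receive timeout elapses.
                int read = stream.Read(buffer, 0, buffer.Length);
                Console.WriteLine("Read {0} bytes", read);
            }
            catch (IOException ex)
            {
                // Per the behavior described above, .NET 2.0 also drops
                // the connection here; .NET 1.0 left it open.
                Console.WriteLine("Receive timed out: " + ex.Message);
            }
        }
    }
}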
 
[...] In both cases I get a System.IO.IOException, but in the
2.0 framework the socket also disconnects when this occurs (in 1.0 it
did not disconnect). They have made some improvements in the
TcpClient class, such as exposing the underlying socket, so I'd prefer
to keep using TcpClient. Any idea how I can prevent this
disconnect?

I don't know how they implement the timeout, but assuming they use the
underlying socket timeout, then there's a good reason for disconnecting
the socket after a timeout, as the socket is in an indeterminate state at
that point.

So, it's likely that this change in behavior is actually a usability
bug-fix, so that inexperienced socket programmers don't go trying to
continue to use a socket on which a timeout has occurred.

Note that this receive timeout is different from using a timeout in other
methods (e.g. Socket.Select()). You can implement a timeout in a variety
of other ways that don't invalidate the socket. But the ReceiveTimeout
property likely sets the socket's receive timeout option directly
(setsockopt(..., SO_RCVTIMEO, ...)), and that's what will invalidate the
socket.
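
For example, a minimal sketch of one such alternative (assuming a
connected TcpClient named "client", and waiting with Socket.Poll()
instead of relying on the receive-timeout option):

using System;
using System.Net.Sockets;

class PollTimeoutExample
{
    // Wait for readability with Socket.Poll() instead of SO_RCVTIMEO,
    // so a timeout leaves the connection usable.
    static int ReadWithTimeout(TcpClient client, byte[] buffer, int timeoutMs)
    {
        Socket socket = client.Client; // exposed publicly in .NET 2.0

        // Poll takes microseconds; it returns true when the socket is
        // readable (data or a remote close) and false if the interval
        // elapses first.
        if (!socket.Poll(timeoutMs * 1000, SelectMode.SelectRead))
        {
            return -1; // timed out, but the socket is still connected
        }

        // Something is readable: either data or a graceful close (0 bytes).
        return client.GetStream().Read(buffer, 0, buffer.Length);
    }
}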

Pete
 