RE: How to implement the Socket Timeout when receiving data?


Rich Hanbidge [MSFT]

Hi Anderson,
You are running into a problem with TCP itself. TCP will not detect
connection failures when you are receiving data, only when sending.

What you can do, though, is use Socket.Select. That will keep the
application from freezing.

You may be able to perform a Send of size zero to test the connection, but
I don't know whether that will work, and it may interfere with reads at the
other end.
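Rich's Socket.Select suggestion maps directly onto the BSD-style select()
call. A minimal sketch in Python, purely to illustrate the pattern (the
5-second timeout and 4096-byte read size are arbitrary choices):

```python
import select
import socket

def recv_with_poll(sock: socket.socket, timeout_s: float = 5.0) -> bytes:
    # select() waits until the socket is readable or timeout_s elapses.
    # An empty "readable" list means nothing arrived in time, so we can
    # report a timeout instead of freezing inside recv().
    readable, _, _ = select.select([sock], [], [], timeout_s)
    if not readable:
        raise TimeoutError(f"no data within {timeout_s} seconds")
    # The socket is readable: recv() returns immediately (b"" if the
    # peer closed the connection).
    return sock.recv(4096)
```

A recv() issued only after select() reports the socket readable never
blocks, so the application stays responsive even if the connection dies
while idle.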

Good luck!

Rich
This posting is provided "AS IS" with no warranties, and confers no rights.
--------------------
| From: "Anderson Takemitsu Kubota" <[email protected]>
| Subject: How to implement the Socket Timeout when receiving data?
| Date: Mon, 30 Jun 2003 12:24:38 -0300
| Lines: 24
| X-Priority: 3
| X-MSMail-Priority: Normal
| X-Newsreader: Microsoft Outlook Express 6.00.2800.1158
| X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1165
| Message-ID: <#[email protected]>
| Newsgroups: microsoft.public.dotnet.framework.compactframework
| NNTP-Posting-Host: 200.189.66.138
| Path: cpmsftngxa09.phx.gbl!TK2MSFTNGP08.phx.gbl!TK2MSFTNGP10.phx.gbl
| Xref: cpmsftngxa09.phx.gbl microsoft.public.dotnet.framework.compactframework:9493
| X-Tomcat-NG: microsoft.public.dotnet.framework.compactframework
|
| Hi!
|
| I read some posts in this group about this topic and saw that I need to
| implement the close in a timer event.
| Can anybody help me with this? I don't know how to do it.
|
| This is a sample code that I am using:
|
| while (true)
| {
|     bytes = cSocket.Receive(buffer, buffer.Length, 0);
|     if (bytes <= 0)   // 0 means the peer closed the connection
|         break;
|     output.Write(buffer, 0, bytes);
| }
|
| The problem is that if the connection is lost inside the while loop,
| cSocket.Receive never returns and the application freezes.
|
| Thank you.
|
| Anderson T. Kubota
|
|
|
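An alternative to the timer-driven close Anderson describes is a per-socket
receive timeout; Python's settimeout() below plays the role that
Socket.ReceiveTimeout later took on in the desktop .NET Framework. A sketch
only, with an arbitrary 2-second timeout and 4096-byte reads:

```python
import socket

def receive_all(sock: socket.socket, timeout_s: float = 2.0) -> bytes:
    # Every blocking call on this socket now raises socket.timeout if it
    # takes longer than timeout_s, so a dead connection can no longer
    # freeze the loop.
    sock.settimeout(timeout_s)
    chunks = []
    try:
        while True:
            data = sock.recv(4096)
            if not data:        # b"" means the peer closed cleanly
                break
            chunks.append(data)
    except socket.timeout:
        pass                    # nothing for timeout_s seconds; give up
    return b"".join(chunks)
```

Note that this bounds only the wait for *each* read; a peer that trickles
one byte per second keeps the loop alive indefinitely, so an overall
deadline still needs its own clock.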
 
Hi Rich!

First of all, thanks.
But I believe I handled this correctly using CSocket from eVC 3.0: it
returns an error when socket.receive or socket.send is called after the
connection drops.

Anderson T. Kubota
 
Yes, and that's a problem in many industrial applications of Ethernet
devices. It would be a *huge* improvement to WinSock if you could set the
keep-alive time-out on a socket-by-socket basis, with any socket whose value
wasn't explicitly set falling back to the registry entry for backward
compatibility. That would allow quick responsiveness to changing network
conditions in real-time applications, while minimizing network usage for
applications where the connectivity state isn't a critical parameter (Web
browsing).

Paul T.
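As an aside, some stacks do already expose per-socket keep-alive controls:
SO_KEEPALIVE is set per socket everywhere, and Linux adds TCP_KEEPIDLE,
TCP_KEEPINTVL and TCP_KEEPCNT so the two-hour default can be shortened for
one connection without touching the rest of the system. A hedged Python
sketch (the TCP_* options are platform-specific, and the 60 s / 10 s /
5-probe values are arbitrary):

```python
import socket

def enable_keepalive(sock: socket.socket, idle_s: int = 60,
                     interval_s: int = 10, probes: int = 5) -> None:
    # Turn keep-alive probing on for this socket only.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The per-socket timing knobs are not universal, so probe for them;
    # on platforms without them the stack-wide defaults still apply.
    if hasattr(socket, "TCP_KEEPIDLE"):    # idle time before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
    if hasattr(socket, "TCP_KEEPINTVL"):   # gap between unanswered probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
    if hasattr(socket, "TCP_KEEPCNT"):     # probes before declaring it dead
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
```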

Rich Hanbidge said:
Awesome explanation, Paul!

I think the default keep-alive timeout is very long, though, and it's set
at the system level. That is, if you lower the keep-alive timeout, the
change is system-wide and starts to eat bandwidth.

From MSDN:
TCP Keep-Alive Messages
A TCP keep-alive packet is simply an ACK with the sequence number set to
one less than the current sequence number for the connection. A host
receiving one of these ACKs will respond with an ACK for the current
sequence number. Keep-alives can be used to verify that the computer at the
remote end of a connection is still available. Windows 2000 TCP keep-alive
behavior can be modified by changing the values of the KeepAliveTime and
KeepAliveInterval registry entries
(HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters). TCP keep-alives
can be sent once for every interval specified by the value of KeepAliveTime
(defaults to 7,200,000 milliseconds, or two hours) if no other data or
higher level keep-alives have been carried over the TCP connection. If
there is no response to a keep-alive, it is repeated once every interval
specified by the value of KeepAliveInterval in seconds. By default, the
KeepAliveInterval entry is set to a value of one second.
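Back-of-the-envelope arithmetic on those defaults, assuming five unanswered
probes before the stack declares the peer dead (the actual probe count is
governed by a separate setting, so five is an illustrative assumption):

```python
keep_alive_time_ms = 7_200_000   # KeepAliveTime default: two hours
keep_alive_interval_ms = 1_000   # KeepAliveInterval default: one second
assumed_probes = 5               # illustrative assumption, see note above

# Worst case: the peer dies just after the last real packet, so detection
# waits out the full idle period plus all the unanswered probes.
worst_case_ms = keep_alive_time_ms + assumed_probes * keep_alive_interval_ms
# Just over two hours; far too slow for real-time monitoring, which is
# why lowering the timeout (or using application-level pings) matters.
```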

Cheers,
Rich
This posting is provided "AS IS" with no warranties, and confers no rights.
--------------------
| From: "Paul G. Tobey [eMVP]" <[email protected]>
| References: <#[email protected]> <[email protected]> <[email protected]>
| Subject: Re: How to implement the Socket Timeout when receiving data?
| Date: Fri, 11 Jul 2003 09:44:24 -0700
| Lines: 114
| X-Priority: 3
| X-MSMail-Priority: Normal
| X-Newsreader: Microsoft Outlook Express 6.00.2800.1158
| X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1165
| Message-ID: <[email protected]>
| Newsgroups: microsoft.public.dotnet.framework.compactframework
| NNTP-Posting-Host: s2.instrument.client.aces.net 198.182.119.2
| Path: cpmsftngxa06.phx.gbl!TK2MSFTNGP08.phx.gbl!TK2MSFTNGP10.phx.gbl
| Xref: cpmsftngxa06.phx.gbl microsoft.public.dotnet.framework.compactframework:28009
| X-Tomcat-NG: microsoft.public.dotnet.framework.compactframework
|
| It's possible for you to detect a lost connection on receive, but, in order
| to see the limitation, you have to understand a little about what happens
| when you are sending and receiving.
|
| Send
| When you send, the packet is sent and TCP starts a timer. If it doesn't
| receive an acknowledgement from the other end of the connection within some
| time-out, it assumes that the connection is lost (it actually retransmits a
| few times, but you get the idea). When this time-out occurs, send() returns
| an error and the connection is marked 'down'.
|
| Receive
| When you ask to receive some packet, TCP doesn't know when to expect it, so
| there's no time-out to be set. If the connection should happen to die after
| the packet *is* received, but before the acknowledgement can be sent out, or
| if the acknowledgement is sent out, but the original sender doesn't send the
| acknowledgement of the acknowledgement (yes, it's *really* a lot of
| handshaking), then the loss of connection can be detected, the socket will
| be marked as 'down' and the current or maybe the next recv() will fail.
|
| If the connection is idle, however, or if you call recv() during an idle
| period on the connection, neither end can possibly know whether the other
| end is there or not. For this purpose, keep-alive packets were added to the
| TCP specification. If a socket has been idle for some period (the standard
| is two hours, so don't expect fast real-time response), each end will try to
| send an empty packet to the other end, just to make sure he's still there.
| If an acknowledgement is received before the standard time-out, then the
| connection is marked as good and, some time later, another keep-alive will
| be tried to make sure it's still good. If the acknowledgement does not
| occur, the socket will be marked as down (again, there are retransmits, not
| just a single packet, but we'll gloss over that), and the next recv() or
| send() on either end will fail. However, since keep-alives take up network
| bandwidth, they are not on by default. You have to turn them on. In
| WinSock, you pass SO_KEEPALIVE to setsockopt() to do this.
|
| Paul T.
| [rest of quoted thread snipped; the messages appear in full above]
 