I'm facing quite a strange problem with a network server application.
This is quite a complex project, involving embedded roaming clients that
send data to a central server over GPRS modems; only the server part is
being developed in .NET, while the clients use C-language firmware running
on microcontrollers.
I'm not quite sure the clients' TCP/IP libraries really follow every
existing standard, so I can't be sure the problem isn't there (I'm quite
sure of the opposite, actually)... but they can connect without trouble to
any other TCP/IP network server in the world (ok, maybe that's exaggerating
a bit, but you get the idea), so I think there must be some problem in the
server code.
The server is quite simple: it just sits there, waits for client
connections, accepts them and starts reading from the sockets until a server
shutdown is requested or the connection breaks. It never sends any data,
because the client-server protocol is unidirectional, and it never closes
the connection unless preliminary authentication fails or an error is
detected.
This is (roughly) the server's code:
----------
using System;
using System.Net;
using System.Net.Sockets;

int firstpacketsize = 10;
int port = 42;
Socket serversocket = null;

void Start()
{
    serversocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    serversocket.Bind(new IPEndPoint(IPAddress.Any, port));
    serversocket.Listen(10);
    serversocket.BeginAccept(AcceptCallback, null);
}

void AcceptCallback(IAsyncResult ar)
{
    Socket clientsocket = serversocket.EndAccept(ar);
    string ip = ((IPEndPoint)clientsocket.RemoteEndPoint).Address.ToString();
    Console.WriteLine("Connection started from {0}", ip);
    byte[] packet = new byte[firstpacketsize];
    Console.WriteLine("Waiting for first packet");
    int r = clientsocket.Receive(packet, firstpacketsize, SocketFlags.None);
    // ...
    // Some error checking and client authentication code
    // ...
    // Now a new ClientConnection object is created and starts its own
    // asynchronous data reading and processing
    // ...
    serversocket.BeginAccept(AcceptCallback, null);
}
----------
This code works perfectly when the client is a .NET program, but crashes
when a connection is made by the "real" client. A SocketException is thrown
during the first clientsocket.Receive(), complaining about the connection
being forcibly closed by the remote host. But the worst is still to come:
after this error (and the subsequent clientsocket.Close()),
serversocket.BeginAccept() throws an exception too!
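In the meantime, to at least surface the real error, I'm thinking of
restructuring the callback like this (just an untested sketch on my part,
using the same fields as the snippet above): the idea is to log the
SocketException's error code and to re-arm the accept in a finally block,
so one bad client connection can't take the listener down with it.
----------
// Untested sketch, same fields as above: log the socket error code and
// re-issue the accept in a finally block, so one failing connection
// cannot stop the listener from accepting the next one.
void AcceptCallback(IAsyncResult ar)
{
    Socket clientsocket = null;
    try
    {
        clientsocket = serversocket.EndAccept(ar);
        byte[] packet = new byte[firstpacketsize];
        int r = clientsocket.Receive(packet, firstpacketsize, SocketFlags.None);
        // ... authentication, ClientConnection creation ...
    }
    catch (SocketException ex)
    {
        // ErrorCode 10054 (WSAECONNRESET) is the "forcibly closed" error.
        Console.WriteLine("Socket error {0}: {1}", ex.ErrorCode, ex.Message);
        if (clientsocket != null)
            clientsocket.Close();
    }
    finally
    {
        // Always re-arm the accept, whatever happened to this client.
        serversocket.BeginAccept(AcceptCallback, null);
    }
}
----------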
The client, of course, didn't close anything... or at least didn't mean to.
I think there must be some nasty bug in the client's TCP/IP libraries,
because the server works flawlessly with a .NET client... but, as I said,
the clients seem to work fine when connecting to other network servers, so
maybe the problem's here and I'm not correctly handling some error that
usually doesn't happen but sometimes does.
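Another change I'm considering (again, only a sketch under the same
assumptions) is to re-issue BeginAccept before touching the client socket
at all, and to make the first read asynchronous too, so that whatever goes
wrong on a single connection can't interact with the listening socket:
----------
// Sketch: re-arm the listener first, then read the first packet
// asynchronously, keeping the accept path and the per-client path apart.
void AcceptCallback(IAsyncResult ar)
{
    Socket clientsocket = serversocket.EndAccept(ar);
    serversocket.BeginAccept(AcceptCallback, null); // listener re-armed immediately

    byte[] packet = new byte[firstpacketsize];
    clientsocket.BeginReceive(packet, 0, firstpacketsize, SocketFlags.None,
        delegate(IAsyncResult result)
        {
            try
            {
                int r = clientsocket.EndReceive(result);
                // ... authentication, ClientConnection creation ...
            }
            catch (SocketException ex)
            {
                Console.WriteLine("First read failed: {0}", ex.Message);
                clientsocket.Close();
            }
        },
        null);
}
----------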
What can I do to understand what's really happening here?
Do you see any flaw in my server code that could account for this behaviour
of clientsocket.Receive()?
And how can an error in clientsocket.Receive() cause another error in
serversocket.BeginAccept()?!?
I'm really puzzled here...
Massimo