Engineerik
I have created a socket server and a client which both run on the same
computer. The socket connection is done over localhost port 11050.
The server and the client are based on the sample asynchronous socket server
example code from the MSDN library included with Visual Studio 2005.
The code works fine when run on Windows XP.
If the applications are run on a Windows 2000 computer there seems to be a
problem passing strings over 1440 characters in length.
When a return string (len>1440) is sent from the server to the client, only
the first 1440 characters actually travel to the client. I have used a
tcpspy program to confirm that the 1440 characters received at the client are
indeed all that left the server.
When I pass new input from the client to the server the server sends more
data back to the client but what arrives at the client is the remainder of
the string from the previous call.
From that point on all the communication between the client and server is
out of sync.
Sometimes a subsequent send from the server to the client will end up
pushing both the previous result and the current output so that the server
"catches up" to the client but this behavior is unpredictable.
Whenever the client and server are "caught up" with each other, a string
passed from the server to the client with a length greater than 1440 will
result in only the first 1440 characters being sent, and the client and
server are out of sync again.
Anyone know why this would happen on win2k?
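For context on the symptom above: TCP is a byte stream, so one Send() on the server does not necessarily arrive as one Receive() on the client; 1440 bytes is simply the payload that fits in a single Ethernet-sized TCP segment, and Windows 2000 just happens to deliver the data to the application one segment at a time. The usual remedy is for the receiver to loop until a complete message has arrived, using a length prefix (or a delimiter) to mark message boundaries. The following is a minimal Python sketch of that idea over localhost, not the original C# sample; the helper name `recv_exact` and the 4-byte length prefix are illustrative choices:

```python
# Sketch: a receive loop that reassembles a message larger than one TCP
# segment. One sendall() on the server may arrive as several recv() calls
# on the client; a 4-byte length prefix tells the client when to stop.
import socket
import struct
import threading

def recv_exact(sock, n):
    """Loop on recv() until exactly n bytes have arrived."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before full message")
        buf += chunk
    return buf

payload = b"x" * 5000  # longer than one 1440-byte TCP segment

# Server on localhost; port 0 picks a free port (the question used 11050).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    with conn:
        # Prefix the message with its big-endian 4-byte length.
        conn.sendall(struct.pack("!I", len(payload)) + payload)

t = threading.Thread(target=serve)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    (length,) = struct.unpack("!I", recv_exact(cli, 4))
    message = recv_exact(cli, length)

t.join()
srv.close()
```

Without a framing convention like this, two messages queued back-to-back are indistinguishable from one long message, which would produce exactly the "remainder of the previous call" desynchronization described above.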