Socket data out of order

  • Thread starter: Lee Gillie

Lee Gillie

I am using asynchronous socket receives and writing the data to a disk
file. I am finding all of the data makes it to the file, but it is
SLIGHTLY out of order.

The approach is to create a pool of contexts, each of which has a 1K byte
buffer. Then I start it all by getting a buffer from the pool and doing a
BeginReceive on the socket.

In the receive completion handler I call EndReceive.
THEN I get a buffer from the free pool and invoke BeginReceive again.
If my buffer pool is exhausted I allocate a new buffer rather than
stall.
THEN I invoke BeginWrite to write the just-received data to the disk file.

In the write completion handler I return the context and buffer to my
buffer pool.

I am managing my buffer pool in a .NET "Queue". I protect every Enqueue
and Dequeue with a SyncLock on the Queue object. The Enqueues and Dequeues
seem to have integrity, in that the correct number of buffers is back in
the queue when the whole thing winds down.
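The pool described above can be sketched like this; this is an illustrative Java version (the original is .NET, and all names here are made up for the example), showing a queue of reusable buffers with every take/return guarded by a lock on the queue, growing on demand rather than stalling:

```java
import java.util.ArrayDeque;

// Minimal sketch of the buffer pool described above: a queue of reusable
// 1K buffers; every acquire/release is synchronized on the queue object,
// and the pool allocates a fresh buffer instead of stalling when empty.
class BufferPool {
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private final int bufferSize;

    BufferPool(int initialCount, int bufferSize) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < initialCount; i++) {
            free.addLast(new byte[bufferSize]);
        }
    }

    // Take a buffer; if the pool is exhausted, allocate a new one.
    byte[] acquire() {
        synchronized (free) {
            byte[] b = free.pollFirst();
            return (b != null) ? b : new byte[bufferSize];
        }
    }

    // Return a buffer to the pool after its disk write completes.
    void release(byte[] b) {
        synchronized (free) {
            free.addLast(b);
        }
    }

    // Current free count, for checking integrity at wind-down.
    int size() {
        synchronized (free) {
            return free.size();
        }
    }
}
```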

Without the disk writing I get about 60 MBits / second maximum received
on the socket. When I involve the disk writing it hunkers down to about
22 MBits / second. My pool size is initially 25 contexts/buffers, but
typically grows to about 80-90 with very large and very fast transmissions.

I don't queue up multiple simultaneous socket receives. I know that
doing that could easily result in EndReceives firing out of order with
the data stream. But obviously I am queueing up to about 80-90 pending
disk writes. But they SHOULD be queued up in the order they were
received. And I understand they should then end up in the disk file in
the correct order. I suspect something about this aspect of writing to
the disk is what is getting the data out of order.

Can you see what I am doing wrong? Or perhaps there are some known bugs
in the Framework? Maybe I need a SyncLock that entirely covers the
segment of code starting with the EndReceive and ending with the BeginWrite?
 
Lee said:
Maybe I need a SyncLock that entirely covers the
segment of code starting with the EndReceive and ending with the
BeginWrite?

I did this. The data is now in the correct order in the disk file. But it
further throttled the throughput down to about 13 Mbits per second, and it
no longer dynamically allocates additional buffers. Is this the
very best I can do?

- Lee
 
I think it's the BeginWrite that's causing the problem. Each
BeginWrite runs on its own threadpool thread, so there is no guarantee
they will be executed in the same order in which they were fired.

OTOH, I don't think blocking further EndReceives till the BeginWrite
completes is a good solution. I'd suggest creating a separate thread
waiting on a single queue, that synchronously writes items posted
to the queue out to the disk.

This way, the socket handling code would simply post a message to the
queue and continue receiving other messages, while your thread writes
the posted messages in the correct order.
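This is the classic single-consumer pattern; here's a minimal Java sketch of the idea (the original code is .NET, and all names here are illustrative): the receive handler posts each chunk to a blocking queue and returns immediately, while one dedicated thread drains the queue and writes synchronously, so chunks hit the file in post order:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One writer thread drains a FIFO queue and writes each chunk
// synchronously, so chunks reach the file in the order they were posted.
class OrderedDiskWriter implements Runnable {
    // Sentinel object telling the writer thread to shut down.
    private static final byte[] STOP = new byte[0];
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final String path;

    OrderedDiskWriter(String path) {
        this.path = path;
    }

    // Called from the receive handler: post the chunk and return at once.
    void post(byte[] chunk) throws InterruptedException {
        queue.put(chunk);
    }

    void shutdown() throws InterruptedException {
        queue.put(STOP);
    }

    @Override
    public void run() {
        try (FileOutputStream out = new FileOutputStream(path)) {
            while (true) {
                byte[] chunk = queue.take(); // blocks until a chunk arrives
                if (chunk == STOP) break;
                out.write(chunk);            // synchronous write, FIFO order
            }
        } catch (IOException | InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The receive handler would call post() right after its EndReceive, so the posting order matches the receive order, and no disk I/O happens on the socket completion threads.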

Hope this helps.

Regards
Senthil
 
S. Senthil Kumar said:
I'd suggest creating a separate thread waiting on a single queue, that
synchronously writes items posted to the queue out to the disk.

I like your strategy. I am guessing I can get most of my potential
throughput capacity back in this way. I'll try it tomorrow.

I eventually figured out that my receive handler was being interrupted
by another receive completion before the disk write was queued. I feel
pretty certain that writes are performed in the order they are queued; I
have stacked them up before in other work and preserved data order. But it
is vital to protect the code between the end-receive and the begin-write
so that interruption does not occur; otherwise the writes are not queued
in the correct order. I don't know if I can start a SyncLock before the
end-receive call, though. It occurs to me that interruption could occur on
the very next line of code. Although I never actually saw this happen,
there is not much code between my original end-receive and
begin-write calls.

Your strategy allows me to effectively queue the write on almost the very
next line of code after the end-receive, reducing the interruption window.
I think it is the early queuing of the next begin-receive that opens the
opportunity for interruption.

It is not clear to me how we are supposed to protect the servicing of an
end-receive from interruption 100% of the time without throttling back
the throughput. If one end-receive handler is interrupted by another
end-receive on a different thread, it seems likely the interrupting one
will complete before the first.
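One way to tolerate that kind of preemption entirely (not from the thread, just a common technique sketched here in Java with illustrative names) is to stamp each chunk with a sequence number the moment it is received, and let the writer hold out-of-order chunks in a small reorder buffer until the next expected one arrives:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Reorder buffer: chunks may arrive tagged out of order, but are released
// for writing strictly in sequence-number order.
class ReorderBuffer {
    private final Map<Long, byte[]> pending = new HashMap<>();
    private long nextToWrite = 0;

    // Accept a chunk with its sequence number; return every chunk that is
    // now writable, in order (empty if we are still waiting for a gap).
    synchronized List<byte[]> accept(long seq, byte[] chunk) {
        List<byte[]> ready = new ArrayList<>();
        pending.put(seq, chunk);
        while (pending.containsKey(nextToWrite)) {
            ready.add(pending.remove(nextToWrite));
            nextToWrite++;
        }
        return ready;
    }
}
```

With this in place, the ordering no longer depends on completion handlers running to completion uninterrupted; it only requires that the sequence number be assigned atomically with the receive.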

- Lee
 