Lee Gillie
I am using asynchronous socket receives and writing the received data
to a disk file. I am finding that all of the data makes it to the file,
but it is SLIGHTLY out of order.
The approach is to create a pool of contexts, each of which has a 1K
buffer. I start everything off by getting a buffer from the pool and
doing a BeginReceive on the socket.
In the receive completion handler I call EndReceive. THEN I get a
buffer from the free pool and invoke BeginReceive again. If the buffer
pool is exhausted I allocate a new buffer rather than stall. THEN I
invoke BeginWrite to write the just-received data to the disk file. In
the write completion handler I return the context and buffer to the
pool.
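Roughly, that receive path looks like the following (a simplified
sketch, not my exact code; ReceiveContext, mSocket, mFileStream and the
pool helpers are illustrative names, with the pool helpers shown a bit
further down):

Imports System.IO
Imports System.Net.Sockets

' Each pooled context owns one 1K buffer (illustrative).
Friend Class ReceiveContext
    Public Buffer() As Byte
    Public Sub New(ByVal size As Integer)
        Buffer = New Byte(size - 1) {}
    End Sub
End Class

' Receive completion: re-arm the socket first, then queue the write.
Private Sub OnReceive(ByVal ar As IAsyncResult)
    Dim ctx As ReceiveContext = DirectCast(ar.AsyncState, ReceiveContext)
    Dim bytesRead As Integer = mSocket.EndReceive(ar)

    ' Get a fresh buffer and post the next receive immediately.
    Dim nextCtx As ReceiveContext = GetContextFromPool()
    mSocket.BeginReceive(nextCtx.Buffer, 0, nextCtx.Buffer.Length, _
                         SocketFlags.None, AddressOf OnReceive, nextCtx)

    ' THEN queue the just-received bytes to the disk file.
    mFileStream.BeginWrite(ctx.Buffer, 0, bytesRead, AddressOf OnWrite, ctx)
End Sub

' Write completion: recycle the context and its buffer.
Private Sub OnWrite(ByVal ar As IAsyncResult)
    Dim ctx As ReceiveContext = DirectCast(ar.AsyncState, ReceiveContext)
    mFileStream.EndWrite(ar)
    ReturnContextToPool(ctx)
End Sub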
I am managing my buffer pool with a .NET "Queue". I protect every
Enqueue and Dequeue with SyncLock on the Queue object. The Enqueues and
Dequeues seem to have integrity, in that the correct number of buffers
is back in the queue when the whole thing winds down.
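The pool management amounts to this (again a simplified sketch; my real
contexts carry more state than just the buffer):

' Free pool of contexts, guarded by SyncLock on the Queue itself.
Private mPool As New System.Collections.Queue()

Private Function GetContextFromPool() As ReceiveContext
    SyncLock mPool
        If mPool.Count > 0 Then
            Return DirectCast(mPool.Dequeue(), ReceiveContext)
        End If
    End SyncLock
    ' Pool exhausted: grow it rather than stall the receive loop.
    Return New ReceiveContext(1024)
End Function

Private Sub ReturnContextToPool(ByVal ctx As ReceiveContext)
    SyncLock mPool
        mPool.Enqueue(ctx)
    End SyncLock
End Sub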
Without the disk writing I get about 60 Mbits/second maximum received
on the socket. When I add the disk writing it hunkers down to about 22
Mbits/second. My pool starts at 25 contexts/buffers, but typically
grows to about 80-90 during very large, very fast transmissions.
I don't queue up multiple simultaneous socket receives; I know that
doing so could easily result in EndReceives firing out of order with
respect to the data stream. But I am obviously queueing up about 80-90
pending disk writes, and those SHOULD be queued in the order the data
was received, which I understand should then put them in the disk file
in the correct order. I suspect something about this part of the disk
writing is what is getting the data out of order.
Can you see what I am doing wrong? Or are there known bugs in the
Framework? Maybe I need a SyncLock that covers the entire segment of
code from the EndReceive through the BeginWrite?
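To be concrete, what I am considering is something like this (a sketch
of the possible fix, not what I have today; same illustrative names as
above):

' Serialize everything from EndReceive through BeginWrite, so no other
' completion thread can slip its BeginWrite in between mine and the
' writes reach the FileStream in arrival order.
Private mOrderLock As New Object()

Private Sub OnReceive(ByVal ar As IAsyncResult)
    Dim ctx As ReceiveContext = DirectCast(ar.AsyncState, ReceiveContext)
    SyncLock mOrderLock
        Dim bytesRead As Integer = mSocket.EndReceive(ar)
        Dim nextCtx As ReceiveContext = GetContextFromPool()
        mSocket.BeginReceive(nextCtx.Buffer, 0, nextCtx.Buffer.Length, _
                             SocketFlags.None, AddressOf OnReceive, nextCtx)
        mFileStream.BeginWrite(ctx.Buffer, 0, bytesRead, AddressOf OnWrite, ctx)
    End SyncLock
End Sub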