If you try sending from both computers at once you get packet collisions and
nothing gets through.
That's utter drivel.
Vernon Schryver (e-mail address removed)
shope said: Try sending something easy the other way during the transfer - ping, maybe?
If the sending card can't handle inbound packets during transmit then there
may be something wrong with the driver or the IP stack. Or maybe your send
routine doesn't release any CPU?
The other possibility is that the software chokes on such a large UDP packet
(I seem to remember some issues with packets bigger than 32K - 1 bytes, to do
with 16-bit arithmetic) - try some smaller sizes if you can tune it.
If the cards are pretty old then they are probably half duplex. If you want
to force this then put an Ethernet repeater / 10M only hub between the 2
PCs.
The sending Ethernet card and driver should impose the minimum Ethernet
inter-packet gap on the transmission - 96 bit-times, around 9.6 microseconds at 10 Mbit/s.
It isn't the gap so much as "reset time" in the software driver. Some old
cards (3Com 3c501) had to have the controller chip reset each time they received
or sent a packet - if the next one arrived during the "dead time" then it
was lost.
This is difficult to achieve in some hardware, and not required at all.
Indeed... I didn't know about gaps and transmission times... His remark
about the gap sent me in the right direction.
I now believe 10 Mbit Ethernet is 10,000,000 bits/sec. Each bit takes 100
nanoseconds to send.
So if one has to send 64,000 bytes... and one wants to receive 64,000 bytes
as well... and the cards are half duplex...
Then the calculation is as follows:
64,000 bytes * 8 bits * 100 nanoseconds / (1000 * 1000) = 51.2 milliseconds...
This is a rough estimate...
The idea is my software should no longer try to keep sending, sending,
sending... but it should wait a little bit so it can receive 64 KB.
So the idea is to wait 51.2 milliseconds or maybe even 102.4 milliseconds...
to allow the card and the stack to receive a packet.
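As a sanity check on that arithmetic, here is a small sketch (the 10 Mbit/s rate and 64,000-byte figure are the numbers from the post; framing overhead such as preamble, headers and interframe gaps is ignored, so real transfers take slightly longer):

```python
# Time to clock a payload onto 10 Mbit Ethernet at 100 ns per bit,
# ignoring preamble, headers and interframe gaps.
def wire_time_ms(payload_bytes: int) -> float:
    """Milliseconds needed to transmit payload_bytes at 10 Mbit/s."""
    return payload_bytes * 8 * 100 / 1_000_000  # bits * 100 ns, ns -> ms

print(wire_time_ms(64_000))  # -> 51.2, matching the estimate above
```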
That's utter drivel.
Vernon Schryver (e-mail address removed)
Correct, but impolite.
Ethernet controllers are well able to cope with two people wanting to send
at the same time. They will not begin to send if another machine is
already sending - they wait. And if 2 machines start to send at exactly
the same time, they detect this, back off and try again after a random
delay. Same way as people talking in a group of friends.
You can attempt to send as fast as you
like - unless you have a faulty Ethernet adapter, you will not fully block
everyone else who is trying to send.
Steve Horsley said: Ethernet hardware imposes a gap between frames with a slight randomising,
so that after a frame is finished, everyone has a roughly equal chance of
being able to send the next frame.
AFAIK, there is a random delay after a collision, in addition to the
interframe gap (96 bit-times?). Transmission is delayed until the end of a
"passing" frame, if there is one, but then no longer than the interframe gap
requires - no random element. This way gives better performance under light
loads, because there's a good chance of there being only one transmitter.
But under heavy loads, a random delay (as you described) is preferable; a
collision is virtually guaranteed.
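The random delay after a collision being discussed here is truncated binary exponential backoff. A minimal sketch of how a station picks its delay after the nth successive collision (the 16-attempt limit and the cap at 2^10 slots are the standard CSMA/CD parameters; `backoff_slots` is a hypothetical helper name):

```python
import random

SLOT_TIME_US = 51.2  # one slot = 512 bit-times at 10 Mbit/s

def backoff_slots(attempt: int) -> int:
    """Random backoff (in slot times) after the attempt-th successive
    collision. The window doubles each attempt, capped at 2**10 slots;
    a station gives up after 16 attempts."""
    if attempt > 16:
        raise RuntimeError("excessive collisions, frame dropped")
    k = min(attempt, 10)
    return random.randrange(2 ** k)  # 0 .. 2**k - 1 slots

# After the first collision a station waits either 0 or 1 slot times,
# which is why two stations usually separate on the very next try.
assert backoff_slots(1) in (0, 1)
```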
Skybuck Flying said: maybe?
I am not sure, but it seems that fails as well.
Could be that winsock tries to send the 64 KB in packets of 1500 bytes and
forgets to receive as well.
Skybuck Flying said: Well, I wrote a simple program to test this idea.
It sends and receives a 64,000-byte packet with winsock...
On the Pentium III 450 MHz it takes around 3200 microseconds to send it.
On the Pentium III 450 MHz it takes around 1500 microseconds to receive it
( from the Pentium I 166 )
On the Pentium I 166 MHz it takes around 16000 microseconds to send it.
On the Pentium I 166 MHz it takes around 10343 microseconds to receive it.
( from the Pentium III 450 )
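Assuming those timings are wall-clock measurements of the send call, they also show that send() returns before the data is actually on the wire: 64,000 bytes need about 51,200 microseconds of wire time at 10 Mbit/s, yet the call returned in roughly 3200 microseconds. A quick check of that arithmetic:

```python
# Wire time for 64,000 bytes at 100 ns per bit (10 Mbit/s), in microseconds,
# versus the measured duration of the send() call on the Pentium III 450.
wire_time_us = 64_000 * 8 * 100 // 1000    # 51,200,000 ns -> 51,200 us
measured_send_us = 3200                    # timing quoted above

# send() returned ~16x faster than the frame could have been clocked out,
# so the call completes once the data is buffered, not when it is sent.
print(wire_time_us / measured_send_us)     # -> 16.0
```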
Vernon Schryver said: AFAIK, there is a random delay after a collision, in addition to the
interframe gap (96 bit-times?). Transmission is delayed until the end of
a "passing" frame, if there is one, but then no longer than the
interframe gap requires - no random element. This way gives better
performance under light loads, because there's a good chance of there
being only one transmitter. But under heavy loads, a random delay (as
you described) is preferable; a collision is virtually guaranteed.
No, there is not and should not be a random delay between back-to-back
transmissions except when the MAC is too slow to keep up. [...]
Such a random delay would do no good unless it were on the order of
a slot time or 64 bytes. [...] A random delay shorter than a slot time
would not be long enough to ensure that second station would get a chance
to start transmitting before the transmitter of the previous packet.
Standards conformant CSMA/CD systems start transmitting their next
packet immediately after their previous packet.
shope said: I believe you are thinking of the required minimum packet size
supported - 512 bytes of UDP data, which with overhead is 550+ bytes.
If you stick a sniffer on any Ethernet running M$soft networking or NFS you
will see plenty of frames of 1500 bytes carrying UDP, and many of those will
be fragments of bigger UDP packets. 64K is a bit unusual tho.
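To make the fragment count concrete, a sketch assuming a 1500-byte MTU and a 20-byte IP header with no options (`ip_fragments` is a hypothetical helper, not a real API):

```python
import math

IP_HEADER = 20   # bytes, assuming no IP options
UDP_HEADER = 8   # bytes; rides in the first fragment only
MTU = 1500       # typical Ethernet MTU

def ip_fragments(udp_payload: int) -> int:
    """Number of IP fragments carrying one UDP datagram. Each fragment
    holds MTU - IP_HEADER = 1480 bytes of data, which is conveniently
    a multiple of 8 as IP fragmentation requires."""
    total = udp_payload + UDP_HEADER
    return math.ceil(total / (MTU - IP_HEADER))

print(ip_fragments(64_000))  # -> 44 frames on the wire for one datagram
```

Losing any one of those 44 frames discards the whole datagram, which is the risk discussed further down the thread.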
If your application is
send....
I wish I could use large udp packets over the internet always
Sin said: Yep, that's a known fact. Happens with TCP as well. There can be several
packets and ACKs going around once you get out of send. Use a sniffer and
you'll see it right away.
Sin said: Here are a couple of hints concerning your "problem":
- UDP packets of 64K are not supported by all TCP/IP stacks. I work with QNX
at work and QNX is limited to 8K packets. QNX's stack is a port from another
Unix-like system if memory serves right, so I suspect other OSes might have
this limitation.
- There should be just a marginal difference as far as speed goes between
sending a 64K packet and a bunch of ~1400 bytes packets. None of the
"physical" packets will be greater than the MTU which is usually around 1500
bytes anyway.
- If you send a 64K datagram, it is segmented in several packets. If one of
these is corrupt or lost, the whole 64K transmission is compromised. By
sending small amounts at a time, you run the same loss/corruption risk, but
it gives you a chance to request only the part that is corrupt by using
checksums or other validation methods.
- If corruption and loss are a problem, UDP is NOT a good choice to begin
with.
- Making UDP reliable AND performant is not a task you want to get into.
Been there, done that. You simply do not have enough control at the
application level to conceive a generic solution.
- UDP across the internet runs a HIGH risk of packet loss. Routers are
notorious for breaking UDP communications.
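The chunk-with-sequence-numbers idea from the hints above can be sketched without any real sockets (the 4-byte sequence prefix and 1400-byte chunk size are illustrative choices, and retransmission itself is left out):

```python
import struct

CHUNK = 1400  # keep each datagram under a 1500-byte MTU after headers

def split_into_chunks(data: bytes):
    """Prefix each chunk with a 4-byte big-endian sequence number so the
    receiver can tell exactly which pieces went missing."""
    return [struct.pack("!I", seq) + data[off:off + CHUNK]
            for seq, off in enumerate(range(0, len(data), CHUNK))]

def reassemble(packets):
    """Order the chunks that arrived and report the sequence gaps.
    A real receiver would re-request the missing chunks before joining;
    it would also need the total chunk count up front, since a lost
    final chunk is invisible to this max()-based check."""
    by_seq = {struct.unpack("!I", p[:4])[0]: p[4:] for p in packets}
    missing = set(range(max(by_seq) + 1)) - set(by_seq)
    data = b"".join(by_seq[i] for i in sorted(by_seq))
    return data, missing

payload = bytes(range(256)) * 250      # a 64,000-byte buffer
packets = split_into_chunks(payload)   # -> 46 datagrams
arrived = packets[:7] + packets[8:]    # simulate the network losing one
data, missing = reassemble(arrived)
assert missing == {7}                  # only chunk 7 needs re-requesting
```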
Perhaps if you told us about your application and what you're really trying
to achieve we might be able to help you better? The little I know about your
project makes me think UDP is definitely not a good option.
when it is really sent or if it will also return earlier.