Bret Mulvey said:
psuedonym said:
In my day we did it like this ;-)
// Pack the four bytes into a UInt32, first byte in the most significant position.
UInt32 u = 0;
Byte[] b = new Byte[4] { 0x0A, 0x0B, 0x0C, 0x0D };
u = (UInt32)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0));
This is much faster than BitConverter.ToUInt32. Turning optimization on makes
it about five times faster, whereas BitConverter.ToUInt32 benefits only
slightly from the optimize switch.
This is very odd - I would have thought the above would be pretty much
exactly what BitConverter.ToUInt32 would do, and that it would all be
inlined. Is it parameter checking which is slowing it down, do you
think?
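
One way to check would be a quick timing loop along these lines - a minimal
sketch, with the iteration count and Stopwatch-based timing chosen arbitrarily
here rather than taken from the measurements above:

using System;
using System.Diagnostics;

class ConvertTiming
{
    static void Main()
    {
        Byte[] b = new Byte[4] { 0x0A, 0x0B, 0x0C, 0x0D };
        const int Iterations = 100000000;
        UInt32 u = 0;

        // Manual shifts: first byte ends up in the most significant position.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            u = (UInt32)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0));
        }
        sw.Stop();
        Console.WriteLine("Shifts:       {0} ms (0x{1:X8})", sw.ElapsedMilliseconds, u);

        // Library call: byte order depends on the platform's endianness.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            u = BitConverter.ToUInt32(b, 0);
        }
        sw.Stop();
        Console.WriteLine("BitConverter: {0} ms (0x{1:X8})", sw.ElapsedMilliseconds, u);
    }
}

Compiling that with and without /optimize should show how much of the gap is
down to the call overhead and argument checking rather than the shifts
themselves.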
BitConverter.ToUInt32 puts the first byte
in the least-significant position of the UInt32, whereas with your method
you can control it (you've done the reverse in your example).
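
To make the difference concrete, here is a small illustrative sketch (assuming
a little-endian machine such as x86, which is what BitConverter.IsLittleEndian
reports there):

using System;

class EndiannessDemo
{
    static void Main()
    {
        Byte[] b = new Byte[4] { 0x0A, 0x0B, 0x0C, 0x0D };

        // First byte into the most significant position (big-endian reading).
        UInt32 shifted = (UInt32)((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | (b[3] << 0));

        // First byte into whichever position the platform prefers;
        // on a little-endian machine that's the least significant byte.
        UInt32 converted = BitConverter.ToUInt32(b, 0);

        Console.WriteLine("IsLittleEndian: {0}", BitConverter.IsLittleEndian);
        Console.WriteLine("Shifts:       0x{0:X8}", shifted);   // 0x0A0B0C0D
        Console.WriteLine("BitConverter: 0x{0:X8}", converted); // 0x0D0C0B0A on little-endian
    }
}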
It's not even that simple - BitConverter.ToUInt32 will put the first
byte in whichever position the platform's native byte order prefers, so you
can't even depend on it behaving the same way across all platforms (if you're
considering non-Windows platforms in the first place, that is). It's always
surprised me that BitConverter doesn't allow you to *set* the
endianness, or that there aren't two separate classes,
LittleEndianBitConverter and BigEndianBitConverter.
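
Something along these lines would do it - a minimal sketch of the idea, with
the class name taken from the hypothetical pair above rather than from any
existing framework type:

using System;

// A converter where the caller chooses the byte order, so the result
// doesn't depend on the platform it runs on.
static class BigEndianBitConverter
{
    public static UInt32 ToUInt32(Byte[] value, int startIndex)
    {
        if (value == null)
            throw new ArgumentNullException("value");
        if (startIndex < 0 || startIndex > value.Length - 4)
            throw new ArgumentOutOfRangeException("startIndex");

        // First byte always goes into the most significant position.
        return (UInt32)((value[startIndex]     << 24) |
                        (value[startIndex + 1] << 16) |
                        (value[startIndex + 2] <<  8) |
                         value[startIndex + 3]);
    }
}

A LittleEndianBitConverter would be identical apart from reversing the order
of the shifts.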