Kenny said:
Hello All - have a problem that I'm pulling my hair out on. I'm trying to
take a hex value and convert it to 16-bit big-endian format. I can easily
get it to little-endian format with Convert.ToUInt16(&H600) - works like a
champ. But there is no Convert to UInt16 big-endian. Am I missing a
conversion function somewhere, or do I have to do some shifting and
bit-masking to make this work?
Any routine that converts endianness converts it both ways, since it is just
a byte swap, so you can use the same method for both directions.
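For a 16-bit value the swap is just a shift-and-OR. A minimal C# sketch (the method name Swap is my own, not a framework call):

```
// Swap the two bytes of a 16-bit value. The operation is its own
// inverse, so the same call converts in either direction.
static ushort Swap(ushort value)
{
    return (ushort)((value << 8) | (value >> 8));
}
// Swap(0x0600) yields 0x0006.
```

The casts matter: C# promotes ushort operands to int before shifting, and the cast back to ushort discards the bits shifted past the top byte.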
The "proper" way to do endianness is to have machine-specific methods that
either swap or don't, depending on the architecture of the machine on which
they are running. In this scenario you don't have to know your architecture
or even think - you just call the method corresponding to the
incoming/outgoing order on all values as they enter and leave your app.
e.g. if my external interface is defined as big-endian I write:
internalVal = BigEndian(externalVal); // incoming
externalVal = BigEndian(internalVal); // outgoing
and if little-endian I write:
internalVal = LittleEndian(externalVal); // incoming
externalVal = LittleEndian(internalVal); // outgoing
For any given architecture, one of these methods will do nothing and the
other will swap bytes, but I don't have to know or care which is which.
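In C# you can get that call-site discipline with a runtime check rather than conditional compilation: BitConverter.IsLittleEndian reports the host's byte order. A sketch, assuming the Swap helper and the post's BigEndian/LittleEndian naming convention:

```
using System;

static class Endian
{
    static ushort Swap(ushort v)
    {
        return (ushort)((v << 8) | (v >> 8));
    }

    // Converts between host order and big-endian external order.
    // On a big-endian host this is a no-op.
    public static ushort BigEndian(ushort v)
    {
        return BitConverter.IsLittleEndian ? Swap(v) : v;
    }

    // Converts between host order and little-endian external order.
    // On a little-endian host this is a no-op.
    public static ushort LittleEndian(ushort v)
    {
        return BitConverter.IsLittleEndian ? v : Swap(v);
    }
}
```

The framework also has System.Net.IPAddress.HostToNetworkOrder/NetworkToHostOrder, but only for the signed types short, int and long.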
As far as I know no major system does it this way, but I came across it as
macros in QNX/Momentics when I was writing embedded C++, and it certainly
makes life much simpler. Of course, as macros there is no cost for the
no-op method, which is not the case with C#. You could get the same effect
by using something like:
[Conditional("BigEndian")]
void LittleEndian(ref int i) { i = Swap(i); }
[Conditional("LittleEndian")]
void BigEndian(ref int i) { i = Swap(i); }
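To spell out how those [Conditional] methods would be wired up: the call sites are compiled in only when the named symbol is defined, so each build defines the symbol for its own architecture (via #define or the compiler's /define option). A hedged sketch - the class and symbol names are just for illustration:

```
#define LittleEndian  // this build targets a little-endian host

using System;
using System.Diagnostics;

class Protocol
{
    static ushort Swap(ushort v)
    {
        return (ushort)((v << 8) | (v >> 8));
    }

    // Call sites survive only when "BigEndian" is defined, so only a
    // big-endian host pays to swap little-endian external data.
    [Conditional("BigEndian")]
    static void LittleEndian(ref ushort v) { v = Swap(v); }

    // Call sites survive only when "LittleEndian" is defined.
    [Conditional("LittleEndian")]
    static void BigEndian(ref ushort v) { v = Swap(v); }

    static void Main()
    {
        ushort external = 0x0600;   // big-endian value off the wire
        BigEndian(ref external);    // swapped here, since this build defines LittleEndian
        Console.WriteLine("0x{0:X4}", external);
    }
}
```

Note that [Conditional] methods must return void (a ref parameter is fine, an out parameter is not), which is why the snippets above mutate their argument instead of returning a value.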
Of course I appreciate that all the readers of this thread are so embedded
in the MS/Intel world that this is a non-issue, but personally I keep
forgetting, and I like the elegance of this approach.