Magnus Persson said:
The byte order is not wrong, since Windows is little-endian. If you need to
control the byte order, you can stream it byte per byte yourself. That way
you can store integers in big-endian instead. But why? If you use a
BinaryReader to read integers, you will receive the same value as was stored
with the BinaryWriter. I can see the problem if you write data on a
big-endian system and read the data on a little-endian system, but that can
all be solved by adding a header containing information about the endianness.
If, of course, you have any say in the format. It's quite possible that's
been determined by parties well outside of the OP's control.
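
For what it's worth, here is a rough C# sketch of what I mean by streaming it
byte per byte. The one-byte endianness header at the front is just a
convention I made up for illustration, not any standard:

using System;
using System.IO;

class EndianWriterDemo
{
    // Write a 32-bit integer in big-endian order, one byte at a time,
    // instead of letting BinaryWriter pick the byte order for you.
    static void WriteInt32BigEndian(Stream stream, int value)
    {
        stream.WriteByte((byte)(value >> 24));
        stream.WriteByte((byte)(value >> 16));
        stream.WriteByte((byte)(value >> 8));
        stream.WriteByte((byte)value);
    }

    // Read it back the same way, reassembling the bytes in the same order.
    static int ReadInt32BigEndian(Stream stream)
    {
        int b0 = stream.ReadByte();
        int b1 = stream.ReadByte();
        int b2 = stream.ReadByte();
        int b3 = stream.ReadByte();
        return (b0 << 24) | (b1 << 16) | (b2 << 8) | b3;
    }

    static void Main()
    {
        using (MemoryStream ms = new MemoryStream())
        {
            // Hypothetical header convention: 1 = big endian, 0 = little endian.
            ms.WriteByte(1);
            WriteInt32BigEndian(ms, 0x12345678);

            ms.Position = 0;
            bool bigEndian = ms.ReadByte() == 1;
            int value = ReadInt32BigEndian(ms);
            Console.WriteLine("{0} 0x{1:X8}", bigEndian, value); // True 0x12345678
        }
    }
}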
Also, Intel chips are little endian, and Windows and native programs, by
extension, usually use little-endian integers natively. The .NET CLR follows
the host byte order, and BinaryWriter/BinaryReader always use little endian,
which is frustrating at times for binary interoperability with legacy code
that expects big-endian data. The JIT'er has nothing to convert for code
execution on Intel hardware, but that doesn't help when you need to write
big-endian binary data.
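
You can see for yourself what BinaryWriter actually produces, and the
framework already has a network-order conversion for when the other side
expects big endian. A small sketch:

using System;
using System.IO;
using System.Net;

class ByteOrderCheck
{
    static void Main()
    {
        using (MemoryStream ms = new MemoryStream())
        {
            // BinaryWriter always writes little endian, whatever the platform.
            new BinaryWriter(ms).Write(0x12345678);
            Console.WriteLine(BitConverter.ToString(ms.ToArray())); // 78-56-34-12
        }

        // IPAddress.HostToNetworkOrder swaps to big endian (network order)
        // on a little-endian host, which helps with big-endian legacy formats.
        int swapped = IPAddress.HostToNetworkOrder(0x12345678);
        Console.WriteLine("0x{0:X8}", swapped); // 0x78563412 on Intel
    }
}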
On other processors, the endianness (that is the proper term, I believe) can
change. Assuming you get Rotor or Mono to work on the Mac, you are on a
PowerPC processor that, I think, can run in either mode. However, Mac OS uses
big endian, and native code for that platform is more likely to expect
big-endian integers. I'm not sure on all these points; someone correct me if
I'm mistaken, please.
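
If you need to know at runtime which byte order the machine you are on uses,
the framework can tell you. A small sketch:

using System;

class WhichEndian
{
    static void Main()
    {
        // BitConverter.IsLittleEndian reports the byte order of the
        // architecture the CLR is currently running on.
        Console.WriteLine(BitConverter.IsLittleEndian
            ? "little endian (e.g. Intel)"
            : "big endian (e.g. a PowerPC Mac)");

        // The same fact shown directly: the in-memory layout of the value 1.
        Console.WriteLine(BitConverter.ToString(BitConverter.GetBytes(1)));
        // Prints 01-00-00-00 on a little-endian machine.
    }
}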
It's something we all have to get used to sometimes...