What is wrong with BitArray.CopyTo()?

SpookyET

For some reason the method reverses the array, which is very annoying. If
I put in 00001000, which is 8, I get back 00010000, which is 16. If I put
in 00000100 (4), I get back 00100000 (32). This means that CopyTo is
useless, since I have to reverse it to get it right.

public static void Main()
{
    bool[] bools = new bool[8] { false, false, false, false, false, true,
                                 false, false };
    BitArray bits = new BitArray(bools);
    byte myByte = 0;
    byte[] myByteArray = new byte[bits.Count / 8];
    int index;

    // Build the byte MSB-first: the first element ends up as the
    // most significant bit.
    for (index = 0; index < bits.Length; index++)
    {
        myByte <<= 1;
        myByte |= Convert.ToByte(bits[index]);
    }

    // An uglier way.
    // for (index = 0; index < bits.Length; index++)
    // {
    //     myByte = (byte)((myByte << 1) | (bits[index] ? 1 : 0));
    // }

    bits.CopyTo(myByteArray, 0);
    Console.WriteLine(myByte);         // prints 4
    Console.WriteLine(myByteArray[0]); // prints 32
    Console.ReadLine();
}

Wouldn't it be better if there was a struct System.Bit, with a C# alias of
"bit", that accepts 0/1 and true/false?

bit myBit = 1;
byte myByte = 0;

myByte <<= 1;
myByte |= myBit;

This is much easier and cleaner, with no casting. I know about enums with
the [FlagsAttribute], but this has nothing to do with flags/options, for
those of you who might suggest that.
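
Just to illustrate, a rough sketch of what such a struct could look like
even without a language alias (the Bit type and all of its members here
are hypothetical, not an existing framework API):

// Hypothetical sketch only - no such type exists in the framework.
public struct Bit
{
    private readonly byte value; // always 0 or 1

    private Bit(byte value) { this.value = value; }

    // Accept 0/1 (any non-zero value becomes 1) and true/false.
    public static implicit operator Bit(int i)  { return new Bit((byte)(i != 0 ? 1 : 0)); }
    public static implicit operator Bit(bool b) { return new Bit((byte)(b ? 1 : 0)); }

    // Let a Bit take part in byte arithmetic without casts.
    public static implicit operator byte(Bit b) { return b.value; }
}

With that in place, "Bit myBit = 1; myByte |= myBit;" compiles with no
explicit casting, which is roughly the convenience I'm after.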
 
Hi,

If you look at your array of bools

bool[] bools = new bool[8] { false, false, false, false, false, true,
                             false, false };

The first element is element 0; this maps to bit 0 in the BitArray, the
second element, element 1, maps to bit 1 in the BitArray, and so on.
Because bits are conventionally written from right to left, this gives the
impression that the order is being reversed. In your case the element at
index 5 is true, therefore bit 5 is set: 2^5 = 32, which is the
correct/expected result given the input data.
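
To see the mapping concretely, here's a minimal complete version of that
(class name is mine):

using System;
using System.Collections;

public class MappingDemo
{
    public static void Main()
    {
        bool[] bools = { false, false, false, false, false, true, false, false };
        BitArray bits = new BitArray(bools);
        byte[] bytes = new byte[1];
        bits.CopyTo(bytes, 0);

        // Element 5 is true, so bit 5 is set: 2^5 = 32.
        Console.WriteLine(bytes[0]); // prints 32
    }
}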

Your loop:

for (index = 0; index < bits.Length; index++)
{
    myByte <<= 1;
    myByte |= Convert.ToByte(bits[index]);
}

is what causes the bit reversal from the natural order.

Hope this helps
 
Though I understand your point of view, it's just a very bizarre way to
handle them.

A couple of weeks back I had a byte stream on which I had to do bit
parsing. Using something like

byte[] b = new byte[2] { 5, 15 }; // this came from a file
BitArray ba = new BitArray(b);

to get the bits is just useless if you want to capture a sequence which
spans more than one byte (some encoding, e.g.), since you would have to
reverse the bits inside each 'byte' if you want to read them logically.

Unless I missed some function which takes care of that, I agree with
SpookyET that it's very annoying.
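
For what it's worth, a rough sketch of a helper that would take care of
it (ReadMsbFirst is my own invention, not a framework method); it reads
bits MSB-first across byte boundaries by flipping the bit index inside
each byte:

using System;
using System.Collections;

public static class BitReader
{
    // Read 'count' bits starting at MSB-first position 'start' from a
    // BitArray that was built over a byte[]. Hypothetical helper.
    public static int ReadMsbFirst(BitArray ba, int start, int count)
    {
        int result = 0;
        for (int i = 0; i < count; i++)
        {
            int pos = start + i;
            // BitArray numbers bits LSB-first within each byte, so flip
            // the bit index inside the byte to get MSB-first order.
            int index = (pos / 8) * 8 + (7 - pos % 8);
            result = (result << 1) | (ba[index] ? 1 : 0);
        }
        return result;
    }
}

With b = { 5, 15 }, ReadMsbFirst(ba, 4, 8) returns 80 (binary 01010000):
the low nibble of the first byte followed by the high nibble of the
second, read as one value.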

Yves

Chris Taylor said:
<snip>
 
phoenix said:
Though I understand your point of view, it's just a very bizarre way to
handle them.

A couple of weeks back I had a byte stream on which I had to do bit
parsing. Using something like

byte[] b = new byte[2] { 5, 15 }; // this came from a file
BitArray ba = new BitArray(b);

to get the bits is just useless if you want to capture a sequence which
spans more than one byte (some encoding, e.g.), since you would have to
reverse the bits inside each 'byte' if you want to read them logically.

No, I don't think so. Normally, bit 0 is the least significant bit, and indeed
that's the one which BitArray will use. For instance, {5, 15} can be
regarded as 5 being bits 0-7 (bits 0 and 2 being "on" and the rest
being off) and 15 being bits 8-15 (bits 8-11 being "on" and the rest
being off). That's exactly how I'd naturally expect it to work, and
that's what BitArray does.
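
A quick way to check that numbering (class name mine):

using System;
using System.Collections;

public class SetBitsDemo
{
    public static void Main()
    {
        BitArray ba = new BitArray(new byte[] { 5, 15 });
        for (int i = 0; i < ba.Length; i++)
        {
            if (ba[i])
            {
                Console.WriteLine("bit " + i); // prints 0, 2, 8, 9, 10, 11
            }
        }
    }
}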
 
CopyTo treats arrays of different types differently; look at the code with
a decompiler. Try my code and see for yourself.
By the way, it seems better to ask questions in this newsgroup than the C#
newsgroup because a lot of C# first-timers pollute the group with a ton of
questions.



phoenix said:
<snip>
 
SpookyET said:
CopyTo treats arrays of different types differently; look at the code with
a decompiler.

Well, it's *got* to treat arrays of different types differently,
obviously. It performs the way I'd expect it to in any case, though.
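
As far as I know, bool[], byte[], and int[] are the only targets CopyTo
accepts, and the bit numbering is the same for each; a quick sketch
(class name mine):

using System;
using System.Collections;

public class CopyToDemo
{
    public static void Main()
    {
        BitArray bits = new BitArray(
            new bool[] { false, false, false, false, false, true, false, false });

        byte[] asBytes = new byte[1];
        bits.CopyTo(asBytes, 0);
        Console.WriteLine(asBytes[0]);  // 32

        int[] asInts = new int[1];
        bits.CopyTo(asInts, 0);
        Console.WriteLine(asInts[0]);   // 32 - same numbering, wider element

        bool[] asBools = new bool[8];
        bits.CopyTo(asBools, 0);
        Console.WriteLine(asBools[5]);  // True - straight element copy
    }
}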
Try my code and see for yourself.

As I've said elsewhere, I've tried your code, and I'm convinced that
you just misunderstood what the BitArray constructor was doing.
By the way, it seems better to ask questions in this newsgroup

Which newsgroup do you mean? This thread is cross-posted.
than the C# newsgroup because a lot of C# first-timers pollute the
group with a ton of questions.

While I agree that the C# newsgroup isn't particularly appropriate for
this thread, I disagree with you about the reason. The real reason
(IMO) is that this thread is about the behaviour of a framework class,
not C# itself. I'd suggest only posting it on the .framework group
myself.
 
This is what you have to do to make BitArray give the value back right:
you have to put it in reversed. My guess is that it does the same for
ints. As you can see, if you put it in as an array of bytes, it comes out
right. It treats index 0 of a bool array as bit 7, and index 7 of the
array as bit 0. I don't understand this. Maybe there should be another
parameter in the constructor for the way the array should be treated,
forward/reverse. Maybe you can explain this to me, please?

public class Test
{
    public static void Main()
    {
        bool[] bools = new bool[8] { false, false, false, false, true,
                                     false, false, false };
        BitArray bits = new BitArray(bools.Length);
        byte[] myByteArray = new byte[bits.Count / 8];

        // Copy the bools in reverse order, so that the last element of
        // the array becomes bit 0 of the BitArray.
        int boolsIndex = bools.Length - 1;
        int bitsIndex = 0;

        while (boolsIndex >= 0)
        {
            bits[bitsIndex] = bools[boolsIndex];
            boolsIndex--;
            bitsIndex++;
        }

        bits.CopyTo(myByteArray, 0);
        Console.WriteLine(myByteArray[0]);

        // Round-trip through a byte array: comes back unchanged.
        bits = new BitArray(myByteArray);
        Array.Clear(myByteArray, 0, myByteArray.Length);
        bits.CopyTo(myByteArray, 0);
        Console.WriteLine(myByteArray[0]);

        Console.ReadLine();
    }
}
 
SpookyET said:
This is what you have to do to make BitArray give the value back right:
you have to put it in reversed.

No, you really *don't*, IMO. You're just constructing the BitArray in a
different way to how you think you are. Your example doesn't give 8, it
gives 16 - the bit pattern is 00010000 because bit 0 (the rightmost bit
in a byte) is the first element of the boolean array.
My guess is that it does the same for ints. As you can see, if you put it
in as an array of bytes, it comes out right. It treats index 0 of a bool
array as bit 7, and index 7 of the array as bit 0.

No, it's treating index 0 of the bool array as bit 0, i.e. the least
significant bit. You can verify that by printing out the bits after
constructing the bit array. For instance:

using System;
using System.Collections;

public class Test
{
    public static void Main()
    {
        bool[] bools = new bool[8] { false, false, false, false, true,
                                     false, false, false };
        BitArray bits = new BitArray(bools);
        for (int i = 0; i < bits.Length; i++)
        {
            Console.WriteLine(bits[i]);
        }
    }
}

prints out

False
False
False
False
True
False
False
False

which is the equivalent of constructing the bit array by passing in
new byte[]{16}.
I don't understand this. Maybe there should be another parameter in the
constructor for the way the array should be treated, forward/reverse.

Well, you could always call Array.Reverse instead...
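
For instance, something along these lines (class name mine):

using System;
using System.Collections;

public class ReverseDemo
{
    public static void Main()
    {
        // Written MSB-first, the way you'd read it on paper: 00001000 = 8.
        bool[] bools = { false, false, false, false, true, false, false, false };

        Array.Reverse(bools); // now element 0 is the least significant bit
        BitArray bits = new BitArray(bools);

        byte[] bytes = new byte[1];
        bits.CopyTo(bytes, 0);
        Console.WriteLine(bytes[0]); // prints 8
    }
}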
 
I understand what it does now. In my mind, I was reading the bits like on
paper, from right to left.

decimal   hex      bit pattern          bit number
8         0x0008   00000000 00001000    3
16        0x0010   00000000 00010000    4

This has got me confused. I'm not the only one: a search on Google gave
some results with the same question that I asked, "why does
BitArray.CopyTo() return the bits mirrored?", when in reality we are
inserting the bits reversed. I'm not sure why on paper bits are written
from right to left, since I do not know of any programming languages
written from right to left. Thanks for your help and your patience.

Jon Skeet said:
<snip>
 
SpookyET said:
I understand what it does now. In my mind, I was reading the bits like on
paper, from right to left.

No, you were reading them from left to right. If you'd been reading
them from right to left, all would have been well :)
decimal   hex      bit pattern          bit number
8         0x0008   00000000 00001000    3
16        0x0010   00000000 00010000    4

This has got me confused. I'm not the only one: a search on Google gave
some results with the same question that I asked, "why does
BitArray.CopyTo() return the bits mirrored?", when in reality we are
inserting the bits reversed. I'm not sure why on paper bits are written
from right to left, since I do not know of any programming languages
written from right to left. Thanks for your help and your patience.

It's not just binary - *all* bases work the same way. What's the least
significant digit of the number 123524? It's '4', right?

There are reasonable reasons to write things this way, but it's also
very reasonable to *number* the bits from right to left. For instance,
the number 1 in binary is bit 0 being set and all the other bits being
clear, however many bits you're using to represent it.

I can see why people might find it unintuitive to start with, but I
think it's entirely reasonable when you sit down and think about it.
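
In code terms (demo name mine), bit n always carries the value 1 << n,
however many bits wide the type is:

using System;

public class BitValueDemo
{
    public static void Main()
    {
        for (int n = 0; n < 8; n++)
        {
            // Bit n always has the value 2^n.
            Console.WriteLine("bit {0} = {1}", n, 1 << n); // 1, 2, 4, ..., 128
        }
    }
}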
 
There are reasonable reasons to write things this way, but it's also
very reasonable to *number* the bits from right to left. For instance,
the number 1 in binary is bit 0 being set and all the other bits being
clear, however many bits you're using to represent it.
<snip>
Jon,

While it might seem reasonable, I am sure you realise that bit
numbering from left to right or right to left is not standardised.

It varies depending on the CPU manufacturer. Within the Wintel space
we have to deal with the horrible little-endian convention, which
for values larger than an octet forces a weird mindset, i.e., the
convention for writing things down and their representation in
memory are two different things.

Regards, Oz
 
ozbear said:
While it might seem reasonable, I am sure you realise that bit
numbering from left to right or right to left is not standardised.

Um, yes it is.
It varies depending on the CPU manufacturer. Within the Wintel space
we have to deal with the horrible little-endian convention, which
for values larger than an octet forces a weird mindset, i.e., the
convention for writing things down and their representation in
memory are two different things.

Endianness doesn't concern the actual numbering of bits within a value.
A 16-bit value *always* has bit 15 as its most significant bit, and bit
0 as its least significant bit.

The order of the 2 bytes which make up those 16 bits in memory is
independent of the conceptual ordering of the bits themselves. On some
boxes they may be:

7/6/5/4/3/2/1/0 15/14/13/12/11/10/9/8

and on others

15/14/13/12/11/10/9/8 7/6/5/4/3/2/1/0

The significance of bit 0 is always 1, however, and the value of bit 15
is always 32768.
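
A little sketch of the distinction (class name mine) - a bit's
significance is part of the value, while byte order in memory is
platform-dependent:

using System;

public class EndianDemo
{
    public static void Main()
    {
        ushort value = 32768; // bit 15 set, all other bits clear

        // Bit numbering is about the value itself:
        Console.WriteLine((value >> 15) & 1); // always prints 1

        // Byte order in memory is a separate, platform-dependent matter:
        byte[] bytes = BitConverter.GetBytes(value);
        Console.WriteLine(BitConverter.IsLittleEndian
            ? "low byte first: " + bytes[0] + "," + bytes[1]
            : "high byte first: " + bytes[0] + "," + bytes[1]);
    }
}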
 
Um, yes it is.

You didn't quite read far enough...
Endianness doesn't concern the actual numbering of bits within a value.
A 16-bit value *always* has bit 15 as its most significant bit, and bit
0 as its least significant bit.

My endianness comment related to the difference between the
way you write things down on paper, or express them in
source code, and their representation in memory.

My point stands about bit numbering. Depending on the
manufacturer, bits might be numbered left-to-right or
right-to-left.

For example, the mainframes I deal with frequently number
the bits from 0->15, left to right, and they are big endian.
Motorola processors are big endian, but number their bits
in the reverse. And so on.

Oz
 
ozbear said:
You didn't quite read far enough...

I did, but I'm not sure I saw your point before...
My endianness comment related to the difference between the
way you write things down on paper, or express them in
source code, and their representation in memory.

My point stands about bit numbering. Depending on the
manufacturer, bits might be numbered left-to-right or
right-to-left.

For example, the mainframes I deal with frequently number
the bits from 0->15, left to right, and they are big endian.
Motorola processors are big endian, but number their bits
in the reverse. And so on.

Hmm... that's certainly something I haven't come across. Are you saying
that they treat bit 0 as anything other than the least significant bit?
If so, do you have any references to that?

However things are actually stored in memory, I've never heard of bit 0
being anything other than the least significant bit.
 
Jon Skeet said:
Hmm... that's certainly something I haven't come across. Are you saying
that they treat bit 0 as anything other than the least significant bit?
If so, do you have any references to that?

However things are actually stored in memory, I've never heard of bit 0
being anything other than the least significant bit.

And one more point (I should have said this before): however values are
stored in memory, I've never seen them written on paper (other than as
memory dumps) in any way other than the natural one which follows our
normal decimal system, i.e. bit 0 at the rightmost end, etc.

So even if the number 256 is stored in a bizarre way in memory, unless
I'm *specifically* writing that memory order down, I'd always write it
as

0000000100000000 (with or without the leading zeros)
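
In C# terms, for instance (demo name mine):

using System;

public class BinaryStringDemo
{
    public static void Main()
    {
        // Convert.ToString(value, 2) writes the bits the natural way,
        // most significant digit first, whatever the machine's byte order.
        Console.WriteLine(Convert.ToString(256, 2).PadLeft(16, '0'));
        // prints 0000000100000000
    }
}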
 
Hi,

Having worked with a number of processor architectures, I have yet to come
across an architecture where the numbering of bits is reversed. As Jon has
pointed out, endianness determines byte order, *NOT* bit order.

Numbers, at least when using Arabic numerals and/or Latin characters,
are always written on paper with the rightmost digit representing the
least significant order.
 
And one more point (I should have said this before): however values are
stored in memory, I've never seen them written on paper (other than as
memory dumps) in any way other than the natural one which follows our
normal decimal system, i.e. bit 0 at the rightmost end, etc.

Bit numbering from bit 0 on the left to bit 15/32/etc on the right is
common in mainframes, specifically the way IBM and HP/Compaq/Tandem
number their bits (and these machines are not uncommon, handling
most of the world's transactions one way or the other).

See:
http://www-3.ibm.com/chips/techlib/...CF387256B1B0052FFC9/$file/ECC_appnote_1_0.pdf
Page 3 and (watch out for url-wrap)
http://www-1.ibm.com/servers/eserver/zseries/os/linux/pdf/l390ABI0.pdf
Chapter 1 (Low-level system interface), page 3, byte ordering
http://h30163.www3.hp.com/ (NonStop Technical Library) Himalaya
S-Series Server Description Manual, Chapter 3 - TNS Data Formats and
Number Representations, Page 3-5.

So even if the number 256 is stored in a bizarre way in memory, unless
I'm *specifically* writing that memory order down, I'd always write it
as

0000000100000000 (with or without the leading zeros)

Indeed, but it is the compiler doing the translation of the external
format to an internal one, which can be quite different from
the processor's view of bit numbers.

Oz
 
ozbear said:
Bit numbering from bit 0 on the left to bit 15/32/etc on the right is
common in mainframes, specifically the way IBM and HP/Compaq/Tandem
number their bits (and these machines are not uncommon, handling
most of the world's transactions one way or the other).

Cheers. Very odd.
Indeed, but it is the compiler doing the translation of the external
format to an internal one, which can be quite different from
the processor's view of bit numbers.

But that's irrelevant to the original question, which concerns the
behaviour of BitArray. When constructed with an array of bools, BitArray
takes the elements in order, with the first one being bit 0, etc.

From there, converting it to a byte array deals with it in the natural
way, IMO. So, while bit numbering within internals may not be
standardised, I think it's fair to say that bit numbering of numbers
themselves is standardised, just as it's standardised for decimal:
twelve is never written "21" in decimal, just as it's never written 011
in binary without specific reference to a given architecture.
 
Hi Oz,

I am a long time out of that, but wasn't it the case that representing a
decimal value is done differently on mainframes than on a microprocessor?

I have worked with Burroughs computers; they used 4 bits per digit.

But I have also worked with IBM mainframes, which have more formats for
representing a value, as far as I remember.

(The Burroughs I am sure of, because it was great: there was a processor
which internally converted digits to decimals, so you were always working
with decimals, and it also had hardware instructions with 3 addresses, so
you had instructions like multiply a, b, giving c.)

Cor
 
Hi Jon,

Are you not from England?

That country where they ride on the most significant part of the road.

This time I am not arguing with you (I agree with the message from Chris,
which I see means I agree with you too).

But I had fun with that.

Cor
 