Family said:
Tony Johansson said:
[...]
For example looping through the array and summing all the values in each
index.
The looping should not matter, but the sum would rarely fit in a byte
result; that is, the sum of many bytes will quickly exceed a byte's range.
Therefore this would likely involve casting to ints, which would actually
take longer.
In native machine code? Probably not. Reading a single byte into a
register versus reading a 32-bit int into a register is basically the
same. Because of the static typing in C# and the specific types involved,
there's no real cast here (i.e. the run-time doesn't have to do a check;
the data is just copied from one place to another).
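To make that concrete, here's a rough sketch of the two loops in question
(the array names and sizes are just placeholders). Note that the compound
assignment implicitly widens each byte to int, so there's no checked cast,
just a plain widening load:

    // Summing a byte[]: the += widens each byte to int implicitly.
    // No run-time check occurs; the value is just zero-extended.
    byte[] byteData = new byte[1000];      // placeholder data
    int byteSum = 0;
    for (int i = 0; i < byteData.Length; i++)
    {
        byteSum += byteData[i];            // byte -> int widening, then add
    }

    // Summing an int[]: the same loop, but four bytes read per element.
    int[] intData = new int[1000];
    int intSum = 0;
    for (int i = 0; i < intData.Length; i++)
    {
        intSum += intData[i];
    }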
IMHO, the correct answer is "who cares?"
The next correct answer is
"the only correct way to know is to write both and measure the performance
of each".
But if we're going to speculate, my expectation is that caching and
virtual memory effects will swamp any other performance consideration. And
in that case, given N array elements, an array of bytes takes 75% less
room (one byte per element instead of four), so a sequential pass touches
roughly a quarter as many cache lines and pages. The byte array _could_
in fact perform better.
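If anyone does want to measure it, a minimal sketch might look something
like the following (the array size is arbitrary, and a real test would
want a Release build, warm-up runs, and several repetitions):

    using System;
    using System.Diagnostics;

    class SumBenchmark
    {
        // 16M elements: ~16 MB as a byte[], ~64 MB as an int[].
        const int N = 16 * 1024 * 1024;

        static void Main()
        {
            byte[] bytes = new byte[N];
            int[] ints = new int[N];

            Stopwatch sw = Stopwatch.StartNew();
            long byteSum = 0;
            for (int i = 0; i < bytes.Length; i++)
                byteSum += bytes[i];
            sw.Stop();
            Console.WriteLine("byte[] sum: {0} in {1} ms",
                byteSum, sw.ElapsedMilliseconds);

            sw.Reset();
            sw.Start();
            long intSum = 0;
            for (int i = 0; i < ints.Length; i++)
                intSum += ints[i];
            sw.Stop();
            Console.WriteLine("int[] sum:  {0} in {1} ms",
                intSum, sw.ElapsedMilliseconds);
        }
    }

Which one wins will depend on the machine, the JIT, and how warm the
cache is, which is exactly why measuring beats guessing.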
In reality, unless we're talking about data that naturally fits in a byte
array, is for some reason extremely large, and will be processed on a
32-bit system - and so there are non-performance reasons to go with the
byte[] versus uint[] or int[] - the code ought to just be written with
uint[] or int[], until such time as it's been proven that there is some
performance bottleneck that can be addressed by using byte[].
Pete