Optimization Help Needed

Cool Guy

How could I optimize the following method, whose purpose is to concatenate
two byte arrays?

public static byte[] ConcatBytes(byte[] a, byte[] b)
{
    byte[] result = new byte[a.Length + b.Length];
    uint i;
    for (i = 0; i < a.Length; i++)
    {
        result[i] = a[i];
    }
    for (i = 0; i < b.Length; i++)
    {
        result[i + a.Length] = b[i];
    }
    return result;
}

You see, my program passes two very large byte arrays (perhaps in the hundreds
of megabytes) to this method, and in that case it takes MINUTES to run.
 
try this:

a.CopyTo(result, 0);
b.CopyTo(result, a.Length);

I'm not sure how fast that is. But nevertheless, you can give it a try.

hope that helps a bit..
Imran.
 
Imran said:
try this:

a.CopyTo(result, 0);
b.CopyTo(result, a.Length);

I'm not sure how fast that is. But nevertheless, you can give it a try.

It's a much nicer method, one which I'll keep in mind for the future, but
it seems to run at about the same speed as my method. I still need
something much faster.

Thanks for the idea though.
 
At least it was fewer lines of code ;-)
I assume the CopyTo method probably iterates through the array items, which
is why it doesn't give us any speed improvement.
I think the fastest way to do this would be to copy blocks of memory
directly, which would avoid the iteration altogether.

Here's something I cooked up in VB.NET (sorry - I'm a VB guy - shouldn't be
hard to convert to C# though)

Imports System.Runtime.InteropServices

Private Function FastCopy(ByVal a() As Byte, _
                          ByVal b() As Byte) As Byte()
    Dim c(a.Length + b.Length - 1) As Byte
    ' Allocate one unmanaged block large enough to hold both arrays.
    Dim ptr As IntPtr = Marshal.AllocHGlobal(a.Length + b.Length)
    Try
        Marshal.Copy(a, 0, ptr, a.Length)
        ' Offset the pointer past the first array before copying the second.
        Dim ptrnew As New IntPtr(ptr.ToInt64() + a.Length)
        Marshal.Copy(b, 0, ptrnew, b.Length)
        ' Copy the combined block back into the managed result array.
        Marshal.Copy(ptr, c, 0, c.Length)
    Finally
        ' Free the unmanaged block so it doesn't leak.
        Marshal.FreeHGlobal(ptr)
    End Try
    Return c
End Function

give this a try and let me know if it helps..
Imran.
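
(For anyone who'd rather stay in C#, here is a rough translation of the same
Marshal-based idea; treat it as a sketch only, not tested code.)

using System;
using System.Runtime.InteropServices;

static byte[] FastCopy(byte[] a, byte[] b)
{
    byte[] c = new byte[a.Length + b.Length];
    // Allocate one unmanaged block large enough to hold both arrays.
    IntPtr ptr = Marshal.AllocHGlobal(a.Length + b.Length);
    try
    {
        Marshal.Copy(a, 0, ptr, a.Length);
        // Offset the pointer past the first array before copying the second.
        IntPtr ptrNew = new IntPtr(ptr.ToInt64() + a.Length);
        Marshal.Copy(b, 0, ptrNew, b.Length);
        // Copy the combined block back into the managed result array.
        Marshal.Copy(ptr, c, 0, c.Length);
    }
    finally
    {
        Marshal.FreeHGlobal(ptr);
    }
    return c;
}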
 
I'm surprised it's not faster; in my own tests Array.Copy is much faster than
iterating and copying each byte manually. Something else I had noted is that
copying the arrays as 32-bit integers in unsafe code is slightly faster than
copying bytes one at a time, but it requires more code. I use that method to
compare byte arrays in my "MemCmp" class, because there is no built-in way of
doing that.
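
(Just to illustrate that idea, an unsafe 32-bit copy loop might look roughly
like this; it is only a sketch, not the actual MemCmp code, and it needs to be
compiled with /unsafe.)

static unsafe void UnsafeCopy(byte[] src, byte[] dst)
{
    // Assumes dst is at least src.Length bytes long.
    fixed (byte* pSrc = src, pDst = dst)
    {
        int* ps = (int*)pSrc;
        int* pd = (int*)pDst;
        int intCount = src.Length / 4;

        // Copy the bulk of the data four bytes at a time.
        for (int i = 0; i < intCount; i++)
        {
            pd[i] = ps[i];
        }

        // Copy any remaining 0-3 bytes individually.
        for (int i = intCount * 4; i < src.Length; i++)
        {
            pDst[i] = pSrc[i];
        }
    }
}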

Etienne Boucher
 
Cool Guy said:
It's a much nicer method, one which I'll keep in mind for the future, but
it seems to run at about the same speed as my method. I still need
something much faster.

Thanks for the idea though.

You might want to try Buffer.BlockCopy. Are you sure the memory isn't
ending up being paged? I wouldn't have thought it would take minutes
with the code you posted - although I haven't tried it myself.
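
(For what it's worth, a Buffer.BlockCopy version of the concatenation might
look something like this; the offsets and counts are in bytes.)

public static byte[] ConcatBytes(byte[] a, byte[] b)
{
    byte[] result = new byte[a.Length + b.Length];
    // Copy raw bytes rather than iterating element by element.
    Buffer.BlockCopy(a, 0, result, 0, a.Length);
    Buffer.BlockCopy(b, 0, result, a.Length, b.Length);
    return result;
}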
 
Jon said:
Are you sure the memory isn't ending up being paged?

In fact, I think you're right. Last night it suddenly dawned on me that I
was doing something REALLY stupid in reading the whole file into memory. I
mean, after allocating all that memory, my computer would slow to a crawl.
LOL.

It's all so clear now.

I think everything'll be okay when I redesign this so that it reads part of
the file at a time.

Thanks for all the replies!
 
In one of your other postings, you mention that you are reading entire files
into byte arrays. In this posting, you are asking about concatenating byte
arrays. Is it fair to assume that you are concatenating files here? If so,
you will get a much greater speed improvement if you can postpone the
append, and simply write both arrays to the output file!

E.g., instead of:
a) read file 1 into array 1
b) read file 2 into array 2
c) copy array 1 and array 2 into array 3
d) write array 3

simply do this:
a) open output file
b) open input file 1
c) copy 100,000 bytes at a time from file 1 to output file
d) open input file 2
e) copy 100,000 bytes at a time from file 2 to output file
f) close file 1 and file 2
g) close output file

This is CONSIDERABLY faster since you won't have to allocate all that memory
and garbage collect it.
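
(A minimal C# sketch of that chunked copy, with illustrative file names:)

using System.IO;

static void ConcatFiles(string input1, string input2, string output)
{
    // 100,000-byte buffer, reused for both input files.
    byte[] buffer = new byte[100000];
    using (FileStream outStream = File.Create(output))
    {
        foreach (string path in new string[] { input1, input2 })
        {
            using (FileStream inStream = File.OpenRead(path))
            {
                int read;
                // Read returns 0 once the end of the file is reached.
                while ((read = inStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    outStream.Write(buffer, 0, read);
                }
            }
        }
    }
}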

--- Nick
 