First, let me start by pointing out that where you take the most
significant performance hit depends on the platform. For example, on
my Athlon64 3500+ machine the emulator takes 10 sec to loop through your
code.
If we enlarge the FileStream buffer size by declaring it like this:

FileStream MyFile = new FileStream("MyFile", FileMode.Create,
    FileAccess.Write, FileShare.None, 65536 * 4);

we instantly get it down to 1200 msec.
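For context, here is a minimal, self-contained sketch of that larger buffer
plugged into the kind of loop from the quoted post below (the class wrapper
and the using blocks are my additions; the literal values come from the
quoted test code):

using System;
using System.IO;

class BufferedWriteSample
{
    static void Main()
    {
        // Ask for a 256 KB internal buffer instead of the small default.
        using (FileStream MyFile = new FileStream("MyFile", FileMode.Create,
                   FileAccess.Write, FileShare.None, 65536 * 4))
        using (BinaryWriter MyBinaryWriter = new BinaryWriter(MyFile))
        {
            for (int iLoop = 0; iLoop < 100000; iLoop++)
            {
                MyBinaryWriter.Write(10.8);       // x
                MyBinaryWriter.Write(20.8);       // y
                MyBinaryWriter.Write("01234567"); // name
            }
        }
    }
}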
Unfortunately, on a PPC (tested on a Symbol 2800 with PPC 2002) we get 45
and 39 sec respectively - not impressive at all.
The results on a PPC 2003 device (with a 206 MHz CPU) are somewhere in the
middle.
A little bit of experimenting shows that on a PPC most of the time is
spent inside calls to BinaryWriter.Write. Replacing these calls with
BitConverter.GetBytes and Stream.Write (which is essentially what
BinaryWriter does internally), we can see that the bulk of the time is
spent inside BitConverter.GetBytes. Why is that? Because internally it
allocates a small byte array every time you call it, and that is expensive.
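To make that concrete, this is roughly what each such call boils down to,
per the description above (a simplified sketch, not the actual framework
source):

using System;
using System.IO;

class SlowWriteSketch
{
    // The per-value path described above: a fresh 8-byte array is
    // allocated on every call, then handed to the stream.
    public static void WriteDoubleTheSlowWay(Stream stream, double value)
    {
        byte[] tmp = BitConverter.GetBytes(value); // new byte[8] each time
        stream.Write(tmp, 0, tmp.Length);
    }
}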
Can anything be done about it? Not much, but if you are willing to work a
little you can get by with code like this (just a quick sample - it needs
more error checking):
// Point stands in for one of your objects (x1, y1, Name are its fields).
// Needs "using System.Text;" for Encoding; MyFile is the FileStream from above.
double[] doubles = new double[100000];
int byteLen = Buffer.ByteLength(doubles);   // 100000 * 8 bytes
byte[] bytes = new byte[byteLen];           // one reusable staging buffer

// Gather all the x values, then push them to the file in a single call.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.x1;
Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Same for the y values, reusing the same two arrays.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.y1;
Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Strings: encode straight into the reusable buffer, no per-call allocation.
for (int iLoop = 0; iLoop < 100000; iLoop++)
{
    int nBytes = Encoding.ASCII.GetBytes(Point.Name, 0,
        Point.Name.Length, bytes, 0);
    MyFile.Write(bytes, 0, nBytes);
}
This brings the execution time down to 13 sec. Most of the gain comes
from not allocating the small arrays over and over. Using
Buffer.BlockCopy (which is implemented as a P/Invoked routine in the
unmanaged runtime) also helps, and even outweighs the expense of creating
an extra copy of the data.
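If you want to see the difference on your own device, a quick (hypothetical)
timing harness like this - using nothing fancier than Environment.TickCount -
contrasts the per-value conversions with a single bulk copy:

using System;

class BlockCopyTiming
{
    static void Main()
    {
        double[] doubles = new double[100000];
        byte[] bytes = new byte[Buffer.ByteLength(doubles)];

        // Per-value conversion: one small array allocation per element.
        int start = Environment.TickCount;
        for (int i = 0; i < doubles.Length; i++)
        {
            byte[] tmp = BitConverter.GetBytes(doubles[i]);
            Buffer.BlockCopy(tmp, 0, bytes, i * 8, 8);
        }
        Console.WriteLine("Per-value GetBytes: " +
            (Environment.TickCount - start) + " ms");

        // Bulk copy: one call into the unmanaged copy routine.
        start = Environment.TickCount;
        Buffer.BlockCopy(doubles, 0, bytes, 0, bytes.Length);
        Console.WriteLine("Single BlockCopy:   " +
            (Environment.TickCount - start) + " ms");
    }
}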
--
Alex Feinman
---
Visit
http://www.opennetcf.org
Renzo said:
Hi, I use VS2003 7.1 with the C# language for Pocket PC development.
Well, my application must save/load a lot of small class objects to a
binary file, so I tried to use the FileStream and BinaryWriter/BinaryReader
classes. My test project writes 100,000 elements as in the following example:
--------------------
for (int iLoop = 1; iLoop <= 100000; iLoop++)
{
    MyBinaryWriter.Write(10.8);       // double
    MyBinaryWriter.Write(20.8);       // double
    MyBinaryWriter.Write("01234567"); // string
}
--------------------
Well, my test program takes 76 seconds to save the data!
That's too much for me!
So my question is: is there some way or strategy to increase file I/O
performance?