FileStream performance on Pocket PC


Guest

Hi, I use VS2003 (7.1) with C# for Pocket PC development.
My application must save/load a lot of small class objects to a binary
file, so I tried the FileStream and BinaryWriter/BinaryReader classes. My
test project writes 100,000 elements as in the following example:
--------------------
for (int iLoop = 1; iLoop <= 100000; iLoop++)
{
    MyBinaryWriter.WriteDouble(10.8);
    MyBinaryWriter.WriteDouble(20.8);
    MyBinaryWriter.WriteString("01234567");
}
--------------------
Well, my test program takes 76 seconds to save the data!
That's too much for me!

So my question is: is there some way or strategy to increase file I/O
performance?
 
Can you post a complete example including how you measured the time?

Cheers
Daniel
 
Hi Renzo!!!

Well, I'm also seeing low performance reading and writing data from
a text file. Also, my "insert" statements in SQL CE are very slow (which in
the end is a write operation to the DB file). Now, in my case, I didn't pay a
lot of attention to this issue, for two reasons:

1. New PDAs with at least 32 MB of memory work much faster than the
emulator included in VS. In my case, almost 5 times faster.

2. A PDA is a small device, and it is going to stay small for some time. I
don't expect a PDA to have the same computing power as a PC. It is just a
different piece of hardware. Old guys like me understand the concept
(something similar happened when PCs arrived on the market).

Hope it helps!!!

Tarh ik
PS: This posting has been posted "AS IS"
 
Daniel, thanks for your answer. Here is the code:

public class TPoint
{
    public double x1;
    public double y1;
    public string Name;
}

private void btnWrite_Click(object sender, System.EventArgs e)
{
    ArrayList LPoint = new ArrayList();

    TPoint Point = (TPoint) LPoint[LPoint.Add(new TPoint())];
    Point.x1 = 10;
    Point.y1 = 20;

    FileStream MyFile = new FileStream("MyFile", FileMode.Create);
    BinaryWriter FWriter = new BinaryWriter(MyFile);

    edtTimeI.Text = System.Convert.ToString(Environment.TickCount);

    for (int iLoop = 1; iLoop <= 100000; iLoop++)
    {
        FWriter.WriteDouble(Point.x1);
        FWriter.WriteDouble(Point.y1);
        FWriter.WriteString("01234567");
    }

    edtTimeF.Text = System.Convert.ToString(Environment.TickCount);

    MyFile.CloseLavoro();
}

This is part of my test; I measure the time with Environment.TickCount.
Now I'll try to explain my problem better: I have an ArrayList of class
objects (for example a TPoint class) and I need to save this array of data to a
file. So I am looking for the best way to save the data in the shortest possible
time!

Do you have any tricks for me?

Daniel Moth said:
Can you post a complete example including how you measured the time?

Cheers
Daniel
 
Tarh ik, thanks for your answer :-) Yes, I programmed with 640KB problems too.

Now I'll try to explain my problem better: I have an ArrayList of class
objects (for example a TPoint class) and I need to save this array of data to a
binary file. So I am looking for the best way to save the data in the shortest
possible time!

Do you have any tricks for me?
 
Is this *exactly* the code you are trying? WriteDouble, WriteString and
CloseLavoro are not framework methods... Even so, this code runs 4 times
faster than what you claim (in the emulator). [Over a minute sounded too
much, which is why I replied.]

I suggest you find a real scenario (as you say this is made up) and test it
in release mode, without a debugger attached, on the device.

If your device supports it, QueryPerformanceCounter is a better way of
measuring time:
http://groups.google.com/groups?hl=...soft.public.dotnet.framework.compactframework
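
(For reference, the P/Invoke involved is small. A sketch only, assuming a CE device that actually implements the high-resolution counter; the HiResTimer class and method names are mine:)

using System;
using System.Runtime.InteropServices;

public class HiResTimer
{
    // On Windows CE these APIs are exported from coredll.dll,
    // not kernel32.dll as on the desktop.
    [DllImport("coredll.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("coredll.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    // Raw counter value; take one reading before and one after
    // the code under test.
    public static long Ticks()
    {
        long t;
        QueryPerformanceCounter(out t);
        return t;
    }

    // Convert a tick difference to milliseconds using the counter frequency.
    public static double ElapsedMs(long start, long stop)
    {
        long freq;
        QueryPerformanceFrequency(out freq);
        return (stop - start) * 1000.0 / freq;
    }
}

Take Ticks() before and after the write loop and feed both readings to ElapsedMs.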

Anywhere you use doubles you'll get a much greater perf hit than if you use
ints. So if you can use ints, do it.

Sounds like what you really want to do is serialise objects (they don't have
to be in a collection of any sort), so look up serialisation ideas from the
past:
http://groups.google.com/groups?hl=...soft.public.dotnet.framework.compactframework
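
For example, a minimal hand-rolled version for the TPoint class you posted (a sketch; the Save/Load names are mine, and since the Compact Framework has no BinaryFormatter, each class persists its own fields):

using System;
using System.IO;

public class TPoint
{
    public double x1;
    public double y1;
    public string Name;

    // Write the fields in a fixed order (assumes Name is non-null)...
    public void Save(BinaryWriter w)
    {
        w.Write(x1);
        w.Write(y1);
        w.Write(Name);   // length-prefixed string
    }

    // ...and read them back in the same order.
    public static TPoint Load(BinaryReader r)
    {
        TPoint p = new TPoint();
        p.x1 = r.ReadDouble();
        p.y1 = r.ReadDouble();
        p.Name = r.ReadString();
        return p;
    }
}

You would write the ArrayList's Count first, then loop calling Save on each element; reading is the mirror image.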

Cheers
Daniel
--
http://www.danielmoth.com/Blog/


Renzo said:
Daniel, thanks for your answer. Here is the code:

public class TPoint
{
    public double x1;
    public double y1;
    public string Name;
}

private void btnWrite_Click(object sender, System.EventArgs e)
{
    ArrayList LPoint = new ArrayList();

    TPoint Point = (TPoint) LPoint[LPoint.Add(new TPoint())];
    Point.x1 = 10;
    Point.y1 = 20;

    FileStream MyFile = new FileStream("MyFile", FileMode.Create);
    BinaryWriter FWriter = new BinaryWriter(MyFile);

    edtTimeI.Text = System.Convert.ToString(Environment.TickCount);

    for (int iLoop = 1; iLoop <= 100000; iLoop++)
    {
        FWriter.WriteDouble(Point.x1);
        FWriter.WriteDouble(Point.y1);
        FWriter.WriteString("01234567");
    }

    edtTimeF.Text = System.Convert.ToString(Environment.TickCount);

    MyFile.CloseLavoro();
}

This is part of my test; I measure the time with Environment.TickCount.
Now I'll try to explain my problem better: I have an ArrayList of class
objects (for example a TPoint class) and I need to save this array of data
to a file. So I am looking for the best way to save the data in the shortest
possible time!

Do you have any tricks for me?
 
First, let me start by pointing out that where you get the most significant performance hit depends on the platform. For example, on my Athlon64 3500+ machine the emulator takes 10 sec to loop through your code.
If we enlarge the FileStream buffer size by declaring it like this:
FileStream MyFile=new FileStream("MyFile", FileMode.Create, FileAccess.Write, FileShare.None, 65536 * 4);

we instantly get it down to 1200 msec.
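
(Dropping that into the posted handler is a one-line change; a sketch, using the 256 KB buffer from the timing above:)

// Larger FileStream buffer: most Write calls now hit memory, and the
// buffer is flushed to the file system in big chunks on Flush/Close.
FileStream MyFile = new FileStream("MyFile", FileMode.Create,
    FileAccess.Write, FileShare.None, 65536 * 4);
BinaryWriter FWriter = new BinaryWriter(MyFile);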

Unfortunately, on a Pocket PC (tested on a Symbol 2800 with PPC 2002) we get 45 and 39 sec respectively - not impressive at all.

The results on a PPC 2003 device (with a 206 MHz CPU) are somewhere in the middle.

A little experimenting shows that on a Pocket PC most of the time is spent inside calls to BinaryWriter.Write. Replacing these calls with BitConverter.GetBytes and Stream.Write (which is exactly what BinaryWriter does internally), we can see that the bulk of the time is spent inside BitConverter.GetBytes. Why is that? Because internally it allocates a small byte array every time you call it. That is expensive. Can anything be done about it? Not much, but if you are willing to work a little you can get by with code like this (just a quick sample; it needs more error checking):

byte[] bytes;
double[] doubles = new double[100000];
int byteLen = Buffer.ByteLength(doubles);
bytes = new byte[byteLen];

// Fill a whole column of x values, blit it into the byte array in one
// call, and write it with a single Stream.Write - no per-value allocations.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.x1;

Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Same for the y column.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.y1;

Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Strings: encode into the existing buffer instead of allocating new arrays.
for (int iLoop = 0; iLoop < 100000; iLoop++)
{
    int nBytes = Encoding.ASCII.GetBytes(Point.Name, 0, Point.Name.Length, bytes, 0);
    MyFile.Write(bytes, 0, nBytes);
}


This brings the execution time down to 13 sec. Most of the gain comes from not allocating the small arrays over and over. Using Buffer.BlockCopy (which is implemented as a P/Invoked routine in the unmanaged runtime) also helps, and even outweighs the expense of creating an extra copy of the data.
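
(For completeness, reading it back is the mirror image. A sketch only, assuming the exact layout written above - an x column, a y column, then the fixed 8-byte ASCII names:)

double[] xs = new double[100000];
byte[] bytes = new byte[Buffer.ByteLength(xs)];

FileStream f = new FileStream("MyFile", FileMode.Open, FileAccess.Read);

// Read the whole x column in one go and blit it back into the double array.
// (A robust version would loop until Read has delivered all the bytes.)
f.Read(bytes, 0, bytes.Length);
Buffer.BlockCopy(bytes, 0, xs, 0, bytes.Length);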
 
Thanks Alex,

Let me ask you another question: can I write a C++ DLL that saves/loads my
ArrayList of structures to a binary file? I think that it's possible,
but would all this increase performance?


Alex Feinman said:
First, let me start by pointing out that where you get the most significant performance hit depends on the platform. For example, on my Athlon64 3500+ machine the emulator takes 10 sec to loop through your code.
If we enlarge the FileStream buffer size by declaring it like this:
FileStream MyFile=new FileStream("MyFile", FileMode.Create, FileAccess.Write, FileShare.None, 65536 * 4);

we instantly get it down to 1200 msec.

Unfortunately, on a Pocket PC (tested on a Symbol 2800 with PPC 2002) we get 45 and 39 sec respectively - not impressive at all.

The results on a PPC 2003 device (with a 206 MHz CPU) are somewhere in the middle.

A little experimenting shows that on a Pocket PC most of the time is spent inside calls to BinaryWriter.Write. Replacing these calls with BitConverter.GetBytes and Stream.Write (which is exactly what BinaryWriter does internally), we can see that the bulk of the time is spent inside BitConverter.GetBytes. Why is that? Because internally it allocates a small byte array every time you call it. That is expensive. Can anything be done about it? Not much, but if you are willing to work a little you can get by with code like this (just a quick sample; it needs more error checking):

byte[] bytes;
double[] doubles = new double[100000];
int byteLen = Buffer.ByteLength(doubles);
bytes = new byte[byteLen];

// Fill a whole column of x values, blit it into the byte array in one
// call, and write it with a single Stream.Write - no per-value allocations.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.x1;

Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Same for the y column.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.y1;

Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Strings: encode into the existing buffer instead of allocating new arrays.
for (int iLoop = 0; iLoop < 100000; iLoop++)
{
    int nBytes = Encoding.ASCII.GetBytes(Point.Name, 0, Point.Name.Length, bytes, 0);
    MyFile.Write(bytes, 0, nBytes);
}

This brings the execution time down to 13 sec. Most of the gain comes from not allocating the small arrays over and over. Using Buffer.BlockCopy (which is implemented as a P/Invoked routine in the unmanaged runtime) also helps, and even outweighs the expense of creating an extra copy of the data.


--
Alex Feinman
---
Visit http://www.opennetcf.org
Renzo said:
Hi, I use VS2003 (7.1) with C# for Pocket PC development.
My application must save/load a lot of small class objects to a binary
file, so I tried the FileStream and BinaryWriter/BinaryReader classes. My
test project writes 100,000 elements as in the following example:
--------------------
for (int iLoop = 1; iLoop <= 100000; iLoop++)
{
    MyBinaryWriter.WriteDouble(10.8);
    MyBinaryWriter.WriteDouble(20.8);
    MyBinaryWriter.WriteString("01234567");
}
--------------------
Well, my test program takes 76 seconds to save the data!
That's too much for me!

So my question is: is there some way or strategy to increase file I/O
performance?
 
I think you will see a performance increase if you offload writing the file to
an unmanaged DLL, but you may lose some of it on marshalling the data
structures to unmanaged code. Experiment will tell.
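
(To illustrate the marshalling boundary - a sketch only; the DLL name NativeIO.dll and the WritePoints export are hypothetical:)

// Hypothetical native side, compiled into NativeIO.dll:
//   extern "C" __declspec(dllexport) BOOL WritePoints(
//       const wchar_t* path, const double* xs, const double* ys, int count);

[DllImport("NativeIO.dll", CharSet=CharSet.Unicode)]
static extern bool WritePoints(string path, double[] xs, double[] ys, int count);

// The double arrays marshal as pointers to their pinned contents, so the
// ArrayList of TPoint objects still has to be copied into plain arrays
// first - that copy is part of the marshalling cost mentioned above.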

--
Alex Feinman
---
Visit http://www.opennetcf.org
Renzo said:
Thanks Alex,

Let me ask you another question: can I write a C++ DLL that saves/loads my
ArrayList of structures to a binary file? I think that it's possible,
but would all this increase performance?


Alex Feinman said:
First, let me start by pointing out that where you get the most significant
performance hit depends on the platform. For example, on my Athlon64 3500+
machine the emulator takes 10 sec to loop through your code.
If we enlarge the FileStream buffer size by declaring it like this:
FileStream MyFile=new FileStream("MyFile", FileMode.Create,
FileAccess.Write, FileShare.None, 65536 * 4);

we instantly get it down to 1200 msec.

Unfortunately, on a Pocket PC (tested on a Symbol 2800 with PPC 2002) we get
45 and 39 sec respectively - not impressive at all.

The results on a PPC 2003 device (with a 206 MHz CPU) are somewhere in the
middle.

A little experimenting shows that on a Pocket PC most of the time is
spent inside calls to BinaryWriter.Write. Replacing these calls with
BitConverter.GetBytes and Stream.Write (which is exactly what
BinaryWriter does internally), we can see that the bulk of the time is
spent inside BitConverter.GetBytes. Why is that? Because internally it
allocates a small byte array every time you call it. That is expensive.
Can anything be done about it? Not much, but if you are willing to work a
little you can get by with code like this (just a quick sample; it needs
more error checking):

byte[] bytes;
double[] doubles = new double[100000];
int byteLen = Buffer.ByteLength(doubles);
bytes = new byte[byteLen];

// Fill a whole column of x values, blit it into the byte array in one
// call, and write it with a single Stream.Write - no per-value allocations.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.x1;

Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Same for the y column.
for (int iLoop = 0; iLoop < 100000; iLoop++)
    doubles[iLoop] = Point.y1;

Buffer.BlockCopy(doubles, 0, bytes, 0, byteLen);
MyFile.Write(bytes, 0, byteLen);

// Strings: encode into the existing buffer instead of allocating new arrays.
for (int iLoop = 0; iLoop < 100000; iLoop++)
{
    int nBytes = Encoding.ASCII.GetBytes(Point.Name, 0, Point.Name.Length, bytes, 0);
    MyFile.Write(bytes, 0, nBytes);
}

This brings the execution time down to 13 sec. Most of the gain comes
from not allocating the small arrays over and over. Using
Buffer.BlockCopy (which is implemented as a P/Invoked routine in the
unmanaged runtime) also helps, and even outweighs the expense of creating
an extra copy of the data.


--
Alex Feinman
---
Visit http://www.opennetcf.org
Renzo said:
Hi, I use VS2003 (7.1) with C# for Pocket PC development.
My application must save/load a lot of small class objects to a
binary
file, so I tried the FileStream and BinaryWriter/BinaryReader
classes. My
test project writes 100,000 elements as in the following example:
--------------------
for (int iLoop = 1; iLoop <= 100000; iLoop++)
{
    MyBinaryWriter.WriteDouble(10.8);
    MyBinaryWriter.WriteDouble(20.8);
    MyBinaryWriter.WriteString("01234567");
}
--------------------
Well, my test program takes 76 seconds to save the data!
That's too much for me!

So my question is: is there some way or strategy to increase file I/O
performance?
 