Morten,
Thanks for your reply. I understand what you're doing in the code, but
isn't reading line by line slow?
The file is over 64 MB in size, and reading it line by line for every
search seems like a lot of overhead, especially when the user does many
searches while running the app; it would mean reading and searching the
>64 MB file many times. That's why I opted to keep the whole file in
memory, which might not be the best idea.
I'm currently trying to get some more time from my client to optimize
by creating an index of the file (which doesn't change that often),
searching through that, and retrieving the part of the text file
corresponding to the index...
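Roughly what I have in mind is something like the sketch below. The
record length, CRLF line endings, single-byte encoding, the record-type
digit in column 0 and the key field in columns 10-16 are all assumptions
for the sake of the example; the real values depend on the record layout.

using System.Collections.Generic;
using System.IO;
using System.Text;

class RecordIndex
{
    // Assumed layout (placeholders): 80-character records, CRLF line
    // endings, single-byte encoding, record-type digit in column 0,
    // search key in columns 10-16 of the type-1 record.
    const int RecordLength = 80;
    const int NewLineLength = 2;            // "\r\n"
    const int KeyStart = 10, KeyLength = 7; // e.g. "2742281"

    readonly Dictionary<string, long> _offsets = new Dictionary<string, long>();
    readonly string _path;

    public RecordIndex(string path)
    {
        _path = path;
        long lineNumber = 0;
        foreach (string line in File.ReadLines(path, Encoding.Default))
        {
            // A type-1 record opens a new set; remember the byte offset
            // where it starts, keyed by the searched field.
            if (line.Length >= KeyStart + KeyLength && line[0] == '1')
                _offsets[line.Substring(KeyStart, KeyLength)] =
                    lineNumber * (RecordLength + NewLineLength);
            lineNumber++;
        }
    }

    // Returns the record set (types 1..9) for a key, or null if not indexed.
    public List<string> Find(string key)
    {
        long offset;
        if (!_offsets.TryGetValue(key, out offset))
            return null;

        var set = new List<string>();
        using (var fs = new FileStream(_path, FileMode.Open, FileAccess.Read))
        {
            fs.Seek(offset, SeekOrigin.Begin);   // jump straight to the set
            using (var sr = new StreamReader(fs, Encoding.Default))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    set.Add(line);
                    if (line.Length > 0 && line[0] == '9')
                        break;                   // type-9 record ends the set
                }
            }
        }
        return set;
    }
}

That way the 64 MB file is scanned once when the index is built, and each
search afterwards is a dictionary lookup plus one small read.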
Jurjen.
Hi Jurjen,
Sounds to me like you could just use ReadLine() and do a search per
record. You should open the reader with the encoding actually used in
the file; if you don't specify one, UTF-8 is assumed. You would also
need some logic to keep track of an entire record set, which can be a
string[] of length 9:
using (StreamReader sr = new StreamReader("", Encoding.Default)) // path of the file goes here
{
    string s;
    string[] recordset = new string[9];
    int index = 0;
    while ((s = sr.ReadLine()) != null)
    {
        int i = GetRecordNumber(s);    // record type 1..9 parsed from the line
        if (i > index + 1)
        {
            // missing record(s) in this set; handle as needed
        }
        recordset[index] = s;
        index++;
        if (i == 9)                    // record type 9 closes the set
        {
            if (SearchRecordSet(recordset))
                return true;           // a match was found in this set
            Array.Clear(recordset, 0, 9);
            index = 0;
        }
    }
}
PS! Your system clock is a bit too fast
Greg,
The text file consists of records of 80 characters separated by a
newline. These records all have a record type 1 through 9; a set of
records starts with record type 1 and ends with record type 9, at which
point the next set starts with record type 1 again.
I search the contents of the file for the search criteria as entered by
the user, e.g. 2742281. When I find this sequence I have to make sure it
occurs in exactly the right position within the record, so that I know I
have compared it against the right field. Then I have to show the record
found (which should be record type 1) and show all records until I find
record type 9 (or EOF).
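The position check itself would be something like this (the column
offset here is made up; the real one depends on the record layout):

static class FieldMatch
{
    const int FieldStart = 10;   // assumed column where the searched field begins

    public static bool MatchesField(string record, string criteria)
    {
        // Only a hit if the value sits in exactly the expected columns,
        // not just anywhere in the 80-character record.
        return record.Length >= FieldStart + criteria.Length
            && string.CompareOrdinal(record, FieldStart, criteria, 0,
                                     criteria.Length) == 0;
    }
}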
I could create an index, but that would complicate the app. I also
thought of creating a DataTable to ease the search, but I'm pretty sure
memory consumption would be even worse...
I was just wondering why the current app is consuming so much memory,
which is now clear to me. I guess my client will have to make the
decision: a cheap app which uses a lot of memory, or a slightly more
expensive app which uses less.
Regards,
Jurjen.
As CD has said, this is expected because .NET strings are UTF-16: each
character of the loaded string takes two bytes in memory, so a
single-byte text file roughly doubles in size once it is in a string.
My question would be how you are searching this file.
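To put rough numbers on that (just a sketch; two bytes per character
holds for plain ASCII data like yours):

using System;
using System.IO;

class MemoryEstimate
{
    static void Main(string[] args)
    {
        string path = args[0];                     // the >60 MB text file
        string text = File.ReadAllText(path);      // decoded to a UTF-16 string

        long onDisk   = new FileInfo(path).Length; // bytes in the file
        long asString = (long)text.Length * 2;     // UTF-16: 2 bytes per char

        Console.WriteLine("{0:N0} bytes on disk, ~{1:N0} bytes as a string",
                          onDisk, asString);
    }
}

The rest of the gap up to the 200 MB you're seeing most likely comes
from the temporary buffers ReadToEnd allocates while growing the result,
plus normal heap overhead.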
Are you just doing keyword searches? Depending on the type of search,
you might be much better off building an index of the file and loading
only the index into memory.
Cheers,
Greg
My program needs to search a large text file (>60 MB).
At this time I'm using a StreamReader to read the file into a string
variable (objString = sr.ReadToEnd()). Before reading the file, the
process running my program uses about 10 MB; after reading the text file
into the string, it uses over 200 MB. I would expect the program to use
between 70 and 100 MB.
Is there a more efficient way of storing this data in memory and still
be able to search through it ... ?
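Would something along these lines be reasonable? Just a sketch, assuming
the file is single-byte encoded (e.g. ASCII): keep the raw bytes instead
of the decoded string (roughly half the memory) and search the bytes
directly.

using System;
using System.IO;
using System.Text;

class ByteSearch
{
    static void Main(string[] args)
    {
        // Keep the raw file bytes (~60 MB) instead of a UTF-16 string (~120 MB).
        byte[] data = File.ReadAllBytes(args[0]);

        // Encode the search term the same way the file is encoded
        // (assumed single-byte here) and scan for it.
        byte[] needle = Encoding.Default.GetBytes("2742281");
        Console.WriteLine("found at byte offset: " + IndexOf(data, needle));
    }

    // Naive byte-by-byte search; good enough for an occasional lookup.
    static int IndexOf(byte[] haystack, byte[] needle)
    {
        for (int i = 0; i <= haystack.Length - needle.Length; i++)
        {
            int j = 0;
            while (j < needle.Length && haystack[i + j] == needle[j]) j++;
            if (j == needle.Length) return i;
        }
        return -1;
    }
}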
TIA,
Jurjen.