I have a fairly simple C# program that just needs to open a fixed-width
file, convert each record to tab-delimited, and append a field to the end of
it.
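To give a concrete picture, here is a stripped-down sketch of what the conversion boils down to (the field widths, names, and the appended field are made-up placeholders for this post; the real layout has far more columns):

using System;
using System.IO;

class FixedWidthConverter
{
    // Placeholder widths -- the real record layout has far more fields.
    static readonly int[] FieldWidths = { 10, 25, 8 };

    static void Convert(string inPath, string outPath, string extraField)
    {
        using (StreamReader reader = new StreamReader(inPath))
        using (StreamWriter writer = new StreamWriter(outPath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                int pos = 0;
                // Slice each fixed-width field out of the record and
                // write it back out tab-delimited.
                foreach (int width in FieldWidths)
                {
                    writer.Write(line.Substring(pos, width).Trim());
                    writer.Write('\t');
                    pos += width;
                }
                // Append the new field to the end of the record.
                writer.WriteLine(extraField);
            }
        }
    }

    static void Main(string[] args)
    {
        Convert(args[0], args[1], "NEWFIELD");
    }
}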
The input files are between 300 MB and 600 MB. I've tried every memory
conservation trick I know in my conversion program, and a bunch I picked up
from reading some of the MSDN C# blogs, but my program still ends up using
hundreds and hundreds of megabytes of RAM. It also takes excessively long to
process the files (between 10 and 25 minutes). Worse, with each successive
file I process in the same run, performance drops off sharply, so that by the
third file the program grinds to a halt and never completes.
I ended up rewriting the process in Perl, which takes only a couple of
minutes and never gets above a 40 MB footprint.
What gives?
I'm noticing this same poor memory behavior in all my programs that do any
kind of intensive string processing.
I have a second program that just implements the LZW decompression
algorithm (pretty much copied straight out of the manuals). It works great on
files smaller than 100 KB, but if I try to run it on a file that's just
4.5 MB compressed, it climbs to a 200+ MB footprint and then starts throwing
OutOfMemoryExceptions.
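For reference, my decode loop is essentially the textbook version. Here's a condensed sketch (variable names and the int-code input are simplifications for this post, not the exact code):

using System;
using System.Collections.Generic;
using System.Text;

class LzwDecoder
{
    // Decode a stream of LZW codes back into text.
    static string Decode(IEnumerable<int> codes)
    {
        // The dictionary starts out holding every single-byte string.
        Dictionary<int, string> dict = new Dictionary<int, string>();
        for (int i = 0; i < 256; i++)
            dict[i] = ((char)i).ToString();

        StringBuilder output = new StringBuilder();
        string w = null;
        int nextCode = 256;

        foreach (int k in codes)
        {
            string entry;
            if (dict.ContainsKey(k))
                entry = dict[k];
            else if (w != null && k == nextCode)
                entry = w + w[0];               // the cScSc special case
            else
                throw new Exception("Bad LZW code: " + k);

            output.Append(entry);

            // Each new dictionary entry is built by string concatenation,
            // so every code processed allocates at least one new string.
            if (w != null)
                dict[nextCode++] = w + entry[0];

            w = entry;
        }
        return output.ToString();
    }
}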
I was wondering if somebody could look at what I've got and see if I'm
missing something important. I'm an old-school C programmer, so I may be
carrying over habits that are bad practice in C#.
I'd appreciate any help anybody can give.
Regards,
Seg