Thanks again guys.
I have two follow-up questions relating to this:
1) I want the XML file that .NET generates to have a separate node for each
language, so that we can selectively load only the text values for the
language the user selects for the Add-in. That will reduce memory use and
make loading quicker. Currently the code produces the XML in a flat
structure:
<XMLData>
  <LanguageCode>EN-US</LanguageCode>
  <LookupID>391</LookupID>
  <LanguageText>Hello.</LanguageText>
</XMLData>
<XMLData>
  <LanguageCode>EN-US</LanguageCode>
  <LookupID>392</LookupID>
  <LanguageText>Select a different email address.</LanguageText>
</XMLData>
<XMLData>
  <LanguageCode>EN-US</LanguageCode>
  <LookupID>393</LookupID>
  <LanguageText>Check that you have installed the correct driver.</LanguageText>
</XMLData>
<XMLData>
  <LanguageCode>EN-US</LanguageCode>
  <LookupID>394</LookupID>
  <LanguageText>Click OK to continue.</LanguageText>
</XMLData>
<XMLData>
  <LanguageCode>FR-FR</LanguageCode>
  <LookupID>391</LookupID>
  <LanguageText>Bonjour</LanguageText>
</XMLData>
....
This is the code that writes the result of the stored procedure's select
query out to XML:

try {
    sqlConnection1.Open();
    // DataSetName becomes the root element; each row of the "XMLData"
    // table becomes a child element of the same name.
    ds.DataSetName = "XMLData";
    ds.Load(cmd.ExecuteReader(), LoadOption.OverwriteChanges, "XMLData");

    // Dispose the stream so the file is flushed and closed.
    using (System.IO.FileStream myFileStream =
        new System.IO.FileStream(filename, System.IO.FileMode.Create)) {
        ds.WriteXml(myFileStream, XmlWriteMode.WriteSchema);
    }
}
catch (SqlException) {
    // Rethrow as-is; "throw ex;" would reset the stack trace.
    throw;
}
finally {
    sqlConnection1.Close();
    sqlConnection1.Dispose();
    cmd.Dispose();
}
How can we make it create a node for each language?
2) Then how can we load just one node (e.g. FR-FR) into the Dictionary?
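Something along these lines is roughly what I'm picturing, but it's only a
sketch I haven't tested (the element names "Languages"/"Language"/"Entry"
are made up, and it assumes .NET 3.5+ so LINQ to XML is available, plus the
flat "XMLData" table loaded by the code above):

using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Xml.Linq;

// 1) Regroup the flat rows into one node per language, e.g.:
//    <Languages>
//      <Language code="EN-US">
//        <Entry id="391">Hello.</Entry>
//        ...
//      </Language>
//      <Language code="FR-FR">...</Language>
//    </Languages>
XDocument grouped = new XDocument(
    new XElement("Languages",
        from DataRow row in ds.Tables["XMLData"].Rows
        group row by row["LanguageCode"].ToString() into g
        select new XElement("Language",
            new XAttribute("code", g.Key),
            from r in g
            select new XElement("Entry",
                new XAttribute("id", r["LookupID"]),
                r["LanguageText"].ToString()))));
grouped.Save(filename);

// 2) Later, load only the node for the language the user selected,
//    straight into a Dictionary keyed by LookupID:
Dictionary<int, string> texts = XDocument.Load(filename)
    .Root
    .Elements("Language")
    .Where(l => (string)l.Attribute("code") == "FR-FR")
    .Elements("Entry")
    .ToDictionary(e => (int)e.Attribute("id"), e => e.Value);

(XDocument.Load still parses the whole file; the saving is that only the
selected language's strings stay in memory. If the parse itself ever became
a problem, an XmlReader that skips the other Language elements would avoid
even that.)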
Hm.
> 400 KB is nothing today.
It's close to half a megabyte. Systems with 256 MB of RAM are still not
uncommon, and 400K is a noticeable chunk when the system's memory is nearly
fully utilized and you're dipping into swap. The existence of virtual memory
isn't really an excuse to act as if one has infinite resources. There is far
too much software out there that is bloated and carries way too much
overhead.
> And in memory will be a lot faster than a database.
Only if the system isn't leaning heavily on swap. On my system it would be
really fast; I have 8 gigs of RAM. But I know many people using systems with
memory sizes as small as 128 MB. We don't have to count memory down to the
bit anymore when we're measuring consumption, but there is a middle road
between those days and assuming a machine with infinite memory. IOW, that
road lies between assuming only a tiny space is available and optimizing the
crap out of software for size, and not thinking about the memory footprint
of what we're doing at all. The middle road seems reasonable when you're
talking in terms that can be expressed in tenths of a megabyte. If we were
talking about 40K of data, I'd think a database was insane overhead. But a
ZIP code database with any meaningful information is going to be at least
2.1 MB for 80,000 records, not including the structure of the file it's
stored in, whether that be XML or DBF or even CSV. With 256 MB of RAM, it
only takes 128 processes each assuming they can grab 2 MB before you're
dipping into swap (oversimplified; realistically it's probably closer to 80
processes once the operating system takes its share, but you get the point).
> And writing one's own database library seems like a complete
> waste of resources.
A DBF reader is pretty trivial in any language, and functionality to
utilize the index is not much harder than that, really. The format isn't
that complex; it's just a structured flat file, which is easy to write a
library for. I just don't know if someone's written such a lightweight thing
in C# yet. If not, well, whatever; again, it's a trivial file format to
write for, and for a read-only database (from the application's point of
view) it's _very_ simple to implement. A reasonably decent programmer should
be able to implement a relatively efficient, read-only DBF library in 2
hours (not counting the time it takes to read the specs); 3 hours if you
include an efficient method of utilizing an index; add a few more hours if
you want to write to it with row-level locking. Subtract a little if you
skip the Memo functionality, which is really only needed if you're trying to
import old dBase or VFP data from applications that actually used memo
fields.
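To make that concrete, here's a rough, untested sketch of the read-only part
in C# (it assumes the standard dBase III header layout and a little-endian
host; the class name, and returning each record as a name-to-string
dictionary, are arbitrary choices of mine, and it ignores indexes, memo
files, and code pages entirely):

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

// Minimal read-only DBF access: parse the file header and the field
// descriptors, then yield each record as a field-name -> string map.
class SimpleDbfReader
{
    class Field { public string Name; public char Type; public int Length; }

    public static IEnumerable<Dictionary<string, string>> ReadRecords(string path)
    {
        using (FileStream fs = File.OpenRead(path))
        using (BinaryReader br = new BinaryReader(fs))
        {
            byte[] head = br.ReadBytes(32);               // fixed 32-byte file header
            uint recordCount = BitConverter.ToUInt32(head, 4);
            ushort headerSize = BitConverter.ToUInt16(head, 8);
            ushort recordSize = BitConverter.ToUInt16(head, 10);

            // Field descriptors are 32 bytes each, terminated by 0x0D.
            List<Field> fields = new List<Field>();
            int fieldCount = (headerSize - 32 - 1) / 32;
            for (int i = 0; i < fieldCount; i++)
            {
                byte[] d = br.ReadBytes(32);
                fields.Add(new Field {
                    Name = Encoding.ASCII.GetString(d, 0, 11).TrimEnd('\0', ' '),
                    Type = (char)d[11],
                    Length = d[16]
                });
            }

            // Records start right after the header; walk them sequentially.
            fs.Seek(headerSize, SeekOrigin.Begin);
            for (uint r = 0; r < recordCount; r++)
            {
                byte[] rec = br.ReadBytes(recordSize);
                if (rec.Length < recordSize) yield break; // truncated file
                if (rec[0] == '*') continue;              // '*' marks a deleted record
                Dictionary<string, string> row = new Dictionary<string, string>();
                int offset = 1;                           // skip the deletion flag
                foreach (Field f in fields)
                {
                    row[f.Name] = Encoding.ASCII.GetString(rec, offset, f.Length).Trim();
                    offset += f.Length;
                }
                yield return row;
            }
        }
    }
}

Usage would just be foreach (var row in SimpleDbfReader.ReadRecords("zip.dbf"))
and a lookup of the columns you care about; the real-world DBF variations
(code pages, memo files, numeric types) are exactly the parts you'd skip for
a read-only lookup table.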
It would be a waste of resources to try to implement a fully functional
library that would handle row-level locking and all of that junk. In
any case, it was just an alternative.
> And the aspect of cross platform does not seem to apply
> much to an Outlook Addin written in C#.
Outlook add-in? No. Generally good practice? Yes, since C# is
cross-platform these days. Getting into the habit of writing software that
is close to cross-platform is beneficial for many reasons, one of the most
important being that it keeps you open-minded about the implementation
details of things.
--- Mike