Number five needs more data...
We have a 3D engine for massive financial visualization, and we use C#
to implement the interface. While we wouldn't recommend C# for vector
processing, it's pretty snappy at what it does. I suspect something's
at cross purposes.
First principles
< begin a set of opinionated statements>
C# is built on an intermediate virtual machine, so it's going to run
slower than native C++ code, and I'm getting sick of the 'but no' from
the MSFT folks on this. First, because they keep arguing with how the
facts came to be, and second, because they can't claim their
virtualization has zero overhead; and even with no overhead the cost is
still higher, because they're not doing straight stack pushes and pops
to pass parameters the way a native app can. C# probably costs an order
of magnitude or more in potential performance. In most cases this
difference is so small as to be pointless, but that doesn't mean it's
not there.
So, are you doing any computationally complex or long-running
operations? We have operations where we need to normalize several
hundred thousand values. That kind of thing goes in C++. Anything that
talks to the user or the net is fine; C# is not shabby, I'm just saying
it's not perfect.
Data transport between DLLs by straight function call is cheap. Real
cheap. It's also unmanaged. So is the Windows kernel, but the
MSFT .NET people seem to keep forgetting that when casting elderberry
wine and hamsters at unmanaged code. The transition between the .NET
managed architecture and the unmanaged architecture is something I'm
pretty sure I don't want to know anything about. Probably _worse_ than
making sausage. In any case, it's very expensive.
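What "straight function call" transport looks like from the C++ side is just a flat C-style export (a sketch; the EXPORT macro stands in for __declspec(dllexport) on Windows, and the function is made up):

```cpp
// Sketch of the cheap path: a flat C-style export that C# can reach
// with a single DllImport declaration. EXPORT is a stand-in for
// __declspec(dllexport) on a Windows build.
#if defined(_WIN32)
  #define EXPORT extern "C" __declspec(dllexport)
#else
  #define EXPORT extern "C"
#endif

// Plain parameters, plain return value: the interop layer pushes the
// arguments and calls, with no COM machinery anywhere in the path.
EXPORT double SumBuffer(const double* values, int count)
{
    double total = 0.0;
    for (int i = 0; i < count; ++i)
        total += values[i];
    return total;
}
```

On the C# side this would be declared with something like [DllImport("engine.dll")]: one managed-to-unmanaged transition per call, however big the buffer you hand across.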
.NET makes it real easy to use COM-based communication. That's a
wonderful thing. If you haven't dealt with COM before: it's very, very
expensive compared to traditional function calls. Really outrageously
expensive if the component you're talking to is in a different
process. And if it's off the machine, hey, you're using the DAL du
jour.
< begin a set of highly opinionated statements>
Here are the rules of thumb I use; checking your situation against them might help:
< this goes in C++>
for (i = 0; i < 100K or so; i++)
    gruesomely complex numerical operation or data diddling
Stay managed as long as possible on the C# side. Technically, we
never do _anything_ unmanaged on the C# side except when we actually
call our DLL functions.
COM is nice, but learning to use arrays is twice as nice. Never ever
make 10 COM calls when 1 call with an array will do.
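The batching rule, sketched against a plain DLL-style interface (the names are made up for illustration; the same arithmetic applies to COM methods, where each call costs far more):

```cpp
#include <cstddef>

// Chatty interface: one boundary crossing per value. Ten values means
// ten managed/unmanaged (or COM) transitions.
extern "C" void AddValue(double* accumulator, double value)
{
    *accumulator += value;
}

// Batched interface: one crossing for the whole array. The transition
// cost is paid once, and the loop runs entirely in native code.
extern "C" void AddValues(double* accumulator, const double* values,
                          std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        *accumulator += values[i];
}
```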
Remember that C# garbage collects. If your C++ and C# are too tightly
bound, you'll see a regular 'anti-heartbeat' as the collector kicks in.
It's nice that C++ can generate IL code. Pointless, but nice. C# is a
better tool for managed code, C++ for unmanaged. For that matter, it's
possible to write C#, generate IL, and then decompile to Java, but
that's also pointless. Funny, but pointless. The bad news: managed
C++ code is not C++ with a wee few nits; it's slower. If it wasn't,
it wouldn't be secure. Don't write managed C++; write managed C#.
Unmanaged debugging is real damn slow compared to either managed C#
debugging or native C++ debugging. Forget about unmanaged debugging if
you turn it on, start off in C#, and the system immediately slows to a
crawl (call it 1 kHz). I haven't worked this all out yet, but I do
know that using a mixture of DLL function calls and COM calls, when one
object of my affection was Excel, caused me to form this rule. It was
interesting for the first few minutes as the debugger tried to deal
with Excel, but I guess futility isn't a concept the debugger
understands.
Anyway, hope this helps some. Check out the profiler you can get off
the MSFT site, make sure you only use unmanaged debugging when you
absolutely have to (it is way cool, that is true), make your COM calls
efficient if you have to make them at all, and don't make the system
keep jumping between managed and unmanaged code.
regards
mmm
Jos Vernon said:
Can you post the set of flags you're compiling with?
Also, can you give us some idea of the kind of MC++ code you're writing
here? That should give us a clue as to what's going on....
Well, my latest incarnation is written as an x86 library which is then linked
with an MC++ shell.
The x86 lib is compiled with /O2 (Maximize Speed), /Ob2 (Any Suitable
Inline Function Expansion), and no global optimization (because it's
incompatible with /clr). Otherwise it's a standard release build.
The .NET shell has the same settings but has global optimizations turned on
(I think this may be being ignored because it's incompatible with /clr).
It also has Favor Fast Code turned on. The linker optimization is standard.
The shell is a shell - not much there. Typically things like this
(STARTFUNCTION and ENDFUNCTION are a simple set of try/catch handlers for
SEH)...
[Category("Settings"), Description("Add a page at a specified location - return the page ID")]
int AddPage(int page)
{
    STARTFUNCTION()
    return mPDF->AddPage(page);
    ENDFUNCTION()
}
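For readers wondering what wrapper macros like STARTFUNCTION/ENDFUNCTION might expand to, here's a guess at the shape (purely hypothetical: the real macros are Jos's, real SEH handling on MSVC would involve __try/__except or _set_se_translator, and the -1 error code and FakePdf stand-in are made up):

```cpp
// Hypothetical sketch of try/catch wrapper macros: open a try block,
// and turn any escaping exception into an error return. Standard C++
// only; the -1 error code is invented for illustration.
#define STARTFUNCTION() try {
#define ENDFUNCTION()   } catch (...) { return -1; }

// Stand-in for the wrapped engine object (the real one is mPDF).
struct FakePdf {
    int AddPage(int page)
    {
        if (page < 0)
            throw page;   // simulate a failure inside the engine
        return page + 1;  // pretend this is the new page ID
    }
};

int AddPage(FakePdf* pdf, int page)
{
    STARTFUNCTION()
    return pdf->AddPage(page);
    ENDFUNCTION()
}
```

The shell function stays a one-liner, and every exported entry point gets the same catch-everything guard at the managed/unmanaged boundary.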
All suggestions gratefully received.
Does anyone know if MS use MC++ for any of the .NET Framework? Or do they
package all their x86 code in standard x86 DLLs and call it that way?
I can't believe it was used for .NET 1.0 because the AppDomain bug meant
that anything written using it wouldn't work under ASP.NET. Has anything
changed with 1.1?
Jos