If not .Net then what?

  • Thread starter: jim
What would be a good test? I am happy to try it (if only for my own
satisfaction) but don't want to spend hours putting something together.
Something like sorting a large string array and calculating factorials?

Calculating factorials is fairly useless, and sorting tends to be done
by library calls rather than in your own code.

The suggestions by Stephen Howe are fairly reasonable though - although
I suspect they'll be IO limited rather than CPU limited. Mind you,
that's often likely to be the case in real code as well...

I'm happy to do the C# - it would be good to see VB6, C and C++
solutions. It would also be nice to look at the complexity/readability
of the solutions afterwards, as well as the performance...
 
Andre Kaufmann said:
O.K. - in another post you stated that you don't want to port your code
to .NET, and you said that your code runs 60 times slower under .NET?

Since you say you don't believe me, what is the point in my saying anything
to you? Think about it.
 
<snipped>

You can't talk to King Little Richard as he knows it all. You should cut
that fossil loose, and let him try to hold his court in comp.programming. :)
 
Richard said:
Andre Kaufmann said:


Since you say you don't believe me, what is the point in my saying anything
to you? Think about it.

Because factual evidence stands alone, and you haven't provided any.

-- Barry
 
Herfried K. Wagner [MVP] wrote:

Disclaimer: I work for CodeGear, a division of Borland. I do not and
(legally) cannot speak for the company. I speak in a personal capacity
only.
Delphi is now developed by a company other than Borland.

Delphi is a product developed by CodeGear, a subsidiary of Borland.
I am not sure
about support for older versions of Delphi like Delphi 7.

We work hard to ensure compatibility between releases, such that people
upgrading to newer versions should simply face a recompile, possibly
with a few adjustments, rather than a large amount of work. If code
compiled using very old releases doesn't work as well as possible with
new OSes etc., we hope that the code can be ported to a newer release
with ease.

As Delphi applications can be, and often are, built as standalone
executables, I'd wager that they are often more resilient to changing
OSes than systems which link in lots of shared code.
Applications
developed using older versions may not be guaranteed to work properly on
future versions of Windows. That's what will finally kill VB6 too.

Delphi is an ongoing product. It's not like VB6, stopped dead in its
tracks; Delphi continues to be enhanced and developed.

-- Barry
 
Barry Kelly said:
Because factual evidence stands alone, and you haven't provided any.

Not so. Everything that I have reported in this thread is factual. Whether
I have made correct *deductions* from the facts I have reported is of
course a matter of opinion, and I accept the possibility that I have made
incorrect deductions.

Nevertheless, it *is* the case that, in 2003 (I think), I and a colleague
wrote, in C++, a working system for analysing source code to determine
dependency relationships, and it *is* the case that we moved the code to
.Net and found that it ran sixty times slower on that platform. There may
be many reasons for that, reasons which we couldn't identify at the time,
and I accept that wholeheartedly.

But when Andre Kaufmann says that he doesn't believe that we did this, he
is basically accusing me of lying. If that is what he thinks, well, that's
up to him, but it makes future discussion with him rather pointless.
 
Richard said:
Barry Kelly said:
[...]

But when Andre Kaufmann says that he doesn't believe that we did this, he
is basically accusing me of lying.

No - I don't accuse you of lying, but perhaps of misinterpreting - read below.
If that is what he thinks, well, that's
up to him, but it makes future discussion with him rather pointless.


I just can't believe you, because you haven't told us how you did it -
moved the C++ code to .NET. You can't have converted it, since you posted
that you don't want to convert code to .NET.

We don't know which compiler you used and which language: C++ / C# /
Delphi / C++/CLI ?

You haven't answered this question in several posts where I asked for
this information to track down the problem.
What would you conclude if someone makes statements and doesn't answer
simple questions (please continue reading before answering this one)?


Currently my conclusion, derived from the posts, is:
----------------------------------------------------

You compiled your code with Visual C++ .NET in the debug configuration,
which is ... slow compared to a release build. This is a rather common
"problem" when switching compilers from C++Builder to Visual Studio,
because the old C++Builder didn't support multiple configurations and
started by default with all optimizations turned on.

Even if you turn all optimizations on in Visual Studio's default debug
configuration of a C++ project, the debug heap is still active and slows
your application down heavily. Since you commonly use C++Builder rather
than Visual Studio, I came to the conclusion above.


None of this is meant to be offensive to you at all.

Andre
 
Richard Heathfield said:
<snipped - Richard's post is quoted in full above>

Now that we have an application, there are a few questions that you still
haven't answered.

1) What version of dotNET? I suspect it's 1.1 with VS 2003, given your time
frame. Version 2.0 is significantly faster than 1.1 for most applications.
2) What dotNET language did you port your application to?
3) Did you use automated porting tools or did you take the time to actually
rework the application to take advantage of the features in dotNET? I
suspect the answer to this is that you didn't spend the time to use the
framework the way it was designed.

Mike Ober.
 
Andre Kaufmann said:
Richard said:
Barry Kelly said:
[...]

But when Andre Kaufmann says that he doesn't believe that we did this,
he is basically accusing me of lying.

No - I don't accuse you of lying, but perhaps of misinterpreting - read
below.

Then I have certainly misinterpreted an earlier remark that you made, for
which I apologise.
I just can't believe you, because you haven't told us how you did it -
moved the C++ code to .NET. You can't have converted it, since you posted
that you don't want to convert code to .NET.

We wrote the original system in ISO C++ (plus Berkeley sockets).
We don't know which compiler you used and which language: C++ / C# /
Delphi / C++/CLI ?

We compiled the original code using several compilers - Intel, Visual
Studio 7, and gcc - and got comparable results from all of them - around
the 6 to 10 second mark for our test data set, with Visual C++ giving us 8
seconds IIRC. The translation to .Net involved converting to "managed"
C++.
You haven't answered this question in several posts where I asked
for this information to track down the problem.
What would you conclude if someone makes statements and doesn't answer
simple questions (please continue reading before answering this one)?

I would conclude that the person concerned isn't all that interested in a
justification, no matter how spirited, of a development environment that
they themselves rejected some years ago.
Currently my conclusion, derived from the posts, is:

No. I may not be the brightest star in the firmament, but even I know to
switch to release mode when doing performance analysis - and my
then-colleague *is* one of the brightest stars in the firmament; even if
I'd been dense enough to try a performance test in debug mode, she'd soon
have put me right.

You seem to think that, because I sing C++ Builder's praises, I am a heavy
C++ Builder user. I'm not. In fact, I don't think I've ever used it in the
Real World, although I use it a fair bit at home. Most of my Real World
experience has been on Microsoft, GNU, and big iron compilers. I've used
(almost) every version of Visual C++ ever published, as well as Microsoft
C5.0, 5.1 and 6.0 (which pre-dated Visual C++). I am perfectly well aware
of the difference between debug and release.
 
Bart C said:
I tried this little recursive program in C#, and something similar in
straight C. The C# took nearly twice as long, but that was probably in
debug mode. So I was pleasantly surprised.

The funny thing about the Fibonacci sequence is that it's always shown
as the example of recursion, but it's one of the easiest recursions to
"undo" into a *far* more efficient form.

I'm sure most people reading this could give an alternative in their
sleep, but here's an example:

static int Fib(int n)
{
    int previous = 0;
    int current = 1;

    for (int step = 0; step < n; step++)
    {
        int next = previous + current;
        previous = current;
        current = next;
    }
    return previous;
}

(Note that this is one step off from the code you gave - I was checking
against the sample data from Wikipedia. It's trivial to change, of course.
If you don't care about the n=0 case, it's slightly neater to return
current and make step start at 1, but hey...)
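
As a quick sanity check, here is a minimal harness (a sketch, not part of
the original post) that prints the first ten values and matches the
Wikipedia sample data:

using System;

class FibCheck
{
    static int Fib(int n)
    {
        int previous = 0;
        int current = 1;

        for (int step = 0; step < n; step++)
        {
            int next = previous + current;
            previous = current;
            current = next;
        }
        return previous;
    }

    static void Main()
    {
        // Prints: 0 1 1 2 3 5 8 13 21 34
        for (int n = 0; n < 10; n++)
            Console.Write(Fib(n) + " ");
        Console.WriteLine();
    }
}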
Unless I've missed the point about what .Net is, I'm assuming it's a bunch
of MS languages all producing the same IL and a huge runtime, plus some
over-the-top IDEs.

Whether VS is over-the-top or not is a matter of opinion, of course. I
personally prefer Eclipse's Java support to VS's C# support, but never
mind. There are certainly a lot of designers, which I'm not generally a
fan of.

Other than that:
o Lots of languages compiling to IL - yes.
o The languages all being MS - no (see http://boo.codehaus.org for
example).
o Large runtime - yes.

I wouldn't say that's the "point" of .NET though. That's a question I
don't have time to address right now, unfortunately.
 
o The languages all being MS - no (see http://boo.codehaus.org for example)
<aside>
I remember looking at boo, what feels like an age ago, and you just
reminded me of it... re-reading the manifesto, it is interesting how
much of this is now addressed in C# 2 (generators/yield), C# 3
(functional approach, var, etc), and MSBuild (compilation pipeline).

OK, some others aren't there (duck typing [although it has always been
in VB], syntactic macros [unlikely to appear]), and the return-type
inference isn't really there (except for lambdas) - but there is
definitely some convergence...
</aside>

Marc
 
Richard said:
Andre Kaufmann said:
[...]
seconds IIRC. The translation to .Net involved converting to "managed"
C++.

O.K. thank you for the information and please apologize the parts of my
posts which might have been unintentionally offensive.

To be honest, I only started playing around with managed C++ after the old
managed C++ style supported by VS7 had been deprecated and C++/CLI had
been introduced.
So I only have performance experience with the newer C++/CLI syntax and
don't know whether the managed C++ syntax / implementation had
"performance problems" of its own. It definitely had some readability
problems, regarding the ugly double-underscore keywords, although those
were more standard-compliant.

60 times is a huge decrease, and from the information I currently have I
would suspect a string problem: strings in .NET are immutable, and
therefore every change to a string creates a new string object, which can
be quite inefficient.
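
To make the string point concrete, here is a hedged C# sketch (an
illustration, not from the original posts): naive concatenation allocates
a fresh string on every iteration, while System.Text.StringBuilder
appends into a mutable buffer.

using System.Text;

class StringCost
{
    // Each += allocates a brand-new string and copies the old contents,
    // so building an n-character string this way is O(n^2) overall.
    static string Concatenate(int n)
    {
        string s = "";
        for (int i = 0; i < n; i++)
            s += "x";
        return s;
    }

    // StringBuilder appends into a growable internal buffer,
    // so the same job is O(n).
    static string Build(int n)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++)
            sb.Append('x');
        return sb.ToString();
    }
}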

But it's also possible that managed C++ had other performance problems.


To show that a performance decrease is not always the consequence of
switching to managed C++, here is a little illustrative example I'm
currently investigating:

-------------------------------------------------
Native C++ example:

#include <boost/bind.hpp>
#include <boost/function.hpp>

class T { public: void cb(int i) {} };

void perf()
{
    T t;
    // Bind the member function cb on instance t; _1 forwards the int argument.
    boost::function<void (int)> foo = boost::bind<void>(&T::cb, t, _1);
    foo(100);
}

-------------------------------------------------

Managed C++/CLI example:

ref class T { public: void cb(int i) {} };
delegate void CBD(int i);

void perf()
{
    T^ test = gcnew T();
    // Bind the member function cb on the managed instance to a delegate.
    CBD^ foo = gcnew CBD(test, &T::cb);
    foo(100);
}

-------------------------------------------------


The native function call foo(100) needs 30 assembly instructions and 3
branches/jumps; the managed one needs 5 assembly instructions and 3
branches/jumps.
But it's too early to derive anything from that - I'm currently searching
for the problem / mistake I've made, if there is one.
And I'm not an expert on the boost::bind/function templates, so maybe
there's a more efficient way to write it, which would reduce the number
of assembly instructions too.
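
For comparison, the same callback shape in C# (a sketch, not part of
Andre's measurements - the class and delegate names merely mirror his
example): the delegate invocation compiles to a single callvirt on the
delegate's Invoke method in IL.

using System;

class T
{
    public void Cb(int i) { }
}

delegate void CBD(int i);

class DelegateDemo
{
    static void Main()
    {
        T t = new T();
        // Bind the instance method Cb on t to a delegate of type CBD.
        CBD foo = new CBD(t.Cb);
        // Compiles to a single 'callvirt CBD::Invoke(int32)' in IL.
        foo(100);
    }
}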

[...]
C5.0, 5.1 and 6.0 (which pre-dated Visual C++). I am perfectly well aware
of the difference between debug and release.

Then please apologize, wasn't my intention to be offensive. I've got
bitten by the default configuration too, activated all optimizations,
but forgot to remove/change the damn debug macros.

Andre
 
Andre said:
O.K. thank you for the information and please apologize the parts of my
posts which might have been unintentionally offensive.

Ehm, damn could be misinterpreted too ;-).
Shouldn't mean that I had any intention to be offensive only that some
posts might be.

Should read:

Please apologize all parts of my posts which might have been offensive.
Wasn't my intention to be offensive.

Andre
 
Ehm, damn could be misinterpreted too ;-).
Shouldn't mean that I had any intention to be offensive only that some
posts might be.

Should read:

Please apologize all parts of my posts which might have been offensive.
Wasn't my intention to be offensive.

This political correctness bull is out of hand when you must post a
correction to your apology because you fear that the original apology
might have been offensive.
 
Andre Kaufmann said:
<snipped - Andre's post is quoted in full above>


OK - now we're getting somewhere. I have heard that the C++ in VS 2003 had
some major issues with performance, especially when mixing C++ code with
dotNET libraries. I do know that Visual C++ 7.0 is slower than VC++ 6.0.
In addition, you need to realize that Managed C++ isn't a true dotNET
language. Rather, it's a hybrid. Thus you can't really make the claim that
dotNET apps are slower than C++.

Mike.
 
Michael said:
[...]

OK - now we're getting somewhere. I have heard that the C++ in VS 2003
had some major issues with performance, especially when mixing C++ code

Hm, I was aware that it (managed C++) had a serious problem with mixing
managed and native code, which could lead to a deadlock when a native DLL
was loaded dynamically - the loader lock.
I wasn't aware of flaws that could explain a speed decrease of 60 times.
But as I wrote, I don't have that much experience with the old
implementation.
with dotNET libraries. I do know that Visual C++ 7.0 is slower than
VC++ 6.0.

The IDE, yes; code generation, definitely not - at least not that
significantly.

By the way, the upcoming V10 (not VS2008!) already has the nickname
"the new 6" ;-)
In addition, you need to realize that Managed C++ isn't a true
dotNET language. Rather, it's a hybrid. Thus you can't really make the
claim that dotNET apps are slower than C++.

The C++ compiler should emit the most optimized IL code of all .NET
compilers available from MS. At least with C++/CLI.
With managed C++ you may be right - I don't know for sure.
There are some pitfalls when crossing the boundary between managed and
native code, but such a large decrease would only be explainable if the
code crossed that boundary constantly, and I would assume that the core
function with the big performance drop didn't do that.

Andre
 
Michael D. Ober said:

OK - now we're getting somewhere. I have heard that the C++ in VS 2003
had some major issues with performance, especially when mixing C++ code
with
dotNET libraries. I do know that Visual C++ 7.0 is slower than VC++ 6.0.

So in other words, you recognise that the .Net implementation I was using
had serious performance issues, which is what I said all along.
In addition, you need to realize that Managed C++ isn't a true dotNET
language. Rather, it's a hybrid. Thus you can't really make the claim
that dotNET apps are slower than C++.

It said .Net on the box. Are you telling me Microsoft were lying to my
then-client by claiming it was .Net when really it wasn't .Net? Would you
advise them to sue Microsoft?

In any case, it is not my claim that .Net apps are slower than C++, because
C++ doesn't have a speed. It's a language, not an implementation. The
speed of the "vanilla" C++ version varied depending on whether the
executable image had been built using Intel, gcc, or Visual Studio.

.Net, however, /is/ an implementation, so it does make sense to talk about
the speed of .Net applications - and in my experience it is unacceptably
slow. If you have a different experience, I'm pleased for you.

My original response in this thread was to someone who said "I've only ever
heard one objection to .Net...", and he went on to explain why he thought
that the objection was invalid. My only objective in this thread was to
broaden his experience, by giving him two more objections. The fact that
some people here don't agree with those objections does not put them into
a different class from the original objection which, after all, he himself
did not agree with. But never again will he be able to say "I've only ever
heard one objection to .Net..." - because he has now heard at least three.
 
Andre Kaufmann said:
The C++ compiler should emit the most optimized IL code of all .NET
compilers available from MS. At least with C++/CLI.

Well, optimization is primarily performed by the JIT compiler nowadays and
not by the compiler used to emit the IL. Optimization on JIT-level can be
more powerful because it can keep the machine's characteristics in mind.
Inlining, for example, is typically performed by the JIT compiler in .NET.
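
To illustrate the division of labour (a hedged C# sketch, not from the
post above): the source compiler emits an ordinary call in the IL, and
inlining is the JIT's decision; at the IL level you can only hint, e.g.
forbid inlining with MethodImplOptions.NoInlining.

using System;
using System.Runtime.CompilerServices;

class InliningDemo
{
    // The C# compiler emits a plain call to this method;
    // the JIT compiler is free to inline the body at the call site.
    static int Twice(int x)
    {
        return x * 2;
    }

    // An IL-level hint that forbids the JIT from inlining this method.
    // The hint only restricts the JIT - the decision to inline is still
    // made at JIT time, not by the C# compiler.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int TwiceNoInline(int x)
    {
        return x * 2;
    }

    static void Main()
    {
        Console.WriteLine(Twice(21));         // may be inlined
        Console.WriteLine(TwiceNoInline(21)); // never inlined
    }
}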
 
Herfried said:
[...]
Well, optimization is primarily performed by the JIT compiler nowadays
and not by the compiler used to emit the IL. Optimization on JIT-level
can be more powerful because it can keep the machine's characteristics
in mind. Inlining, for example, is typically performed by the JIT
compiler in .NET.

Sure, agreed.

But the JIT compiler has to be fast, so that startup time doesn't suffer
too much. And IMHO it can currently do only local optimizations.
High-level optimizations, like removing entire code paths or function
calls, are still the task of the compiler itself.

That doesn't mean the effect is generally significant or measurable, only
that the C++ compiler can sometimes optimize the code better and emit
somewhat more efficient IL.

This might no longer be the case in the future, if all compilers share
the same back end - which, if I understood the Phoenix project correctly,
is the plan.

Andre
 
Andre,

Andre Kaufmann said:
Well, optimization is primarily performed by the JIT compiler nowadays
and not by the compiler used to emit the IL. Optimization on JIT-level
can be more powerful because it can keep the machine's characteristics in
mind. Inlining, for example, is typically performed by the JIT compiler
in .NET.

[...]

But the JIT compiler has to be fast, so that startup time doesn't suffer
too much.

That's true. In some situations, creating a native image of the IL image
may be a solution. This can be achieved using the "ngen.exe" tool which
comes with the .NET Framework. The resulting native image is placed in the
NIC (Native Image Cache), which is part of the GAC (Global Assembly
Cache). In addition, the JIT compiler caches its output, which means that
the native versions of methods are kept and reused. This reduces the
overhead when a method is called more than once.
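
One hedged way to observe the JIT cost and the caching (a sketch assuming
.NET 2.0's System.Diagnostics.Stopwatch): time the first call to a method
against a later call; the first includes JIT compilation of the method
body, the second reuses the cached native code.

using System;
using System.Diagnostics;

class JitWarmup
{
    // A method that has not been called - and therefore not yet
    // JIT-compiled - when Main starts.
    static long Work(int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += i;
        return sum;
    }

    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        Work(1000);                                  // includes JIT compilation
        sw.Stop();
        Console.WriteLine("first call:  {0} ticks", sw.ElapsedTicks);

        sw = Stopwatch.StartNew();
        Work(1000);                                  // reuses the native code
        sw.Stop();
        Console.WriteLine("second call: {0} ticks", sw.ElapsedTicks);
    }
}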
And IMHO it currently can do only local optimizations. High level
optimizations, like removing complete code parts or function calls, are
still the task of the compiler itself.

Sure, the compiler can do some optimizations. However, the potential
optimizations a JIT compiler can perform go beyond those of the language
compiler. Nevertheless, some compilers provide different optimizations.
The VB compiler, for example, supports disabling certain checks, which
results in fewer IL instructions but also reduces safety.
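
The closest C# analogue to that VB switch (a hedged sketch, not from the
original post) is checked versus unchecked arithmetic: unchecked compiles
to a plain add in the IL, while checked compiles to add.ovf and raises
OverflowException on overflow.

using System;

class OverflowChecks
{
    static void Main()
    {
        int big = int.MaxValue;

        // unchecked (C#'s default for non-constant expressions):
        // IL 'add', silently wraps around.
        int wrapped = unchecked(big + 1);
        Console.WriteLine(wrapped);              // -2147483648

        // checked: IL 'add.ovf', throws on overflow.
        try
        {
            int boom = checked(big + 1);
            Console.WriteLine(boom);
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}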

I believe that the whole performance debate misses the point:

.NET is a platform that is most suitable for writing user mode applications
in which performance and real-time behavior are not really important. On
the other hand, C and C++ are optimized for writing system applications
(kernel mode, heavy interaction with hardware, etc.).

Although C++ can be used to write applications similar to those commonly
written in VB/C#, the latter programming languages are more suitable for
this purpose (library support, designer support, IDE support, abstraction).
 