C# v. C++ Performance

Nigel

I read that C#'s JIT compiler produces very efficient
machine code. However, I've found when performing
extensive numerical calculations that C# is less than a
fourth the speed of C++. I give code examples below. Both
C# and C++ were compiled as release builds with default
optimisation on (C++ has /Ob1 set in addition to default
optimizations). I suspected the relatively poor C#
performance was due to using managed memory, but
adding 'unsafe' to relevant classes and methods makes
little or no difference.

I'd appreciate advice on how I can bring the C#
performance close to that of C++, or an acknowledgement
that for the code samples below, C# is fundamentally
several times slower than (unmanaged) C++.

C++ Code:

double total = 0.0;

for (int rep = 0; rep < 5; rep++)
{
    total /= 1000.0;

    for (long i = 0; i < 100000000; i++)
    {
        total += i/999999.0;
        double disc = total*total + i;
        double root = (total + disc)/(200000.0*(i + 1));
        total -= root;
    }
}

C# Code:

double total = 0.0;

for (int rep = 0; rep < 5; rep++)
{
    total /= 1000.0;

    for (long i = 0; i < 100000000; i++)
    {
        total += i/999999.0;
        double disc = total*total + i;
        double root = (total + disc)/(200000.0*(i + 1));
        total -= root;
    }
}
 
Longs are 32 bit in native code (C/C++) but 64 bit in managed code. Change
"for (long i = 0; i < 100000000; i++)" in your C# code into "for (int
i = ...".
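
For example, only the counter type in the inner loop changes - something like:

for (int i = 0; i < 100000000; i++)
{
    total += i/999999.0;
    double disc = total*total + i;
    double root = (total + disc)/(200000.0*(i + 1));
    total -= root;
}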

Willy.
 
I can't offer much more than a comment at the moment, but in my experience
numerical calculations are precisely the area where C# should be on equal
ground with C++. I've had very good results with C# for intense numerical
computations. If, for some reason, you end up finding that C# won't do the
job (I'd be surprised), you could always put the most performance-sensitive
code in C or C++ (even in an unmanaged DLL) and call it from C#. I've done
that before with very good results (only because some code was already in
C). The performance using P/Invoke was really good.
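
For example, a minimal P/Invoke declaration looks roughly like this (the DLL
and function names are made up purely for illustration):

using System;
using System.Runtime.InteropServices;

class NativeMath
{
    // Hypothetical routine exported from an unmanaged C/C++ DLL.
    [DllImport("mathcore.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern double RunHeavyCalculation(int iterations);

    static void Main()
    {
        // Call into the unmanaged code as if it were an ordinary static method.
        double result = RunHeavyCalculation(100000000);
        Console.WriteLine(result);
    }
}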
HTH
 
Nigel said:
I read that C#'s JIT compiler produces very efficient
machine code.

C# doesn't have a JIT compiler. .NET has a JIT compiler. You need to be
very clear on where the boundaries are.
However, I've found when performing
extensive numerical calculations that C# is less than a
fourth the speed of C++. I give code examples below. Both
C# and C++ were compiled as release builds with default
optimisation on (C++ has /Ob1 set in addition to default
optimizations). I suspected the relatively poor C#
performance was due to using managed memory, but
adding 'unsafe' to relevant classes and methods makes
little or no difference.

I'd appreciate advice on how I can bring the C#
performance close to that of C++, or an acknowledgement
that for the code samples below, C# is fundamentally
several times slower than (unmanaged) C++.

C# itself is a language, not an implementation. However, assuming that
weren't actually a problem... "fundamentally" is a very strange word to
use here - there are certain benchmarks that could no doubt be produced
where C++ ends up slower than C# - how can two languages themselves
each be "fundamentally" slower than the other?

There are certain situations where C++ will be faster than C#, and vice
versa.

Now, let's look at your case in point. Here are the raw numbers from my
laptop - not the ideal testing scenario, but not a bad starting point:

First run:
C++: 27s
C#: 70s

Already this is less than the factor of four you were quoting - indeed,
it's not even three times as slow. Still, let's see what we can do...

There looks to be a lot of casting from long to double here. Changing
the C# code to:

for (int rep = 0; rep < 5; rep++)
{
    total /= 1000.0;

    for (double i = 0; i < 100000000d; i += 1.0d)
    {
        total += i/999999.0;
        double disc = total*total + i;
        double root = (total + disc)/(200000.0*(i + 1.0d));
        total -= root;
    }
}

the execution time for C# goes down to just 18 seconds! (The results
appear to be the same, and I can't see why it wouldn't be a perfectly
valid optimisation to perform - the only possible loss of precision
would be where converting (i+1) to a double would have a different
result to adding 1.0 to the double value of i. I've verified that
doesn't occur anywhere in the range 0-100000000; in actual code you
might want to check whether or not it could be a problem.)
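
A check along those lines might look something like this (just a sketch of
the idea, not necessarily the exact verification code):

for (long i = 0; i < 100000000; i++)
{
    // Does converting (i + 1) to double ever differ from adding 1.0
    // to the double value of i? It shouldn't in this range, since
    // doubles represent integers exactly well beyond 100000000.
    double fromConversion = (double)(i + 1);
    double fromAddition = (double)i + 1.0;

    if (fromConversion != fromAddition)
    {
        System.Console.WriteLine("Mismatch at i = " + i);
    }
}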

Changing the C++ code in a similar way doesn't appear to help it.

So, according to the above, would you acknowledge that "C# is
fundamentally significantly faster than (unmanaged) C++" or would you
acknowledge that single benchmarks aren't necessarily a good indication
of an entire platform?
 
That's a first run, try a second run for a warm start.

I ran the code several times, actually (both versions). For the .NET
version, I only timed the code between the start of the method
executing and the end - the only JIT compilation time required would be
for Console.WriteLine and DateTime.Now. Unlike Java, .NET only JITs
once, so you don't need to run the same method many times in order to
get the code to run as fast as possible - aside from the time spent on
JIT compilation in the first place, it's running as fast as it's going
to go the first time it runs.
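
For reference, the timing harness looked roughly like this (a sketch, not
the exact code - the point is that the clock only starts once the method is
already JITted):

using System;

class Benchmark
{
    static void Main()
    {
        // By the time this line runs, Main has already been JIT compiled,
        // so JIT time is excluded from the measurement.
        DateTime start = DateTime.Now;

        double total = 0.0;
        for (int rep = 0; rep < 5; rep++)
        {
            total /= 1000.0;

            for (double i = 0; i < 100000000d; i += 1.0d)
            {
                total += i/999999.0;
                double disc = total*total + i;
                double root = (total + disc)/(200000.0*(i + 1.0d));
                total -= root;
            }
        }

        DateTime end = DateTime.Now;
        Console.WriteLine(total);       // use the result so the loop can't be optimised away
        Console.WriteLine(end - start); // elapsed time
    }
}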
 
Jon,

The main problem with the OP's code is the for loop using longs, which are as
you know 64 bit in .NET and 32 bit in native code.
Quite a difference if you are adding/comparing a 32 bit vs. a 64 bit entity
on a 32 bit CPU.... (and this 100,000,000 times)

Using for(int i=0; i<...
Both C++ and C# took the same time to run to completion.

Willy.
 
Willy Denoyette said:
The main problem with the OP's code is the for loop using longs, which are as
you know 64 bit in .NET and 32 bit in native code.
Quite a difference if you are adding/comparing a 32 bit vs. a 64 bit entity
on a 32 bit CPU.... (and this 100,000,000 times)

Yes - I even thought about that as I was originally responding, but had
a brain fart and didn't end up picking it up properly.
Using for(int i=0; i<...
Both C++ and C# took the same time to run to completion.

Actually, when I ran it, C# had the edge using ints on my box - 20s vs
27s. However, using a double for the loop counter is even faster (18s).

Interestingly, I thought I'd get rid of the duplicate (i+1) by having
another variable which would act as the next value of i and the amount
to multiply by 200000 - but that decreased performance. I haven't
managed to increase performance beyond that 18s mark.
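
That attempt looked roughly like this (a reconstruction rather than the
exact code):

double next = 1.0;                      // tracks i + 1 so it's only computed once
for (double i = 0; i < 100000000d; i += 1.0d)
{
    total += i/999999.0;
    double disc = total*total + i;
    double root = (total + disc)/(200000.0*next);
    total -= root;
    next += 1.0;                        // keep next equal to i + 1 for the next iteration
}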

I've just tried with /O2 on the C++ version, and that actually runs in
about 15s, so there's still room for improvement in the CLR, but it's
still very impressive, IMO.
 
So, according to the above, would you acknowledge that "C# is
fundamentally significantly faster than (unmanaged) C++" or would you
acknowledge that single benchmarks aren't necessarily a good indication
of an entire platform?

Actually, C++ is infinitely faster than C# because VC++ 7.1 with the
/Ox switch optimized away the entire loop!

After inserting a "return (int) total;" statement at the end so that
the loop was actually executed, optimized builds for both languages
performed at about the same speed on my system (~12 seconds), with C++
coming out slightly ahead (let's say 11.5 seconds). The C# version
was of course fixed to use an int counter instead of a long counter.

That was one remarkably poor "benchmark" that the OP used. Isn't
there some "Benchmarks for Dummies" book that could be recommended in
such cases? If you have lots of time you might want to add a
benchmarking section to your page, seeing how you already took care of
the floating-point and Unicode questions. ;-)
 
How does the JIT handle dynamic code?

What if I change a class during runtime? (It can be done.)

Self-modifying code, for example GAs.
 
a Double is faster? Why.


Jon Skeet said:
Yes - I even thought about that as I was originally responding, but had
a brain fart and didn't end up picking it up properly.


Actually, when I ran it, C# had the edge using ints on my box - 20s vs
27s. However, using a double for the loop counter is even faster (18s).

Interestingly, I thought I'd get rid of the duplicate (i+1) by having
another variable which would act as the next value of i and the amount
to multiply by 200000 - but that decreased performance. I haven't
managed to increase performance beyond that 18s mark.

I've just tried with /O2 on the C++ version, and that actually runs in
about 15s, so there's still room for improvement in the CLR, but it's
still very impressive, IMO.
 
How does the JIT handle dynamic code?

What if I change a class during runtime? (It can be done.)

I don't believe a class *can* be changed during runtime in .NET - at
least not yet. VB.NET will have "edit and continue" functionality in
the next version of VS.NET, which requires that kind of facility, but I
don't believe it's available yet.
Self modifying code for example, GAs.

Could you give an example of that occurring in .NET? A class could
create a *new* class, of course, but that's a different matter.
 
a Double is faster? Why.

Adding 1 to a double is very quick, but apparently converting from int
or long to double isn't quite as quick. Or at least, the two additions
and one double comparison required in the modified code are faster than
the three int->double conversions, two integer adds and one integer
comparison required in the original (or modified from long->int) code.
 
Can't you build a type at runtime and
get it via reflection?

I'm not talking about user-generated, I'm talking programmatically.

Edit and continue has jack to do with it.
 
Can't you build a type at runtime and get it via reflection?

Yes. That's *creating* a type though, not changing an existing one.
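
A minimal sketch of what that creation might look like with Reflection.Emit
(the names here are purely for illustration):

using System;
using System.Reflection;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Define a dynamic assembly, module and type entirely at runtime.
        AssemblyName name = new AssemblyName("DynamicStuff");
        AssemblyBuilder asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
            name, AssemblyBuilderAccess.Run);
        ModuleBuilder module = asm.DefineDynamicModule("MainModule");
        TypeBuilder builder = module.DefineType("GeneratedType",
            TypeAttributes.Public);

        // Bake the new type and inspect it via reflection.
        Type generated = builder.CreateType();
        Console.WriteLine(generated.FullName);
    }
}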
I'm not talking about user-generated, I'm talking programmatically.

Sure, but that's not the same as changing a class during runtime.
Edit and continue has jack to do with it.

Edit and continue has *everything* to do with it - it requires that a
type which has already been loaded and JITted can then be modified and
the code reJITted. *That* is changing a class at runtime.
 