.NET IL and optimization capabilities?!

  • Thread starter: Leon_Amirreza

Leon_Amirreza

I have written a program in C# with .NET 3.5 SP1 and measured its running
time in microseconds. Now I want to estimate the time it would take to run
the same algorithm in C on another processor/microcontroller. SO I need to
know which features of the Intel processor and optimization mechanisms ARE
USED by .NET, and which Intel features are NOT USED by .NET, to estimate the
running time more accurately.

1- Any links to .NET internals or source code are appreciated; or books
(free or commercial)?

Usually algorithms take much more effort to write in C than in C#!
 
I cannot imagine how you would get a meaningful number without actually
recoding, recompiling and testing on the target machine. To me, it's as if
you have driven a car from Chicago to Cleveland, and from that trip you want
to estimate how long it will take to ride a motorcycle from Detroit to
Louisville.
 
Leon_Amirreza said:
I have written a program in C# with .NET 3.5 SP1 and measured its running
time in microseconds. Now I want to estimate the time it would take to run
the same algorithm in C on another processor/microcontroller.

Hopeless, especially if you go down to the microsecond level. It depends on
how you allocate memory, how the C# compiler optimizes, how the CLR
optimizes, how your C compiler optimizes and the specifics of your target
hardware in terms of caching and pipelining. Anything you don't benchmark is
a lie. Anything you do benchmark is probably a half-truth, but at least
it'll be better.
Leon_Amirreza said:
Usually algorithms take much more effort to write in C than in C#!

But if you don't have a .NET runtime on your target platform, the only point
to writing the algorithm in C# first is to get it correct (and possibly
optimized as far as asymptotic running time goes). There's no point to
benchmarking code you ultimately won't be using.
 
Sorry to answer in a new thread.

1- I can have a worst case in C#, can't I? Because the target is less
capable than an Intel Core Duo, and because of scheduling overhead and .NET
overhead. And Jeroen, you are right that it depends on these things; that is
why I need this insider info (I have the exact target platform info and
capabilities, but not .NET's).

And I have run my algorithm many times, which gives a statistical average
running time with some tolerance. That's all; simple and effective. I have
done this kind of benchmarking before and it wasn't far from reality. (I am
not interested in the exact microsecond running time, but in the average.)
By "microsecond" I just meant that I got that resolution when measuring time
with the Windows performance counter, not Environment.TickCount. That's all.

I have run this app more than 10,000 times and it gives a good average for
the worst case.

Thank you for your time, but you STILL haven't answered my question!
 
The target processor is much like an Intel Pentium without SSE2 and other
special features, so they are very much alike.
 
If the worst case in C# is satisfactory, there is at least hope that
actually buying and building the new target platform from scratch may give
satisfactory results; but if C# fails to give a good running time, you would
not know what will happen when you run your program on the target machine.
(See? The cheapest way is to first test whether the algorithm runs in a good
time in C#, and then test it on the real platform.)
 
Leon_Amirreza said:
1- I can have a worst case in C#, can't I? Because the target is less
capable than an Intel Core Duo, and because of scheduling overhead and .NET
overhead. And Jeroen, you are right that it depends on these things; that is
why I need this insider info (I have the exact target platform info and
capabilities, but not .NET's).

The "exact info" is an implementation detail. There are no books on it
because it's a moving target. The only way you can find out is to compile
programs and look at the assembler output. To get "exact info", you'd need
to have the source of the compiler and the runtime and a lot of free time
understanding them. I'm pretty sure both the compiler and the runtime are
closed-source, though a good part of the CLR and libraries are available in
source form.

There is info on how the garbage collector works, for example, but only in
broad strokes. It will not allow you to draw conclusions about individual
programs; you can only use it to explain performance problems when they crop
up. You will not get details like "allocating X objects will take Y
seconds, and if you did 'the same thing' in C it would take Z seconds" from
any book or site. The only way to find that out is to try it.
And I have run my algorithm many times, which gives a statistical average
running time with some tolerance. That's all; simple and effective. I have
done this kind of benchmarking before and it wasn't far from reality. (I am
not interested in the exact microsecond running time, but in the average.)
By "microsecond" I just meant that I got that resolution when measuring time
with the Windows performance counter, not Environment.TickCount. That's all.
Yes, you certainly can get a *worst* case. Unfortunately, your worst case
timings will say very little about how fast the code will run on your actual
platform once it's written in a different language and compiled by a
different compiler to a different architecture. It could be slower, it could
be faster. A very simple difference like cache size could already have a big
impact.

You can only establish that (for example) your code runs in linear time.
This is a property of the abstract *algorithm*, independent of the language
it's written in, and while it's certainly a useful thing to know, it says
little about how any particular implementation of that algorithm in *code*
will actually run.
Thank you for your time, but you STILL haven't answered my question!
I don't think your question is worth answering. If you want to know how fast
your code will run, write it, transfer it to the target platform, then
measure it. Unless your hardware does not yet exist, doing anything else is
a waste of time. You can do algorithmic analysis and that's useful, but
forget about timing.
 
First, you got my question both right and wrong: I am not asking for ways to
estimate my algorithm, but for insider references, which you already said
don't exist; so I assume my answer is simply "None". If it's not worth
answering, then simply don't answer it. Thank you.
 
Through experience, anyone can tell that, for example, the first iteration
of the program took longer because of JIT compilation. Or you can guess that
the GC kicked in when your algorithm makes no progress but there is CPU
activity and Task Manager shows high memory allocation. (YES, guessing is
not exact estimation, but it gives you a clue.) That is how you can tell
why, for example, the 10,001st iteration of the loop took much longer than
the others, and so on.

So it is simple to observe the effects of the OS or the .NET platform on
your app. You can also detect paging simply from HDD activity, and from Task
Manager showing that you are allocating more memory than is physically
present in the system. So observation gives you a clue (of course, not an
exact estimation).

Peter Duniho said:
[...]
Thank you for your time, but you STILL haven't answered my question!

You haven't received an answer to the specific question because, I
believe, there isn't one. That is, the references you're looking for
simply don't exist. You can't extrapolate the performance of your
algorithm, currently coded in C# and running on one hardware platform, to
a theoretical implementation in C running on a different hardware
platform. There's just no reliable way to do that.

If you know that the tested hardware platform is at a minimum not superior
to the target hardware platform, then you _might_ be able to, with some
small degree of confidence, assume that the tested scenario is in fact
your "worst case" scenario and you would get better performance in C on
the other platform. But in truth, even that really depends on too many
factors for you to be sure.

In particular, an algorithm that relies heavily on a library
implementation may in fact run better on .NET because a lot of .NET is
coded with the latest technologies and techniques providing the best
performance. This seems especially likely to be applicable if you are
saying that "usually algorithms take much more effort to be written in C
than c#", since that statement is actually only true if you're making
heavy use of library functions that aren't available in your C
environment.

Pete
 
Leon_Amirreza said:
First, you got my question both right and wrong: I am not asking for ways to
estimate my algorithm, but for insider references, which you already said
don't exist; so I assume my answer is simply "None". If it's not worth
answering, then simply don't answer it. Thank you.
What I meant was "the question you're trying to answer isn't worth answering
and here's why", not "your question doesn't deserve an answer".

As an aside, please consider using more punctuation. I have to read
everything you write twice just to get the sentences right.
 
Most of my algorithm involves calculation that relies on the FPU and
pipelining, and most of my extensive programming work is gathering,
processing, and presenting output (which has no effect on my algorithm's
running time). So I decided not to write my algorithm in C, in order not to
introduce interop overhead (because I have no clue how much overhead that
would be). So both the algorithm and the app that presents the output are
written in C#.

When my algorithm runs in 6 microseconds, I can tell that the first
iteration, which took 0.01 seconds, is slow due to JIT compilation, and so
on. My algorithm relies only on the Math class, not on any other .NET
Framework library, and because most of that class's implementation relies on
the Intel FPU, knowing something about this class and the FPU architecture
is enough to give you a clue how it will perform on another FP-capable
processor whose exact architecture info, again, you have!

95% of the time my algorithm runs in 4 to 8 microseconds (see the picture?).

Is there any other .NET platform that is open source (like Mono) that I can
use? Have you ever used an open-source .NET port? Can I run my program on
one of them and do the timing, for example on Linux with Mono!?
Peter Duniho said:
[...]
 
Oh, I am sorry.
Jeroen Mostert said:
What I meant was "the question you're trying to answer isn't worth
answering and here's why", not "your question doesn't deserve an answer".

As an aside, please consider using more punctuation. I have to read
everything you write twice just to get the sentences right.
 