12.34f vs (float) 12.34

Jon Shemitz
Is there a difference between a constant like "12.34f" and "(float)
12.34"?

In principle, at least, the latter is a double constant being cast to
a float; while both generate actual constants, does the latter
ACTUALLY do a conversion at compile time? That is, are there constants
where

<constant>f != (float) <constant>

?
 
No. The compiler will optimize the cast away assuming it can. It's probably
generating something like this:

private static void Blah()
.locals (float V_0, float V_1)
L_0000: ldc.r4 1.23
L_0001: stloc.0
L_0002: ldc.r4 4.56
L_0003: stloc.1

from this:

static void Blah()
{
float f1 = 1.23F;
float f2 = (float) 4.56;
}
 
Klaus H. Probst said:
No. The compiler will optimize the cast away assuming it can. It's probably
generating something like this:

private static void Blah()
.locals (float V_0, float V_1)
L_0000: ldc.r4 1.23
L_0001: stloc.0
L_0002: ldc.r4 4.56
L_0003: stloc.1

from this:

static void Blah()
{
float f1 = 1.23F;
float f2 = (float) 4.56;
}

Indeed it is. The question, though, was HOW it's optimizing the cast
away, whether code like "1.23f" and "(float) 1.23" might ever generate
slightly different constants.
 
The answers to those two questions are:

HOW: it's called constant folding, performed by the C# compiler (there's
loads of information out there), and
WHETHER: in theory I think they could in your example (but that's just my
interpretation); section 11.1.3 of the CLI docs states:

"Storage locations for floating point numbers (statics, array elements, and
fields of classes) are of fixed size. The supported storage sizes are
float32 and float64. Everywhere else (on the evaluation stack, as arguments,
as return types, and as local variables) floating point numbers are
represented using an internal floating-point type. In each such instance,
the nominal type of the variable or expression is either R4 or R8, but its
value may be represented internally with additional range and/or precision.
The size of the internal floating-point representation is
implementation-dependent, may vary, and shall have precision at least as
great as that of the variable or expression being represented."

From my experience the internal representation can differ between debug and
release.

Hope that helps, but please don't take my answer as gospel.

Stu
 
Stu said:
HOW: it's called constant folding,

Yes, thanks. Point of my question, though, is that I can imagine a few
ways that this might be done:

* Very straightforwardly, whereby "(float) 1.23" constructs a double
value 1.23, then casts it to a single.

* More convolutedly, whereby the constant folding operation sees the
(float) cast being applied to an implicitly typed double, and goes
back to the literal and reparses it as 1.23f.

* Complex parsing, where cast followed by a literal parses the literal
exactly as if it were 1.23f.

The point being that in some cases the first approach might yield
slightly different values than the others.
WHETHER: in theory I think they could in your example (but that's just my
interpretation); section 11.1.3 of the CLI docs states:

I'm not sure that ECMA-335 has much to say about the workings of the
compiler ....
 
Jon Shemitz said:
Yes, thanks. Point of my question, though, is that I can imagine a few
ways that this might be done:

* Very straightforwardly, whereby "(float) 1.23" constructs a double
value 1.23, then casts it to a single.

* More convolutedly, whereby the constant folding operation sees the
(float) cast being applied to an implicitly typed double, and goes
back to the literal and reparses it as 1.23f.

* Complex parsing, where cast followed by a literal parses the literal
exactly as if it were 1.23f.

As far as I'm aware (and we're at the limits of my knowledge here), constant
folding is done at compile time, so option one is out of the question. I
believe the compiler makes passes over the statement/expression trees it has
generated, looking for certain patterns, and replacing them with
equivalents. I guess one pattern could be something like "(builtin value
type) constant -> new constant".
The point being that in some cases the first approach might yield
slightly different values than the others.


I'm not sure that ECMA-335 has much to say about the workings of the
compiler ....

Don't know about that, but I do know that if you have a variable of type
float, it's not guaranteed to hold no more precision than a float.
 
As far as I'm aware (and we're at the limits of my knowledge here), constant
folding is done at compile time, so option one is out of the question.

We're pretty much at the limits of my knowledge, too. I've written a
simple compiler, so I have some knowledge of the issues involved, but
that's about it.

But I don't see why you say that option 1 can be ruled out? Seems to
me: Parser sees a stream of digits followed by a . and another stream
of digits which is NOT followed by one of {d, f, m}. Bingo: a double
constant for the parse tree. Optimizer later comes along and sees the
expression cast(float, double constant) and does a compile-time cast,
reducing it to a float constant.

* * *

I suppose I should just do the brute force thing: write a simple test
program to generate reams of float constants, and see if (say)
2.22222222222222f != (float) 2.22222222222222.

Course, I'll probably find more than I care to know about method size
limits ....
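For what it's worth, a brute-force harness along those lines might look
something like this (my own sketch, not anyone's posted code; the class
and method names are made up, and a real run would loop over generated
values rather than a hand-picked few). Comparing raw bit patterns avoids
any implicit widening during the comparison itself:

```csharp
using System;

class FloatConstantTest
{
    static void Check(string label, float fromLiteral, float fromCast)
    {
        // Compare the raw 32-bit patterns so no implicit float->double
        // widening during == can mask a one-bit difference.
        int bitsLiteral = BitConverter.ToInt32(BitConverter.GetBytes(fromLiteral), 0);
        int bitsCast = BitConverter.ToInt32(BitConverter.GetBytes(fromCast), 0);
        Console.WriteLine("{0}: {1}", label, bitsLiteral == bitsCast ? "same" : "DIFFERENT");
    }

    static void Main()
    {
        Check("2.22222222222222", 2.22222222222222f, (float) 2.22222222222222);
        Check("1.23", 1.23f, (float) 1.23);
        Check("12.34", 12.34f, (float) 12.34);
    }
}
```

Whether any line ever prints DIFFERENT is exactly the open question here,
so I won't predict the output.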
 
Jon Shemitz said:
We're pretty much at the limits of my knowledge, too. I've written a
simple compiler, so I have some knowledge of the issues involved, but
that's about it.

But I don't see why you say that option 1 can be ruled out? Seems to
me: Parser sees a stream of digits followed by a . and another stream
of digits which is NOT followed by one of {d, f, m}. Bingo: a double
constant for the parse tree. Optimizer later comes along and sees the
expression cast(float, double constant) and does a compile-time cast,
reducing it to a float constant.

I thought you meant a run-time cast. Not sure whether you'd call
compile-time folding a cast but I think we're talking the same thing, even
if we use different words.
* * *

I suppose I should just do the brute force thing: write a simple test
program to generate reams of float constants, and see if (say)
2.22222222222222f != (float) 2.22222222222222.

And what I meant about having extra precision was something like this:

class Foo
{
public void Test()
{
float localFloat = _memberFloat;

// The following doesn't have to be true, ie assertion could fire.
Debug.Assert( _memberFloat == localFloat );
}

private float _memberFloat = 1.23f;
}
 
Is there a difference between a constant like "12.34f" and "(float)
12.34"?

You can be sure that both are evaluated to compile-time constants (at
JIT time, at the latest).
But you cannot assume that 12.34f == (float)12.34, because the
conversion from double to float might result in loss of precision.
 
And what I meant about having extra precision was something like this:
class Foo
{
public void Test()
{
float localFloat = _memberFloat;

// The following doesn't have to be true, ie assertion could fire.
Debug.Assert( _memberFloat == localFloat );
}

private float _memberFloat = 1.23f;
}


No, assigning a float to a float never loses any precision, and a comparison
of them will always return true (as long as float.NaN is not involved).

Imagine the following:

class Foo
{
private float memberFloat;

public void Test()
{
float localFloat = 12.34f;

for (int i=0; i<100000; i++)
{
memberFloat = localFloat;
localFloat = memberFloat;
}
}
}

If assigning localFloat to memberFloat lost any precision, then the
output of this program would be very unexpected: 0.0 or 1.111111111 or
similar.
 
cody said:
No, assigning a float to a float never loses any precision, and a comparison
of them will always return true (as long as float.NaN is not involved).

Well, while I wouldn't expect the above assertion to fail (looking at
it closely), comparisons can do odd things. A theoretically "float"
variable may in fact be stored in a register which has more precision.
If you end up comparing a version which has been calculated and is
stored in a register with one which is stored in memory in the
"correct" number of bits, you may well have a failed comparison
unexpectedly.

Here's an example:

using System;

class Test
{
static float member;

static void Main()
{
member = Calc();
float local = Calc();
Console.WriteLine(local==member);
}

static float Calc()
{
float d1 = 2.82323f;
float d2 = 2.3f;
return d1*d2;
}
}

That prints "False". If you change "member" to be a local variable, or
"local" to a member variable, or (say) print out the value of "local"
afterwards, it prints "True".
 
cody said:
No, assigning a float to a float never loses any precision, and a
comparison

Agreed; it won't lose precision.
of them will always return true (as long as float.NaN is not involved)

Not true. Remember that debug and release builds do give different results.
We've hit this problem before, it all worked a treat in debug and then we
started getting odd behaviour in release. The value stored in a local
variable is not guaranteed to be the same as that in a member variable,
_even if you just assigned it_.
Imagine the following:

class Foo
{
private float memberFloat;

public void Test()
{
float localFloat = 12.34f;

for (int i=0; i<100000; i++)
{
memberFloat = localFloat;
localFloat = memberFloat;
}
}
}

If assigning localFloat to memberFloat lost any precision, then the
output of this program would be very unexpected: 0.0 or 1.111111111 or
similar.

It won't lose the value, but if you tested the values, you'd quite likely
find they were different. Not necessarily in debug, and not necessarily with
12.34, but you really don't want to rely on it.
--
cody

[Freeware, Games and Humor]
www.deutronium.de.vu || www.deutronium.tk
 
Stu Smith said:
It won't lose the value, but if you tested the values, you'd quite likely
find they were different. Not necessarily in debug, and not necessarily with
12.34, but you really don't want to rely on it.

No, I don't believe they would be different, actually - because the
value baked into the code would be the *exact* 32 bit floating point
value. The differences come where the value is calculated and an
intermediate stage (e.g. 40 bit precision) has a different value to the
rounded 32 bit value.
 
class Test
{
static float member;

static void Main()
{
member = Calc();
float local = Calc();
Console.WriteLine(local==member);
}

static float Calc()
{
float d1 = 2.82323f;
float d2 = 2.3f;
return d1*d2;
}
}

That prints "False". If you change "member" to be a local variable, or
"local" to a member variable, or (say) print out the value of "local"
afterwards, it prints "True".

I tried your code and the result was "true"! I tried release and debug, I
tried different values, I also tried passing one of the values as a
parameter, and I also tried it using double. It also returned "true".
I don't believe that a comparison of two floats with the same value will
ever yield false.

But I tried the following and the results were exactly as expected:

Console.WriteLine(1.11f==1.11); -> false
Console.WriteLine((float)1.11==1.11); -> false
Console.WriteLine((double)1.11f==1.11); -> false

All three returned false because the floating-point constant has a
different representation in float and double.

Console.WriteLine((float)1.11==1.11f); -> true

No cast occurs here; it seems both are parsed as float literals, so it
returns true.

Console.WriteLine((double)1.11f==1.11f); -> true

This one returns true, which was not what I expected. Shouldn't
1.11==1.11f be the same as (double)1.11f==1.11f?
Shouldn't there be a loss in precision, resulting in false?
 
cody said:
I tried your code and the result was "true"!

Try compiling and running from the command line - and from various
different processor architectures. (Mine is a P4.)
I tried release and debug, I tried different values, I also tried
passing one of the values as a parameter, and I also tried it using
double. It also returned "true".
I don't believe that a comparison of two floats with the same value will
ever yield false.

So how do you explain the results I'm getting? I absolutely *swear*
that on my box, compiling and running the program I gave from the
command line prints False, and I believe it's allowed to.
But I tried the following and the results were exactly as expected:

Console.WriteLine(1.11f==1.11); -> false
Console.WriteLine((float)1.11==1.11); -> false
Console.WriteLine((double)1.11f==1.11); -> false

All three returned false because the floating-point constant has a
different representation in float and double.
Sure.

Console.WriteLine((float)1.11==1.11f); -> true

No cast occurs here; it seems both are parsed as float literals, so it
returns true.
Yes.

Console.WriteLine((double)1.11f==1.11f); -> true

This one returns true, which was not what I expected. Shouldn't
1.11==1.11f be the same as (double)1.11f==1.11f?
No.

Shouldn't there be a loss in precision and result in a false?

No. Every float is exactly representable as a double, but not every
double is exactly representable as a float. 1.11f is the float closest
to the exact value 1.11. That close value is exactly representable as a
double - but it isn't the same value as 1.11d, which is the double
closest to the exact value 1.11.
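That distinction can be seen directly. Here's a small snippet of my own
(not from anyone's post above) illustrating that float-to-double
widening is exact while the two literals still denote different values:

```csharp
using System;

class WideningDemo
{
    static void Main()
    {
        float f = 1.11f;       // the float closest to the exact value 1.11
        double widened = f;    // float -> double widening is always exact
        double d = 1.11;       // the double closest to the exact value 1.11

        Console.WriteLine(widened == f);  // True: nothing lost in widening
        Console.WriteLine(widened == d);  // False: 1.11f widened is a
                                          // different double than 1.11d
    }
}
```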
 
That prints "False". If you change "member" to be a local variable, or
Try compiling and running from the command line - and from various
different processor architectures. (Mine is a P4.)

mine is AMD Athlon.
 
Another question arises: what about variables of the same storage class?
Say, your processor has some 32 bit and some 40 or 80 or whatever bit
registers.
Your method has lots of float variables, may the compiler choose to put some
variables in 32 bit registers and some in 40/80 bit registers?

Is it guaranteed that

float Foo(float f) { float ret = f; return ret; }

float f = 1234.0f;
Console.WriteLine(f == Foo(f));

will always return true?
 
cody said:
Another question arises: what about variables of the same storage class?
Say, your processor has some 32 bit and some 40 or 80 or whatever bit
registers.
Your method has lots of float variables, may the compiler choose to put some
variables in 32 bit registers and some in 40/80 bit registers?

Not sure - quite possibly.
Is it guaranteed that

float Foo(float f) { float ret = f; return ret; }

float f = 1234.0f;
Console.WriteLine(f == Foo(f));

will always return true?

While I'm not sure it's absolutely guaranteed, I don't see why it would
ever not be true - because the starting point is exactly representable
as 32 bits.
 
cody said:
mine is AMD Athlon.

That may well be the difference - I seem to remember first hearing
about this when someone was seeing different effects on an Athlon from
an Intel chip.
 
cody said:
You can be sure that both are evaluated to compile-time constants (at
JIT time, at the latest).
But you cannot assume that 12.34f == (float)12.34, because the
conversion from double to float might result in loss of precision.

Right. This is pretty much where I ended up. Even if the canonical
(MS) C# compiler works this way on a Pentium, that is not to say that
all C# compilers on all processors will act this way.
 