Amir Kolsky via .NET 247
Hello all, this is my first time here so please be gentle
We have a VC++ program that does many floating-point operations (float, not double), and we are seeing that the results of runs under VC6 and VC7 are the same. However, when we compile with the /clr flag, the results we get differ in the lowest significant bits of the mantissa. We guess it has something to do with the precision the CLR uses to hold floating-point numbers.
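For what it's worth, here is a minimal C++ sketch of the mechanism we suspect (hypothetical, not our actual code): when intermediate results are held at a wider precision than float, as the native x87 FPU does by default, the final value can differ from a run where every intermediate is rounded back to 32-bit float at each step.

    #include <cstdio>

    // Sketch: accumulate the same values, once forcing intermediates to
    // float and once keeping them in double. The two sums diverge because
    // of where rounding happens, which is the kind of discrepancy we
    // suspect between the native and /clr builds.
    int main()
    {
        float  sum_f = 0.0f; // rounded to a 24-bit mantissa after every add
        double sum_d = 0.0;  // intermediates kept at a 53-bit mantissa
        for (int i = 0; i < 1000000; ++i)
        {
            sum_f += 0.1f;
            sum_d += 0.1f;
        }
        printf("float intermediates : %.9g\n", sum_f);
        printf("double intermediates: %.9g\n", (float)sum_d);
        return 0;
    }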
Has anyone seen this before? Please answer here and, if possible, also to my email: (e-mail address removed)
I will post a digest of replies here...
Thanks, Amir Kolsky