Erland Sommarskog said:
The entire point you seem to be missing in this thread is what we use
types for. No one would use an int to store 28.02, but we may want to
use float to that aim. But it is not possible to store 28.02 exactly in
a float, so we get an approximation of the value we are really thinking of.
I've never disputed that. I've never disputed the use of decimal as a
way of storing decimal numbers exactly. I've said several times that I
absolutely agree with the use of decimal for the OP's problem. I'm not
concerned about the *use* of the type - we can all agree on that. How
you use something doesn't, to my mind, define what it *is*. Most of the
32-bit integers I use in .NET will never actually have numbers above
1000 in them, but that doesn't mean that the type itself can't store
anything above 1000 - what it can and can't store is precisely defined
regardless of how I choose to use it.
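Just to make that concrete - a quick sketch in Python rather than the
.NET types we're actually discussing, purely because it's easy to paste
into a post - packing values into a 32-bit signed integer:

    import struct

    # The range of a 32-bit signed integer is defined by the type itself,
    # regardless of whether the values I actually put in it exceed 1000.
    print(struct.pack('i', 1_000))           # fits, like most of my values
    print(struct.pack('i', 2_147_483_647))   # also fits - the defined maximum

    try:
        struct.pack('i', 2_147_483_648)      # one past the maximum
    except struct.error as err:
        print("outside what the type can store:", err)

The boundary sits at 2^31 - 1 whether or not my own data ever gets
anywhere near it.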
My concern is that float is being arbitrarily labeled as "approximate"
just because it can't store all values of one particular base, despite
the fact that decimal can't store all values of other bases. Yes, base
10 is obviously the most widely used base, but it *is* just a base.
We're lucky that 10 is a multiple of 2 - that's what makes every finite
binary fraction a finite decimal fraction as well; otherwise exact float
values wouldn't be exactly representable in decimal...
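To put the same point in code - Python's decimal module here, purely as
a convenient stand-in for a base-10 type like SQL Server's decimal:

    from decimal import Decimal

    # One third terminates in base 3 but not in base 10, so a decimal type
    # has to approximate it, just as float has to approximate 28.02.
    print(Decimal(1) / Decimal(3))    # 0.333... rounded at the context precision

    # One sixteenth terminates in base 2, and because 2 divides 10 it also
    # terminates in base 10 - so it survives the trip into decimal exactly.
    print(Decimal(1) / Decimal(16))   # 0.0625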
Any float value is exact in and of itself. If it's an approximation to
some other value it was originally converted from, so be it - that
doesn't, to my mind, make the type itself "approximate".
That is why float/real are approximate, and decimal and int are not. They
are approximations of what we really want to store.
The value in the float is only approximately equal to the decimal value
which was originally converted, but in itself it is an exact number.
Given a floating point value, there is a well-defined, precise number
that value represents.
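Erland's own 28.02 makes the point. A rough sketch (Python's
Decimal(float) conversion here, since it spells out a double's exact
stored value digit for digit; treat it purely as illustration, as the
thread itself is about .NET and SQL Server types):

    from decimal import Decimal

    stored = Decimal(28.02)   # the exact value of the double nearest to 28.02

    print(stored)                       # a long, but completely exact, expansion
    print(stored == Decimal("28.02"))   # False - the double only approximates 28.02...
    print(float(stored) == 28.02)       # True  - ...but it round-trips to the very
                                        # same double, because it is one precise number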
Yes, it is true that it is possible to store some values with very many
decimals exactly in a float, but that is completely irrelevant, because
floats are very rarely if ever used for that aim.
Frankly, how many *decimals* a float stores is irrelevant to me. What
I'm interested in is whether the value stored in a float can be
regarded as "exact" or merely "approximate". Given a float
representing, say, 1.0101, that is an exact value.
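And 1.0101 works the same way whichever way you read it - as the binary
value 1.0101 (which is 1.3125 and fits a double exactly) or as a decimal
literal that gets rounded to the nearest double. One last sketch, same
assumptions as above:

    from decimal import Decimal

    # Read as binary: 1 + 1/4 + 1/16 = 1.3125, stored exactly.
    print(Decimal(1.3125))   # 1.3125

    # Read as a decimal literal: the nearest double is a slightly different
    # number - but still a single, precisely defined one.
    print(Decimal(1.0101))   # a long exact expansion, not 1.0101 itself

Either way, the bits in the float pin down exactly one value; the only
question is whether that value happens to be the one you first wrote
down.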