Mike C#
Jon Skeet said: How
you use something doesn't, to my mind, define what it *is*.
By that logic, a computer that hosts applications on a network shouldn't be
called a "server" and a computer that connects to it shouldn't be called a
"client", since they're all "computers" anyway and how you use them is
irrelevant to what they are? Some people might claim that how you use
something is very closely tied to how we classify it.
Jon Skeet said: The 32-bit integers I use in .NET will never actually have numbers above
1000 in them, but that doesn't mean that the type itself can't store
anything above 1000 - what it can and can't store is precisely defined
regardless of how I choose to use it.
However, when you store 989 in it, you are certain to get back 989. Not 988
or 990, or some "approximation" of 989.
Jon Skeet said: My concern is that float is being arbitrarily labeled as "approximate"
just because it can't store all values of one particular base, despite
the fact that decimal can't store all values of other bases.
Base 10 is what we use in SQL for floating point numbers, most likely
because no one has introduced a better system. Perhaps you should recommend
Base 3 to ANSI, or recommend that people work out their own binary
representations of floating point values and store them directly using the
BINARY data type?
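
As an aside, to make the "other bases" point above concrete: one third is
simply 0.1 in base 3, yet no base-10 type can ever hold it exactly. A quick
sketch in Python's decimal module, used here only as a stand-in for any
base-10 numeric type such as SQL's DECIMAL:

    from decimal import Decimal, getcontext

    getcontext().prec = 28                 # 28 significant digits (the module default)
    third = Decimal(1) / Decimal(3)        # 1/3 is "0.1" in base 3, but endless in base 10
    print(third)                           # 0.3333333333333333333333333333 (rounded, not exact)
    print(third * 3 == Decimal(1))         # False: a base-10 type cannot hold 1/3 exactly
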
Jon Skeet said: Yes, base
10 is obviously the most widely used base, but it *is* just a base.
We're lucky that it's a multiple of 2 - otherwise exact float values
wouldn't be exactly representable in decimal...
I thought your argument was that Base was irrelevant? Now you have a bias
for Base 2? What about Base 3?
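
For what it's worth, that particular asymmetry is real: every finite binary
float is an integer times a power of two, and because 2 divides 10, every
such value has a terminating, exact decimal expansion; the reverse does not
hold. A quick sketch in Python, whose float is an IEEE 754 double and, as far
as I know, the same format as SQL's float(53):

    from decimal import Decimal

    # 0.1 cannot be stored exactly in binary, but the binary value that *is*
    # stored still has an exact, terminating decimal expansion:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # Any value built purely from powers of two also terminates in decimal:
    print(Decimal(2.0 ** -20))             # 9.5367431640625E-7, exact
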
Jon Skeet said: Any float value is exact in and of itself. If it's an approximation to
some other value it was originally converted from, so be it - that
doesn't, to my mind, make the type itself "approximate".
And in the case of 28.08, the float that is stored is *exactly* the wrong
number.
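
To put a number on "exactly the wrong number", here is a quick sketch in
Python, whose float is an IEEE 754 double - the same format, as far as I
know, as SQL's float(53) - so the figure should carry over:

    from decimal import Decimal

    stored = 28.08                               # the literal is parsed to the nearest double
    print(Decimal(stored))                       # 28.0799999999999982946974... (the exact value stored)
    print(Decimal(stored) == Decimal('28.08'))   # False: what comes back is not what went in
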
Jon Skeet said: The value in the float is only approximately equal to the decimal
value it was originally converted from, but in itself it is an exact number.
Given a floating point value, there is a well-defined, precise number
that value represents.
And in SQL we tend to concern ourselves with little things like data
integrity; i.e., am I going to get out what I put in? Or am I going to get
back something different?
Jon Skeet said: Frankly, how many *decimals* a float stores is irrelevant to me. What
I'm interested in is whether the value stored in a float can be
regarded as "exact" or merely "approximate". Given a float
representing, say, 1.0101, that is an exact value.
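
That much is demonstrable, at least if 1.0101 is read as an ordinary decimal
literal and float means an IEEE 754 double, as in Python: whatever double you
end up holding, its bits identify exactly one number, and that number can be
written out exactly.

    from decimal import Decimal

    x = 1.0101                           # parsed to the nearest representable double
    print(x.hex())                       # the exact bit pattern of that one double
    print(float.fromhex(x.hex()) == x)   # True: the bits pin down a single precise value
    print(Decimal(x))                    # ...which can be written out exactly in decimal
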
The fact that you can't store many values that fall squarely within the data
type's range and precision without data loss or introducing error (i.e.,
"approximation") makes the type "approximate".
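
To quantify "many": of the ninety-nine two-decimal-place values 0.01 through
0.99 - all of them well inside float's range and apparent precision - only
three survive the trip through an IEEE 754 double unchanged. A sketch in
Python, again assuming its double is the same format as SQL's float(53):

    from decimal import Decimal

    # Which of 0.01 .. 0.99 are stored exactly by a binary double?
    exact = [k for k in range(1, 100)
             if Decimal(k / 100.0) == Decimal(k) / Decimal(100)]
    print(exact)                              # [25, 50, 75]
    print(len(exact), "of 99 round-trip exactly; the other 96 come back approximated")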