[racket] Multiplying by 0
On Tue, Feb 15, 2011 at 12:22 AM, Hendrik Boom <hendrik at topoi.pooq.com> wrote:
>
> Yeah, floating point is approximate.
To be very precise and pedantic, floating point consists of two things:
1. A selected set of rational numbers, along with an associated
   representation.
2. A variety of well-defined mathematical operations on this set, with
   the constraint that the result of an operation must be the
   representable number that is mathematically correct if possible, or,
   depending on the rounding rules, the representable number closest to
   the mathematically correct answer.
The `approximation' is in the operations, not in the numbers. This implies
that any floating point number (that is, any floating point object that
represents a number; the weird things like infinities and NaNs don't count)
can be converted to a rational and back without loss of information (or
`gain of information', i.e. additional unwarranted bits of precision). If
the *number* were approximate, there would be ambiguity about which
rational it represented.
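You can check that round trip directly in Racket with the standard
conversions (a quick sketch; the value 0.1 is just an example):

    ;; 0.1 denotes one specific rational; converting to an exact
    ;; rational and back loses (and gains) nothing.
    (inexact->exact 0.1)                  ; => 3602879701896397/36028797018963968
    (exact->inexact (inexact->exact 0.1)) ; => 0.1
    (= 0.1 (exact->inexact (inexact->exact 0.1)))  ; => #t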
> That said, it actually provides an
> exact value
Exactly correct!
> which the user hopes is close to the intended value, but
> floating-point makes no attempt to assign bounds on how close the answer
> is.
Who knows what the user hopes? A reasonable person may decide
to implement fixnums with a floating point representation. The results
will be mathematically correct for addition, subtraction, and
multiplication as long as the fixnums (and their results) fit in at most
53 bits of precision (for doubles). There will be no `rounding error'
whatsoever. (Division is a different story, but that is an issue with
integers, not with the floating point representation.)
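A quick check in Racket (for doubles, 2^53 = 9007199254740992):

    ;; Integer arithmetic on flonums is exact while every value fits
    ;; in 53 bits...
    (= (+ 3.0 4.0) 7.0)              ; => #t, exactly 7
    (* 123456.0 654321.0)            ; => 80779853376.0, the exact product
    ;; ...but a result needing more than 53 bits must be rounded:
    (= (+ 9007199254740992.0 1.0)    ; (2^53) + 1
       9007199254740992.0)           ; => #t -- the 1 was rounded away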
> Calling floating-point "approximate" is a way of hinting to the naive
> user that the answers may not be quite right. And numerical analysts
> can easily come up with examples where the answers are grossly,
> misleadingly inaccurate.
Right. But it tends to mislead naive users into thinking there is something
squirrelly about floating point numbers. The numbers are fine; the
operations are squirrelly.
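The classic illustration: 0.1, 0.2, and 0.3 are each a perfectly definite
rational number, but the true sum of the rationals denoted by 0.1 and 0.2
is not itself representable, so the + operation has to round:

    (+ 0.1 0.2)          ; => 0.30000000000000004
    (= (+ 0.1 0.2) 0.3)  ; => #f -- the rounding happened in +, not in the numbers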
> To have approximate values that are really logically consistent, we'd
> have to use something like interval arithmetic, in which the
> calculation provides lower and upper bounds for the answer. Then if
> some calculation produces an answer x like 1e100, you'd know the
> precision. Is x 1e100 plus-or-minus 1, or is x 1e100 plus-or-minus
> 1e200?
Or use a distribution function rather than a number.
> In the first case, (min 0 x) could legitimately be 0. In the second, it
> would have to be the interval "something between -1e200 and 0".
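(For what it's worth, here is a toy sketch of the interval idea in Racket.
The `ivl' names are invented for illustration, and a real interval package
would also round the bounds outward, which this ignores.)

    ;; An interval value is just a pair of bounds [lo, hi].
    (struct ivl (lo hi) #:transparent)

    ;; min of two intervals, taken bound by bound
    (define (ivl-min a b)
      (ivl (min (ivl-lo a) (ivl-lo b))
           (min (ivl-hi a) (ivl-hi b))))

    ;; x = 1e100 plus-or-minus 1, versus x = 1e100 plus-or-minus 1e200:
    (ivl-min (ivl 0.0 0.0) (ivl (- 1e100 1.0) (+ 1e100 1.0)))  ; => interval [0.0, 0.0]
    (ivl-min (ivl 0.0 0.0) (ivl -1e200 (+ 1e100 1e200)))       ; => interval [-1e200, 0.0]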
>
> Now this is consistent, and will inform you whether you can rely on the
> answer, because it doesn't provide an illusion of precision.
>
> Going further, what you might really want for approximate arithmetic is
> for a number to be a function which, when given a tolerance 'epsilon',
> will yield a value within epsilon of the correct answer. Going this
> way, you end up with the constructive real numbers. These are the
> closest things we can compute with to what most mathematicians call the
> real numbers. In fact, if you are a constructive mathematician, you
> will use the phrase 'real numbers' for (equivalence classes of) these
> functions, and you'd consider the idea of real numbers that cannot
> be expressed this way to be an incoherent fantasy.
>
> There. Now you have an idea how far you can go in this direction.
> Stopping at floating-point is a matter of efficiency, not conceptual
> completeness.
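Indeed. A tiny sketch of what such a `number' can look like in Racket,
with a representation invented here purely for illustration and with no
attempt at efficiency:

    ;; A constructive real, represented as a procedure that, given an
    ;; exact rational eps > 0, returns an exact rational within eps of
    ;; the number it denotes.
    (define sqrt2
      (lambda (eps)
        (let loop ([x 1])
          ;; Since x >= 1 and sqrt(2) > 1, we have x + sqrt(2) > 1, so
          ;; |x^2 - 2| < eps implies |x - sqrt(2)| < eps.
          (if (< (abs (- (* x x) 2)) eps)
              x
              (loop (/ (+ x (/ 2 x)) 2))))))  ; Newton step, exact rationals

    (sqrt2 1/1000000)  ; => 665857/470832, within 1e-6 of sqrt(2)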
--
~jrm