[racket] Multiplying by 0

From: Hendrik Boom (hendrik at topoi.pooq.com)
Date: Tue Feb 15 03:22:40 EST 2011

On Mon, Feb 14, 2011 at 03:11:06PM -0800, Joe Marshall wrote:
> > On Mon, Feb 14, 2011 at 12:14 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
> >> No, it's not a bug.  Since 1e100 is an inexact number, there's
> >> uncertainty about the minimum of those two numbers,
> > On Mon, Feb 14, 2011 at 5:01 PM, Joe Marshall <jmarshall at alum.mit.edu> wrote:
> >> So could a conforming implementation return 1e100 as the answer?
> >> (min 0 1e100) => 1e100
> On Mon, Feb 14, 2011 at 2:59 PM, Sam Tobin-Hochstadt <samth at ccs.neu.edu> wrote:
> >  The R6RS spec is not totally clear on this point, but I don't think it allows that.
> That doesn't sound like uncertainty to me.

Yeah, floating point is approximate.  That said, it actually provides an 
exact value which the user hopes is close to the intended value, but 
floating-point makes no attempt to assign bounds on how close the answer 
is.

Calling floating-point "approximate" is a way of hinting to the naive 
user that the answers may not be quite right.  And numerical analysts 
can easily come up with examples where the answers are grossly, 
misleadingly inaccurate.
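For instance (a sketch in Python for concreteness), the thread's own 1e100 
already shows the problem: doubles near 1e100 are spaced far more than 1 
apart, so adding 1 is silently absorbed and regrouping a sum changes the 
answer.

```python
# Doubles near 1e100 are spaced enormously far apart, so adding 1.0
# is absorbed without a trace -- and associativity quietly breaks.
a = 1e100

print((a + 1.0) - a)   # 0.0 -- the 1.0 vanished
print((a - a) + 1.0)   # 1.0 -- same terms, different grouping
```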

To have approximate values that are really logically consistent, we'd 
have to use something like interval arithmetic, in which the 
calculation provides lower and upper bounds for the answer.  Then if 
some calculation produces an answer x like 1e100, you'd know the 
precision.  Is x 1e100 plus-or-minus 1, or is x 1e100 plus-or-minus 
1e200?

In the first case, (min 0 x) could legitimately be 0.  In the second, it 
would have to be the interval "something between -1e200 and 0".
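A minimal interval-arithmetic sketch (in Python for concreteness; the 
names are illustrative, not a real library) makes the two cases concrete:

```python
# A toy interval type: a value known only to lie between lo and hi.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float  # lower bound
    hi: float  # upper bound

def interval_min(a, b):
    # The min of two uncertain values: take the bounds pointwise.
    return Interval(min(a.lo, b.lo), min(a.hi, b.hi))

zero    = Interval(0.0, 0.0)
x_tight = Interval(1e100 - 1, 1e100 + 1)          # 1e100 plus-or-minus 1
x_loose = Interval(1e100 - 1e200, 1e100 + 1e200)  # huge uncertainty

print(interval_min(zero, x_tight))  # [0, 0]: the min is certainly 0
print(interval_min(zero, x_loose))  # roughly [-1e200, 0]
```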

Now this is consistent, and will inform you whether you can rely on the 
answer, because it doesn't provide an illusion of precision.

Going further, what you might really want for approximate arithmetic is 
for a number to be a function which, when given a tolerance 'epsilon', 
will yield a value within epsilon of the correct answer.  Going this 
way, you end up with the constructive real numbers.  These are the 
closest things we can compute with to what most mathematicians call the 
real numbers.  In fact, if you are a constructive mathematician, you 
will use the phrase 'real numbers' for (equivalence classes of) these 
functions, and you'd consider the idea of real numbers that cannot 
be expressed this way to be an incoherent fantasy.
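A toy version of that idea, sketched in Python (all names here are 
illustrative): a number is a function from a tolerance to a rational 
approximation within that tolerance, and operations combine such 
functions by splitting the error budget.

```python
# A constructive real, naively: a function eps -> rational within eps.
from fractions import Fraction

def const(q):
    # A constant is exact at every tolerance.
    return lambda eps: Fraction(q)

def add(x, y):
    # Approximate each addend to eps/2 so the sum is within eps.
    return lambda eps: x(eps / 2) + y(eps / 2)

def sqrt2(eps):
    # Approximate sqrt(2) by bisection until the bracket is within eps.
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

three = add(const(1), const(2))
print(three(Fraction(1, 1000)))           # exactly 3
print(float(sqrt2(Fraction(1, 10**6))))   # about 1.414213...
```

Asking the same number for a tighter epsilon just does more work; the 
function itself is the number.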

There.  Now you have an idea how far you can go in this direction. 
Stopping at floating-point is a matter of efficiency, not conceptual 
necessity.

-- hendrik

Posted on the users mailing list.