[racket] Multiplying by 0

 From: José Lopes (jose.lopes at ist.utl.pt) Date: Tue Feb 15 13:39:19 EST 2011

I would like to give an alternative point of view.

According to the math definition of minimum, the min function should
return the smallest number from a set of numbers.

Therefore, the following evaluation
(min 0 1e10 1e100) => 0.0
cannot be correct because it is returning a number that is not part of
the given set of numbers, i.e., 0.0 cannot be the minimum of the set {0
1e10 1e100} because it is not part of that set.
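For reference, the behaviour under discussion is easy to reproduce at a Racket REPL. When any argument to min is inexact, the result is coerced to inexact, even when the smallest argument was exact:

```racket
(min 0 1e10 1e100)  ; => 0.0, inexact, although the smallest input was exact 0
(min 0 1 2)         ; => 0, all arguments exact, so the result stays exact
```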

On 15-02-2011 08:22, Hendrik Boom wrote:
> On Mon, Feb 14, 2011 at 03:11:06PM -0800, Joe Marshall wrote:
>>> On Mon, Feb 14, 2011 at 12:14 PM, Sam Tobin-Hochstadt<samth at ccs.neu.edu>  wrote:
>>>> No, it's not a bug.  Since 1e100 is an inexact number, there's
>>>> uncertainty about the minimum of those two numbers,
>>> On Mon, Feb 14, 2011 at 5:01 PM, Joe Marshall<jmarshall at alum.mit.edu>  wrote:
>>>> So could a conforming implementation return 1e100 as the answer?
>>>> (min 0 1e100) =>  1e100
>> On Mon, Feb 14, 2011 at 2:59 PM, Sam Tobin-Hochstadt<samth at ccs.neu.edu>  wrote:
>>>   The R6RS spec is not totally clear on this point, but I don't think it allows that.
>> That doesn't sound like uncertainty to me.
> Yeah, floating point is approximate.  That said, it actually provides an
> exact value which the user hopes is close to the intended value, but
> floating-point makes no attempt to assign bounds on how close the answer
> is.
>
> Calling floating-point "approximate" is a way of hinting to the naive
> user that the answers may not be quite right.  And numerical analysts
> can easily come up with examples where the answers are grossly wrong.
>
> To have approximate values that are really logically consistent, we'd
> have to use something like interval arithmetic, in which the
> calculation provides lower and upper bounds for the answer.  Then if
> some calculation produces an answer x like 1e100, you'd know the
> precision.  Is x 1e100 plus-or-minus 1, or is x 1e100 plus-or-minus
> 1e200?
>
> In the first case, (min 0 x) could legitimately be 0.  In the second, it
> would have to be the interval "something between -1e200 and 0".
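The interval version of min described above can be sketched in a few lines of Racket. This is a minimal illustration, not a real interval-arithmetic library; the struct and function names (ival, ival-min) are made up for the example. Since min is monotone in each argument, the minimum of two intervals is just the pairwise minimum of their bounds:

```racket
#lang racket

;; An interval [lo, hi] bracketing an approximate value.
(struct ival (lo hi) #:transparent)

;; min of two intervals: take the min of the lower bounds
;; and the min of the upper bounds.
(define (ival-min a b)
  (ival (min (ival-lo a) (ival-lo b))
        (min (ival-hi a) (ival-hi b))))

;; x = 1e100 plus-or-minus 1e200, compared against exact 0:
;; the result spans roughly [-1e200, 0], as described above.
(ival-min (ival 0 0)
          (ival (- 1e100 1e200) (+ 1e100 1e200)))
```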
>
> Now this is consistent, and will inform you whether you can rely on the
> answer, because it doesn't provide an illusion of precision.
>
> Going further, what you might really want for approximate arithmetic is
> for a number to be a function which, when given a tolerance 'epsilon',
> will yield a value within epsilon of the correct answer.  Going this
> way, you end up with the constructive real numbers.  These are the
> closest things we can compute with to what most mathematicians call the
> real numbers.  In fact, if you are a constructive mathematician, you
> will use the phrase 'real numbers' for (equivalence classes of) these
> functions, and you'd consider the idea of real numbers that cannot
> be expressed this way to be an incoherent fantasy.
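The "number as a function of a tolerance" idea sketched above can also be written down in a few lines of Racket. Again this is only an illustration under the stated representation, with made-up names (cr-const, cr-min): a constructive real is a procedure that, given epsilon, returns an approximation within epsilon of the true value. Because min is 1-Lipschitz, taking the min of two such approximations is itself within epsilon of the true minimum:

```racket
#lang racket

;; A constructive real represented as: tolerance -> approximation.
(define (cr-const x)            ; an exactly known number
  (lambda (eps) x))

(define (cr-min a b)            ; min of two constructive reals
  (lambda (eps) (min (a eps) (b eps))))

;; Ask for the minimum of 0 and 1e100, good to within 1/1000000:
((cr-min (cr-const 0) (cr-const 1e100)) 1/1000000)
```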
>
> There.  Now you have an idea how far you can go in this direction.
> Stopping at floating-point is a matter of efficiency, not conceptual
> completeness.
>
> -- hendrik