[racket] matrix-solve and approximation errors
On 04/17/2014 10:07 AM, Matthias Felleisen wrote:
>
> Neil, a two-layer design, using option contracts, could be a solution here.
Interesting! So we could have both math/matrix and (say)
math/unchecked-matrix, where the former enables the option contracts
(e.g., ones that check matrix condition numbers) and the latter doesn't.
Both could re-export the same implementation.
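For concreteness, here's a minimal sketch of that layering, using
ordinary contracts instead of option contracts to keep it short (the
module layout, the `solve` wrapper, and the 1e8 condition-number cutoff
are all made up for illustration):

#lang racket

(module impl racket
  (require math/matrix)
  (provide solve)
  ;; The one shared implementation: solve M x = b.
  (define (solve M b)
    (matrix-solve M b)))

(module checked racket
  (require math/matrix (submod ".." impl))
  ;; Reject badly conditioned inputs. Inverting M just to estimate its
  ;; condition number is far too slow for real use; a production
  ;; version would want a cheap estimator.
  (define (well-conditioned? M)
    (and (square-matrix? M)
         (matrix-invertible? M)
         (< (* (matrix-norm M) (matrix-norm (matrix-inverse M))) 1e8)))
  (provide (contract-out
            [solve (-> (and/c matrix? well-conditioned?)
                       matrix? matrix?)])))

(module unchecked racket
  (require (submod ".." impl))
  (provide solve))  ; same implementation, no checking

math/matrix and math/unchecked-matrix would then just re-provide from
the checked and unchecked layers; with real option contracts, clients
could switch the checking on and off instead.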
Enforcing low floating-point error using contracts requires solving a
few research-level problems, though.
1. Not only is floating-point error not compositional, it's not even
monotone. For example, for large arguments, `fllog` reduces error:
> (require math)
> (define a (flstep 1000.0 5)) ; 1000 approx., 5 ulps error
> (define b (flstep 1002.0 5)) ; 1002 approx., 5 ulps error
> (flulp-error (* a b) (* 1000 1002))
10.0 ; Error for * is additive
> (flulp-error (fllog (* a b))
               (bigfloat->real (bflog (bf (* 1000 1002)))))
0.2789786104670918 ; Whoa, just rounding error
(Admittedly, decreasing error is less likely than increasing error
because most operations at least add rounding error.)
Pretending error is monotone would rule out useful high-error
approximations. Dealing with floating-point error's nonmonotonicity
would make it hard to determine blame, or to decide when to raise
exceptions.
2. A robust floating-point implementation usually approximates its
function in more than one way. One common strategy is to first try a
reasonably fast approximation that has low error on a large domain, and
switch to a slower but more accurate approximation if the fast one's
estimated error is high. Can contracts support this strategy? Can they
support it efficiently?
(Efficiently switching to another approximation is probably less of a
concern in linear algebra, where most operations are O(n^2) or O(n^3).)
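To make that concrete, here's a toy two-tier version of the
log-of-a-product computation from item 1 (the function and its error
heuristic are inventions for this sketch; a real implementation would
use a principled error estimate):

#lang racket
(require math/flonum math/bigfloat)

;; Fast path: fine for most positive a and b, but (* a b) can
;; overflow or underflow, and fllog is ill-conditioned in ulps
;; when its argument is near 1.
(define (fast-log-product a b)
  (fllog (fl* a b)))

;; Slow path: redo the whole thing in 128-bit bigfloats, then
;; round once at the end.
(define (slow-log-product a b)
  (parameterize ([bf-precision 128])
    (bigfloat->flonum (bflog (bf* (bf a) (bf b))))))

;; Dispatch on a crude estimate of when the fast path is suspect.
(define (log-product a b)
  (define p (fl* a b))
  (if (or (not (flrational? p))  ; overflowed to +inf (or nan)
          (zero? p)              ; underflowed to zero
          (< 0.5 p 2.0))         ; near 1, where log loses ulps
      (slow-log-product a b)
      (fast-log-product a b)))

The open question is whether that `if` can live at a contract boundary
instead of inside the implementation, without computing the error
estimate twice.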
3. To a user who expects flonums to be just like real numbers but maybe
with a bit of error, an approximation error contract violation - even if
it blames exactly the right expression - will seem capricious and
inexplicable. "It won't let me compute the dot product of this vector
and the result of solving this system of equations? What's wrong with
this library?"
4. How do we express the invariants of a function's floating-point
implementation as contracts?
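One naive way to write such a contract, just to show the shape of the
problem: check the result a posteriori against a high-precision
reference (everything here is hypothetical, the 4-ulp bound is
arbitrary, and recomputing the reference on every call obviously
defeats the purpose):

#lang racket
(require math/flonum math/bigfloat)

;; "The result is within max-ulps of the true logarithm" -- an
;; invariant we can only check by redoing the work in bigfloats.
(define (log-within/c max-ulps)
  (->i ([x (and/c flonum? positive?)])
       [result (x)
               (lambda (y)
                 (<= (flulp-error y (bigfloat->real (bflog (bf x))))
                     max-ulps))]))

(define/contract checked-fllog
  (log-within/c 4.0)
  fllog)

And even this only constrains the result; expressing the
implementation's internal invariants (exact argument reduction,
truncation error of a series, and so on) is harder still.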
5. A call site doesn't always need the absolute minimum error a
floating-point function can provide. Further, how much error it can
tolerate may be contextual. How do we express a call site's
approximation error requirements?
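A parameter is one obvious Racket-shaped guess (hypothetical again, and
it dodges the hard part: whether a contract boundary can consult the
caller's tolerance efficiently, or at all):

#lang racket
(require math/flonum math/bigfloat)

;; Call sites state their tolerance; the boundary checks against it.
(define current-max-ulp-error (make-parameter 10.0))

(define/contract tolerant-fllog
  (->i ([x (and/c flonum? positive?)])
       [result (x)
               (lambda (y)
                 (<= (flulp-error y (bigfloat->real (bflog (bf x))))
                     (current-max-ulp-error)))])
  fllog)

;; This call site demands (nearly) correct rounding:
(parameterize ([current-max-ulp-error 0.5])
  (tolerant-fllog 12345.6789))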
Again, I think these are interesting research problems. They may also
be some of the reasons nobody has tried this yet (as far as I know).
The first three are why I hesitate to use contracts to enforce low
error right now.
> ------------------------------------------------------------------------
>
>
> On Apr 16, 2014, at 3:18 PM, Konrad Hinsen <konrad.hinsen at fastmail.net> wrote:
>
>> Any linear-algebra library meant for "serious" work does not do any
>> non-essential work "just" for error checking. Performance is king.
>> Users are supposed to know the pitfalls and take care to avoid them.
>> Unfortunately, many don't.
>
> It is incredible how often I find this scenario. Just yesterday I had
> a colleague tell me (on a related subject) that "this is how you test
> people who want to work in this field. If they can't figure out these
> assumptions, they are not well suited. Flush them."
>
> My proposal had been to create a type-like system to check some invariant
> about geometric relations. His response in the end was "yeah, it may help
> more novices to overcome this obstacle, but the field won't care in the end.
> Experts don't make these mistakes."
>
> At this age, I am 1000% convinced that experts make these mistakes too, but
> what can we say.
The most convincing argument I've made to myself is that *I* make these
mistakes even when I'm trying my best to be careful. Unfortunately, that
argument is convincing to others only if we can all agree I'm an expert,
and I still haven't convinced myself of that.
Neil ⊥