[racket] matrix-solve and approximation errors

From: Konrad Hinsen (konrad.hinsen at fastmail.net)
Date: Thu Apr 17 04:37:31 EDT 2014

Laurent writes:

 > Sounds like a good idea, but possibly cumbersome to program.
 > Or maybe just define an error-checking-level parameter that can be adjusted from
 > `unsafe` to `noob`?

I'd expect this to be more difficult to design than to program.
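
The programming side really is small; in Racket, a parameter would do.
A rough sketch, with made-up names and levels purely for illustration:

#lang racket

;; Hypothetical sketch: the parameter name and the set of levels are
;; invented here, not part of any existing library.
(define error-checking-level
  (make-parameter 'normal
                  (lambda (level)
                    (unless (memq level '(unsafe normal noob))
                      (error 'error-checking-level
                             "unknown level: ~a" level))
                    level)))

;; A solver would then consult (error-checking-level) to decide how
;; much checking to do, e.g.
;;   (parameterize ([error-checking-level 'noob])
;;     (matrix-solve M b))

The hard part is deciding what each level should actually check.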

My first idea would be to base everything on singular value
decomposition (SVD), which of course would have to be implemented in
the Racket math library first - but that's a good idea anyway.  SVD is
about the most well-behaved algorithm in linear algebra.  It doesn't
add any subtle problems of its own, and it makes it possible to
identify numerical problems that a matrix might cause.
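
To make "identify numerical problems" concrete: once the singular
values of a matrix are available (from an SVD routine that would still
have to be written), a basic near-singularity test is just a comparison
of the smallest against the largest singular value. A rough sketch,
with made-up names and tolerance:

#lang racket

;; Sketch only: `sigmas` is assumed to be the list of singular values,
;; as a hypothetical (matrix-svd M) would return them.  The matrix is
;; effectively singular when its smallest singular value is negligible
;; next to its largest, i.e. when the condition number max/min blows up.
(define (near-singular? sigmas #:rtol [rtol 1e-12])
  (<= (apply min sigmas) (* rtol (apply max sigmas))))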

Given the SVD of a matrix, you can both identify near-singular
matrices and compute a generalization of the matrix inverse that
remains well-defined (and numerically well-behaved). As a bonus, it
also generalizes to non-square matrices (and thus to over- and
under-determined systems of linear equations). It is known as the
pseudo-inverse or Moore-Penrose inverse. Using the pseudo-inverse
comes down to replacing a system of linear equations with a linear
least-squares problem.
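
A rough sketch of that construction, assuming a hypothetical
(matrix-svd M) that returns the thin-SVD factors U, the singular
values, and V with M = U * S * V^T (nothing like it exists in
math/matrix yet):

#lang racket
(require math/matrix)

;; Sketch only: the SVD factors are assumed to come from elsewhere.
;; The pseudo-inverse is V * S^+ * U^T, where S^+ inverts every singular
;; value above a tolerance and zeroes the rest; that cutoff is what
;; keeps the result well-defined for near-singular and non-square
;; matrices.
(define (pseudo-inverse U sigmas V #:rtol [rtol 1e-12])
  (define cutoff (* rtol (apply max sigmas)))
  (define inv-sigmas
    (for/list ([s (in-list sigmas)])
      (if (> s cutoff) (/ 1.0 s) 0.0)))
  (matrix* V (diagonal-matrix inv-sigmas) (matrix-transpose U)))

;; "Solving" A x = b then means computing x = pinv(A) * b, which is the
;; least-squares (and minimum-norm) solution:
;;   (matrix* (pseudo-inverse U sigmas V) b)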

Actually, when I wrote yesterday that I am not aware of any such
approach having been used before, I was wrong: APL and its successor J
provide a "matrix divide" operator that is defined in terms of the
pseudo-inverse. I have used it myself for solving least-squares
problems. That was 25 years ago, when most scientists hadn't even
heard of the pseudo-inverse. Today, most linear algebra libraries
do provide SVD and pseudo-inverses, but my impression is that the
approach is still underappreciated.

Konrad.
