[racket] TR Math Collection
On 08/09/2012 11:37 AM, Ray Racine wrote:
> Was clicking around on GitHub in the new TR math collection. Impressive
> to say the least. Much technique to learn.
:D
It'll help if you're familiar with SciPy's arrays.
> One question was on the foundation of (Array A), which is essentially a
> struct-wrapped (Vectorof A). Fundamentally, can one ultimately expect
> near performance equivalence between a (Vectorof Float) and an FlVector?
It depends on what you do with them. Certain functions like `fl+' can
operate on "unboxed" Floats, but most can't. Generally, if you pull
floats out of an FlVector and apply non-arithmetic functions, the floats
will get boxed (put on the heap). If you pull floats out of a (Vectorof
Float) and apply non-arithmetic functions, there's no extra cost because
they're already boxed.
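For instance, here's a rough sketch of the difference (my own
illustration, with made-up names, not code from the library). The
summing loop can keep its flonums unboxed because `fl+' consumes and
produces them directly; the list-building loop boxes every element it
pulls out of the FlVector.

#lang racket
(require racket/flonum)

;; Summing can stay unboxed: `fl+' works on unboxed flonums directly.
(define (flvector-sum xs)
  (for/fold ([acc 0.0]) ([x (in-flvector xs)])
    (fl+ acc x)))

;; Building a list is non-arithmetic, so every element pulled out of
;; the FlVector is boxed (allocated on the heap) before it's consed.
(define (flvector->flonum-list xs)
  (for/list ([x (in-flvector xs)])
    x))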
For example, my fastest `flvector+' takes about 1/3 the time to run as
my fastest `vector+'. But my fastest `flvector-map' takes about 4/3 the
time to run as my fastest `vector-map'. It's slower because it has to
allocate a float whenever it applies its function argument.
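To see where the allocation comes from, here's roughly what an
`flvector-map' has to do (a simplified sketch, not the library's actual
implementation):

#lang racket
(require racket/flonum)

;; The elements live unboxed in xs and ys, but each call to `f' crosses
;; a function boundary, so its argument and result are boxed flonums:
;; that's a heap allocation (or two) per element.
(define (my-flvector-map f xs)
  (define n (flvector-length xs))
  (define ys (make-flvector n))
  (for ([i (in-range n)])
    (flvector-set! ys i (f (flvector-ref xs i))))
  ys)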
I expect the numbers to change in favor of FlVectors as Racket's
compiler gets better at analysis and inlining. But TR's optimizer will
also be improving (I imagine it will unbox vectors eventually), and it
will probably always be a little faster to use higher-order functions on
a (Vectorof Float).
Speaking of higher-order, the type (Array A) is a union type: (U
(Strict-Array A) (View-Array A)). A view array wraps a function of type
(Indexes -> A). Operating on them (adding, transposing, slicing,
broadcasting, etc.) generally comes down to function composition. So
(define arr (array+ brr (array+ crr drr)))
doesn't add any elements yet, but returns a (View-Array Number) that
wraps an (Indexes -> Number) that adds elements. To compute the results:
(array-strict arr)
It returns a (Strict-Array Number), which wraps a (Vectorof Number).
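Here's a toy, untyped model of the idea, using flat indexes instead of
the library's Indexes type (my own sketch, not the real API):

#lang racket

;; A view array is a size plus a procedure from index to element; a
;; strict array is a size plus a vector of elements.
(struct view-array (size proc) #:transparent)
(struct strict-array (size vec) #:transparent)

(define (array-size arr)
  (if (view-array? arr) (view-array-size arr) (strict-array-size arr)))

(define (array-ref arr i)
  (if (view-array? arr)
      ((view-array-proc arr) i)
      (vector-ref (strict-array-vec arr) i)))

;; Pointwise addition doesn't add any elements; it builds a new view
;; whose procedure adds elements on demand.
(define (array+ arr brr)
  (view-array (array-size arr)
              (lambda (i) (+ (array-ref arr i) (array-ref brr i)))))

;; Forcing computes every element once and stores the results.
(define (array-strict arr)
  (define n (array-size arr))
  (strict-array n (build-vector n (lambda (i) (array-ref arr i)))))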
One nifty thing about this is that intermediate operations *hardly
allocate anything*. (This representation automatically "deforests".)
Another is that, to parallelize nearly every array operation, only
`array-strict' has to be parallelized. This includes operations that are
typically hard to parallelize, like matrix multiplication.
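The parallelism sketch, assuming futures and a made-up
`force-in-parallel' (the real library will do something smarter):
forcing a lazy array is just "fill a vector by calling a procedure at
each index", and that fill splits cleanly across cores.

#lang racket
(require racket/future)

;; Fill a length-n vector with (proc i), splitting the indexes across
;; futures in interleaved strides so the writes never overlap.
(define (force-in-parallel n proc [workers (processor-count)])
  (define vec (make-vector n))
  (define fs
    (for/list ([w (in-range workers)])
      (future
       (lambda ()
         (for ([i (in-range w n workers)])
           (vector-set! vec i (proc i)))))))
  (for-each touch fs)
  vec)

;; e.g. (force-in-parallel 1000 (lambda (i) (* i i)))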
The downside is that users of the array library will have to decide
when to convert results to strict arrays. I suspect it won't usually be
too hard (rule of thumb: do it when every element will be referred to
more than once or twice), and I just added arrays of lazy elements for
when it *is* too hard to decide.
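For instance, with the hypothetical arrays brr and crr from above, the
rule of thumb looks like this:

;; sums will be read many times below, so force it once up front
(define sums (array-strict (array+ brr crr)))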
To sum up, the array part of the math library is
1. An expressive multidimensional array API.
2. A real-world test case for data parallelism in Racket.
3. A user-interface experiment.
I think we've got #1 down, and I've done enough testing on #2 to know
that parallelizing the heck out of these things is definitely possible.
For #3, we'll have to wait for people to start using it.
Neil ⊥