[racket] TR Math Collection
I'd originally read the strict vs. view distinction as a technique for
immutability, i.e. N views -> 1 strict array, where any operation on a
view forces a deep clone of the underlying vector (again assuming an
unboxed FlVector, or a Vectorof Integer for suitably small integers).
But absolutely, avoiding intermediate copies by composing operations and
then applying as a single computation is cool.
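A rough sketch of that compose-then-apply idea, in Python rather than Racket (names and details are made up for illustration, not the actual math collection API):

```python
# Illustrative sketch: a "view" is just a function from an index to an
# element. Composing operations composes functions; only forcing the
# result to be "strict" allocates storage, so the whole pipeline runs
# as one fused pass with no intermediate vectors.

def view_add(a, b):
    """Compose two views into a new view; no elements computed yet."""
    return lambda i: a(i) + b(i)

def strict(view, n):
    """Force a view into concrete storage in a single pass."""
    return [view(i) for i in range(n)]

# Three source "arrays" exposed as views over plain lists.
xs, ys, zs = [1.0, 2.0, 3.0], [10.0, 20.0, 30.0], [100.0, 200.0, 300.0]
a, b, c = xs.__getitem__, ys.__getitem__, zs.__getitem__

# Nested adds allocate nothing intermediate: one fused computation.
arr = view_add(a, view_add(b, c))
print(strict(arr, 3))  # [111.0, 222.0, 333.0]
```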
I recently did some quick-and-dirty utility scripts for some data munging,
and after a bit of thought went with 3 basic union types, assuming
monomorphic arithmetic ops: FlSeries (the underlying is an FlVector),
CategoricalSeries (nominals, features, what have you, with an underlying
Vectorof Integer), and ObjectSeries A for all other boxed types.
I did start with a (Vectorof A) but figured:
1. Having only 3 "types" of arrays was manageable. I totally ignore
Number, Real, Complex and the rest of the numeric tower.
2. The preponderance of the calculation work is basic 4-banger arithmetic
on FlVector elements.
3. A straightforward FlVector allocation and low level copy of unboxed
linear memory should be PDQ.
4. Less memory since the bulk of data I'm munging loaded in as flat float
or int vectors.
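The three-way split above could be sketched like this (Python, purely illustrative; only the three series names come from my description, the fields are assumptions):

```python
# Illustrative sketch of the three series kinds behind one small union:
# flat floats, small-int categorical codes, and a boxed catch-all.
from dataclasses import dataclass
from array import array


@dataclass
class FlSeries:             # flat float storage (FlVector analogue)
    data: array             # array('d', ...)


@dataclass
class CategoricalSeries:    # small-int codes plus a label table
    codes: array            # array('q', ...)
    labels: list


@dataclass
class ObjectSeries:         # everything else, boxed
    data: list


s = CategoricalSeries(array('q', [0, 1, 0]), ["red", "blue"])
print([s.labels[c] for c in s.codes])  # ['red', 'blue', 'red']
```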
I guess ultimately it was the memory needs of munging "big data" loaded
into memory that moved me to FlVectors. Just this morning I was loading
250K rows by 150 columns of data and pair-wise determining entropy and
mutual information between the categorical data columns, and I must say
the latest version of Racket + compiler + Typed Racket optimizations is
FLYING. (Of course, I do have an 8-core i7, 8 GB, SSD laptop.)
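That pair-wise pass is roughly this (Python sketch; the column data is made up, and I(X;Y) = H(X) + H(Y) - H(X,Y) is the standard identity, not my actual script):

```python
# Illustrative: entropy and mutual information over two
# integer-coded categorical columns.
from collections import Counter
from math import log2


def entropy(col):
    """Shannon entropy (bits) of a categorical column."""
    n = len(col)
    return -sum(c / n * log2(c / n) for c in Counter(col).values())


def mutual_information(x, y):
    # I(X;Y) = H(X) + H(Y) - H(X,Y), with the joint taken over pairs.
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))


x = [0, 0, 1, 1]
y = [0, 0, 1, 1]        # perfectly dependent on x
z = [0, 1, 0, 1]        # independent of x
print(mutual_information(x, y))  # 1.0 (all of H(X) is shared)
print(mutual_information(x, z))  # 0.0
```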
Your #2 however ... "done enough testing on #2 to know that parallelizing
the heck out of these things is definitely possible". Worthy stuff right
there.
I'll try to find some time to swap in your (Array A) as my underlying
representation and see what happens.
Ray
On Thu, Aug 9, 2012 at 3:11 PM, Neil Toronto <neil.toronto at gmail.com> wrote:
> On 08/09/2012 11:37 AM, Ray Racine wrote:
>
>> I was clicking around on GitHub in the new TR math collection.
>> Impressive, to say the least. Much technique to learn.
>>
>
> :D
>
> It'll help if you're familiar with SciPy's arrays.
>
>
>> One question was in the foundation of (Array A), which is essentially a
>> struct-wrapped (Vectorof A). Fundamentally, can one ultimately expect
>> near performance equivalence between a (Vectorof Float) vs an FlVector?
>>
>
> It depends on what you do with them. Certain functions like `fl+' can
> operate on "unboxed" Floats, but most can't. Generally, if you pull floats
> out of an FlVector and apply non-arithmetic functions, the floats will get
> boxed (put on the heap). If you pull floats out of a (Vectorof Float) and
> apply non-arithmetic functions, there's no extra cost because they're
> already boxed.
>
> For example, my fastest `flvector+' takes about 1/3 the time to run as my
> fastest `vector+'. But my fastest `flvector-map' takes about 4/3 the time
> to run as my fastest `vector-map'. It's slower because it has to allocate a
> float whenever it applies its function argument.
>
> I expect the numbers to change in favor of FlVectors as Racket's compiler
> gets better at analysis and inlining. But TR's optimizer will also be
> improving (I imagine it will unbox vectors eventually), and it will
> probably always be a little faster to use higher-order functions on a
> (Vectorof Float).
>
> Speaking of higher-order, the type (Array A) is a union type: (U
> (Strict-Array A) (View-Array A)). A view array wraps a function of type
> (Indexes -> A). Operating on them (adding, transposing, slicing,
> broadcasting, etc.) generally comes down to function composition. So
>
> (define arr (array+ brr (array+ crr drr)))
>
> doesn't actually add elements, but returns a (View-Array Number) that
> wraps an (Indexes -> Number) that adds elements. This gets the results:
>
> (array-strict arr)
>
> It returns a (Strict-Array Number), which wraps a (Vectorof Number).
>
> One nifty thing about this is that intermediate operations *hardly
> allocate anything*. (This representation automatically "deforests".)
> Another is that, to parallelize nearly every array operation, only
> `array-strict' has to be parallelized. This includes operations that are
> typically hard to parallelize, like matrix multiplication.
>
> The downside is that users of the array library will have to determine
> when to convert results to strict arrays. I suspect it won't usually be too
> hard to determine (rule of thumb: do it when every element will be referred
> to more than once or twice), and I just added arrays of lazy elements for
> when it *is* too hard to determine.
>
> To sum up, the array part of the math library is
>
> 1. An expressive multidimensional array API.
>
> 2. A real-world test case for data parallelism in Racket.
>
> 3. A user-interface experiment.
>
> I think we've got #1 down, and I've done enough testing on #2 to know that
> parallelizing the heck out of these things is definitely possible. For #3,
> we'll have to wait for people to start using it.
>
> Neil ⊥
>
> ____________________
> Racket Users list:
> http://lists.racket-lang.org/users
>