[racket-dev] Math library initial commit almost ready; comments on issues welcome
On Mon, Oct 1, 2012 at 6:08 PM, Neil Toronto <neil.toronto at gmail.com> wrote:
> On 10/01/2012 02:06 PM, Sam Tobin-Hochstadt wrote:
>>
>> On Mon, Oct 1, 2012 at 2:26 PM, Neil Toronto <neil.toronto at gmail.com>
>> wrote:
>> PR 13098 isn't really fixable, in some sense. There's just more data
>> there with broader types, so it will always take longer than for more
>> specific types. All of the things I'd do to fix the problem would
>> speed up both the slow and the fast case, without really affecting the
>> gap between them.
>
>
> You're saying it wouldn't hurt to replace functions with more specifically
> typed functions in general, right? Good point.
That isn't what I meant, but it's true as well ...
> I think you can speed up the slow cases more than the fast ones, though.
> Huge `case->' types tend to have this structure:
>
> (case-> (A0 -> B0)
>         ((U A0 A1) -> B1)
>         ((U A0 A1 A2) -> B2)
>         ((U A0 A1 A2 A3) -> B3)
>         ...)
>
> The `abs' function, for example, starts this way:
>
> (case-> (Zero -> Zero)
>         (One -> One)
>         (Positive-Byte -> Positive-Byte)
>         (Byte -> Byte)
>         (Positive-Index -> Positive-Index)
>         (Index -> Index)
>         ...)
>
> I've done some timing tests using generated types like this:
>
> (case-> (Zero -> Zero)
>         ((U Zero One) -> (U Zero One))
>         ((U Zero One 2) -> (U Zero One 2))
>         ...)
>
> The tests suggest that typechecking the application of a function with a
> type like that is quadratic in the size of the function's type, which
> makes sense. Could you get something nearer to linear time by dividing
> the case types into monotone groups?
I think the revision you're imagining only works in fairly simple
cases -- in practice, the monotone groups would be small enough that
the binary search wouldn't help much. For example, in the snippet of
the type of `abs` you give above, all of the monotone groups are of
size 1 or 2.
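
(For concreteness, a sketch of how such staircase types might be
generated for timing -- the macro name is hypothetical, and the
measurement would be taken by running `raco make` on generated modules
as n grows:)

  #lang typed/racket
  (require (for-syntax racket/base))

  ;; Hypothetical macro: defines an identity function whose case-> type
  ;; has n cases of the shape ((U 0 1 ... i) -> (U 0 1 ... i)).
  (define-syntax (define-staircase stx)
    (syntax-case stx ()
      [(_ name n)
       (with-syntax ([(case-ty ...)
                      (for/list ([i (in-range (syntax->datum #'n))])
                        (define dom `(U ,@(for/list ([j (in-range (+ i 1))]) j)))
                        (datum->syntax stx `(,dom -> ,dom)))])
         #'(begin
             (: name (case-> case-ty ...))
             (define (name x) x)))]))

  (define-staircase f 20)  ; f : (case-> (0 -> 0) ((U 0 1) -> (U 0 1)) ...)
  (f 3)                    ; each application exercises the whole case-> type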
> My timing tests also show that typechecking is apparently quadratic in the
> depth of expressions, regardless of how simple the types are. Is that
> something you already knew?
This surprises me in general, but I wouldn't be surprised if it held
for some particular examples. If you have such examples, they would be
helpful for trying to fix this. However, some algorithms in TR are
inherently super-linear.
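
(As a sketch of how such examples might be produced -- the helper name
is hypothetical -- one could emit an expression nested n deep into a
typed module and time its compilation as n grows:)

  #lang racket
  ;; Hypothetical helper: builds an expression nested n deep, e.g.
  ;; (deep-expr 3) => '(+ 1.0 (+ 1.0 (+ 1.0 x)))
  (define (deep-expr n)
    (for/fold ([e 'x]) ([_ (in-range n)])
      `(+ 1.0 ,e)))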
> (The latter is a medium-ish deal for the math library. Most special
> functions have domains in which they're computed by evaluating a 25- to
> 50-degree polynomial. Using Horner's method, that's an expression 50 to
> 100 deep; if they're Chebyshev polynomials, it's more like 200 to 400.)
Is there a reason to generate these expressions statically? Is it
genuinely faster?
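
(For reference, sketches of both shapes -- the names are hypothetical.
The static version unrolls the polynomial at expansion time, producing
the deep expressions in question; the dynamic version folds over a
coefficient flvector at run time, keeping the typechecked expression
shallow:)

  #lang typed/racket
  (require racket/flonum)

  ;; Hypothetical macro: statically unrolled Horner evaluation of
  ;; c0 + x*(c1 + x*(c2 + ...)). Assumes x is a variable; a real
  ;; version would bind it once to avoid re-evaluation.
  (define-syntax horner
    (syntax-rules ()
      [(_ x c) c]
      [(_ x c0 c1 ...) (+ c0 (* x (horner x c1 ...)))]))

  ;; Dynamic alternative: coefficients stored highest degree first,
  ;; folded at run time.
  (: horner-eval (FlVector Float -> Float))
  (define (horner-eval cs x)
    (for/fold ([acc : Float 0.0]) ([c (in-flvector cs)])
      (+ c (* x acc))))

  (horner 2.0 1.0 2.0 3.0)                 ; => 17.0 (1 + 2x + 3x^2 at x = 2)
  (horner-eval (flvector 3.0 2.0 1.0) 2.0) ; => 17.0, same polynomial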
>>> * Another typed/untyped barrier issue: certain functions are actually
>>> macros so that TR can optimize around their expansions. (Example: by
>>> making `fcvector-ref' a macro, (+ (fcvector-ref xs 0) (fcvector-ref ys 0))
>>> operates entirely on unboxed Float-Complex values.) I really don't want
>>> to make an `untyped/math' library to expose the function versions, but I
>>> can't think of a way around it.
>>
>>
>> There's no way around this at the moment.
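
(For context, the kind of definition being discussed -- the
representation, struct, and names here are hypothetical:)

  #lang typed/racket
  (require racket/flonum)

  ;; Hypothetical representation: Float-Complex elements stored as
  ;; parallel flvectors of real and imaginary parts.
  (struct fcvector ([reals : FlVector] [imags : FlVector]))

  ;; As a macro, the make-rectangular call is visible at each use site,
  ;; so TR's optimizer can keep the surrounding Float-Complex arithmetic
  ;; unboxed; an ordinary function would hand back a boxed value at
  ;; every call.
  (define-syntax-rule (fcvector-ref xs i)
    (let ([v xs] [k i])
      (make-rectangular (flvector-ref (fcvector-reals v) k)
                        (flvector-ref (fcvector-imags v) k))))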
>
>
> I've managed to make some headway here in another part of the library, by
> defining the macros only in #lang racket. If their expansions contain
> typed identifiers, it seems TR is smart enough not to wrap them in
> contracts when the macros are used in typed code.
Yes, TR should be able to make this work in general -- the contract
wrapping doesn't happen until the real reference to the identifier is
expanded.
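
(i.e., a layout like the following -- module and function names
hypothetical:)

  ;; math-impl.rkt
  #lang typed/racket
  (provide do-step)
  (: do-step (Float -> Float))
  (define (do-step x) (* 2.0 x))

  ;; math-macros.rkt -- untyped, but the macro's expansion references
  ;; the typed identifier
  #lang racket
  (require "math-impl.rkt")
  (provide twice)
  (define-syntax-rule (twice x) (do-step (do-step x)))

  ;; client.rkt -- typed: `twice' expands here to direct references to
  ;; `do-step', which TR sees as typed, so no contract wrapping happens
  ;; at this use
  #lang typed/racket
  (require "math-macros.rkt")
  (twice 1.0)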
--
sam th
samth at ccs.neu.edu