[racket-dev] Math library initial commit almost ready; comments on issues welcome
On Mon, Oct 1, 2012 at 6:49 PM, Neil Toronto <neil.toronto at gmail.com> wrote:
> On 10/01/2012 04:20 PM, Sam Tobin-Hochstadt wrote:
>>
>> On Mon, Oct 1, 2012 at 6:08 PM, Neil Toronto <neil.toronto at gmail.com>
>> wrote:
>>>
>>> On 10/01/2012 02:06 PM, Sam Tobin-Hochstadt wrote:
>>>>
>>>>
>>>> On Mon, Oct 1, 2012 at 2:26 PM, Neil Toronto <neil.toronto at gmail.com>
>>>> wrote:
>>>
>>> My timing tests also show that typechecking is apparently quadratic in
>>> the
>>> depth of expressions, regardless of how simple the types are. Is that
>>> something you already knew?
>>
>>
>> This surprises me in general, but I wouldn't be surprised if it were
>> the case for some examples. If you have particular examples, that
>> would be helpful for trying to fix them. However, some algorithms in
>> TR are inherently super-linear.
>
>
> This is similar to the testing code I wrote, and it also exhibits quadratic
> behavior. The `apply*' macro generates the simplest deep expression
> possible. It's used to repeatedly apply a function with the simplest
> one-argument floating-point type possible.
>
> #lang typed/racket/base #:no-optimize
>
> (require racket/flonum
>          (for-syntax racket/base))
>
> (define-syntax (apply* stx)
>   (syntax-case stx ()
>     [(_ f x 0) #'(f x)]
>     [(_ f x n) #`(f (apply* f x #,(- (syntax->datum #'n) 1)))]))
>
> (: simple-flabs (Flonum -> Flonum))
> (define simple-flabs flabs)
>
> (: flabs* (Flonum -> Flonum))
> (define (flabs* x)
>   (apply* simple-flabs x 1000))
>
>
> Typechecking is quadratic in the number of nested applications done in
> `flabs*'. Changing the call in `flabs*' from `simple-flabs' to `flabs'
> doubles the time; changing it to `abs' apparently makes typechecking O(n^3).
The latter is unlikely -- I expect it just increases the constant
factor. But I'll look into this.
>>> (The latter is a medium-ish deal for the math library. Most special
>>> functions have domains in which they're computed by evaluating a 25- to
>>> 50-degree polynomial. Using Horner's method, that's an expression 50 to
>>> 100
>>> deep; if they're Chebyshev polynomials, it's more like 200-400.)
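[The static unrolling described above can be sketched as a macro that expands a fixed coefficient list into a nested Horner expression at compile time. This is a hypothetical illustration, not the math library's actual implementation; `horner` and its coefficients are made up. Note how each coefficient adds two levels of nesting, so a 50-degree polynomial yields an expression roughly 100 deep.]

```racket
#lang racket/base
(require racket/flonum)

;; Hypothetical sketch: (horner x c0 c1 c2) expands statically to
;; (fl+ c0 (fl* x (fl+ c1 (fl* x c2)))), i.e. Horner's method with
;; the recursion unrolled at expansion time rather than run time.
(define-syntax horner
  (syntax-rules ()
    [(_ x c)        c]
    [(_ x c0 c ...) (fl+ c0 (fl* x (horner x c ...)))]))

;; Evaluate 1 + 2x + 3x^2 at x = 2.0: 1 + 2*2 + 3*4 = 17.0
(horner 2.0 1.0 2.0 3.0)  ; => 17.0
```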
>>
>>
>> Is there a reason to generate these expressions statically? Is it
>> genuinely faster?
>
>
> Yes. I assume it's because the CPU can pipeline the entire computation.
This also seems unlikely to me (but almost anything is possible at
this level). Have you looked at the output of the decompiler and/or
the disassembler on the results?
>>> I've managed to make some headway here in another part of the library, by
>>> defining macros only in #lang racket. If their expansions contain typed
>>> identifiers, it seems TR is smart enough to not check their contracts
>>> when
>>> the macros are used in typed code.
>>
>>
>> Yes, TR should be able to make this work in general -- the contract
>> wrapping doesn't happen until the real reference to the identifier is
>> expanded.
>
>
> Nice.
>
> Related question: How do I make this work?
>
> #lang typed/racket
>
> (: plus (Flonum Flonum -> Flonum))
> (define (plus a b)
>   (+ a b))
>
> (module provider racket
>   (require (submod ".."))
>   (provide inline-plus)
>   (define-syntax-rule (inline-plus a b) (plus a b)))
>
> (require 'provider)
>
> (inline-plus 1.2 2.3)
>
>
> I've tried the various module-declaration forms, but can't find the right
> incantation. Am I trying to do something that I can't? (FWIW, I can't make
> it work in untyped Racket, either.)
This program doesn't make sense -- the outer module depends on
'provider (see the `(require 'provider)`) but 'provider depends on the
outer module (that's what the inner `require` is doing). That's a
cycle, and not allowed, typed or untyped.
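[One way to restructure the example so there is no cycle is to move the typed definition into its own submodule, so `provider` requires a sibling submodule rather than the enclosing module. This is a sketch of that idea, not a fix confirmed in the thread; whether TR skips the contract here rests on Sam's earlier remark that wrapping only happens where the real reference is expanded.]

```racket
#lang typed/racket

;; Sketch: `plus' lives in a typed submodule; `provider' requires the
;; sibling submodule `defs' instead of the enclosing module, so the
;; dependency graph is acyclic: main -> provider -> defs.
(module defs typed/racket
  (provide plus)
  (: plus (Flonum Flonum -> Flonum))
  (define (plus a b)
    (+ a b)))

(module provider racket
  (require (submod ".." defs))
  (provide inline-plus)
  (define-syntax-rule (inline-plus a b) (plus a b)))

(require 'provider)

(inline-plus 1.2 2.3)  ; => 3.5
```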
--
sam th
samth at ccs.neu.edu