[racket-dev] Contract barrier inefficiencies
On 12/29/2012 06:18 PM, Robby Findler wrote:
> On Sat, Dec 29, 2012 at 3:57 PM, Matthias Felleisen
> <matthias at ccs.neu.edu> wrote:
>
> I think the questions really concern the interaction between TR's
> generation of contracts and contracts themselves:
>
> 1. TR does not seem to exploit as much knowledge as possible when
> it generates contracts.
> Example: (All (a) ((Foo -> a) Foo -> a)) seems to be such an example
> (perhaps extreme)
> Where can Foo come from? Why does TR wrap a contract around the
> domain of Foo -> a --
> TR proved that it is applied to Foo if anything. Is TR too
> aggressive in negative positions?
I'm almost certain it is. Sam? (Wherefore art thou, Sam?)
I can see an argument for having overly aggressive contracts at first,
before having large test cases that demonstrate TR's implementation is
almost always sound. If these are training wheels, though, we should
consider removing them. The math library alone has at least 18k lines of
TR code, and it exercises the type system pretty thoroughly.
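For concreteness, here's a rough untyped sketch of the contract shape
Matthias describes (hand-written and simplified, not what TR actually
generates; `foo?' and `apply-to-foo' are made-up names):

#lang racket

;; Stand-in for whatever check guards Foo:
(define (foo? v) (exact-integer? v))

;; Roughly (All (a) ((Foo -> a) Foo -> a)) with Foo instantiated as foo?:
(define/contract (apply-to-foo f x)
  ((-> foo? any/c) foo? . -> . any/c)
  (f x))

;; Calling it checks 41 against foo? twice: once as apply-to-foo's second
;; argument, and again when the wrapped f is applied in the body. The second
;; check is the redundant-looking one, since TR already proved that f is
;; only ever applied to a Foo.
(apply-to-foo add1 41)   ; => 42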
> In this case, the chaperone is necessary, since the "a" means one value
> (not multiple), so you cannot eliminate the contract. Sadly.
One asymmetry between multiple arguments and multiple return values is
that the number of arguments a procedure accepts is observable, but the
number of return values isn't. If it were observable, and TR were less
aggressive with the first `Foo', a chaperone would be unnecessary.
There have got to be other uses for that, but I can't think of any
offhand. Something to do with continuations, maybe? But TR could easily
annotate lambdas with the number of values they return.
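A quick way to see the asymmetry in plain Racket:

#lang racket/base

;; The number of arguments is observable without calling the procedure:
(procedure-arity (lambda (f x) (f x)))            ; => 2

;; But the number of return values can only be discovered by calling it:
(call-with-values (lambda () (values 1 2 3))
                  (lambda vs (length vs)))        ; => 3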
> 2. Multiple crossings impose the same contracts over and over. BUT
> 'same' is not necessarily
> expressed as eq?. Can the contract library do better here? Is the
> 'inner loop' assumption
> wrong?
>
>
> The contract system has support for comparing contracts in a useful way
> for this (contract-stronger?). The problem Neil ⊥ identified is that the
> values that go thru them are rarely eq? so checking the contracts that
> are on the values isn't helpful since they've changed just enough to
> avoid being able to detect that they are the "same" value.
>
> At least that's what I understood from him.
That's right.
But the array values - their procedures in particular - almost always
have an equivalent contract. Is it possible to retrieve the contract on
a value, check it, and pass the value through unchanged if the
additional contract you're considering putting on it isn't stronger?
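Something like this sketch is what I have in mind (`maybe-recontract' is a
made-up name, not something the contract library provides; it leans on
`value-contract' and `contract-stronger?'):

#lang racket

;; Only re-wrap a value when the contract already on it doesn't imply the
;; one we're about to add. Sketch only; blame handling is glossed over.
(define (maybe-recontract new-ctc v pos-blame neg-blame)
  (define old-ctc (value-contract v))   ; contract attached to v, or #f
  (if (and old-ctc (contract-stronger? old-ctc new-ctc))
      v                                 ; existing contract already implies new-ctc
      (contract new-ctc v pos-blame neg-blame)))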
> But thinking a little bit more about this: I think that maybe we're
> seeing a slightly distorted picture on this last point. The functions
> that Neil was using in his examples didn't do anything and normally
> functions would do something, so the cost for this point is exaggerated
> as compared to what would happen in a real program.
No. No, no, no. This isn't distorted. With automatically deforesting
arrays, function call overhead is almost everything. The array
procedures themselves do very little: they usually just call other array
procedures, tweaking the input or output. (Example: the procedure of the
array returned by `matrix-transpose' calls the original array's
procedure with swapped row and column indexes.)
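To make that concrete, the non-copying transpose amounts to something like
this (a sketch of the idea only, not math/array's actual code):

#lang racket/base

;; The transposed array's procedure just swaps the two indexes and calls
;; the original procedure, so every element access pays for an extra call.
(define (transpose-proc original-proc)
  (lambda (js)   ; js: a vector of indexes, e.g. #(row col)
    (original-proc (vector (vector-ref js 1) (vector-ref js 0)))))

;; A 2x3 "array" as a bare procedure from indexes to elements:
(define (elt js) (+ (* 10 (vector-ref js 0)) (vector-ref js 1)))
((transpose-proc elt) #(2 1))   ; => 12, the original's element at #(1 2)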
To see an example using arrays that do something, load up
"math/tests/mandelbrot-test.rkt". Add the line
(time (void (mandelbrot 0.05 20)))
somewhere after the `mandelbrot' definition, and run it. On my computer,
it reports 30ms.
Change the language to Racket. Change the line
(define c (array->fcarray (array-make-rectangular x y)))
to
(define c (array->fcarray (inline-array-map make-rectangular x y)))
(because there's currently a problem using `array-make-rectangular' in
untyped code). Run it. On my computer, it reports 2700ms, or about 90x
the time taken by the typed code.
Neil ⊥