[racket-dev] Slow contracts
Oh, sorry: I just meant that it might be useful to run some
micro-benchmarks to determine where the break-even point is between
#:lazy and strict contracts for some fairly simple tree type and an
untyped program that just walks over the tree, say, twice. I'm
imagining measuring the break-even point in terms of the size of the tree.
If the break-even point is a tree with 10 nodes, then I think we'd come
to very different conclusions than if the break-even point were a tree
with 10,000 nodes.
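Concretely, I'm picturing something like the following (just a sketch;
node, make-tree, tree-sum, and bench are made-up names here, and the two
contracts follow the flat vs. #:lazy shapes Eric described below):

  #lang racket
  (require racket/contract)

  (struct node (val left right) #:transparent)

  ;; strict: a flat contract that checks the whole tree eagerly
  (define strict-tree/c
    (flat-rec-contract t
      (or/c #f (struct/c node exact-integer? t t))))

  ;; lazy: a chaperone contract that only checks nodes as they are accessed
  (define lazy-tree/c
    (or/c #f
          (struct/dc node
                     [val   exact-integer?]
                     [left  #:lazy (recursive-contract lazy-tree/c #:chaperone)]
                     [right #:lazy (recursive-contract lazy-tree/c #:chaperone)])))

  (define (make-tree depth)
    (if (zero? depth)
        #f
        (node depth (make-tree (sub1 depth)) (make-tree (sub1 depth)))))

  (define (tree-sum t)
    (if t
        (+ (node-val t) (tree-sum (node-left t)) (tree-sum (node-right t)))
        0))

  ;; apply the given contract at a simulated typed/untyped boundary,
  ;; then walk the tree twice, as the untyped client would
  (define (bench tree/c depth)
    (define t (contract tree/c (make-tree depth) 'typed 'untyped))
    (time (tree-sum t) (tree-sum t)))

Comparing, say, (bench strict-tree/c d) against (bench lazy-tree/c d)
while varying d should show where the two cross over.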
Robby
On Fri, Jun 13, 2014 at 1:06 AM, Eric Dobson <eric.n.dobson at gmail.com> wrote:
> The issue is that TR doesn't statically know how the value will be
> used, so we have to do the computation for determining the break-even
> point at run time. That puts the actual work in the contract system
> (or at least in the contracts that TR generates), and I don't
> understand all of their internal workings.
>
> I do believe there are optimizations we can do, for example unrolling
> the contract so that only every fifth struct contract is a lazy
> chaperone contract. But I have no idea how we could dynamically pick
> the best unrolling strategy.
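>
> Roughly what I have in mind (just a sketch, reusing the node struct and
> the #:lazy pattern from the other sketches in this thread; unrolled-tree/c
> and UNROLL are made-up names):
>
>   (define UNROLL 5)
>
>   ;; check UNROLL levels eagerly with struct/c, then insert a single
>   ;; lazy chaperone layer, and repeat below it
>   (define (unrolled-tree/c [eager-levels UNROLL])
>     (cond
>       [(zero? eager-levels)
>        ;; the lazy layer: defers checking the sub-trees until accessed
>        (or/c #f
>              (struct/dc node
>                         [val   exact-integer?]
>                         [left  #:lazy (recursive-contract (unrolled-tree/c) #:chaperone)]
>                         [right #:lazy (recursive-contract (unrolled-tree/c) #:chaperone)]))]
>       [else
>        ;; an eager layer: checked when the contract reaches this node
>        (or/c #f
>              (struct/c node exact-integer?
>                        (unrolled-tree/c (sub1 eager-levels))
>                        (unrolled-tree/c (sub1 eager-levels))))]))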
>
> On Thu, Jun 12, 2014 at 10:20 PM, Robby Findler
> <robby at eecs.northwestern.edu> wrote:
>> On Tue, Jun 10, 2014 at 2:27 AM, Eric Dobson <eric.n.dobson at gmail.com> wrote:
>>> On Mon, Jun 9, 2014 at 9:46 PM, Eric Dobson <eric.n.dobson at gmail.com> wrote:
>>>> Splitting this out because this is actually a different issue. This is
>>>> about us generating slow contracts.
>>>>
>>>> There are two things in play here.
>>>>
>>>> One is that TR doesn't use the new lazy parts of struct/dc. This would
>>>> require changing struct contracts from flat contracts to chaperone
>>>> contracts. Given that I think we are going to need to make struct
>>>> contracts chaperone contracts in some cases anyway for soundness, that
>>>> might not be a huge loss.
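>>>>
>>>> Roughly, the change would look something like this (just an
>>>> illustration, with tree/c standing in for the recursive contract on
>>>> the sub-trees; it is not what TR emits today):
>>>>
>>>>   ;; now: a flat contract that checks every field eagerly
>>>>   (struct/c node exact-integer? tree/c tree/c)
>>>>
>>>>   ;; with the lazy parts of struct/dc: a chaperone contract that
>>>>   ;; defers each field's check until that field is accessed
>>>>   (struct/dc node
>>>>              [val   exact-integer?]
>>>>              [left  #:lazy tree/c]
>>>>              [right #:lazy tree/c])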
>>> I did performance measurements, and lazy chaperone contracts are about
>>> a factor of 60 slower. I see a couple of ways to improve these numbers,
>>> so they could get better with a bit more work on optimizing. Given that
>>> strict contracts actually change the asymptotic complexity (a strict
>>> contract re-checks the entire structure every time it is applied, while
>>> a lazy one only checks the parts that are actually accessed), I think
>>> this is a reasonable price to pay, especially since data structures
>>> with more than 60 elements are fairly common.
>>
>> Does it make sense to try to find the break-even point for a program
>> that, say, makes two passes over the data structure?
>>
>> Robby