[racket-dev] math collection [was: Hyperbolic functions]
On 06/28/2012 01:52 PM, Robby Findler wrote:
> One more comment (even tho I promised not (and even worse this is a
> kind of repetition)): I think it is easy to fall into the trap of
> thinking "well, I want to limit access to this because I know that X
> writes this code and thus I can be sure that things work"; we
> should really be thinking about technical ways to ensure that things
> work, instead of relying on "smart" people. (One look at the
> correlation between good developer and number of open bugs suggests
> that even if we wanted to rely only on good developers, we'd be in
> trouble.)
>
> I think we also already do a lot of work in this sense and this is
> not, generally speaking, an easy set of problems, but I think it is a
> set where we can offer something and so we should be seeking out such
> places, opening them up to anyone and putting _technical_ barriers in
> place (when possible) to keep our invariants preserved.
>
> In the specific example that started this thread, I think that trying
> something like random testing can be an effective way to make sure
> that contracts and types match up. For example.
Okay, now I finally understand what all you guys are on about.
Also, I think that's a great idea. Currently, randomized testing only
tests *preservation* for numeric primitives, by ensuring that the type
of an expression's value (as computed by `eval') is a subtype of the
expression's type according to TR. In principle, the primitives' giant
case-> types could be proved correct, but:
1) They would only be proved for an ideal implementation
2) It would still take too much friggin' work
3) Vincent would still have to *identify* the right types (theorems).
Randomized testing 1) tests the actual implementation; 2) moves the work
to the machine; and 3) can find counterexamples quickly to make
identifying the right types easier.
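To make the preservation idea concrete, here is a deliberately tiny Python model (all names invented; TR's numeric tower and `eval`-based checker are far richer): declare a type for a primitive, feed it random inputs drawn from the domain, and check that the actual result inhabits the declared range.

```python
import random

# Toy model of a numeric tower: Float <: Real (invented for illustration;
# Typed Racket's actual tower is much finer-grained).
def is_float(v): return isinstance(v, float)
def is_real(v):  return isinstance(v, (int, float))

# Preservation testing: draw random inputs from the domain, run the actual
# implementation, and check the result inhabits the declared range type.
def check_preservation(fn, gen, range_pred, trials=1000):
    for _ in range(trials):
        result = fn(gen())
        if not range_pred(result):
            return result       # evidence that the declared type is wrong
    return None                 # no counterexample found

floats = lambda: random.uniform(-1e6, 1e6)

# abs on a float really returns a float, so "Float -> Float" survives:
assert check_preservation(abs, floats, is_float) is None

# but declaring round as "Float -> Float" is refuted immediately, since
# Python's round returns an int here -- the analogue of the type/contract
# mismatches that random testing can catch:
assert check_preservation(round, floats, is_float) is not None
```

The point is that the machine, not a person, does the case analysis: a wrong clause in a giant `case->` type shows up as a concrete failing input.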
So... what I thought was a great idea was extending randomized testing
to the contract boundary.
Also, I just had an idea. Vincent, how hard would it be to use something
like the current randomized testing to *search* for a more precise type?
I'm thinking of an algorithm that iteratively 1) makes a type like (Real
-> Real) more precise (e.g. (case-> (Float -> Float) (Real -> Real))),
then 2) tests it on random inputs. It would backtrack if testing finds
preservation counterexamples, and repeat until it can't refine the type
anymore.
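A rough Python sketch of that search loop, on the same invented three-point tower (Float <: Real, Integer <: Real; everything here is a hypothetical stand-in for TR's machinery): start from the coarsest arrow, propose more precise case-> clauses, and discard (backtrack on) any candidate that random testing refutes.

```python
import random

# Invented toy tower: Float <: Real, Integer <: Real.
TYPES = ["Float", "Integer", "Real"]

def inhabits(v, t):
    if t == "Float":   return isinstance(v, float)
    if t == "Integer": return isinstance(v, int)
    return isinstance(v, (int, float))     # Real

def gen(t):
    if t == "Float":   return random.uniform(-1e3, 1e3)
    if t == "Integer": return random.randint(-1000, 1000)
    return random.choice([gen("Float"), gen("Integer")])

def passes(fn, dom, rng, trials=500):
    # Preservation check for one arrow: dom -> rng.
    return all(inhabits(fn(gen(dom)), rng) for _ in range(trials))

def refine(fn):
    # Start from the coarsest arrow and try to grow more precise case->
    # clauses; a candidate clause that testing refutes is discarded (the
    # backtracking step), and we stop when no refinement survives.
    clauses = [("Real", "Real")]
    for dom in ["Float", "Integer"]:
        for rng in TYPES:                  # most precise candidates first
            if passes(fn, dom, rng):
                clauses.insert(0, (dom, rng))
                break                      # this clause can't be refined more
    return clauses

# For abs the search recovers, in effect,
# (case-> (Integer -> Integer) (Float -> Float) (Real -> Real)):
print(refine(abs))
```

Of course the real problem is harder: the candidate space is the whole subtype lattice, not three points, and random testing can only refute a clause, never prove it. But as a way of *suggesting* precise case-> types for a human to verify, it might pay for itself.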
Neil ⊥