[racket] Typed Racket: (Parameterof (HashTable ...))
On 02/15/2014 12:31 PM, Greg Hendershott wrote:
> As a Typed Racket beginner (as in, ~ 48 hours into it), I'm still
> learning when to use `ann` (or `#{}`) vs. `cast` vs. `assert`.
I only use #{} in binding forms in untyped macros like `match', which
don't provide a way to declare identifier types and would choke on
`ann'. (So I rarely have to use it.) Otherwise I stick to `ann',
because #{} is reader syntax provided by the typed/racket language, so
it isn't available in typed submodules of untyped modules.
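Here's the sort of place it earns its keep (a sketch; `all-reals' is a
made-up name). `match' gives no other hook to attach a type to the
identifier it binds, and with `...' the identifier collects a list, so
the annotation is the list type:

(: all-reals ((Listof Any) -> (U #f (Listof Real))))
(define (all-reals lst)
  (match lst
    [(list (? real? #{rs : (Listof Real)}) ...) rs]
    [_ #f]))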
Speaking of matching, it might be handy to know how to use occurrence
typing within `match'. This fails to typecheck:
(: maybe-positive? ((U #f Real) -> Boolean))
(define (maybe-positive? x)
  (match x
    [(? real?) (positive? x)]  ; x still has type (U #f Real) here
    [_ #f]))
but this passes, because the `and' pattern re-binds x, so the x in the
body gets the Real type that the real? test established:
(: maybe-positive? ((U #f Real) -> Boolean))
(define (maybe-positive? x)
  (match x
    [(and (? real?) x) (positive? x)]
    [_ #f]))
> And as someone whose predominant static typing experience is C and
> C++, I have the instinct that you don't cast unless you know exactly
> what you're doing. So when I need to use `ann` or `cast` or `assert`,
> I wince. But I'm still trying to learn the appropriate instinct, here.
That sounds like a good start, especially the wincing bit, but I'd
refine it a little.
Types in TR are different from types in C and C++, in that they only
*incidentally* describe data layout. It's best to look at them as
theorems about program values, and at your code together with TR's type
checking and inference rules as proofs of those theorems. For example,
"foo : Real -> Real" is a theorem that (more or less) means "foo is a
function from reals to reals," and (define (foo x) x) is a proof of it.
(It's also a proof of "for all A, foo is a function from A to A," which
is why polymorphic types are written using (All (A) ...).)
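Concretely, the same one-line proof establishes both theorems (foo* is
just a second name so the two annotations can coexist):

(: foo (Real -> Real))
(define (foo x) x)

(: foo* (All (A) (A -> A)))
(define (foo* x) x)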
A cast is an assumption that *you* have to prove, outside the language
and without TR's help. So cast (there's a sketch after the list) when
the encoding of the proof as Racket code
1. Is so opaque that you can't figure it out (but ask here first).
2. Would be too slow or complicated (you're making a practical theory).
3. Is not possible with TR's rules and the other theorems (i.e. types).
#2 and #3 refine the idea "you know what you're doing." :D Asking about
#1 will make you good at determining #3. Determining #2 takes practice
and testing.
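To make #3 concrete, here's a sketch (the scenario is made up): suppose
a config file is documented, outside the program, to contain a list of
reals. `read' can only promise Any, and no annotation will let TR see
the file format spec, so a `cast' discharges the assumption with a
runtime check:

(define data (read))  ; TR types this as Any

;; The proof that `data' is a list of reals lives in the file format
;; documentation, not in the program, so `cast' checks it at runtime.
(define xs (cast data (Listof Real)))

(apply + xs)  ; fine now: xs : (Listof Real)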
TR has gotten precise enough that sometimes *rewriting* code, not just
annotating it, is the best option. For example, it's hard to convince TR
that this code always operates on nonnegative values:
(cond [(x . >= . y) ... operate on (- x y) ...]
      [else ... operate on (- y x) ...])
It's easy with this change, which uses occurrence typing to inject a
theorem about the sign of z into each branch:
(define z (- x y))
(cond [(z . >= . 0) ... operate on z ...]
      [else ... operate on (- z) ...])
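Here's the whole trick as one runnable sketch (`abs-diff' is a made-up
name; it uses exact integers so flonum NaN doesn't cloud the picture).
TR special-cases comparisons against 0, so each branch gets its sign
theorem for free:

(: abs-diff (Integer Integer -> Nonnegative-Integer))
(define (abs-diff x y)
  (define z (- x y))
  (cond [(z . >= . 0) z]   ; z : Nonnegative-Integer here
        [else (- z)]))     ; z : Negative-Integer here, so (- z) is positive

The version with (- x y) and (- y x) in the branches is rejected
because TR doesn't track the ordering relationship between x and y.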
It would be hard to convince skeptical humans about the first version,
too: it depends on whether they know that "x >= y implies x - y >= 0"
is provable from the IEEE 754 floating-point standard. The second
version is cake.
There are a few places in the math library where it isn't possible to
prove that a return value is, say, always nonnegative. In those cases I
resort to things like (max 0.0 x), because I like speed tricks (flonum
ops are faster than branching) and because Vincent has done an awesome
job making the numeric primitive types precise.
Don't let the talk about proving theorems put you off. You've always
done it while coding. TR just formalizes some of the routine parts.
Neil ⊥