# [plt-scheme] The perfect teaching language--Is this too much to ask for?

From: Stephen Bloch (sbloch at adelphi.edu)
Date: Mon Jun 15 13:10:42 EDT 2009

On Jun 15, 2009, at 11:02 AM, hendrik at topoi.pooq.com wrote:

> There's a formal correspondence between statements in a formal system
> and types.  I believe it's called the Curry-Howard correspondence.

Yes, I saw that in grad school.

> Roughly speaking,
>
>   a proposition corresponds to a type
>   a proof of a proposition corresponds to a value of the type.
>   A proof that P implies Q corresponds to a function from P to Q (thus
> we can think of the function as converting proofs of P into proofs
> of Q)
>   A proof of P & Q is a pair of a proof of P and a proof of Q.
> Correspondingly, we get a Cartesian product of types.
>
> There are dependent types, which I write
>    v : T -> S, where S contains free occurrences of v.
> This is a type for functions whose returned values have a type which
> depends on the argument.  In logic, this corresponds to the universal
> quantifier.
>
> And so on.
>
> This whole thing breaks down if you have nonterminating programs,
> because a function F defined as
>   let F = lambda x. F(x)
> gives an infinite recursion, and can be typed to return any type
> whatsoever, even those that correspond to false propositions.
>
> And if you let type be a type, it turns out to be possible to create
> set-theoretical paradoxes out of this stuff, whose "proofs" are
> nonterminating functions.
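The correspondence hendrik sketches can be made concrete in any typed language. Here is a minimal TypeScript sketch (the names `And`, `Implies`, `modusPonens`, `andElimLeft`, and `bottom` are my own illustrative choices, not from the thread): conjunction as a pair type, implication as a function type, and a diverging function as the "proof" of everything that breaks the correspondence.

```typescript
// P & Q: a proof of a conjunction is a pair of proofs.
type And<P, Q> = [P, Q];

// P implies Q: a proof is a function turning proofs of P into proofs of Q.
type Implies<P, Q> = (p: P) => Q;

// Given proofs of P -> Q and of P, modus ponens yields a proof of Q.
function modusPonens<P, Q>(impl: Implies<P, Q>, p: P): Q {
  return impl(p);
}

// Projection: from a proof of P & Q we recover a proof of P.
function andElimLeft<P, Q>([p, _q]: And<P, Q>): P {
  return p;
}

// The correspondence breaks under general recursion: this function
// "inhabits" every type Q -- i.e. it "proves" any proposition --
// but only by never returning.
function bottom<Q>(): Q {
  return bottom<Q>(); // infinite recursion; never actually produces a Q
}

// Treating concrete types as toy propositions:
const proofOfPair: And<number, string> = [42, "evidence"];
console.log(andElimLeft(proofOfPair)); // 42
console.log(modusPonens((n: number) => n.toString(), 7)); // "7"
```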

IIRC, under at least some definitions, typed lambda-calculus is not
only decidable but polynomial-time decidable.  This is a lovely
guarantee of termination, and even termination in a reasonable length
of time, but it also means you can't write programs for problems that
don't have polynomial-time solutions -- in fact, you can't write
programs for problems without a KNOWN polynomial-time algorithm, like
CLIQUE, SAT, Travelling Salesman, etc., not to mention intentionally
non-terminating programs such as operating systems and servers, or
programs that look for counterexamples to the Riemann hypothesis.  It
always seemed to me that by insisting on decidability of term
equivalence, we were throwing out a lot of practical programming
power.  (Part of my dissertation was defining a functional language
in which the only programs you could write were parallelizable to
O(log(n)) time -- not that you would actually want to program in that
language, but just to show that it could be done.)
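The termination guarantee of the simply-typed lambda-calculus comes precisely from the typing rules rejecting self-application: in (lambda x. x x), the variable x would need some type T and the type T -> S at the same time. A hedged TypeScript sketch of the same point (the names `Self` and `selfApply` are mine), showing the escape hatch a recursive type provides:

```typescript
// In the simply-typed lambda-calculus, (lambda x. x x) has no type:
//
//   const omega = (x: ???) => x(x);   // no simple type fits
//
// With an explicit recursive type (TypeScript allows these; the simply-typed
// calculus does not), the term typechecks -- and applying it to itself
// would diverge, which is exactly how the termination guarantee is lost.
type Self = (x: Self) => unknown;
const selfApply = (x: Self) => x(x);
// selfApply(selfApply); // would loop forever; deliberately left uncalled
console.log(typeof selfApply); // "function"
```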

I'm reading Meyer & Reinhold's _'Type' is not a type_ paper right
now.  The crucial paragraph seems to be

"The failure of property (2) (strong normalisation) is not by itself
surprising, since in a general programming language one expects terms
which define divergent computations not to have normal forms.
Similarly, the failure of property (3) (undecidability of term
equivalence) is to be expected. But the failure of property (4) means
that it is not decidable whether a term has a given type, and more
generally whether two types are equal. This undermines the
possibility of effective "static" type-checking before runtime.
Finally, the failure of property (5), conservative extension, is also
serious as we indicate in Section 4 below."

In other words, the issue isn't so much non-terminating or non-
provably-terminating programs -- those are inevitable, as I said in
my previous post -- but non-terminating compile-time type checks.  If
a programmer writes a program that goes infinite, the programmer
deserves what (s)he gets, but if a programmer writes a program on
which the COMPILER goes infinite, whom do you blame?
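A small illustration of how type checking itself becomes computation the compiler must run -- here in TypeScript with type-level Peano arithmetic (`Zero`, `Succ`, `Add` are my own illustrative names, not standard library types):

```typescript
// Type-level Peano numerals: the checker, not the runtime, does the math.
type Zero = { n: "zero" };
type Succ<N> = { n: "succ"; pred: N };

// Type-level addition: the compiler evaluates this recursion at compile time.
type Add<A, B> = A extends Succ<infer P> ? Succ<Add<P, B>> : B;

type Two = Succ<Succ<Zero>>;
type Three = Succ<Two>;
type Five = Add<Two, Three>;

// If the checker agrees that 2 + 3 = 5, this assignment typechecks.
// Make the recursion deep enough and the checker itself gives up
// ("type instantiation is excessively deep") -- a bounded stand-in for
// the non-terminating compile-time type checks at issue here.
const witness: Five extends Succ<Succ<Succ<Succ<Succ<Zero>>>>> ? true : false = true;
console.log(witness); // true
```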

The aforementioned Section 4 points out that
(a) having a type of all types allows you to construct a fixed-point
operator Y;
(b) having a fixed-point operator Y allows you to find a fixed-point
for the "add1" function, i.e. a number which is 1 more than itself,
and hence both even and odd, which is a little disturbing;
(c) the existence of such a number, together with an (apparently)
reasonably-defined conditional operator and (apparently) reasonable
equality axioms, allows you to prove that ANY two terms of the same
type are equal, which is even more disturbing.
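Point (a) can be sketched operationally: once the type system has an unrestricted escape hatch (here TypeScript's `any`, standing in for a type of all types), a fixed-point operator is definable as an ordinary term. This is the eta-expanded Y (the Z combinator, needed under strict evaluation); `Y`, `fact`, and the commented `omega` are my own illustrative names.

```typescript
// With an unrestricted type, the fixed-point operator is just a program.
const Y = (f: any): any =>
  ((x: any) => f((v: any) => x(x)(v)))
  ((x: any) => f((v: any) => x(x)(v)));

// Constructive use: factorial as the fixed point of a non-recursive step.
const fact = Y((self: (n: number) => number) => (n: number): number =>
  n === 0 ? 1 : n * self(n - 1));
console.log(fact(5)); // 120

// Point (b): the "fixed point of add1" is a term o with o = o + 1.
// Operationally it is pure divergence:
//   const omega = Y((self: () => number) => () => 1 + self());
//   omega(); // never returns -- the "number that is 1 more than itself"
```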

As Meyer and Reinhold point out, the usual interpretation for a fixed-
point of "add1" is a nonstandard integer value that corresponds to
non-terminating computations.  The difficulty can therefore be seen
as a conflict between an axiomatization that has non-standard models
and a conditional operator that makes sense on "standard" or
"convergent" values, but goes kerflooey when you bring in such
nonstandard values.

I guess it's analogous to the problem with "the set of all sets" that
led to Russell's paradox, the dismantling of naive set theory, the
_Principia Mathematica_, and thence to Goedel's incompleteness theorems.