[plt-scheme] Re: to define, or to let

From: Bill Richter (richter at math.northwestern.edu)
Date: Mon Apr 19 21:19:02 EDT 2004

Anton clearly wrote what he wants from PLT:

   simply putting a note in the documentation about appropriate uses
   of function application, LET, etc., and that deliberate ordering of
   side effects should use constructs intended for that purpose.

Anton, does PLT agree with you about what's `appropriate'?  I'm
guessing not: with their non-ambiguous eval order (l->r) and the
extra sequencing in letrec, PLT seems to be saying, go ahead and put
sequential side effects into function application, let, etc.

I can't speak for PLT, but I'm guessing that's their position, because
that seems to be the logical outgrowth of a non-ambiguous interpreter.

Of course, if PLT agrees with you about what's appropriate, it would
be a good thing for them to put such `a note in the documentation'.

Anton, you've posted that a lot of C/C++ debugging involves order of
eval problems.  So there's a culture built up, and I suppose there are
good things about this culture that you don't want to abandon.  Fine.

But suppose that C/C++ had mandated a non-ambiguous eval order (say
l->r) from the beginning.  Then these order-of-eval bugs aren't
necessarily bugs.  With ambiguity, (foo goo) is buggy if foo & goo
have side effects that affect each other.  With l->r, that's not a
bug.  Now the bug is when evaluating foo has an unintended side
effect on goo.  Question: are we worse off debugging-wise?
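
To make that concrete (in Scheme rather than C, but the point is the
same; counter, foo & goo are just made-up names), here's a minimal
sketch of the kind of interaction I mean:

(define counter 0)
(define (foo) (set! counter (+ counter 1)) counter)  ; bumps counter
(define (goo) (set! counter (* counter 10)) counter) ; scales counter

(list (foo) (goo))
;; With an unspecified order this is ambiguous: (1 10) if foo runs
;; first, (1 0) if goo runs first.  With mandated l->r it's always
;; (1 10), and the only remaining bug is foo unintentionally
;; affecting goo's result.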

I had trouble with your LC/LC_v prose, so let me forge ahead first:

In LC, there's a deep result called the Standard Reduction theorem.
If an expression can be reduced to a beta-nf, then we can get there by
always reducing the left-most redex.

That's obviously false if we reduce the right-most redex.  Because

((\x . y) foo) => y

for any foo, so take foo to be any LC expression that doesn't have a
beta-nf, e.g. the one in Eli Barzilay's signature:

foo = ((lambda (x) (x x)) (lambda (x) (x x)))

So if we reduce the right-most redex in ((\x . y) foo), we just
reduce foo ---> foo, so we're looping.
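
As an aside, Scheme itself is call-by-value like LC_v, so it has to
reduce foo to a value before performing the application.  Leftmost LC
reduction just throws foo away and returns y, but the analogous
Scheme program (with the symbol 'y standing in for the free variable
y) loops too:

((lambda (x) 'y) ((lambda (x) (x x)) (lambda (x) (x x))))
;; never returns in Scheme: the argument only reduces to itself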

That makes the LC Standard Reduction theorem sound deep.  Now Plotkin,
MF & I proved the LC_v Standard Reduction theorem, which I thought was
both deep & unrelated to Scheme evaluation.  Both claims seem false.
Your step 1 & 2 procedure from p. 46 of mono sounds exactly like
Scheme evaluation.  It was really dumb of me to miss that!  My problem
was that I never read the Machines section, the very next section, 3.2.
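
To see what I mean, here's a toy substitution-based LC_v stepper in
Scheme.  It's my own sketch (value?, subst, step & eval-v are names I
made up, and subst is naive: it doesn't rename binders, so only trust
it where no variable capture can happen).  The way I read the step 1
& 2 procedure, `step' walks left-to-right to the leftmost beta_v
redex whose operator & operand are already values and contracts it,
and that's exactly what a Scheme evaluator does:

;; Terms: symbols are variables, (lambda (x) body) abstractions,
;; two-element lists (M N) applications.

(define (value? m)
  (or (symbol? m)
      (and (pair? m) (eq? (car m) 'lambda))))

(define (subst m x v)  ; replace free x by v in m; naive, no renaming
  (cond ((eq? m x) v)
        ((symbol? m) m)
        ((eq? (car m) 'lambda)
         (if (eq? (car (cadr m)) x)
             m  ; x is shadowed by the binder
             (list 'lambda (cadr m) (subst (caddr m) x v))))
        (else (list (subst (car m) x v) (subst (cadr m) x v)))))

(define (step m)  ; one standard-reduction step on m, #f if none applies
  (cond ((value? m) #f)
        ((not (value? (car m)))            ; operator not a value yet:
         (let ((s (step (car m))))         ;   reduce inside it (leftmost)
           (and s (list s (cadr m)))))
        ((not (value? (cadr m)))           ; operand not a value yet:
         (let ((s (step (cadr m))))        ;   reduce inside it next
           (and s (list (car m) s))))
        ((and (pair? (car m))              ; both are values & operator
              (eq? (car (car m)) 'lambda)) ;   is a lambda: beta_v
         (subst (caddr (car m)) (car (cadr (car m))) (cadr m)))
        (else #f)))                        ; stuck, e.g. (y v)

(define (eval-v m)  ; iterate steps; loops forever on things like Omega
  (let ((s (step m)))
    (if s (eval-v s) m)))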

OK, but what about depth?  If our LC_v expression reduces to a value,
how do we know we can get there by Standard Reduction?  Well, we can
assume our expression is a combination (A B), for otherwise it's
already a value!  Suppose we have a sequence of reductions to a value:

(A B) ---> X_1 ---> X_2 --->  ...  ---> X_n = V

There are only three kinds of reductions we can perform on (A B): we
can reduce A, or reduce B, or beta_v-reduce (A B) itself if both A & B
are values and A is a lambda abstraction.  So all we can do is reduce
A & B until they become values; until then we have a combination,
which isn't a value!  Therefore, both A & B must reduce to values!  So
by induction, A & B can both be reduced to values by Standard
Reduction.  We'd be done if we knew the values Standard Reduction
produces are the same as the ones used in the sequence above, so we're
not quite done.  But maybe
there's a proof of the LC_v Standard Reduction theorem this way, and
then clearly we can use left->right or right->left.
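
For instance, take A = ((\x . x) (\z . z)) and B = ((\y . y) (\w . w)).
Reducing left-to-right:

(A B) ---> ((\z . z) B) ---> ((\z . z) (\w . w)) ---> (\w . w)

and right-to-left:

(A B) ---> (A (\w . w)) ---> ((\z . z) (\w . w)) ---> (\w . w)

Either way we land on the same value.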

Alternatively, you can go through MF's proof of Thm 3.1.4 and see if
the proof works with right->left eval order.  I bet it does.

Once you have a proof of the right->left LC_v Standard Reduction
theorem, then (but not until then!) you can say:

   you get a choice because of the decision to reduce all components
   of an application to values, before performing the application.
   The choice is what order to do that in.  And, it's arbitrary.

Anton, I think you're confusing LC_v with Pure Scheme:

   In this context, the order of evaluation of the sub-expressions of
   an application certainly doesn't matter (without mutation).

Yes, but that's a theorem in LC_v, and you need Thm 3.1.4, the LC_v
Standard Reduction theorem.  Now in Pure Scheme, it's obvious.

This is partly why I always urge you to read mono, learn LC_v.  To
some extent, PLT is using LC_v to write compiler optimizations.  So
you'd then know that LC_v has a strong left->right bias, and it
wouldn't necessarily be easy for them to switch to right->left.

   I'm all for a Scheme-like language (with macros!) that's closer to LC_v,
   with more sophisticated ways of dealing with mutation.  (More sophisticated
   than SML, too.)  However, I'm not aware that PLT Scheme is going in that
   direction.  

But it's a start!  At least, PLT has a non-ambiguous interpreter, and
they can now reason about it, perhaps with LC_v thinking.

I bet it's a good thing.  I was dumbfounded when, earlier in the
thread, I realized, "Hey, R5RS Scheme wasn't an algorithm, it was
ambiguous, and so was C.  Dang, why didn't I realize that?"  Seems
like there are places to go from there, and even new places.

