[plt-scheme] Re: to define, or to let

From: Bill Richter (richter at math.northwestern.edu)
Date: Thu Apr 22 23:14:26 EDT 2004

   > What does HtDP say about mutation & order of eval issues?  Is
   > there something deficient about HtDP's approach?  Doesn't handle
   > enough real-world mutation complexities?

   HtDP doesn't introduce mutation until chapter VII, the second-last
   chapter.  It starts out showing how to use BEGIN to sequence side
   effects.  In the very last section of that chapter, in subsection
   38.3, "The Meaning of Advanced Scheme", it points out that
   expressions with side effects need to be evaluated in a particular
   order: "To avoid such problems, we use the fixed ordering but give
   ourselves more freedom when no state variables are involved."  It's
   talking about the HtDP rules for evaluating programs here, not
   about style in writing real programs.  So, afaict, it doesn't
   address the issue we've been discussing.

Thanks, Anton.  Please clarify: are you saying HtDP doesn't give
nearly enough examples to establish a preferred way of coding
mutation, so we have no recipe for dealing with these sequencing
problems?

What I mean is this.  Scheme is like C or C++ in that you can write
incredibly hard-to-read code, right?  So what we want is good
programming styles, and HtDP has a number of recipes.  It's not
enough to say "don't do this"; that still leaves way too much rope to
hang ourselves with.  But HtDP doesn't have enough mutation recipes?
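For concreteness, here's a sketch of the kind of begin-sequenced
state-variable recipe HtDP's chapter VII introduces; the names
(balance, withdraw!) are my own, not the book's:

```scheme
;; A state variable, plus a function whose side effects are
;; explicitly ordered with begin.
(define balance 100)

(define (withdraw! amount)
  (begin
    (set! balance (- balance amount))  ; the effect comes first,
    balance))                          ; then the new balance

(withdraw! 30)  ; => 70
(withdraw! 30)  ; => 40
```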

   If you're talking about LC_v reduction, then we've gone over this:
   argument evaluation order is arbitrary in LC_v.  

I'm trying to learn some Math, or at least appreciate it.  You're
talking about a theorem that I believe is true but don't know how to
prove: a version of mono's Thm 3.1.4 using the "right-most" standard
reduction function.  In particular: if an LC_v expression reduces to
a value, then the algorithm which always reduces the right-most
beta_v redex will terminate in a value.  Do you know a reference for
this?  Did Matthew Flatt prove this in his version of mono?
Presumably it's true, by the "arbitrary" quote you found in the text
of Ch 3 in mono, and I sketched a possible proof myself.
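To illustrate the claim (a toy example, not a proof): take a closed
LC_v term with two beta_v redexes.  Reducing leftmost-first or
rightmost-first reaches the same value:

```latex
% Leftmost redex first:
((\lambda x.x)(\lambda w.w))\,((\lambda y.y)(\lambda u.u))
  \to (\lambda w.w)\,((\lambda y.y)(\lambda u.u))
  \to (\lambda w.w)(\lambda u.u)
  \to \lambda u.u
% Rightmost redex first:
((\lambda x.x)(\lambda w.w))\,((\lambda y.y)(\lambda u.u))
  \to ((\lambda x.x)(\lambda w.w))\,(\lambda u.u)
  \to (\lambda w.w)(\lambda u.u)
  \to \lambda u.u
```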

   So what's "compelling" you is simply that you want to do it a
   certain way out of convention.  Which is fine, of course.  

Yeah, essentially.  We have this LC/LC_v history, it's a lotta Math,
so even arbitrary choices build up inertia.  But in LC it's not an
arbitrary choice: you can't always reduce the right-most redex and
expect to terminate.

   The real compelling reason for fixed order, of course, is in the
   presence of mutation, but neither LC nor pure LC_v has anything to
   say about that.


   see the LC connection made in the above link.

Hmm, all I saw was this:

 > One of Guy Steele's Rabbit compiler main points was that normal
 > order beta-substitution could be used very successfully to optimize
 > Scheme code.  In the absence of side-effects, and if the programs
 > terminate when using applicative order, there is no way to
 > determine whether normal or applicative order is used.  A compiler
 > can use this profitably to postpone evaluation.  I think that a
 > more appropriate statement is that portable programs should not
 > depend on the evaluation strategy used, i.e. they should work in
 > applicative order, but different implementations may want to use
 > different strategies.

So he's mostly recommending applicative order, which is left->right?
Remind me what normal order is.  The LC hook here is
"beta-substitution"?  Which is quite different from beta_v reduction,
right, because you don't have to reduce the argument to a value
first?  So maybe you're right about the LC/LC_v focus of Scheme.
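For the record: normal order reduces the leftmost-outermost redex
first, substituting the argument unevaluated; applicative order
reduces arguments to values first.  The standard toy term separating
them, with Omega = (\lambda x.xx)(\lambda x.xx):

```latex
% Normal order: the argument \Omega is discarded, never evaluated.
(\lambda x.\lambda w.w)\,\Omega \to \lambda w.w
% Applicative order: \Omega must first be reduced to a value,
% but \Omega \to \Omega \to \cdots, so reduction never terminates.
```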

   >    That is crucial difference!  Yes, the resulting actions of the
   >    computer might be identical, but the impression that it has on the
   >    next person reading the code is quite different.
   > To R5RS Schemers, yes!  To folks who have been thoroughly ingrained in
   > MzScheme's l->r order, maybe not.  Right now, there may not be any
   > such folks.

   There will never be any such folks, unless they are capable of divining
   intent from information that's not present in the source code.

I'm getting tired of not being understood.  It's starting to remind me
of Will Clinger's spectacular induction errors on c.l.s. last year: "We
can't define a mathematical function for evaluation
curly-E : programs ---> output
without using non-Hausdorff topological spaces."  That's false, and
sounds utterly ridiculous.  We need some induction, surely.  But I
figure the problem was nobody on c.l.s. was reading my posts.  Will's
induction skills are surely good enough to understand this.

Now Anton, I speculated above about folks becoming "thoroughly
ingrained in MzScheme's l->r order."  This is possible, right?

   > I was asking Anton if life might be better if C, C++ & Scheme had from
   > the beginning mandated l->r eval order.  Because then we flat out
   > wouldn't have the category of order of eval bugs!!!

   The latter sentence is simply wrong, and that's the whole point.

You just misunderstood me.  Pardon me if my terminology is bad.

   Languages with fixed evaluation order suffer from order of
   evaluation bugs - these days, I see them in Java, and Joe mentioned
   seeing them in Common Lisp.

   Fixing an order means that expressions that depend on that order
   will work, without having to explicitly specify that they depend on
   an order.  

Yes, that's what I meant.

   But the class of bugs we're talking about are not "expressions
   which depend on eval order", 

That's what I meant above by `the category of order of eval bugs'.  

   but rather "expressions which contain unintended side effect
   interactions",

I've talked about those, but I didn't know they were called `order of
eval bugs'.  I'd call them `unintended side effect interaction' bugs.
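To make the distinction concrete, here's a toy sketch (names are my
own invention) of an expression containing an unintended side effect
interaction, whose value depends on argument evaluation order:

```scheme
(define x 0)

(define (bump!)                      ; increments x, returns it
  (begin (set! x (+ x 1)) x))

;; Both arguments mutate x, so the sum depends on which one is
;; evaluated first:
(+ (bump!) (* 10 (bump!)))
;; left->right (MzScheme): 1 + 10*2 = 21
;; right->left:            2 + 10*1 = 12
```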

   A subset of these expressions are those which depend on some eval
   order to work (whether intentional or not); another subset may
   simply be buggy all the time, due to an effect interaction which
   shouldn't exist.  Fixing eval order only eliminates a subset of
   these bugs, the subset of expressions which depend on the chosen
   fixed eval order (intentionally or otherwise).  

Right, that's what I meant.  And we agree that if it's "unintentional"
then we've got a latent bug?  Now a substantive comment:

   But pervasively fixing the order makes it harder to detect the
   other kinds of bug, because now there's no way to tell from the
   code when side effect interactions are intended, and when they're
   not.

OK, this is the point you've been making: you want "source code
annotation".  I'll repeat my point: it sounds good, but to me, having
an algorithm for the compiler takes precedence.  But as you say, this
doesn't bear on letrec = letrec*.

Posted on the users mailing list.