[plt-scheme] Re: to define, or to let
Anton van Straaten <anton at appsolutions.com> responds to me:
> But HtDP doesn't have enough mutation recipes?
It's primarily teaching pure functional programming, without
mutation, for most of its chapters. Mutation is an advanced topic,
left for the end of the book. So no, in-depth treatment of
mutation isn't there. Remember, this is a book that's used at the
introductory level - high school, or college CS intro. It's a
foundation, a beginning - a good one, but far from all there is to
learn.
Thanks, Anton. Well, a big purpose of PLT, I think, is to teach
their book, so maybe the points you're raising are too advanced.
> If you're talking about LC_v reduction, then we've gone over this:
> argument evaluation order is arbitrary in LC_v.
>
> I'm trying to get to learn some Math, or at least appreciate it.
Don't worry Bill, I'm sure if you apply yourself diligently, you'll
eventually learn some math. Appreciating it is even easier. :-)
Hah, that's great!!!
> You're talking about a theorem that I believe is true but don't
> know how to prove: a version of mono's Thm 3.1.4 using the
> "right-most" standard reduction function. So in particular, if
> an LC_v expression reduces to a value, then the algorithm which
> reduces the right-most beta_v redex will always terminate in a
> value. Do you know a reference for this?
Take a look at the evaluation contexts in definition 3.1.1. What
I'm talking about involves switching (o V ... V E M ... M) around
so that it reads (o M ... M E V ... V). Then in the proof of Lemma
3.1.2, for example, you can replace "leftmost argument expression"
with "rightmost argument expression", and the same proof still
applies - follow it and see. If you see a problem with doing this
here or anywhere else, let me know and I'll address it. It's
really pretty straightforward.
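(For concreteness, here's how I'd transcribe the two context
grammars in the notation above -- this is my rendering of the
(o V ... V E M ... M) shapes, not a quote from the book:)

```latex
% Evaluation contexts in the style of definition 3.1.1.
% Left-to-right: the hole E sits after the already-evaluated
% values V and before the unevaluated expressions M.
% Right-to-left: the hole moves so everything to its right
% is already a value.
\[
\begin{array}{lrcl}
\text{left-to-right:} & E & ::= & [\,]\ \mid\ (o\ V \ldots V\ E\ M \ldots M)\\[2pt]
\text{right-to-left:} & E & ::= & [\,]\ \mid\ (o\ M \ldots M\ E\ V \ldots V)
\end{array}
\]
```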
Wait, if you're telling me that in the 10+ page proof of Thm 3.1.4
you can switch left-most to right-most... then you must have read
the proof. If so, congratulations! You're actually learning some
math, and no doubt appreciating it! That's not an easy proof.
Here's something I wrote MF on 31 Dec 1998, and he agreed it was true:
here's a simple proof that the Standard Reduction Thm 3.1.4 follows
from Lemmas 3.1.11 & 3.1.12, avoiding Plotkin's Theorem 3.1.8
altogether. [...]
I'll see if I can find some references over the weekend. There are
certainly papers out there that deal with order of evaluation
issues in LC, with and without mutation.
Cool. What I really want is papers that seriously use some LC or LC_v
for some computer-ological purpose, such as defining the language.
Just because somebody (say Steele's Rabbit) does some beta-reductions
doesn't mean there's any heavy LC.
You can take any reduction that reduces the leftmost redex in an
application, and switch it so you reduce the rightmost redex in
that application instead, and nothing changes, and you can prove
that.
I can prove it? Maybe you're not saying you read the proofs above,
but you're advising me how to right-most 'em. Good advice, thanks!
> he's recommending applicative order, which is left->right?
Applicative order is where arguments are evaluated before applying
a function to them.
Ah, so in LC/LC_v, with only one argument, that's left-most.
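(A toy Scheme sketch of "arguments first" -- the printed order is
only guaranteed in a left-to-right Scheme like MzScheme; a
right-to-left implementation would print "ba" instead:)

```scheme
;; Applicative order: both operands are reduced to values
;; before + is applied.  The displays make the argument
;; evaluation observable.
(define (tagged tag v)
  (display tag)
  v)

(+ (tagged "a" 1) (tagged "b" 2))
;; Under left-to-right order this prints "ab" and returns 3.
```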
> Remind what normal order is.
What LC books have you been reading? Normal order is the
leftmost-outermost reduction of standard LC. It's the order that's
guaranteed to reach a normal form if there is one.
You caught me! Here's the 1st sentence of my paper on my web page:
Barendregt~\cite[Thm.~13.2.2]{Barendregt} proves a {\em Normalization
Theorem} for the $\lambda$ Calculus: if a $\lambda$ expression has a
$\beta$-nf, then the algorithm of reducing the leftmost redex
eventually yields the $\beta$-nf.
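(The stock example behind that theorem, with
$\Omega \equiv (\lambda x.\,x\,x)\,(\lambda x.\,x\,x)$:)

```latex
% \Omega has no normal form, yet (\lambda x.\,y)\,\Omega does:
\[
(\lambda x.\,y)\,\Omega \;\to_{\text{leftmost}}\; y,
\qquad
(\lambda x.\,y)\,\Omega \;\to_{\text{applicative}}\;
(\lambda x.\,y)\,\Omega \;\to\; \cdots
\]
% Reducing the leftmost (outermost) redex reaches the
% \beta-nf y; evaluating the argument first loops forever.
```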
Why do you keep saying leftmost-outermost when I say leftmost?
> So maybe you're right about the LC/LC_v focus of Scheme.
Some other good papers on the subject are Sussman & Steele's
"Lambda Papers".
Thanks, I didn't get much out of them the last time I looked.
Bill> Now Anton, I speculated above about folks becoming "thoroughly
Bill> ingrained in Mzscheme's l->r order." This is possible, right?
The implications of what you speculated, relative to what Joe was
saying, are not possible, no.
no no no no no. I mean this: You're "thoroughly ingrained" in various
C++ techniques, right? I don't mean that you could never make a
mistake using them, but you're fluent in them.
Someone "thoroughly ingrained in l->r order" cannot infer where
those annotations are *not* necessary.
Sure!
> I've talked about those, but I didn't know they were called `order of
> eval bugs'. I'd call them `unintended side effect interaction' bugs.
OK, but the latter is the category of bugs we're concerned with,
when you talk about eliminating constructs intended for
communicating order independence.
Good.
I'll repeat my point. You can have an algorithm for the compiler. You
can use left->right evaluation order. That doesn't prevent you from
having constructs intended to communicate evaluation order independence,
regardless of what their evaluation order actually is.
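(An illustration in plain Scheme, with made-up procedure names: in
R5RS the argument order of an application is unspecified, so a bare
application already "communicates" independence, even though an
implementation like MzScheme actually fixes left-to-right order:)

```scheme
;; A bare application reads as a claim that the arguments
;; are independent -- whatever order the compiler picks:
(f (g x) (h y))        ; "g and h may run in either order"

;; When the order matters, spell it out:
(let* ((a (g x))       ; g runs first
       (b (h y)))      ; then h
  (f a b))
```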
I'm not convinced that it's good for a language to have such
non-enforced constructs. I mean, it's good to be able to convey
information to yourself and the other coders. But if this information
turns out to be false (perhaps because of modifications made by
someone else), then we have serious trouble.
It also doesn't prevent you from using tools, like BrBackwards which I
posted earlier, to test that your programs don't misbehave when evaluation
order is switched. Empirical experience shows that such bugs show up
fairly quickly, even though they're tough to detect statically.
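(A minimal sketch of the kind of bug such a tool catches -- the
names here are made up:)

```scheme
;; Both arguments mutate `counter`, so the result depends
;; on which argument is evaluated first.
(define counter 0)

(define (next!)
  (set! counter (+ counter 1))
  counter)

;; Left-to-right evaluation gives (1 2); run the same program
;; with the argument order reversed, as a tool like BrBackwards
;; would, and it gives (2 1) -- the hidden dependence shows up
;; immediately.
(list (next!) (next!))
```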
Me, I don't know, but I really doubt MF would be satisfied by this.