[plt-scheme] Re: to define, or to let
Bill Richter wrote:
> But HtDP doesn't have enough mutation recipes?
It's primarily teaching pure functional programming, without mutation, for
most of its chapters. Mutation is an advanced topic, left for the end of
the book. So no, in-depth treatment of mutation isn't there. Remember,
this is a book that's used at the introductory level - high school, or
college CS intro. It's a foundation, a beginning - a good one, but far
from all there is to learn.
> > If you're talking about LC_v reduction, then we've gone over this:
> > argument evaluation order is arbitrary in LC_v.
>
> I'm trying to get to learn some Math, or at least appreciate it.
Don't worry Bill, I'm sure if you apply yourself diligently, you'll
eventually learn some math. Appreciating it is even easier. :-)
> You're talking about a theorem that I believe is true but don't know
> how to prove: a version of mono's Thm 3.1.4 using the "right-most"
> standard reduction function. So in particular, if an LC_v expression
reduces to a value, then the algorithm which reduces the
> right-most beta_v redex will always terminate in a value. Do you know
> a reference for this?
Take a look at the evaluation contexts in definition 3.1.1. What I'm
talking about involves switching (o V ... V E M ... M) around so that it
reads (o M ... M E V ... V). Then in the proof of Lemma 3.1.2, for
example, you can replace "leftmost argument expression" with "rightmost
argument expression", and the same proof still applies - follow it and
see. If you see a problem with doing this here or anywhere else, let me
know and I'll address it. It's really pretty straightforward.
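In grammar terms, the switch looks like this (my sketch, abbreviating
the context grammar of definition 3.1.1 to the application case):

   left-to-right:  E ::= [ ]  |  (o V ... V E M ... M)  |  ...
   right-to-left:  E ::= [ ]  |  (o M ... M E V ... V)  |  ...

In the first, the hole can only sit to the right of the
already-evaluated values, so the leftmost unevaluated argument is
reduced next; in the second, the hole sits to their left, so the
rightmost one is.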
I'll see if I can find some references over the weekend. There are
certainly papers out there that deal with order of evaluation issues in
LC, with and without mutation.
> > So what's "compelling" you is simply that you want to do it a
> > certain way out of convention. Which is fine, of course.
>
> Yeah, essentially. We have this LC/LC_v history, it's a lotta Math,
> so even arbitrary choices build up inertia, but in LC, it's not an
> arbitrary choice, you can't reduce the right-most redex.
We're mainly talking about LC_v, where the response to that is, sure you
can. You can take any reduction that reduces the leftmost redex in an
application, and switch it so you reduce the rightmost redex in that
application instead, and nothing changes, and you can prove that.
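A trivial example (mine):

   (+ ((lambda (x) x) 1) ((lambda (y) y) 2))

   leftmost redex first:   -> (+ 1 ((lambda (y) y) 2)) -> (+ 1 2) -> 3
   rightmost redex first:  -> (+ ((lambda (x) x) 1) 2) -> (+ 1 2) -> 3

Absent side effects, there's no way to tell which order was used.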
> > see the LC connection made in the above link.
>
> Hmm, all I saw was this:
>
> > One of the main points of Guy Steele's Rabbit compiler was that
> > normal order beta-substitution could be used very successfully to
> > optimize Scheme code. In the absence of side-effects, and if the
> > programs terminate when using applicative order, there is no way
> > to determine whether normal or applicative order is used. A
> > compiler can use this profitably to postpone evaluation. I think
> > that a more appropriate statement is that portable programs should
> > not depend on the evaluation strategy used, i.e. they should work
> > in applicative order, but different implementations may want to
> > use different strategies.
>
> So he's mostly recommending applicative order, which is left->right?
Applicative order is where arguments are evaluated before applying a
function to them. Basically the same reduction order as LC_v, but
applicative order is sometimes taken to mean reducing inside abstractions,
too. As with LC_v, left->right is nothing more than an arbitrary
convention.
> Remind me what normal order is.
What LC books have you been reading? Normal order is the
leftmost-outermost reduction of standard LC. It's the order that's
guaranteed to reach a normal form if there is one.
> The LC hook here is "beta-substitution"?
> Which is quite different from beta_v reduction, right, because you
> don't have to reduce the argument to value, right?
Beta substitution is the substitution of argument for formal parameter
that takes place in beta reduction, under any of the reduction systems.
The difference you're describing here is the difference between normal
order and applicative order (or call-by-value) reduction.
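The standard example of that difference, writing Omega for the
self-looping term ((lambda (y) (y y)) (lambda (y) (y y))), and with z
just a free variable:

   ((lambda (x) z) Omega)

   normal order:      contract the outermost redex first, giving z
   applicative order: reduce the argument Omega first, and loop forever

which is exactly why normal order is the one guaranteed to find a
normal form when one exists.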
> So maybe you're right about the LC/LC_v focus of Scheme.
Some other good papers on the subject are Sussman & Steele's "Lambda
Papers".
OK, this next one requires some context:
Bill> I mean, the 2nd is driving home the point that we really
Bill> want to evaluate foo first.
Joe> That is crucial difference! Yes, the resulting actions of
Joe> the computer might be identical, but the impression that
Joe> it has on the next person reading the code is quite different.
Bill> To R5RS Schemers, yes! To folks who have been thoroughly ingrained in
Bill> Mzscheme's l->r order, maybe not. Right now, there may not be any
Bill> such folks.
Anton> There will never be any such folks, unless they are capable
Anton> of divining intent from information that's not present in
Anton> the source code.
Bill> I'm getting tired of not being understood.
...
Bill> Now Anton, I speculated above about folks becoming "thoroughly
Bill> ingrained in Mzscheme's l->r order." This is possible, right?
The implications of what you speculated, relative to what Joe was saying,
are not possible, no. Joe was pointing out the information that's
communicated by explicit annotations related to order of evaluation, and
perhaps more importantly, the information conveyed by *lack* of those
annotations. Someone "thoroughly ingrained in l->r order" cannot infer
where those annotations are *not* necessary, without a deeper
understanding of the semantics of the code than would otherwise be needed.
So, while it's true that someone expecting l->r order everywhere would
read code accordingly, it doesn't help with the point Joe and I are
interested in.
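To make the annotation point concrete (my example, using standard
forms):

   ;; LET* announces an intended sequence: the first READ must
   ;; happen before the second.
   (define (two-reads)
     (let* ((x (read))
            (y (read)))
       (cons x y)))

   ;; R5RS LET makes no ordering promise, so writing LET here tells
   ;; the reader that the inits are independent of each other's
   ;; effects.
   (define (two-sums n)
     (let ((x (* n n))
           (y (+ n 1)))
       (list x y)))

Someone who reads both forms as "l->r everywhere" sees two equivalent
pieces of code, and the distinction the author was drawing is lost.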
> > Fixing an order means that expressions that depend on that
> > order will work, without having to explicitly specify that
> > they depend on an order.
>
> yes, that's what I meant.
>
> > But the class of bugs we're talking about are not "expressions
> > which depend on eval order",
>
> That's what I meant above by `the category of order of eval bugs'.
>
> > but rather "expressions which contain unintended side effect
> > interactions".
>
> I've talked about those, but I didn't know they were called `order of
> eval bugs'. I'd call them `unintended side effect interaction' bugs.
OK, but the latter is the category of bugs we're concerned with, when you
talk about eliminating constructs intended for communicating order
independence.
> > A subset of these expressions are those which depend on some eval
> > order to work (whether intentional or not); another subset may
> > simply be buggy all the time, due to an effect interaction which
> > shouldn't exist. Fixing eval order only eliminates a subset of
> > these bugs, the subset of expressions which depend on the chosen
> > fixed eval order (intentionally or otherwise).
>
> Right, that's what I meant. And we agree that if it's "unintentional"
> then we've got a latent bug?
It may not just be latent; it may be a manifest bug.
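For instance (my example):

   (define (probe)
     (let ((x 0))
       (+ (begin (set! x 1) 0)   ; first operand sets X as a side effect
          x)))                   ; second operand reads X

   ;; left->right evaluation:  the SET! runs first, so (probe) => 1
   ;; right->left evaluation:  X is read while still 0, so (probe) => 0

Nothing in the source signals that this depends on the chosen order.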
> Now a substantive comment:
>
> > But pervasively fixing the order makes it harder to detect the
> > other kinds of bug, because now there's no way to tell from the
> > code when side effect interactions are intended, and when they're
> > not.
>
> OK, this is the point you've been making, you want "source code
> annotation", and I'll repeat my point: it sounds good, but to me,
> having an algorithm for the compiler takes precedence.
I'll repeat my point. You can have an algorithm for the compiler. You
can use left->right evaluation order. That doesn't prevent you from
having constructs intended to communicate evaluation order independence,
regardless of what their evaluation order actually is.
It also doesn't prevent you from using tools like BrBackwards, which I
posted earlier, to test that your programs don't misbehave when
evaluation order is switched. Empirical experience shows that such bugs
show up fairly quickly, even though they're tough to detect statically.
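The core of such a tool can be small. This isn't the BrBackwards code,
just a toy sketch of the idea in portable syntax-rules: wrap an
application in R2L and its operator and operands are evaluated
right-to-left instead.

   (define-syntax r2l
     (syntax-rules ()
       ((_ expr ...)
        (r2l-reverse (expr ...) ()))))

   ;; Reverse the expressions into an accumulator.
   (define-syntax r2l-reverse
     (syntax-rules ()
       ((_ (e rest ...) (acc ...))
        (r2l-reverse (rest ...) (e acc ...)))
       ((_ () (acc ...))
        (r2l-bind (acc ...) ()))))

   ;; Bind each reversed expression to a fresh temporary in turn;
   ;; hygiene keeps the TMPs distinct, and the final application sees
   ;; the temporaries back in their original order.
   (define-syntax r2l-bind
     (syntax-rules ()
       ((_ (e rest ...) (t ...))
        (let ((tmp e))
          (r2l-bind (rest ...) (tmp t ...))))
       ((_ () (t ...))
        (t ...))))

   ;; (r2l list (begin (display 1) 'a) (begin (display 2) 'b))
   ;; displays 21 and returns (a b).

A real tool would of course apply this transformation to whole
programs, rather than to individual expressions you wrap by hand.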
So my real point is that the static "algorithm for the compiler" is only
part of the picture. Just because some job isn't handled statically by
the compiler doesn't mean we should ignore it. Types aren't usually
checked statically by the Scheme compiler, for example, but we don't
ignore them.
> But as you say, this doesn't bear on letrec = letrec*.
We're not discussing letrec = letrec*. We're discussing whether
evaluation order should be relied on in constructs like LET, and in
function application. That also applies to whether you can rely on
evaluation order in LETREC. But it still has nothing to do with the
R5RS conformance aspect of letrec = letrec*, which concerns whether all
initializers are evaluated before any of their variables are assigned,
and is independent of order of evaluation.
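To illustrate that last distinction (my example):

   (letrec ((a 1)
            (b (+ a 1)))   ; refers to A's value
     b)

   ;; letrec*: the inits run in order and each variable is assigned
   ;; immediately, so B's init sees A = 1 and this returns 2.
   ;; R5RS letrec: each init must be evaluable without referring to
   ;; the value of any of the variables, so this is an error -- under
   ;; *any* evaluation order.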
Anton