[plt-scheme] to define, or to let

From: Paul Graunke (ptg at ccs.neu.edu)
Date: Wed Mar 24 01:13:31 EST 2004

On Tuesday, March 23, 2004, at 01:34 AM, Anton van Straaten wrote:
> First, correct programs still have well defined meanings,

but it is undecidable which "programs" are correct (or, depending on
how you look at it, which syntactic thingies are actually programs).
This is really bad, IMHO.
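
To make that concrete, here is a contrived sketch (the example is
mine): the expression below depends on evaluation order exactly when
mystery returns #t, and mystery can stand in for an arbitrarily hard
computation, so no algorithm can classify all such "programs".

    (define flag 0)
    (define (mystery)           ; stands in for an arbitrary computation;
      (odd? (expt 3 75)))       ; this particular one returns #t
    (+ (begin (if (mystery) (set! flag 1) #f) 0)
       flag)
    ;; => 1 if the first argument of + is evaluated first,
    ;;    0 if the second argument is.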

> Strangely, regardless of the theoretical horror of such a concept,
> in practice it's not a big deal - C and C++ programmers have done
> this for decades, and it's really easy to avoid writing incorrect
> code in this respect.

and they still drag their knuckles on the ground when they walk.
I don't want to emulate them.

>
> OTOH, if you fix evaluation order throughout the language to
> something useful, there's nothing but your own voluntary restraint
> to prevent you from taking advantage of it, which makes it possible
> to write valid code which is inherently fragile, and not
> necessarily obviously so - because side effect ordering in any
> expression becomes acceptable, so you can't tell when they belong
> and when they don't.

They always belong.
>
> But here's the real point, which leads to the answer to your
> question: it's not so much that I specifically want to program in
> the sort of language you describe, but rather that I want to be
> able to express when I mean to use eval order dependencies, and
> when I don't.

Adding an extra annotation for a static analysis tool that will tell
you which expressions are effect-free (or rather, order independent)
would be nice.  FX, Leroy's POPL 1999 paper, and some (unpublished?)
work by Matthias and Andrew Wright all try, with varying degrees of
success, to sort this out.  Of course, whether the code will actually
perform a side effect is not the same as whether a static
approximation thinks it might perform one.
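
A small illustration of that gap (the example is mine): maybe-bang
syntactically contains a set!, so any sound effect analysis has to
flag it as possibly effectful, yet none of the calls below ever
actually performs the assignment.

    (define hits 0)
    (define (maybe-bang x)
      (if (negative? x)         ; never true for the calls below
          (set! hits (+ hits 1))
          #f)
      (* x x))

    (map maybe-bang '(1 2 3))   ; dynamically pure, statically "impure"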

> If you take that capability away from me, by fixing eval order
> throughout the language, then I'll answer your question by saying
> that I see a tradeoff, between being able to express what I want to
> express and dealing with the practically trivial issue of avoiding
> the resulting potential for ambiguity, vs. having useful
> information removed from the source, not being able to express what
> I want to express, and having certain kinds of order dependency
> bugs actually become harder to identify (due to lack of expression
> of intent).


>
> Before arguing with any of the above, try my thought experiment:

Too late.
>
> Imagine a language which has duplicate sets of constructs for
> everything, including function application, binding, etc.  One set
> of constructs guarantees left-to-right evaluation; the other also
> guarantees unambiguous evaluation, but has an order of evaluation
> that makes it extremely difficult to abuse the constructs to
> achieve desirable side effect ordering (perhaps using a formula
> based on things like function name, number of arguments, etc. -
> obviously not realistic, just for the sake of the point.)
>
> Questions:
>
> 1.  This language now has a well-defined meaning for every expression,
Yes.
> even incorrect ones (since you may still incorrectly try to use the
> weird-order constructs to try to sequence effects, which is almost
> certainly incorrect).

I would not call ordering effects incorrect.
> Aside from possibly not seeing the need for the extra set of
> constructs, do you see any other problems with such a language?
Well-defined evaluation order is a good thing.  I'm glad you agree :-)
>
> 2.  Would you simply always use the L-to-R constructs in this
> language, even in cases where you knew order was irrelevant, and
> avoid the other constructs?

I want to minimize the time I spend thinking about trivial crap.
Hence design patterns, making the code structure follow the structure
of the data, or any other routine, knee-jerk decision-making process
I can use to write code with less thinking time is a win.  I would
pick one order and stick with it.  I would pick the simplest,
clearest order.

>
> 3.  Even if your answer to #2 is "Yes", can you see why someone
> might want to use both?
>

No.  I agree with your desire to express that you expect your code
to be evaluation order independent.  I argue, though, that neither
fixing some weird order nor letting the compiler change the behavior
at whim provides this.

> The point of the above should be reasonably clear.  It's not that
> "unspecified" eval order is important per se - as I've said, what
> I'm interested in is the ability to express the information
> contained in both sets of constructs.  In the absence of a language
> such as the above, I find R5RS + real Scheme implementations
> perfectly reasonable, as long as it's understood that function
> application, and LET and friends, are not supposed to be used to
> sequence effects, even if the implementation happens to support
> that.

I've seen another graduate student struggle for hours tracking down
a bug that appeared when switching from mzscheme to larceny.  The
right-hand sides of a letrec are evaluated in a different order,
which caused the bug.  You may argue the code is incorrect, but if
there is no reasonable way to detect correct (I think you mean
valid, or well-formed, or something) code, then I claim the system
is unreasonable.
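
The failure mode is easy to reconstruct (this sketch is mine, not
the student's actual code):

    (define counter 0)
    (define (next!)
      (set! counter (+ counter 1))
      counter)

    (letrec ((a (next!))
             (b (next!)))
      (list a b))
    ;; An implementation that evaluates the right-hand sides left to
    ;; right yields (1 2); one that evaluates them right to left
    ;; yields (2 1).  R5RS permits either.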

> In practice, I find that the natural documenting of intent by use
> of the appropriate constructs is sufficient to easily avoid
> incorrectness in this area, especially when looking at other
> people's code.  That's actually not the case in Java, where such
> distinctions are hidden by the eval order.
>
>> When crap like that happens, one can blame the programmer.
>> Someone I know told me he always writes his code in ANF to avoid
>> this problem.  Sure, one can claim that a programmer should write
>> only code that is independent of evaluation order (barring time
>> and space consumption).  The question I have then is, how can I
>> tell?  Did I write my code so that it is independent of evaluation
>> order, or did I mess up?
>
> I'm talking about using sequencing constructs when that's what you
> need, and not using them when you don't need them.  If you depend
> on sequence in an expression which does not guarantee a useful
> sequence, that's an error.
>

So we can eliminate all such errors by guaranteeing a useful sequence.

> If you use side effects in such expressions, it should be obvious
> by simple inspection that they don't have any complicated
> consequences.

I don't know how to look at a call to a higher-order function passed
in as an argument and tell whether it has side effects.  In Haskell
you can tell (well, you could before they added imprecise
exceptions), but not in Scheme---at least not in Scheme without
amazing (soft typing) tool support.
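
A minimal illustration (the example is mine): at the call site below
there is no local way to tell whether f is pure, and therefore no
local way to tell whether the order in which the arguments to cons
are evaluated matters.

    (define (pair-up f)
      (cons (f 'left) (f 'right)))

    (pair-up symbol->string)      ; pure: any order gives ("left" . "right")

    (define seen '())
    (pair-up (lambda (tag)        ; effectful: the final contents of SEEN
               (set! seen (cons tag seen))
               tag))              ; depend on which argument ran first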

> This is good practice anyway,

if it were possible

> and it's easy to do in Scheme, especially if you're writing fairly
> functional code.  C/C++ programmers do it all the time, too.
>
I'm skeptical of C/C++ programmers doing their laundry correctly.

>> If the compiler writer cannot write a static analysis that can tell,
>> how am I supposed to know?
>
> Not every property of a program can be determined by static
> analysis.  Hey, we're talking about a dynamically-typed language!

Right.  In Python you cannot tell if a program is closed.  Variable
bindings can be dynamically removed.  Blech.

I would like to be able to tell if the syntax I write is a well-defined
program or whether the compiler will arbitrarily do different random
things when a customer runs the code as opposed to when I test it.

[Snip]
> OTOH, if you document the intent, you can at the very least perform
> a dynamic analysis.  Besides, documented intent makes potential
> problems easy to avoid.

There is no runtime check that a program will behave the same way
under all permissible evaluation orders, unless you try them all.
(If there is only one permissible order, this is trivial.)
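
For what it's worth, "try them all" would look something like this
(a brute-force sketch of mine, purely illustrative): with N argument
thunks it costs N! runs, and the caller has to be able to rebuild
the initial state for every run, which is itself part of the problem.

    (define (indices n)              ; builds the list (0 1 ... n-1)
      (let loop ((i 0))
        (if (= i n) '() (cons i (loop (+ i 1))))))

    (define (remove-one x lst)       ; drop the first occurrence of x
      (if (eqv? x (car lst))
          (cdr lst)
          (cons (car lst) (remove-one x (cdr lst)))))

    (define (permutations lst)
      (if (null? lst)
          '(())
          (apply append
                 (map (lambda (x)
                        (map (lambda (p) (cons x p))
                             (permutations (remove-one x lst))))
                      lst))))

    ;; make-thunks must rebuild the initial state on every call.
    (define (order-independent? make-thunks)
      (let* ((n (length (make-thunks)))
             (run (lambda (order)    ; evaluate in ORDER, report by position
                    (let ((thunks (make-thunks))
                          (results (make-vector n)))
                      (for-each (lambda (i)
                                  (vector-set! results i
                                               ((list-ref thunks i))))
                                order)
                      (vector->list results))))
             (runs (map run (permutations (indices n)))))
        (let loop ((rs (cdr runs)))  ; all runs must agree
          (or (null? rs)
              (and (equal? (car rs) (car runs))
                   (loop (cdr rs)))))))

    ;; Two thunks that share state: the orders disagree.
    (order-independent?
     (lambda ()
       (let ((k 0))
         (list (lambda () (set! k (+ k 1)) k)
               (lambda () (set! k (+ k 1)) k)))))
    ;; => #f
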
>
>> An alternative to analysis is methodical
>> construction---i.e. write your programs in ANF or CPS so they
>> pin down the order explicitly.
>
> Pinning the order down explicitly where it matters is the issue,
> and being able to tell the difference by simple inspection.
>
>> If one argues that this is the
>> only reasonable way (or the most reasonable way) to write code,
>> then design a subset of scheme that accepts *only* programs in
>> that form.
>
> So you're still talking about pinning order down explicitly through the
> whole program?  That would miss my point.

If pinning down an order is the only way I can pin down the meaning
of a program, then that is what must be done.  Otherwise the behavior
is arbitrary, unspecified nonsense.  Sorry.
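
Concretely, the ANF discipline mentioned above amounts to rewrites
like this (the sketch is mine):

    (define (f x) (display "f ") x)
    (define (g y) (display "g ") y)

    (+ (f 1) (g 2))         ; R5RS does not say whether "f " or "g "
                            ; prints first

    (let* ((a (f 1))        ; ANF-style: (f 1) first,
           (b (g 2)))       ; then (g 2), by the definition of let*
      (+ a b))
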
>
>> *Any* time a programmer has to maintain a complex
>> invariant that depends on code scattered across the source base,
>> it is always helpful to have the *system* enforce, check, or
>> maintain the invariant for you.  Just think about memory management.
>
> The analogy that makes sense to me is types.  Many of the arguments
> you and others have made can be applied to types - in particular,
> the static analysis point.  DrScheme lets you write code that will
> blow up at runtime with type errors, when it's possible to prevent
> that.  Yet we use it anyway, despite the availability of ML and
> Haskell.
>
>> Sigh,
>>
>> Paul "don't know how to change people" Graunke
>
> You have to address some issues first, you can't simply wave a
> magic "issues which I haven't addressed should go away" wand.
> We're really talking about tradeoffs and choices related to
> imperfect tools.  And it's not even clear that your perfect tool
> would satisfy me, and possibly vice versa.
>
> Anton

If the issue is compilation optimizations, I'm not sympathetic.
I'm glad to hear this does not appear to be your main concern.
If you want a way to express an invariant that you think should
hold, that's great.  Using either a static approximation to the
check or a dynamic safety check would be fine.  I don't see any
computationally feasible way to check that a program behaves the
same way under all sequential interleavings of function argument
evaluation.  That is a really messy problem.

I see nothing gained by using an ill-defined language.  I do not buy
your argument that anything is "checked" this way.  I do not see any
tradeoff.  Again, the most reasonable ways I can see to address the
problem of checking whether code is in fact evaluation order
independent are through static analysis (via the above-mentioned
effect analysis efforts) or restricted program construction
(elimination of side-effecting operations).  Trying all possible
orders at runtime is not sensible.  Picking one arbitrarily, whether
at compiler construction time, compilation time, runtime, or
whenever, does not really check anything.

The C community seems to argue that all C compiler optimizations
are semantics preserving for valid C programs because the definition
of a valid C program is one that behaves the same under all C compilers.
The discussion of valid Scheme programs here has a similar, distasteful
flavor.

Paul "looking for a magic wand" Graunke

P.S. Sorry if I come off sounding annoyed.  I just really want to be
     able to look at code and figure out what it means/does, and I'm
     not able to do that without a semantics (and a decision
     procedure for whether or not what I'm staring at is a
     well-formed program).


