[plt-scheme] Re: to define, or to let

From: Bradd W. Szonye (bradd+plt at szonye.com)
Date: Wed Apr 21 01:16:51 EDT 2004

Bradd wrote:
>> Language specs generally aren't supposed to be algorithms. 

Bill Richter wrote:
> I think it's a fabulous idea for a language spec to be an algorithm,
> even if Mzscheme is the first such language.

I don't think it's appropriate for a general-purpose programming
language. I've seen too many cases where a language designer tried to
specify features that way and ended up overconstraining the language, so
that it was too difficult to:

- implement the language on systems it wasn't designed for
- use the language for tasks it wasn't designed for
- extend or improve language features
- encode design decisions

The LET/LET* thing discussed in this thread is an example of the last:
a guaranteed left->right evaluation order encourages programmers to
rely on it instead of using explicit sequencing constructs. (That's why
I encourage right->left or some "perverse" order if you really want to
lock down the order of evaluation.)
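
To make that concrete, here's a minimal sketch (the string port is just
something to read from, chosen for illustration).  In MzScheme the
first version happens to read the lines in order, but nothing in the
code says that the order matters:

    ;; Depends on the LET inits being evaluated left->right:
    (define in (open-input-string "line 1\nline 2\n"))
    (let ((first  (read-line in))
          (second (read-line in)))
      (list first second))     ; => ("line 1" "line 2") only under l->r

    ;; One character more, and the design decision is recorded:
    (define in2 (open-input-string "line 1\nline 2\n"))
    (let* ((first  (read-line in2))
           (second (read-line in2)))
      (list first second))     ; => ("line 1" "line 2") under any order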

There are good examples of algorithmic specs stifling innovation in the
C++ language. C++ has an extensive algorithm library that guarantees
big-O bounds on running time, memory use, and the like. Some
implementors have run into problems where they can't adopt a superior
algorithm because it doesn't strictly satisfy the specified bounds.

So I don't believe that "algorithmic" language specs are good.

>> language implementations that encourage [...] ignoring the "where
>> sequencing matters" issue are irresponsible on some level, IMO.

> But nobody's in favor of such ignorance!   The question is whether it
> could be good programming to take advantage of the Mzscheme features.

I don't believe it is. Scheme has separate constructs for sequenced and
for non-sequenced, non-reentrant evaluation. The PLT Scheme features
make the explicitly sequenced constructs redundant, which encourages
programmers not to use them. That's a problem IMO, because at best it
encourages them not to encode their design decisions; at worst, it
encourages them to ignore the issue altogether.
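
Here's a contrived sketch of what I mean (next-token! is a hypothetical
stateful reader, not a real library procedure).  Under R5RS the order of
the two calls is unspecified, so a careful programmer sequences them
explicitly:

    ;; A hypothetical stateful token reader, just for illustration:
    (define tokens '(key val1 val2))
    (define (next-token!)
      (let ((tok (car tokens)))
        (set! tokens (cdr tokens))
        tok))

    ;; R5RS style: the required ordering is written down.
    (let* ((key   (next-token!))
           (value (next-token!)))
      (cons key value))           ; => (key . val1)

With MzScheme's guaranteed left->right application order,
(cons (next-token!) (next-token!)) happens to give the same answer, so
the temptation is to write that instead and leave the sequencing
decision out of the code entirely.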

>>> But suppose that C/C++ from the beginning had mandated a
>>> non-ambiguous eval order (say l->r).  Then these order of eval bugs
>>> aren't bugs necessarily ....

>> Sure they are. 

> Precision, please.  You don't mean this, as you say:

Yes, I do mean it!

>> In the engineering contexts I'm used to, an implementation that
>> doesn't match the design or that's fragile is almost as buggy as one
>> that's more obviously incorrect. 

> So why is this formerly-buggy coding now always fragile?

What do you mean by "formerly-buggy"? Order of evaluation bugs are still
bugs even if the compiler sets things up so that they're predictable or
so that they do less damage. Sequentiality (or lack of it) is an
important design decision. A design that fails to consider it is
fragile. An implementation that doesn't match the design decision is
too. The only difference an unspecified, right->left, or perverse eval
order makes is to increase the odds that the fragile code will break
sooner (during design and coding) rather than later (after shipping).
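
A tiny sketch of what I mean by "break sooner" (the next! counter is
made up for illustration):

    (define n 0)
    (define (next!) (set! n (+ n 1)) n)

    (list (next!) (next!))   ; left->right:  (1 2)
                             ; right->left:  (2 1)
                             ; unspecified:  either one

If the design needs (1 2), a left->right implementation quietly papers
over the missing LET*; a right->left or "perverse" order makes the
mistake show up the first time the code runs.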

>> And if your design doesn't consider things like "where sequencing
>> matters," then your design is incomplete at best.

> Sure, but could we use these new left->right-enabled side-effect
> features as part of our design?

They aren't new. They already exist in R5RS Scheme, with different
names. Making them redundant by giving the "unordered" constructs the
same semantics just encourages bad design and coding habits.
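
In other words, when the order matters you can already say so in plain
R5RS (a sketch; the string port is an MzScheme convenience, used here
only to have something to read from):

    ;; If the order of a call's arguments matters, say so with LET*:
    (define in (open-input-string "3 4"))
    (let* ((x (read in))
           (y (read in)))
      (- x y))                 ; => -1, regardless of application order

    ;; If only the effects matter, BEGIN sequences them:
    (begin
      (display "first, ")
      (display "second"))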
-- 
Bradd W. Szonye
http://www.szonye.com/bradd

