[plt-scheme] Error reporting and batch compilation (was: fluid-let-syntax may get flushed)

From: Daniel Silva (daniel.silva at gmail.com)
Date: Wed Aug 11 13:53:50 EDT 2004

On Wed, 11 Aug 2004 12:12:35 -0400, Richard C. Cobbe <cobbe at ccs.neu.edu> wrote:
> Lo, on Wednesday, August 11, Matthias Felleisen did write:
> > On Aug 11, 2004, at 10:39 AM, Joe Marshall wrote:
> > > "Richard C. Cobbe" <cobbe at ccs.neu.edu> writes:
> > >
> > >> In general, my tools should give me as many error messages per run
> > >> as possible; it wastes less of my time.  (This is one thing the C
> > >> folks got right; why has the Scheme community forgotten this?)
> > >
> > > As many *legitimate* error messages, please.
> Ideally, yes.  But I'll gladly accept several spurious error messages in
> order to get more legitimate ones.  It's a question of balance.
> > The C people didn't get this any "righter" than the Scheme people.
> > They were forced to report as many type and syntax errors in one
> > pass as possible because they were and are batch people, who just
> > don't understand how incremental work helps people.
> Right, but as execution time increases, the distinction between batch
> and interactive development decreases.  It's really not very hard to
> write a Scheme program whose execution time (by which I mean time
> between hitting the `execute' button and getting a prompt back) is
> comparable to running make.
> Case in point: the test cases for my PLT redex implementation of Jacques
> take forever to execute, largely because they pull in SchemeUnit, which
> in turn pulls in all of the framework stuff.  Compiling to .zos helps
> somewhat, but then you're back to the batch model.
> If someone can suggest a testing strategy that doesn't require hitting
> execute after every change (or even most of them), then I'd love to hear
> about it.  For full credit, the testing system must also work for
> changes in a different module than the one that contains the test cases.

How about continuous testing?


You could write a tool that mixes into DrScheme's editor class some
methods that start background tests (perhaps by convention: module name
+ "-tests.scm"?) from the after-insert and after-delete callbacks.
Trap the errors and add them to a tasklist window/frame/box.  Each
entry in the tasklist can be associated with a failed testcase and
then cleared once that test passes.
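A rough sketch of what such a mixin might look like, assuming MrEd's
text% editor interface (after-insert and after-delete are real text%
methods, but schedule-test-run and the "-tests.scm" naming convention
are hypothetical pieces you'd have to write yourself):

  ;; Continuous-testing mixin sketch.  schedule-test-run is assumed to
  ;; queue a background run of the companion "-tests.scm" file and
  ;; report failures to a tasklist frame.
  (define continuous-test-mixin
    (mixin (editor:basic<%>) ()
      (inherit get-filename)
      (define (trigger-tests)
        (let ([file (get-filename)])
          (when file
            (schedule-test-run file))))  ; hypothetical helper
      (define/augment (after-insert start len)
        (inner (void) after-insert start len)
        (trigger-tests))
      (define/augment (after-delete start len)
        (inner (void) after-delete start len)
        (trigger-tests))
      (super-new)))

You'd probably also want to debounce trigger-tests (e.g. reset a timer
on every edit) so a burst of keystrokes only launches one run.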

I guess you'd need to cache required modules, though, if you don't want
this to murder your CPU while you type.  Keep a namespace around with
the previously loaded modules and namespace-attach-module the required
ones into each fresh test namespace first?
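Something like the following, assuming the SchemeUnit collection lives
at (lib "test.ss" "schemeunit") — adjust the module path to whatever
the heavyweight dependency actually is:

  ;; Load the expensive libraries once into a cache namespace...
  (define cache-ns (make-namespace))
  (parameterize ([current-namespace cache-ns])
    (dynamic-require '(lib "test.ss" "schemeunit") #f))

  ;; ...then attach the already-instantiated module to each fresh
  ;; namespace instead of re-running it from scratch.
  (define (make-test-namespace)
    (let ([ns (make-namespace)])
      (parameterize ([current-namespace ns])
        (namespace-attach-module cache-ns '(lib "test.ss" "schemeunit")))
      ns))

Each test run then evaluates the changed module inside
(make-test-namespace), paying only for the user's own code rather than
for re-instantiating the framework every keystroke.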


Posted on the users mailing list.