[racket] standard solution for non-terminating handin tests?

From: Eli Barzilay (eli at barzilay.org)
Date: Sat Feb 11 15:05:27 EST 2012

Two hours ago, John Clements wrote:
> I think the underlying problem is that I'm using (add-header-line!
> ...) to record information about test case results, and these are
> lost if the sandbox self-terminates.

Yes, they would be lost, since the handin server writes the added
lines only after all of the tests have run.

The problem is that there's a single limit for the whole handler
thread, which is why the sandboxed evaluation has no limits of its
own.  If you want to be able to catch these errors you'll need a new,
(much) lower limit for the sandbox (see "sandbox.rkt" in the handin
server).  That should make it possible for the sandbox to die
gracefully, so that your code can catch the error and deal with it.
This could be added as an option -- *but*...

1. The limits should be very different: catching a memory error is
   not precise, so you want to guarantee that the sandbox dies before
   the handler does.

2. You'll also need to catch these errors and deal with them
   properly.  For example, naive code (which is probably what you're
   using) would continue sending tests to the already-dead sandbox,
   and they will all fail.  If you're using this for automatic
   grading, it means that an out-of-memory error in the first test
   out of 100 will get the student an unjustified 0.

3. To deal with that you can also add a per-evaluation limit, so an
   interaction can die without killing the sandbox.  And yes, this
   limit should also be far enough below the sandbox limit.  With
   this you can fix some of the bad behavior of #2, but not
   completely, since you still need to deal properly with memory that
   is held by the sandbox outside of interactions.  (See the sketch
   right after this list.)
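
Something along these lines (a sketch using plain racket/sandbox; the
numbers are made up, and wiring it into the server's "sandbox.rkt" is
exactly the option discussed above, not something that exists today):

  (require racket/sandbox)

  ;; Layer the limits: per-evaluation < whole sandbox < handler
  ;; thread, so each level dies before the one above it.
  (define sandbox-limit-mb 64)           ; well below the handler limit
  (define eval-limits      (list 10 16)) ; 10 seconds / 16 MB per interaction

  (define ev
    (parameterize ([sandbox-memory-limit sandbox-limit-mb]
                   [sandbox-eval-limits  eval-limits])
      (make-evaluator 'racket/base)))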

So in theory it's possible to do the above -- but in practice I think
that almost nobody will be able to deal with the subtleties of
maintaining this configuration and handling the failures.  Therefore,
I don't think that it'll be effective to add an option for a sandbox
memory limit and a per-evaluation limit.  (It'll be easy to add them,
so if you really want them and write the docs, then I can do the last
5% of the work and add the code...)
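
To give an idea of what "handling the failures" means, here is a
sketch of a grading loop that doesn't zero everything after one bad
test (`tests', `record-result!', and the result symbols are all
hypothetical names, not part of any existing code):

  (require racket/sandbox)

  (define (run-all-tests ev tests)
    (for ([t (in-list tests)])
      (if (not (evaluator-alive? ev))
          ;; the whole sandbox is gone -- record that, rather than
          ;; marking every remaining test as failed
          (record-result! t 'sandbox-died)
          (with-handlers ([exn:fail:resource?
                           ;; only this interaction hit its limit
                           (lambda (e) (record-result! t 'resource-error))])
            (record-result! t (ev t))))))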


> I can see a couple of ways of fixing this: one is to send my test
> case info "out of channel," so that it's not lost when the sandbox
> explodes. The other one is to enrich my standard 'test' form with a
> timeout thread that sleeps for half a second then kills the test to
> allow the assignment to be saved.

To clarify, neither of these is a good solution.  The first requires
you to hack the handin server code and deal with an additional thread
that runs outside of the connection's custodian.  (And once you go to
that level you need to start worrying about races, multiple
submissions, files that disappear, etc.; it won't be pretty.)  The
second goes in the direction of implementing your own sandboxing
limits, which the sandbox can already do, as I described above.
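
(For reference, the per-evaluation limit is already there in
racket/sandbox -- `with-limits' is essentially the timeout-thread
idea done right.  The 1-second / 16 MB numbers below are just an
illustration, and `run-one-test' is a hypothetical name:)

  (require racket/sandbox)

  (with-limits 1 16     ; seconds, megabytes
    (run-one-test))     ; hypothetical: run a single student test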

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                    http://barzilay.org/                   Maze is Life!

Posted on the users mailing list.