[racket-dev] using Racket to build an image-based Lisp: feasible?

From: mikel evins (mevins at me.com)
Date: Wed Mar 6 15:35:56 EST 2013

On Mar 6, 2013, at 11:42 AM, Eli Barzilay <eli at barzilay.org> wrote:

> 20 minutes ago, mikel evins wrote:
>> Rather than answer your questions directly, it seems much easier and
>> more apropos to address the underlying assumption that images and
>> source files are somehow mutually exclusive. They're not. Adding the
>> ability to save and load images does not subtract the ability to
>> work with source files. I don't know why people seem to think that.
> Aha -- in that case, it seems that your image-based workflow is not
> much different from working with just the sources *and* the (byte-)
> compiled files that go with them, right?

Imagine meeting someone who had never used a REPL, and trying to explain what its point is. At some point he might say to you, "so a REPL really isn't much different from just batch-compiling some sources, is it? After all, in both cases it's just one line of code after another getting executed, right?"

I've actually been in that conversation.

If the person you're talking with has no experience of working with an interactive environment with a REPL, then it may be hard to convey what it's good for. The other person, attempting to get the point of it, and to understand what it is and what it does, interprets your descriptions in terms he's familiar with, leading to glosses like the one above: yeah, in a strained sense, a REPL isn't much different from just batch-compiling and running some source files. But presumably you know from experience that actually using a REPL is in fact very different from just batch-compiling everything, and leads to a very different experience and a very different workflow. A batch compiler--however fast--is not a substitute for a REPL.

The same is true of image-based development. It's "not much different" from just using source files in the same way that using a REPL is "not much different" from just batch-compiling everything.

> There are some technical differences like one big file instead of a
> bunch of them, and the fact that instead of invoking an external tool
> you're doing the "compilation" inside the environment and it becomes
> part of the next dump.  These obviously make it more convenient than
> what I'm talking about -- but at the high level it looks similar.

This description doesn't sound very much like any work I ever actually did. I've always used both lots of image files and lots of source files. 

I'm not sure why you put scare-quotes on "compilation". MCL, CCL, SBCL, Lispworks, Allegro, and I'm sure Lisps I'm neglecting to mention all compile to native code in-memory, and can all also produce object files and link them to form executable binaries. All of them can save and load images containing compiled code.

MacScheme, like lots of Smalltalk implementations, could do all of that, too, but with bytecode in place of native code. That's just what I need: compilation to efficient portable bytecode and image saving and loading.

> Assuming that, the disadvantage that I see is in the disconnection
> between the images and the sources: sure you have both, but what if
> during my interactions I've somehow managed to get something to work
> really well "by mistake" -- I can see myself sitting at the computer a
> month later being infinitely puzzled with "how did I get *that* to
> work"...  

> So if I had worked with images, I would probably end up
> starting from the plain sources very often since I'd be worried about
> work that gets lost this way.

That's possible, but I doubt it. I've worked on teams of up to several dozen people using image-based systems, and to my knowledge, none of them ever worked that way.

We did build everything from scratch from sources all the time, but scripts did that constantly as part of automated testing; there was little or no need to do it by hand.

>> If you aren't interested in images, or don't like them, don't use
>> them. I'm not here to convert anybody. [...]
> BTW, I'm not arguing against it -- I have reasons to dislike "proper"
> development happening with them.  But what you're describing is not
> forcing that -- and I wonder what it is exactly that makes this
> particular feature so great.  (When I worked with images, I used them
> only as a distribution tool -- I didn't have experience with them for
> actual development with other people, so the target audience would be
> people who just run it.)

I don't know what to tell you other than what I've already told you. If that's not persuasive, I'm okay with that. You don't have to like or want it just because I do.

>> Matthew and Matthias seem to be saying they think platform-
>> independent images are probably doable; that's good enough for me,
>> for now. If it turns out they're wrong, well, I can work around
>> that. There are other options. It'll grow back.
> Well, it would be nice to know what exactly this would look like.  So
> far, I think that it's roughly a way to dump the current state as a
> kind of zo file -- extended with the ability to serialize more values,
> and extended to represent a whole environment rather than a single
> module or a single piece of data.

> Note that clarifying that this is more like zo files was important,
> since it clarifies that the "dump" doesn't have to be a straight
> *memory* dump.

Indeed not; in fact, a literal memory dump is something to be avoided, because I need to be able to dump images that can be loaded in different processes and on different machines. Smalltalk systems provide an existence proof that this is doable. In fact, it's possible to dump identified subsets of an image to the net and load them into remote Smalltalks.

>  Assuming this, I think that you can start with
> dumping the current namespace as a zo file -- something like:
>  (define (dump-image file)
>    (call-with-output-file file
>      (λ(o) (s-exp->fasl (current-namespace) o))))
>  (define (load-image file)
>    (current-namespace (call-with-input-file file fasl->s-exp)))
> This is of course far from working...  The first problem is that a
> namespace cannot be written this way.  So there would be some scanner
> that turns it into something that can, maybe adding some
> representation layer on top of plain values.  Making it work for
> everything else (including required modules etc) that you can reach
> from the namespace is probably "the only thing" that is needed...

Great; that's a good place to start. Thanks!
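As a toy elaboration of that sketch, here is a version that actually runs today, under one big simplifying assumption: since a real namespace can't be written with `s-exp->fasl` (as noted above), the "image" here is just a hash of symbols to values standing in for the environment. The names `dump-image` and `load-image` are kept from the sketch; the hash-as-namespace stand-in is mine, not anything Racket provides.

```racket
#lang racket
(require racket/fasl)

;; A stand-in "image": a hash mapping names to values. A real image
;; would need a scanner that turns a namespace (and reachable modules)
;; into fasl-able data; this just shows the dump/load round trip.
(define (dump-image file env)
  (call-with-output-file file #:exists 'replace
    (λ (o) (s-exp->fasl env o))))

(define (load-image file)
  (call-with-input-file file fasl->s-exp))
```

The interesting work is everything this skips: making module instances, closures, and other live state representable, which is the "only thing" mentioned above.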


Posted on the dev mailing list.