[plt-scheme] 301.4

From: Matthew Flatt (mflatt at cs.utah.edu)
Date: Wed Feb 1 18:19:46 EST 2006

At Tue, 31 Jan 2006 21:39:56 -0800, Jim Blandy wrote:
> On 1/30/06, Matthew Flatt <mflatt at cs.utah.edu> wrote:
> > For now, typical speedups from JIT compilation are in the range 1x
> > (i.e., no speedup) to 1.3x, with an occasional 2x or 4x. The JIT rarely
> > slows any program --- though, of course, it's possible.
> 
> Are you comparing the JIT to mzc, or to the bytecode interpreter?

Bytecode.

> If the interpreter, I'm surprised.  Are there bytecodes for which you
> emit slower code than the interpreter path for that bytecode?  Or does
> a lowered instruction cache hit rate eat your gains?

The reason for the small JIT gain is that the bytecode is translated
directly to native code, with no high-level analysis. The only gain is
in removing some branching and jumping compared to the interpreter ---
and there's still plenty of branching and jumping.
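To make that concrete, here is a toy sketch in C (invented opcodes, not
MzScheme's actual dispatch loop) of what the translation buys. The
interpreter pays a dispatch branch per bytecode; a direct translator
just pastes each handler's body in sequence, so only the dispatch goes
away:

    /* Toy stack-machine interpreter; the opcodes are made up. */
    enum { OP_PUSH, OP_ADD, OP_HALT };

    int interp(const int *code, int *stack) {
      int sp = 0;
      while (1) {
        switch (*code++) {           /* dispatch branch per instruction */
          case OP_PUSH: stack[sp++] = *code++; break;
          case OP_ADD:  sp--; stack[sp-1] += stack[sp]; break;
          case OP_HALT: return stack[sp-1];
        }
      }
    }

    /* What a direct translation of "PUSH 1; PUSH 2; ADD; HALT"
       amounts to: the same handler bodies, minus the dispatch. */
    int jitted(int *stack) {
      int sp = 0;
      stack[sp++] = 1;
      stack[sp++] = 2;
      sp--; stack[sp-1] += stack[sp];
      return stack[sp-1];
    }

Each handler body keeps all of its own tests and calls, which is the
"plenty of branching and jumping" that remains.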

> Don't get me wrong --- this is a great hack.  But for goodness' sake,
> why would one plunge into the hair and non-portability of native code
> generation if one doesn't get a nice hefty speed boost from it?  That
> is, why isn't the bytecode interpreter a sufficient stopgap until the
> "real thing" is ready?

Well, it's difficult to know how much speed boost is available without
actually implementing it.

And then it's difficult to know whether a relatively easy 1.3x is worth
keeping, considering the overhead, without implementing the rest of the
infrastructure... and so on. From my perspective, it's *still* not clear
whether it's worthwhile or worthless.

In any case, the non-portable part (generating the native-code bytes)
was easy. The hard part was setting up the JIT infrastructure, and that
may be useful for the real thing --- even if only in terms of experience.
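For what it's worth, the native-code-bytes part really is just writing
machine instructions into an executable buffer and jumping to it. A
minimal, hypothetical illustration (x86-64 on a Unix-like system;
obviously non-portable, and not MzScheme's code):

    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
      /* Machine code for "return 42": mov eax, 42; ret */
      unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };
      void *buf = mmap(NULL, sizeof(code),
                       PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (buf == MAP_FAILED) return 1;
      memcpy(buf, code, sizeof(code));
      int (*fn)(void) = (int (*)(void))buf;
      return fn();   /* exits with status 42 */
    }

The bytes themselves are the architecture-specific part; deciding what
bytes to emit, and when, is where the infrastructure work goes.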

Matthew


