[plt-scheme] Statistics (V301.5 Speed Up)

From: Williams, M. Douglas (M.DOUGLAS.WILLIAMS at saic.com)
Date: Sun Feb 12 15:08:50 EST 2006

 
>I've been following this thread for a while, so here are some
>benchmarks that I've been running for a while now (these are current):
>
>benchmark interpreter   mzc s-llvm interp/llvm mzc/llvm
>--------- ----------- ----- ------ ----------- --------
>sort1             692   643    563         1.2      1.1
>nboyer           2616  1786   2115         1.2      0.8
>sboyer           2090  1161    973         2.1      1.2
>graphs            885  1384    712         1.2      1.9
>lattice          8199 10065   3631         2.3      2.8
>earley           2576  2524  11659         0.2      0.2
>conform           151   179     68         2.2      2.6
>dynamic          1453  1892   1105         1.3      1.7
>
>I never get any significant variation here. The methodology is very
>simple - run the benchmark once, gc a few times, run it again. I do
>use time-apply. Like others have probably said, if you try to run the
>program 1000 times in the same mzscheme session, the GC will be the
>factor that introduces the odd distributions, not the program itself.
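
For reference, that methodology might look roughly like the sketch
below in MzScheme -- just an illustration, not the poster's actual
harness; `run-benchmark' is a hypothetical thunk standing in for one
benchmark:

  (define (measure run-benchmark)
    (run-benchmark)       ; warm-up run, result discarded
    (collect-garbage)     ; gc a few times so the timed run
    (collect-garbage)     ; starts from a clean heap
    (collect-garbage)
    ;; time-apply returns the benchmark's results plus the cpu, real,
    ;; and gc times in milliseconds
    (let-values (((results cpu real gc) (time-apply run-benchmark '())))
      (printf "cpu: ~a  real: ~a  gc: ~a~n" cpu real gc)))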

Have you tried the benchmarks with V301.5?

We weren't trying to make a statement about V301.5 (or PLT Scheme) in
general.  I happen to have a very continuation-intensive application (or
set of applications) and wanted to see what improvements the new
implementations made.

For long runs like these, with collect-garbage run between each, the
variances for the GC times reported are similar to those of the CPU and
real times.  (We do run the model 5 times before we start collecting
data.)  Although it is interesting that for the most recent run I did, the
GC times seem to bucket into two groups that are ~500 ms apart, with very
little variation within those two buckets.  Also, the centroid of each
bucket gradually creeps up.  [That may be because of the way I was
printing data between runs in this one and the interactions pane
containing more and more data each iteration.  That is, my naive data
collection is interfering with the data itself.]
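
One way around that kind of interference is to accumulate the timings
and print them only after all the runs are done.  A sketch, again with
a hypothetical thunk `run-model' standing in for one model run:

  (define (collect-timings run-model n)
    ;; 5 warm-up runs before any data are kept
    (do ((i 0 (add1 i))) ((= i 5)) (run-model))
    (let loop ((i 0) (times '()))
      (cond ((= i n) (reverse times))  ; list of (cpu real gc) triples
            (else
             (collect-garbage)
             (let-values (((results cpu real gc)
                           (time-apply run-model '())))
               (loop (add1 i) (cons (list cpu real gc) times)))))))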

