[plt-scheme] Has PLT performance improved?
We have added a new GC and a JIT compiler in roughly that timeframe
(thanks, Matthew!). As far as future things go, I don't think that
there are any plans at the moment, but you never know.
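A rough way to see how much of that is the JIT on a particular benchmark
is to time the same code with the JIT on and off.  A minimal sketch,
assuming mzscheme's eval-jit-enabled parameter and its --no-jit
command-line flag:

  ;; Run once normally and once as "mzscheme --no-jit ..." to compare.
  (module jit-check scheme
    (define (spin n acc)
      (if (zero? n) acc (spin (- n 1) (+ acc 1.0))))
    (display (list "JIT enabled:" (eval-jit-enabled)))
    (newline)
    (time (spin 10000000 0.0)))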
Robby
On Thu, Jul 2, 2009 at 10:39 AM, Philippos Apolinarius<phi500ac at yahoo.ca> wrote:
> Three years ago, when I was taking an AP course at Logan High, I devised a
> few benchmarks to compare Scheme with my favorite language, which is Clean.
> Then I decided to compare Scheme implementations. I discovered that PLT was
> much slower than, for instance, Bigloo or Stalin. I stored the fact that PLT
> is slow somewhere in the back of my brain, and whenever I need to code
> something in Scheme or Lisp, I rule out PLT.
>
> Yesterday, I received a request to compare PLT with Bigloo, Larceny, SBCL
> and Gambit. Therefore I fetched my old benchmarks. To my surprise, PLT 4.2
> turned out to be quite fast. In some cases, it was slightly faster than
> Bigloo compiled with the -Obench option, and 30% faster than Gambit or
> Larceny. In the worst case, it was only three times slower than Bigloo. In
> most cases, it was 30% slower than Bigloo.
>
> Let us consider the neural network benchmark. Bigloo runs it in 1.3 seconds
> on a quad-core Pentium under Windows XP; PLT runs it in 1.15 s. The benchmark
> uses vectors and a lot of floating-point calculations. What amazes me is that
> PLT accepted it as is, i.e., I did not use special vectors (like
> f64vectors), nor compile options (like -Obench, -farithmetics, -copt -O3).
> The neural net benchmark is attached to this email, so you people can check
> my claims.
>
> I would like to know what happened to PLT. Has its performance improved a lot
> since 2005? How can it compile floating-point operations so well without any
> type declarations or special operators like *fl, /fl, etc.?
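For comparison, the operator-annotated style the question refers to looks
roughly like the sketch below.  It assumes the flonum-specific operators
(fl+, fl-, fl/, flexp) that later PLT/Racket releases ship in a
racket/flonum library; the attached benchmark gets by with the plain
generic operators.

  (module sig-flonum racket/base
    (require racket/flonum)
    ;; The benchmark's sigmoid with every operation pinned to flonums.
    (define (sig x)
      (fl/ 1.0 (fl+ 1.0 (flexp (fl- 0.0 x)))))
    (display (sig 0.0))   ; => 0.5
    (newline))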
>
> I also noticed that PLT is not so good at array-intensive computations. In
> one of those Lisp benchmarks designed to show that Lisp can be as fast as C,
> PLT (7.9 s, without f64vectors) takes roughly twice as long as Bigloo (4.3 s,
> without f64vectors). Does the PLT team intend to improve array processing?
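The f64vector style mentioned above would look roughly like the sketch
below.  It assumes PLT's srfi/4 library (f64vector, f64vector-ref,
f64vector-set!, f64vector->list), which keeps the elements as unboxed
64-bit floats:

  (module saxpy scheme
    (require srfi/4)
    ;; y[i] := a * x[i] + y[i] over unboxed 64-bit float vectors.
    (define (saxpy! a x y)
      (do ((i 0 (+ i 1)))
          ((= i (f64vector-length x)) y)
        (f64vector-set! y i (+ (* a (f64vector-ref x i))
                               (f64vector-ref y i)))))
    (define xs (f64vector 1.0 2.0 3.0))
    (define ys (f64vector 10.0 10.0 10.0))
    (saxpy! 2.0 xs ys)
    (display (f64vector->list ys))   ; => (12.0 14.0 16.0)
    (newline))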
>
>
>
>
> (module pnet scheme
>
> #| Given a vector to store the weights, and a
>    list ws of indexes, newn builds a neuron. E.g.
>      (let [ (v '#(0 0 0 0)) ]
>        (newn v '(0 1 2)))
>    produces a two input neuron that uses v[0],
>    v[1] and v[2] to store its weights. |#
>
>
> (define (sig x) (/ 1.0 (+ 1.0 (exp (- x))) ))
>
>
> (define (newn v ws)
>   (lambda (xs)
>     (sig (let sum ((i ws) (x (cons 1.0 xs)) (acc 0.0))
>            (if (or (null? i) (null? x))
>                acc
>                (sum (cdr i) (cdr x)
>                     (+ (* (vector-ref v (car i)) (car x)) acc)))))))
>
>
> ;; Given a weight vector vt, (gate vt) creates
> ;; a neural network that can learn to act
> ;; like a logic gate.
>
> (define in-1 car)
> (define in-2 cadr)
>
> (define (gate vt)
>   (let ((n1 (newn vt '(4 5 6)))
>         (ns (newn vt '(0 1 2 3))))
>     (lambda (i)
>       (if (null? i)
>           vt
>           (ns (list (in-1 i)
>                     (n1 (list (in-1 i) (in-2 i)))
>                     (in-2 i)))))))
>
> ;; Here is how to create a xor neural network:
>
> ;;(define xor (gate (vector -4 -7 14 -7 -3 8 8)))
>
> (define xor (gate (vector 2 3 0 3 5 1 8)))
>
> (define dx 0.01)
> (define lc 0.5)
>
> (define *nuweights* (make-vector 90) )
> (define *examples* #f)
>
> (define (assertWgt vt I R)
>   (vector-set! vt I R) R)
>
> (define (egratia eg)
>   (vector-ref *examples*
>               (min eg (- (vector-length *examples*) 1))))
>
> (define (setWeights vt Qs)
>   (do ((i 0 (+ i 1)))
>       ((>= i (vector-length vt)) vt)
>     (vector-set! vt i (vector-ref Qs i))))
>
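> ;; (errSum prt Exs) sums the squared error of network prt over the
> ;; example indices listed in Exs.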
> (define (errSum prt Exs)
>   (let sum ((e Exs) (acc 0.0))
>     (if (null? e)
>         acc
>         (let* ((eg (egratia (car e)))
>                (vc (prt (cdr eg)))
>                (v (car eg)))
>           (sum (cdr e) (+ acc (* (- vc v) (- vc v))))))))
>
>
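> ;; Estimates each weight's error gradient by a finite difference of
> ;; size dx and moves the weights downhill with learning rate lc.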
> (define (updateWeights prt vt err0 ns Exs)
>   (do ((i 0 (+ i 1)))
>       ((> i ns))
>     (let* ((v (vector-ref vt i))
>            (v1 (assertWgt vt i (+ v dx)))
>            (nerr (errSum prt Exs))
>            (nv (+ v (/ (* lc (- err0 nerr)) dx))))
>       (assertWgt vt i v)
>       (vector-set! *nuweights* i nv)))
>   (setWeights vt *nuweights*))
>
>
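> ;; Trains network p on examples exs, looping until the total squared
> ;; error drops below 0.001.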
> (define (train p exs)
>   (set! *examples* exs)
>   (set! *nuweights* (make-vector 90))
>   (setWeights (p '()) '#(0 1 0 0 2 0 0))
>   (do ((vt (p '()))
>        (exs '(0 1 2 3 3 2 1 0)))
>       ((< (errSum p exs) 0.001))
>     (updateWeights p vt (errSum p exs)
>                    (- (vector-length vt) 1) exs)))
>
> (define *exs*
> '#( (0 1 1) (1 0 1) (1 1 0) (0 0 0)) )
>
> (define (start args)
>   (time (train xor *exs*))
>   (display (list "1-1=" (xor '(1 1))))
>   (newline)
>   (display (list "1-0=" (xor '(1 0))))
>   (newline)
>   (display (list "0-1=" (xor '(0 1))))
>   (newline)
>   (display (list "0-0=" (xor '(0 0))))
>   (newline))
>
> (start 0)
> )
> ;;(training xor '( (0 1 1) (1 1 0) (1 0 1) (0 0 0)) )