[plt-scheme] Performance Targets for MzScheme

From: Matthias Felleisen (matthias at ccs.neu.edu)
Date: Wed May 12 20:11:02 EDT 2004

My results:

[matthias-ti:~/Desktop] matthias% mzscheme -r test-gen.ss
[matthias-ti:~/Desktop] matthias% time ./testfile.py
1.380u 0.090s 0:01.51 97.3%     0+0k 0+1io 0pf+0w

Using srfi-13's string-tokenize:
(require (lib "13.ss" "srfi")) ; provides string-tokenize

(define (file-line-split-test)
   (let ((fp (open-input-file "testfile.dat")))
     (let L ()
       (let ([line (read-line fp)])
         (unless (eof-object? line) (string-tokenize line) (L))))))

[matthias-ti:~/Desktop] matthias% time mzscheme -r testfile.ss
36.050u 0.120s 0:36.81 98.2%    0+0k 0+0io 0pf+0w

Using regexp-split on each line:
(define (file-line-split-test)
   (let ((fp (open-input-file "testfile.dat")))
     (let L ()
       (let ([line (read-line fp)])
         (unless (eof-object? line) (regexp-split " " line) (L))))))

[matthias-ti:~/Desktop] matthias% time mzscheme -r testfile.ss
4.860u 0.060s 0:05.08 96.8%     0+0k 0+0io 0pf+0w

Using regexp-split directly on the port (though I suspect I don't have 
quite the right pattern):
(define (file-line-split-test)
   (let ((fp (open-input-file "testfile.dat")))
     (let L ()
       (let ([line (regexp-split " " fp)])
         (unless line (L))))))
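A hedged guess at why this version is so fast: regexp-split applied to a port consumes the rest of the port in a single call and always returns a (truthy) list, so the (unless line (L)) loop never recurses and the whole file is split at once. A sketch of a token-at-a-time variant (my own untested code, not from the original post; the name port-split-test and the pattern [^ \n]+ are assumptions) that uses regexp-match on the port, which returns #f at end of file and so gives the loop a natural termination test:

```scheme
;; Sketch only: pulls one whitespace-delimited token per call.
;; regexp-match on a port consumes input through the match and
;; returns #f at eof, so the named-let loop terminates cleanly.
;; The pattern [^ \n]+ treats spaces and newlines alike as separators.
(define (port-split-test fp)
  (let L ((n 0))
    (let ((tok (regexp-match #rx"[^ \n]+" fp)))
      (if tok (L (+ n 1)) n))))
```

On a string port, (port-split-test (open-input-string "a b\nc d")) should count four tokens.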

[matthias-ti:~/Desktop] matthias% time mzscheme -r testfile.ss
1.820u 0.160s 0:02.04 97.0%     0+0k 0+0io 0pf+0w

Okay, we lose by either 4.5 or 0.4 depending on how you count. That is
slower, but not an order of magnitude.

-- Matthias



On May 12, 2004, at 6:06 PM, Brent Fulgham wrote:

>   For list-related administrative tasks:
>   http://list.cs.brown.edu/mailman/listinfo/plt-scheme
>
> At one time we had some performance statistics that
> showed MzScheme was pretty fast compared to other
> Scheme implementations.  This was done pre-200-series,
> so things have almost certainly changed.
>
> For fun, I started playing with some existing
> benchmarks found on the internet to see how MzScheme
> stacks up to other similar systems.
>
> The first test I ported does not show MzScheme to be a
> great performer.  In this test, the idea is to read in
> a fairly large file (15 Megs) and split each line into
> tokens (by spaces):
>
> (require (lib "13.ss" "srfi")
>          (lib "42.ss" "srfi"))
>
> (define (file-line-split-test)
>   (let ((fp (open-input-file "testfile.dat")))
>     (do-ec
>      (:port line fp read-line)
>      ;Split the line up
>      (string-tokenize line))))
>
>> (load "all_test.scm")
>> (time (file-line-split-test))
> cpu time: 39607 real time: 40209 gc time: 7025
>>
>
> Python:
> import string
>
> def file_line_split_test(NESTED):
>   fp = open("testfile.dat")
>   for a_line in fp.readlines():
>     line_parts = string.split(a_line)
>
> C:\Fulgham\Projects\schematics\performance>python
> all_test.py
> starting
> file_line_split_test(1) elapsed: 1.78199982643 seconds
>
>
> Perl:
> sub file_line_split_test {
>     my ($INDEX) = @_;
>     my $line;
>
>     open(FILE,"testfile.dat");
>     while (<FILE>) {
> 	@line_parts = split(" ");
>     }
> };
>
> C:\Fulgham\Projects\schematics\performance>perl
> all_test.pl
> file_line_split_test elapsed 2.42350912094116
>
> Thus, the gauntlet has been thrown.  How can we cut
> our running time by an order of magnitude to compete
> with these other scripting languages?!?  :-)
>
> -Brent



Posted on the users mailing list.