# [plt-scheme] Re: plt-scheme digest, Vol 1 #1089 - 7 msgs

On Nov 28, 2004, at 9:01 AM, David G. Kay wrote:
> For list-related administrative tasks:
>   http://list.cs.brown.edu/mailman/listinfo/plt-scheme
>
> I'll bite: Why is the second solution much
> faster than the first one?
> --DGK
>
>> As for your problem, you may wish to consider two different solutions:
>>
>> ;; Nat Nat (Nat Nat -> X) -> Matrix[X]
>> (define (build-matrix n m f)
>>   (build-vector n (lambda (i) (build-vector m (lambda (j) (f i j))))))
>>
>> ;; Matrix[X] Nat Nat -> X
>> (define (matrix-ref M i j)
>>   (vector-ref (vector-ref M i) j))
>> ;; suitable dimension check omitted
>>
>> or
>>
>> ;; Nat Nat (Nat Nat -> X) -> Matrix[X]
>> (define (build-matrix n m f)
>>   (list->vector
>>    (apply append
>>           (build-list n (lambda (i) (build-list m (lambda (j) (f i j))))))))
>>
>> with a suitable dereferencing schema. It's much faster than the first
>> one.
The first one accesses two separate memory objects: the outer vector and
an inner one. The second one performs only one such reference. On a
modern machine you may miss the cache twice for those two accesses and
be forced to spend an order of magnitude more cycles waiting for main
memory than you would for cache. The effect will be more visible with an
optimizing compiler such as Chez than with an interpreter such as
mzscheme or a barely optimizing compiler such as mzc.
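For concreteness, one possible "suitable dereferencing schema" for the
flattened representation is row-major indexing, where element (i,j) of an
n-by-m matrix lives at slot i*m + j. (This is a sketch; passing the row
length m explicitly to matrix-ref is my choice here, not part of the
original code -- you could equally close over it.)

```scheme
;; Nat Nat (Nat Nat -> X) -> Matrix[X]
;; the flattened representation from the second solution above
(define (build-matrix n m f)
  (list->vector
   (apply append
          (build-list n (lambda (i) (build-list m (lambda (j) (f i j))))))))

;; Matrix[X] Nat Nat Nat -> X
;; row-major dereferencing: element (i,j) sits at index i*m + j;
;; m (the row length) must be passed along or closed over
(define (matrix-ref M m i j)
  (vector-ref M (+ (* i m) j)))

;; example: (matrix-ref (build-matrix 2 3 +) 3 1 2) => 3
```

Note that only one vector-ref, and hence at most one cache miss, is paid
per element access.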
-- Matthias
P.S. Been there, tried to compete with assembly guys at Rice, beaten :-)