[plt-scheme] Dr.Scheme freezes on (fib 100) recursion

From: Stephen Bloch (sbloch at adelphi.edu)
Date: Fri Jan 9 13:24:51 EST 2009

On Jan 9, 2009, at 10:47 AM, Jos Koot wrote:

> Fib cannot really run in time O(n), because (+ n m) does not run in  
> time O(1) for large values of n and m.

Quite true.  I would assume that (+ n m) takes at most linear time in  
the lengths of n and m.  Since the "right answer" to fib(n) is  
exponential in n, the lengths of the numbers being added are linear  
in n, which means the cost of addition should make (at most) the  
difference between O(n) and O(n^2) time.
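(Concretely: fib(n) is roughly phi^n / sqrt(5), so its decimal length
is about n * log10(phi), a bit over n/5 digits, linear in n as claimed.)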

On a related subject that somebody mentioned off-list, Scheme's  
optimization of tail-recursion makes a multiplicative difference of  
approximately the depth of the recursion, i.e. the difference between  
O(n) and O(n^2) time -- about the same effect as the cost of addition.

And, of course, replacing the double recursion of the naive algorithm  
with a single recursion makes the difference between O(n) or O(n^2)  
or O(n^3) and O(phi^n) time, a MUCH more noticeable difference.
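
For reference, rough sketches of what the two versions might look
like (names chosen just for illustration):

;; Naive doubly recursive version: O(phi^n) calls.
(define (fib-naive n)
  (if (< n 2)
      n
      (+ (fib-naive (- n 1)) (fib-naive (- n 2)))))

;; Singly recursive, accumulator-passing version: O(n) additions.
(define (fib-linear n)
  (let loop ((i 0) (a 0) (b 1))   ; invariant: a = fib(i), b = fib(i+1)
    (if (= i n)
        a
        (loop (+ i 1) b (+ a b)))))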

Arthur adds:
> The following algorithm from SICP is better.
>
> (define (square x) (* x x))
>
> (define (fib n)
>   (fib-iter 1 0 0 1 n))
>
> (define (fib-iter a b p q count)
>   (cond ((= count 0) b)
>         ((even? count)
>          (fib-iter a
>                    b
>                    (+ (square p) (square q))
>                    (+ (* 2 p q) (square q))
>                    (/ count 2)))
>         (else (fib-iter (+ (* b q) (* a q) (* a p))
>                         (+ (* b p) (* a q))
>                         p
>                         q
>                         (- count 1)))))


I hadn't seen this expressed in Scheme before; cute.
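
A quick sanity check of the quoted version at the REPL, e.g.:

(fib 10)    ; => 55
(fib 100)   ; => 354224848179261915075, with no perceptible delay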

For those who haven't seen the algorithm at all, it relies on the  
observation that the transformation from <fib(r), fib(r+1)> to  
<fib(r+1), fib(r+2)> is simply multiplication by the matrix
+-      -+
|  0   1 |
|  1   1 |
+-      -+
Call this matrix F.  Then multiplying by the matrix F^n effects a  
transformation from <fib(r), fib(r+1)> to <fib(r+n), fib(r+n+1)>;  
in particular, it gets you from <fib(0), fib(1)> to <fib(n), fib(n+1)>.  
So if you can figure out how to exponentiate that remarkably simple  
matrix quickly, you can compute fib(n) quickly.  (In fact, you can  
compute F^n once, and then get the nth number in ANY Fibonacci  
sequence, no matter what the starting numbers are, in constant  
time!)  Raising something to the n-th power, of course, can be done  
in time O(log n) by repeated squaring (again ignoring the  
non-constant cost of primitive arithmetic operations when the  
numbers no longer fit into a machine word).
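
For concreteness, here is a rough sketch of that approach in Scheme,
representing a 2x2 matrix as a flat list of four entries (the helper
names are just for illustration); Arthur's version above is
essentially the same repeated-squaring trick with the matrix
arithmetic unfolded into the p and q updates:

;; A 2x2 matrix ((a b) (c d)), stored as the list (a b c d).
(define (mat-mul m1 m2)
  (let ((a (car m1)) (b (cadr m1)) (c (caddr m1)) (d (cadddr m1))
        (e (car m2)) (f (cadr m2)) (g (caddr m2)) (h (cadddr m2)))
    (list (+ (* a e) (* b g)) (+ (* a f) (* b h))
          (+ (* c e) (* d g)) (+ (* c f) (* d h)))))

;; F^n by repeated squaring: O(log n) matrix multiplications.
(define (mat-expt m n)
  (cond ((= n 0) '(1 0 0 1))                    ; identity matrix
        ((even? n) (let ((half (mat-expt m (quotient n 2))))
                     (mat-mul half half)))
        (else (mat-mul m (mat-expt m (- n 1))))))

;; F^n applied to <fib(0), fib(1)> = <0, 1> picks out the second
;; column of F^n, so fib(n) is the upper-right entry of F^n.
(define (fib-matrix n)
  (cadr (mat-expt '(0 1 1 1) n)))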

So where the naive algorithm takes time O(phi^n) and the algorithm in  
my previous post takes time O(n) or O(n^2) (depending on whether you  
count the cost of primitive arithmetic operations), this one takes  
time O(log n) or O(n log n) (ditto).

However, nobody who hadn't taken linear algebra would come up with  
this algorithm.

Stephen Bloch
sbloch at adelphi.edu

