# [plt-scheme] Dr.Scheme freezes on (fib 100) recursion

From: Jos Koot (jos.koot at telefonica.net)
Date: Fri Jan 9 15:01:43 EST 2009

Thanks, that is a clear explanation of the code Arthur showed. It has been
proven that there are limits to the power of programming languages (or
formal systems in general), but again and again I am amazed how far we (me
not included) can move the limit further away, not only by faster hardware
but, more importantly, by more clever algorithms.
Jos

----- Original Message -----
From: "Stephen Bloch" <sbloch at adelphi.edu>
To: "Jos Koot" <jos.koot at telefonica.net>
Cc: <laalaalovesu at yahoo.com>; <plt-scheme at list.cs.brown.edu>
Sent: Friday, January 09, 2009 7:24 PM
Subject: Re: [plt-scheme] Dr.Scheme freezes on (fib 100) recursion

>
> On Jan 9, 2009, at 10:47 AM, Jos Koot wrote:
>
>> Fib cannot really run in time O(n), because (+ n m) does not run in time
>> O(1) for large values of n and m.
>
> Quite true.  I would assume that (+ n m) takes at most linear time in the
> lengths of n and m.  Since the "right answer" to fib(n) is exponential in
> n, the lengths of the numbers being added are linear in n, which means
> the cost of addition should make (at most) the difference between O(n)
> and O(n^2) time.
>
> On a related subject that somebody mentioned off-list, Scheme's
> optimization of tail-recursion makes a multiplicative difference of
> approximately the depth of the recursion, i.e. the difference between
> O(n) and O(n^2) time -- about the same effect as the cost of addition.
>
> And, of course, replacing the double recursion of the naive algorithm
> with a single recursion makes the difference between O(n) or O(n^2) or
> O(n^3) and O(phi^n) time, a MUCH more noticeable difference.
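(To make the double-vs-single recursion contrast concrete, here is an illustrative sketch, not part of the original message; the names `fib-naive` and `fib-linear` are made up for this comparison.)

```scheme
;; Naive double recursion: each call spawns two more, giving O(phi^n) calls.
(define (fib-naive n)
  (if (< n 2)
      n
      (+ (fib-naive (- n 1)) (fib-naive (- n 2)))))

;; Tail-recursive single recursion: O(n) additions.
;; Each step shifts the pair <a, b> = <fib(k), fib(k+1)> to <fib(k+1), fib(k+2)>.
(define (fib-linear n)
  (let loop ((a 0) (b 1) (count n))
    (if (= count 0)
        a
        (loop b (+ a b) (- count 1)))))
```

Both give 55 for n = 10, but only the linear version returns promptly for (fib-linear 100); the remaining non-constant cost is exactly the bignum addition discussed above.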
>
>> The following algorithm from SICP is better.
>>
>> (define (square x) (* x x))
>>
>> (define (fib n)
>>   (fib-iter 1 0 0 1 n))
>>
>> (define (fib-iter a b p q count)
>>   (cond ((= count 0) b)
>>         ((even? count)
>>          (fib-iter a
>>                    b
>>                    (+ (square p) (square q))
>>                    (+ (* 2 p q) (square q))
>>                    (/ count 2)))
>>         (else (fib-iter (+ (* b q) (* a q) (* a p))
>>                         (+ (* b p) (* a q))
>>                         p
>>                         q
>>                         (- count 1)))))
>
>
> I hadn't seen this expressed in Scheme before; cute.
>
> For those who haven't seen the algorithm at all, it relies on the
> observation that the transformation from <fib(r), fib(r+1)> to
> <fib(r+1), fib(r+2)> is simply multiplication by the matrix
> +-      -+
> |  0   1 |
> |  1   1 |
> +-      -+
> Call this matrix F.  Then multiplying by the matrix F^n effects a
> transformation from <fib(r), fib(r+1)> to <fib(r+n), fib(r+n+1)>; in
> particular, it gets you from <fib(0), fib(1)> to <fib(n), fib(n+1)>.
> So if you can figure out how to exponentiate that remarkably simple
> matrix quickly, it allows you to compute fib(n) quickly.  (In fact, you
> can compute F^n once, and then get the nth number in ANY Fibonacci
> sequence, no matter what the starting numbers are, in constant time!)
> Raising something to the nth power, of course, can be done in time
> O(log n) by repeated squaring (again ignoring the non-constant cost of
> primitive arithmetic operations when the numbers no longer fit into a
> machine word).
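(A direct rendering of this matrix idea in Scheme, as an illustrative sketch rather than anything from the original thread; the helper names are made up here.)

```scheme
;; Represent a 2x2 matrix ((a b) (c d)) as the flat list (a b c d).
(define (mat* m n)
  (let ((a (car m)) (b (cadr m)) (c (caddr m)) (d (cadddr m))
        (e (car n)) (f (cadr n)) (g (caddr n)) (h (cadddr n)))
    (list (+ (* a e) (* b g)) (+ (* a f) (* b h))
          (+ (* c e) (* d g)) (+ (* c f) (* d h)))))

;; m^k by repeated squaring: O(log k) matrix multiplications.
(define (mat-expt m k)
  (cond ((= k 0) '(1 0 0 1))                          ; identity matrix
        ((even? k) (let ((half (mat-expt m (quotient k 2))))
                     (mat* half half)))
        (else (mat* m (mat-expt m (- k 1))))))

;; F = [[0 1] [1 1]]; the top-right entry of F^n is fib(n).
(define (fib-matrix n)
  (cadr (mat-expt '(0 1 1 1) n)))
```

The SICP fib-iter quoted above is, in effect, the same algorithm with the matrix powers specialized to the two parameters p and q, so it never carries a full 2x2 matrix around.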
>
> So where the naive algorithm takes time O(phi^n) and the algorithm in
> my previous post takes time O(n) or O(n^2) (depending on whether you
> count the cost of primitive arithmetic operations), this one takes
> time O(log n) or O(n log n) (ditto).
>
> However, nobody would come up with this algorithm who hadn't taken
> linear algebra.
>
> Stephen Bloch