[racket] Question about tail recursion

From: Matthias Felleisen (matthias at ccs.neu.edu)
Date: Fri Aug 8 17:58:23 EDT 2014

I didn't use the word "vastly". All I said was that the __equivalent__ TC version of f runs in constant stack space. The savings are exactly the depth of the recursion. Whether that's worth it is the programmer's call. 
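
To see concretely what that means, here is a rough sketch using the sum and sum-tailrec definitions quoted below (the list length is arbitrary):

(define (sum xs)
  (if (empty? xs) 0 (+ (first xs) (sum (rest xs)))))

(define (sum-tailrec xs)
  (let loop ([xs xs] [s 0])
    (if (empty? xs) s (loop (rest xs) (+ s (first xs))))))

;; sum keeps one pending (+ _ ...) frame per element, so its space use
;; grows with the depth of the recursion; sum-tailrec reuses the same
;; frame, so its stack use stays constant however long the list is.
(sum-tailrec (range 10000000))
(sum (range 10000000))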




On Aug 8, 2014, at 4:18 PM, Jack Firth <jackhfirth at gmail.com> wrote:

> So even in the cases where the tail-recursive algorithm has the same space complexity as the non-tail-recursive algorithm, the savings from reusing the stack frame (while building the results in reverse order) are vastly greater than the cost of allocating a new stack frame for each result?
> 
> 
> On Fri, Aug 8, 2014 at 9:17 AM, Matthias Felleisen <matthias at ccs.neu.edu> wrote:
> 
> 1. Some calls are tail-calls [whether programmers care or not]. So this is really about the issue of how compilers deal with tail calls.
> 
> 2. What people commonly call 'tail-call optimization' is not. It is a misnomer, and I am sure the people who dubbed it an 'optimization' regret this mistake. A lot.
> 
> 3. It should be called Proper Implementation of Tail Calls (which I like to abbreviate to PITCH).
> 
> 4. The point of PITCH is to save stack allocations and, with them, deallocations. Neither is very expensive (certainly cheaper than heap allocation), but if you don't have to do it, you save space. Period.
> 
> 5. You are saving stack space in both cases: sum and filter. In both cases, you avoid pushing frames with return info and other data onto the stack. In one case, you have the additional benefit of computing the result on the fly -- because of the supposed associativity of these operations. Try this with floating-point numbers. Or try this for multiplication (say, in factorial).
> 
> Bottom lines: functions converted to TC shape may, if you're not careful, perform computations in a different order, get different results, and slow the program down, because they resort to big-number multiplication immediately instead of small-number, built-in multiplication (and similar examples).
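> 
> To make the different-results point concrete, here is a small sketch (the particular numbers are arbitrary; foldr and foldl stand in for the plain-recursive and the accumulator shapes):
> 
> ;; The plain recursive sum combines elements right-to-left; the
> ;; accumulator version combines them left-to-right.  With exact
> ;; integers both orders agree; with floating-point numbers they need not.
> (define xs (list 1.0 1e100 1.0 -1e100))
> (foldr + 0.0 xs)  ; right-to-left, like the plain recursive sum  => 1.0
> (foldl + 0.0 xs)  ; left-to-right, like the accumulator version  => 0.0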
> 
> On Aug 8, 2014, at 12:00 PM, Jack Firth <jackhfirth at gmail.com> wrote:
> 
> > As far as I understand it, the vast majority of the benefit of tail recursion is less the stack-frame business and more the ability to discard values you're not using anymore at each step by combining partial results. To clarify what I mean, for a function sum with a recursive and tail-recursive version:
> >
> > (define (sum xs)
> >   (if (empty? xs) 0 (+ (first xs) (sum (rest xs)))))
> >
> > (define (sum-tailrec xs)
> >   (let loop ([xs xs] [s 0])
> >     (if (empty? xs) s (loop (rest xs) (+ s (first xs))))))
> >
> > > (sum '(1 2 3))
> > => (+ 1 (sum '(2 3)))
> > => (+ 1 (+ 2 (sum '(3))))
> > => (+ 1 (+ 2 (+ 3  (sum '()))))
> > => (+ 1 (+ 2 (+ 3 0)))
> > => (+ 1 (+ 2 3))
> > => (+ 1 5)
> > 6
> >
> > > (sum-tailrec '(1 2 3))
> > => (loop '(1 2 3) 0)
> > => (loop '(2 3) 1)
> > => (loop '(3) 3)
> > => (loop '() 6)
> > 6
> >
> > The tail recursive version is much better because you can perform the calculation on parts of the list and then ditch those parts and accumulate your results as you go. But for functions which don't really have any means of accumulating their results, is there any advantage to tail recursion? If so, how much? For clarity, consider these two implementations of filter:
> >
> > (define (my-filter p xs)
> >   (cond
> >     [(empty? xs) xs]
> >     [(p (first xs)) (cons (first xs) (my-filter p (rest xs)))]
> >     [else (my-filter p (rest xs))]))
> >
> > (define (my-filter-tail-rec p xs)
> >   (let loop ([xs xs] [filtered '()])
> >     (cond
> >       [(empty? xs) (reverse filtered)]
> >       [(p (first xs)) (loop (rest xs) (cons (first xs) filtered))]
> >       [else (loop (rest xs) filtered)])))
> >
> > > (my-filter even? '(1 2 3 4))
> > => (my-filter even? '(2 3 4))
> > => (cons 2 (my-filter even? '(3 4)))
> > => (cons 2 (my-filter even? '(4)))
> > => (cons 2 (cons 4 (my-filter even? '())))
> > => (cons 2 (cons 4 '()))
> > => (cons 2 '(4))
> > '(2 4)
> >
> > > (my-filter-tail-rec even? '(1 2 3 4))
> > => (loop '(1 2 3 4) '())
> > => (loop '(2 3 4) '())
> > => (loop '(3 4) '(2))
> > => (loop '(4) '(2))
> > => (loop '() '(4 2))
> > => (reverse '(4 2))
> > '(2 4)
> >
> > The tail recursion doesn't really help much in this problem, because the only real difference is that the tail recursive version holds the filtered values in a list instead of consing them on the stack. With the summing, you could take the values and combine them into something much more compact in terms of space by just summing them and passing the partial sum, but there is no logical "compacting" operation that can be done with filtering. So is there much of any performance or algorithmic difference between tail recursive and normal recursive functions when tail recursion is achieved simply by storing intermediate values in a list argument instead of consing them on the stack? And is that difference significant enough to justify the cost of reversing the list and the added complexity of writing the code in a tail recursive fashion?
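> >
> > One rough way to gauge whether the difference matters in practice is just to measure it; a sketch assuming the two filter definitions above (the list length is arbitrary and timings are machine-dependent):
> >
> > ;; Compare run time and allocation for both versions on a long list
> > ;; of small integers; void discards the result so it isn't printed.
> > (define big (range 1000000))
> > (time (void (my-filter even? big)))
> > (time (void (my-filter-tail-rec even? big)))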
> > ____________________
> >  Racket Users list:
> >  http://lists.racket-lang.org/users
> 
> 



Posted on the users mailing list.