Tuesday, May 5, 2009

You knew I'd say something, Part V

Is this horse dead yet? At the beginning of this series I said that my goal was to clear up a few misconceptions about tail recursion:
  • Tail recursion is primarily syntactic sleight-of-hand that allows a programmer to use a function call rather than a traditional iteration construct.
    Tail recursion is a means of reducing the asymptotic space complexity of some programs. Tail recursion unifies function calls and message passing in a synchronous, sequential Actor model.
  • Its purpose is to avoid writing looping constructs.
    Its purpose is to allow access to more powerful programming models.
  • Any code that depends on tail recursion can easily be rewritten to use a looping construct instead. Such a rewrite is a trivial local transformation.
    Certain constructs that depend on tail recursion cannot be rewritten into a looping construct without global changes to the code.
  • People who like tail recursion are burdening their code, but they expect the implementor to bear the weight.
    Proper tail recursion cannot be added within the language without adding devices such as trampolines and imposing a calling convention on all users.
  • Tail recursion causes loss of valuable debugging information.
    Means to retain stack traces while preserving tail recursion have been known for close to thirty years.
My primary intention was to get at least a few people to think a little bit harder about tail recursion and not simply dismiss it as a cute trick. I think I succeeded at this.
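To make the space-complexity point from the list concrete, here is a small Python sketch (my own illustration, not code from this series). The tail-recursive version is in tail position syntactically, but CPython does not eliminate tail calls, so its stack usage stays O(n); with proper tail calls it would be O(1), the same as the loop.

```python
def length_tail(xs, acc=0):
    """Tail-recursive length.  With proper tail calls this runs in
    O(1) stack space; CPython allocates a frame per call, so the
    stack usage here is actually O(n)."""
    if not xs:
        return acc
    return length_tail(xs[1:], acc + 1)   # tail position

def length_loop(xs):
    """The same computation as an explicit loop: O(1) stack space
    in any implementation."""
    acc = 0
    while xs:
        xs, acc = xs[1:], acc + 1
    return acc

print(length_tail([1, 2, 3]), length_loop([1, 2, 3]))  # → 3 3
```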

I wanted to follow up on some questions that people have raised in the comments.

Mr. Jerf said this about my Visitor Pattern:
You need to show an example where the solution would be more elegant than the canonical Python solution if and only if it could use tail recursion, not a grotesquely less elegant solution than is literally constructed for the purpose of blowing the stack.
and then
I posted in part II, complaining that you need to show something that isn't better written another way. (I still disagree with your later reply, because it isn't fair to say you need to globally rewrite code you specifically chose for the fact nobody would write it that way; it would have been iterators from day one.)
My purpose in part II was to demonstrate that not all tail-recursive code can be easily changed into an equivalent loop. The example is admittedly contrived for that specific purpose, but I don't think that invalidates it as an example. I chose the Visitor pattern because it makes use of callbacks. Callbacks are quite frequently called from tail position and could take advantage of tail recursion. If a Visitor callback happens to perform further visits (a not uncommon thing to do), it opens up the possibility of very deep recursion that could benefit from tail recursion.
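A hypothetical Python sketch of this situation (my own names, not taken from Zope or the XML libraries): the visitor callback performs a further visit, the further visit is in tail position, and a deep structure exhausts the stack anyway because the host language keeps a frame for each tail call.

```python
import sys

class Node:
    """A chain of nodes; visit hands the node to a callback."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

    def visit(self, callback):
        callback(self)

def walk(node):
    # The further visit is in tail position, but CPython still
    # pushes a frame for it, so a deep chain exhausts the stack.
    if node.next is not None:
        node.next.visit(walk)

# Build a chain deep enough to exceed the recursion limit.
head = None
for i in range(sys.getrecursionlimit()):
    head = Node(i, head)

overflowed = False
try:
    head.visit(walk)
except RecursionError:
    overflowed = True
print("overflowed:", overflowed)  # → overflowed: True
```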

The Visitor pattern is simply a form of continuation-passing style (shhh! Don't tell anyone).

I also disagree that ‘nobody would write it that way’. I did a bunch of poking around through Python source code to see whether people would use the Visitor pattern, and whether they would use it in such a way that could lead to stack overflow. I found several examples of visitors. In Zope there is an AST walker, in various XML libraries there are graph walkers, there even seem to be a few visitors in Python itself. I checked to see if there were any cases where a visitor performed further visits. There are. My tiny example code is a distilled version of these that keeps only the interesting part.

The point, however, was not to demonstrate the benefits of tail recursion or to persuade Python programmers to change either their language or programming style. It was simply to point out that not every use of tail recursion could be easily transformed into a loop, and I wished to provide a small, but not completely unrealistic example.

Mr. Prescod said:
I still feel that the last post is incomplete without code examples in TCO versus trampoline style.
That's a fair amount of work! I think you may be in luck, though. My MIT-Scheme interpreter used to be written with a simple trampoline in order to do tail recursion (revision 196). The trampoline was this:
void Run ()
{
    // The etc argument isn't used.
    // We want EvalStep to be an expression rather
    // than a statement, so we chain the return value
    // around in case we think of a use in the future.
    object etc = null;
    while (true)
        etc = this.expression.EvalStep (this, etc);
}
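If it helps to see the shape of this loop outside C#, here is a minimal Python sketch of the same idea (the names are my own illustration, not MIT-Scheme's). Each step either stores the next expression to evaluate, or signals that a final value is ready; because every step returns to the loop instead of calling the next step directly, the host stack never grows.

```python
class Interpreter:
    """A toy trampoline, patterned loosely on the Run loop above."""
    def __init__(self, expression):
        self.expression = expression
        self.value = None
        self.done = False

    def run(self):
        # The trampoline: take one evaluation step at a time.  Each
        # step RETURNS here rather than calling the next step, so the
        # host stack stays flat no matter how many steps we take.
        while not self.done:
            self.expression.eval_step(self)
        return self.value

class Constant:
    def __init__(self, value):
        self.value = value
    def eval_step(self, interp):
        interp.value = self.value
        interp.done = True

class Countdown:
    """Schedules itself n times: a deep 'tail recursion' run flat."""
    def __init__(self, n):
        self.n = n
    def eval_step(self, interp):
        interp.expression = (Constant("done") if self.n == 0
                             else Countdown(self.n - 1))

result = Interpreter(Countdown(100_000)).run()
print(result)  # → done
```

A hundred thousand chained steps would overflow any reasonable call stack if each one called the next directly; here they run in constant stack space.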
Each node in the AST had an EvalStep method. Here's the method for a conditional expression:
internal override object EvalStep (Interpreter interpreter, object etc)
{
    Conditional.evaluationCount += 1;
    return interpreter.EvalSubproblem (this.predicate, 
            new ConditionalDecide (interpreter.Continuation, 
                                   this,
                                   interpreter.Environment));
}
This is an example of a non-tail-recursive call. The predicate subexpression will be evaluated and its value will be passed to the ConditionalDecide object. The definition of EvalSubproblem is this:
internal object EvalSubproblem (SCode sCode, Continuation continuation)
{
    this.expression = sCode;             
    this.continuation = continuation;             
    this.history.NewSubproblem (sCode, this.environment);             
    return null;
}
So we put the predicate subexpression in interpreter.expression, put the continuation in interpreter.continuation, record the call in the History, and then return null.

The null is the value returned to the trampoline. interpreter.expression is now the predicate subexpression, so evaluation proceeds at that point. By returning to the trampoline on each evaluation step, we avoid pushing the stack. We still have to be careful not to heap allocate on the tail calls, though. So let's look at how that works.

Suppose our predicate was a constant. EvalStep for a constant is this:
internal override object EvalStep (Interpreter interpreter, object etc)
{
    Quotation.evaluationCount += 1;
    return interpreter.Return (this.item);         
} 
and we'll need to see how interpreter.Return works:
internal object Return (object value)
{
    Continuation cont = this.continuation;
    this.continuation = cont.Parent;
    return cont.Invoke (this, value);
}
To return a value, we invoke the continuation. The continuations are chained together in the heap like a stack, so we put the parent continuation in place when we invoke the topmost continuation.

The ConditionalDecide object is the continuation. Continuations have Invoke methods:
internal override object Invoke (Interpreter interpreter, object value)
{
    return interpreter.EvalReduction ((value is bool && (bool) value == false)
                                      ? this.Expression.Alternative
                                      : this.Expression.Consequent,
                                      this.Environment);
}
A conditional will tail call either the alternative or the consequent, so we select which subexpression is to be evaluated. EvalReduction arranges for the tail call:
internal object EvalReduction (SCode sCode, Environment environment)
{
    this.expression = sCode;
    this.environment = environment;
    this.history.NewReduction (this.expression, this.environment);
    return null;         
}
We put the alternative or the consequent into expression, put the environment into environment, record the tail call in the History, and then return null. As before, this null is returned to the trampoline.

The difference between EvalSubproblem and EvalReduction is that EvalSubproblem needs an explicit continuation object and EvalReduction implicitly uses the existing continuation in the interpreter. Each time we evaluate a subproblem, we allocate storage for the continuation. Each time we evaluate a reduction (a tail call), we just use the existing continuation.
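The subproblem/reduction distinction can be sketched in miniature. Below is a hedged Python model of the same machinery (all names are my own approximations of the C# code, not the actual source): evaluating the predicate of a conditional allocates one continuation, while evaluating the chosen branch reuses the continuation already in place.

```python
class Interp:
    def __init__(self, expression):
        self.expression = expression
        self.continuation = None   # chain of pending continuations
        self.value = None
        self.done = False

    def run(self):
        while not self.done:
            self.expression.eval_step(self)
        return self.value

    def eval_subproblem(self, expr, cont):
        # A subproblem allocates: install a freshly built continuation.
        self.continuation = cont
        self.expression = expr

    def eval_reduction(self, expr):
        # A reduction (tail call) reuses the existing continuation.
        self.expression = expr

    def ret(self, value):
        cont = self.continuation
        if cont is None:
            self.value, self.done = value, True
        else:
            self.continuation = cont.parent   # pop the heap "stack"
            cont.invoke(self, value)

class Quotation:
    def __init__(self, item):
        self.item = item
    def eval_step(self, interp):
        interp.ret(self.item)

class Conditional:
    def __init__(self, predicate, consequent, alternative):
        self.predicate = predicate
        self.consequent = consequent
        self.alternative = alternative
    def eval_step(self, interp):
        # The predicate is a subproblem: we need its value back here.
        interp.eval_subproblem(self.predicate,
                               ConditionalDecide(interp.continuation, self))

class ConditionalDecide:
    def __init__(self, parent, expr):
        self.parent = parent
        self.expr = expr
    def invoke(self, interp, value):
        # The chosen branch is a reduction: no new continuation.
        interp.eval_reduction(self.expr.consequent if value
                              else self.expr.alternative)

expr = Conditional(Quotation(False), Quotation("yes"), Quotation("no"))
print(Interp(expr).run())  # → no
```

Tracing one run: the conditional allocates a ConditionalDecide for the predicate, the predicate returns through it, and the selected branch then runs on whatever continuation the conditional itself was given.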

Now, just for the sake of argument, suppose I had written this for ConditionalDecide.Invoke:
internal override object Invoke (Interpreter interpreter, object value)
{
    return interpreter.EvalSubproblem
                 ((value is bool && (bool) value == false)
                      ? this.Expression.Alternative
                      : this.Expression.Consequent,
                  this.Environment,
                  new ConditionalFinish (interpreter.Continuation));
}
where ConditionalFinish has this Invoke method:
internal override object Invoke (Interpreter interpreter, object value)
{
    return interpreter.Return (value);         
}
What I've basically done here is take Alternative or Consequent out of tail position. In order to run the code as a subproblem, I have to supply an explicit continuation (which I am calling ConditionalFinish). ConditionalFinish is not a very useful piece of code. It has no state except for the parent continuation, and the Invoke method just unwraps that and invokes it. The effect is virtually identical to using the implicit continuation with the only difference being a waste of time and space.

Now, how would this look if I could just use tail recursion? Here's the EvalStep for conditionals:
internal override object EvalStep (Interpreter interpreter, object etc)
{
    Conditional.evaluationCount += 1;
    object value = interpreter.EvalSubproblem (this.predicate);
    if (value is bool && (bool) value == false)
        return interpreter.EvalReduction (this.Alternative);
    else
        return interpreter.EvalReduction (this.Consequent);
}
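For contrast, here is the direct style written out in a host language without tail-call elimination (a hedged Python sketch with my own names, standing in for the C# above). The branch calls are syntactic tail calls, and with proper tail calls they would run in constant stack; without them, a long chain of conditionals overflows, which is exactly what the trampoline machinery exists to avoid.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    item: object

@dataclass
class If:
    predicate: object
    consequent: object
    alternative: object

def eval_direct(expr):
    """Direct-style evaluator.  The branch calls are in tail position;
    with tail-call elimination they would not grow the stack, but in
    Python each one pushes a frame."""
    if isinstance(expr, Quote):
        return expr.item
    if eval_direct(expr.predicate):          # subproblem: value needed here
        return eval_direct(expr.consequent)  # tail position
    return eval_direct(expr.alternative)     # tail position

# A chain of nested conditionals deep enough to overflow Python's stack.
expr = Quote("bottom")
for _ in range(5000):
    expr = If(Quote(True), expr, Quote("unused"))

overflowed = False
try:
    eval_direct(expr)
except RecursionError:
    overflowed = True
print("overflowed:", overflowed)  # → overflowed: True
```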

2 comments:

EM said...

Great post. I've been enjoying these!

Mr. Cat said...

That's a really nice example.
Indeed, it is better to implement an interpreter for a tail-recursive language in a tail-recursive language.