amb. There is a hand-written loop that generates and tests various moves in various orders looking for a good series of moves that leads to a good position. But conceptually, you could use
amb and it would work about the same (if slower).
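The generate-and-test idea behind amb can be sketched in Python (the post's context is Scheme, but the idea carries over). The function and argument names here are illustrative, not from the post; a real amb in Scheme would use continuations rather than explicit recursion.

```python
# A minimal amb-style search: try each alternative for the first choice,
# recursively resolve the remaining choices, and backtrack on failure.

def amb_search(choice_lists, constraint, chosen=()):
    if not choice_lists:
        return chosen if constraint(*chosen) else None
    for alternative in choice_lists[0]:
        result = amb_search(choice_lists[1:], constraint, chosen + (alternative,))
        if result is not None:
            return result
    return None  # exhausted the alternatives at this choice point

# Find one number from each range whose product is 12.
print(amb_search([[1, 2, 3], [4, 5, 6]], lambda a, b: a * b == 12))  # → (2, 6)
```

The hand-written loop the post describes does the same work, just with the choice structure spelled out explicitly instead of delegated to a search operator.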
Now as you search for a good move, you have a temporal constraint: you must move before your opponent can respond, and you must wait for your opponent's move before you can make the next move. The computer simulates you and your opponent taking turns moving pieces and then looks carefully at the possible board positions at the end of the series of moves to determine which side is winning. If the computer is losing, it backtracks to one of its earlier moves and tries an alternative. But this method of backtracking has a big problem: every time you backtrack a move, you forget your opponent's response. So suppose you find that if you move your knight, your opponent will kill you with a queen-rook combination three moves later. Aha! We'll backtrack to that knight move and try a bishop. And your opponent kills you with the same queen-rook combination three moves later. Aha, we'll backtrack to that bishop and try a pawn. (You know what happens.) The problem is that your opponent's queen-rook combination is independent of the knight, bishop, or pawn moves on your part, but each time you backtrack you forget this and rediscover it. What you really want to do is see that the queen-rook combination in three moves is a threat and figure out a move *now* that affects that outcome. How do you backtrack without forgetting?
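The wasted rediscovery is easy to see in miniature. In this sketch (hypothetical names, not the post's code), the opponent's refutation depends only on the reply, not on our move, yet chronological backtracking re-runs the refutation check once per alternative we try.

```python
# Chronological backtracking over two choices. The refutation is independent
# of 'my_move', but the nested-loop search rediscovers it for every move.

refutation_checks = 0

def opponent_refutes(reply):
    """Pretend the queen-rook combination works against any of our moves."""
    global refutation_checks
    refutation_checks += 1
    return True  # every line loses the same way

def search():
    for my_move in ["knight", "bishop", "pawn"]:   # our choice point
        for reply in ["queen-rook line"]:          # opponent's response
            if not opponent_refutes(reply):
                return (my_move, reply)
    return None  # no move survives

search()
print(refutation_checks)  # → 3: the same refutation, rediscovered three times
```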
Return to the multiple-dwelling puzzle. If you tried to re-order the clauses that generate possible solutions, you'd notice that there is no optimal ordering. Optimizing for one set of constraints makes another less optimal, and you cannot satisfy all of them at once. Sussman pointed out that this is a direct consequence of the underlying evaluation strategy. Nearly every computer language uses a ‘recursive descent’ evaluation model. (Even if the language is compiled, the various subexpressions are run in recursive descent order.) To a certain extent, this is an artifact of the abstract syntax being a tree. Each node of the abstract syntax tree represents a language construct that can contain zero or more subnodes (subexpressions), but has exactly one parent node (the containing expression). It is natural to interpret the language through a tree walk, and thus we get a recursive descent evaluator.
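For reference, here is the multiple-dwelling puzzle as a brute-force Python sketch (permutations standing in for amb). Whichever order you write the conjuncts in, some assignments are rejected late: moving one test earlier prunes its failures sooner but delays pruning for the others, which is the no-optimal-ordering point above.

```python
# SICP's multiple-dwelling puzzle: five people on five distinct floors.
from itertools import permutations

def solve():
    for baker, cooper, fletcher, miller, smith in permutations([1, 2, 3, 4, 5]):
        if (baker != 5                      # Baker not on the top floor
                and cooper != 1             # Cooper not on the bottom floor
                and fletcher not in (1, 5)  # Fletcher on neither extreme
                and miller > cooper         # Miller above Cooper
                and abs(smith - fletcher) != 1     # Smith not adjacent to Fletcher
                and abs(fletcher - cooper) != 1):  # Fletcher not adjacent to Cooper
            return {"baker": baker, "cooper": cooper, "fletcher": fletcher,
                    "miller": miller, "smith": smith}

print(solve())  # → {'baker': 3, 'cooper': 2, 'fletcher': 4, 'miller': 5, 'smith': 1}
```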
In fact, it is so natural that it is really hard to imagine something different. But an example of a different evaluation mode might be the way Scheme's syntax-rules hygienic macro system works. Rather than recursively evaluating the subforms of an expression, macro expansion involves a pattern match against the entire expression followed by a rewrite of the form. (Then comes a recursive descent phase when we expand the subforms.)
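A toy model of that whole-form-rewrite-then-descend order, sketched in Python over nested lists (this is not Scheme's actual expander, and it handles only a single-binding let, with no hygiene):

```python
# Expand a toy 'let' macro: (let ((v e)) body) => ((lambda (v) body) e).
# The rewrite matches the *entire* form first; descent into subforms comes after.

def expand(form):
    if isinstance(form, list) and form and form[0] == "let":
        bindings, body = form[1], form[2]
        (var, init) = bindings[0]
        form = [["lambda", [var], body], init]  # whole-form rewrite
    if isinstance(form, list):                  # recursive descent phase
        return [expand(sub) for sub in form]
    return form

print(expand(["let", [["x", 1]], ["+", "x", "x"]]))
# → [['lambda', ['x'], ['+', 'x', 'x']], 1]
```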
A recursive descent evaluator uses the stack to keep track of the control state of our program. When we backtrack to a decision point, we return to a prior control point by discarding the intermediate state. We are changing the control flow of the program. The unwanted side effect of this is that we are also changing the data flow of our program. The values that are returned from subexpressions are computed as we evaluate the subexpressions, and if we back out of the evaluation, we forget the values. But what if we could separate the control flow from the data flow? We cannot completely divorce the two because computing a value usually means transferring control to the expression that denotes the value, but we need not forget the value once it is computed if we back out of a computation.
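One crude approximation of keeping the data while unwinding the control is memoization: the computed value lives in a table that outlives the stack frames. (This is only an analogy for the separation the post describes, not what propagator networks actually do; the names below are illustrative.)

```python
# A cached subcomputation survives backtracking: unwinding the loop's control
# state no longer discards the value it computed.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def expensive(x):
    global calls
    calls += 1          # count how often we really recompute
    return x * x

def search():
    for choice in [1, 2, 3]:
        # expensive(10) is independent of 'choice'; the cache remembers its
        # value across backtracks instead of recomputing it each time.
        if expensive(10) + choice == 103:
            return choice
    return None

print(search(), calls)  # → 3 1: three backtracks, one computation
```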
Here's how we do it: We take our original Scheme program (like the multiple dwelling problem) and generate a propagator network that computes the same value.
AMB nodes in the propagator network arbitrarily pick one of the values to emit. As suggested earlier, we'll propagate information about the values (the provenance) along with the values themselves. At the
require nodes, we check to see if we have satisfied the requirement. If we have, everything is ok, but if not, we examine the provenances of the values that the requirement depends upon. The relevant
AMB nodes will be in the provenances, and we can select one or more of those nodes to request a different value. This is backtracking, but it backtracks only the control directly involved in the data flow that affects the requirement. Other parts of the computation remain unaffected. This is true even if those other parts were originally syntactically intertwined with the part we are backing out of. This is called dependency-directed backtracking.
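The mechanism can be sketched like this (an illustrative Python model, not the seminar's propagator code): each value travels with the set of choice-node names that produced it, and when the requirement fails, only a node named in that provenance is asked to re-choose.

```python
# Values carry a provenance: the set of choice nodes that produced them.
# On failure, only nodes in the provenance are re-chosen; unrelated choices
# (node 'b' below) are never disturbed.

class AmbNode:
    def __init__(self, name, alternatives):
        self.name, self.alternatives, self.index = name, alternatives, 0

    def value(self):
        # Emit the current alternative, tagged with this node's name.
        return self.alternatives[self.index], {self.name}

    def next_alternative(self):
        self.index += 1

a = AmbNode("a", [1, 2, 3])
b = AmbNode("b", ["x", "y"])        # independent of the requirement below
nodes = {"a": a, "b": b}

retried = []
while True:
    (va, prov_a), (vb, prov_b) = a.value(), b.value()
    doubled, prov = va * 2, prov_a  # provenance flows along with the value
    if doubled == 6:                # the 'require' node
        break
    culprit = next(iter(prov))      # dependency-directed step: pick a node
    retried.append(culprit)         # from the provenance, not the stack
    nodes[culprit].next_alternative()

print(va, vb, retried)  # → 3 x ['a', 'a']: node 'b' was never re-chosen
```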
We can extend this idea to be more sophisticated. Rather than backtracking to an
AMB node and telling it to blindly choose a different alternative, we may be able to give it some hints so that it selects an alternative in a more intelligent way. For instance, if we have deduced that Cooper lives on the first floor, but find we have to recompute Fletcher (who is not on the top or bottom floor, nor next to Cooper), we can use the constraints to immediately put Fletcher on floor 3 rather than cycling through all the possibilities.
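That hint amounts to filtering the node's alternatives through the known constraints before choosing. A small sketch (the helper below is hypothetical, not the post's code):

```python
# Given Cooper's floor, compute Fletcher's admissible floors directly
# instead of cycling through all five and rejecting them one at a time.

def fletcher_candidates(cooper_floor, floors=range(1, 6)):
    return [f for f in floors
            if f not in (1, 5)                # not the top or bottom floor
            and f != cooper_floor             # floors are distinct
            and abs(f - cooper_floor) != 1]   # not adjacent to Cooper

print(fletcher_candidates(1))  # → [3, 4]: floor 3 is tried first
```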
It took a bit of time to get to this point. Propagator networks, McCarthy's
AMB, recursive descent evaluation, and backtracking are all old ideas, so it was easy to listen to this part of the seminar and more or less consider it just a review. But while the parts may be familiar, this way of combining them is new. In the next post, I'll review day 5, where we put a bit more thought into building programs this way.