Propagator networks are a pain to program. You have to break up your
problem into a set of computation nodes, allocate cells for
intermediate values, and wire up the inputs and outputs. It's
straightforward enough to do this by hand for a small program, but it
would be pretty nasty for anything more complex. In addition, propagator
networks for recursive functions require a bit of trickery to get them
to work. On the other hand, the standard recursive descent evaluation
strategy, while being a much easier model to program to, imposes
implicit restrictions on evaluation order that get in the way of
non-deterministic programming.
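To make the hand-wiring concrete, here is a minimal sketch of a propagator network. The course material is in Scheme; this is Python, and the `Cell`/`propagator` names are mine, not the course's:

```python
# A minimal propagator network, wired by hand.  A Cell holds a value
# and a list of propagators to poke when the value changes; a
# propagator recomputes its output whenever all of its inputs are known.

class Cell:
    def __init__(self, name):
        self.name = name
        self.content = None
        self.neighbors = []   # propagators watching this cell

    def add_content(self, value):
        if value is None or value == self.content:
            return
        if self.content is not None:
            raise ValueError("contradiction in cell " + self.name)
        self.content = value
        for run in list(self.neighbors):
            run()

def propagator(inputs, output, fn):
    def run():
        vals = [c.content for c in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for c in inputs:
        c.neighbors.append(run)
    run()

# Even a single addition takes three cells and one computation node:
a, b, total = Cell('a'), Cell('b'), Cell('total')
propagator([a, b], total, lambda x, y: x + y)
a.add_content(3)
b.add_content(4)   # total.content is now 7
```

Multiply that bookkeeping by every subexpression in a real program and you can see why doing it by hand doesn't scale.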
The solution is to compile our Scheme programs into a propagator
network. This is what Chris Hanson showed us on day 5. This is
doable, but it is quite tricky. I don't think Chris had the time to
explain all the nuances of the compilation, but if he had a difficult
time making it work correctly, it can't be easy.
One thing that I thought was unusual was the way recursive procedures
were compiled. The propagator network for a recursive procedure is
not instantiated and linked until the procedure is actually invoked.
This allows recursive procedures to run by instantiating only as much
of the propagator graph as necessary to compute the answer. What
strikes me as unusual about this is that a standard expression-based
language usually prevents runaway recursion by making conditionals be
non-strict. The propagator network seems to have the recursion
grounded in a different place. I asked Chris if this was an issue, and
he thinks it may be what is causing some of the compilation
difficulties.
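Here is my reconstruction of the lazy-instantiation trick, in Python rather than Scheme, and not Chris's actual compilation scheme: a `compound` propagator doesn't build its body until one of its input cells actually receives a value, so a recursive network only unrolls as far as the computation needs.

```python
# Sketch of lazy instantiation for recursive networks (my own
# reconstruction, with invented names like `compound`).

class Cell:
    def __init__(self):
        self.content = None
        self.neighbors = []

    def add_content(self, value):
        if value is None or value == self.content:
            return
        self.content = value
        for run in list(self.neighbors):
            run()

def propagator(inputs, output, fn):
    def run():
        vals = [c.content for c in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for c in inputs:
        c.neighbors.append(run)
    run()

def compound(inputs, output, build):
    built = [False]
    def maybe_build():
        # Only instantiate the body once an input cell has content.
        if not built[0] and any(c.content is not None for c in inputs):
            built[0] = True
            build(inputs, output)
    for c in inputs:
        c.neighbors.append(maybe_build)
    maybe_build()

def factorial_network(n, out):
    compound([n], out, build_factorial)

def build_factorial(inputs, out):
    (n,) = inputs
    recur_in, recur_out = Cell(), Cell()
    factorial_network(recur_in, recur_out)     # built lazily, on demand
    propagator([n, recur_out], out, lambda a, b: a * b)
    def decide():
        if n.content == 0:
            out.add_content(1)                 # base case grounds the recursion
        elif n.content is not None:
            recur_in.add_content(n.content - 1)  # forces one more level
    n.neighbors.append(decide)
    decide()

n, out = Cell(), Cell()
factorial_network(n, out)
n.add_content(3)   # out.content is now 6, after unrolling four levels
```

Note where the recursion is grounded: nothing about the conditional is non-strict; instead, the base-case branch simply never feeds the inner input cell, so the next level of the graph is never instantiated.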
Now that we have generated a propagator network, we need to run it.
It is relatively straightforward to create a scheduling process that
notes which value cells have changed and pushes the values through the
appropriate computation nodes, but the performance is nothing to write
home about. What is more interesting is to create a propagation-based
virtual machine and run our propagator network against that. We can
use JIT technology to accelerate the virtual machine and get
reasonable performance. It may also be possible to partition the
propagator network into independent or loosely coupled subnets. If
so, then we can distribute the computation of the subnets among
several physical machines.
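That straightforward scheduling process can be sketched as a worklist: cells alert the propagators watching them, alerted propagators go on a queue, and we pump the queue until the network is quiescent. This is a sketch of the strategy, not the course's actual implementation:

```python
# A simple worklist scheduler for a propagator network.
from collections import deque

class Scheduler:
    def __init__(self):
        self.queue = deque()
        self.pending = set()

    def alert(self, prop):
        if id(prop) not in self.pending:   # don't queue the same node twice
            self.pending.add(id(prop))
            self.queue.append(prop)

    def run(self):
        # Pump the worklist until the network is quiescent.
        while self.queue:
            prop = self.queue.popleft()
            self.pending.discard(id(prop))
            prop()

class Cell:
    def __init__(self, sched):
        self.sched = sched
        self.content = None
        self.neighbors = []

    def add_content(self, value):
        if value is None or value == self.content:
            return
        self.content = value
        for prop in self.neighbors:        # alert the scheduler rather
            self.sched.alert(prop)         # than running nodes directly

def propagator(sched, inputs, output, fn):
    def run():
        vals = [c.content for c in inputs]
        if all(v is not None for v in vals):
            output.add_content(fn(*vals))
    for c in inputs:
        c.neighbors.append(run)
    sched.alert(run)

sched = Scheduler()
a, b, s, d = (Cell(sched) for _ in range(4))
propagator(sched, [a, b], s, lambda x, y: x + y)
propagator(sched, [s], d, lambda x: 2 * x)
a.add_content(1)
b.add_content(2)
sched.run()        # d.content is now 6
```

Because the queue imposes no particular order on independent propagators, this is also where the opportunities for JIT-style acceleration and for partitioning the network across machines would come in.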
This is where we wrapped up the course.
Next post, maybe I'll mention some ideas that occurred to me while I
was listening.
Well, compilation for data flow architectures was a major MIT thing for quite some time... See references like http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.8780 . IIRC, removing / transforming recursion was a major issue.
Your comments about how to compile a recursive function remind me about how one computes Pi, say, with guaranteed error bounds ("constructively"). The number of iterations of the Brent algorithm depends on the accuracy needed; to improve runtime performance intermediate results are cached (like in your cells); and precision requirements in intermediate results are propagated backwards from the final precision requirement.
Your recent posts have been very interesting, thanks.