This experiment with writing an MIT-Scheme S-code interpreter in C# was successful in these ways:
- It showed that the S-code interpreter is an independent component of the Scheme system. The interpreter substrate can be replaced with a new implementation, written in a different language, using a different evaluation strategy, without replacing the Scheme runtime system written in Scheme.
- It showed that the S-code interpreter can, on small segments of code, perform as fast as compiled code. However, growing the size of these small segments causes an exponential increase in the number of interpreter specializations. The obvious solution of automatically generating interpreter specializations on demand is the equivalent of JIT compilation.
- It validated the idea that the lexical environment can be represented as a flattened vector of values. Mutable variables can be implemented by cell conversion, and variable values are copied from outer scopes to inner scopes when closures are created. The semantics of such an implementation are equivalent to the semantics of the nested series of frames used in MIT-CScheme. (A sketch of this representation appears after this list.)
- It showed that you can implement tail recursion with a trampoline at each call site, and that you can implement first-class continuations by testing for a magic return value after each trampolined call returns. We don’t use the C# exception-handling mechanism to unwind the stack when a continuation is invoked, just a conditional branch and a normal return, which is far less complicated and expensive. (This, too, is sketched after the list.)
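To make the flattened-environment idea concrete, here is a minimal sketch, assuming a representation along these lines. The names (ValueCell, FlatClosure) and the details are my own illustration, not the actual implementation’s classes.

```csharp
// Hypothetical illustration of a flattened environment with cell conversion.
// A mutable variable is "cell converted": its slot holds a ValueCell, and
// both readers and writers go through the cell, so a copy of the slot in an
// inner closure still sees later assignments.
class ValueCell
{
    public object Value;
    public ValueCell(object value) { Value = value; }
}

// A closure captures its free variables by copying their slots out of the
// enclosing scope into a flat vector.  Immutable variables are copied as
// plain values; mutable ones are copied as shared ValueCells.
class FlatClosure
{
    readonly object[] slots;   // the flattened lexical environment
    public FlatClosure(object[] slots) { this.slots = slots; }

    public object Lookup(int index)
    {
        object slot = slots[index];
        return slot is ValueCell cell ? cell.Value : slot;
    }

    public void Assign(int index, object newValue)
    {
        // Only cell-converted (mutable) slots may be assigned.
        ((ValueCell)slots[index]).Value = newValue;
    }
}
```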
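Similarly, here is a minimal sketch of the per-call-site trampoline and the magic return value used for continuations. The names (SCode, EvalStep, UnwindStack) and the exact signatures are assumptions for illustration, not necessarily how the interpreter spells them.

```csharp
// Hypothetical sketch of a call-site trampoline with a "magic" return value
// that signals continuation unwinding.
abstract class SCode
{
    // Returns true to request a tail call: the callee and its environment are
    // left in the ref parameters so the caller's loop can re-dispatch without
    // growing the C# stack.
    public abstract bool EvalStep(out object answer,
                                  ref SCode expression,
                                  ref object environment);
}

static class Interpreter
{
    // Distinguished object that signals a first-class continuation is being
    // unwound through this frame.
    public static readonly object UnwindStack = new object();

    public static object Eval(SCode expression, object environment)
    {
        object answer;
        // The trampoline: keep stepping while the callee asks for a tail call.
        while (expression.EvalStep(out answer, ref expression, ref environment)) { }

        if (answer == UnwindStack)
        {
            // A continuation was invoked somewhere below us.  No exception is
            // thrown; we just return the magic value so the frame above us
            // unwinds too, one ordinary return at a time.  (A fuller sketch
            // would record this frame's state before propagating.)
            return UnwindStack;
        }
        return answer;
    }
}
```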
It was a failure in these ways:
- Although it showed one way in which we could take incremental steps to increase the speed of the interpreter until it approached the speed of compiled code, each step caused an exponential increase in the number of specializations in the interpreter and offered diminishing returns. (A sketch of what one of these specializations looks like follows this list.)
- The ultimate outcome of this process would be an interpreter with thousands of specializations. Small Scheme programs could be completely represented by a single specialization, and they would be interpreted as fast as compiled code. But that is because the specialization is essentially a compiled version of the Scheme program. In other words, we would ultimately have an interpreter that “optimizes” by looking up a program in a huge table that maps small programs to their precomputed compiled bodies. This is just an unusual and inefficient way to implement a compiler.
- Because C# offers no way to dump the heap in binary format, we must cold load the system each time we start it.
- One of the tasks in the cold load is to initialize the Unicode tables. These are big tables that take a long time to initialize.
- It took an annoyingly long time to get to Scheme’s initial top-level prompt.
- Debugging crashes in the Scheme system was annoying and slow because we had to cold load the system to reproduce each bug.
- I have understated a large component of the work: providing a new C# implementation for each of the hundreds of primitives in the Scheme runtime. I only bothered to implement the primitives called during the cold-load boot sequence, but there were still a lot of them. For many of these primitives, the C# implementation only had to achieve the same effect “in spirit” as the MIT-CScheme implementation; these were easy to write. But some were more persnickety: it was vital that the C# implementation produce exactly the same bits as the MIT-CScheme implementation. For instance, the code that hashes types for generic method dispatch had to produce exactly the same hash values in both implementations, because other code depends on the hashed multimethod ending up at a precomputed location in a method cache.
- The C# interpreter was complete enough to boot a Scheme cold load and run it to the top-level prompt. It could run the top-level REPL. But much was missing. It could not host the SF program, which generates the S-code for the Scheme runtime. You’d have to run an original instance of MIT-CScheme to generate the S-code that you would then run in the C# interpreter.
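As an illustration of the specialization point above, here is a hypothetical example of what one specialization might look like. The node classes and the simplified Eval signature are mine, for exposition only; they are not the interpreter’s actual ones.

```csharp
// Hypothetical illustration of interpreter specialization.
class Cons
{
    public object Car, Cdr;
    public Cons(object car, object cdr) { Car = car; Cdr = cdr; }
}

abstract class Node
{
    public abstract object Eval(object[] environment);
}

// A reference to a slot in the flattened environment.
class Variable : Node
{
    readonly int slot;
    public Variable(int slot) { this.slot = slot; }
    public override object Eval(object[] environment) => environment[slot];
}

// Generic combination: (car <operand>), where the operand is any expression.
// Evaluating it costs a virtual dispatch for the operand plus one for itself.
class Car : Node
{
    readonly Node operand;
    public Car(Node operand) { this.operand = operand; }
    public override object Eval(object[] environment) =>
        ((Cons)operand.Eval(environment)).Car;
}

// Specialization: (car (cdr <variable>)) fused into one node.  The dispatch
// on operand shapes was done once, when the node was built, so interpreting
// it approaches the cost of compiled code.
class CarOfCdrOfVariable : Node
{
    readonly int slot;
    public CarOfCdrOfVariable(int slot) { this.slot = slot; }
    public override object Eval(object[] environment) =>
        ((Cons)((Cons)environment[slot]).Cdr).Car;
}
```

Every distinct combination of operator and operand shapes needs a class like the last one, which is where the explosion in the number of specializations comes from.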
I think the next Lisp system I will try should be based around a simple, portable JIT compiler.