Friday, September 25, 2009

Needed for a debugger?

Pascal Costanza writes:
I always find the argument that some language construct is supposedly “dangerous” a bit weird. It's too fuzzy to make such a statement, in my humble opinion. What's more important, I think, is this: Do you want to be able to implement portable runtime debuggers or not? If you want this, you need first-class environments.

Argh! I did use the word “dangerous” and I was determined not to. You are correct that it is too vague a term. What I really mean in this case is this: if the language specification requires that all variable bindings be exposable and mutable to arbitrary code (which is one possible definition of ‘first-class environment’), then ‘lexical scoping’ can no longer be guaranteed by the implementation. That drastically changes the character of the language in a particularly undesirable way: all reasoning about the code must take into account the entire body of code, not simply the lexically enclosing contexts.
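
The problem can be made concrete. Python closure cells happen to be writable, which makes a handy stand-in for an exposed, mutable environment; this is an illustration of the principle, not a claim about MIT Scheme:

```python
# Illustrative sketch: writable closure cells play the role of a
# first-class, mutable environment.
def make_adder(n):
    def add(x):
        return x + n          # lexically, n can only be the binding above
    return add

add5 = make_adder(5)
assert add5(1) == 6

# If any code anywhere may rebind n, 'add' can no longer be understood
# from its own text alone:
add5.__closure__[0].cell_contents = 100
assert add5(1) == 101         # the "lexical" binding changed underneath us
```

Once arbitrary code can do this, reading the definition of `add` tells you nothing reliable about what it computes.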

I disagree that you need first-class environments in order to write a portable runtime debugger. That's a complex assertion, so I'll need to make a few assumptions about what you mean (and please correct me if I'm wrong).

First, it seems to me that a debugger is not usually a portable program. It has to depend upon how the implementation works. A Scheme->C compiler would need some facility to debug the code generated by the C compiler. A Scheme implementation hand-written in assembly code would need special assembly routines to decode the binary layout of data structures. A Scheme implementation in C# would naturally have debugging information stored as .NET metadata in the image. Then there are implementation details. Some Scheme systems alpha-rename the variables. Others discard the variable names and use De Bruijn indexes instead. Some Scheme systems perform CPS conversion, others perform A-normal conversion. Some Scheme systems implement the interpreter as a finite state machine with a push-down stack, others use a ‘threaded’ interpreter, still others use a ‘lambda combinatorial’ approach. It seems to me that each variation will need its own means of debugging that cannot be easily shared with the other variations.
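
To make the De Bruijn point concrete: an implementation that discards names might run a conversion like this sketch (Python, with a made-up tuple representation of lambda terms). After the conversion there are simply no names left for a debugger to report:

```python
# Hypothetical sketch: convert named lambda terms to De Bruijn indexes.
# Terms: ('lambda', var, body); applications are tuples; names are strings.
def to_debruijn(expr, env=()):
    if isinstance(expr, str):                  # variable reference
        return env.index(expr)                 # 0 = innermost binder
    if expr[0] == 'lambda':                    # ('lambda', var, body)
        _, var, body = expr
        return ('lambda', to_debruijn(body, (var,) + env))
    return tuple(to_debruijn(e, env) for e in expr)  # application

# (lambda (x) (lambda (y) (x y))) becomes (lambda (lambda (1 0))):
# the names x and y are gone.
assert to_debruijn(('lambda', 'x', ('lambda', 'y', ('x', 'y')))) == \
       ('lambda', ('lambda', (1, 0)))
```

A debugger for such a system must reconstruct names from side tables the implementation keeps (or do without), which is exactly the kind of implementation-specific machinery that resists standardization.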

Second, it seems to me that the information the debugger presents to the user is also implementation dependent. Suppose an implementation had an extension that allowed you to declare runtime constants, and that this implementation would inline the constants where they were used. Would you still expect `bindings' for these constants to be visible on the stack?

If the debugger is completely, transparently portable, and it were used on the same piece of code in two different implementations, I assume that it would present exactly the same information in exactly the same way (by my definition of ‘complete and transparent’). It seems to me that this would impose a particular evaluation model on the code. I can see two ways to achieve this:
  1. Impose a canonical evaluation model that all implementations must follow. This would specify a particular environment model that must be used and maintained by the implementation.
  2. Write a portable meta-circular evaluator that is designed to work with the portable debugger.
Option 2 solves the problem without the need to standardize on first-class environments.
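
Option 2 is less exotic than it sounds. Here is a sketch in Python (a made-up representation, not MIT Scheme's): the evaluator carries its environments as plain data, so a debugger written against this evaluator inspects the same structures on any host implementation:

```python
# Miniature meta-circular evaluator whose environments are plain dicts
# chained by 'parent'. A "portable debugger" targets these dicts, not the
# host implementation's internals.
def lookup(env, name):
    while env is not None:
        if name in env['bindings']:
            return env['bindings'][name]
        env = env['parent']
    raise NameError(name)

def evaluate(expr, env):
    if isinstance(expr, str):                  # variable reference
        return lookup(env, expr)
    if isinstance(expr, (int, float)):         # literal
        return expr
    op, *args = expr
    if op == 'lambda':                         # ('lambda', params, body)
        params, body = args
        return ('closure', params, body, env)
    fn = evaluate(op, env)                     # application
    vals = [evaluate(a, env) for a in args]
    _, params, body, defenv = fn
    callenv = {'bindings': dict(zip(params, vals)), 'parent': defenv}
    return evaluate(body, callenv)

glob = {'bindings': {}, 'parent': None}
assert evaluate((('lambda', ('x',), 'x'), 42), glob) == 42
```

The environment model is then defined by the portable evaluator itself, and the underlying implementation remains free to compile, rename, and optimize however it likes.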

Let me take a moment to describe what MIT Scheme does.

In MIT Scheme, you can run code in one of three modes. The first mode is plain interpreted code. The source is read as text, then lightly processed to expand the macros and build an abstract syntax tree, which the interpreter walks directly.

The second mode is ‘syntaxed’. A program called ‘SF’ is run on the text. An abstract syntax tree is built, but then it is more heavily processed. In addition to expanding the macros, SF will ‘inline’ a number of primitive procedures and runtime constants and will simplify the code. The resulting AST is dumped to a file in binary mode. This processed code loads much more quickly and runs a fair amount faster than the original code.
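
The kind of rewriting SF does can be sketched abstractly. The pass below (Python, hypothetical; not SF's actual algorithm) inlines declared constants and folds constant subexpressions, the sort of transformation that makes processed code faster while making source-level bindings disappear:

```python
import operator

# Hypothetical sketch of constant inlining plus constant folding over a
# tuple representation of expressions.
PRIMS = {'+': operator.add, '*': operator.mul}

def fold(expr, consts):
    if isinstance(expr, str):
        return consts.get(expr, expr)          # inline a declared constant
    if not isinstance(expr, tuple):
        return expr                            # literal
    op, *args = expr
    args = [fold(a, consts) for a in args]
    if op in PRIMS and all(isinstance(a, int) for a in args):
        return PRIMS[op](*args)                # fold at preprocess time
    return (op,) + tuple(args)

# (* seconds-per-minute (+ 30 30)) with seconds-per-minute declared 60:
assert fold(('*', 'seconds-per-minute', ('+', 30, 30)),
            {'seconds-per-minute': 60}) == 3600
```

Note that after this pass the binding for `seconds-per-minute` no longer appears anywhere in the code, which is precisely why a debugger cannot promise to show it.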

The final mode is compiled code. The compiler uses SF as a preprocessor. The compiler can either produce C code that is then fed into a C compiler, or it can produce native x86 code. This code can be made to run very quickly.

Each of these modes has potentially different semantics. Before you panic too much at that, let me note that this is completely under the control of the user. The default action is to preserve the semantics of the interpreted code in all cases. When you choose the default action (or rather, fail to choose a non-default), SF will only expand the macros in the code. No inlining of any kind is performed. The compiler only open-codes those procedures inlined by SF, so no compiler optimization will be performed either. Therefore, simply running SF and compiling will not change the semantics of the code.

Simply running SF and compiling will not change the performance of the code, either. Given that the compiler can infer nothing about the code, all it can do is ‘inline’ the interpreter. All function calls are out-of-line and must go through an interpreter-compatible stack frame. (The compiler will transparently cache the values of some variables, but there is little performance to be gained through this.)

These semantics are very flexible. If you change the definition of CAR and CDR, the compiled code will use the new definition. However, in “real-life” circumstances you almost never need this sort of flexibility. It is there, and it is the default, but for nearly any production purpose you'd like to tell the compiler that the standard definitions of the standard primitives can be assumed to be unchanging. To get a lot of bang for your buck, you simply add a declaration to the source code: (declare (usual-integrations))

When usual-integrations are declared, some fifty or so primitive procedures will be inlined by SF. These include things like CAR, CDR, VECTOR-REF, VECTOR-SET!, +, etc. In addition, users can declare certain procedures of their own to be inlined. These inlining operations can have a dramatic impact on the performance of the code, but they come at a price. Because these procedures are now inlined, redefining them will have no effect on this code. Furthermore, the inlining is visible if you pretty-print the code. Inlining can introduce new lexical variables and eliminate the need for others, so the lexical environment as defined in the source code may not reflect the actual set of bindings.

Most users find this a reasonably low price to pay. Few want to redefine the standard primitives (and you can specify exceptions if you really do want to redefine one or two particular ones), and while the environment may change, it isn't so different that it is unrecognizable. (There is a caveat here. We assume that the debugger will reflect the environment, but that user code is not reaching into that reflected environment in order to perform some function necessary for the program. That is, the code isn't sneaking around using the debugger to do an ‘end-run’ around the scoping rules.)

With certain primitives inlined by SF, the compiler can now do a much better job of generating code. Instead of calling out to the procedure VECTOR-REF every time it needs to look at a vector, it can use an indexed offset load instruction. This can easily be hundreds of times faster.

Using the compiler also comes at a price. The compiler is free to put variables in machine registers, duplicate variables (provided the duplication has no semantic effect), or eliminate variables that are not necessary. The compiled code cannot be interrupted at arbitrary points; it can be interrupted only at designated interrupt checks. Between these checks, the compiler may generate code that does not preserve the consistency of the Scheme memory model; of course, it must restore consistency before checking for interrupts. The end result is that the compiled code may be so different from the original code that it is unrecognizable. It will, however, compute the same value.

Needless to say, the debugger would have a hard time with this. There is no environment data structure. The contents of the registers are known only to the compiler. Variable names have long since gone away, and wouldn't matter anyway because some variables are aliased and others don't even exist. So what is a user to do?

The compiler does something rather sophisticated. It keeps track of which points in the source code correspond to possible interrupt checks. Since it knows that a debugger can only gain control at an interrupt, it can annotate each interrupt poll with meta-information that tells the debugger what part of the source code is running at that point. It is an approximation, but a fairly good and useful one. In addition to this, the compiler emits a variable map that gives the debugger some information about where variables might be located. This allows the debugger to present a fairly good picture of where in the source code you are, and what the variables are bound to, if they exist at all. The debugger includes a simple meta-circular evaluator that can evaluate forms as if they were executed in the context of the debugged stack frame. Not every form can be evaluated, and assignments don't work, but enough code works to present a good illusion.
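
The meta-information might look something like this (a made-up shape, purely to illustrate the idea; MIT Scheme's actual format differs):

```python
# Made-up shape for the compiler's debugging meta-information: each
# interrupt-poll site maps to a source region and a variable map saying
# where each surviving variable lives at that point.
debug_info = {
    0x40: {'source': '(vector-ref v i)',
           'vars': {'v': ('register', 'rdi'), 'i': ('register', 'rsi')}},
    0x7c: {'source': '(loop (+ i 1))',
           'vars': {'i': ('stack', -8)}},      # v was optimized away here
}

def describe(pc):
    """Approximate 'where am I?' answer for a poll site the debugger caught."""
    info = debug_info[pc]
    where = ', '.join(f'{name} in {kind} {loc}'
                      for name, (kind, loc) in info['vars'].items())
    return f"near {info['source']}: {where}"

assert describe(0x7c) == 'near (loop (+ i 1)): i in stack -8'
```

The key property is that the map is best-effort: a variable may be present at one poll site and absent at the next, and the debugger presents whatever survives.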

So what is the point of describing this? I think this is a nice engineering compromise between performance and debugging. However, it requires some assumptions.

First, you cannot get at the environment of an arbitrary compiled procedure. It may or may not exist, you need the debugging meta-information to parse it, and it may have all, some, none, or extra bindings. It cannot be used programmatically; it can only be presented to the user as an aid to debugging.

Second, MIT Scheme allows for first-class environments. When you call (the-environment) you will get an interpreter-compatible environment, with all that entails. This disables all optimizations done by SF or the compiler, because any optimization could change the expected contents. On the other hand, if you don't need the full environment, you can simply close over the variables that you do use, and the compiler will preserve the semantics of your closure while it optimizes the code.

Third, any introspection can be subordinated by the user. That is, I can write code and instruct the compiler to assume that introspection is unnecessary. With this assumption, the compiler can generate much better code. It also means, however, that if you want introspection into my compiled code, you are likely to be out of luck.
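
The second point can be illustrated with a toy analogy (Python, where `dict(locals())` plays the role of `(the-environment)`; this is an analogy, not MIT Scheme's mechanism):

```python
def with_introspection():
    x, y, secret = 1, 2, 42
    # Analogue of (the-environment): hand back every binding by name, so
    # every binding must be preserved and remain addressable.
    return dict(locals())

def without_introspection():
    x, y, secret = 1, 2, 42
    # Close over only what is needed; x and y are free to be optimized away.
    return lambda: secret

env = with_introspection()
assert env['secret'] == 42 and env['x'] == 1   # everything is promised

get = without_introspection()
assert get() == 42                             # only secret is promised
```

The closure makes a narrow promise the compiler can keep cheaply; the reified environment makes a broad promise that forecloses optimization.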

In conclusion, I have these objections to your assertion that you need a standardized first-class environment API in order to write a portable debugger:
  1. If you write a portable meta-circular evaluator to go with your debugger, you can define your own environment structures independent of the underlying implementation.
  2. A standardized first-class environment API would unduly restrict implementations to following a particular evaluation model.
  3. A first-class environment model that allowed the underlying system to add or remove bindings as it deemed necessary is by definition not portable because no binding or lack thereof could be assumed.
  4. Debugging is outside the purview of the language specification. Each implementation can supply its own debugger, or many debuggers, or none at all.
  5. Not specifying a first-class environment model does not mean that a sophisticated debugger cannot be built.

I personally like the ability to use introspection when I am developing code. But I also like the ability to ‘seal’ the code against introspection in order to make the known working code run fast and to guarantee stability of that code.
