The semantics of a programming language are a mathematical model of the language. The semantics relate expressions in the programming language to mathematical objects that (presumably) are well-understood. Denotational semantics attempt to find a mapping between programs and mathematical objects (the recursive partial functions). Operational semantics attempt to describe mathematically each step of the computation as it evolves.
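To make the contrast concrete, here is a minimal sketch (names and encoding are my own, not from any standard) for a toy language of integers and addition: the denotational function maps an expression directly to the number it denotes, while the operational function describes one reduction step at a time.

```python
# Expressions: an int literal, or ("add", e1, e2) as a nested tuple.

# Denotational: map each expression straight to a mathematical object (an int).
def denote(expr):
    if isinstance(expr, int):
        return expr
    tag, left, right = expr
    assert tag == "add"
    return denote(left) + denote(right)

# Operational: describe a single step of the computation as it evolves.
def step(expr):
    """Perform one reduction step; expr must not already be a value."""
    tag, left, right = expr
    if isinstance(left, int) and isinstance(right, int):
        return left + right                # both operands are values: add them
    if not isinstance(left, int):
        return (tag, step(left), right)    # reduce inside the left operand
    return (tag, left, step(right))        # reduce inside the right operand

def run(expr):
    # Iterate the step function until a value is reached.
    while not isinstance(expr, int):
        expr = step(expr)
    return expr

prog = ("add", 1, ("add", 2, 3))
print(denote(prog))  # 6, in one mapping
print(run(prog))     # 6, by repeated small steps
```

Both agree on the answer; they differ in whether the meaning is assigned all at once or accumulated step by step.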

But we can look at this another way: the denotational semantics *translate* the program to a mathematical structure, the operational semantics *mimic* the program via mathematical structures. In other words, we can consider mathematics to be a target language and say this:

- Denotational semantics ‘compiles’ a program into math.
- Operational semantics ‘interprets’ a program in math.

This means we can turn the above statements around:

- A compiler is a denotational semantics from the source language to the target language.
- An interpreter is an operational semantics from the interpreted language to the implementation language.
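The turned-around statements can be sketched with the same toy addition language (the encoding is mine, chosen for illustration): the 'compiler' translates an expression once, ahead of time, into a term of the target language, while the 'interpreter' mimics the program by walking it at run time.

```python
# A toy 'compiler': translate each expression to Python source text ahead of
# time -- a denotational semantics with Python as the target language.
def compile_expr(expr):
    if isinstance(expr, int):
        return str(expr)
    tag, left, right = expr
    assert tag == "add"
    return f"({compile_expr(left)} + {compile_expr(right)})"

# A toy 'interpreter': walk the expression at run time, mimicking each step
# in the implementation language -- an operational semantics.
def interpret(expr):
    if isinstance(expr, int):
        return expr
    tag, left, right = expr
    return interpret(left) + interpret(right)

prog = ("add", 1, ("add", 2, 3))
code = compile_expr(prog)
print(code)              # (1 + (2 + 3))
print(eval(code))        # 6 -- the compiled term denotes the answer
print(interpret(prog))   # 6 -- the interpreter arrives at the same answer
```

The compiler never computes the sum itself; it only produces a target-language term whose meaning is the sum. The interpreter computes the sum directly.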

Some proposed additions to Scheme or Lisp (fexprs or ubiquitous first-class environments, for example) pretty much make it impossible to compile code. This is because the user can come along after the fact and change the meaning of the code in a way that the compiler could not possibly foresee (by, for example, a fexpr rewriting the source code or someone injecting a new binding into an environment). If you cannot compile the code, you cannot develop a non-trivial denotational semantics for it, either. (The semantics would trivially be the limit as n goes to infinity of S(n), where S(n) is n steps of the operational semantics, provided the limit exists. In other words, the only way to determine the meaning of the program is to run it and see what happens.)
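A hypothetical sketch of the problem (the mini-interpreter and the `swap` operator below are inventions for illustration, not any real Lisp): a fexpr receives its operands *unevaluated* together with the live environment, so it can rewrite the code it was handed and inject bindings that no compiler could have predicted.

```python
# Expressions: int literals, variable names (strings), ("add", ...), or
# ("fexpr-call", name, operand...) which passes operands UNevaluated.
def interpret(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, *args = expr
    if op == "add":
        return sum(interpret(a, env) for a in args)
    if op == "fexpr-call":
        # The fexpr gets the raw operand source plus the caller's
        # environment; it may evaluate, ignore, rewrite, or rebind anything.
        name, *operands = args
        return env[name](operands, env)
    raise ValueError(f"unknown operator: {op}")

def swap(operands, env):
    # A user-defined fexpr: it discards the source it was given, injects a
    # fresh binding, and evaluates rewritten code instead. A compiler that
    # saw only the original call site could not foresee any of this.
    env["x"] = 21                    # inject a new binding
    rewritten = ("add", "x", "x")    # rewrite the source
    return interpret(rewritten, env)

env = {"swap": swap}
print(interpret(("fexpr-call", "swap", ("add", 1, 2)), env))  # 42, not 3
```

The call *looks* like it should mean 3, but its actual meaning is decided at run time by `swap`; the only way to find out is to run it.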