Student T exclaims “I have a great idea!
“Sometimes I want to make a computer language that is very similar to an existing language with just a couple of exceptions. Wouldn't it be cool if we could just tweak the semantics of the existing language?”
Student A asks ‘What do you mean? What part of the semantics?’
“Oh, I don't know... Anything!”
‘Anything? Anything at all?’
“Sure. Why limit ourselves?”
‘Like, say, adding or removing special forms?’
“Of course. Anything!”
‘How about changing from call-by-value to call-by-name?’
“That would be harder, but why not? Imagine you're running a program and you realize you need lazy evaluation for some part of it. Well, you just turn on lazy evaluation for that segment of code and turn it off again when it returns! Voila!”
‘What? While the code is running?’
Student A thinks for a moment and says, ‘So in essence you want the language semantics to be a function of time.’
Student T replies “No no no. I just want to be able to change them on the fly if I want.”
Student A says ‘Yes, that's what I mean. At different times you could have different semantics.’
Student T says “Yes, but only if you change them.”
‘And how do I change them?’
“You just call a special primitive or something from your program.”
‘So if the language semantics can change over time, doesn't that imply that the meaning of a program can change over time as well?’
Is Student A right?
8 comments:
Features like the ones Student T is discussing already exist in a few languages, although maybe not so much at run-time.
With many Lisps, Perl 6, (I believe) Forth, and a few others it is possible to change language grammar and semantics at write-time, so that one section of the program can use special forms not available to another, or be call-by-reference instead of call-by-value, etc.
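As a rough sketch of what a write-time tweak can look like in R7RS Scheme: a `syntax-rules` macro introduces a brand-new special form (the name `unless*` here is made up for illustration), and only code compiled after the definition can use it.

```scheme
;; A write-time semantics tweak: `unless*` is a special form that
;; did not exist before this definition. Its body is only evaluated
;; when the test is false -- something a plain (call-by-value)
;; procedure could not express, since its arguments would already
;; have been evaluated.
(define-syntax unless*
  (syntax-rules ()
    ((_ test body ...)
     (if test #f (begin body ...)))))

(display (unless* (= 1 2) 'ran))   ; prints ran
(newline)
(display (unless* (= 1 1) 'ran))   ; prints #f -- body never runs
```

Code in a scope without this definition simply has no `unless*` form, which is the sense in which one section of a program can have special forms not available to another.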
I do not know of any languages which would allow run-time tweaking of language semantics in the same way, such that the same section of code can be run with two different semantics during the same run.
While write-time semantics changes are reasonable, I see run-time semantics changes as being potentially difficult for programmers to analyze. It seems that in the past similar language "features" like dynamic scope fell out of favor for much the same reason. In order to understand what it will do, you oftentimes need to understand the entire dynamic state of the running program.
This takes us back to Student B, who did not see any way to understand the program without execution, and why that was such a short-sighted approach. Runtime semantic changes may very well require the adoption of Student B's approach.
On the other hand, it might just as well completely invalidate Student B's approach: if the meaning of the program can change over time, evaluating it now only gives you insight into what it meant a few seconds ago (or a few hours, depending on how long it takes). There is absolutely no reason to think it will still have the same meaning tomorrow, or in five minutes.
Given such a question, I would pedantically say that the meaning of the program does not change; rather, the program changes or disappears. It is only the meaning of the notations that changes, such that the same sequence of symbols might, at different times, represent different programs.
Student T's opinion in another context: "Hey, why don't we make all our variables global, so we don't have to bother passing them around all over the place?"
Meaning (contra Hilary Putnam) is in the head; the meaning of a program, like any utterance, is the effect that perceiving the program has on us. If I read the same words at ten and again at fifty, the meaning may well have changed, but the words are the same. Specifically, the semantics of a program are what it means to the reader, and the pragmatics of a program are what it means to the mechanical interpreter. These evolve along different time scales.
That's an interesting philosophical interpretation of the question and problem, but I don't think it addresses the purported wackiness of the idea.
It is also a different definition of programming language "semantics" than has been used so far in this series (at least, to my understanding).
If the operators are in the language, then whether or not that final statement is true, it does not apply to this case.
I'm not sure why so many are opposed to this concept. I mean, I'm opposed to a _bad_ implementation of this concept.
Just as I grossly dislike dynamic variables. "Why do you dislike local variables? Do you think we should all use globals instead?!" No. It's just that there are better ways to do local variables than that. Lexical scoping is generally better, because it keeps side effects very local.
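To make the contrast concrete, here is a small sketch in R7RS Scheme, where `make-parameter`/`parameterize` give dynamically scoped bindings and `let` gives lexically scoped ones (the names `report`/`report2` are just for illustration):

```scheme
;; Dynamic scope: the binding in force depends on who is CALLING.
(define depth (make-parameter 0))
(define (report) (depth))           ; reads whatever binding is current

(display (report))                  ; 0 -- the global binding
(newline)
(display (parameterize ((depth 5))
           (report)))               ; 5 -- the caller's binding leaks in
(newline)

;; Lexical scope: the binding is fixed by where the code is WRITTEN.
(define report2
  (let ((depth 0))
    (lambda () depth)))
(display (report2))                 ; 0 -- no matter who the caller is
```

To understand `report` you must know the dynamic state of every possible caller; to understand `report2` you only read its definition.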
We already have this in many places of our programming languages. Take:
(define (sort2 a b less)
(if (less b a) (list b a) (list a b)))
This function has very different meanings! Given 3-D vectors, it could sort by magnitude, sort lexicographically, etc.
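For instance, the same two-element sorter can be handed entirely different orderings by its caller (the `magnitude` helper below is just illustrative):

```scheme
(define (sort2 a b less)
  (if (less b a) (list b a) (list a b)))

;; The same code, different "meanings", chosen by the caller:
(sort2 3 1 <)                          ; => (1 3)
(sort2 3 1 >)                          ; => (3 1)

;; With 3-D vectors represented as lists, sorting by magnitude:
(define (magnitude v) (sqrt (apply + (map * v v))))
(sort2 '(3 0 0) '(1 1 1)
       (lambda (a b) (< (magnitude a) (magnitude b))))
;; => ((1 1 1) (3 0 0))
```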
This too can do all of the above:
(define less (lambda (a b) #f)) ; to be later rebound
(define (sort2 a b)
(if (less b a) (list b a) (list a b)))
BUT now you have these very widely scoped effects, while the former kept all the effects of the less function localized and well scoped.
So what's really missing is proper scoping of the effects. It might be appropriate to write a function to be parameterized on call-by-value / call-by-need, choosing the latter only when operating on side-effect-free data. Haskell occasionally models this with monads -- there are monads that enforce a sequential order, others that enforce no order, and even one that evaluates your program fragment _backwards_ (http://bit.ly/avj3Ea, section 2.8).
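One sketch of such a lexically scoped, explicit switch of evaluation strategy in Scheme uses the standard `delay`/`force`: the caller opts particular arguments into call-by-need, and the choice is visible right there in the source (the names `if*` and `safe-div` are made up for illustration):

```scheme
;; if* takes promised branches, so only the chosen branch is ever
;; evaluated -- an explicit, lexically visible use of call-by-need
;; for exactly these two arguments, nothing else.
(define (if* test then-promise else-promise)
  (if test (force then-promise) (force else-promise)))

(define (safe-div a b)
  (if* (zero? b)
       (delay 'undefined)
       (delay (/ a b))))        ; (/ a b) is NOT evaluated here

(display (safe-div 6 3))        ; 2 -- the division promise is forced
(newline)
(display (safe-div 6 0))        ; undefined -- (/ 6 0) never runs
```

The evaluation strategy changes, but only where the program says so, and a reader can see the change without knowing any dynamic state.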
So yeah, I want Student T's extensions. But I want them to be lexically scoped, and probably explicit. That way, you can still reason about your programs and talk about their meaning, optimize them well, and have that meaning very _rich_.
Aaron says:
``I'm not sure why so many are opposed to this concept.''
I think it should be obvious: if the semantics can change after you
write or analyze a program, then the meaning of the program can
change. If the meaning can change, then there is nothing you can
predict about the program.
>> We already have this in many places of our programming
>> languages. Take:
(define (sort2 a b less)
(if (less b a)
(list b a)
(list a b)))
>> This function has very different meanings!
Not at all. True, it is a higher-order function, and what it actually
does will depend on the arguments passed in and the binding of the
free variable LIST. But here is what I can know about this:
1. This is a DEFINITION form.
2. It defines `SORT2' as a procedure of 3 arguments.
3. Sort2 invokes its last argument on its first two;
depending on the result, it invokes LIST with the first two
arguments in the same argument order, or with the argument order
swapped.
4. If the third argument terminates when invoked on the first and
second, then this function will exit by tail calling LIST.
5. LIST is the only free variable.
Now I can assert these things only if the semantics I assume now don't
change. If the semantics can be freely changed at any time, then I
cannot even tell whether this will be *well-formed* at an appropriate
point in the future. (Let alone a program, let alone a definition.)
I can't tell you what the free variables are. I can't tell you if
there *are* variables.
>> i want Student T's extensions. But I want them to be lexically
>> scoped, and probably explict. That way, you can still reason about
>> your programs and talk about their meaning, optimize them well, and
>> have that meaning very _rich_.
I think we can have the power to tailor the language semantics to what
we want *without* destroying the ability to reason altogether.
I don't think we have to dynamically modify the language semantics.