I need to post more frequently, and if I wait for my thoughts to become coherent it may be quite some time.
I wanted to say something about abstraction, so I was thinking about some of the fundamental abstractions computer scientists use. What's really happening (for some sense of ‘real’) in the computer is a set of very complex electromagnetic field interactions. But no computer hacker I know sits down at the terminal and starts programming from Maxwell's equations. There are better ways to think about what a computer does.
Maybe the most basic abstraction is what I'll call primitive ontological abstraction. We'll take a fairly stable pattern of currents and we'll call that a ‘number’ or a ‘byte’ or some such thing. A circuit that can maintain the pattern we'll call a ‘register’, so we can talk about ‘register EAX’ holding ‘the value 27’ or some such thing. Here we are abstracting away the patterns of current flowing through the computer and replacing them with simple objects. I call it primitive ontological abstraction because we're not dealing with capital-O ‘Object-Oriented’ ideas, but rather with the notion that there are ‘things’ in the computer like bits, bytes, characters, text strings, code chunks, etc. that are the basic ‘stuff’ we're going to compute with.
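To make that concrete, here's a little C sketch (my own illustration, and it assumes a little-endian machine) showing one stable bit pattern being treated as several different primitive ‘things’:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void) {
        /* One stable 32-bit pattern sitting in memory. */
        uint32_t bits = 0x00216948;

        /* ...called a 'number'. */
        printf("as a number: %u\n", (unsigned)bits);

        /* ...called a 'floating-point value' (same bits, copied verbatim). */
        float f;
        memcpy(&f, &bits, sizeof f);
        printf("as a float:  %g\n", f);

        /* ...called 'text' (on a little-endian machine the bytes in
           memory are 0x48 0x69 0x21 0x00, i.e. "Hi!"). */
        char text[5] = {0};
        memcpy(text, &bits, 4);
        printf("as text:     %s\n", text);
        return 0;
    }

Nothing about the bits themselves says which of these things they ‘are’; the ontology is something we impose.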
Without primitive ontological abstraction, our interaction with the computer is going to be very limited. We could fiddle around with the electrical signals going into the processor and maybe cause the computer to halt or generate a non-maskable interrupt, but it's hard to program the computer when you have no notion of ‘instructions’. But primitive ontological abstraction only lifts us up to the machine-code level, where we are putting different byte values in sequential memory and then telling the computer to load the ‘program counter’ with a particular start address. At this point we need the next abstraction: naming.
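To give a flavor of programming at that level, here's a minimal C sketch (mine, and it assumes an x86-64 Linux box; the byte values are x86-64 machine code): we put byte values into a chunk of memory, then in effect load the program counter with its start address by calling through it.

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for:  mov eax, 27 ; ret */
        unsigned char code[] = { 0xB8, 0x1B, 0x00, 0x00, 0x00, 0xC3 };

        /* Ask for a page of memory we're allowed to execute. */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;
        memcpy(buf, code, sizeof code);

        /* 'Load the program counter' with the start address: call it. */
        int (*fn)(void) = (int (*)(void))buf;
        printf("%d\n", fn());   /* prints 27 -- register EAX again */
        return 0;
    }

Notice there are no names anywhere here, just byte values and an address, which is exactly why the next abstraction is needed.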
Ok, my bus ride is over. We'll leave naming to the next post.