Gerry started with a combinator-style pattern matcher. The basic structure was that each sort of matching construct returns a procedure that performs the actual match. These procedures all have the same signature and return-value conventions, so it is possible to mix and match them. Gerry used a single ‘success’ continuation and simply returned #f if the match failed; the higher-order matchers propagate the #f from the lower-order ones. Chris said he preferred to use both a ‘win’ and a ‘lose’ continuation instead.
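To make that concrete, here is a minimal sketch of the kind of thing I mean. This is my own toy version, not Gerry's code; the names (match-constant, match-variable, match-list) are made up for illustration:

;; Every matcher has the same signature: it takes the datum to match,
;; a dictionary of bindings made so far, and a single 'succeed'
;; continuation.  It calls succeed on success and returns #f on
;; failure; the higher-order matchers just pass that #f along.

(define (match-constant c)
  (lambda (datum dict succeed)
    (and (eqv? datum c)
         (succeed dict))))

(define (match-variable name)
  (lambda (datum dict succeed)
    (let ((binding (assq name dict)))
      (cond ((not binding) (succeed (cons (cons name datum) dict)))
            ((equal? (cdr binding) datum) (succeed dict))
            (else #f)))))

(define (match-list . matchers)
  (lambda (datum dict succeed)
    (and (list? datum)
         (let loop ((matchers matchers) (items datum) (dict dict))
           (cond ((null? matchers) (and (null? items) (succeed dict)))
                 ((null? items) #f)
                 (else ((car matchers) (car items) dict
                        (lambda (new-dict)
                          (loop (cdr matchers) (cdr items) new-dict)))))))))

;; A tiny example:
((match-list (match-constant 'a) (match-variable 'x))
 '(a 42) '() (lambda (dict) dict))
; => ((x . 42))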
I almost forgot. Gerry made one of those comments that seem obvious at first, but actually have a lot of depth behind them. “Here we have a simple pattern matcher. Pattern matching is a generalization of equality.”
It's funny, but I've been programming for all these years and that connection never occurred to me. Now I know that I'm probably the only person on the planet who didn't notice this, and everyone is wondering how on earth I can get a programming job if I'm this obtuse, but I think this is a very interesting observation. It answers the question of why there are so many equality predicates in Lisp: consider one of the operands to be a pattern and the other to be an object to match against. The ‘pattern’ object doesn't have any way to indicate the intent of the match. Suppose I want to match any three-element list that contains the symbols (a b c), but you want to match the exact instance of a particular list that contains the symbols (a b c). The ‘pattern’ in these cases is identical, so we have to indicate the strictness of matching elsewhere, in the predicate. So I call ‘equal’ and you call ‘eq’.
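To see the two intents side by side in Scheme (eq? and equal? standing in for EQ and EQUAL):

(define my-pattern '(a b c))
(define your-pattern my-pattern)        ; the very same instance
(define a-copy (list 'a 'b 'c))         ; a different list with the same symbols

(equal? my-pattern a-copy)              ; => #t   any (a b c) will do
(eq?    my-pattern a-copy)              ; => #f   not that particular list
(eq?    my-pattern your-pattern)        ; => #t   exactly that instance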
That's a pretty nice answer, but now I have more questions. Why are EQ, EQL, EQUAL, EQUALP, etc. built-in operations, but a generalized pattern matcher is not? I use EQ a lot in my code, and my mental model is that I'm testing for object identity. How about if I relax my mental model and think about my uses of EQ as special instances of pattern matching? Would I get any advantages? A generalized pattern matcher carries around a dictionary of pattern variables so you can match substructures against previously matched substructure. How come the equality predicates like EQUAL don't have this? What if they did?
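For example, with the little matcher sketched earlier, using one pattern variable twice insists that the second substructure be the same as whatever was matched the first time, something a single call to the built-in equality predicates can't express:

(define same-twice
  ;; Matches any two-element list whose elements match the same
  ;; pattern variable, i.e. whose elements are equal to each other.
  (match-list (match-variable 'x) (match-variable 'x)))

(same-twice '(foo foo) '() (lambda (dict) dict))   ; => ((x . foo))
(same-twice '(foo bar) '() (lambda (dict) dict))   ; => #f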
Some computer languages eschew extensional equality: in order to test equality, the objects in question have to have an ‘is equal’ method; otherwise you are out of luck. (Extensional equality means that I can take two arbitrary objects and determine their equality without calling any method on either object; equality is an external property of the object. Intensional equality means that the object or the object's class defines what it means for objects of that type to be considered equal; equality is an internal property of the object.) If the language discourages or prohibits extensional equality, is that an artifact of discouraging or prohibiting extensional pattern matching in general? If so, isn't that a horrible error in design? If not, then why is EQ singled out?

I hear Otto asking, “You eat a lot of acid, Miller, back in the hippie days?”
On an unrelated topic, Gerry and I argued a bit about floating point numbers. We violently agree that they are a good tool for certain use cases and a poor tool for others. He gave an example of what he thought was a problem, but I objected that his particular example was handled correctly by the IEEE spec. I've got what I think is a better example:
The odds against a meteor hitting your house are 182,138,880,000,000 to 1. (I read it online, so it must be true.) What is the probability of a meteor striking your house?

Answer 1: Given the odds against something, compute the odds for it by taking the reciprocal. Convert this to a probability with the formula p = o/(o + 1):

(odds->probability (/ 1.0 182138880000000))
5.490315961095151e-15

Answer 2: Given the odds against something, compute the probability against it with p = o/(o + 1). The probability for it is simply 1 minus the probability against it:

(- 1.0 (odds->probability 182138880000000))
5.440092820663267e-15

Those answers differ in the second decimal place. Shouldn't they be the same? Which one is right? Or are they both wrong?
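(Those expressions call an odds->probability procedure I haven't shown. A minimal sketch, assuming it does nothing more than compute p = o/(o + 1), would be:)

(define (odds->probability o)
  ;; Convert odds of o to 1 into a probability via p = o/(o + 1).
  (/ o (+ o 1)))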
What happened is this: floating point numbers are not evenly distributed, and there are a lot more of them down near zero than there are near 1.0. This means our answers will be a lot more accurate if we do our computations near zero rather than near one. (Since there are more floats near zero, if we have to round, we don't have to go very far to find a nearby float. But if we are up near 1.0, we have to go much, much further to find a nearby float.)
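Here is a quick way to see how coarse the spacing is up near 1.0 (assuming ordinary IEEE double-precision floats): subtract a number the size of our answer from 1.0 and then try to get it back.

;; Near 1.0, adjacent doubles are about 1.1e-16 apart, so anything of
;; the form "1.0 minus a tiny probability" gets rounded to a multiple
;; of that gap, a huge error relative to the tiny probability itself.
(- 1.0 (- 1.0 5.5e-15))    ; => roughly 5.55e-15, not 5.5e-15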
The actual answer is 1/182138880000001, and the closest floating point number to that is 5.4903159610951515e-15. The first method produced a much more accurate answer, but neither answer is exactly right.
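If you have exact rational arithmetic handy (MIT Scheme does), you can check this by doing the whole computation exactly and rounding only once, at the end:

;; The exact probability, rounded once to the nearest double:
(exact->inexact (/ 1 182138880000001))   ; => 5.4903159610951515e-15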
2 comments:
In Haskell, it always rubbed me wrong that I had to declare Eq-ness for data types because (==) has type Eq a => a -> a -> Bool, yet I could get around it when using pattern matching.