Monads are the darling of the Haskell community. They enable Haskell programmers to perform extraordinary tasks like sequencing and IO. They've been given proper names like `Maybe' and `List'. They imbue programs with special powers!

But what *are* they?

Monads are a fancy name for a simple solution to a simple problem. The problem is this: Haskell is a lazy language, so nothing is evaluated unless it is required. In a lazy language you can't depend on `begin' or `progn' to force your code to run in a particular order. This is particularly a problem when you want to print something. Things get printed when it becomes necessary to evaluate them, not in the order the print statements appear. In fact, since the return value of `print' is rarely used, the compiler can simply discard those statements altogether.

So let's suppose we have some F, G, and H, and we want to run them in that exact order. But our language is lazy and won't call anything unless absolutely necessary. There's only one way to *force* an ordering among functions: we make the value of one depend on the value of the other. We chain them like this: (h (g (f))). Now H cannot run until G produces an output, and G cannot run until F does, so they *have* to run in the order we want. (We're assuming that G and H actually *compute* something with their argument, so that the function that produces the value has to be run.)
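Python is strict rather than lazy, but the dependency trick can be sketched by simulating laziness with zero-argument thunks (the names and the thunk encoding here are illustrative, not Haskell's actual mechanism):

```python
calls = []  # records the order in which the real work happens

def f():
    def thunk():
        calls.append("f")
        return 1
    return thunk

def g(arg):
    def thunk():
        x = arg()             # demand the argument first...
        calls.append("g")     # ...so F's work precedes G's
        return x + 1
    return thunk

def h(arg):
    def thunk():
        x = arg()
        calls.append("h")
        return x * 10
    return thunk

pipeline = h(g(f()))   # builds the chain; nothing has run yet
assert calls == []
result = pipeline()    # demanding the value forces f, then g, then h
assert calls == ["f", "g", "h"]
assert result == 20
```

Note that G and H force their argument before doing their own work, which is exactly the "G and H actually compute something with their argument" assumption above.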

This is a nifty trick except for a few small problems. First, it looks kind of odd to mention H first and F last when F runs first and H runs last, and the nesting will get a bit out of control if you have several of these things. What we really want is the higher-order compose function: (compose f g h) <=> (h (g (f))). Second, we have to rewrite F, G, and H so that they can chain together nicely. This might be a bit of a problem if G and H take more arguments. And since we're using the output of F and G to force the sequencing, we'll have a problem if F and G already have some useful output (no multiple values!).
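The left-to-right compose described above might be sketched in Python like this (an illustrative helper, not a library function):

```python
from functools import reduce

def compose(*fns):
    """Left-to-right composition: compose(f, g, h)() == h(g(f()))."""
    def composed(*args):
        first, *rest = fns
        return reduce(lambda value, fn: fn(value), rest, first(*args))
    return composed

f = lambda: 1
g = lambda x: x + 1
h = lambda x: x * 10

assert compose(f, g, h)() == h(g(f()))   # both are 20
```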

The trick to overcoming these problems is easy. We wrap F, G, and H in a procedure object that manages passing the chained return value around. We arrange for each wrapper procedure to take a single argument (the return value from further down the chain) and to return a single value (the return value for further up the chain). Now we can simply string the wrapped procedures together in any way we want, and the dependency chain we construct will force the evaluation to happen in the exact order we want.
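A minimal Python sketch of the wrapping, with illustrative names; note how the wrappers smooth over F taking no arguments and G taking an extra one:

```python
calls = []

def f():                      # takes no arguments
    calls.append("f")
    return 1

def g(x, y):                  # takes an extra argument
    calls.append("g")
    return x + y

def h(z):
    calls.append("h")
    return z * 10

# Each wrapper takes one argument (the chained return value from
# further down) and returns one value (for further up the chain).
def wrap_f(chained):          # ignores the incoming chained value
    return f()

def wrap_g(chained):
    return g(chained, 2)      # the extra argument is supplied here

def wrap_h(chained):
    return h(chained)

# String the wrappers together; the dependency on `chained`
# forces f, g, h to run in exactly this order.
chained = None
for step in (wrap_f, wrap_g, wrap_h):
    chained = step(chained)

assert calls == ["f", "g", "h"]
assert chained == 30
```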

Oh yeah, this is a monad.

A monad is simply a collection of wrapped procedures that can be plugged into each other because each takes an argument and produces a value of the same type.

So where's the magic? The magic comes from two places. First, the compose function that plugs the wrappers together doesn't have to simply pass the chained return value through; it can do things like inspect the chained return value and bypass the rest of the functions if it wants. Second, the wrapper functions don't have to just invoke the function they wrap; they can, for example, keep a list that accumulates the return values from each function as it is called and stuff that in the chained return value. Basically, the wrappers can contain some fairly complex `plumbing' that the base functions F, G, and H don't have to know about.
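Both kinds of plumbing can be sketched in a few lines of Python. The first loosely resembles Haskell's Maybe monad, the second its Writer monad; all the names here are illustrative:

```python
import math

# Plumbing that inspects the chained value and bypasses the
# remaining functions once it sees None (loosely like Maybe).
def chain_maybe(value, *steps):
    for step in steps:
        if value is None:
            return None          # bypass the rest of the chain
        value = step(value)
    return value

safe_sqrt = lambda x: math.sqrt(x) if x >= 0 else None
ok = chain_maybe(9, safe_sqrt, lambda x: x + 1)      # 4.0
bad = chain_maybe(-9, safe_sqrt, lambda x: x + 1)    # None

# Wrappers that accumulate a log of intermediate results
# inside the chained value (loosely like Writer).
def logged(fn):
    def wrapper(pair):
        value, log = pair
        result = fn(value)
        return result, log + [f"{fn.__name__}: {result}"]
    return wrapper

def inc(x): return x + 1
def dbl(x): return x * 2

state = (3, [])
for step in (logged(inc), logged(dbl)):
    state = step(state)

assert ok == 4.0 and bad is None
assert state == (8, ["inc: 4", "dbl: 8"])
```

In both cases the base functions (`safe_sqrt`, `inc`, `dbl`) know nothing about the short-circuiting or the logging; that all lives in the plumbing.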

There's one more tiny bit of magic. In Haskell, if you don't pass enough arguments to a function, the function will be automagically curried. This is a little bizarre. Imagine this in Common Lisp:

    (defun foo (a b c) (+ a (* b c)))

    (foo 2)               => #<closure>
    (funcall (foo 2) 3 4) => 14
    (funcall (foo 2) 3)   => #<closure>

The magic comes when we arrange for the chained return value to always be the rightmost argument. We can then just drop that argument, forget that there is a chained return value, and let the system do all the dirty work of automatically currying our functions. The final result is that the chained return value disappears from the code that was just using it to force sequencing. We've factored the program into two parts: the chunks that are blissfully unaware that they are going to be chained together in funny ways, and the plumbing that really doesn't need to know what sort of stuff is flowing through the pipes.
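Python doesn't auto-curry, but `functools.partial` can stand in for Haskell's automatic currying to sketch the effect (names are illustrative):

```python
from functools import partial

# Each base function takes its ordinary arguments first and the
# chained return value as its rightmost argument.
def add(n, chained):
    return chained + n

def scale(n, chained):
    return chained * n

def run(initial, *steps):
    """The plumbing: threads the chained value through each step."""
    value = initial
    for step in steps:
        value = step(value)
    return value

# The pipeline mentions only the interesting arguments; the chained
# value never appears, just as if it had been curried away.
result = run(1, partial(add, 10), partial(scale, 3))
assert result == 33
```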

That, for the most part, is all there is to monads. If you like math, you can use category theory to prove that this works. Obviously there are constraints on what the wrapper functions can do if they need to be plugged together, and if you are truly anal, you can once again invoke category theory to prove that your personal set of wrapper functions satisfies the requirements.

I have to admit that I am a bit underwhelmed by this. Sure it's a cute trick, but it isn't all *that* amazing. What's all the hubbub about?

## 2 comments:

I think that the focus on IO and sequencing is a bit of a red herring. Monads can be seen as a generalization of continuation-passing style. In combination with the fact that arguments are only evaluated when asked to do so, you can also regard this as a subset of 3-Lisp style reflection: A monadic value is an encapsulated version of an unevaluated expression+environment, and then you also get a continuation. So this corresponds roughly to 3-Lisp's reflective lambdas, except that monads can still be compiled, while 3-Lisp cannot be compiled (cf. "The theory of fexprs is trivial").

In other words, you get a framework for doing a form of procedural reflection, which gives you a way to extend the language from inside.
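The continuation-passing view can be made concrete with a minimal Python sketch (illustrative `unit`/`bind` names; this is the continuation monad, with a monadic value encoded as a function awaiting its continuation):

```python
# A monadic value is a function that takes a continuation k.
def unit(x):
    return lambda k: k(x)

def bind(m, f):
    # Run m, feed its result x to f, and pass our continuation along.
    return lambda k: m(lambda x: f(x)(k))

inc = lambda x: unit(x + 1)
dbl = lambda x: unit(x * 2)

program = bind(bind(unit(3), inc), dbl)
result = program(lambda x: x)   # supply the final continuation
assert result == 8
```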

You're right that I've taken a narrow view with the IO and sequencing, but I'm going to go someplace different with it, so I'm glossing over some of the more abstract stuff like you mentioned.
