Sunday, April 20, 2025

Well This Was Interesting

I was playing with notebooklm.google.com and for fun I entered a few of my recent blog posts. I asked it to generate a conversation. The AI generated what sounds a lot like a podcast discussing my blog. One host sounds a bit like Mike Rowe. They get a lot of the detail right, but there are a couple of glaring mistakes. (Read - EVIL - Print - Loop) I suppose I am contributing to the soup of AI slop, but I was surprised at how natural this sounds. Podcast

I also tried giving it some text files about Tic Tac Toe representations. It generated a remarkably good “Deep Dive” that really sounds as if it has an understanding of the text: Deep Dive

There are some “tells” that this is AI and not human, but I wouldn't be surprised if this could fool the average layman. In a few years, it’s going to be really hard to detect, though.

Friday, April 18, 2025

Stupid reader tricks

Here are some stupid reader tricks for Lisp. I've tested them on SBCL, and they are of questionable portability and utility.

Run Shell Commands from the Lisp Prompt

(set-macro-character #\!
    (lambda (stream char)
      (declare (ignore char))
      (uiop:run-program (read-line stream) :output *standard-output*))
    t)

> !ls -als
total 4068
   4 drwxr-x--- 21 jrm  jrm     4096 Apr 18 06:42 .
   4 drwxr-xr-x  4 root root    4096 Mar 22 17:27 ..
1900 -rwx--x--x  1 jrm  jrm  1940604 Apr 17 19:10 .bash_history
   4 -rw-r--r--  1 jrm  jrm      220 Mar 19 12:16 .bash_logout
   8 -rw-r--r--  1 jrm  jrm     4961 Apr  1 11:13 .bashrc
   4 drwx------  6 jrm  jrm     4096 Mar 21 07:52 .cache
   0 lrwxrwxrwx  1 jrm  jrm       51 Mar 24 05:20 .config -> /mnt/c/Users/JosephMarshall/AppData/Roaming/.config
   0 lrwxrwxrwx  1 jrm  jrm       50 Mar 26 03:12 .emacs -> /mnt/c/Users/JosephMarshall/AppData/Roaming/.emacs
   4 drwx------  6 jrm  jrm     4096 Apr 17 12:13 .emacs.d
      ... etc ...

>

Make λ an alias for lambda

(set-macro-character #\λ (lambda (stream char) (declare (ignore stream char)) 'cl:lambda) t)

> ((λ (x) (+ x 4)) 3)
7

If you do this you might want to add a print function for the lambda symbol:

(defmethod print-object ((obj (eql 'cl:lambda)) stream) ;; doubt this is portable
  (format stream "λ"))

> '(λ (x) (+ x 4))
(λ (x) (+ x 4))

> (symbol-name (car *))
"LAMBDA"

Thursday, April 17, 2025

DES Machine

The MIT CADR Lisp Machine had a number of static RAMs that were used in the processor for various things such as state machines and register files. The core parts of the LMI Lambda Lisp Machine were similar to the CADR (similar enough that they could run the same microcode) but technology had advanced such that the static RAM chips were typically double the size of the CADR's. The LMI Lambda thus had twice as many registers as the CADR, but because there weren't any extra bits in the instruction set, you couldn't address half of them. The extra address bit from the RAM was wired to a status register. So the LMI Lambda had, in effect, two banks of registers which you could swap between by toggling the bit in the status register. This was not normally used — the microcode would set the bit to zero and leave it there.

A friend of mine was interested in security and he had written a high performance version of the encryption algorithm used by Unix to encrypt passwords. He was experimenting with dictionary attacks against passwords and one bottleneck was the performance of the password encryption algorithm.

It occurred to me that I could store the DES S-boxes in the alternate register bank of the LMI Lambda. With some special microcode, I could turn an LMI Lambda into a DES machine that could churn through a dictionary attack at a high speed. I added a special Lisp primitive that would swap the register banks and then run several hundred rounds of the DES algorithm before swapping back and returning to Lisp. Then I wrote a Lisp program that would feed a dictionary into the DES primitives when the processor was idle.

I was able to discover a few passwords this way, but I was more interested in the proof of concept than the actual results. A microcoded DES machine would work, but you'd get better performance out of dedicated hardware.

Sunday, April 13, 2025

Mea Culpa

OH NO! There's something wrong on the Internet!

It is often me. Mea culpa. Mea maxima culpa. I'm only human.

But this is a blog, not a reference manual. I use it to organize my thoughts, get things off my chest, provoke discussion, maybe entertain a few fellow hackers, and show others how much fun I'm having playing with computers. Accuracy and precision aren't the primary objectives although I try not to be egregiously incorrect.

Mostly I see people misinterpreting something I say casually. I gloss over some details that some find important but I consider boring. I make the first-order error of assuming everyone has the same background as I do and will interpret what I say in the way I had in mind. Clearly this isn't the case, yet I persist in thinking this way. Oh well.

I'll try to correct errors that are brought to my attention and will elaborate things in more detail if asked, but if my blog irritates you with its broad generalizations, inattention to detail, omission of specifics, and things that are just plain wrong, why are you reading it?

Emacs and Lisp

The first editor I learned how to use was TECO on a line printer. You’d print the line of code you were on, then you’d issue commands to move the cursor around. You tried to avoid printing the line because that would be wasting paper. So you’d move the cursor around blind until you thought you got it to where you wanted, and then you’d start inserting characters. When you thought you had it, you’d print out the edited line.

Another undergrad saw me struggling with this and asked why I wasn’t using vi. I had never heard of vi and I was amazed that you could view the code on the screen and move your cursor around visually before going into insert mode and adding text. With vi I was orders of magnitude more productive than with TECO.

When I came to the ’tute in the early 80s, I found that computer accounts were only routinely issued to students taking computer science courses. I hadn’t decided on a computer science major, so I didn’t have an account. However the Student Information Processing Board would give out a Multics account to interested students, so I signed up for that. The Multics terminal was in the basement and it had a dial up connection with an acoustically coupled modem: two rubber cups that the handset would cradle in.

Everyone at the AI Lab used a home grown editor called emacs, and there was a Multics port. I abandoned vi and learned emacs. The paradigm was different, but I didn’t see one as superior to the other. When I declared computer science as my major, I got an account at the AI Lab on the Decsystem 20 machine. The editor was TECO with the editor macros (emacs) package loaded.

When I took S&ICP, we had a lab with HP9836 “Chipmunks” running MIT Scheme. The Chipmunks were PCs, not time-shared, and each acted as its own Scheme machine, complete with an emacs clone for editing the code.

At the AI Lab, there were a couple of machines running ITS, the Incompatible Timesharing System. You could run Maclisp on them, but Maclisp was such a pig, using over a megabyte of RAM, that a couple of instances of Maclisp would bring the machine to its knees. The Lab had developed the Lisp Machine, a single user computer that would run ZetaLisp (the successor to Maclisp). In addition to Lisp, the Lisp machine ran the ZWEI editor. Zwei Was Eine Initially, and Eine Is Not Emacs, but rather an emacs clone written in Zetalisp.

ZWEI was integrated with the Lisp environment. You could insert Lisp objects in the ZWEI editor and their printed representation would appear in the edited text. The printed representation was mouse sensitive and had its own context menu.

If you weren’t using a Lisp Machine, your options were Unipress Emacs and Gosling Emacs, which you could run on this new OS called “Unix”.

Around this time (in the late 80s) a hacker named Stallman decided to write a new OS. His first task was to write an editor and he decided on a new version of Emacs written in its own Lisp dialect.

If you wanted to use Lisp, you interacted with it via Emacs.

These days, I use GNU Emacs and I load up the sly package. Sly is a slime fork and it turns GNU Emacs into an IDE for a Lisp running in another process. The interface gives you much of what you used to get when using ZWEI on the Lisp machine. You can evaluate subexpressions, macroexpand and compile programs, gather output, and run the Lisp debugger from within GNU emacs.

Emacs and Lisp co-evolved at MIT and Emacs has always been used as a front-end to Lisp. I’ve never gone back to vi, and when I’ve had to use it I’ve found it frustrating (but this is because I have forgotten everything about how to use it).

I understand that some people find Emacs impossible to use and have a strong preference for vi. Is the experience of hacking Lisp in vi any good?

Saturday, April 12, 2025

Writing Your OS in Lisp

How to write an OS in Lisp is beyond the scope of a single post in this blog, and I don’t have a series prepared. However, I can drop some hints about how you can overcome some problems.

If you are writing your OS in Lisp, it helps to have some idea of how Lisp compiles your program to machine code. The disassemble function prints out the disassembly of a Lisp function. It is obviously implementation dependent.
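For example (add-fixnums is a made-up name, and the exact disassembly you see depends entirely on your implementation, platform, and optimization settings):

```lisp
;; A function simple enough that a good native-code compiler reduces
;; the body to a handful of machine instructions.
(defun add-fixnums (a b)
  (declare (type fixnum a b)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ a b)))

;; Implementation-dependent output.  On SBCL/x86-64 you should see
;; something close to a single ADD plus calling-convention boilerplate.
(disassemble #'add-fixnums)
```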

The inner dispatch of the compiler, when it is called, will be given a target location in which to deposit the result of evaluating the compiled expression. The target location given to the compiler will typically be a register, the top of stack, a fixed offset within the stack, or a global memory location, that sort of thing. The compiler should generate code that ultimately results in the value of the expression ending up in the target location.

A number of Lisp primitive functions can be implemented as a single CPU instruction. (Alternatively, you can create quirky Lisp “CPU subprimitive” functions out of each of the non-jump CPU instructions.) When compiling a function call to such a subprimitive function, the compiler emits code that fetches each argument from its location and places the value in a register, and then emits the single CPU instruction that the subprimitive consists of. It arranges for the result of the CPU instruction to be placed in the target location. What I’m describing here is basically what is known as a “compiler intrinsic” in other languages.

If you have a tree of compound expressions, each of which is one of these CPU subprimitive functions, the compiler will assign locations to all the intermediate computed values, determine an order in which to compile the primitives (linearized left to right), and then emit linear code that carries out the primitive operations. If all your code consists of calls to CPU primitives, then the compiler can generate straight-line assembly code. The compiler in this case only acts as a register allocator. From here, you can bootstrap yourself to a set of primitive procedures each of which is written in straight-line assembly code.

When writing an OS, you occasionally run into places where you want a very specific sequence of CPU instructions. You have a couple of options: some Lisp compilers offer a way to insert literal assembly code instructions in the compiled code instruction stream in a progn-like manner. Naturally such code is “unsafe”, but it nicely solves the problem where you need hand-crafted assembly code in your solution. The other solution is to write the code as a compound expression of CPU primitives and let the compiler sort out the register allocation.

Friday, April 11, 2025

Bloom Filter

This is the response time of the Google Mini search appliance on several thousand queries. There is an odd spike at 300 ms. A number of machines were taking exactly 300 ms to respond regardless of the query.

I soon tracked this down to the spell server. This is the microservice that puts “Did you mean foobar?” on the top of the page when you spell something wrong. The results page generator will wait up to 300 ms for the spell server to come up with its suggestions and then it will proceed without it. What appeared to be happening is that the spell server was giving a “no go” signal to the page generator at the beginning of page composition. The results page generator would then wait for the spell server to make a suggestion. The spell server would invariably take too long, so after 300 ms, the results page generator would time out and ship the page as is. This happened often enough that it showed up as a blip in the performance graph.

The spell server was based on a Bloom filter. A Bloom filter is a variation on a hash table where you only record that a bucket exists, but not its contents. A Bloom filter will quickly and reliably tell you if an entry is not in the table, but only probabilistically tell you if an entry is in the table. Bloom filters rely on having a good hash function and having a mostly empty hash table. If the hash table is mostly empty, the Bloom filter will usually end up hitting an empty bucket and returning false immediately.
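The data structure is simple enough to sketch. The toy version below is hypothetical code with made-up names (bloom-add and friends; the real spell server's implementation was nothing this naive), and it fakes a family of hash functions by salting sxhash:

```lisp
;; A toy Bloom filter: a bit vector plus a few salted hashes.
;; Salting SXHASH like this is NOT a high-quality family of hash
;; functions; it only illustrates the data structure.
(defparameter *bloom-size* (expt 2 20))
(defparameter *bloom-bits*
  (make-array *bloom-size* :element-type 'bit :initial-element 0))

(defun bloom-hashes (word)
  ;; Three bucket indices per word, one per salt.
  (loop for salt in '(0 17 4651)
        collect (mod (sxhash (cons salt word)) *bloom-size*)))

(defun bloom-add (word)
  ;; Record only that the buckets are occupied, not their contents.
  (dolist (i (bloom-hashes word))
    (setf (sbit *bloom-bits* i) 1)))

(defun bloom-member-p (word)
  ;; NIL means "definitely not present"; T means "probably present".
  (every (lambda (i) (= 1 (sbit *bloom-bits* i)))
         (bloom-hashes word)))
```

Because the table is vastly larger than anything added to it, a probe for a word that was never added almost always hits a zero bit on the first test and returns NIL immediately.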

A quick look at the spell server showed that the Bloom filter was almost always getting a hit; this would send the “no go” signal and trigger a slow lookup to find the misspelled word. The problem was that the Bloom filter was too small. It had too few buckets, so most buckets had an entry. Words would always get a hit in the Bloom filter, and so the search appliance thought that all words were misspelled.

I adjusted the size of the Bloom filter, giving it several million buckets. Now correctly spelled words would likely hit an empty bucket and the filter would give a “go” signal to the response page generator. Problem solved, and the spike at 300 ms went away.

Thursday, April 10, 2025

Why I Program in Lisp

Lisp is not the most popular language. It never was. Other general purpose languages are more popular and ultimately can do everything that Lisp can (if Church and Turing are correct). They have more libraries and a larger user community than Lisp does. They are more likely to be installed on a machine than Lisp is.

Yet I prefer to program in Lisp. I keep a Lisp REPL open at all times, and I write prototypes and exploratory code in Lisp. Why do I do this? Lisp is easier to remember, has fewer limitations and hoops you have to jump through, has lower “friction” between my thoughts and my program, is easily customizable, and, frankly, more fun.

Lisp's dreaded Cambridge Polish notation is uniform and universal. I don't have to remember whether a form takes curly braces or square brackets or what the operator precedence is or some weird punctuated syntax that was invented for no good reason. It is (operator operands ...) for everything. Nothing to remember. I basically stopped noticing the parentheses 40 years ago. I can indent how I please.

I program mostly functionally, and Lisp has three features that help out tremendously here. First, if you avoid side effects, it directly supports the substitution model. You can tell Lisp that when it sees this simple form, it can just replace it with that more complex one. Lisp isn't constantly pushing you into thinking imperatively. Second, since the syntax is uniform and doesn't depend on the context, you can refactor and move code around at will. Just move things in balanced parentheses and you'll pretty much be ok.

Third, in most computer languages, you can abstract a specific value by replacing it with a variable that names a value. But you can perform a further abstraction by replacing a variable that names a quantity with a function that computes a quantity. In functional programming, you often downplay the distinction between a value and a function that produces that value. After all, the difference is only one of time spent waiting for the answer. In Lisp, you can change an expression that denotes an object into an abstraction that computes an object by simply wrapping a lambda around it. It's less of a big deal these days, but until recently, properly working lambda expressions were available only in Lisp. Even so, lambda expressions are generally pretty clumsy in other languages.
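A contrived example:

```lisp
;; An expression that denotes a value:
(+ (* 2 3) 4)                          ; => 10

;; The same expression abstracted into a thunk that computes the
;; value on demand, just by wrapping a lambda around it:
(defvar *ten-later* (lambda () (+ (* 2 3) 4)))

(funcall *ten-later*)                  ; => 10
```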

Functional programming focuses on functions (go figure!). These are the ideal black box abstraction: values go in, answer comes out. What happens inside? Who knows! Who cares! But you can plug little simple functions together and get bigger more complex functions. There is no limit on doing this. If you can frame your problem as "I have this, I want that", then you can code it as a functional program. It is true that functional programming takes a bit of practice to get used to, but it allows you to build complex systems out of very simple parts. Once you get the hang of it, you start seeing everything as a function. (This isn't a limitation. Church's lambda calculus is a model of computation based on functional composition.)

Lisp lets me try out new ideas as quickly as I can come up with them. New programs are indistinguishable from those built in to the language, so I can build upon them just as easily. Lisp's debugger means I don't have to stop everything and restart the world from scratch every time something goes wrong. Lisp's safe memory model means that bugs don't trash my workspace as I explore the problem.

The REPL in Lisp evaluates expressions, which are the fundamental fragments of Lisp programs. You can type in part of a Lisp program and see what it does immediately. If it works, you can simply embed the expression in a larger program. Your program takes shape in real time as you explore the problem.

Lisp's dynamic typing gives you virtually automatic ad hoc polymorphism. If you write a program that calls +, it will work on any pair of objects that have a well-defined + operator. Now this can be a problem if you are cavalier about your types, but if you exercise a little discipline (like not defining + on combinations of strings and numbers, for example), and if you avoid automatic type coercion, then you can write very generic code that works on a superset of your data types. (Dynamic typing is a two-edged sword. It allows for fast prototyping, but it can hide bugs that would be caught at compile time in a statically typed language.)
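A small illustration (average is just an example name): one definition works on every numeric type that + and / understand, with no type parameters in sight:

```lisp
;; One definition, many numeric types.
(defun average (a b)
  (/ (+ a b) 2))

(average 1 3)              ; => 2
(average 1 2)              ; => 3/2 (an exact rational)
(average 1.0 2.0)          ; => 1.5
(average #c(0 2) #c(2 0))  ; => #C(1 1)
```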

Other languages may share some of these features, but Lisp has them all together. It is a language that was designed to be used as a tool for thinking about problems, and that is the fun part of programming.

Wednesday, April 9, 2025

Lisp Programs Don't Have Parentheses

Lisp programs don't have parentheses — they are made of nested linked lists. The parentheses only exist in the printed representation — the ASCII serialization — of a Lisp program. They tell the Lisp reader where the nested lists begin and end. Parentheses are the contour lines in the topographic map of your Lisp program.
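You can watch this at the REPL: read turns the textual serialization back into list structure that ordinary list operations take apart:

```lisp
;; The parentheses exist only in the text; READ recovers the tree.
(defvar *form* (read-from-string "(+ 1 (* 2 3))"))

*form*            ; => (+ 1 (* 2 3))
(first *form*)    ; => +
(third *form*)    ; => (* 2 3), itself a nested list
```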

The parentheses allow other programs, such as editors, to manipulate the program text as a tree structure. You can use the parentheses to determine the structure of the Lisp program without needing a complicated parser. You only need to know how to count the opening and closing parentheses to determine the boundaries of expressions in the text of a program. You don't need an IDE or a treesitter process running a language parser in the background to make sense of the lexical structure of the program.

This makes refactoring of a Lisp program easy because the expression boundaries are explicitly marked. Additionally, Lisp expressions do not have a dependency on the context in which they appear — Lisp expressions don't change form when you move them around. Contrast this with, for example, expression sequences in C. In C, in a statement context, expressions are terminated by semicolons, but in an expression context, they are separated by commas. If you move a sequence of expressions in C from statement to expression context, you also have to change the semicolons to commas.

As another example, in C and Java, conditional statements follow the if … else … form, but conditional expressions use the infix ternary ?: operator, so moving conditionals around may require a substantial edit. In a language without conditional expressions, like Go, wrapping a subexpression with a conditional may require large rewrites of the surrounding code.

These days, people use complex IDEs to write and refactor code. But still, you are dependent upon what the IDE provides. A refactoring that is not supported by the IDE has to be done manually. But Lisp just lets you move expressions around as text. No muss, no fuss. You can even use a simple text editor to do it.

Monday, April 7, 2025

Are You Tail Recursive?

I was trying to write a procedure that would test whether tail recursion was enabled or not. It turns out that this is semi-decidable. You can only tell if it is not tail recursive by blowing the stack, but you can never be sure that somebody didn’t make a really, really big stack.

But for the sake of argument, let’s assume that 2²⁸ recursive calls are enough to blow the stack. Also, let’s assume that the system will correctly signal an error and be able to recover from the blown stack once it is unwound. Then we can test for tail recursion as follows:

(defconstant +stack-test-size+ (expt 2 28))

(defun test-self-tail-recursion (x)
  (or (> x +stack-test-size+)
      (test-self-tail-recursion (1+ x))))

(defun test-mutual-tail-recursion (x)
  (or (> x +stack-test-size+)
      (test-mutual-tail-recursion-1 (1+ x))))

(defun test-mutual-tail-recursion-1 (x)
  (or (> x +stack-test-size+)
      (test-mutual-tail-recursion (1+ x))))

(defun self-tail-recursive? ()
  (ignore-errors (test-self-tail-recursion 0)))

(defun mutually-tail-recursive? ()
  (ignore-errors (test-mutual-tail-recursion 0)))

Naturally, your optimize settings are going to affect whether or not you have tail recursion enabled, and even if this code says it is enabled, it doesn’t mean that a different compilation unit compiled at a different time with different optimizations would be tail recursive as well. But if you run this in your default environment and it returns NIL, then you can be pretty sure your default is not tail recursive. If it returns T, however, then there are at least some conditions under which your system is tail recursive.

Use of certain features of Common Lisp will “break” tail recursion, for example special variable binding, multiple-value-bind, unwind-protect, and catch, and basically anything that marks the stack or dynamically allocates on the stack. If control has to return to the current function after a call, then it is not tail recursive, but if control can return directly to the caller of the current function, then the compiler should be able to make a tail recursive call.
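To illustrate (hypothetical functions; note-exit stands in for any cleanup action): the first definition below cannot be tail-call merged because control must return to run the unwind-protect cleanup, while the second can be:

```lisp
;; Hypothetical cleanup action, defined only to keep the example
;; self-contained.
(defun note-exit (n)
  (declare (ignorable n))
  nil)

;; NOT a tail call: after the recursive call returns, control must
;; come back here to run the cleanup form, so every level keeps its
;; stack frame alive.
(defun countdown-guarded (n)
  (unwind-protect
       (if (zerop n)
           :done
           (countdown-guarded (1- n)))
    (note-exit n)))

;; A genuine tail call: nothing remains to be done after the
;; recursive call, so a tail-calling compiler can reuse the frame.
(defun countdown (n)
  (if (zerop n)
      :done
      (countdown (1- n))))
```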

Most, but not all, Common Lisp implementations can be configured to enable tail recursion, and it is usually possible to disable it if desired. If you want tail recursion, but your implementation cannot provide it, you are SOL. If you don’t want tail recursion, but your implementation provides it by default, there are a number of things you can do to disable it. Usually, you can set the debugging options to retain every stack frame, or you can set the debug optimization to disable tail recursion. If the recursive call is not in tail position, then it is incorrect to tail call it, so you can disable tail recursion by moving the recursive call out of tail position. For example, you could write a without-tail-call macro:

(defmacro without-tail-call (form)
  (let ((value (gensym)))
    `((lambda (,value) ,value) ,form)))

;; Use like this:

(defun factorial (x &optional (acc 1))
  (if (zerop x)
      (progn (cerror "Return the factorial" "Computed a factorial")
             acc)
      (without-tail-call
        (factorial (1- x) (* acc x)))))

Running this program will drop us into the debugger because of the cerror and we can inspect the stack.

> (factorial 20)

Computed a factorial
   [Condition of type SIMPLE-ERROR]

Restarts:
 0: [CONTINUE] Return the factorial
 1: [RETRY] Retry SLY mREPL evaluation request.
 2: [*ABORT] Return to SLY’s top level.
 3: [ABORT] abort thread (#<THREAD tid=179 "sly-channel-1-mrepl-remote-1" RUNNING {1000BA8003}>)

Backtrace:
 0: (FACTORIAL 0 2432902008176640000)
 1: (FACTORIAL 1 2432902008176640000)
 2: (FACTORIAL 2 1216451004088320000)
 3: (FACTORIAL 3 405483668029440000)
 4: (FACTORIAL 4 101370917007360000)
 5: (FACTORIAL 5 20274183401472000)
 6: (FACTORIAL 6 3379030566912000)
  ...

As requested, each recursive call pushes a stack frame.

But what if the compiler “optimizes” ((λ (x) x) (foo)) to simply be (foo)? That is a questionable “optimization”. Lisp is a call-by-value language, meaning that the arguments to a function are fully evaluated before the function is applied to them. The call to (foo) should be fully evaluated before applying (λ (x) x) to it. The “optimization” essentially treats (foo) as unevaluated and then tail calls it after applying the lambda expression. This is call-by-name evaluation. While anything is possible, an “optimization” that sometimes changes the function call semantics from call-by-value to call-by-name is dubious.

Sunday, April 6, 2025

When Laymen Try to be Programmers

When I worked on Google Offers (a Groupon clone), we had a user interaction problem. When a user purchased an item, there were three important steps that we wanted to acknowledge:

  1. When Google receives an order. The user should get positive feedback that the order was recognized. If the user doesn’t get this, they will either re-submit the order, or be surprised when the order is fulfilled. The order is placed on a queue.
  2. When Google processes an order from the queue. The user should get positive feedback that Google is working on fulfilling the order. Usually, an order is fulfilled reasonably quickly, but there are situations where critical systems are unavailable and the order has to remain in the queue for an extended period of time (several hours). The user gets nervous if they get ghosted by the system after getting positive feedback that Google got the order.
  3. Finally, when Google has made the decision whether to fulfill the order or has declined the order. The user needs to be told whether to expect shipment or a refund. If a refund, then the user can take action to re-submit the order.

So in submitting an order to Google Offers, a total of three emails would be sent to the user so they could watch their order proceed through the system. The sending of these emails was controlled by the “commerce” API. The commerce API was a walled off section of the Google infrastructure that knew how to talk to the credit card companies and charge money. Normal Google code was not allowed to do these things but had to work via the commerce API, and the commerce API would take care of ensuring that the appropriate pop-ups would appear on the user’s screen and that the credit card information was securely obtained. Normal Google code never got its hands on the user’s credit card information; it only got a "go/no-go" signal from the commerce API.

So the commerce API would be the system actually taking the steps of receiving the order, charging the credit card, and returning the go/no-go signal to our system. We would instruct it to send email for each of these steps. So far so good.

The problem was that often the order would go through very quickly. The emails were processed in batches, so the email that acknowledged the receipt of the order could end up being received after the email that acknowledged that the order had been fulfilled. The user would first get an email saying "We charged your card." and only after this would they get an email saying "We got your order." This would confuse the user.

There was no way to add an artificial time lag, nor would we want to. We could not guarantee that the emails would arrive in the right order. (Even if we could, we couldn’t guarantee that the user would read them in the right order.) The solution that I proposed was to explicitly number the emails so that each email would say "This is email number N of 3 expected emails." and even perhaps a small message that said "Emails can arrive in the wrong order." If a user got email 2 first, then email 1, they could figure it out.

But the product managers didn’t see it that way. As far as they were concerned, it was confusing when email 1 arrived after email 2, so they wanted to not send email 1 if email 2 had been received. This is a nice idea, but I pointed out that we had no control over the order of arrival of emails, so there was no way to know if email 2 would be received prior to email 1 at the time we were sending email 1. They were adamant: "Don’t send the second email (that is, email 1, which arrived second in some situations)."

Ok, then. I adjusted the code to suppress the sending of email 1. This solved the problem of email 1 arriving after email 2, sure, but recall that email 1 was the "Google has received your order and has placed it on a queue to be processed" acknowledgement. Now when people placed an order, they would no longer get confirmation that Google had received it. Usually, Google would process the order in a timely manner and they’d quickly get email 2 which said "We are processing your order", but if there were some reason that we could not process the queue for some time, the user would be left in the dark about whether Google had heard them in the first place.

Complaints poured in.

The subsequent conversation was something like this:

“Why aren’t you acknowledging that the order has been received?”

“You explicitly told us to not send that email. See this communication (cite reference).”

“But that was not what we meant. We meant, don’t send it if the user has received the ‘Your order has been processed.’ email.”

“How are we supposed to know whether the email system delivered their mail and whether they read it in the right order? We’re good, but not that good.”

“You mean emails can arrive or be read in the wrong order?”

“Exactly what we said in this communication (cite reference).”

“...”

“May I suggest we number the emails so that the user can figure it out if they arrive in the wrong order?”

“No, don’t do that. Just put it back the way it was.”

Done, and we are back to the original problem.

Out-of-order packets are an old problem that existed and was solved before computers. Computer programmers are, of course, very aware of the problem and the solutions. Computer programmers are well versed in the problems of “process”, and when laymen try to play at being programmers by interfering with process, they generally screw it up.

Friday, April 4, 2025

Suddenly

Suddenly this week, Copilot is reporting Emacs among our company's top IDEs and Common Lisp among the top languages. All on me.

Thursday, April 3, 2025

Blacksmithing and Lisp

One of my hobbies is blacksmithing (not farrier work). Mild steel is an amazingly versatile material. It's very strong and hard at room temperature and it gets soft and easily workable when you heat it up. You use a hammer to move the metal once it is hot. You don't have to hit it very hard, just firmly. Hot metal is a very forgiving medium. If you make a mistake, simply heat the work up and try again. You rarely encounter mistakes you cannot recover from and you have to throw your work away.

A blacksmith uses tongs to manipulate work that would otherwise be too hot to handle. You don't want to drop a piece of hot steel, so you'd like your tongs to be a good fit on your work. Find some tongs that are an approximate fit and stick them in the fire to get good and hot. When they have softened, put the hot tongs around the cold workpiece and tap them into shape with your hammer. Voila! Custom tongs.

When I first saw this trick I was quite amused. It reminded me of Lisp programming — you can work on your problem, or you can customize the language to fit your problem better. And, yes, I'm the kind of nerd that sees a blacksmith trick and thinks “Lisp!”

Another computer-sciency question is one of bootstrapping. Where do tongs come from? How do you make tongs if you don't have them? This isn't too hard. A simple pair of tongs is basically two sticks with a rivet. You can shape half a pair of tongs (a single tong) by holding one end while you work the other. It takes a while before the end in your hand becomes too hot to hold.

A good part of blacksmithing is creating ad hoc tools in pursuit of the end goal. You fairly often recur and create tools that help create tools. Metacircular blacksmithing.

The downside of working hot steel is that it is quite hot. You will get burned, but usually only mildly. Your reflexes take over pretty quickly when you touch hot metal. Then you learn early on that if you drop something, you do not attempt to catch it.

Wednesday, April 2, 2025

Lisp at Work

Lisp is not a good fit for most of the projects I'm doing at work. Our company has few Lisp programmers, so there would be no one to help maintain any code I wrote, and no one has a Lisp environment set up on their machine. I'll sometimes do a quick prototype in Lisp, and I keep a REPL open on my machine, but I don't develop code I expect to share or deploy. Once I have a prototype or proof of concept, I'll write the real thing in a language that is more commonly used at work.

But sometimes, Lisp is such a good fit for a problem that it overrides the considerations of how well it fits into your company's ecosystem. Not often, but sometimes.

I had to migrate some builds from CircleCI 2 to CircleCI 4. We had tried (twice) and failed (twice) to do an in-place upgrade of the server, so we instead brought up CircleCI 4 in parallel and migrated the builds over. To migrate a build, we had to extract the build information from the old server and import it into the new server. The API to the old CircleCI server did not expose all the data we needed, and there was no API on the new server that let you import everything we needed.

But CircleCI is written in clojure. So you can connect to a running instance of CircleCI and open a REPL. The REPL has access to all the internal data structures. It is an easy matter to write some lisp code to extract the necessary data and print it to stdout. It is also an easy matter to write some lisp code to read some data from stdin and import it into the new server.

The migration code could be written in any language, but it had to talk to a clojure REPL, format data in such a way that the clojure REPL could read it, and parse the output from the clojure REPL, which was going to be a Lisp object. Any language with a Common Lisp reader and printer would do, but you sort of get these for free if you actually use Lisp. I knew that the migration code could be written in Lisp because I had already prototyped it.
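The conversation with the remote REPL has roughly this shape. A sketch only, assuming the usocket system is loaded (e.g. via Quicklisp); host, port, and framing are hypothetical, and a real clojure nREPL adds bencode framing on top of this:

```lisp
(defun remote-eval (host port form)
  "Print FORM to a socket REPL and READ back the printed reply.
Assumes the remote end reads and prints plain s-expressions
with no prompt or extra framing."
  (usocket:with-client-socket (socket stream host port)
    (prin1 form stream)
    (terpri stream)
    (force-output stream)
    (read stream)))
```

The point is that printing the request and reading the reply come for free from the Lisp reader and printer; everything else is plumbing.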

So I wrote a migration function that would take the name of the project to be migrated. Then I created a Dockerfile that would boot an instance of sbcl on a core file. I preloaded the migration code and dumped the core. I built the Dockerfile into an image that, when run, would migrate a single project read from the command line. I then created a server that a user could visit, enter the name of his project, and it would run the Docker image as a kubernetes job.

We migrated most of the projects this way. At one point, I wrote an additional script that would read the entire list of projects in the old server and simply print them to stdout. After we shut down the migration server, I'd get requests from people that didn't seem to understand what a deadline was. I could prime the migration script from the file and it would initialize a project on the new server with the old state that I dumped. Migration stragglers could often recover this way.

Using Lisp for the migration tool did not add risk to the company. Just using the tool did not require Lisp knowledge. Anyone that needed to understand the migration process in detail had to understand clojure anyway. The migration took place over a period of weeks, and the migration tool was shut down at the end of this period. Long term maintenance was not a concern. Users of the migration tool did not have to understand Lisp, or even know what language was being used. It was a black box kubernetes job. I got buy-in from management because the tool simply worked and solved a problem that had twice been unsuccessfully attempted before.

Tuesday, April 1, 2025

Vibe Coding, final word

I couldn't leave it alone. This AI was going to write some Lisp code if I had to force it. This isn't “vibing” anymore. We're going to be precise, exact, and complete in our instructions, and we're going to check the results.

Again, I'm taking on a Minesweeper clone as the problem. All the code was to be written in a single file using a single package. The AI simply didn't understand the problem of forward references to symbols in other packages. Perhaps a game loop is beyond the ability of the AI. I wrote a basic game loop that initializes all the required libraries in correct order with unwind-protects to clean up in reverse order. I wrote a main function that creates a window and a renderer to draw on it, and a game loop that polls for events and handles keypresses and the quit event. This is a basic black window that has no behavior beyond the ability to quit. There should be no need for the AI to modify this code.

The AI used the GPT-4o model. Instructions were given in precise, imperative English. For example,

“Each cell on the board is in one of these states: hidden, flagging, flagged, unflagging, exposing, exposed. Cells start out in the hidden state. When a cell is hidden, it renders as a blank square. When a cell is hidden and the mouse is over the cell and the right button is down, the cell enters the flagging state. When a cell is flagging and the mouse is over the cell and the right button is up, the cell enters the flagged state. When a cell is flagged and the mouse is over the cell and the right button is down, the cell enters unflagging. When the cell is unflagging, the mouse is over the cell, and the right button is up, the cell enters hidden. Cells that are flagging or flagged display as the flag texture. Cells that are hidden or unflagging display as the blank texture.”

This is programming, not vibing. There is always room for misunderstanding, but I spelled out the details of part of the state transitions that I wanted the AI to implement. In particular, notice that when flagging a cell, there are hidden states beyond the flagged and unflagged states. These are necessary to make the effect of flagging and unflagging be edge triggered. I didn't trust the AI to know about this, so I spelled it out.

Sometimes I could write simple directions, such as:

“When rendering a cell, if it is under the mouse, highlight the cell with a red border.”

Or:

“When the cell is in the exposing state, it should display as a blank, depressed cell.”

But then again,

“When the mouse is over a cell, if the cell is in hidden state and left mouse button is down, the cell enters the exposing state. If a cell is in exposing state and the mouse is not over the cell, the cell reverts to hidden state. If a cell is in exposing state, and the mouse is over the cell, and the left mouse button is raised, the cell enters the exposed state.”

The AI did a surprisingly good job at handling the parentheses. There was one function that it kept adding a stray parenthesis to. I went back more than once to delete that extra parenthesis when the reader barfed.

At one point, the AI accidentally removed the ability to place a flag on a cell. I asked it to restore that ability, but I didn't give instructions in as great detail as I did before. It added transitions to and from the flagged state, but it didn't add the intermediate flagging and unflagging states as I had explicitly directed it to before. As a result, flagging ended up being non-deterministic, with the flag being toggled on and off every 16 ms. I had to go back and add the intermediate states by hand again to restore the determinism.

After a few hours of this, I had a working Minesweeper clone. Let's look at the generated code.

;; -*- mode: lisp; package: "VIBE" -*-

(in-package "VIBE")

;; Define constants for magic numbers
(defparameter *tile-size* 32) ;; Size of each tile in pixels
(defparameter *game-width* 25) ;; Number of columns
(defparameter *game-height* 15) ;; Number of rows
(defparameter *frame-delay* 16) ;; Delay between frames in milliseconds

;; Define cell states
(defconstant +hidden+ 'hidden)
(defconstant +flagging+ 'flagging)
(defconstant +flagged+ 'flagged)
(defconstant +unflagging+ 'unflagging)
(defconstant +exposing+ 'exposing)
(defconstant +exposed+ 'exposed)

;; Define game states
(defconstant +playing+ 'playing)
(defconstant +won+ 'won)
(defconstant +lost+ 'lost)

It is a little odd to define the state constants as symbols. This adds a pointless extra level of symbolic indirection. I would have used keywords instead of symbols.
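With keywords there is nothing to define; the states are just the self-evaluating symbols :hidden, :flagging, :flagged, :unflagging, :exposing, and :exposed, and they still compare with eql. A sketch (the list and predicate are mine, not the AI's):

```lisp
;; The states as keywords: self-evaluating, no defconstants needed.
(defparameter *cell-states*
  '(:hidden :flagging :flagged :unflagging :exposing :exposed))

(defun cell-state-p (thing)
  "True when THING names one of the cell states."
  (and (member thing *cell-states*) t))
```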

;; Global game state
(defparameter *game-state* +playing+)

;; The game board
(defparameter *board* nil)

;; The game board content (e.g., :mine or a number)
(defparameter *board-content* nil)

(defun initialize-board ()
  "Initialize the game board with all cells in the hidden state and place mines."
  ;; Initialize the board state
  (setf *board* (make-array (list *game-height* *game-width*)
                            :initial-element +hidden+))
  ;; Initialize the board content
  (setf *board-content* (make-array (list *game-height* *game-width*)
                                    :initial-element 0))
  ;; Place 50 mines randomly
  (loop repeat 50
        do (let ((row (random *game-height*))
                 (col (random *game-width*)))
             ;; Ensure no duplicate mines
             (unless (eql (aref *board-content* row col) :mine)
               (setf (aref *board-content* row col) :mine)
               ;; Update adjacent cells
               (update-adjacent-cells row col)))))

(defun update-adjacent-cells (row col)
  "Increment the mine count for all adjacent cells."
  (loop for r from (max 0 (1- row)) to (min (1+ row) (1- *game-height*))
        do (loop for c from (max 0 (1- col)) to (min (1+ col) (1- *game-width*))
                 do (unless (eql (aref *board-content* r c) :mine)
                      (incf (aref *board-content* r c))))))

You can tell that I didn't write this code. I hate loop.

The textures are loaded from PNG files just before beginning the game loop. The AI decided to maintain them as four separate values that are passed around to the functions that need them. This will get out of hand quickly. A table of resources would be better.
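Such a table might be sketched as a single hash table keyed by name, so the rendering functions take one argument (or none) instead of four. The names here are my own, not from the generated code:

```lisp
;; Sketch: one resource table instead of spread texture arguments.
(defparameter *textures* (make-hash-table :test 'eq))

(defun texture (name)
  "Look up a texture by name, erroring on a missing resource."
  (or (gethash name *textures*)
      (error "No texture loaded for ~s" name)))

(defun (setf texture) (new-texture name)
  (setf (gethash name *textures*) new-texture))

;; Usage: (setf (texture :flag) flag-texture)
;;        (sdl2:render-copy renderer (texture :flag) :dest-rect dest-rect)
```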

(defun render-frame (renderer blank-texture flag-texture mine-texture digit-textures)
  (sdl2:set-render-draw-color renderer 0 0 0 0)
  (sdl2:render-clear renderer)
  ;; Render game state
  (loop for row from 0 below *game-height*
        do (loop for col from 0 below *game-width*
                 do (render-tile renderer row col blank-texture flag-texture mine-texture digit-textures))))

(defun render-tile (renderer row col blank-texture flag-texture mine-texture digit-textures)
  "Render a single tile based on its state and highlight it if under the mouse."
  (let ((state (aref *board* row col))
        (content (aref *board-content* row col)) ;; Content of the cell (e.g., :mine or a number)
        (dest-rect (sdl2:make-rect (* col *tile-size*) (* row *tile-size*) *tile-size* *tile-size*)))
    ;; Render the tile based on its state
    (cond
      ;; Hidden or unflagging state: render blank texture
      ((or (eql state +hidden+) (eql state +unflagging+))
       (sdl2:render-copy renderer blank-texture :dest-rect dest-rect))
      ;; Flagging or flagged state: render flag texture
      ((or (eql state +flagging+) (eql state +flagged+))
       (sdl2:render-copy renderer flag-texture :dest-rect dest-rect))
      ;; Exposing state: render blank texture with a depressed effect
      ((eql state +exposing+)
       (sdl2:render-copy renderer blank-texture :dest-rect dest-rect)
       (sdl2:set-render-draw-color renderer 50 50 50 100) ;; Semi-transparent dark overlay
       (sdl2:render-fill-rect renderer dest-rect))
      ;; Exposed state: render mine or digit
      ((eql state +exposed+)
       (cond
         ;; If the cell contains a mine, render the mine texture
         ((eql content :mine)
          (sdl2:render-copy renderer mine-texture :dest-rect dest-rect))
         ;; If the cell contains a digit (positive number), render the corresponding digit texture
         ((and (integerp content) (> content 0))
          (let ((digit-texture (gethash content digit-textures)))
            (when digit-texture
              (sdl2:render-copy renderer digit-texture :dest-rect dest-rect))))
         ;; Otherwise, render a default exposed tile (e.g., light gray)
         (t
          (sdl2:set-render-draw-color renderer 200 200 200 255) ;; Light gray for exposed
          (sdl2:render-fill-rect renderer dest-rect)))))
    ;; Highlight the tile with a red border if it is under the mouse
    (multiple-value-bind (mouse-x mouse-y _) (sdl2:mouse-state)
      (declare (ignore _))
      (let ((mouse-row (floor (/ mouse-y *tile-size*)))
            (mouse-col (floor (/ mouse-x *tile-size*))))
        (when (and (= row mouse-row) (= col mouse-col))
          (sdl2:set-render-draw-color renderer 255 0 0 255) ;; Red color
          (sdl2:render-draw-rect renderer dest-rect)
          (sdl2:set-render-draw-color renderer 0 0 0 255)))))) ;; Reset color to black

Each time we go around the game loop, we update the state of the cell under the mouse. The AI decided to pass the state inputs as spread arguments to the update function.

(defun update-cell-state (row col mouse-over left-button-down left-button-up right-button-down right-button-up)
  "Update the state of a cell based on mouse input."
  (let ((state (aref *board* row col)))
    (cond
      ;; If the cell is hidden and the left mouse button is down, it enters the exposing state
      ((and (eql state +hidden+) mouse-over left-button-down)
       (setf (aref *board* row col) +exposing+))

      ;; If the cell is exposing and the mouse is not over it, it reverts to hidden
      ((and (eql state +exposing+) (not mouse-over))
       (setf (aref *board* row col) +hidden+))

      ;; If the cell is exposing, the mouse is over it, and the left mouse button is raised, it enters the exposed state
      ((and (eql state +exposing+) mouse-over left-button-up)
       (setf (aref *board* row col) +exposed+)
       ;; If the cell contains a mine, the game is lost and all mines are exposed
       (when (eql (aref *board-content* row col) :mine)
         (setf *game-state* +lost+)
         (expose-all-mines))
       ;; If the cell has zero neighboring mines, recursively expose neighbors
       (when (and (integerp (aref *board-content* row col))
                  (= (aref *board-content* row col) 0))
         (expose-neighbors row col)))

      ;; If the cell is hidden and the right mouse button is down, it enters the flagging state
      ((and (eql state +hidden+) mouse-over right-button-down)
       (setf (aref *board* row col) +flagging+))

      ;; If the cell is flagging and the right mouse button is up, it enters the flagged state
      ((and (eql state +flagging+) mouse-over right-button-up)
       (setf (aref *board* row col) +flagged+))

      ;; If the cell is flagged and the right mouse button is down, it removes the flag
      ((and (eql state +flagged+) mouse-over right-button-down)
       (setf (aref *board* row col) +unflagging+))

      ((and (eql state +unflagging+) mouse-over right-button-up)
       (setf (aref *board* row col) +hidden+)))))

(defun poll-mouse-and-update ()
  "Poll the mouse position and button states, and update the board accordingly."
  (when (eql *game-state* +playing+) ;; Only process mouse input if the game is playing
    (multiple-value-bind (x y buttons) (sdl2:mouse-state)
      (let ((row (floor (/ y *tile-size*)))
            (col (floor (/ x *tile-size*)))
            (left-button-down (logbitp 0 buttons))  ;; SDL_BUTTON_LEFT is bit 0
            (right-button-down (logbitp 2 buttons))) ;; SDL_BUTTON_RIGHT is bit 2
        (when (and (>= row 0) (< row *game-height*)
                   (>= col 0) (< col *game-width*))
          ;; Update the cell state based on mouse input
          (update-cell-state row col
                             t ;; mouse-over is true for the current cell
                             left-button-down
                             (not left-button-down)
                             right-button-down
                             (not right-button-down)))))))

This illustrates that while the lights appear to be on, no one is at home. The mouse-over variable is always true, so there is no need for it to exist at all. There is no need to pass both left-button-down and its complement, and the same goes for right-button-down.
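Stripped of the redundant arguments, the flag half of the state machine reduces to a pure function of the cell state and the right-button bit. A sketch of my own, using keywords for the states:

```lisp
(defun update-flag-state (state right-down)
  "Map the current cell state and the right-button bit to the
next state.  Edge triggering falls out of the intermediate
:flagging and :unflagging states."
  (cond ((and (eql state :hidden)     right-down)       :flagging)
        ((and (eql state :flagging)   (not right-down)) :flagged)
        ((and (eql state :flagged)    right-down)       :unflagging)
        ((and (eql state :unflagging) (not right-down)) :hidden)
        (t state)))
```

A pure function like this is also trivially testable at the REPL, which the spread-argument version is not.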

I did allow the AI to modify game-loop, but the modifications were subject to careful scrutiny to make sure that the game would continue to run. In particular, one time it wanted to add handlers for mouse events. I told it no, and that it could poll the mouse state as necessary instead.

(defun game-loop (window renderer blank-texture flag-texture mine-texture digit-textures game-over-texture)
  "Main game loop."
  (declare (ignore window))
  ;; Main game loop
  (sdl2:with-event-loop (:method :poll)
    (:idle ()
           ;; Clear the screen
           (sdl2:set-render-draw-color renderer 0 0 0 255) ;; Black background
           (sdl2:render-clear renderer)

           ;; Poll mouse and update game state
           (poll-mouse-and-update)

           ;; Render the game frame
           (render-frame renderer blank-texture flag-texture mine-texture digit-textures)

           ;; Render the "Game Over" overlay if the game is lost
           (when (eql *game-state* +lost+)
             (let ((screen-width (* *tile-size* *game-width*))
                   (screen-height (* *tile-size* *game-height*)))
               ;; Set blend mode and alpha for transparency
               (sdl2:set-texture-blend-mode game-over-texture :blend)
               (sdl2:set-texture-alpha-mod game-over-texture 192) ;; 75% transparency
               ;; Render the texture as a full-screen overlay
               (let ((dest-rect (sdl2:make-rect 0 0 screen-width screen-height)))
                 (sdl2:render-copy renderer game-over-texture :dest-rect dest-rect))))

           ;; Present the rendered frame
           (sdl2:render-present renderer)

           ;; Delay for the next frame
           (sdl2:delay *frame-delay*))
    (:keydown (:keysym keysym)
              (cond
                ;; Reset the game when the 'o' key is pressed
                ((eql (sdl2:scancode keysym) :scancode-o)
                 (reset-game))
                ;; Quit the game when the 'x' key is pressed
                ((eql (sdl2:scancode keysym) :scancode-x)
                 (sdl2:push-quit-event))
                ;; Lose the game and expose all mines when the 'p' key is pressed
                ((eql (sdl2:scancode keysym) :scancode-p)
                 (progn
                   (setf *game-state* +lost+)
                   (expose-all-mines)))))
    (:quit () t)))

Notice that in this game loop, we're not accounting for the time it takes to update the game state and render the frame. If this game really tried to animate anything, the animation would be jittery. A better game loop would track real time and refresh accordingly.
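A frame-rate-aware delay might be sketched like this, assuming cl-sdl2's sdl2:get-ticks millisecond clock and the *frame-delay* parameter defined above:

```lisp
(defun delay-until-next-frame (frame-start-ticks)
  "Sleep only for whatever remains of the frame budget.
Assumes *frame-delay* is the target milliseconds per frame."
  (let ((remaining (- *frame-delay*
                      (- (sdl2:get-ticks) frame-start-ticks))))
    (when (plusp remaining)
      (sdl2:delay remaining))))

;; In the :idle handler:
;;   (let ((start (sdl2:get-ticks)))
;;     ;; ... update and render ...
;;     (delay-until-next-frame start))
```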

For a simple game such as this, it makes sense to load all the bitmaps into memory at the get-go. For a more complicated game with many levels, you might not be able to fit them all in memory.

Passing the surfaces around as arguments is not going to work when you have a lot of them.

(defun initialize ()
  "Initialize the game, load textures, and create the game board."
  (initialize-board) ;; Initialize the game board
  (let ((blank-surface nil)
        (flag-surface nil)
        (mine-surface nil)
        (game-over-surface nil)
        (digit-surfaces (make-hash-table)))
    (unwind-protect
         (progn
           ;; Load PNG surfaces
           (setq blank-surface (sdl2-image:load-image
                                (asdf:system-relative-pathname "vibe" "textures/blank.png")))
           (setq flag-surface (sdl2-image:load-image
                               (asdf:system-relative-pathname "vibe" "textures/flag.png")))
           (setq mine-surface (sdl2-image:load-image
                               (asdf:system-relative-pathname "vibe" "textures/mine.png")))
           ;; Load digit textures (e.g., "1.png", "2.png", etc.)
           (loop for i from 1 to 8
                 do (setf (gethash i digit-surfaces)
                          (sdl2-image:load-image
                           (asdf:system-relative-pathname "vibe" (format nil "textures/~a.png" i)))))
           ;; Create the "GAME OVER" surface
           (setq game-over-surface (create-game-over-surface))

           ;; Create the window and renderer
           (sdl2:with-window (window
                              :title "Vibe"
                              :x 0 :y 0
                              :w (* *tile-size* *game-width*)
                              :h (* *tile-size* *game-height*)
                              :flags '(:shown))
             (sdl2:with-renderer (renderer window :index -1 :flags '(:accelerated))
               (let ((blank-texture (sdl2:create-texture-from-surface renderer blank-surface))
                     (flag-texture (sdl2:create-texture-from-surface renderer flag-surface))
                     (mine-texture (sdl2:create-texture-from-surface renderer mine-surface))
                     (digit-textures (make-hash-table))
                     (game-over-texture (sdl2:create-texture-from-surface renderer game-over-surface)))
                 ;; Convert digit surfaces to textures
                 (maphash (lambda (key surface)
                            (setf (gethash key digit-textures)
                                  (sdl2:create-texture-from-surface renderer surface)))
                          digit-surfaces)
                 (unwind-protect
                      (game-loop window renderer blank-texture flag-texture mine-texture digit-textures game-over-texture)
                   ;; Cleanup textures
                   (sdl2:destroy-texture blank-texture)
                   (sdl2:destroy-texture flag-texture)
                   (sdl2:destroy-texture mine-texture)
                   (sdl2:destroy-texture game-over-texture)
                   (maphash (lambda (_key texture)
                              (declare (ignore _key))
                              (sdl2:destroy-texture texture))
                            digit-textures)))))))
      ;; Cleanup surfaces
      (when flag-surface (sdl2:free-surface flag-surface))
      (when blank-surface (sdl2:free-surface blank-surface))
      (when mine-surface (sdl2:free-surface mine-surface))
      (when game-over-surface (sdl2:free-surface game-over-surface))
      (maphash (lambda (_key surface)
                 (declare (ignore _key))
                 (sdl2:free-surface surface))
               digit-surfaces)))

In Minesweeper, if you click on a cell with no neighboring mines, all the neighboring cells are exposed. This will open up larger areas of the board. The AI did a good job of implementing this, but I was careful to specify that only the hidden cells should be exposed. Otherwise, the recursion would not bottom out because every cell is a neighbor of its neighbors.

(defun expose-neighbors (row col)
  "Recursively expose all hidden neighbors of a cell with zero neighboring mines."
  (loop for r from (max 0 (1- row)) to (min (1+ row) (1- *game-height*))
        do (loop for c from (max 0 (1- col)) to (min (1+ col) (1- *game-width*))
                 do (when (and (eql (aref *board* r c) +hidden+)) ;; Only expose hidden cells
                      (setf (aref *board* r c) +exposed+)
                      ;; If the neighbor also has zero mines, recursively expose its neighbors
                      (when (and (integerp (aref *board-content* r c))
                                 (= (aref *board-content* r c) 0))
                        (expose-neighbors r c))))))

We need a way to get the game back to the initial state.

(defun reset-game ()
  "Reset the game by reinitializing the board and setting the game state to playing."
  (initialize-board)
  (setf *game-state* +playing+))

The AI writes buggy code. Here is an example. It is trying to figure out if the player has won the game. You can state the winning condition in a couple of different ways.

  • All the cells that are not mines are exposed.
  • All the cells that are mines are flagged, all flagged cells contain mines.

This doesn't quite achieve either of these.

(defun check-win-condition ()
  "Check if the player has won the game."
  (let ((won t)) ;; Assume the player has won until proven otherwise
    (loop for row from 0 below *game-height*
          do (loop for col from 0 below *game-width*
                   do (let ((state (aref *board* row col))
                            (content (aref *board-content* row col)))
                        (when (and (not (eql state +exposed+)) ;; Cell is not exposed
                                   (not (or (eql state +flagged+) ;; Cell is not flagged
                                            (eql content :mine)))) ;; Cell does not contain a mine
                          (setf won nil)))))
    (when won
      (setf *game-state* +won+))))
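For contrast, a version that implements the first condition (every non-mine cell exposed), keeping the surrounding code's loop style and relying on the globals defined above, might look like:

```lisp
(defun check-win-condition ()
  "Win when every cell that is not a mine has been exposed."
  (when (loop for row from 0 below *game-height*
              always (loop for col from 0 below *game-width*
                           always (or (eql (aref *board-content* row col) :mine)
                                      (eql (aref *board* row col) +exposed+))))
    (setf *game-state* +won+)))
```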

create-game-over-surface prepares a surface with the words “Game Over” writ large.

(defun create-game-over-surface ()
  "Create a surface for the 'GAME OVER' splash screen using SDL2-TTF."
  (let ((font nil)
        (text-surface nil))
    (unwind-protect
         (progn
           ;; Load the font (adjust the path and size as needed)
           (setq font (sdl2-ttf:open-font (asdf:system-relative-pathname "vibe" "fonts/arial.ttf") 72))
           ;; Render the text "GAME OVER" in red
           (setq text-surface (sdl2-ttf:render-text-solid font "GAME OVER" 255 0 0 255)))
      ;; Cleanup
      (when font (sdl2-ttf:close-font font)))
    text-surface))

The main function initializes the SDL2 library and its auxiliary libraries, with unwind-protects to uninitialize them when we leave the game. The AI was not permitted to change this code.

(defun main ()
  (sdl2:with-init (:video)
    (unwind-protect
         (progn
           (sdl2-image:init '(:png))
           (unwind-protect
                (progn
                  (sdl2-ttf:init)
                  (initialize))
             (sdl2-ttf:quit)))
      (sdl2-image:quit))))

If you step on a mine, it exposes the other mines.

(defun expose-all-mines ()
  "Expose all mines on the board."
  (loop for row from 0 below *game-height*
        do (loop for col from 0 below *game-width*
                 do (when (eql (aref *board-content* row col) :mine)
                      (setf (aref *board* row col) +exposed+)))))

Conclusion

This wasn't “vibe coding”. This was plain old coding, but filtered through an English language parser. It added an extra level of complexity. Not only did I have to think about what should be coded, I had to think about how to phrase it such that the AI would generate what I had in mind and not disturb the other code.

Whenever I tried to let go and “vibe”, the AI would generate some unworkable mess. Programming is a craft that requires training and discipline. No dumb pattern matcher (or sophisticated one) is going to replace it.

In languages other than Common Lisp, you might get further. Consider Java. It takes a page and a half of boilerplate to specify the simplest first-class object. An AI can easily generate pages and pages of boilerplate and appear to be quite productive. But you've missed the point if you think that it is better to generate boilerplate automatically than to use abstractions to avoid it in a language that doesn't need it.

Monday, March 31, 2025

Avoiding Stringly Typed Code

It can be tempting to implement certain objects by their printed representation. This is especially true when you call out to other programs and pass the parameters in command line arguments and get a result back through the stdout stream. If an object is implemented by its printed representation, then serialization and deserialization of the object across program boundaries is trivial.

Objects implemented by their printed representation are jokingly referred to as “stringly typed”. The type information is lost, so it is possible to pass strings representing objects of the wrong type and get nonsense answers. There are no useful predicates on arbitrary strings, so you cannot do type checking or type dispatch. This becomes a big problem for objects created from other utilities. When you call out to a bash script, you usually get the response as a stream or a string.

The solution? Slap a type on it right away. For any kind of string we get back from another program, we at least define a CLOS class with a single slot that holds a string. I define two Lisp bindings for any program implemented by a shell script. The one with a % prefix is the program that takes and returns strings. Without the % it takes and returns Lisp objects that are marshaled to and from strings before the % version is called. The % version obviously cannot do type checking, but the non-% entry point can and does enforce the runtime type.
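As a concrete sketch of the pattern (the class name is a hypothetical stand-in, and whoami plays the part of the external program):

```lisp
(defclass username ()
  ((string :initarg :string :reader username-string :type string))
  (:documentation "A typed wrapper around the raw string."))

(defun %whoami ()
  "The % entry point: raw strings in, raw string out."
  (string-trim '(#\Newline #\Space)
               (uiop:run-program '("whoami") :output :string)))

(defun whoami ()
  "The non-% entry point: returns a typed object, not a bare string."
  (make-instance 'username :string (%whoami)))
```

Now a function that expects a username can check-type its argument, and a username can never be accidentally passed where, say, a hostname belongs.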

Sunday, March 30, 2025

Keep a REPL Open

I keep a REPL open at all times whether or not I’m actually doing Lisp development. It’s my assistant that evaluates random expressions for me. I’ll script up little tasks in Common Lisp rather than in Bash. When I need to rapidly prototype something larger, I’ll switch to the REPL and do it there.

At work, my REPL has accumulated a bunch of code for talking to systems like GitHub, CircleCI, and LDAP as well as our in-house tools. These are utilities for my use only. I don’t write mission critical apps in Common Lisp. No one else in the company uses it, and it is more important that the code be maintainable by the rest of the team than that it be written in a language I like. So I write the mission critical code in Python, or Golang, or Java, or whatever the rest of the team is using. I keep my Common Lisp to myself. I have, however, used it to prototype code that eventually ends up ported to Python or Golang.

On occasion, I’ve wanted to quickly share some functionality before I have taken the time to port it. I’ve found two ways to do this. The first is to slap a web server on it. I use Hunchentoot for this. I translate JSON to Lisp coming in to the web server and Lisp back to JSON going out. This is all you effectively need for a black-box microservice. There have been a couple of transient projects where the whole thing was not expected to be maintained for a long time and by anyone other than me, so I can just throw up a microservice and tell my colleagues to hit it with a curl command.
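The skeleton of such a microservice, assuming Hunchentoot and cl-json are loaded (the handler name, URI, port, and process-request are hypothetical):

```lisp
(defun process-request (request)
  "Stand-in for the real work; echoes the decoded request back."
  request)

(hunchentoot:define-easy-handler (api-handler :uri "/api") ()
  (setf (hunchentoot:content-type*) "application/json")
  ;; JSON in -> Lisp, do the work, Lisp -> JSON out.
  (json:encode-json-to-string
   (process-request
    (json:decode-json-from-string
     (hunchentoot:raw-post-data :force-text t)))))

(defun start-service (port)
  (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port port)))

;; (start-service 8042)
;; curl -d '{"project": "foo"}' http://localhost:8042/api
```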

The second way is to create a docker image that contains the Common Lisp code and all of its dependencies. It can take a bit of work to configure a lisp setup in your environment, so hiding it inside a docker image lets me correctly set up the Lisp environment along with the Lisp interpreter and the rest of the code. My colleagues can just pull and run the container and it will work. Again, this is only for small, throwaway projects that no one else is expected to modify or maintain. For anything that is mission critical or is expected to be shared at some point, I write it in Python or Golang or Java, etc.

I could have written these as a series of Bash scripts or Python programs, but when you start connecting a series of these together, you quickly run into the limitations of using a pipe to talk between programs. My Lisp scripts all reside in the same address space, so they can share structured data without any fancy marshaling protocol.

Saturday, March 29, 2025

Angry Fruit Salad

I like to program in living color. My color scheme is definitely “angry fruit salad”, but there is a method to my madness.

My eyeglasses have a very strong prescription. Chromatic aberration is significant at the edges of my field of vision, so it is important that text be mostly monochromatic, or it will split into tiny glyph-shaped spectra. So my main text color is green on a black background, like a terminal from the 1970s. From there, I chose cyan for comments in the code because it is easy to read. I generally favor the warmer colors for the more “active” elements and the cooler colors for the more “passive” ones, but there are many exceptions.

I have found that my brain gets used to the colors. When something shows up in an unexpected color, it immediately looks wrong, even if I don’t know why. I can leverage this effect by using a very wide variety of colors for different semantic elements. I’m not consciously aware of the semantic meaning; I can just tell when the code looks the wrong color.

So my code looks like the Vegas strip: gaudy, neon colors fighting for attention. I’m sure it would drive many people up the wall. A VSCode theme sort of based on this is available at https://github.com/jrm-code-project/VSCode-Theme.

Friday, March 28, 2025

Vibed into non-functioning

Continue vibing? Well, why not go all the way.

The AI wasn’t doing so well with the main game loop, so I gave it enough help that a blank window would come up. The window would respond to the X key being pressed in order to exit, but you could also close the window to exit as well.

I told the AI that I wanted a grid of tiles. Some tiles had mines. The remaining tiles had an integer which was the number of mines in adjacent squares. The AI wanted to load some textures from files 0.png through 8.png. I asked it to generate those files, but it didn’t want to. So I broke out Paint and generated some crude 32x32 png images of numbers, a mine, a blank, and a flag.

The AI tried to load these images directly, so I had to instruct it that you need a dependency on SDL2-image and that you load the image onto a surface, and then load a texture from the surface (think of a texture as a bitmap on the GPU and a surface as a bitmap on the CPU). There were several rounds of trying the code, getting an error, and pasting the error into the AI. As per the philosophy of vibe coding, I just accepted the suggested changes without looking at them. I did have to direct it not to try to “use” packages because that simply introduced name conflicts.

I got to the point where I could compile and load the game so far with no errors. I was testing the code at each step. It wasn’t making much progress insofar as displaying anything, but it at least didn’t regress.

Until it did. I had vibed to the point where I got a nice black rectangle on the screen that did not display anything or respond to any input. No errors were printed. Time to debug. The problem is that I only had a vague idea of what it was doing. I wasn’t paying much attention to changes being made. I dove into the code that had been generated.

What a mess. I had my suspicions as to what was wrong. Some of the newly added code needed to use the SDL2 image library. It needs to initialize the SDL2 image library, load the surfaces, and load the textures in that order. When it exits, it has to unload things in reverse order. When I wrote my Platformer Tutorial, I wrote a set of with-... macros that would pair up loading/unloading and initialize/uninitialize steps with an unwind-protect. If you use the with-... macros, you automatically get the LIFO order of operation that you need for the code to function, and the unwind-protects make sure that the uninitialization occurs even if you error out or abort the application.
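The with-... pattern described above can be sketched like this. The `load-surface`/`free-surface` primitives here are hypothetical stand-ins, not the tutorial’s actual code:

```lisp
;; Pair an allocation with its deallocation inside an unwind-protect.
(defmacro with-surface ((var filename) &body body)
  `(let ((,var (load-surface ,filename)))
     (unwind-protect
         (progn ,@body)
       (free-surface ,var))))

;; Nesting the macros gives the required LIFO cleanup order for free:
;; (with-surface (s "0.png")
;;   (with-texture (tex s)
;;     ...))
;; TEX is destroyed before S, even on an error or a non-local exit.
```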

The vibed code had none of this. It didn’t know about unwind-protect. It didn’t even know about nesting. It simply tried to run the initialization code first, the inner code next, and the cleanup code after that. But it combined the code through concatenation, not composition, so the necessary LIFO properties were absent. In addition, the initialization code was not paired with the cleanup code. It was pure coincidence that a cleanup happened after an initialization. The initialization code was spread about several functions in an ad hoc manner and the cleanup code was clumped in different sections. It was spaghetti code, and you needed to analyze it carefully to determine if the code initialized things in the right order or cleaned up correctly. One obvious bug was the code destroying the surfaces while the textures were still in use.

I poked at it a little bit, but there was no easy way to massage the code into a working state. It was just too disjoint. I eventually just deleted the vibed code. First of all, it didn’t work. Second of all, when I removed it, I regained the lost functionality of the close box and the X key for exit. It is a bad sign when removing code increases functionality.

Vibing is 0 for 2 at this point. If I have time in the next few days, I may revisit this once again and see how much hand-holding I have to give the AI to generate a working display.

Thursday, March 27, 2025

AI Silliness

In The Cyberiad, Stanislaw Lem tells the story of two robots, Trurl and Klapaucius, who are master constructors and are quibbling friends. One day, Trurl builds an artificial intelligence. He shows it off to Klapaucius, who is unimpressed. Klapaucius gives it increasingly absurd tasks, irritating Trurl. One of the challenges is to write a poem, lofty and tragic, and full of pathos, six lines long. About a haircut. And every word beginning with the letter S. Trurl objects, but the AI is up to the task:

Seduced, shaggy Samson snored.
She scissored short. Sorely shorn,
Soon shackled slave, Samson sighed,
Silently scheming,
Sightlessly seeking
Some savage, spectacular suicide.

It was science fiction 60 years ago, but our current LLMs can handle this easily. I think you can guess the author and prompt I put in for this:

The Unspeakable Syntax: A Tale of Lispian Horror

It was on a night fraught with cyclopean terrors that I first delved into the accursed tomes of programming lore known to the initiated as the Common Lisp Hyperspec. I had been warned by my colleagues — wizened wretches whose trembling fingers bore the mark of years lost to the parentheses — that this was no ordinary language. But my hubris, my insatiable curiosity, drove me to confront the dark corners of syntax mankind was not meant to know.

Upon the screen unfolded an incomprehensible vista of nested parentheses — lines of code whose curvatures undulated like the tentacles of some primordial beast. They defied reason and mocked the laws of Euclidean space. The parentheses seemed to breathe, to writhe, to multiply in grotesque fractal patterns that scraped against the outer edges of my sanity.

Each ‘car’ and ‘cdr’ was not merely a construct of the language, but an invocation — arcane words that seemed to echo from the abyssal void, summoning constructs whose purposes were shrouded in madness. The recursive forms spiraled infinitely inward, a pitiless vortex from which there could be no escape. My attempts to comprehend the structures were as futile as a mortal gazing upon the boundless chaos of Azathoth, the blind idiot god at the center of all things.

The culmination of my horror came as I confronted a macro. It was a labyrinthine incantation, a concatenation of symbols that seemed to weep with ancient and forbidden knowledge. It whispered to me secrets of the universe I dared not fathom, and though I averted my gaze, its presence imprinted itself upon the fragile canvas of my mind. I knew then that this language was not designed by mere mortals. No, it was the work of entities beyond comprehension—beings for whom the manipulation of symbols is as natural as the shifting of tectonic plates.

The syntax of Common Lisp was not a tool for mankind, but a trap — an eldritch puzzle whose solution would unravel the fabric of reality itself. And now, as I type these words, I can feel the parentheses closing in, their curved forms encircling me, threatening to enclose me in an eternal recursion from which there can be no escape. May the gods have mercy on my soul, for I have gazed into the syntax, and the syntax has gazed back into me.

Wednesday, March 26, 2025

Hard Won Esoteric Knowledge

I do my development in Ubuntu-22.04 Linux running on the Windows Subsystem for Linux. I recently got a laptop refresh and the latest software doesn’t run. The fix is obscure, so I thought I’d document it.

sbcl runs fine out of the box in WSL2, but I’m encountering a bug where TCP connections to one particular server are being left in the CLOSE_WAIT state indefinitely. After several minutes, I hit the limit on the number of open files.

The “right thing” would be to track down who isn’t closing the connection properly, but it’s only a few hundred connections. It appears that ulimit is set to 1024, which is pretty easy to hit with this bug. Bumping ulimit to something more reasonable is a lazy workaround. It isn’t a solution — I’m still leaking open files — but I’ll be able to leak thousands of them without having problems.

But increasing nofiles turned out to be a problem. I edited all the magic files in /etc until they all said I could have 131071 open files. When I re-started WSL, all the ways I could start a shell agreed that the ulimit was 131071, yet when I started sbcl and ran this:

(uiop:run-program "prlimit" :output *standard-output*)

RESOURCE   DESCRIPTION                             SOFT      HARD UNITS
AS         address space limit                unlimited unlimited bytes
CORE       max core file size                         0 unlimited bytes
CPU        CPU time                           unlimited unlimited seconds
DATA       max data size                      unlimited unlimited bytes
FSIZE      max file size                      unlimited unlimited bytes
LOCKS      max number of file locks held      unlimited unlimited locks
MEMLOCK    max locked-in-memory address space  67108864  67108864 bytes
MSGQUEUE   max bytes in POSIX mqueues            819200    819200 bytes
NICE       max nice prio allowed to raise             0         0 
NOFILE     max number of open files                1024   1048576 files
NPROC      max number of processes                62828     62828 processes
RSS        max resident set size              unlimited unlimited bytes
RTPRIO     max real-time priority                     0         0 
RTTIME     timeout for real-time tasks        unlimited unlimited microsecs
SIGPENDING max number of pending signals          62828     62828 signals
STACK      max stack size                       8388608 unlimited bytes
NIL
NIL
0 (0 bits, #x0, #o0, #b0)

The limit was at the old value of 1024.

WSL launched sbcl without a shell, so the ulimit setting was not being run.

The solution is easy, but it took me a long time to figure out. Not only do you need to edit all the magic files in /etc and add ulimit statements to your .bashrc, you should also add ulimit statements to your .profile, and then instruct wsl to launch your program under a login shell:

(require 'sly)
(setq sly-lisp-implementations
      '((sbcl ("C:\\Program Files\\WSL\\wsl.exe"
               "--distribution-id" "{df4f07a6-2142-405c-8a6a-63f1ca3a7e8d}"
               "--cd" "~"
               "--shell-type" "login"
               "/usr/local/bin/sbcl"))))

This bit of insanity allows me to run sbcl with 131071 open files in Linux as my inferior lisp program in a Windows Emacs running SLY. (Running Emacs under Windows gives me a way to use a modified Dvorak keyboard. I could run Emacs in the Linux subsystem, but the Wayland server is in a container and doesn’t let you modify the keyboard.)

Tuesday, March 25, 2025

Vibe Coding in Common Lisp, continued

I unwedged the AI with regard to the package system, so I asked the AI to write the game loop.

Now there are a number of ways to approach a game loop, but there is one strategy that really wins: a nested set of with-... macros. These win because they tie resource management to the dynamic execution scope. You write with-window, and upon entry to the form, a window is allocated and initialized and comes into scope during the body of the code. When the body returns, the window is destroyed and deallocated.

These with-... macros are built around allocation/deallocation pairs of primitives that are called from an unwind-protect. The abstraction is the inverse of a function call: instead of using a function to hide a body of code, you use a function to wrap a body of code. The body of code doesn’t hide in the callee, but is passed as an argument from the caller. One important feature of programming in this way is that resources are never returned from a function, but are only passed downwards from the allocation point. This keeps objects from escaping their dynamic scope.

The entry to the game loop consists of a few nested with-... macros that initialize the library, allocate a window, allocate a drawing context, and enter a event loop. When the event loop exits, the resources are torn down in reverse order of allocation leaving the system in a clean state.
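The entry point described above has roughly this shape. The with-... macros here are hypothetical stand-ins, not a real SDL2 binding:

```lisp
(defun run-game ()
  (with-sdl2 ()                        ; init library / quit on exit
    (with-window (window :title "Minesweeper")
      (with-renderer (renderer window)
        (event-loop window renderer)))))

;; When EVENT-LOOP returns (or throws), the renderer, the window, and the
;; library are torn down in exactly the reverse order of allocation.
```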

But the AI did not use with-... macros. The code it generated had subroutines that would create a window or allocate a drawing context, but it would assign the created objects into a global variable. This means that the object is set to escape its dynamic scope when it is created. There is nothing to prevent (or even discourage) access to the object after it has been deallocated. There were no unwind-protects anywhere, so objects, once allocated, were eternal — you could never close a window.

In the large, the code was built to fail. In the small, it immediately failed. Calling conventions were not even followed. Keyword argument functions were called with positional arguments or with an odd number of arguments, irrelevant extra arguments were passed in, and the AI would pass in flags that didn’t exist. We’ll grant that the AI does not ultimately understand what it is doing, but it should at least make the argument lists superficially line up. That doesn’t require AI; a simple pattern match can detect this.

The event loop did not load, let alone compile. It referred to symbols that did not exist. Well, we can expect this, but it needs to be corrected. When I pointed out that the symbol didn’t exist, the AI began to thrash. It would try the same symbol, but with asterisks around it. It would try a variant of the same symbol. Then it would go back and try the original symbol again, try the asterisks again, try the same variant name again, etc. There is nothing to be done here but manual intervention.

There are some macros that set up an event loop, poll for an event, and dispatch to some code for that event while extracting the event particulars. You can roll your own event loop, or you can just use one of the pre-built macros. When the AI began to thrash on the event loop, I intervened, deleted the code it was thrashing on, and put in the event loop macro. The AI immediately put back in the code I had removed and started thrashing on it again.

Again, it is clear that the AI has no knowledge at all of what it is doing. It doesn’t understand syntax or the simplest of semantics. It cannot even tell if a symbol is bound to a value. Even the most junior developer won’t just make up functions that are not in the library. The AI doesn’t consult the documentation to validate if the generated code even remotely passes a sniff test.

You cannot “vibe code” Common Lisp. The AI begins to thrash and you simply must step in to get it unwedged. It doesn’t converge to any solution whatsoever. I suspect that this is because there is simply not enough training data. Common Lisp would appear to need some semantic understanding in order to write plausibly working code. Just mimicking some syntax you found on the web (which is ultimately what the AI is doing) will not get you very far at all.

Monday, March 24, 2025

Vibe coding in Common Lisp

Can you “vibe code” in Common Lisp?

Short answer, no.

I set out to give it a try. The idea behind “vibe coding” is to use an AI to generate the code and to blindly accept everything it generates. You offer some feedback about error messages and whether the code is doing something close to what you want, but you specifically don’t try to analyze and understand the code being generated. You certainly don’t try to debug it and fix it.

A.I. is trained on a lot of open source code. If your language is popular, there is a lot of code to train on. Chances are, if you have a problem, then not only has someone attempted to code it up in Python, but several people have given it a try and someone has ported it to JavaScript. Someone has solved your problem on StackOverflow.

Common Lisp is not a popular language. There is not a lot of code to train on, and most of it is someone’s homework. So for any particular problem, the A.I. doesn’t have a lot to go on. It becomes clear pretty quickly that the A.I. has no understanding of what it is doing; it is just trying to synthesize something that is similar to what it has seen before, and if it hasn’t seen much, you don’t get much.

So I tried to vibe code in Common Lisp. I decided to try to write a Minesweeper game. That seemed like it had probably been done enough times before that the A.I. might be able to generate some vibe code.

I told it that we were going to write a video game and that it should generate an asd file for the game, and some skeleton code that would be a good start. It generated an asd file and four small files: main.lisp, game.lisp, input.lisp, and graphics.lisp. There was little actual code in the files, just some function stubs, but you could see where the cross-file dependencies were going to be.

The asd file wouldn’t load. The generated files had some minor dependencies upon each other and imported the required symbols from the other files. This imposed an implicit load order because a file couldn’t be loaded until the file it depended on had created the package for the symbols that it referenced. This is an old problem in Common Lisp, and the solution is to set up all the packages before loading anything. But I was vibing, so I told the AI that the forward references of the symbols were causing problems.

The AI added require statements into the files to try to get them to load in a working order. It didn’t help. require statements have a whole bunch of issues and aren’t used very much these days. The modern solution is to make the dependencies explicit in the system definition file. I gave the AI a direct instruction to make sure that the package.lisp file loaded before any other. Rather than edit the asd file, it decided to add even more require statements.

I declared failure at this point and manually edited the package.lisp file to create the packages and import the initial symbols, added a dependency on the package.lisp file to every other file, and removed all the spurious require statements. It was clear the AI was not going to hit upon this solution.

The AI obviously has no model of what the package system is. It doesn’t reason that you need to load it first. It simply knows that it can insert require statements to express a dependency. So it thrashes around adding require statements in the hope that it will converge to a solution. It converged to a circular dependency instead.

Sunday, March 23, 2025

The Obarray

Early Lisps interned all their symbols in a single symbol table called the obarray. Every program you loaded into your Lisp image would share the obarray. Memory was limited, so you usually only ran one program (like Macsyma) at a time.

But as memory got larger and cheaper, people started to want to run multiple Lisp programs, like Macsyma and Emacs, at the same time. The problem was they would collide over the use of the symbols. (In particular, over the property lists.) Someone — I’m not sure who — came up with a hack that would swap out the obarray depending on which program you were loading.

The origin of the package system was "the obarray hack". Packages are first-class named obarrays, with some support for controlling the sharing of symbols among the obarrays, a limited form of inheritance and some code that maintains consistency.

In any but the smallest Lisp project, you need to decide on a symbol and package strategy. Some people keep with the original spirit of the package system and create just a handful of coarse-grained packages that each encompass a logical program. Other people use packages as modules, which gives you a set of many fine-grained packages, one to each module in your system.

I favor the former approach. I either use one package for my entire program, or I break it into just a couple of main packages. I don’t try for a finer-grained approach; the package system wasn’t really designed for one.

Screwing up your packages can easily make your system unusable. If you dynamically create and link packages as you load your code, you have to be careful about the order in which you load your files. Load a file out of order and you’ll end up with dozens of symbols interned in the wrong package.

When I’m working on a project, I always have a file called package.lisp which defines all the packages in the project in one place. The package.lisp is always the first file loaded. Once I’ve done this, then the order in which the other files are loaded becomes far less important. It saves a lot of headaches.
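A package.lisp along these lines is what I mean: all the defpackage forms in one place, loaded before any other file. The package names here are made up for illustration:

```lisp
;; Define every package up front so later files can reference symbols
;; in any order without interning them into the wrong package.
(defpackage #:my-project/utils
  (:use #:common-lisp)
  (:export #:fold-left))

(defpackage #:my-project
  (:use #:common-lisp #:my-project/utils)
  (:export #:main))
```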

Friday, March 21, 2025

CLOS vs. message passing

When I was in high school and first getting interested in computers, I heard about the concept of message passing. This struck me as a good idea, although I couldn't articulate why at the time. Now I know a bit more about computers, and I often find that message passing is a good way to structure certain kinds of programs. In particular, it really works well with client/server architectures.

The idea behind message passing is that you have active agents that communicate by sending passive messages amongst themselves. A message is a fairly simple piece of data. It is basically a symbolic token and maybe some parameters. The recipient of the message interprets the meaning of the token and acts accordingly. Conceptually, the interface is narrow: an agent exposes one endpoint and all messages come through that endpoint. This facilitates the creation of strong abstraction barriers.

The standard way of implementing this is to have a reverse proxy within the agent that dispatches messages to the appropriate handler within the object. The object has a table of message handlers; it looks up the appropriate handler in the table and calls it.
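The handler-table style can be sketched in a few lines. This is illustrative only, not the code I wrote:

```lisp
;; An agent holds its own dispatch table of message handlers.
(defclass agent ()
  ((handlers :initform (make-hash-table :test 'eq) :reader handlers)))

(defun define-handler (agent message handler)
  (setf (gethash message (handlers agent)) handler))

;; The single endpoint: look up the handler for the message token
;; and invoke it with the agent and any parameters.
(defun send (agent message &rest arguments)
  (let ((handler (gethash message (handlers agent))))
    (if handler
        (apply handler agent arguments)
        (error "Unhandled message ~S" message))))
```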

I wanted to use this paradigm for talking to several heterogeneous servers — a GitHub server, an LDAP server, a CircleCI server, etc. But I got bogged down in the details of how to implement this. It was proving difficult to map the reverse proxy implementation on to CLOS. But then I remembered something: synchronous message passing is isomorphic to simple function calls. I didn't want to implement a message dispatcher in CLOS, I could just use CLOS's built-in method dispatch.

Messages are just the names of generic functions, and parameterized messages are just generic functions that take arguments. The method dispatch table doesn’t reside in the object but in the generic function. In fact, very little is left of the object itself. It can often be an instance with no slots that has only an identity.
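Collapsed onto CLOS, the same idea looks like this. The class and function names are made-up examples, not my actual code:

```lisp
;; The agent is an identity-only instance: no slots, no dispatch table.
(defclass github-server () ())

;; The "message" is just a generic function; CLOS does the dispatch.
(defgeneric lookup-user (server name))

(defmethod lookup-user ((server github-server) name)
  ;; Real code would hit the GitHub API here.
  (list :user name :source :github))
```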

Once I got my head straightened out, the code came together quickly.

Wednesday, March 19, 2025

Object Last or First

In Java or C#, the first argument to a method call is the object that contains the method. This is built in to the language and it is to some extent an artifact of the mechanism of method dispatch. So when you add a type (class) to the language, you will have a set of methods where the first argument is the object of that type.

In Common Lisp, the first argument to a function is not special. When you add a type, the functions that operate on that type can place the object anywhere in the argument list. The convention in Common Lisp is (mostly) for the object to be the last argument in the argument list. Look at the list and sequence functions for examples of this convention.

The Common Lisp convention reads more like English. (subst new old sequence) reads directly as "substitute new for old in sequence". In Java, the same method would be called like this: sequence.subst(old, new), which would have to read as "In sequence, substitute for old, new", which is a bit more awkward.

But I think I prefer the Java convention. In Lisp, this would be (substitute sequence old new). There is no implementation need for the object to be the first argument, but I think there is an advantage to the regularity of the convention. It places the object in the same place in the argument list for all the functions that operate on the object.

The argument that has persuaded me the most is that if the return type of the function is the same type as the object, as it often is, then you can elegantly chain the method calls together with a fold-left. So consider a table object. It might have an insert method that takes a key and value and returns a new table. If the insert method is like this: (insert table key value), then you can insert a bunch of keys and values with a fold-left like this: (fold-left #'insert table '(key1 key2 ...) '(value1 value2 ...)).

Note how the order of arguments is analogous between the fold-left and the insert method. When the object is the last argument, you have to insert an intermediate lambda expression to shuffle the arguments around, and the table argument moves from being after the key and value in the insert method to being before the key list and value list in the fold-left. It is a small thing, but I find it very appealing that in moving from the single-argument case to the argument-list case we don’t have random changes.
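fold-left is not standard Common Lisp, so as a sketch, here is a two-list version sufficient for the example, with alists standing in for the table object:

```lisp
;; Fold a two-argument-per-step function over two lists in parallel,
;; threading the accumulated object through as the first argument.
(defun fold-left (function initial list1 list2)
  (if (null list1)
      initial
      (fold-left function
                 (funcall function initial (car list1) (car list2))
                 (cdr list1)
                 (cdr list2))))

;; Object-first insert: table comes first, matching the fold.
(defun insert (table key value)
  (acons key value table))

;; (fold-left #'insert '() '(a b c) '(1 2 3))
;; => ((C . 3) (B . 2) (A . 1))
```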

Of course I don’t think we should change Common Lisp to conform to a different convention, but I tend to write my own functions with the object as the first argument rather than the last.

Tuesday, March 18, 2025

Just Keep Consing

Lisp was the first garbage collected language. But early garbage collectors were painful to use. They had significant overhead and would pause your program for several seconds or even minutes at the worst possible times. People tried to avoid garbage collection by re-using objects, using allocation pools, etc. Many people would run their Lisp programs with the garbage collection turned off. They would reboot their Lisp machines when they ran out of memory. Lisp Machine Inc. had a product called "Picon" which was carefully crafted to avoid any runtime allocation.

Generational garbage collectors began to be adopted in the early 80s. Generational collectors have much less overhead than the earlier "Stop the world" collectors. Memory has gotten much cheaper, so larger heaps are practical. Large heaps have two benefits: garbage collection becomes less frequent, and objects have time to "age" and perhaps become garbage before the next generational collection. Some garbage collection algorithms have no cost overhead for very short-lived objects.

It is no longer necessary to re-use objects or try to avoid allocating memory. Garbage collection pauses are usually short enough to be unnoticeable. You can typically set the heap size nice and large and forget about it. It is certainly possible to encounter a program that has a pathological memory usage pattern, but it is much less common than it used to be.

Because of the way linked lists work, the result of walking down a list usually comes out in reverse order. In the old days, you would make the effort of trying to accumulate the result in the forward direction by keeping track of the last cell in the answer and mutating it to append the next cell. This is a pain. These days, you can just accumulate the result in reverse order and then call reverse when you are done. In practice, this is no slower than accumulating the result in the forward direction, but it is certainly a lot simpler. It generates more garbage, but it is short-lived garbage with little or no overhead.
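The modern idiom looks like this. The `evens` function is a made-up example:

```lisp
;; Push results onto the accumulator (building the list backwards),
;; then reverse once at the end.  NREVERSE is safe here because the
;; accumulator is freshly consed and owned by this function.
(defun evens (list)
  (let ((result '()))
    (dolist (item list (nreverse result))
      (when (evenp item)
        (push item result)))))

;; (evens '(1 2 3 4 5 6)) => (2 4 6)
```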