Wednesday, July 30, 2025

Novice to LLMs — LLM calls Lisp

I'm a novice to the LLM API, and I'm assuming that at least some of my readers are too. I'm not the very last person to the party, am I?

When integrating the LLM with Lisp, we want to allow the LLM to direct queries back to the Lisp that is invoking it. This is done through the function-call protocol. The client supplies to the LLM a list of functions that the LLM may invoke. When the LLM wants to invoke one of these functions, instead of returning a block of generated text, it returns a JSON object indicating a function call, containing the name of the function and its arguments. The client invokes the function, but it doesn't "return" the answer in the ordinary sense: it makes a fresh call into the LLM, passing the entire conversation so far with the result of the function call appended. It is bizarro continuation-passing style, where the client acts as a trampoline and keeps track of the continuation.
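Here is a sketch of that trampoline in Common Lisp. None of these helpers are real; call-model, function-call-p, invoke-tool, make-tool-result, and response-text are hypothetical stand-ins for whatever your client code actually does to post the conversation and parse the reply.

(defun drive-conversation (conversation tools)
  "Trampoline: re-invoke the model until it answers with plain text."
  (let ((response (call-model conversation tools)))
    (if (function-call-p response)
        ;; The model wants a function called.  Call it, append both the
        ;; request and its result to the conversation, and hand the
        ;; whole history back to the model.
        (let ((result (invoke-tool response)))
          (drive-conversation
           (append conversation
                   (list response (make-tool-result response result)))
           tools))
        ;; Ordinary generated text: the conversation is finished.
        (response-text response))))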

So, for example, by exposing lisp-implementation-type and lisp-implementation-version, we can then query the LLM:

> (invoke-gemini "gemini-2.5-flash" "What is the type and version of the lisp implementation?")
"The Lisp implementation is SBCL version 2.5.4."

1 comment:

Josh Ballanco said...

One thing to note, if you peek under the covers, is that to call it a "function call *protocol*" is generous at best. Essentially, the tools you list get added into the chat context as JSON text describing the tools available, and you hope that the model completes the chat with JSON text when it wants to call a tool. Models that are explicitly trained (or fine tuned) with this "protocol" in their training set are more likely to respond appropriately, but even those that are not can sometimes suffice (and even those that are sometimes do silly things like surround the JSON response in markdown code blocks).
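For concreteness, here is roughly what that JSON text looks like in a Gemini request (field names from the public Gemini REST API; other providers use a different but similarly ad hoc shape):

{"tools": [{"functionDeclarations":
             [{"name": "lisp_implementation_version",
               "description": "Return the version of the host Lisp.",
               "parameters": {"type": "OBJECT", "properties": {}}}]}]}

and, if all goes well, the model completes with a part like

{"functionCall": {"name": "lisp_implementation_version", "args": {}}}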