Tuesday, August 5, 2025

Recursive Prompting

What if we give the LLM the ability to prompt itself? I added a “tool” to the LLM prompt that lets the LLM do exactly that by calling the promptLLM function with a string.
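Here's roughly what that looks like in Python. The `generate` client and its signature are stand-ins for whatever provider API you're actually using; only `promptLLM` comes from the text.

```python
# Hypothetical LLM client: sends a prompt, runs any tool calls the model
# makes against `tools`, and returns the final text. Stand-in only.
def generate(prompt, tools=None, system=None, context=None) -> str:
    raise NotImplementedError("wire in your actual LLM provider here")

def promptLLM(prompt: str) -> str:
    """The tool: prompt the LLM with a string."""
    return generate(prompt, tools=TOOLS)  # note: the tool is bound again here

TOOLS = {"promptLLM": promptLLM}

# answer = generate("Explain recursion.", tools=TOOLS)
```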

I guess it isn't surprising that this creates an infinite loop. The tool appears to have a higher affinity than the token prediction engine, so the LLM will always try to call the tool rather than do the work itself. The result is that the LLM calls the tool, which calls the LLM, which calls the tool, which calls the LLM, etc.

We can easily fix this by not binding the tool in the recursive call to the LLM. The recursive call will not have the tool, so it will engage in the token prediction process. Its results come back to the tool, which passes them back to the calling LLM, which returns the results to us.
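Concretely, against the same assumed `generate` client, the fix is one changed argument: the recursive call binds no tools, so the inner LLM has nothing to call and must predict tokens.

```python
def generate(prompt, tools=None, system=None, context=None) -> str:
    ...  # hypothetical LLM client stub, as in the first sketch

def promptLLM(prompt: str) -> str:
    """Tool: prompt the LLM *without* the tool table, so the recursive
    call can't call us back and must do the work itself."""
    return generate(prompt, tools=None)

TOOLS = {"promptLLM": promptLLM}
```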

Could there be a point to doing this, or is this just some recursive wankery that Lisp hackers like to engage in? Actually, this has some interesting applications. When the tool makes the recursive call, it can pass a different set of generation parameters to the LLM. This could be a different tool set or a different set of system instructions. We could erase the context on the recursive call so that the LLM can generate "cold" responses on purpose. We could also use this to implement a sort of "call-with-current-continuation" on the LLM where we save the current context and then restore it later.
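A rough sketch of those ideas, with the same caveats: `promptCold` and `withSavedContext` are made-up names, and the `context` list standing in for conversation history is an assumption about the client.

```python
def generate(prompt, tools=None, system=None, context=None) -> str:
    ...  # hypothetical LLM client stub, as in the first sketch

def promptCold(prompt: str) -> str:
    """Tool: re-prompt with an empty context and different system
    instructions, producing a deliberately 'cold' response."""
    return generate(prompt, tools=None,
                    system="Answer tersely, from first principles.",
                    context=[])

def withSavedContext(prompt: str, context: list) -> str:
    """Call/cc flavor: capture the running context, let the recursive
    call mutate it freely, then restore the captured context."""
    saved = list(context)                       # capture the continuation
    answer = generate(prompt, context=context)  # recursive call may grow it
    context[:] = saved                          # restore: as if it never ran
    return answer
```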

The recursive call to the LLM is not tail recursive, though. Yeah, you knew that was coming. If you tried to use self-prompting to set up an LLM state machine, you would eventually run out of stack. A possible solution is to set up the LLM client as a trampoline. You'd have some mechanism for the LLM to signal to the LLM client that the returned string is to be used to re-prompt the LLM. Again, you'd have to be careful to avoid infinite self-calls. To avoid accumulating state on a tail call, the LLM client would have to remove the recent context elements so that the "tail prompt" is not a continuation of the previous prompt.
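Here's one way the trampoline might look. `TAIL_MARKER` is an invented signaling convention, and the whole loop is a sketch of the idea rather than any particular client's API.

```python
def generate(prompt, tools=None, system=None, context=None) -> str:
    ...  # hypothetical LLM client stub, as in the first sketch

# Invented convention: the model prefixes its reply with this marker
# to ask the client to re-prompt it instead of returning the text.
TAIL_MARKER = "#TAILPROMPT:"

def trampoline(prompt: str, max_bounces: int = 10) -> str:
    """Drive the LLM as a state machine in constant stack space."""
    for _ in range(max_bounces):              # guard against infinite self-calls
        reply = generate(prompt, context=[])  # fresh context each bounce, so the
                                              # tail prompt is not a continuation
        if not reply.startswith(TAIL_MARKER):
            return reply                      # ordinary return: we're done
        prompt = reply[len(TAIL_MARKER):]     # bounce: re-prompt with the tail
    raise RuntimeError("too many bounces; runaway self-prompting?")
```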

Recursive prompting could also be used to search the prompt space for prompts that produce particular desired results.
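One way that search might go: have the LLM propose candidate prompts, test each with a recursive (tool-free, cold) call, and keep the ones whose replies pass a caller-supplied check. Everything here, from `generate` to the predicate to the loop bound, is an assumption.

```python
def generate(prompt, tools=None, system=None, context=None) -> str:
    ...  # hypothetical LLM client stub, as in the first sketch

def search_prompt_space(goal: str, is_good, tries: int = 10):
    """Ask the LLM to propose prompts; recursively test each one cold."""
    for _ in range(tries):
        candidate = generate(
            f"Propose one prompt that would make an LLM {goal}. "
            "Reply with the prompt text only.", context=[])
        reply = generate(candidate, tools=None, context=[])  # recursive test
        if is_good(reply):
            return candidate, reply
    return None
```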

If you had two LLMs, you could give each the tools needed to call the other. Each could then consult the other for a “second opinion” on some matter. You could give one an “optimistic” set of instructions and the other a “pessimistic” set.
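A sketch of that mutual arrangement, with invented system prompts. Note that each consult tool binds no tools in the cross-call, for the same reason as before: otherwise the two models could ping-pong forever.

```python
def generate(prompt, tools=None, system=None, context=None) -> str:
    ...  # hypothetical LLM client stub, as in the first sketch

OPTIMIST = "You look for the upside. Ask for a second opinion when unsure."
PESSIMIST = "You look for the downside. Ask for a second opinion when unsure."

def askPessimist(prompt: str) -> str:
    """Tool bound to the optimist: consult the pessimist (tool-free)."""
    return generate(prompt, tools=None, system=PESSIMIST)

def askOptimist(prompt: str) -> str:
    """Tool bound to the pessimist: consult the optimist (tool-free)."""
    return generate(prompt, tools=None, system=OPTIMIST)

OPTIMIST_TOOLS = {"secondOpinion": askPessimist}
PESSIMIST_TOOLS = {"secondOpinion": askOptimist}

# verdict = generate("Should we ship Friday?", tools=OPTIMIST_TOOLS, system=OPTIMIST)
```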

The possibilities for recursive prompting are endless.

1 comment:

  1. I wonder to what extent you might be able to use this approach to allow the LLM to create its own agents. For example, if you gave the "base" LLM enough tools to write a file and call it with, say, Python or Node, and then "taught" it how to create tools and pass them through to the recursive LLM-call tool, then instructed it to generate a set of tool sets and prompts to seed enough recursive calls...and if those recursive calls could be asynchronous...I wonder if there could be some "master" prompt that would "inspire" the base LLM to assemble a useful collection of agents to accomplish any arbitrary task. LLM, prompt thyself!