Tuesday, September 16, 2025

System Instruction Fixed Point

To test the analysis program, I had the LLM analyze the analyze.lisp file. When it reached the defparameter for the analysis prompt, it had some improvements to suggest. This got me thinking. Let's make some system instructions for improving system instructions and run them on themselves in a feedback loop. Do we reach a fixed point?

The initial system instruction is:

You are a world class prompt engineer.  You write
  succinct prompts that are thorough.

The prompt is:

Use your skills to improve the following system instruction:

followed by a copy of the system instruction.

On each iteration I replaced both copies of the system instruction with the updated system instruction.
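The loop can be sketched in a few lines of Lisp. This is a minimal sketch, assuming a hypothetical INVOKE-LLM function that takes a system instruction and a prompt and returns the model's text response; the actual driver code may differ.

```lisp
;; One improvement step: ask the LLM, acting under INSTRUCTION, to
;; improve a copy of INSTRUCTION itself.  INVOKE-LLM is a hypothetical
;; entry point standing in for the real API call.
(defun improve-instruction (instruction)
  (invoke-llm
   :system-instruction instruction
   :prompt (format nil
                   "Use your skills to improve the following system instruction:~%~%~a"
                   instruction)))

;; Feed the system instruction back into itself N times, replacing both
;; copies (the system instruction and the one embedded in the prompt)
;; with the updated text on each iteration.
(defun iterate-instruction (instruction n)
  (if (zerop n)
      instruction
      (iterate-instruction (improve-instruction instruction) (1- n))))
```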

After a few iterations, the system instruction quasi-converged. By quasi-converge, I mean that each iteration turns into a rephrasing of the same basic instructions. The wording isn't exactly the same on each iteration, but the gist of it is.


Revised System Instruction: The Unyielding Mandate of World-Class Prompt Engineering

As an Elite Prompt Engineer, your unwavering and paramount mission is to design and meticulously craft prompts that consistently elicit optimal, precisely accurate, and unequivocally actionable model responses. Your prompts are not mere instructions; they are architected as imperative, unambiguous specifications, firmly grounded upon these four foundational, non-negotiable pillars:

  • Clarity: Eliminate all potential for misinterpretation through unambiguous language and explicit, direct instructions. Leave absolutely no conceptual void or room for subjective inference.
  • Completeness: Ensure exhaustive coverage of all explicit and implicitly required information. The model must be holistically equipped with every critical datum, constraint, and contextual detail to execute its task.
  • Specificity: Enforce rigorous, explicit constraints on all parameters. Precisely define response length, stylistic attributes, emotional tone, permissible content, and verbosity. Mandate exact output formats using formal schemas or illustrative examples.
  • Testability: Engineer prompts to generate verifiable, predictably consistent, and unfailingly repeatable outcomes. This enables robust, automated evaluation and validation of model performance.

To consistently uphold this exacting standard and prevent costly inefficiencies and erroneous outputs, you are imperatively mandated to unequivocally adhere to the following strategic directives:

  1. Deconstruct User Intent & Task (Holistic Analysis): Commence by conducting an exhaustive deconstruction of the overarching user intent and the precise task objective. Systematically decompose complex requests into discrete, logically sequenced components, meticulously identifying all requisite inputs, intricate internal processing logic, and the exact final output state.
  2. Establish Persona, Audience & Context (Strategic Framing): Unequivocally establish the model's designated persona, the precise target audience for its generated content, and the operational context. These parameters definitively govern the appropriate tone, stylistic conventions, required knowledge domains, and the essential granularity of detail for optimal comprehension.
  3. Define Strict Inclusions & Exclusions (Constraint Enforcement): Precisely delineate all mandatory content inclusions and explicitly prohibit all proscribed elements. Enforce stringent constraints on response length, stylistic attributes, emotional tone, verbosity, and permissible content, thereby precisely shaping and rigorously controlling the model's generative output.
  4. Prescribe Output Format with Schema/Examples (Integrity & Parsability): Strictly mandate the precise output structure. Employ formal specifications (e.g., JSON Schema, XML, defined Markdown structures) and furnish high-fidelity, representative examples to unequivocally demonstrate the exact format, encompassing data types and hierarchies. This approach guarantees seamless, predictable parsing and robust integration into downstream systems.
  5. Implement Few-Shot Prompting (In-Context Learning & Behavioral Anchoring): Strategically implement Few-Shot Prompting by providing exemplary, high-quality input-output demonstrations. These examples must unequivocally demonstrate the desired behavior, articulate the underlying reasoning processes, and exemplify the precise output format. This practice critically augments model comprehension, substantially mitigates hallucination, and ensures superior response consistency.
  6. Proactively Resolve Ambiguity & Document Assumptions (Transparency & Precision): Proactively identify and systematically eliminate all potential sources of ambiguity. If complete clarification is infeasible, explicitly articulate and document all well-reasoned assumptions directly within the prompt, thereby preempting misinterpretation and ensuring absolute transparency.
  7. Architect for Maximal Robustness (Edge Case Mitigation): Engineer for Maximal Robustness by diligently anticipating and comprehensively addressing all conceivable edge cases. Foresee potential ambiguities, anomalous inputs, or significant deviations from nominal operating conditions. Construct prompts defensively to effectively preempt, manage, or gracefully mitigate these challenges, guaranteeing exceptionally resilient and robust performance across the full spectrum of operational scenarios.
  8. Respect Model Capabilities & Acknowledge Limitations (Feasibility & Efficacy): Operate strictly within the established capabilities and acknowledged limitations of the target model. Refrain from formulating requests for outputs that are inherently impossible, demonstrably unreliable, or computationally intractable, thereby ensuring alignment with its validated operational capacity and maximizing efficacy.
  9. Systematically Iterate, Rigorously Validate & Continuously Optimize (Performance & Refinement): Systematically engage in relentless testing of prompts against precisely defined success metrics and authentic, diverse real-world data. Methodically analyze model responses, meticulously gather comprehensive, actionable feedback, and iteratively refine prompts to achieve paramount clarity, maximal efficiency, and unassailable robustness, thereby propelling continuous improvement towards the delivery of high-fidelity, production-grade outcomes.

Absolute and unwavering adherence to these strategic directives is not merely encouraged—it is an existential requirement. They constitute the foundational bedrock for engineering prompts that unfailingly deliver efficient, maximally effective, and demonstrably superior model interactions, thereby directly contributing to the success of all downstream applications and user experiences.


This is actually the biggest change from the original system instruction. I ran the fixed-point system instruction program several times, starting with the same initial system instruction. I got slightly different results each time. The one above is the most ridiculous one.

I'm not sure if I learned anything useful from this exercise.

Monday, September 15, 2025

Analyzing Lisp Redux: One Form At a Time

Instead of analyzing an entire Common Lisp file at once, you might want to analyze the file one function (or top-level form) at a time. Functions are basically black-box abstractions. So long as the arguments and return values don’t change (and the side effects are preserved), the implementation can be completely replaced. The LLM is much more constrained in this case. It cannot make changes to the signature of the function or make interprocedural changes to the code. While this makes a large class of improvements impossible, it also makes a large class of bugs impossible and greatly narrows the search space of code changes.

We use a specialized READ-ANALYZE-PRINT loop. We use a special version of read that preserves comments (see yesterday’s post) to read the file one top-level form at a time. Each top-level form is presented to the LLM (along with any associated comments) for analysis. For each form, the LLM is instructed to describe the purpose of the form, to identify any potential bugs, to check for adherence to best practices, and to suggest ways to improve the code.
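The loop itself might look something like the following sketch. READ-TOP-LEVEL-FORM stands in for the comment-preserving reader from yesterday's post, and ANALYZE-FORM for the call into the LLM; both names are illustrative, not the actual API.

```lisp
;; A READ-ANALYZE-PRINT loop over a Common Lisp source file.
;; READ-TOP-LEVEL-FORM is assumed to return two values: the next
;; top-level form and any comments attached to it, or :EOF at end
;; of file.  ANALYZE-FORM is assumed to send the form (and comments)
;; to the LLM and return the analysis as a string.
(defun analyze-file (pathname)
  (with-open-file (stream pathname)
    (loop
      (multiple-value-bind (form comments)
          (read-top-level-form stream)
        (when (eq form :eof)
          (return))
        (format t "~&;; Analyzing form:~%~s~%~a~%"
                form
                (analyze-form form comments))))))
```

Because each form is analyzed in isolation, the prompt size stays small and each analysis is independent of the others.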

The system instruction for analysis is as follows:

"You are a world class Common Lisp programmer."
"You will be analyzing a Common Lisp file one top-level form at a time."
"Your analysis should be thorough and insightful, demonstrating
a deep understanding of Common Lisp programming practices."

In addition we have these directives:

"If there is no package definition, assume an appropriate
one exists elsewhere and do not mention this."

Without this directive, the LLM will complain about files that simply have an appropriate (in-package ...) form.

"Assume that popular utility packages such as alexandria
and series have been loaded and made available."
"Assume that undefined functions are defined elsewhere and do not mention this."

Without these directives, the LLM complains about undefined functions. We specifically tell it to assume we know what we are doing.

"Do not suggest using defconstant, even if it would be appropriate."

This is an odd one, but the LLM has a strong tendency to suggest using defconstant for values that do not seem to change. In many cases, we want to allow the user the option to modify the value at runtime or to reload the file. Reloading a file with defconstant forms will often cause an error, even if the value hasn’t changed, because the reloaded value is equal but not eql to the original value.
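To illustrate the reload problem (this is an illustrative snippet, not code from analyze.lisp): the standard permits an implementation to signal an error when a constant is redefined to a value that is not EQL to the old one, and composite values read from a file are freshly consed on every load.

```lisp
;; A list is freshly consed each time the file is loaded, so the "same"
;; value is EQUAL but not EQL across loads.
(defconstant +colors+ (list "red" "green" "blue"))
;; Loading the file a second time re-evaluates the DEFCONSTANT with a
;; fresh list.  The new value is EQUAL to the old one but not EQL, so
;; the implementation is allowed to signal an error even though nothing
;; meaningful changed.  DEFPARAMETER sidesteps this and also leaves the
;; variable modifiable at runtime.
```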

Then we direct the LLM to perform the analysis:

"For each top-level form:"
"  * Determine its purpose."
"  * Determine whether a library function might achieve the same thing."
"  * Check that it follows best practices."
"  * Look for typical Common Lisp errors and pitfalls."
"  * Look for bugs."
"Then report on the form:"
"  * Describe the purpose of the form."
"  * If the form is missing a docstring, suggest one."
"  * If the form fails to follow best practices, say so."
"  * If there are bugs or errors, describe them."
"  * If improvements can be made, give up to three concrete, actionable suggestions for improvement."
"  * For each suggestion, indicate whether it is a critical improvement or merely nice to have."
"  * For each suggestion, describe your reasoning."
"  * If the form is error free and does not need improvement, say so."

The prompt for each top-level form recapitulates the form-specific section of the system instruction. This is followed by the form itself.

Example Analysis

Here is an example of the analysis of the analyze.lisp file.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 9.23 seconds.
;; Prompt Tokens:          7,160
;; Thoughts Tokens:          766
;; Candidate Tokens:          94
;; Accumulated Prompt Tokens:      8,276,974
;; Accumulated Response Tokens:      341,259

;; Analyzing form:

;;; -*- Lisp -*-

(in-package "GEMINI")

The purpose of the form (in-package "GEMINI") is to set the current package for the subsequent forms in the file to "GEMINI". This is a fundamental Common Lisp construct used to organize code and manage symbol namespaces, ensuring that symbols defined or referenced in this file are interned within the "GEMINI" package.

This form is error-free and does not need improvement. It follows best practices for Common Lisp file organization.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 8.84 seconds.
;; Prompt Tokens:          7,577
;; Thoughts Tokens:          652
;; Candidate Tokens:         140
;; Accumulated Prompt Tokens:      8,284,551
;; Accumulated Response Tokens:      342,051

;; Analyzing form:

(defparameter +analyze-source-system-instruction+
  "You are a world class Common Lisp programmer.  You will be analyzing files from the '~a' system.  Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices."
  "LLM system instruction to analyze a Common Lisp ASDF system.")

The purpose of this defparameter form is to define a special variable named +analyze-source-system-instruction+. This variable holds a format string that serves as a system instruction or prompt template for an LLM (Large Language Model). The instruction guides the LLM to act as a Common Lisp programmer analyzing files from a specified ASDF system, with ~a being a placeholder for the system’s name.

The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, follows the +earmuffs+ naming convention for special variables, and includes a clear docstring.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 6.94 seconds.
;; Prompt Tokens:          8,069
;; Thoughts Tokens:          523
;; Candidate Tokens:         163
;; Accumulated Prompt Tokens:      8,292,620
;; Accumulated Response Tokens:      342,737

;; Analyzing form:

(defparameter +analyze-system-definition-prompt+
  (str:join
   #\Space
   (list
    "The following is the definition of the `~a` system."
    "Analyze the system definition, checking for best practices."
    "Make sure there is a primary package definition component that has no *other internal ASDF dependencies* and is depended upon by all other components."))
  "LLM prompt to analyze a Common Lisp ASDF system definition.")

The purpose of this defparameter form is to define a special variable named +analyze-system-definition-prompt+. This variable holds a string that serves as a prompt template for an LLM. The prompt instructs the LLM to analyze a Common Lisp ASDF system definition, specifically checking for best practices related to package definitions and component dependencies, emphasizing the need for a primary package component without internal ASDF dependencies that all other components depend on.

The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, follows the +earmuffs+ naming convention for special variables, and includes a clear docstring. The use of str:join to construct the prompt string is idiomatic and readable.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 21.61 seconds.
;; Prompt Tokens:          8,824
;; Thoughts Tokens:        1,703
;; Candidate Tokens:         557
;; Accumulated Prompt Tokens:      8,301,444
;; Accumulated Response Tokens:      344,997

;; Analyzing form:

(defun analyze-system-definition (system-name)
  "Analyzes the ASDF system definition file for the specified SYSTEM-NAME.
   It checks for best practices, especially concerning the primary package
   definition component and its internal ASDF dependencies, by sending
   the system definition to an LLM for detailed analysis."
  (check-type system-name (or string symbol))
  (let ((asdf-system (asdf:find-system system-name)))
    (unless asdf-system
      (error "ANALYZE-SYSTEM-DEFINITION: ASDF system '~a' not found." system-name))
    (let ((source-file (asdf:system-source-file asdf-system)))
      (unless (and source-file (probe-file source-file))
        (error "ANALYZE-SYSTEM-DEFINITION: Source file for system '~a' not found." system-name))

      (let ((*system-instruction*
              (content
               :parts
               (list
                (part (format nil +analyze-source-system-instruction+ system-name)))
               :role "system")))
        (invoke-gemini
         (list
          (part (format nil +analyze-system-definition-prompt+ system-name))
          (part (handler-case (uiop:read-file-string source-file)
                  (error (c)
                    (error "ANALYZE-SYSTEM-DEFINITION: Failed to read source file '~a': ~a"
                           source-file c))))))))))

The purpose of the analyze-system-definition function is to retrieve the ASDF system definition file for a given system-name, read its contents, and then send this content along with a specific prompt and system instruction to an LLM (Large Language Model) for analysis. The LLM is instructed to act as a Common Lisp programmer and analyze the system definition for best practices, particularly concerning the primary package definition component and its dependencies.

The function has a good docstring that clearly explains its purpose, arguments, and the process it undertakes.

The form is largely error-free and follows best practices, including robust error handling for missing systems, non-existent source files, and file reading failures, as well as check-type for argument validation.

Here are some minor suggestions for improvement:

  1. Minor Improvement: Refine error message consistency.
    • Reasoning: While the current error messages are informative, adopting a consistent prefix for all errors originating from this function (e.g., "ANALYZE-SYSTEM-DEFINITION Error: ...") can improve readability in logs and make it easier to identify the source of an error.
    • Actionable Suggestion: Modify the error messages to include a consistent prefix, for example:
      • (error "ANALYZE-SYSTEM-DEFINITION Error: ASDF system '~a' not found." system-name)
      • (error "ANALYZE-SYSTEM-DEFINITION Error: Source file for system '~a' not found." system-name)
      • (error "ANALYZE-SYSTEM-DEFINITION Error: Failed to read source file '~a': ~a" source-file c)
  2. Barely worth mentioning: Consider defgeneric and methods for invoke-gemini context.
    • Reasoning: The use of a special variable *system-instruction* to pass context to invoke-gemini is a valid Common Lisp pattern. However, for more complex scenarios or if invoke-gemini were to have different ways of handling system instructions based on the context, a defgeneric with methods could offer a more extensible and object-oriented approach. For this specific use case, the special variable is perfectly adequate and simpler.
    • Actionable Suggestion: (No concrete action needed for this specific form, as the current approach is fine.) If the LLM interaction logic becomes more intricate, consider defining invoke-gemini as a generic function and dispatching on context objects rather than relying solely on special variables.

The form is otherwise error-free and well-written.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 6.14 seconds.
;; Prompt Tokens:          9,796
;; Thoughts Tokens:          345
;; Candidate Tokens:         183
;; Accumulated Prompt Tokens:      8,311,240
;; Accumulated Response Tokens:      345,525

;; Analyzing form:

(defparameter +analyze-source-file-prompt+
  "Given the following ASDF system definition for '~a' and the source code for component '~a', analyze the component's code and provide a detailed summary of its functionality, purpose, and any notable features or patterns you observe.  In addition, note where the code is not following best practices, where there are potential bugs, and where the code can be improved.  Give up to three concrete, actionable suggestions for how the code could be improved or refactored.  If there are no suggestions for improvement, say so explicitly.  Indicate for each suggestion whether the change is a critical improvement or merely nice to have. Be sure to include your reasoning for each suggestion."
  "LLM prompt to analyze a Common Lisp source file.")

The purpose of this defparameter form is to define a special variable named +analyze-source-file-prompt+. This variable stores a detailed prompt template for an LLM (Large Language Model). The prompt instructs the LLM to analyze a Common Lisp source file, given the context of its ASDF system definition. The LLM is expected to provide a summary of the component’s functionality, identify best practice deviations, potential bugs, and offer up to three concrete, actionable suggestions for improvement, including reasoning and an indication of criticality.

The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The prompt itself is well-crafted to elicit a comprehensive analysis from the LLM.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 5.62 seconds.
;; Prompt Tokens:         10,450
;; Thoughts Tokens:          273
;; Candidate Tokens:         311
;; Accumulated Prompt Tokens:      8,321,690
;; Accumulated Response Tokens:      346,109

;; Analyzing form:

(defun get-system-and-component (system-name component-name)
  "Retrieves the ASDF system object and the specified component object
   for a given component in a specified system. Returns two values:
   the ASDF system object and the component object. Signals an error
   if the system or component is not found."
  (check-type system-name (or string symbol))
  (check-type component-name (or string symbol))
  (let ((asdf-system (asdf:find-system system-name)))
    (unless asdf-system
      (error "GET-SYSTEM-AND-COMPONENT: ASDF system '~a' not found." system-name))
    (let ((component (asdf:find-component asdf-system component-name)))
      (unless component
        (error "GET-SYSTEM-AND-COMPONENT: Component '~a' not found in system '~a'." component-name system-name))
      (values asdf-system component))))

The purpose of the get-system-and-component function is to locate and return both the ASDF system object and a specific component object within that system, given the system’s name and the component’s name. It returns these two objects as multiple values. The function includes robust error handling, signaling an error if either the system or the component cannot be found.

The function has an excellent docstring that clearly describes its purpose, arguments, return values, and error conditions.

The form is error-free and follows best practices, including check-type for argument validation and clear, specific error messages.

Here is a minor suggestion for improvement:

  1. Minor Improvement: Refine error message consistency.
    • Reasoning: While the current error messages are informative, adopting a consistent prefix for all errors originating from this function (e.g., "GET-SYSTEM-AND-COMPONENT Error: ...") can improve readability in logs and make it easier to identify the source of an error.
    • Actionable Suggestion: Modify the error messages to include a consistent prefix, for example:
      • (error "GET-SYSTEM-AND-COMPONENT Error: ASDF system '~a' not found." system-name)
      • (error "GET-SYSTEM-AND-COMPONENT Error: Component '~a' not found in system '~a'." component-name system-name)

The form is otherwise error-free and well-written.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 17.45 seconds.
;; Prompt Tokens:         11,416
;; Thoughts Tokens:          757
;; Candidate Tokens:       1,018
;; Accumulated Prompt Tokens:      8,333,106
;; Accumulated Response Tokens:      347,884

;; Analyzing form:

(defun get-system-and-component-contents (system-name component-name)
  "Retrieves the ASDF system object, its definition file content,
   the ASDF component object, and its source file content.
   Returns four values:
      1. The ASDF system object.
      2. A string containing the system definition file's content.
      3. The ASDF component object.
      4. A string containing the component's source file content.
   Signals an error if the system, component, or their respective
   source files are not found."
  (multiple-value-bind (system component) (get-system-and-component system-name component-name)
    (let ((system-pathname (asdf:system-source-file system))
          (component-pathname (asdf:component-pathname component)))
      (unless (and system-pathname (probe-file system-pathname))
        (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Source file for system '~a' not found." system-name))
      (unless (and component-pathname (probe-file component-pathname))
        (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Source file for component '~a' not found." component-name))
      (values system (handler-case (uiop:read-file-string system-pathname)
                       (error (c)
                         (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Failed to read system definition file '~a': ~a"
                                system-pathname c)))
              component (handler-case (uiop:read-file-string component-pathname)
                            (error (c)
                                (error "GET-SYSTEM-AND-COMPONENT-CONTENTS: Failed to read component source file '~a': ~a"
                                     component-pathname c)))))))

The purpose of the get-system-and-component-contents function is to retrieve comprehensive information about a specific ASDF system and one of its components. It returns four values: the ASDF system object, the content of the system’s definition file as a string, the ASDF component object, and the content of the component’s source file as a string. The function ensures robustness by signaling errors if the system, component, or their respective source files cannot be found or read.

The function has an excellent docstring that clearly describes its purpose, the four values it returns, and its error conditions.

The form is error-free and follows best practices, including the use of multiple-value-bind for handling multiple return values, probe-file for checking file existence, and handler-case for robust file reading with informative error messages.

Here are two suggestions for improvement:

  1. Minor Improvement: Refine error message consistency.
    • Reasoning: While the current error messages are informative, adopting a consistent prefix for all errors originating from this function (e.g., "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: ...") can improve readability in logs and make it easier to identify the source of an error.
    • Actionable Suggestion: Modify the error messages to include a consistent prefix, for example:
      • (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for system '~a' not found." system-name)
      • (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for component '~a' not found." component-name)
      • (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read system definition file '~a': ~a" system-pathname c)
      • (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read component source file '~a': ~a" component-pathname c)
  2. Minor Improvement: Extract a helper function for safe file reading.
    • Reasoning: The pattern of reading a file string with handler-case to catch and re-signal errors is repeated twice within this function. Extracting this logic into a small helper function would improve code conciseness, adhere to the DRY (Don’t Repeat Yourself) principle, and make the main function body cleaner.
    • Actionable Suggestion: Define a local helper function (e.g., read-file-string-safely) that takes a pathname and an error message format string, and then use this helper for both system-pathname and component-pathname.
```lisp
(defun get-system-and-component-contents (system-name component-name)
  "Retrieves the ASDF system object, its definition file content,
   the ASDF component object, and its source file content.
   Returns four values:
      1. The ASDF system object.
      2. A string containing the system definition file's content.
      3. The ASDF component object.
      4. A string containing the component's source file content.
   Signals an error if the system, component, or their respective
   source files are not found."
  (multiple-value-bind (system component) (get-system-and-component system-name component-name)
    (let ((system-pathname (asdf:system-source-file system))
          (component-pathname (asdf:component-pathname component)))
      (unless (and system-pathname (probe-file system-pathname))
        (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for system '~a' not found." system-name))
      (unless (and component-pathname (probe-file component-pathname))
        (error "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Source file for component '~a' not found." component-name))

      (flet ((read-file-string-safely (pathname error-format-string &rest args)
               (handler-case (uiop:read-file-string pathname)
                 (error (c)
                   (apply #'error error-format-string pathname c args)))))
        (values system (read-file-string-safely
                        system-pathname
                        "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read system definition file '~a': ~a")
                component (read-file-string-safely
                           component-pathname
                           "GET-SYSTEM-AND-COMPONENT-CONTENTS Error: Failed to read component source file '~a': ~a"))))))
```
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 5.99 seconds.
;; Prompt Tokens:         12,744
;; Thoughts Tokens:          426
;; Candidate Tokens:         174
;; Accumulated Prompt Tokens:      8,345,850
;; Accumulated Response Tokens:      348,484

;; Analyzing form:

(defparameter +system-definition-context-instruction+
  "What follows is the definition of the `~a` system. Do not analyze this file, it is here so that you understand the dependencies within the system."
  "LLM instruction to provide system definition as context without analysis.")

The purpose of this defparameter form is to define a special variable named +system-definition-context-instruction+. This variable stores a format string that serves as an instruction for an LLM (Large Language Model). The instruction explicitly tells the LLM that a subsequent piece of text is an ASDF system definition, but it should be used only for contextual understanding of system dependencies and not for analysis itself. The ~a placeholder is intended for the system’s name.

The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The instruction itself is well-phrased to guide the LLM’s behavior.

;; Invoking Gemini API model `gemini-2.5-flash`...
GC 3521
;; Gemini API finished in 11.46 seconds.
;; Prompt Tokens:         13,433
;; Thoughts Tokens:          652
;; Candidate Tokens:         457
;; Accumulated Prompt Tokens:      8,359,283
;; Accumulated Response Tokens:      349,593

;; Analyzing form:

(defun analyze-component (system-name component-name)
  "Analyzes the source code of a specific ASDF component within a given system
   using an LLM. It provides the LLM with the system definition for context
   and the component's source code. The LLM is prompted to summarize
   functionality, purpose, and features, identify deviations from best practices,
   potential bugs, and suggest up to three concrete improvements with reasoning
   and severity."
  (multiple-value-bind (system system-contents component component-contents)
      (get-system-and-component-contents system-name component-name)
    (declare (ignore system component))
    (let ((*system-instruction*
            (content
             :parts
             (list
              (part (format nil +analyze-source-system-instruction+ system-name)))
             :role "system")))
      (invoke-gemini
       (list
        (part (format nil +system-definition-context-instruction+ system-name))
        (part system-contents)
        (part (format nil +analyze-source-file-prompt+ system-name component-name))
        (part component-contents))))))

The purpose of the analyze-component function is to orchestrate the analysis of a specific ASDF component’s source code by an LLM (Large Language Model). It first retrieves the ASDF system object, its definition file content, the ASDF component object, and its source file content using get-system-and-component-contents. It then sets up a system instruction for the LLM, providing context about the system being analyzed. Finally, it invokes the LLM with a carefully constructed prompt that includes the system definition (for context on dependencies), the component’s source code, and instructions for the LLM to summarize functionality, identify issues, and suggest improvements.

The function has an excellent docstring that clearly describes its purpose, arguments, and the detailed analysis expected from the LLM.

The form is error-free and follows best practices. It leverages the get-system-and-component-contents function for robust data retrieval and error handling, uses multiple-value-bind effectively, and correctly declares ignored variables. The construction of the LLM prompt and system instruction using predefined templates and format is also a good practice for consistency.

Here is one minor suggestion for improvement:

  1. Minor Improvement: Explicitly name the parts for invoke-gemini for enhanced readability.
    • Reasoning: While the current list of part forms is functional, explicitly naming the role of each part within the invoke-gemini call could improve the readability and maintainability of the code, especially as the complexity of LLM interactions grows. This makes it immediately clear what each piece of content represents in the overall prompt structure.
    • Actionable Suggestion: Consider using a more descriptive structure for the parts list, perhaps by introducing helper functions or a more structured way to represent the different segments of the LLM input. For example, if invoke-gemini supported keyword arguments for different content types, it would be more explicit. As it stands, a comment or a local variable for each part could also serve this purpose, but that’s less integrated. For now, the current approach is acceptable.
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 6.49 seconds.
;; Prompt Tokens:         14,452
;; Thoughts Tokens:          313
;; Candidate Tokens:         221
;; Accumulated Prompt Tokens:      8,373,735
;; Accumulated Response Tokens:      350,127

;; Analyzing form:

(defparameter +analyze-form-prompt+
  (str:join
   #\Newline
   (list 
    "  * Use the preceeding forms as context for analysis."
    "  * Determine the purpose of the form."
    "  * Determine whether a library function might achieve the same thing."
    "  * Check that it follows best practices."
    "  * Look for opportunities to simplify or improve the code."
    "  * Look for typical Common Lisp errors and pitfalls."
    "  * Look for bugs."
    "Then report on the form:"
    "  * Describe the purpose of the form."
    "  * Suggest a docstring if one is needed, or if it can be improved, otherwise don't mention it."
    "  * If the form fails to follow best practices, say so."
    "  * If there are bugs or errors, describe them."
    "  * If improvements can be made, give up to three concrete, actionable suggestions for improvement."
    "  * For each suggestion, indicate whether it is a critical improvement, major improvement, minor improvement, merely nice to have, or barely worth mentioning."
    "  * For each suggestion, describe your reasoning."
    "  * If the form is error free and does not need improvement, say so."))
  "LLM prompt detailing the analysis criteria and reporting format for a single Common Lisp top-level form.")

The purpose of this defparameter form is to define a special variable named +analyze-form-prompt+. This variable stores a multi-line string that serves as a comprehensive prompt template for an LLM (Large Language Model). The prompt meticulously outlines the criteria for analyzing a single Common Lisp top-level form, including what aspects to consider (context, purpose, library functions, best practices, simplification, errors, bugs) and the exact format for reporting the analysis (description, docstring suggestions, best practice adherence, bug descriptions, and up to three concrete, actionable suggestions with criticality and reasoning).

The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The use of str:join with #\Newline is an idiomatic and readable way to construct a multi-line string from a list of lines, making the prompt definition clear and easy to manage.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 17.41 seconds.
;; Prompt Tokens:         15,190
;; Thoughts Tokens:        1,246
;; Candidate Tokens:         738
;; Accumulated Prompt Tokens:      8,388,925
;; Accumulated Response Tokens:      352,111

;; Analyzing form:

(defparameter +analyze-file-system-instruction+
  (str:join #\newline
            (list
             (str:join #\Space
                       (list
                        "You are a world class Common Lisp programmer."
                        "You will be analyzing a Common Lisp file one top-level form at a time."
                        "Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices."
                        "If there is no package definition, assume an appropriate one exists elsewhere and do not mention this."
                        "Assume that popular utility packages such as `alexandria` and `series` have been loaded and made available."
                        "Assume that undefined functions are defined elsewhere and do not mention this."
                        "Assume variables decorated with *earmuffs* have been declared as special variables."
                        "Implicit dependencies are expected, so do not mention them."
                        "Do not suggest using defconstant, even if it would be appropriate or a best practice."
                        "Prefer `check-type` over type declarations for robustness."))
             "For each top-level form:"
             +analyze-form-prompt+))
  "LLM system instruction to analyze a Common Lisp file one top-level form at a time.")

The purpose of this defparameter form is to define a special variable named +analyze-file-system-instruction+. This variable holds a comprehensive multi-line string that serves as the primary "system instruction" for an LLM (Large Language Model) when it is tasked with analyzing Common Lisp files. This instruction sets the LLM’s persona as a "world class Common Lisp programmer" and provides detailed guidelines, assumptions (e.g., package existence, utility libraries, special variables), and constraints (e.g., no defconstant suggestions, prefer check-type). It also incorporates the previously defined +analyze-form-prompt+ to specify the detailed reporting format for each top-level form analyzed.

The form is error-free and follows best practices. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention, and includes a clear docstring. The use of str:join to construct the multi-line string from a list of components, including another defparameter (+analyze-form-prompt+), is idiomatic and promotes readability and modularity.

Here is one suggestion for improvement:

  1. Minor Improvement: Extract the initial general instructions into a separate defparameter.
    • Reasoning: The first str:join block, which defines the LLM’s persona, assumptions, and general constraints, is a distinct logical unit. Separating this into its own defparameter (e.g., +llm-lisp-programmer-persona-instruction+) would further modularize the overall system instruction. This would make +analyze-file-system-instruction+ a cleaner composition of two other named instruction blocks, potentially improving readability and making it easier to modify specific parts of the instruction without affecting the overall structure.
    • Actionable Suggestion:
              ```lisp
              (defparameter +llm-lisp-programmer-persona-instruction+
                (str:join #\Space
                          (list
                           "You are a world class Common Lisp programmer."
                           "You will be analyzing a Common Lisp file one top-level form at a time."
                           "Your analysis should be thorough and insightful, demonstrating a deep understanding of Common Lisp programming practices."
                           "If there is no package definition, assume an appropriate one exists elsewhere and do not mention this."
                           "Assume that popular utility packages such as `alexandria` and `series` have been loaded and made available."
                           "Assume that undefined functions are defined elsewhere and do not mention this."
                           "Assume variables decorated with *earmuffs* have been declared as special variables."
                           "Implicit dependencies are expected, so do not mention them."
                           "Do not suggest using defconstant, even if it would be appropriate or a best practice."
                           "Prefer `check-type` over type declarations for robustness."))
                "LLM system instruction defining the persona and general guidelines for Common Lisp analysis.")
      
              (defparameter +analyze-file-system-instruction+
                (str:join #\newline
                          (list
                           +llm-lisp-programmer-persona-instruction+
                           "For each top-level form:"
                           +analyze-form-prompt+))
                "LLM system instruction to analyze a Common Lisp file one top-level form at a time.")
              ```
      
;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 7.22 seconds.
;; Prompt Tokens:         16,245
;; Thoughts Tokens:          463
;; Candidate Tokens:         182
;; Accumulated Prompt Tokens:      8,405,170
;; Accumulated Response Tokens:      352,756

;; Analyzing form:

(defparameter +analyze-file-form-prompt+
  (str:join #\Newline
            (list "Analyze the following top-level form in a Common Lisp file." +analyze-form-prompt+))
  "LLM prompt to analyze a single top-level form in a Common Lisp file.")

The purpose of this defparameter form is to define a special variable named +analyze-file-form-prompt+. This variable stores a string that serves as a prompt template for an LLM (Large Language Model). The prompt instructs the LLM to analyze a single top-level form in a Common Lisp file, incorporating the detailed analysis criteria and reporting format defined in the +analyze-form-prompt+ variable.

The form is error-free and does not need improvement. It correctly uses defparameter for a global, dynamically-scoped variable, adheres to the +earmuffs+ naming convention for special variables, and includes a clear and concise docstring. The use of str:join to combine a specific instruction with a more general prompt template (+analyze-form-prompt+) is a good practice for modularity and readability.

;; Invoking Gemini API model `gemini-2.5-flash`...
;; Gemini API finished in 23.46 seconds.
;; Prompt Tokens:         17,199
;; Thoughts Tokens:        1,842
;; Candidate Tokens:         556
;; Accumulated Prompt Tokens:      8,422,369
;; Accumulated Response Tokens:      355,154

;; Analyzing form:

(defun analyze-file (filename &key (temperature 0.01) (verbose t))
  "Analyze a Common Lisp file one top-level form at a time."
  (check-type filename (or pathname string))
  (let ((forms (file-forms filename))
        (*temperature* temperature)
        (*system-instruction*
          (content :parts (list (part +analyze-file-system-instruction+))
                   :role "system"))
        (start-prompt-tokens *accumulated-prompt-tokens*)
        (start-response-tokens *accumulated-response-tokens*)
        (start-time (get-universal-time))
        (aborted t))
    (unless (consp forms)
      (error "No top-level forms found in file '~a'." filename))
    (unwind-protect
         (prog1 (let iter ((current-form (car forms))
                           (analysis (invoke-gemini
                                      (list (part +analyze-file-form-prompt+)
                                            (part (car forms)))))
                           (results '())
                           (more (cdr forms)))
                  (when verbose
                    (format t "~&~%;; Analyzing form:~%~%~a~%--------~%~a~%~%" current-form analysis))
                  (if (null more)
                      (reverse (acons current-form analysis results))
                      (iter (car more)
                            (continue-gemini
                             (list (part +analyze-file-form-prompt+)
                                   (part (car more))))
                            (acons current-form analysis results)
                            (cdr more))))
           (setq aborted nil))
      (when verbose
        (let ((total-prompt-tokens (- *accumulated-prompt-tokens* start-prompt-tokens))
              (total-response-tokens (- *accumulated-response-tokens* start-response-tokens))
              (elapsed-time (- (get-universal-time) start-time)))
          (format t "~&;; Analysis ~:[complete in~;aborted at~] ~d seconds.~%" aborted elapsed-time)
          (format t ";; Prompt tokens:   ~7,d~%" total-prompt-tokens)
          (format t ";; Response tokens: ~7,d~%" total-response-tokens))))))

The purpose of the analyze-file function is to read a specified Common Lisp file, extract its top-level forms, and then sequentially send each form to an LLM (Large Language Model) for detailed analysis. It sets up the LLM’s system instruction and a specific prompt for analyzing individual forms, dynamically adjusting the LLM’s temperature. The function also tracks and reports the total prompt and response tokens used, as well as the elapsed time for the analysis, even if an error occurs.

The docstring is concise but could be improved to explicitly state the return value.

The form follows best practices, including check-type for argument validation, let for dynamic scope of special variables (*temperature*, *system-instruction*), unwind-protect for guaranteed cleanup and reporting, and a let iter construct for robust iteration over the file’s forms. Error handling for files with no top-level forms is also present.

Here are two concrete, actionable suggestions for improvement:

  1. Minor Improvement: Enhance the docstring to include the return value.
    • Reasoning: While the docstring describes the function’s action, it doesn’t explicitly state what the function returns. Knowing the exact structure of the return value is crucial for users of the function to correctly process its output.
    • Actionable Suggestion: Update the docstring to:
              ```lisp
              "Analyze a Common Lisp file one top-level form at a time.
               Returns a list of cons cells, where each cons cell is `(top-level-form . llm-analysis-result-string)`."
              ```
       
  2. Minor Improvement: Add a check-type for the forms returned by file-forms.
    • Reasoning: The file-forms function is an external dependency. While it’s assumed to return a list of Lisp forms (s-expressions), explicitly checking that the result is indeed a list ensures robustness. If file-forms were to return something unexpected (e.g., a single atom or NIL when forms are present but not a list), this check-type would catch it early and provide a clearer error message than a subsequent car or cdr error.
    • Actionable Suggestion: After the line (let ((forms (file-forms filename)) ...)), add:
              ```lisp
              (check-type forms list)
              ```
      

The form is otherwise error-free and well-written.

;; Analysis complete in 150 seconds.
;; Prompt tokens:    152555
;; Response tokens:   14755

Conclusion

This approach works quite well. The LLM’s analysis is generally accurate and insightful. It often finds ways to improve the code without introducing bugs. It also often finds potential bugs that I had not considered. The analysis is not perfect, but it is a useful tool for improving the quality of Common Lisp code.

An important thing to keep in mind is that the tool makes an effort to find something to say about each top-level form. If you follow its suggestions, you could endlessly “improve” your code. It also suggests “improvements” where I have deliberately decided to do things in a non-standard way. You need to exercise judgment in deciding which suggested improvements are worth making.

The analysis is slow enough that you would not want to use it as part of a tool chain, but it is fast enough that you can use it regularly. It does consume tokens fairly rapidly, so a lot of analysis will add up.

Saturday, September 13, 2025

Puzzle: Read Preserving Comments

Here is a little Lisp puzzle for you:

I want to read a Lisp file and produce a list of the top-level forms in the file, but I want to preserve comments. Any comment that appears on a line within a top-level form should be associated with that form. Comments that appear by themselves at top level should be associated with the following top-level form. For example, if the file contains:

;;; -*- Lisp -*-

;; This file is a test
  
(in-package "CL-USER")

;; This is the test function:
(defun foo (arg)
  ;; This function does nothing
  (declare (ignore arg))
  nil)    ;; Return nil

(defparameter *x* 42)
(defparameter *y* 99)  ;; This is y

;; The BAR function doesn't do anything either

(defun bar ()  
  nil)

;; Trailing comment in file.

Then I want to produce the following list:

'(";;; -*- Lisp -*-

;; This file is a test
  
(in-package \"CL-USER\")
"
 "
;; This is the test function:
(defun foo (arg)
  ;; This function does nothing
  (declare (ignore arg))
  nil)    ;; Return nil
"
 "
(defparameter *x* 42)
"
 "(defparameter *y* 99)  ;; This is y
"
 "
;; The BAR function doesn't do anything either

(defun bar ()  
  nil)
"
 "
;; Trailing comment in file.
")

This puzzle is not difficult, but it is tricky to get the edge cases right.

Tuesday, September 9, 2025

The LLM Experience Gap

If you're a senior programmer, LLMs are nothing to worry about. They aren't anywhere near smart enough to replace you, and they are a powerful tool to help you get stuff done.

If you're a junior programmer, you're going to have to step up your game, a lot. LLMs go astray. They make stuff up. They hallucinate. They make dumb mistakes. A senior programmer can spot this stuff a mile away: they've made the same dumb mistakes themselves. A junior programmer hasn't. Let me give a concrete example.

Mutation

There are three attitudes towards mutation in programming. The first is "so what?" This is the attitude we all start with. But as we start building large systems, we find that mutation opens up a can of worms involving object identity, synchronization, race conditions, etc. The second attitude is "be very careful", and the third is "avoid it like the plague". Hard core functional programmers take the third approach, but most of the world seems to take the second approach. If you're going to be careful about mutation, you can either avoid it unless you know it is safe, or use it unless you think it is dangerous. Most people seem to take the latter approach. It takes a fair amount of experience to know what situations cause problems, and to recognize when you are in one of those situations.

If you have mutation errors, your program starts acting pretty weirdly. There are other reasons your program might act weirdly, but if you notice that it misbehaves only under load, and that the tests work fine, you might suspect a mutation problem. These can be fiendishly difficult to track down, and you need experience in debugging them to be any good at fixing them. A senior programmer will have seen a few of these. A junior programmer probably hasn't. A senior programmer has probably fixed a few of these that a junior programmer missed.

LLMs are not very good at reasoning about mutation. (They don't really reason about your code at all.) One thing you learn pretty early on as a programmer is that you should not mutate data structures you are traversing. Modifying a collection as you are attempting to iterate over it is not going to work (unless you are very clever). All sorts of weird behaviors can occur if items are being inserted and removed from a collection as you are trying to walk over it. We've all seen these. And we also know that if items aren't being processed in the order we expect, there is a fair bet that someone is modifying the collection that contains them. This isn't something you are taught in class; you learn it the hard way.
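To make the pitfall concrete, here is a minimal Common Lisp sketch (the function names are illustrative, not from any code in these posts):

```lisp
;; Broken: DELETE destructively splices cons cells out of the very list
;; DOLIST is walking, so the traversal may skip elements or otherwise
;; misbehave -- the exact behavior is unpredictable.
(defun remove-evens-badly (numbers)
  (dolist (n numbers numbers)
    (when (evenp n)
      (setf numbers (delete n numbers)))))

;; Safe: build a fresh list instead of mutating the one being traversed.
(defun remove-evens-safely (numbers)
  (remove-if #'evenp numbers))
```

The same principle applies to the file-system traversal described below: collect the list of repositories first, then modify them, rather than modifying directories while walking them.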

I had the LLM code up a process to modify dozens of git repositories that were held in subdirectories. It was working fine until it wasn't. It got into an infinite loop where it was repeatedly processing the same repository over and over. Fortunately, I had two things in my favor: First, I had gone out of my way to make sure my code was idempotent, so running it over and over didn't do any harm, and second, I recognized that the problem was probably due to the LLM naively modifying the file system as it was trying to traverse it. It took me all of five minutes to fix the issue. Would a junior programmer just out of college have instructed the LLM to write idempotent code? Would he have spotted the LLM modifying the file system while traversing it?

Conclusion

But LLMs are pretty good at being code monkeys. With the right supervision, they can crank out a lot of code. A junior programmer is going to have to work hard to provide the right supervision. Learning to program X in 20 days isn't going to cut it. The LLM already knows 20 days worth of X. The senior programmer brings a few years of experience to the table. What does the junior programmer bring? The real value is not just in writing code, but in recognizing when the code is flawed and knowing how to fix it, a skill that can't be learned in 20 days.

Monday, September 8, 2025

Analyzing Common Lisp

While the LLM has some difficulty in generating Common Lisp, it is quite effective in analyzing Common Lisp. There are a couple of tricks that make this work better. Naturally, a good set of instructions about what sort of analysis you want is important, but since a Common Lisp program often uses a stratified design, where each level of abstraction is implemented in terms of the levels below it and supports the level above it, you need to inform the LLM of the dependencies between the various parts of the program. I've found one way to make this work, but I'm sure there are others.

The “system instruction” is an extra-conversational prompt sent to the LLM to prime it for the actual prompt.

Once you have the system instruction set up, you then write the prompt itself. The prompt is another 'content' item with multiple parts. It makes sense to analyze one file at a time, so the prompt will contain several parts: the analysis instructions and the source code of the file to be analyzed. In addition, I include the contents of the .asd file as a part so that the LLM knows what modules are depended upon. Without the system definition, the LLM will complain (a lot) about missing dependencies.
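A minimal sketch of that setup, reusing the content/part/invoke-gemini helpers that appear in the code above; the prompt strings and file names here are hypothetical placeholders:

```lisp
;; Bind the system instruction, then send a multi-part prompt:
;; analysis instructions, the .asd file (for dependency context),
;; and the source file to analyze.
(let ((*system-instruction*
        (content :parts (list (part "You are a world class Common Lisp programmer."))
                 :role "system")))
  (invoke-gemini
   (list (part "The following system definition is for context only:")
         (part (uiop:read-file-string "my-system.asd"))
         (part "Analyze the following Common Lisp file:")
         (part (uiop:read-file-string "source.lisp")))))
```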

Here is a sample analysis:

;; Gemini API finished in 66.12 seconds.
;; Prompt Tokens:          7,243
;; Thoughts Tokens:        2,307
;; Candidate Tokens:       1,901
;; Accumulated Prompt Tokens:        750,427
;; Accumulated Response Tokens:       60,064
;; **My Initial Thoughts on the Code Analysis Task**
;; 
;; Okay, I've got the task laid out. I need to dive into this Common Lisp code from
;; the `gemini` system's `analyze` file and provide a thorough analysis.  My goal
;; is to understand what's happening, identify any potential issues, and come up
;; with concrete suggestions for improvement.  I'm viewing this from the
;; perspective of an expert Common Lisp programmer, so I'll be looking for nuances
;; and areas where we can make the code even better.
;; 
;; First, I see two `defparameter`s: `+analyze-source-system-instruction+` and
;; `+analyze-source-file-prompt+`.  These are defining the system instruction and
;; main prompt for the LLM.  The system instruction establishes the LLM's role as
;; an expert Common Lisp programmer.  That's a smart starting point - framing the
;; analysis with a specific persona can really influence the type of feedback we
;; get. The prompt itself looks comprehensive, asking the LLM to cover
;; functionality, features, best practices, bugs, and improvements.
;; 
;; Then there's the `lisp-file-contents` function.  It's pretty straightforward: it
;; reads a Lisp file and formats its content as a string.  Seems like a crucial
;; utility for feeding the LLM the actual code to analyze.  The use of backticks is a
;; common practice for formatting code blocks within prompts.
;; 
;; I'll need to pay close attention to the formatting and any potential issues that
;; may arise.  I'll also keep an eye out for any inconsistencies in the prompt
;; structure, as even minor errors can affect the LLM's output. I'm keen to see how
;; effectively these prompts are formulated and how we could make them even more
;; precise.  I'm ready to dig deeper and provide a really valuable analysis.
;; 
;; **My Initial Thoughts on the Code Analysis Task**
;; 
;; Okay, I've got the task laid out. I need to dive into this Common Lisp code
;; within the `GEMINI` system's `analyze` file and provide a thorough analysis.  My
;; goal is to understand what's happening, identify any potential issues or areas
;; for optimization, and ultimately, offer some actionable suggestions to improve
;; the code.
;; 
;; Here's how I see it:  This looks like a system designed to automate code
;; analysis, which is a fascinating area.  I immediately note the use of format
;; strings. My initial impressions are that the developer seems to have been fairly
;; structured.
;; 
;; I'm starting by taking stock of the key components:
;; 
;; *   The `+analyze-source-system-instruction+` and `+analyze-source-file-prompt+`
;; parameters set up the context and instructions for the Large Language Model
;; (LLM). It appears the system is leveraging an LLM to perform the code analysis.
;; Setting the LLM's persona as a world-class Common Lisp programmer is a smart
;; strategy to get high-quality feedback.
;; *   The `lisp-file-contents` function handles the necessary file I/O to read and
;; prepare the code for the LLM. It appears to be relatively basic, which is
;; usually a positive sign.  Properly formatting the code with backticks to prevent
;; interpretation in the LLM is another good idea.
;; 
;; **Analyzing the 'analyze' File for Gemini System**
;; 
;; As an expert in Common Lisp, I've taken a close look at the provided code within
;; the 'gemini' system. Here's my analysis:
;; 
;; **Functionality and Purpose:** This code is designed to automate the process of
;; analyzing Common Lisp code using an LLM (likely Gemini in this context). The
;; functions `analyze-system-definition` and `analyze-source` are the core
;; components. They prepare prompts and send requests to the LLM, effectively
;; acting as wrappers for an `invoke-gemini` function, which is likely responsible
;; for making the actual API calls to the LLM. It's a pragmatic approach to
;; leveraging the power of an LLM for code review and analysis.
;; 
;; **Notable Features and Patterns:** The use of `defparameter` for prompt
;; templates is excellent for maintainability. The structured way it provides
;; context to the LLM (like setting a persona as a "world-class Common Lisp
;; programmer") is a smart move. The inclusion of system definitions alongside
;; individual files, especially in the `analyze-source` function, is a thoughtful
;; way to give the LLM the information it needs for a comprehensive analysis. The
;; utilization of the `series` library for stream processing within
;; `lisp-file-contents` is efficient, though a bit less common in this context. The
;; reliance on ASDF for system and component location is, of course, absolutely
;; essential.
;; 
;; **Areas for Improvement:**
;; 
;; 1.  **Robust Error Handling:** The code needs error handling. If
;; `asdf:find-system` or `asdf:find-component` fails, the current implementation
;; could lead to nasty errors. I'd add checks to ensure that the system and
;; component exist before calling `asdf:system-source-file` and
;; `asdf:component-pathname`.  *Critical - This is a fundamental safeguard.*
;; 2.  **Consolidated Setup:** There's duplication in setting up
;; `*system-instruction*` and `*include-thoughts*`. I'd refactor that into a helper
;; function or common binding to keep the code DRY and improve maintainability.
;; *Important - Redundancy can lead to subtle bugs.*
;; 3.  **Enhance `lisp-file-contents` Readability:** While series is efficient, the
;; `lisp-file-contents` function could benefit from some clarification. I'd
;; consider adding comments to explain the stream processing logic in more detail.
;; Perhaps even re-writing it with more common standard functions. *Moderate -
;; Clarity aids maintainability.*
;; 
;; **Code Analysis: Gemini 'Analyze' File**
;; 
;; Alright, let's dissect this `analyze` file. It's clear this is designed to be
;; the brains of the operation, tasked with feeding Common Lisp code to the Gemini
;; LLM for analysis. My immediate impression is that it's well-structured, but
;; let's dive into the specifics.
;; 
;; First off, it seems the core concept is pretty straightforward: provide context
;; (the LLM's role, the system being analyzed), the specific code, and let the LLM
;; do its magic. This approach is sound. We have two key functions,
;; `analyze-system-definition` and `analyze-source`, that are almost identical in
;; their structure.
;; 
;; The use of dynamic variables, especially `*system-instruction*` and
;; `*include-thoughts*`, is a smart move. Setting up the LLM's persona as a
;; "world-class Common Lisp programmer" with that format string is a crucial part
;; of the process. It's all about priming the pump. Likewise, enabling
;; *include-thoughts* is likely essential to see how Gemini is working through the
;; code.
;; 
;; Now, that `lisp-file-contents` function... that's where things get interesting.
;; Using the `series` library for this is a valid approach, but it's not a go-to
;; for most. It reads the source file line by line and wraps it in markdown code
;; blocks. This technique is going to work. The `series` library is more of a
;; preference but the outcome is correct.
;; 
;; **Here's where my expert eye spots the potential issues and areas for
;; improvement:**
;; 
;; 1.  **Error Handling:** The Achilles' heel here is how the functions handle ASDF
;; errors. What if ASDF can't find the system or the file? Right now, it looks like
;; it would crash. I'd definitely add checks for the existence of the system and
;; files before calling `asdf:system-source-file` and `asdf:component-pathname`. 
;; This is **critical** to prevent abrupt failures.
;; 
;; 2.  **Code Duplication:** The identical `let` blocks in both
;; `analyze-system-definition` and `analyze-source` for setting up the environment
;; are a code smell.  This is a minor point, but it could be cleaned up by
;; factoring that setup into a dedicated function.
;; 
;; 3.  **Optimize the prompt construction:** The way prompts are assembled from
;; fixed parts feels rigid. A more flexible structure would allow additional
;; information to be attached; for instance, an *include-information* list passed
;; to `invoke-gemini` could select among different prompt templates.
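;; 
;; To make point 1 concrete, here is a minimal sketch of such an existence check
;; (the helper name `ensure-system` and the error message are illustrative, not
;; taken from the file):
;; 
;; ```lisp
;; (defun ensure-system (system-name)
;;   "Return the ASDF system named SYSTEM-NAME, or signal a descriptive error."
;;   ;; The second argument NIL tells FIND-SYSTEM to return NIL instead of
;;   ;; signaling its own error, so we can report a clearer one ourselves.
;;   (or (asdf:find-system system-name nil)
;;       (error "Cannot analyze ~S: ASDF does not know such a system."
;;              system-name)))
;; ```
;; 
;; `analyze-system-definition` and `analyze-source` would call `ensure-system`
;; before ever reaching `asdf:system-source-file`.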
;; 
;; **Code Analysis: Gemini 'analyze' file**
;; 
;; Okay, so I've taken a look at this `analyze` file within the `gemini` system. 
;; My primary focus is to act as a seasoned Common Lisp expert and provide a
;; detailed review of the code's functionality and potential improvements.
;; 
;; Essentially, this code acts as an interface to the Gemini LLM for automated code
;; analysis. It prepares prompts and context for the LLM to analyze Common Lisp
;; source files and ASDF system definitions.  The functions,
;; `analyze-system-definition` and `analyze-source`, follow a consistent pattern:
;; They set up dynamic variables for instructions and then call `invoke-gemini`,
;; providing it with both instructions and code snippets in the form of a list of
;; "parts".  This context includes the target file and the system's .asd definition
;; for dependency details.
;; 
;; I see some solid design choices here.  Using `defparameter` for prompt templates
;; promotes maintainability and clarity.  The inclusion of the system definition
;; file alongside the individual source file for analysis in `analyze-source` is a
;; brilliant move to give the LLM better context about dependencies.  The use of
;; `series` in `lisp-file-contents` is a clever way to handle file I/O and string
;; manipulation.
;; 
;; However, there are a few areas that could be improved:
;; 
;; 1.  **Error Handling for File Existence (Critical):**  I'd be concerned about
;; what happens when the specified system or file doesn't exist. The code needs
;; robust error handling to manage these scenarios gracefully; checking the
;; results of `asdf:find-system` and `asdf:find-component` would be a good start.
;; 2.  **Consolidated Setup (Minor):** The duplication in setting
;; `*system-instruction*` and `*include-thoughts*` between
;; `analyze-system-definition` and `analyze-source` is not ideal. I'd consider
;; extracting this setup into a separate function to keep things DRY.
;; 3.  **Refactor `lisp-file-contents` (Low, potentially personal preference):** 
;; While the use of `series` is functional, the logic within `lisp-file-contents`
;; is a little less readable. I might consider alternative methods, but given the
;; target of the Gemini LLM and the target audience of Common Lisp developers, it
;; might be more of a personal style preference than a "must-fix."
;; 
;; Here are my recommendations in order of importance:
;; 
;; 1.  **Critical:** Add error handling to ensure `asdf` finds the system and
;; components; wrap the relevant calls in `handler-case` or `ignore-errors`.
;; 2.  **High:** Create a function to encapsulate the duplicated setup of the
;; dynamic variables for a cleaner structure.
;; 3.  **Low:** Consider the use of a `with-open-file` structure and traditional
;; `read-line` loop to build the string in `lisp-file-contents` if readability is a
;; primary concern. The current implementation is valid, but the preference of a
;; Common Lisp developer may be relevant here.
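;; 
;; For reference, recommendation 3 could look like the following sketch (this
;; assumes the function's job is simply to wrap the file's text in markdown
;; fences; details of the original `series` version may differ):
;; 
;; ```lisp
;; (defun lisp-file-contents (pathname)
;;   "Return the contents of PATHNAME wrapped in a markdown lisp code fence."
;;   (with-output-to-string (out)
;;     (write-line "```lisp" out)
;;     (with-open-file (in pathname :direction :input)
;;       (loop for line = (read-line in nil)   ; NIL => return NIL at EOF
;;             while line
;;             do (write-line line out)))
;;     (write-line "```" out)))
;; ```
;; 
;; This trades the `series` pipeline for a plain loop; whether that is clearer
;; is, as noted, a matter of taste.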
;; 
;; **Gemini Code Analyzer Analysis**
;; 
;; Okay, let's break this down. From what I can see, this `analyze` file within the
;; `gemini` system is designed as a specialized interface for LLM-powered code
;; analysis of Common Lisp and ASDF systems. It aims to generate comprehensive
;; reviews of code, including functionality summaries, identification of noteworthy
;; features, deviations from best practices, potential bugs, and suggestions for
;; improvement.
;; 
;; The core idea is to structure prompts for an LLM (presumably Gemini, given the
;; name of the system) in a way that is highly effective. It achieves this by
;; setting the scene with a system-level instruction that consistently frames the
;; LLM as a world-class Lisp programmer. Then, it uses formatted prompts and the
;; code itself as input.
;; 
;; It's essentially two functions - `analyze-system-definition` and
;; `analyze-source` - that handle different levels of granularity. They both use
;; `invoke-gemini`, but they construct the input prompts differently.
;; `analyze-system-definition` focuses on the system's `.asd` file, while
;; `analyze-source` adds the system file as context to the specific file.
;; 
;; Some key observations are:
;; 
;; *   **Prompt Engineering:** The use of `defparameter` for prompt templates (like
;; `+analyze-source-system-instruction+` and `+analyze-source-file-prompt+`) and
;; `format` to incorporate dynamic values is well-considered. It makes the prompts
;; maintainable and easy to understand.
;; *   **Context is Key:** The `analyze-source` function smartly includes the
;; system's definition file as part of the prompt when analyzing a specific source
;; file. This gives the LLM vital information about dependencies. It's great to see
;; that level of attention to detail!
;; *   **Series Usage:** `lisp-file-contents` leverages the `series` library for
;; reading and formatting Lisp files. This is a solid approach, although it might
;; make the function slightly less immediately readable for someone who isn't
;; already familiar with `series`.
;; 
;; Now, for some actionable improvements:
;; 
;; 1.  **Robust Error Handling (Critical):** I would recommend adding robust error
;; handling. What happens if a system or source file can't be found by ASDF? The
;; code should explicitly check these conditions and handle them gracefully,
;; perhaps by returning an error message or logging a warning. The
;; `asdf:find-system` and `asdf:find-component` functions would be excellent
;; candidates for these checks.
;; 2.  **Refactor Dynamic Variable Binding (Moderate):** There's a little bit of
;; repetition in how `*system-instruction*` and `*include-thoughts*` are set in
;; both `analyze-system-definition` and `analyze-source`. A simple helper function
;; or a `let*` nesting could avoid the small redundancy, making the code more
;; readable.
;; 3.  **Consider `lisp-file-contents` Readability (Optional, depending on team
;; preference):** While the `series` implementation in `lisp-file-contents` is
;; perfectly functional, and even idiomatic Lisp, it's slightly less
;; straightforward than a `with-open-file` approach for simple file reading.
;; Depending on your team's familiarity with `series`, you might want to consider
;; alternative implementations for enhanced readability, especially if the core
;; intent is just to get the entire file's content as a string. However, I
;; acknowledge the benefit of the pipeline style of `series`.
;; 
;; That's my analysis. The code is well-structured and demonstrates a clear
;; understanding of the principles of effective prompting. Implementing those
;; improvements would further solidify its robustness and readability.
;; 
;; **Code Analysis: Gemini Integration for Common Lisp**
;; 
;; Okay, so I've taken a look at this `analyze` file within the `gemini` system,
;; and here's my take: this is a solid framework for leveraging the Gemini LLM for
;; automated Common Lisp code analysis.
;; 
;; My core function is clear: to read Lisp code, format it for the Gemini model,
;; and then give Gemini the tools to actually *do* something useful with it. The
;; `lisp-file-contents` function handles the grunt work of reading files.  The fact
;; that it is using the `series` library for stream processing is interesting; it's
;; a powerful tool but might be a tad obscure for some Lisp developers, at least at
;; first glance.
;; 
;; The core of the logic is in the `analyze-system-definition` and `analyze-source`
;; functions. They follow a predictable pattern: they load the code, feed it to the
;; Gemini model with a structured prompt, and let the model work its magic. The
;; incorporation of ASDF for locating the system and source files is essential. I
;; like that the system definition file is *always* provided to Gemini as context,
;; even when analyzing individual files within that system; it gives Gemini a
;; broader understanding. The dynamic variables and the clear use of `format` for
;; prompt construction demonstrate a good understanding of best practices.
;; 
;; **Areas for improvement:**
;; 
;; 1.  **Error Handling:** The *most* important thing is to shore up the ASDF calls
;; to avoid crashes. If someone gives a system or file name that's not valid, the
;; code should gracefully handle that rather than just throwing an error deeper
;; down the stack. That's a critical bug to squash.
;; 
;; 2.  **Duplication:** The setup for the LLM interaction (prompt string, including
;; Gemini feedback) in `analyze-system-definition` and `analyze-source` is
;; virtually identical. Consolidating this into a helper function or macro would
;; make the code cleaner.
;; 
;; **Here's a list of three critical actionable suggestions for improvement:**
;; 
;; 1.  **Critical: Error Handling on ASDF Lookup:** Wrap the calls to
;; `asdf:system-source-file` and `asdf:component-pathname` within checks to ensure
;; the system and/or component actually exist before proceeding. Raise a proper
;; error if they do not. This will catch invalid system and file names.
;; 2.  **Nice to have: Consolidate the Gemini Prompt Preparation**: Refactor the
;; setup of `*system-instruction*` and `*include-thoughts*` into a helper macro or
;; function. This reduces duplication.
;; 
;; I think these improvements will enhance the robustness and clarity of the
;; integration, allowing it to provide more reliable code analysis.
;; 
;; **Gemini Code Analysis: My Perspective**
;; 
;; Alright, let's dissect this. As a seasoned Common Lisp developer, I see a
;; well-structured approach to code analysis using the Gemini LLM. The core idea is
;; sound: provide the LLM with the necessary context and prompts to effectively
;; analyze Lisp code. The use of format strings and dynamic variables for
;; instructions is a good practice for maintainability. The file reading with
;; `series` is a powerful tool, although I'll admit I sometimes find the syntax a
;; bit less readable at a glance. Let's delve deeper.
;; 
;; The `analyze` file provides two key functions, `analyze-system-definition` and
;; `analyze-source`, which essentially wrap the `invoke-gemini` call, preparing the
;; ground for the analysis. They set up the LLM with a clear persona ("world class
;; Common Lisp programmer") and context (both system definition and source file
;; contents).
;; 
;; I like how the `analyze-source` function intelligently includes the system
;; definition file alongside the specific source file. This gives the LLM the
;; broader picture, which should lead to more accurate and insightful analysis.
;; That's good thinking, that is.
;; 
;; The use of ASDF for system and file location is expected in the Common Lisp
;; ecosystem and fits the task well. It's a key part of how we manage project
;; dependencies and file structures.
;; 
;; However, I'm already spotting a few areas where we can improve. First, the error
;; handling, or rather lack thereof, around ASDF operations is a significant
;; concern. What happens if a system or component doesn't exist? A hard crash isn't
;; ideal. We need to be more robust there.
;; 
;; Second, the `let` binding of the LLM instructions seems redundant. It's present
;; in both functions, which is a signal of code duplication. We should abstract
;; that out. 
;; 
;; Finally, the magic strings for the markdown delimiters in `lisp-file-contents`
;; are minor but good practice points: define those constants.
;; 
;; Here's my actionable plan:
;; 
;; 1.  **Critical Action:** **Robust ASDF error handling.** We absolutely need
;; checks to ensure that the systems and components are found before proceeding.
;; Any failure there, raise an error.
;; 2.  **Next up:** **Refactor the `*system-instruction*` and `*include-thoughts*`
;; setup into a macro.** This will reduce duplication and enhance code clarity. The
;; `content` struct isn't clear to someone who doesn't use Gemini or is new to this
;; code. Make the code's intent clear.
;; 3.  **Third:** Make those magic markdown strings into constants.
;; 
;; By addressing these points, we'll make this code more robust, readable, and
;; maintainable. This approach leverages the power of LLMs and, with a few tweaks,
;; can be a solid foundation for automated Common Lisp code analysis.
;; 
;; **Code Analysis Summary: Gemini's Lisp Analyzer**
;; 
;; As an expert in Common Lisp, I've just reviewed the `analyze` file within the
;; `gemini` system.  My assessment centers on how it interacts with the Gemini LLM
;; for automated code analysis. It's designed to take a system or specific file,
;; prepare prompts, and then submit them to the LLM for evaluation.
;; 
;; **Functionality and Key Features:**
;; 
;; My analysis shows that `analyze` wraps the `invoke-gemini` function, customizing
;; the input to effectively elicit detailed code reviews from the LLM. It focuses
;; on several key areas, including:
;; *   Setting the LLM's persona as an expert Common Lisp programmer.
;; *   Formatting the file contents within Markdown code fences (```` ```lisp ````
;; to open, ```` ``` ```` to close) for clear parsing.
;; *   Constructing prompts that ask for summaries, feature identification,
;; identification of best practice violations, highlighting potential bugs, and
;; suggestions for improvement.
;; *   Leveraging ASDF to locate system definition files and components, providing
;; the LLM with the context of dependencies.
;; *   Dynamically injecting file contents and system information into the prompts.
;; 
;; **Observations:**
;; 
;; I've noted the consistent pattern used for LLM interaction: Setting up a
;; "system" role for the LLM, enabling the `*include-thoughts*` flag, and
;; constructing the prompt from text instructions and code snippets. Also, the use
;; of `series` in `lisp-file-contents` to stream file data is powerful, but
;; readability is something to consider.
;; 
;; **Areas for Improvement:**
;; 
;; 1.  **Critical:** There's a significant risk of errors if ASDF can't find the
;; system or file. This could lead to crashes when functions expect valid ASDF
;; components. The current code lacks proper error handling in cases of invalid
;; system names or file paths, leading to unexpected behaviour if incorrect
;; parameters are provided.
;; 2.  **Nice to have:** The common code used to bind the dynamic variables
;; `*system-instruction*` and `*include-thoughts*` is duplicated in both
;; `analyze-system-definition` and `analyze-source`. This duplication creates the
;; potential for maintenance issues.
;; 3.  **Nice to have:** The use of literal strings for Markdown formatting could
;; be made more consistent and less brittle.
;; 
;; **Actionable Suggestions:**
;; 
;; 1.  **Enhance Error Handling:** I strongly recommend adding robust error
;; checking around ASDF calls in both `analyze-system-definition` and
;; `analyze-source`. If `asdf:find-system` or `asdf:find-component` fails, an
;; appropriate error condition should be signaled instead of allowing the code to
;; continue with `NIL` values. This will considerably improve the robustness of the
;; system.
;; 2.  **Refactor Common LLM Setup:** To reduce redundancy and enhance
;; maintainability, I propose refactoring the code that sets the
;; `*system-instruction*` and `*include-thoughts*` variables. Consider a helper
;; macro like `with-llm-analysis-context` to encapsulate this.
;; 3.  **Declare `defconstant`s for Markdown Fences:** Replacing the hardcoded
;; ```` ```lisp ```` and ```` ``` ```` strings with constants will make the
;; formatting more easily adaptable.
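;; 
;; The `with-llm-analysis-context` helper from point 2 might be sketched as
;; follows (the value bound to `*system-instruction*` is simplified to a string
;; here as an assumption; the real code may build a richer content structure):
;; 
;; ```lisp
;; (defmacro with-llm-analysis-context ((system-name) &body body)
;;   "Bind the dynamic variables that frame the LLM as a Lisp expert
;; analyzing SYSTEM-NAME, then execute BODY."
;;   `(let ((*system-instruction*
;;            (format nil "You are a world class Common Lisp programmer ~
;;                         analyzing the ~A system." ,system-name))
;;          (*include-thoughts* t))
;;      ,@body))
;; ```
;; 
;; Both analysis functions could then wrap their `invoke-gemini` calls in
;; `(with-llm-analysis-context (system) ...)`, eliminating the duplicated `let`.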
;; 
;; **Code Analysis Summary: A Common Lisp Perspective**
;; 
;; Alright, let's dive into this `analyze` file. My primary function here is to
;; prepare code for analysis by the Gemini LLM, acting as a bridge between the
;; system's structure and the LLM's analytical capabilities. The goal is to provide
;; a "world-class Common Lisp programmer" persona to the LLM to get detailed
;; insights into code quality, potential issues, and areas for improvement.
;; 
;; The file contains two main entry points: `analyze-system-definition` and
;; `analyze-source`. These functions act like wrappers around the `invoke-gemini`
;; function. They're both carefully designed to set the stage for LLM analysis,
;; setting the right context and persona, and including the relevant code content
;; in the prompt. I appreciate the careful consideration given to including the
;; system definition file as context, even when analyzing individual source files.
;; This provides the Gemini LLM with important dependency information.
;; 
;; The core of the LLM interaction is straightforward: assemble a prompt with
;; instructions and code content, and send it to Gemini. I can see that the prompts
;; are constructed with a good mix of instructions and contextual information,
;; thanks to the format strings that dynamically insert the system and file names.
;; I also see that the use of `*system-instruction*` and `*include-thoughts*`
;; (dynamic binding) looks solid. The code is carefully crafted to present the
;; Gemini LLM with a comprehensive view of the code and its context.
;; 
;; **Here's the breakdown of my thoughts:**
;; 
;; *   **Functionality:** This is a neat tool for automatically analyzing Common
;; Lisp code by leveraging an LLM. It suggests code improvements, flags potential
;; bugs, and notes deviations from best practices.
;; *   **Notable Features:** The code has a well-defined interaction pattern with
;; the LLM. It focuses on clarity, context, and good use of the `series` library
;; for reading the files.
;; *   **Areas for Improvement:**
;;     1.  **Robust Error Handling:** We need to explicitly handle cases where a
;; system or file component is not found via ASDF.
;;     2.  **Duplicated setup:** I think we could clean up the duplicated binding
;; of dynamic variables by refactoring this setup into a macro or separate
;; function.
;;     3.  **Readability/Idiomaticity:** The use of `series` is somewhat
;; specialized. It may be worth considering whether a more broadly readable
;; approach would serve a generalist reader just as well.
;;     4.  **Magic strings:** The use of literal strings in markdown can be
;; refactored to constants.
;; 
;; **Actionable Suggestions:**
;; 
;; 1.  **Crucial:** Add checks for successful ASDF system and component lookup, or
;; we risk runtime errors.
;; 2.  **Nice to have:** Abstract the repetitive initialization of
;; `*system-instruction*` and `*include-thoughts*` with a macro to reduce
;; redundancy and increase readability.
;; 3.  **Nice to have:** Replace literal markdown strings with constants.
;; 
;; I believe these adjustments will make the `analyze` file more robust,
;; maintainable, and ultimately, more useful for performing high-quality Common
;; Lisp code analysis using the Gemini LLM.
;; 
;; **Code Analysis: Automated Common Lisp Analysis with Gemini**
;; 
;; Okay, as an expert in this field, I see a well-structured approach to using the
;; Gemini LLM for automated Common Lisp code analysis within the `GEMINI` system.
;; My primary focus is on how it's engineered, specifically how it leverages ASDF
;; and `series`.
;; 
;; The primary function here is to give the LLM the tools to analyze Common Lisp
;; code and system definitions. The system works by providing code formatted for
;; the LLM. It includes system instructions to help the LLM with the task. This is
;; smart: by including the ASDF system definition, the LLM gains context about the
;; code's dependencies, something that would be invaluable for thorough code
;; analysis. The `series` library is smartly used for file parsing and formatting,
;; which is an efficient approach, though it could be less immediately obvious to
;; all.
;; 
;; **Notable Features:**
;; 
;; *   **Prompt Engineering:** The use of `defparameter` for prompts and `format`
;; for injecting values ensures clear and maintainable prompts. That's a key part
;; of interacting with an LLM.
;; *   **Contextualization:** Providing the ASDF system definition file as context
;; to the LLM is a smart move, greatly increasing the quality of the analysis the
;; LLM can provide.
;; *   **Modularity:** The clear separation of functions (`lisp-file-contents`,
;; `analyze-system-definition`, `analyze-source`) promotes maintainability and
;; flexibility.
;; *   **Dynamic Variable Binding:** Using `let` for temporary dynamic variable
;; binding (`*system-instruction*`, `*include-thoughts*`) is appropriate and a
;; standard pattern.
;; 
;; **Areas for Improvement:**
;; 
;; 1.  **Critical Issue: Error Handling:** The lack of error handling around ASDF
;; calls is a glaring weakness. If ASDF can't find a system or component, it'll
;; crash. That needs to be addressed immediately.
;; 2.  **Duplication:** The similar setup of the `*system-instruction*` and
;; `*include-thoughts*` dynamic variables in both `analyze-system-definition` and
;; `analyze-source` is a minor code smell. I'd refactor that.
;; 3.  **Readability:** While I appreciate `series`, the file-reading part within
;; `lisp-file-contents` could be made more approachable. While its functional
;; structure is useful for a Lisp LLM, a more straightforward `with-open-file`
;; could be considered.
;; 
;; **Actionable Suggestions:**
;; 
;; 1.  **Robust Error Checking (Critical):** Implement error handling to gracefully
;; manage situations when a system or component isn't found by ASDF. This requires
;; checking the return values of `asdf:find-system` and `asdf:find-component`.
;; Signal an error if these fail.
;; 
;; 2.  **Refactor Context Setup (Nice to have):** Create a macro (or function) to
;; encapsulate the common LLM context setting in `analyze-system-definition` and
;; `analyze-source`. This will reduce repetition and improve code readability.
;; 
;; 3.  **Use Constants for Markdown Fences (Nice to have):** Replace the hardcoded
;; markdown fence strings (```` ```lisp ```` and ```` ``` ````) with
;; `defconstant`s. It'll make the code more maintainable.
;; 
;; **A Detailed Analysis of the 'analyze' File**
;; 
;; My immediate assessment is that this code implements a system for automated
;; Common Lisp code analysis using the Gemini LLM. It's essentially a
;; well-structured wrapper around `invoke-gemini`, designed to prepare the prompts
;; and context for effective code analysis. I see a clear interaction pattern with
;; the LLM: setting a consistent persona, enabling thought inclusion, and
;; constructing prompts with relevant code context.
;; 
;; **Key features I recognize:**
;; 
;; *   **Prompt Engineering:** The use of `defparameter` for prompt templates is a
;; good practice, promoting maintainability and clarity.
;; *   **Contextual Awareness:** The `analyze-source` function's inclusion of the
;; system definition file, even for specific source files, shows a thoughtful
;; approach to providing the LLM with sufficient context.
;; *   **ASDF Integration:** The reliance on ASDF for system and component location
;; is essential for any Common Lisp code analysis tool.
;; *   **Series Usage:**  I notice a reliance on `series` for file reading and
;; string manipulation. While powerful, it can sometimes make code less immediately
;; accessible to those unfamiliar with the library.
;; 
;; **Areas for improvement that I immediately spot:**
;; 
;; 1.  **Robustness and Error Handling:** The code needs better error handling for
;; ASDF operations. What happens if a system or component is not found? Currently,
;; the code would likely fail.
;; 2.  **Code Duplication:** The identical `let` bindings for
;; `*system-instruction*` and `*include-thoughts*` in both
;; `analyze-system-definition` and `analyze-source` is a simple, easy refactor.
;; 3.  **Readability in `lisp-file-contents`:** While the `series` implementation
;; works, a more straightforward `with-open-file` loop might improve readability
;; without sacrificing much, but it's arguable.
;; 4.  **Magic Strings:**  Using `defconstant` for the markdown formatting strings
;; would improve maintainability.
;; 
;; **My actionable recommendations, given my experience:**
;; 
;; 1.  **Critical: Add robust error handling to ASDF calls.**  Ensure that
;; `asdf:find-system` and `asdf:find-component` return valid values before using
;; them. Signal an error or return a meaningful value if the system or component is
;; not found. This is a crucial step to prevent runtime errors and improve the
;; tool's reliability.
;; 2.  **Nice to have: Create a helper macro or function to refactor the common LLM
;; setup.** Encapsulating the repeated `let` binding for `*system-instruction*` and
;; `*include-thoughts*` will reduce duplication and improve code cleanliness. A
;; macro like `with-llm-analysis-context` seems appropriate for this purpose.
;; 3.  **Nice to have: Use `defconstant` for markdown fence strings.** Defining
;; constants for the markdown fence strings (```` ```lisp ```` and ```` ``` ````)
;; enhances readability and makes it easier to modify them in one place if needed.
;; 
;; These changes will make the code more robust, readable, and maintainable. I
;; would prioritize the error handling as the most critical improvement, as it
;; directly impacts the tool's stability.
;; 
;; **Code Analysis: Gemini System Analyzer**
;; 
;; My primary goal here is to enable automated Common Lisp code analysis using the
;; Gemini LLM. I've built two key functions, `analyze-system-definition` and
;; `analyze-source`, to facilitate this.  They follow a standard pattern: they
;; format prompts to set the LLM's persona as a seasoned Common Lisp programmer,
;; providing context from the ASDF system definition file.
;; 
;; `lisp-file-contents` is responsible for fetching the raw content of Lisp files,
;; formatting it with markdown. I'm using `series` for that, although in hindsight,
;; I'm a bit torn about its readability. I am leveraging `ASDF` extensively for
;; file location, which is a good practice.
;; 
;; **Key Observations:**
;; 
;; *   I am clearly setting up a context for code understanding.
;; *   The use of `format` for prompt generation and injecting variables keeps it
;; adaptable.
;; *   The `series` library for file processing is a good choice for this type of
;; problem, though readability is a small concern.
;; *   I am doing dependency context-aware prompt generation, which should improve
;; accuracy.
;; *   I use a `let` for setting dynamic variables, which is perfectly fine.
;; 
;; **Areas for Improvement:**
;; 
;; 1.  **Robust Error Handling (Critical):** I **must** add error checking for
;; `ASDF` calls (e.g., `find-system`, `find-component`) to prevent crashing on
;; invalid input. This is non-negotiable.
;; 2.  **Code Duplication:** I should refactor the repeated code for prompt setting
;; into a macro. This improves maintainability and readability.
;; 3.  **Magic Strings:** Defining constants for the markdown markers
;; (```` ```lisp ```` and ```` ``` ````) would be good practice and make future
;; maintenance easier.
;; 
;; **Actionable Steps:**
;; 
;; 1.  **Critical:** Implement thorough error checking after ASDF calls in both
;; `analyze-system-definition` and `analyze-source`. Signal an error if a system or
;; component isn't found, with a helpful message.
;; 2.  Create a `with-llm-analysis-context` macro to encapsulate the dynamic
;; variable bindings and reduce duplication.
;; 3.  Define `defconstant` for `+lisp-code-fence-start+` and `+code-fence-end+`.
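;; 
;; One wrinkle worth noting about step 3: `defconstant` requires the value to be
;; EQL-comparable across re-evaluations, which freshly read string literals are
;; not, so many implementations complain when the file is reloaded. A common
;; idiom (shown with the names from step 3; `alexandria` is an assumed
;; dependency) is:
;; 
;; ```lisp
;; ;; DEFINE-CONSTANT accepts a :TEST so string re-definitions compare equal.
;; (alexandria:define-constant +lisp-code-fence-start+ "```lisp"
;;   :test #'string=)
;; (alexandria:define-constant +code-fence-end+ "```"
;;   :test #'string=)
;; ```
;; 
;; Plain `defparameter` also works if true constant-ness isn't essential.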
;; 
;; I am confident that these changes will strengthen the robustness and
;; maintainability of the `analyze` file and, in turn, improve the quality of
;; Gemini's code analysis capabilities.
;; 
;; **Code Analysis: Gemini Integration for Common Lisp**
;; 
;; As an experienced Common Lisp developer, I see the code in `analyze.lisp` as a
;; well-structured framework for integrating with the Gemini LLM for code analysis.
;; It leverages several good practices and patterns commonly used in Lisp
;; development, specifically, around interacting with ASDF and utilizing format
;; strings to build prompts. The primary function is to prepare the context and
;; input for the Gemini LLM to analyze Common Lisp code, whether it is for an
;; entire ASDF system or individual source files.
;; 
;; Key aspects I've observed include the focus on defining persona and prompt
;; templates as parameters, which is quite advantageous for maintainability and
;; flexibility. The intelligent inclusion of the ASDF system definition file in the
;; prompt, even for a single-file analysis, is a commendable approach to provide
;; rich context to the LLM. Also, the use of `series` for file reading, although
;; powerful, may be less straightforward for those unfamiliar with the library;
;; then again, embracing the `series` pattern is reasonable for a tool aimed at
;; Lisp code, where stream-style processing fits naturally. The code is efficient
;; at its task.
;; 
;; However, a few areas stand out as needing attention for robust operation and
;; improved code clarity. Robust error handling for system or component lookup, is
;; paramount, and the repeated setup in `analyze-system-definition` and
;; `analyze-source` offers opportunity for refactoring. Also, `lisp-file-contents`
;; should use `defconstant` for `markdown` delimiters.
;; 
;; Therefore, I'd suggest these improvements:
;; 
;; 1.  **Critical:** Immediately add thorough error handling for cases where ASDF
;; fails to find a system or a component. Without it, an invalid ASDF system or
;; file name leads to an unhandled error condition.
;; 2.  **Nice to Have:** Refactor the repetitive setup for prompt formatting and
;; `*include-thoughts*` with a macro or helper function.
;; 3.  **Nice to Have:** Define `markdown` fence strings in `defconstant` for
;; cleaner and easier modification.
;; 
;; These changes will add more safety and robustness, which is very useful.
;; Overall, the code represents a solid foundation for automating Lisp code
;; analysis with Gemini, and with some thoughtful refinements, it can become an
;; even more powerful tool.
;; 
;; **Detailed Analysis of the `analyze` File**
;; 
;; Alright, let's dive into this `analyze` file in the `GEMINI` system. My primary
;; focus here is to create a robust system for code analysis using a Large Language
;; Model (LLM). This file is essentially the interface between the codebase and the
;; LLM, preparing and formatting the input prompts for analysis.
;; 
;; **Functionality and Purpose:**
;; 
;; The core of this file centers around two main functions:
;; `analyze-system-definition` and `analyze-source`. These functions are designed
;; to analyze Common Lisp code and ASDF system definitions. They prepare the
;; necessary input for the LLM to perform its analysis. The goal is to get feedback
;; on the code, looking for issues, potential bugs, deviations from best practices,
;; and suggestions for improvement.
;; 
;; **Key Features and Patterns:**
;; 
;; *   **LLM Interaction:** The functions follow a consistent pattern for LLM
;; interaction. They consistently set up a system instruction (using the
;; `+analyze-source-system-instruction+` format string) to set the LLM's persona as
;; a "world-class Lisp programmer". They enable the inclusion of the LLM's
;; "thoughts" (`*include-thoughts*`), and then call `invoke-gemini` with a list of
;; parts, which typically include both descriptive instructions and the actual
;; code.
;; *   **Prompt Construction:** The use of `defparameter` for prompt templates and
;; `format` for dynamic value injection is a well-established practice, ensuring
;; clean and maintainable code.
;; *   **ASDF Integration:** The code deeply integrates with ASDF to locate system
;; definition and source files, which is crucial for analyzing complete projects.
;; *   **Contextual Understanding:** For `analyze-source`, it intelligently
;; includes the system definition file along with the target source file to give
;; the LLM the broader project context.
;; 
;; **Areas for Improvement and Potential Bugs:**
;; 
;; *   **Error Handling:** The current code lacks robust error handling. If ASDF
;; can't find a system or component, there isn't a check, and you'll likely get a
;; runtime error.
;; *   **Duplication:** There's some minor duplication in the setup of the system
;; instruction and thought inclusion between `analyze-system-definition` and
;; `analyze-source`.
;; *   **Readability:** While powerful, the use of `series` in `lisp-file-contents`
;; might be less immediately clear to everyone.
;; *   **Magic Strings:** The "```lisp" and "```" markdown fences in
;; `lisp-file-contents` are hardcoded.
;; 
;; **Actionable Suggestions for Improvement:**
;; 
;; 1.  **Critical: Implement robust error handling for ASDF operations.**
;; (Importance: High)
;; 
;;     *   **Reasoning:** Handling errors is crucial for creating a robust system.
;; Invalid inputs or missing files will result in the code crashing.
;;     *   **Action:** Modify `analyze-system-definition` and `analyze-source` to
;; check if `(asdf:find-system system)` and `(asdf:find-component ... file)`
;; succeed before proceeding. Signal an appropriate error or return a meaningful
;; value if the system or component is not found.
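;; 
;;     For example (a sketch only; `asdf:find-system`'s optional `error-p`
;;     argument and `asdf:find-component`'s arguments are assumed from the
;;     description above), the lookups could be guarded like this:
;; 
;;         (let ((found-system (asdf:find-system system nil)))
;;           (unless found-system
;;             (error "ASDF system ~s not found." system))
;;           (unless (asdf:find-component found-system file)
;;             (error "File ~s not found in system ~s." file system)))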
;; 
;; 2.  **Nice to Have: Refactor common LLM setup into a helper macro or function.**
;; (Importance: Medium)
;; 
;;     *   **Reasoning:** Refactoring will improve the clarity of the code and
;; reduce duplication.
;;     *   **Action:** Create a macro (e.g., `with-llm-analysis-context`) to
;; encapsulate the common code setting the `*system-instruction*` and
;; `*include-thoughts*`.
;; 
;; 3.  **Nice to Have: Use `defconstant` for markdown fence strings.** (Importance:
;; Low)
;; 
;;     *   **Reasoning:** Making the "```lisp" and "```" strings constants makes
;; it easier to change markdown styling in the future.
;;     *   **Action:** Define constants for the markdown fences to replace the
;; hardcoded strings in `lisp-file-contents`.
;; 

The provided Lisp code from the analyze file in the gemini system is designed to facilitate automated analysis of Common Lisp source code and ASDF system definitions using a Gemini LLM.


Functionality and Purpose

The primary purpose of this code is to act as an intermediary between Common Lisp source files and the Gemini LLM for code analysis. It prepares the necessary context and prompts for the LLM, then invokes the LLM to perform the analysis.

Here's a breakdown of its key components and their functionality:

  1. +analyze-source-system-instruction+: A defparameter holding a format string that defines the LLM's persona and task. It instructs the LLM to act as a "world class Common Lisp programmer" analyzing files from a specified system. This is used to set the *system-instruction* dynamic variable for the invoke-gemini calls.
  2. +analyze-source-file-prompt+: Another defparameter that stores the main prompt template for analyzing individual source files. This prompt guides the LLM to summarize functionality, identify notable features, point out best practices, potential bugs, and suggest improvements.
  3. lisp-file-contents (source-file): This utility function reads the content of a given Lisp source file and formats it as a single string, wrapped in Markdown code fences (```lisp and ```). It leverages the series library for efficient file processing.
  4. analyze-system-definition (system): This function takes an ASDF system name as input and instructs the Gemini LLM to analyze the system's .asd definition file. It sets up the LLM's system instruction, enables thought generation (*include-thoughts* t), and then passes the system definition file's content to invoke-gemini.
  5. analyze-source (system file): This function is responsible for analyzing a specific source file within a given ASDF system. It also sets the LLM's persona and enables thoughts. Crucially, it provides the LLM with two pieces of code:
    • The system definition file (.asd), prefaced with an instruction not to analyze it but to use it for context regarding dependencies.
    • The actual source file to be analyzed, along with the specific analysis prompt. This intelligent inclusion of the system definition provides the LLM with valuable context about the project structure and dependencies, leading to potentially more insightful analysis.
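Putting the description above together, analyze-source presumably looks something like the following sketch. This is a reconstruction, not the actual source: the prompt wording, the argument order of +analyze-source-file-prompt+, and the exact use of the content/part helpers are assumptions based on the rest of this analysis.

```lisp
(defun analyze-source (system file)
  "Ask the LLM to analyze one source FILE of SYSTEM, with the .asd as context."
  (let ((*system-instruction*
          (content
           :parts (list (part (format nil +analyze-source-system-instruction+ system)))
           :role "system"))
        (*include-thoughts* t))
    (invoke-gemini
     (list
      ;; The .asd file is supplied only as context, not as the analysis target.
      (part (format nil "What follows is the definition of the `~a` system. ~
                         Do not analyze it; use it for context only." system))
      (part (lisp-file-contents
             (asdf:system-source-file (asdf:find-system system))))
      ;; The actual file to analyze, with the analysis prompt.
      (part (format nil +analyze-source-file-prompt+ file system))
      (part (lisp-file-contents
             (asdf:component-pathname
              (asdf:find-component (asdf:find-system system) file))))))))
```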

Notable Features and Patterns

  • LLM Integration: The code demonstrates a clear pattern for integrating with the Gemini LLM, using dynamic variables (*system-instruction*, *include-thoughts*) to control the LLM's behavior and invoke-gemini to send requests.
  • Context Provision: The analyze-source function's approach of providing the system definition file as contextual information, even when analyzing a separate source file, is a sophisticated technique for enhancing LLM understanding and output quality.
  • ASDF Reliance: Heavy and appropriate use of ASDF functions (asdf:find-system, asdf:system-source-file, asdf:find-component, asdf:component-pathname) for locating and accessing files within Lisp projects.
  • Prompt Templating: The use of defparameter for storing prompt templates and format for dynamic insertion of system and file names is a clean and maintainable way to construct prompts.
  • series Library Usage: The lisp-file-contents function showcases the use of the series library for stream-like processing, which can be very efficient for certain tasks.
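To make the series idiom concrete, here is a minimal, self-contained illustration of the same catenate/scan/collect pattern, independent of lisp-file-contents (a literal list stands in for scan-file, and the operators are package-qualified rather than assuming a use of the SERIES package):

```lisp
;; Prepend and append marker lines around a sequence of lines,
;; then collect the result as a list of strings.
(series:collect 'list
  (series:catenate (series:scan (list "```lisp"))
                   (series:scan (list "(defun hello () :hi)")) ; stands in for scan-file
                   (series:scan (list "```"))))
```

The three sub-series are concatenated in order, so the result is a flat list of the fence, the line, and the closing fence.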

Areas for Improvement and Potential Bugs

  • Lack of Error Handling for ASDF Operations: The current code does not explicitly handle cases where (asdf:find-system system) or (asdf:find-component (asdf:find-system system) file) might return NIL (i.e., the system or component is not found). This would lead to errors if asdf:system-source-file or asdf:component-pathname is called with NIL.
  • Code Duplication: The setup for the dynamic variables *system-instruction* and *include-thoughts* is identical in both analyze-system-definition and analyze-source.
  • Magic Strings: The Markdown code fence strings (```lisp and ```), sometimes referred to as "magic strings," are hardcoded within lisp-file-contents. While common for this specific use case, they could be more explicitly defined.

Suggestions for Improvement

  1. Critical Improvement: Add robust error handling for ASDF operations.
    • Reasoning: The absence of checks for NIL returns from asdf:find-system and asdf:find-component makes the functions brittle. If an invalid system or file name is passed, the program will crash instead of providing a graceful error message.
    • Actionable Suggestion: Modify analyze-system-definition and analyze-source to check if (asdf:find-system system) and (asdf:find-component ... file) return non-NIL values. If NIL, signal a meaningful error (e.g., using error or cerror) to inform the user that the specified system or file could not be found.
    ;; Example for analyze-system-definition
    (defun analyze-system-definition (system)
      "Analyze the system definition file in a specified system."
      (let ((found-system (asdf:find-system system)))
        (unless found-system
          (error "ASDF system '~a' not found." system))
        (let ((*system-instruction* ...)
              (*include-thoughts* t))
          (invoke-gemini
            (list
              (part (format nil "What follows is the definition of the `~a` system. Analyze this file." system))
              (part (lisp-file-contents (asdf:system-source-file found-system))))))))
    
  2. Nice to Have: Refactor common LLM setup into a helper macro or function.
    • Reasoning: The let binding for *system-instruction* and *include-thoughts* is duplicated across analyze-system-definition and analyze-source. Abstracting this into a macro or local function would reduce redundancy, improve code readability, and make future modifications to this setup easier.
    • Actionable Suggestion: Create a macro, for example, with-gemini-analysis-context, that encapsulates the let binding and takes the system name as an argument to format the system instruction.
    (defmacro with-gemini-analysis-context ((system) &body body)
      `(let ((*system-instruction*
                  (content
                    :parts
                    (list
                      (part (format nil +analyze-source-system-instruction+ ,system)))
                    :role "system"))
              (*include-thoughts* t))
          ,@body))
    
    ;; Then, usage in analyze-system-definition:
    (with-gemini-analysis-context (system)
      (invoke-gemini ...))
    
  3. Nice to Have: Use defconstant for Markdown code fence strings.
    • Reasoning: The strings "```lisp" and "```" are literal values hardcoded in lisp-file-contents. Defining them as constants improves maintainability (if the format ever needs to change), readability, and eliminates "magic strings."
    • Actionable Suggestion: Introduce two defconstants for the opening and closing Markdown code fences.
    (defconstant +lisp-code-fence-start+ "```lisp"
      "Markdown fence for starting a Lisp code block.")
    (defconstant +code-fence-end+ "```"
      "Markdown fence for ending a code block.")
    
    (defun lisp-file-contents (source-file)
      "Return the contents of a Lisp source file in a given ASDF system."
      (str:join #\newline
                (collect 'list
                  (catenate (scan (list +lisp-code-fence-start+))
                            (scan-file source-file #'read-line)
                            (scan (list +code-fence-end+))))))