It turns out there’s a nice functional variation of raytracing called raymarching. The algorithms involved are simple and elegant. The learning curve is shallow and you can generate great looking images without hairy trig or linear algebra.

We’ll follow the example of Georges Seurat and simply compute the color independently for each of myriads of pixels. This is efficiently done in parallel in real time on a GPU, but then you have to use shader language and I want to use Lisp. It is insanely inefficient to do this serially on the CPU in Lisp, but still fast enough to render an image in a couple of seconds.

Imagine you have some scene you want to render. There is a volume of 3-dimensional space the scene occupies. Now imagine we know for every point in 3-dimensional space how far away that point is from the nearest surface. This is a scalar value that can be assigned to any point. It is zero for every point that lies on a surface, positive for points outside a surface, and negative for points inside. This is the SDF (Signed Distance Field). The SDF is all we need to know to generate a raytraced image of the scene.
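To make the sign convention concrete, here is a minimal sketch in Python of an SDF for a sphere (the names here are mine, not from the text):

```python
import math

def sphere_sdf(center, radius):
    """Return an SDF for a sphere: signed distance from a point to the surface."""
    def sdf(p):
        dx, dy, dz = (p[0] - center[0], p[1] - center[1], p[2] - center[2])
        return math.sqrt(dx * dx + dy * dy + dz * dz) - radius
    return sdf

unit_sphere = sphere_sdf((0.0, 0.0, 0.0), 1.0)
print(unit_sphere((0.0, 0.0, 2.0)))  # 1.0  -> outside, one unit from the surface
print(unit_sphere((1.0, 0.0, 0.0)))  # 0.0  -> exactly on the surface
print(unit_sphere((0.0, 0.0, 0.0)))  # -1.0 -> inside
```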

We’ll use the SDF to feel our way through the scene. We’ll start at the tip of the ray we’re tracing. We don’t know where the surface is, but if we consult the SDF, we can determine a distance we can safely extend the ray without hitting any surface. From this new point, we can recur, again stepping along no further than the SDF at this new location permits. One of two things will happen: we either step forever or we converge on a surface.

```
(defun raymarch (sdf origin direction)
  (let iter ((distance 0)
             (count 0))
    (let* ((position (+ origin (* direction distance)))
           (free-path (funcall sdf position)))
      (if (< free-path +min-distance+)
          position ;; a hit, a very palpable hit
          (unless (or (> count +max-raymarch-iterations+)
                      (> free-path +max-distance+))
            (iter (+ distance free-path)
                  (+ count 1)))))))
```
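A rough Python translation of the same loop may make the control flow easier to follow. The constant values here are illustrative choices of mine, not the author's:

```python
import math

MIN_DISTANCE = 0.001    # "+min-distance+": close enough to count as a hit
MAX_DISTANCE = 100.0    # "+max-distance+": give up once the free path is huge
MAX_ITERATIONS = 200    # "+max-raymarch-iterations+"

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def raymarch(sdf, origin, direction):
    """March along the ray, each step no longer than the SDF allows.
    Returns the hit position, or None on a miss."""
    distance = 0.0
    for _ in range(MAX_ITERATIONS):
        position = add(origin, scale(direction, distance))
        free_path = sdf(position)
        if free_path < MIN_DISTANCE:
            return position            # a hit, a very palpable hit
        if free_path > MAX_DISTANCE:
            return None                # marched off into empty space
        distance += free_path
    return None

# March straight down the z axis at a unit sphere centered at the origin.
sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
hit = raymarch(sphere, (0.0, 0.0, -4.0), (0.0, 0.0, 1.0))
print(hit)   # roughly (0, 0, -1): the near surface of the sphere
```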

To convert an SDF to a Seurat function, we trace an imaginary ray from our eye, through the screen, and into the scene. The ray origin is at your eye, and we’ll say that is about 3 units in front of the window. The ray will travel 3 units to the screen and hit the window at point `(i,j)`, so the ray direction is `(normalize (vector i j 3))`. We march along the ray to find if we hit a surface. If we did, we compute the amount of light the camera sees using the Lambert shading model.

```
(defun sdf->seurat (sdf)
  (let ((eye-position (vector 0 0 -4))
        (light-direction (normalize (vector 20 40 -30))))
    (lambda (i j)
      (let* ((ray-direction (normalize (vector i j 3)))
             (hit (raymarch sdf eye-position ray-direction)))
        (if hit
            (* #(0 1 0) (lambert sdf hit light-direction))
            (vector 0 0 0))))))
```

Lambert shading is proportional to the cosine of the angle between the surface normal and the light falling on it, so we take the dot product of the light direction with the normal to the surface at the point the light hits it. If we know the SDF, we can approximate the normal vector at a point by probing the SDF nearby the point and seeing how it changes.

```
(defun lambert (sdf hit light-direction)
  (dot (pseudonormal sdf hit) light-direction))

(defun pseudonormal (sdf position)
  (let ((o (funcall sdf position))
        (dsdx (funcall sdf (+ #(0.001 0 0) position)))
        (dsdy (funcall sdf (+ #(0 0.001 0) position)))
        (dsdz (funcall sdf (+ #(0 0 0.001) position))))
    (normalize (vector (- dsdx o) (- dsdy o) (- dsdz o)))))
```
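The same finite-difference probe in Python, with the 0.001 epsilon from the Lisp version (the helper names are mine). On a unit sphere the estimated normal should point radially outward:

```python
import math

EPS = 0.001

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pseudonormal(sdf, p):
    """Approximate the surface normal by probing the SDF a little
    along each axis and normalizing the change."""
    o = sdf(p)
    dsdx = sdf((p[0] + EPS, p[1], p[2]))
    dsdy = sdf((p[0], p[1] + EPS, p[2]))
    dsdz = sdf((p[0], p[1], p[2] + EPS))
    return normalize((dsdx - o, dsdy - o, dsdz - o))

def lambert(sdf, hit, light_direction):
    return dot(pseudonormal(sdf, hit), light_direction)

# The normal of a unit sphere at (0, 0, -1) should point toward -z.
sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
print(pseudonormal(sphere, (0.0, 0.0, -1.0)))  # close to (0, 0, -1)
```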

These are all you need to generate good-looking 3-d images from an SDF. Now the SDFs for primitive geometric shapes are pretty simple. Here is the SDF for a sphere.

```
(defun sdf-sphere (position radius)
  (lambda (vector)
    (- (length (- vector position)) radius)))
```

and the SDF for the ground plane

```
(defun sdf-ground (h)
  (lambda (vector)
    (+ (svref vector 1) h)))
```

Given the SDF for two objects, you can use higher order functions to compose them into a scene. Taking the minimum of two SDFs will give you the union of the shapes. Taking the maximum will give you the intersection of two shapes. Other higher order functions on SDFs can blend two SDFs. This has the effect of morphing the shapes together in the image.
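These combinators are one-liners. Here is a sketch in Python; `sdf_blend` uses the common polynomial smooth-minimum trick (a widely used choice, not necessarily the author's):

```python
import math

def sdf_union(a, b):         # nearest of the two surfaces
    return lambda p: min(a(p), b(p))

def sdf_intersection(a, b):  # inside both shapes
    return lambda p: max(a(p), b(p))

def sdf_blend(a, b, k=0.5):
    """Smooth minimum: like union, but the seam between the two
    shapes is rounded off, morphing them together."""
    def sdf(p):
        da, db = a(p), b(p)
        h = max(k - abs(da - db), 0.0) / k
        return min(da, db) - h * h * k * 0.25
    return sdf

sphere = lambda p: math.sqrt(sum(c * c for c in p)) - 1.0
ground = lambda p: p[1] + 1.0            # plane at y = -1
scene = sdf_union(sphere, ground)
print(scene((0.0, 3.0, 0.0)))  # 2.0: the sphere is nearer than the ground
```

The composed `scene` is just another SDF, so it can be handed straight to the raymarcher.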

I like this approach to raytracing because the arithmetic is straightforward and obvious. You only need the simplest of vector arithmetic, and you don’t need linear algebra or matrix math to get started (although you’ll want projection matrices later on when you want to move your camera around). I’m more comfortable with recursive functions than 3x3 matrices.

This approach to raytracing is best done on a graphics card. These algorithms are pretty straightforward to code up in shader language, but shader language is fairly primitive and doesn’t have higher-order functions or closures, so code like the above has to be rewritten to avoid them.