## Monday, August 30, 2021

### Tail recursion and fold-left

`fold-left` has this basic recursion:

```
(fold-left f init ())      = init
(fold-left f init (a . d)) = (fold-left f (f init a) d)
```
A straightforward implementation of this is:
```
(defun fold-left (f init list)
  (if (null list)
      init
      (fold-left f (funcall f init (car list)) (cdr list))))
```
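A couple of quick checks of the straightforward version (my own examples, not from the post; the definition is repeated so the snippet stands alone):

```lisp
;; Straightforward fold-left, as above.
(defun fold-left (f init list)
  (if (null list)
      init
      (fold-left f (funcall f init (car list)) (cdr list))))

(fold-left #'+ 0 '(1 2 3 4 5))   ; => 15
(fold-left #'cons 'init '(a b))  ; => ((INIT . A) . B)
```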
The straightforward implementation uses slightly more space than necessary. The call to `f` occurs in a subproblem position, so the stack frame for `fold-left` is preserved across each call and the result of the call is returned to that stack frame.

But the result of `fold-left` is the result of the last call to `f`, so we don't need to retain the stack frame for `fold-left` on the last call. We can end the iteration on a tail call to `f` on the final element by unrolling the loop once:

```
(defun fold-left (f init list)
  (if (null list)
      init
      (fold-left-1 f init (car list) (cdr list))))

(defun fold-left-1 (f init head tail)
  (if (null tail)
      (funcall f init head)
      (fold-left-1 f (funcall f init head) (car tail) (cdr tail))))
```

There aren't many problems where this would make a difference (a challenge to readers is to come up with a program that runs fine with the unrolled loop but causes a stack overflow with the straightforward implementation), but depending on how extreme your position on tail recursion is, this might be worthwhile.
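One possible shape of such a program (my sketch, assuming an implementation that performs tail-call elimination): chain one fold into the next from the function's call on the last element. With the unrolled version that final call is a tail call, so the chain runs in constant stack; with the straightforward version each round leaves one pending `fold-left` frame.

```lisp
;; The unrolled fold-left from above, repeated so this snippet stands alone.
(defun fold-left (f init list)
  (if (null list)
      init
      (fold-left-1 f init (car list) (cdr list))))

(defun fold-left-1 (f init head tail)
  (if (null tail)
      (funcall f init head)
      (fold-left-1 f (funcall f init head) (car tail) (cdr tail))))

;; Run N rounds of folding; the function starts the next round only
;; when it is applied to the final element of the list.
(defun rounds (n)
  (if (zerop n)
      'done
      (fold-left (lambda (acc x)
                   (declare (ignore acc))
                   (if (eq x 'last)
                       (rounds (- n 1))  ; a tail call only in the unrolled version
                       x))
                 nil
                 '(a b last))))
```

With a large enough argument, `(rounds n)` should run in constant stack here but overflow the stack with the straightforward `fold-left`.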

## Friday, August 27, 2021

### A Floating-point Problem

Here's a 2x2 matrix:

```
[64919121   -159018721]
[41869520.5 -102558961]
```
We can multiply it by a 2-element vector like this:
```
(defun mxv (a b
            c d

            x
            y

            receiver)
  (funcall receiver
           (+ (* a x) (* b y))
           (+ (* c x) (* d y))))

* (mxv 64919121     -159018721
       41869520.5d0 -102558961

       3
       1

       #'list)

(35738642 2.30496005d7)
```
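As a quick sanity check on the arithmetic (mine, not from the post), the two components can be computed directly:

```lisp
(+ (* 64919121 3) (* -159018721 1))      ; => 35738642
(+ (* 41869520.5d0 3) (* -102558961 1))  ; => 2.30496005d7 (i.e. 23049600.5)
```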
Given a matrix and a result, we want to find the 2-element vector that produces that result. To do this, we compute the inverse of the matrix:
```
(defun m-inverse (a b
                  c d
                  receiver)
  (let ((det (- (* a d) (* b c))))
    (funcall receiver
             (/ d det) (/ (- b) det)
             (/ (- c) det) (/ a det))))
```
and multiply the inverse matrix by the result:
```
(defun solve (a b
              c d

              x
              y

              receiver)
  (m-inverse a b
             c d
             (lambda (ia ib
                      ic id)
               (mxv ia ib
                    ic id

                    x
                    y

                    receiver))))
```
So we can try this on our matrix:
```
* (solve 64919121     -159018721
         41869520.5d0 -102558961

         1
         0
         #'list)

(1.02558961d8 4.18695205d7)
```
and we get the wrong answer. With double precision floating point we are off by a considerable margin. If we use exact rationals instead, we get exactly the right answer:
```
* (solve 64919121         -159018721
         (+ 41869520 1/2) -102558961

         1
         0
         #'list)

(205117922 83739041)
```

I'm used to floating point calculations being off a little in the least significant digits, and I've seen how the errors can accumulate in an iterative calculation, but here we've lost all the significant digits in a straightforward non-iterative calculation. Here's what happened: the determinant of our matrix is computed by subtracting the products of the two diagonals. One diagonal is `(* 64919121 -102558961) = -6658037598793281`, while the other diagonal is `(* (+ 41869520 1/2) -159018721) = -6658037598793280.5`. This second product cannot be represented in double precision floating point, so it is rounded to -6658037598793280. This is where the error is introduced. An error of .5 in a quantity of -6658037598793281 is small indeed, but we amplify this error when we subtract out the other diagonal. We still have an absolute error of .5, but now it occurs in a quantity of 1, which makes it relatively huge. This is called “catastrophic cancellation” because the subtraction “cancelled” all the significant digits (the “catastrophe” is presumably the amplification of the error).
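A minimal demonstration (my own) of just the determinant computation, exact versus double precision:

```lisp
;; Exact rational determinant of the example matrix.
(- (* 64919121 -102558961)
   (* (+ 41869520 1/2) -159018721))   ; => -1/2

;; The same determinant in double precision: the second product rounds
;; to -6658037598793280, so the subtraction yields -1.0 instead of -0.5.
(- (* 64919121d0 -102558961)
   (* 41869520.5d0 -159018721))       ; => -1.0d0
```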

I don't care for the term “catastrophic cancellation” because it places the blame on the operation of subtraction. But the subtraction did nothing wrong. The difference between -6658037598793280 and -6658037598793281 is 1, and that is the result we got. It was the rounding in the prior step that introduced an incorrect value into the calculation. The subtraction just exposed this and made it obvious.

One could be cynical and reject floating point operations as being too unreliable. When we used exact rationals, we got the exactly correct result. But rational numbers are much slower than floating point, and they have a tendency to occupy larger and larger amounts of memory as the computation continues. Floating point is fast and efficient, but you have to be careful when you use it.

## Wednesday, August 18, 2021

### Fold right

`fold-left` takes arguments like this:

`(fold-left function init list)`
and computes
```
* (fold-left (lambda (l r) `(f ,l ,r)) 'init '(a b c))
(F (F (F INIT A) B) C)
```
Notice how `init` is the leftmost of all the arguments to the function, and each argument appears left to right as it is folded in.

Now look at the usual way `fold-right` is defined:

`(fold-right function init list)`
It computes
```
* (fold-right (lambda (l r) `(f ,l ,r)) 'init '(a b c))
(F A (F B (F C INIT)))
```
Although `init` appears first and to the left of `'(a b c)` in the arguments to `fold-right`, it is actually used as the rightmost argument in the innermost application.

It seems to me that the arguments to `fold-right` should be in this order:

```
; (fold-right function list final)
* (fold-right (lambda (l r) `(f ,l ,r)) '(a b c) 'final)
(F A (F B (F C FINAL)))
```
The argument lists to `fold-left` and `fold-right` would no longer match, but I think switching things around so that the anti-symmetry of the arguments matches the anti-symmetry of the folding makes things clearer.
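A sketch of `fold-right` with the proposed argument order (my own definition, not from the post):

```lisp
;; fold-right with the final value as the last argument, matching the
;; order in which it is actually folded in.
(defun fold-right (f list final)
  (if (null list)
      final
      (funcall f (car list) (fold-right f (cdr list) final))))

(fold-right (lambda (l r) `(f ,l ,r)) '(a b c) 'final)
;; => (F A (F B (F C FINAL)))
```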