```scheme
(define (erf z tolerance)
  (define (erf-taylor terms)
    (* (/ 2 (sqrt pi))
       (sum 0 terms
            (lambda (n)
              (/ (* (expt -1 n) (expt z (+ (* n 2) 1)))
                 (* (factorial n) (+ (* n 2) 1)))))))
  (define (iter approx terms)
    (let ((new-approx (erf-taylor terms)))
      (if (and (> new-approx -1.0)
               (< new-approx 1.0)
               (< (abs (- new-approx approx)) tolerance))
          new-approx
          (iter new-approx (+ terms 1)))))
  (iter 0.0 1))
```

This was good enough to get a curve plotted, but it is a terrible way to compute `Erf`.

Since `Erf` is such an important function in statistics, I thought that I should put a *little* more effort into it, so I looked around and found “Computation of the error function erf in arbitrary precision with correct rounding” by Chevillard and Revol. This paper seriously goes to town. Their goal is to return the floating-point number closest to the actual value of `Erf` according to the rounding rules. It isn't enough to simply calculate `Erf` using floats; you have to keep careful track of where rounding errors can happen and go out of your way to minimize them. This is probably more work than it is worth for me, but I assume that Chevillard and Revol have already done the *really* hard analysis.
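
For the record, the Taylor-series code above leans on a few helpers that aren't part of standard Scheme: `sum`, `factorial`, and `pi`. Minimal stand-in definitions along these lines will do (my own versions may differ in detail):

```scheme
;; Smallest-possible stand-ins for the helpers the Taylor-series
;; code assumes.  `sum` adds f(n) for the integers low..high
;; inclusive; `factorial` is the textbook recursion.
(define pi (* 4 (atan 1)))

(define (sum low high f)
  (if (> low high)
      0
      (+ (f low) (sum (+ low 1) high f))))

(define (factorial n)
  (if (<= n 1)
      1
      (* n (factorial (- n 1)))))
```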

Chevillard and Revol considered an approach using continued fractions. They dismissed it as being too slow for practical computation. They're correct. But it just so happens that I wrote a continued fraction library for Scheme, so although the technique is slow, using a continued fraction approximation is the easiest path to validating the faster computations.

Unfortunately, my continued fraction library is too limited to deal with non-integer coefficients, so I have to fix that first.
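
To make the shape of the problem concrete, here is a throwaway sketch (not my library's API) of evaluating a generalized continued fraction bottom-up, where the coefficients can be any numbers at all. As a target I'll use Laplace's classical continued fraction for `Erfc` (the complement, `1 - Erf`), whose partial numerators are the half-integers 1/2, 1, 3/2, 2, … — note this is not necessarily the fraction Chevillard and Revol analyze, and it only converges well for largish arguments:

```scheme
;; Evaluate the finite generalized continued fraction
;;   b(0) + a(1)/(b(1) + a(2)/(b(2) + ... + a(depth)/b(depth)))
;; bottom-up.  a and b are procedures and may return any number,
;; integer or not.
(define (cf-eval a b depth)
  (let loop ((n depth) (acc 0))
    (if (= n 0)
        (+ (b 0) acc)
        (loop (- n 1) (/ (a n) (+ (b n) acc))))))

;; Laplace's continued fraction for erfc, for x > 0:
;;   sqrt(pi) * exp(x^2) * erfc(x)
;;     = 1/(x + (1/2)/(x + 1/(x + (3/2)/(x + 2/(x + ...)))))
;; Partial numerators: 1, 1/2, 1, 3/2, 2, ...; every partial
;; denominator is x.
(define (erfc-laplace x depth)
  (* (/ (exp (- (* x x))) (sqrt (* 4 (atan 1))))   ; e^(-x^2)/sqrt(pi)
     (cf-eval (lambda (n) (if (= n 1) 1 (/ (- n 1) 2)))
              (lambda (n) (if (= n 0) 0 x))
              depth)))
```

In a Scheme with exact rationals, `(/ (- n 1) 2)` on an integer yields an exact rational like `3/2` — exactly the sort of non-integer coefficient my library currently chokes on.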