I found a polynomial approximation to `Erf`:

```scheme
(define (polynomial-erf z)
  ;; Numerical Recipes 6.2
  ;; fractional error < 1.2e-7
  (let* ((t (/ 1.0 (+ 1.0 (* .5 (abs z)))))
         (ans (- 1.0 (* t
                        (exp (+ (- (* z z))
                                (+ -1.26551223
                                   (* t (+ 1.00002368
                                           (* t (+ .37409196
                                                   (* t (+ .09678418
                                                           (* t (+ -.18628806
                                                                   (* t (+ .27886807
                                                                           (* t (+ -1.13520398
                                                                                   (* t (+ 1.48851587
                                                                                           (* t (+ -.82215223
                                                                                                   (* t .17087277))))))))))))))))))))))))
    (if (>= z 0) ans (- ans))))
```
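As a quick sanity check (mine, not from the original post), the same polynomial ports directly to Python, where it can be compared against the standard library's `math.erf`:

```python
import math

def polynomial_erf(z):
    """Python port of the Numerical Recipes 6.2 erf approximation above.
    Claimed fractional error < 1.2e-7."""
    t = 1.0 / (1.0 + 0.5 * abs(z))
    # Horner-style evaluation of the fitted polynomial in t,
    # multiplied by t * exp(-z^2 + c0 + ...).
    ans = 1.0 - t * math.exp(
        -z * z
        + (-1.26551223
           + t * (1.00002368
           + t * (0.37409196
           + t * (0.09678418
           + t * (-0.18628806
           + t * (0.27886807
           + t * (-1.13520398
           + t * (1.48851587
           + t * (-0.82215223
           + t * 0.17087277))))))))))
    # erf is odd, so reflect for negative arguments.
    return ans if z >= 0 else -ans
```

On my checks the port agrees with `math.erf` to well within the claimed 1.2e-7.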

For kicks I plotted the difference between my Taylor series expansion and the polynomial approximation. (Since the polynomial approximation probably started life as the Taylor series, I wasn't expecting much enlightenment.) My Taylor series version has a tunable tolerance, though, so I dialed it up beyond the 1.2e-7 tolerance that the polynomial approximation claims. I got this graph:

So what are we seeing? First, my Taylor series version does a pretty good job for small values, but when we get to about `Erf(1.5)` or so, you can see the beginnings of a ‘square wave’ artifact. The discrete jumps are caused by adding more terms to the expansion. Down at the low end we see a smooth, wiggly curve. This is the error from the polynomial approximation. Since we are plotting the difference of two approximations, we cannot quantitatively determine how much of the error comes from each one, but we have a rough idea of what is going on.

## 1 comment:

I would generally skip Numerical Recipes, and go to sources like GAMS (Guide to Available Mathematical Software):

http://gams.nist.gov/cgi-bin/serve.cgi/Class/C8a has lots of code that is newer than NR, and probably performs better.
