\section{Numerical derivatives}

In this section, we detail the computation of the numerical derivative of 
a given function.

In the first part, we briefly report the first order forward formula, which 
is based on the Taylor theorem.
We then present the naive algorithm based on these mathematical formulas.
In the second part, we make some experiments in Scilab and compare our
naive algorithm with the \emph{derivative} Scilab primitive.
In the third part, we analyse
why and how floating point numbers must be taken into account when
numerical derivatives are computed.

\subsection{Theory}

The basic result is the Taylor formula for a function of one variable \cite{dixmier}

\begin{eqnarray}
f(x+h) &=& f(x) 
+ h f^\prime(x)
+\frac{h^2}{2} f^{\prime \prime}(x)
+\frac{h^3}{6} f^{\prime \prime \prime}(x)
+\frac{h^4}{24} f^{\prime \prime \prime \prime}(x) + \mathcal{O}(h^5)
\end{eqnarray}

If we write the Taylor formula of a one-variable function $f(x)$, truncated at second order,
\begin{eqnarray}
f(x+h) &\approx& f(x) + h f^\prime(x) + \frac{h^2}{2} f^{\prime \prime}(x)
\end{eqnarray}
we get the forward difference, which approximates the first derivative at order 1
\begin{eqnarray}
f^\prime(x) &\approx& \frac{f(x+h) - f(x)}{h} - \frac{h}{2} f^{\prime \prime}(x)
\end{eqnarray}

The naive algorithm to compute the numerical derivative of
a function of one variable is presented in figure \ref{naive-numericalderivative}.

\begin{figure}[htbp]
\begin{algorithmic}
\STATE $f'(x) \gets (f(x+h)-f(x))/h$
\end{algorithmic}
\caption{Naive algorithm to compute the numerical derivative of a function of one variable}
\label{naive-numericalderivative}
\end{figure}

\subsection{Experiments}

The following Scilab function is a straightforward implementation
of the previous algorithm.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
function fp = myfprime(f,x,h)
  fp = (f(x+h) - f(x))/h;
endfunction
\end{lstlisting}

In our experiments, we will compute the derivative of the
square function $f(x)=x^2$, whose exact derivative is $f'(x)=2x$.
The following Scilab script implements the square function.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
function y = myfunction (x)
  y = x*x;
endfunction
\end{lstlisting}

The most naive idea is that the computed relative error
is small when the step $h$ is small. Because \emph{small}
is not a priori clear, we take the machine epsilon of double
precision arithmetic, $\epsilon\approx 10^{-16}$, as a good
candidate for \emph{small}.
In the following script, we compare the computed
relative error produced by our naive method with step
$h=\epsilon$ and by the \emph{derivative} primitive with its
default step.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
x = 1.0;
fpref = derivative(myfunction,x,order=1);
e = abs(fpref-2.0)/2.0;
mprintf("Scilab f''=%e, error=%e\n", fpref,e);
h = 1.e-16;
fp = myfprime(myfunction,x,h);
e = abs(fp-2.0)/2.0;
mprintf("Naive f''=%e, h=%e, error=%e\n", fp,h,e);
\end{lstlisting}

When executed, the previous script prints out:

\begin{verbatim}
Scilab f'=2.000000e+000, error=7.450581e-009
Naive f'=0.000000e+000, h=1.000000e-016, error=1.000000e+000
\end{verbatim}

Our naive method seems to be quite inaccurate: it does not have
even one significant digit!
The Scilab primitive, instead, has 9 significant digits.

Since our faith is based on the truth of the mathematical
theory, some deeper experiments must be performed.
We then make the following experiment, taking an
initial step $h=1.0$ and dividing $h$ by 10 at each of the
20 iterations of a loop.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
x = 1.0;
fpref = derivative(myfunction,x,order=1);
e = abs(fpref-2.0)/2.0;
mprintf("Scilab f''=%e, error=%e\n", fpref,e);
h = 1.0;
for i=1:20
  h=h/10.0;
  fp = myfprime(myfunction,x,h);
  e = abs(fp-2.0)/2.0;
  mprintf("Naive f''=%e, h=%e, error=%e\n", fp,h,e);
end
\end{lstlisting}

Scilab then produces the following output.

\begin{verbatim}
Scilab f'=2.000000e+000, error=7.450581e-009
Naive f'=2.100000e+000, h=1.000000e-001, error=5.000000e-002
Naive f'=2.010000e+000, h=1.000000e-002, error=5.000000e-003
Naive f'=2.001000e+000, h=1.000000e-003, error=5.000000e-004
Naive f'=2.000100e+000, h=1.000000e-004, error=5.000000e-005
Naive f'=2.000010e+000, h=1.000000e-005, error=5.000007e-006
Naive f'=2.000001e+000, h=1.000000e-006, error=4.999622e-007
Naive f'=2.000000e+000, h=1.000000e-007, error=5.054390e-008
Naive f'=2.000000e+000, h=1.000000e-008, error=6.077471e-009
Naive f'=2.000000e+000, h=1.000000e-009, error=8.274037e-008
Naive f'=2.000000e+000, h=1.000000e-010, error=8.274037e-008
Naive f'=2.000000e+000, h=1.000000e-011, error=8.274037e-008
Naive f'=2.000178e+000, h=1.000000e-012, error=8.890058e-005
Naive f'=1.998401e+000, h=1.000000e-013, error=7.992778e-004
Naive f'=1.998401e+000, h=1.000000e-014, error=7.992778e-004
Naive f'=2.220446e+000, h=1.000000e-015, error=1.102230e-001
Naive f'=0.000000e+000, h=1.000000e-016, error=1.000000e+000
Naive f'=0.000000e+000, h=1.000000e-017, error=1.000000e+000
Naive f'=0.000000e+000, h=1.000000e-018, error=1.000000e+000
Naive f'=0.000000e+000, h=1.000000e-019, error=1.000000e+000
Naive f'=0.000000e+000, h=1.000000e-020, error=1.000000e+000
\end{verbatim}

We see that the relative error first decreases, and then
increases.
Obviously, the optimum step is approximately $h=10^{-8}$, where the
relative error is approximately $e_r \approx 6 \times 10^{-9}$.
We should not be surprised to see that Scilab has computed
a derivative which is near the optimum.

\subsection{Explanations}

\subsubsection{Floating point implementation}

With a floating point computer, the total
error that we get from the forward difference approximation
is (skipping the multiplication constants) the sum of the
linearization error $E_l = h$ (i.e. the $\mathcal{O}(h)$ term)
and the rounding error $r f(x)$ on the difference $f(x+h) - f(x)$
\begin{eqnarray}
E = \frac{r f(x)}{h} + \frac{h}{2} f^{\prime \prime}(x)
\end{eqnarray}
When $h\rightarrow 0$, the error is then the sum of a
term which diverges toward $+\infty$ and a term which converges toward 0.
The total error is minimized when both terms are equal.
With a single precision computation, the rounding error is $r \approx 10^{-7}$
and with a double precision computation, the rounding error is $r \approx 10^{-16}$.
We make here the assumption that the values $f(x)$ and 
$f^{\prime \prime}(x)$ are near 1 so that the error can be written 
\begin{eqnarray}
E = \frac{r}{h} + h
\end{eqnarray}
We want to compute the step $h$ from the rounding error $r$ with a 
step satisfying 
\begin{eqnarray}
h = r^\alpha
\end{eqnarray}
for some $\alpha > 0$.
The total error is therefore 
\begin{eqnarray}
E = r^{1-\alpha} + r^\alpha
\end{eqnarray}
The total error is minimized when both terms are equal, that is, 
when the exponents are equal  $1-\alpha = \alpha$ which leads to 
\begin{eqnarray}
\alpha = \frac{1}{2}
\end{eqnarray}
We conclude that the step which minimizes the error is 
\begin{eqnarray}
h = r^{1/2}
\end{eqnarray}
and the associated error is 
\begin{eqnarray}
E = 2 r^{1/2}
\end{eqnarray}

Typical values with single precision are $h = 10^{-4}$ and $E = 2 \times 10^{-4}$,
and with double precision, $h = 10^{-8}$ and $E = 2 \times 10^{-8}$.
This is the minimum error achievable with a forward difference
numerical derivative.
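
The following sketch checks this prediction in double precision,
reusing the \emph{myfprime} and \emph{myfunction} functions defined earlier
and taking the Scilab machine epsilon \texttt{\%eps} as the rounding error $r$.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Check the predicted optimal step in double precision:
// with r = %eps, the step is h = sqrt(%eps), about 1.49e-8.
x = 1.0;
h = sqrt(%eps);
fp = myfprime(myfunction,x,h);
e = abs(fp-2.0)/2.0;
mprintf("h=%e, error=%e\n", h, e);
\end{lstlisting}

The computed relative error should have the magnitude $10^{-8}$
predicted by the analysis.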

To get a significant value of the step $h$, the step is computed
with respect to the point where the derivative is to be computed
\begin{eqnarray}
h = r^{1/2} |x|
\end{eqnarray}
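
The following sketch applies this scaling at a point far from 1, still
with the \emph{myfprime} and \emph{myfunction} functions defined earlier.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Scale the step by the magnitude of the point x (a sketch):
// at x = 100, the exact derivative of the square function is 200.
x = 100.0;
h = sqrt(%eps) * abs(x);
fp = myfprime(myfunction,x,h);
e = abs(fp-2*x)/(2*x);
mprintf("x=%e, h=%e, error=%e\n", x, h, e);
\end{lstlisting}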

One can generalize the previous computation under the
assumption that the rounding error is divided by $h^{\alpha_1}$,
as in the Taylor expansion above, and that the order of the formula
is $\mathcal{O}(h^{\alpha_2})$. The total error is then
\begin{eqnarray}
E = \frac{r}{h^{\alpha_1}} + h^{\alpha_2}
\end{eqnarray}
The optimal step is then 
\begin{eqnarray}
h = r^{\frac{1}{\alpha_1 + \alpha_2}}
\end{eqnarray}
and the associated error is 
\begin{eqnarray}
E = 2 r^{\frac{\alpha_2}{\alpha_1 + \alpha_2}}
\end{eqnarray}
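
These two formulas are easily transcribed into Scilab. In the following
sketch, the helper function \emph{optimalstep} is our own name, not a
Scilab primitive.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Optimal step and associated error for a finite difference
// formula with rounding term r/h^alpha1 and order O(h^alpha2).
function [h,E] = optimalstep(r,alpha1,alpha2)
  h = r^(1/(alpha1+alpha2));
  E = 2 * r^(alpha2/(alpha1+alpha2));
endfunction
// Forward difference (alpha1=1, alpha2=1): h is about 1.5e-8
[h1,E1] = optimalstep(%eps,1,1);
// Centered difference (alpha1=1, alpha2=2): h is about 6.1e-6
[h2,E2] = optimalstep(%eps,1,2);
\end{lstlisting}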


An additional trick \cite{NumericalRecipes} is to compute the
step $h$ so that the rounding error for the sum $x+h$ is minimum.
This is performed by the following algorithm, which involves a temporary
variable $t$
\begin{eqnarray}
t = x + h\\
h = t - x
\end{eqnarray}
The new step $h$ is then the exactly representable difference between
the rounded value $t$ and $x$, so that $x+h$ is computed without
rounding error.
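
In Scilab, this trick could be written as follows (a sketch, where $x$
is the point at which the derivative is to be computed).

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Recompute h so that x + h is exactly representable:
// t rounds x + h to the nearest floating point number and
// the new h is the exactly representable difference t - x.
h = sqrt(%eps);
t = x + h;
h = t - x;
\end{lstlisting}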


\subsubsection{Results}

In the following results, the variable $x$ is either a
scalar $x \in \RR$ or a vector $x\in \RR^n$.
When $x$ is a vector, the perturbation of the $i$-th component
is $h e_i$, where
\begin{eqnarray}
e_i = (0,\ldots,0,1,0,\ldots,0)^T
\end{eqnarray}
is the $i$-th unit vector, so that the only non-zero component of
the perturbation is the $i$-th one.
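
A sketch of the corresponding forward difference gradient is the
following function, whose name \emph{myfgrad} is ours.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Forward difference gradient of f at the vector x (a sketch).
function g = myfgrad(f,x,h)
  n = size(x,"*");
  g = zeros(n,1);
  for i = 1:n
    e = zeros(n,1);
    e(i) = 1.0;                  // i-th unit vector
    g(i) = (f(x+h*e) - f(x))/h;  // perturb the i-th component only
  end
endfunction
\end{lstlisting}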

\begin{itemize}

\item First derivative: forward, 2 points
\begin{eqnarray}
f^\prime(x) &\approx& \frac{f(x+h)  - f(x)}{h} + \mathcal{O}(h)
\end{eqnarray}
Optimal step: $h = r^{1/2}$ and error $E=2r^{1/2}$.\\
Single precision: $h \approx 10^{-4}$ and $E\approx 10^{-4}$.\\
Double precision: $h \approx 10^{-8}$ and $E\approx 10^{-8}$.

\item First derivative: backward, 2 points
\begin{eqnarray}
f^\prime(x) &\approx& \frac{f(x) - f(x-h)}{h} + \mathcal{O}(h)
\end{eqnarray}
Optimal step: $h = r^{1/2}$ and error $E=2r^{1/2}$.\\
Single precision: $h \approx 10^{-4}$ and $E\approx 10^{-4}$.\\
Double precision: $h \approx 10^{-8}$ and $E\approx 10^{-8}$.

\item First derivative: centered, 2 points
\begin{eqnarray}
f^\prime(x) &\approx& \frac{f(x+h) - f(x-h)}{2h} + \mathcal{O}(h^2)
\end{eqnarray}
Optimal step: $h = r^{1/3}$ and error $E=2r^{2/3}$.\\
Single precision: $h \approx 10^{-3}$ and $E\approx 10^{-5}$.\\
Double precision: $h \approx 10^{-5}$ and $E\approx 10^{-10}$.

\end{itemize}
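
The following sketch compares the three formulas at $x=1$, each with its
optimal step in double precision, reusing the \emph{myfunction} defined
earlier.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Compare forward, backward and centered differences at x = 1,
// each with its optimal step (double precision, r = %eps).
x = 1.0;
h1 = sqrt(%eps);    // forward and backward: h = r^(1/2)
h2 = %eps^(1/3);    // centered: h = r^(1/3)
ffw = (myfunction(x+h1) - myfunction(x))/h1;
fbw = (myfunction(x) - myfunction(x-h1))/h1;
fce = (myfunction(x+h2) - myfunction(x-h2))/(2*h2);
mprintf("forward : error=%e\n", abs(ffw-2.0)/2.0);
mprintf("backward: error=%e\n", abs(fbw-2.0)/2.0);
mprintf("centered: error=%e\n", abs(fce-2.0)/2.0);
\end{lstlisting}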

\subsubsection{Robust algorithm}

The robust algorithm to compute the numerical derivative of
a function of one variable is presented in figure \ref{robust-numericalderivative}.

\begin{figure}[htbp]
\begin{algorithmic}
\STATE $h \gets \sqrt{\epsilon}$
\STATE $f'(x) \gets (f(x+h)-f(x))/h$
\end{algorithmic}
\caption{A more robust algorithm to compute the numerical derivative of a function of one variable}
\label{robust-numericalderivative}
\end{figure}
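
A direct Scilab transcription of this algorithm is the following
function; the name \emph{myfprime2} is ours.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Robust forward difference: the step is not left to the user,
// but set to the optimal value sqrt(%eps).
function fp = myfprime2(f,x)
  h = sqrt(%eps);
  fp = (f(x+h) - f(x))/h;
endfunction
\end{lstlisting}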

\subsection{One more step}

In this section, we analyse the behaviour of \emph{derivative}
when the point $x$ is large ($x \rightarrow \infty$),
when $x$ is small ($x \rightarrow 0$) and when $x = 0$.
We compare these results with the \emph{numdiff} command,
which does not use the same step strategy. As we are going
to see, both commands perform the same when $x$ is near 1, but
perform very differently when $x$ is large or small.

We have already explained the theory of the floating
point implementation of the \emph{derivative} command.
Is it completely \emph{bulletproof}? Not exactly.

See for example the following Scilab session, where one computes the
numerical derivative of $f(x)=x^2$ for $x=10^{-100}$. The
expected result is $f'(x) = 2 \times 10^{-100}$.

\begin{verbatim}
-->fp = derivative(myfunction,1.e-100,order=1)
 fp  =
    0.0000000149011611938477  
-->fe=2.e-100
 fe  =
    2.000000000000000040-100  
-->e = abs(fp-fe)/fe
 e  =
    7.450580596923828243D+91  
\end{verbatim}

The result does not have any significant digits.

The explanation is that the step is computed with $h = \sqrt{\epsilon}\approx 10^{-8}$.
Then $f(x+h)=f(10^{-100} + 10^{-8}) \approx f(10^{-8}) = 10^{-16}$, because the
term $10^{-100}$ is much smaller than $10^{-8}$. The result of the
computation is therefore $(f(x+h) - f(x))/h = (10^{-16} - 10^{-200})/10^{-8} \approx 10^{-8}$.

The additional experiment

\begin{verbatim}
-->sqrt(%eps)
 ans  =
    0.0000000149011611938477  
\end{verbatim}

allows us to check that the result of the computation is simply $\sqrt{\epsilon}$.
That experiment shows that the \emph{derivative} command uses a
wrong default step $h$ when $x$ is very small.

To improve the accuracy of the computation, one can take control of the
step $h$. A reasonable solution is to use $h=\sqrt{\epsilon}|x|$ so that the
step is scaled depending on $x$.
The following script illustrates that method, which produces
results with 8 significant digits.

\begin{verbatim}
-->fp = derivative(myfunction,1.e-100,order=1,h=sqrt(%eps)*1.e-100)
 fp  =
    2.000000013099139394-100  
-->fe=2.e-100
 fe  =
    2.000000000000000040-100  
-->e = abs(fp-fe)/fe
 e  =
    0.0000000065495696770794  
\end{verbatim}

But when $x$ is exactly zero, the scaling method cannot work, because
it would produce the step $h=0$, and therefore a division by zero.
In that case, the default step provides good accuracy.
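
A sketch of a step strategy which scales with $x$ but avoids the $h=0$
case is the following function; the guard $\max(|x|,1)$ is our choice,
not the default used by \emph{derivative}.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Scaled step with a guard against x = 0 (a sketch):
// h is proportional to |x| for large x, but never zero.
function fp = myfprime3(f,x)
  h = sqrt(%eps) * max(abs(x),1.0);
  fp = (f(x+h) - f(x))/h;
endfunction
\end{lstlisting}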

Another command is available in Scilab to compute the 
numerical derivatives of a given function, that is \emph{numdiff}.
The \emph{numdiff} command uses the step 
\begin{eqnarray}
h=\sqrt{\epsilon}(1+10^{-3}|x|).
\end{eqnarray}
In the following paragraphs, we try to analyse why this formula
has been chosen. As we are going to check experimentally, this step
formula performs better than that of \emph{derivative} when $x$ is
large.
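
The behaviour of this step formula can be checked by evaluating it for
several magnitudes of $x$, as in the following sketch.

\lstset{language=Scilab}
\lstset{numbers=left}
\lstset{basicstyle=\footnotesize}
\lstset{keywordstyle=\bfseries}
\begin{lstlisting}
// Evaluate the numdiff step for several magnitudes of x:
// near x = 0 the step stays close to sqrt(%eps), while for
// large x it grows proportionally to |x|.
for x = [0.0 1.0 1.e10]
  h = sqrt(%eps) * (1 + 1.e-3 * abs(x));
  mprintf("x=%e, h=%e\n", x, h);
end
\end{lstlisting}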

As we can see in the following session, the behaviour is approximately
the same when the value of $x$ is 1.

\begin{verbatim}
-->fp = numdiff(myfunction,1.0)
 fp  =
    2.0000000189353417390237  
-->fe=2.0
 fe  =
    2.  
-->e = abs(fp-fe)/fe
 e  =
    9.468D-09  
\end{verbatim}

The accuracy is slightly decreased with respect to the optimal
value 7.450581e-009 which was produced by \emph{derivative}. But the number
of significant digits is approximately the same, i.e. 9 digits.

The goal of this step formula is to produce good accuracy when the value
of $x$ is large: there, the \emph{numdiff} command produces accurate
results, while \emph{derivative} performs poorly.

\begin{verbatim}
-->numdiff(myfunction,1.e10)
  ans  =
    2.000D+10  
-->derivative(myfunction,1.e10,order=1)
 ans  =
    0.  
\end{verbatim}

This step is a trade-off: it keeps good accuracy with
large values of $x$, but produces a slightly sub-optimal step size when
$x$ is near 1. The behaviour near zero is the same, i.e. both commands
produce wrong results when $x \rightarrow 0$ with $x\neq 0$.

\subsection{References}

A reference for numerical derivatives
is \cite{AbramowitzStegun1972}, chapter 25, "Numerical Interpolation,
Differentiation and Integration" (p. 875).
The webpage \cite{schimdtnd} and the book \cite{NumericalRecipes} give
results about the rounding errors.