Numerical Analysis - 4. Root-Finding (Overview)


ROOT-FINDING
Solution of a Nonlinear Equation, f(x) = 0.

Prepared by: ENGR. ALEXANDER S. CARRASCAL

2 Bisection Method
This is the simplest and most robust method for finding a root of an equation.
One of its main drawbacks is that it needs two initial guesses, a and b, which
bracket the root.
Let f_a = f(a) and f_b = f(b) such that f_a f_b ≤ 0. Clearly, if f_a f_b = 0 then one or
both of a and b must be a root of f(x) = 0; if f_a f_b < 0, the Intermediate Value
Theorem guarantees a root between a and b.
The basic algorithm for the bisection method relies on repeated application of:
Let c = (a + b)/2,
if f_c = f(c) = 0 then x = c is an exact solution,
else if f_a f_c < 0 then the root lies in the interval (a, c),
else the root lies in the interval (c, b).
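A minimal Python sketch of this bisection loop is given below; the function name,
tolerance, and iteration cap are illustrative assumptions rather than part of the
original notes.

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: repeatedly halve a bracketing interval [a, b]."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs to bracket a root")
    for _ in range(max_iter):
        c = (a + b) / 2.0              # midpoint of the current bracket
        fc = f(c)
        if fc == 0 or (b - a) / 2.0 < tol:
            return c                   # exact root, or bracket is small enough
        if fa * fc < 0:                # root lies in (a, c)
            b, fb = c, fc
        else:                          # root lies in (c, b)
            a, fa = c, fc
    return (a + b) / 2.0

# Example: f(x) = x^3 - x - 2 has a root near 1.5214 in [1, 2]
print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))
```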


3 Bisection Method

Figure 1: Graphical representation of the bisection method showing two initial guesses, a and b, bracketing the root.


4 False Position (Regula Falsi) Method


This method is similar to the bisection method in that it requires two initial
guesses to bracket the root. It is also known as the Linear Interpolation Method.
Instead of dividing the region in two, a linear interpolation is used to obtain a
new point which is (hopefully, but not necessarily) closer to the root.
The basic algorithm for the linear interpolation method is:
Let f_a = f(a) and f_b = f(b) such that f_a f_b ≤ 0, and let
c = (a f_b - b f_a)/(f_b - f_a) = a - f_a (b - a)/(f_b - f_a) = b - f_b (b - a)/(f_b - f_a).
If f_c = f(c) = 0 then x = c is an exact solution,
else if f_a f_c < 0 then the root lies in the interval (a, c),
else the root lies in the interval (c, b).
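A minimal Python sketch of the false position update, under the same illustrative
assumptions about names and stopping criteria:

```python
def false_position(f, a, b, tol=1e-10, max_iter=100):
    """Regula falsi: replace the midpoint by the chord's x-axis crossing."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs to bracket a root")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # chord through (a, fa), (b, fb) meets the x axis
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                     # root lies in (a, c)
            b, fb = c, fc
        else:                               # root lies in (c, b)
            a, fa = c, fc
    return c

print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))
```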


5 False Position Method

Figure 2: Graphical representation of the false position method showing two initial guesses, a and b, bracketing the root.


6 Newton-Raphson Method
Consider the Taylor series expansion of f(x) about some point x = x_0:
f(x) = f(x_0) + (x - x_0) f'(x_0) + (1/2)(x - x_0)^2 f''(x_0) + O(|x - x_0|^3).
Setting the quadratic and higher-order terms to zero and solving the linear
approximation of f(x) = 0 for x gives
x_1 = x_0 - f(x_0)/f'(x_0).
Subsequent iterations are defined in a similar manner as
x_{n+1} = x_n - f(x_n)/f'(x_n).
Geometrically, x_{n+1} can be interpreted as the value of x at which a line, passing
through the point (x_n, f(x_n)) and tangent to the curve f(x) at that point,
crosses the x axis.
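A minimal Python sketch of the Newton-Raphson iteration; the caller supplies the
derivative, and the names and tolerances are illustrative assumptions.

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent line to the x axis at each step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; the Newton-Raphson step is undefined")
        x_new = x - fx / dfx          # x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: same test function, starting from x0 = 1.5
print(newton_raphson(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1.5))
```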


7 Newton-Raphson Method

When it works, Newton-Raphson converges much more rapidly than bisection or linear interpolation.

Figure 3: Graphical representation of the Newton-Raphson method with one initial guess, x_0.


8 Newton-Raphson Method
However, if f'(x) vanishes at an iteration point, or indeed even between the current estimate and the root, then the method will fail to converge.

Figure 4: Divergence of the Newton-Raphson method due to the presence of a turning point close to the root.


9 Secant (Chord) Method
This method is essentially the same as Newton-Raphson except that the
derivative f'(x) is approximated by a finite difference based on the current and
the preceding estimate of the root, i.e.
f'(x_n) ≈ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}),
and this is substituted into the Newton-Raphson algorithm to give
x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
This formula is identical to that for the Linear Interpolation method. The
difference is that instead of replacing one of the two estimates so that the root
is always bracketed, the oldest point is always discarded in favour of the new one.
This means it is not necessary to have two initial guesses bracketing the root,
but on the other hand, convergence is not guaranteed.
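A minimal Python sketch of the secant iteration; note that, unlike bisection and
false position, the two starting guesses need not bracket the root (names and
tolerances are again illustrative assumptions).

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant: Newton-Raphson with f'(x_n) replaced by a finite difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x_n) == f(x_{n-1}); the secant step is undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # same formula as linear interpolation
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1                        # discard the oldest point
        x1, f1 = x2, f(x2)                     # keep the two most recent estimates
    return x1

print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```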


10 Secant Method

Figure 5: Convergence on the root using the Secant method.



11 Secant Method

Figure 6: Divergence from the root using the Secant method.



12 Direct Iteration
A simple and often useful method involves rearranging and possibly transforming
the function f(x) by T(f(x), x) to obtain g(x) = T(f(x), x).
The only restriction on T(f(x), x) is that solutions to f(x) = 0 have a one-to-one
relationship with solutions to g(x) = x for the roots being sought.
The iteration formula for this method is then just
x_{n+1} = g(x_n).
The method will converge only if |g'(x)| < 1 near the root. The sign of g' determines
whether the convergence (or divergence) is monotonic (positive g') or oscillatory
(negative g').
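A minimal Python sketch of direct (fixed-point) iteration; the rearrangement g is
chosen by hand, and the particular rearrangement used in the example is an
illustrative assumption.

```python
def direct_iteration(g, x0, tol=1e-12, max_iter=200):
    """Fixed-point iteration x_{n+1} = g(x_n); converges only if |g'(x)| < 1 near the root."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = x^3 - x - 2 = 0 rearranged as x = (x + 2)^(1/3);
# here |g'(x)| < 1 near the root, so the iteration converges (to about 1.5214).
print(direct_iteration(lambda x: (x + 2) ** (1.0 / 3.0), 1.5))
```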


13 Direct Iteration Method

Figure 7: Convergence on the root using the Direct Iteration method.
