
Module on Numerical Analysis I 2013EC

Hawassa University Daye Campus


Department of Mathematics

Module on Numerical Analysis I

Prepared by: Gedefaw Amsale (MSc)


Reviewed by: Ashebir Endale (MSc)

Edited by: Belsty Wale (MSc)

December, 2013EC

Table of Contents
Introduction....................................................................................................................................1
CHAPTER ONE............................................................................................................................2
ERRORS IN NUMERICAL COMPUTATIONS.......................................................................2
1.1 Numerical Analysis...........................................................................................................................3
1.2 Some Mathematical preliminaries..................................................................................................4
1.2.1 Theorem (Intermediate value theorem).......................................................................................4
1.2.2 Theorem (Location theorem)......................................................................................................4
1.2.3 Theorem (Mean value theorem)..................................................................................................4
1.2.4 Theorem (Taylor’s theorem).......................................................................................................4
1.3 Errors.................................................................................................................................................4
1.3.1 Definition of Error......................................................................................................................4
1.4 Sources of Errors...........................................................................................................................5
1.4.1 Modeling Error...........................................................................................................................5
1.4.2 Data Uncertainty.........................................................................................................................5
1.4.3 The numerical Methods used......................................................................................................5
1.4.4 Computational Error...................................................................................................................5
1.5 Types of Error....................................................................................................................................6
1.5.1 Inherent Error.............................................................................................................................6
1.5.2 Round off Error..........................................................................................................................6
1.5.3. Truncation Error........................................................................................................................6
1.5.4. Absolute, Relative and Percentage Error...................................................................................7
1.6 Decimal places and Significant Digits...............................................................................................7
1.7 Propagation of Errors in Function Evaluations................................................................................10
CHAPTER TWO.........................................................................................................................11
SOLUTIONS OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS......................11
INTRODUCTION.......................................................................................................................11
Theorem (Intermediate value theorem)..............................................................................................12
Theorem (Location theorem).............................................................................................................12
2.1 The Bisection Methods....................................................................................................................12
2.2. Fixed Point Iteration (Iteration) Method.........................................................................................18
2.3 Newton-Raphson Method................................................................................................................21
2.4 The Secant Method..........................................................................................................................25
CHAPTER THREE.....................................................................................................................28
SOLUTIONS OF SYSTEMS OF LINEAR EQUATIONS......................................................28
3.1. Introduction...............................................................................................................................28
3.2. Direct Methods for Solving System of Linear Equations................................................................29
3.2.1. Gaussian Elimination Methods................................................................................................34
3.2.2. Gauss-Jordan Elimination Methods.........................................................................................35
3.2.3 Tri-diagonal matrix method......................................................................................................37
3.2.4 LU Decomposition Methods.....................................................................................................40
3.3 Iterative Methods for Solving System of Linear Equations.............................................................43
3.3.1 Gauss-Jacobi’s method.............................................................................................................43
3.3.2 Gauss-Seidel method................................................................................................................46
CHAPTER FOUR.......................................................................................................................56
INTERPOLATION.....................................................................................................................56
4.1 Introduction.....................................................................................................................................57
4.2. Finite Difference.............................................................................................................................57
4.2.1. Forward Differences................................................................................................................57
4.2.2. Backward Differences..............................................................................................................59
4.2.3. Central Differences..................................................................................................................61
4.2.4. The shift Operator (E)..............................................................................................................63
4.2.5. Averaging Operator ( μ)...........................................................................................................63
4.3. Interpolation with Equally Spaced points.......................................................................................65
4.3.1. Newton’s Forward Interpolation Formula................................................................................65
4.3.2. Newton’s Backward Interpolation Formula.............................................................................71
4.4. Interpolation with unequally spaced point......................................................................................73
4.4.1. Lagrange’s Interpolation Formula............................................................................................73
4.4.2. Newton’s General Interpolation Formula with Divided Difference...................................77
CHAPTER FIVE.........................................................................................................................83
NUMERICAL DIFFERENTIATION AND INTEGRATION................................................83
5.2. Numerical Differentiation...............................................................................................................84
5.2.1. Derivatives Using Newton’s Forward Interpolation Formula..................................................84
5.2.2. Derivatives Using Newton’s Backward Differences Formula..................................................85
5.3. Numerical Integration.....................................................................................................................91
5.3.1. Newton-Cotes Quadrature Formula.........................................................................................91
5.3.2. The Trapezoidal Rule...............................................................................................................92


5.3.3. The Simpson’s 1/3-Rules.........................................................................................................93
5.3.4. The Simpson’s 3/8-Rules.........................................................................................................94
5.3.5. Boole’s-Rules..........................................................................................................................94
5.3.6. Weddle’s-Rules.......................................................................................................................95
References...................................................................................................................................102

Introduction
Unlike other terms denoting mathematical disciplines, such as calculus or linear algebra, the exact
extent of the discipline called numerical analysis is not yet clearly defined. By numerical
analysis, we mean the theory of constructive methods in mathematical analysis. By constructive
method we mean a procedure that permits us to obtain the solution of a mathematical problem
with an arbitrary precision in a finite number of steps that can be planned rationally (the number
of steps depends on the desired accuracy).

Numerical analysis is both a science and an art. As a science, it is concerned with the process by
which a mathematical problem can be solved by the operations of arithmetic. As an art,
numerical analysis is concerned with choosing the procedure which is best suited to the solution
of a particular problem. In general numerical analysis is the study of appropriate algorithms for
solving problems of mathematical analysis by means of arithmetic calculations.

Analytic methods have certain limitations in practical applications, so in most cases an exact
solution to a problem of applied mathematics is not possible. In such cases numerical methods are very
important tools that provide practical means of calculating the solutions of problems to a desired
degree of accuracy. The use of high-speed digital computers for solving problems in
engineering design and scientific research has increased the demand for numerical methods.

The problems that are impossible to solve by classical methods or that are too formidable for a
solution by manual computation can be resolved in a minimum of time using digital computers.
So it is essential for the modern man to be familiar with the numerical methods used in
programming problems using the computers.

The computer however is only as useful as the numerical methods employed. If the method is
inherently inefficient, the computer solution is worthless, no matter how efficiently the method is
organized and programmed. On the other hand, an accurate, efficient numerical method will
produce poor results if inefficiently organized or poorly programmed. The proper utilization of
the computer to solve scientific problems requires the development of a program based on a
numerical method suited to the problem.

CHAPTER ONE

ERRORS IN NUMERICAL COMPUTATIONS


Chapter Objectives
After studying this chapter, you should be able to:
 Mention the ways of determining the number of significant figures in a number.
 Represent numbers in a fixed point representation system.
 Use the IEEE standard floating-point to represent numbers
 Discuss sources of errors.
 Classify errors.
 Discuss the difference between chopping and rounding.
 Discuss measurements of error.
 Calculate absolute error.
 Calculate relative error.
 Determine the number of significant digits in an approximation.
 Discuss the propagation errors due to different arithmetic operations.
In tackling real situations in our daily lives, we encounter a wide range of mathematical problems
related to engineering, the physical, life and social sciences, business, economics, industry and
others. To make decisions in these disciplines, there is a high demand for solving such problems
and seeking numerical answers. There are generally two kinds of methods by which we solve
mathematical problems: analytic and numerical methods.

Analytic Methods: are those which produce exact solutions for mathematical problems. If it is
possible it is usually best to use these solutions. However, it is difficult or impossible to obtain
exact solutions for most practical problems. Consequently, analytic methods have limited
practical application and use only for few classes of mathematical problems.

Numerical Methods: are those by which approximate solutions are obtained for mathematical
problems. These methods involve large number of tedious arithmetic calculations. However, the
development of fast and efficient digital computers has simplified the calculation and hence the
role of numerical methods in solving mathematical problems has increased dramatically in recent
years. Numerical methods may be divided into two types: direct and iterative methods.

Direct Methods: refers to a procedure for computing a solution using a formula that is
mathematically exact. In practice, since the computer works with a limited memory, direct
methods are subjected to rounding errors and hence they don’t give exact solutions.

Iterative Methods: start with an initial approximation and, by applying suitably chosen
procedures, lead to successively better approximations. The process is repeated until the
required accuracy is achieved. Iterative methods are often preferred over direct methods because
of simpler programming requirements or computer memory size limitations, even though the
direct method may seem simpler to explain and use.

1.1 Numerical Analysis


Numerical Analysis: is concerned with mathematical derivation of numerical methods, designing
an algorithm for the methods, implementation of these algorithms on computers and analysis of
the error associated with the methods.

In numerical analysis, the process of solving mathematical problems involves the following three
phases.

Phase 1: Identify the problem to be solved. This requires an in-depth analysis of the mathematical
equations, providing the input and output data and describing their relationships.

Phase 2: Designing an algorithm. An algorithm is a complete and unambiguous set of procedures
that will determine the answer to a problem in a finite number of steps.

Phase3: The designed algorithm will be coded or translated to a computer language. This process
is called programming.

1.2 Some Mathematical preliminaries


1.2.1 Theorem (Intermediate value theorem)
If f is continuous on [a , b] and k is a number between f (a) and f (b), then there exists a
number c in (a , b) such that f ( c ) =k .

1.2.2 Theorem (Location theorem)


If f is continuous on[a , b], f ( a ) f ( b ) <0 , then there exists a number c in (a , b) such that f ( c ) =0
.

1.2.3 Theorem (Mean value theorem)


If f is continuous on [a, b] and differentiable on (a, b), then there exists a number c in (a, b)
such that f'(c) = (f(b) − f(a))/(b − a).

1.2.4 Theorem (Taylor’s theorem)


Suppose f has n+1 continuous derivatives on [a , b] , then f can be expressed near x=x 0 in
[a , b] as:
f(x) = f(x_0) + f'(x_0)(x − x_0) + f''(x_0)(x − x_0)^2/2! + … + f^(n)(x_0)(x − x_0)^n/n! + …

1.3 Errors
1.3.1 Definition of Error

Error = true value – approximate value

For numerical methods, the true value will be known only when we deal with functions that can
be solved analytically. However, in real world applications, we can’t obtain exact solutions and
consequently we can’t compute the associated errors. For these situations, an alternative is to use
the best available estimate of the true value.

Errors are an inherent and often unavoidable part of the solution process; they are not mistakes,
which are attributable to human imperfection or to malfunctions of computers.

1.4 Sources of Errors

The following are the main sources of error in obtaining numerical solutions to mathematical
problems.

1.4.1 Modeling Error

A mathematical model for a physical situation is an attempt to give mathematical relationships
between certain quantities or parameters of the physical situation. Because of the complexity of
physical reality, a variety of simplifying assumptions are used to construct more tractable
mathematical models. The resulting model has limitations on its accuracy as a consequence of
these assumptions, and these limitations may or may not be troublesome, depending on the uses
of the model. In the case that the model is not accurate, no numerical method will improve this
basic lack of accuracy.

1.4.2 Data Uncertainty

Input data from a physical problem may contain error or inaccuracy within it associated with
measurement or previous computations. This affects the accuracy of any calculation based on the
data, limiting the accuracy of answers. Hence, there may be an error involved due to uncertainty
of physical data.

1.4.3 The numerical Methods used

In most cases numerical methods are approximate by themselves; that is, even if the initial data
are free of errors and all the arithmetic operations are performed exactly, they yield the solution
of the original problem with some error, which is called the error of the method. In a number of cases the
numerical method is constructed on the basis of an infinite process which leads to the desired
solution (for instance, computing the value of an elementary function with the aid of partial sums
of the power series into which the function is expanded). But in reality the passage to the limit
usually cannot be realized, and the process, interrupted at a certain step, yields an approximation.

1.4.4 Computational Error

In numerical computations, we usually come across two types of errors:



Round off Error: arises because it is impossible to represent all real numbers exactly on a finite
state machine: a word in a computer memory stores only a finite number of digits, and hence
computer numbers have a limited number of digits. We are forced to cut off some of the
digits of a number, for example for 1/3, e and π.

Truncation Error: is caused by replacing an exact mathematical procedure with an approximate
one in order to make a problem easier to solve.

Example: if e^x = 1 + x + x^2/2! + x^3/3! + … = X

is replaced by X' = 1 + x + x^2/2! + x^3/3!,

then the truncation error is |X − X'|.

1.5 Types of Error

In any numerical computation, we come across the following types of errors

1.5.1 Inherent Error

Errors which are already present in the statement of a problem before its solution are called
inherent errors. Such errors arise either due to the given data being approximate or due to the
limitations of mathematical tables, calculators or the digital computer. Inherent errors can be
minimized by taking better data or by using high precision computing aids.

1.5.2 Round off Error

Arises because it is impossible to represent all real numbers exactly on a finite state machine:
a word in a computer memory stores only a finite number of digits, and hence computer
numbers have a limited number of digits. We are forced to cut off some of the digits of a number.

1.5.3. Truncation Error

Caused by the fact that when an exact mathematical procedure is replaced by another to make it
easier to solve a problem.

1.5.4. Absolute, Relative and Percentage Error

If X is the true value of a quantity and X' is its approximate value, then

I. E_a = |X − X'| is the absolute error.

II. E_r = E_a/|X| is the relative error.

III. E_p = 100·E_r % is the percentage error.

Example

Three approximations of 1/3 are given as 0.30, 0.33, and 0.34.

a. Find Ea , Er and E p for each approximation.


b. Which of these is the best approximation?
Solution
Note for (b): the approximation with the smallest error is the best approximation.
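The three error measures for part (a) can be computed with a minimal MATLAB sketch (the variable names are illustrative):

X  = 1/3;                       % true value
Xp = [0.30 0.33 0.34];          % the three approximations
Ea = abs(X - Xp);               % absolute errors
Er = Ea/abs(X);                 % relative errors
Ep = 100*Er;                    % percentage errors
[Emin, best] = min(Ea);         % smallest absolute error picks the best approximation (0.33)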

1.6 Decimal places and Significant Digits

1. A number is rounded off by dropping one or more digits at its right. If the digit to be
dropped is less than 5, then we leave the digit to its left as it is, and if the digit to be dropped
is greater than or equal to 5, then we increase the digit to its left by 1.
2. Let x be a real number having decimal representation
x = a_n a_{n−1} a_{n−2} … a_2 a_1 a_0 . d_1 d_2 … d_k d_{k+1} d_{k+2} …

We say that x has been correctly rounded to a k decimal places number, which we denote by d_k(x), where

d_k(x) = a_n a_{n−1} … a_2 a_1 a_0 . d_1 d_2 … d_k,                 if d_{k+1} < 5
d_k(x) = a_n a_{n−1} … a_2 a_1 a_0 . d_1 d_2 … d_k + 1×10^(−k),     if d_{k+1} ≥ 5

Example

If x=92.507623 then find

a. d 3 (x )
b. d 4 ( x)
c. d 8 (x )
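The rounded values for this example can be checked with a small MATLAB sketch, assuming the built-in round (which, for positive numbers, rounds a 5 upward exactly as in rule 1):

x  = 92.507623;
d3 = round(x*10^3)/10^3     % 92.508  (d_4 = 6 >= 5, so the third decimal is increased by 1)
d4 = round(x*10^4)/10^4     % 92.5076
d8 = round(x*10^8)/10^8     % 92.507623 (all six decimals kept, so d_8(x) = 92.50762300)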
3. We say X' approximates X correct to k decimal places if k is the largest positive integer
such that the absolute error
E_a(X') = |X − X'| ≤ 0.5 × 10^(−k)

Example

If X = 0.0031758, then X' = 0.0032 approximates X correct to 4 decimal places.

Check that |X − X'| ≤ 0.5 × 10^(−k).

The concept of a significant figure, or digit, has been developed to formally designate the
reliability of numerical value. The significant digits of a number are those that can be used with
confidence. They correspond to the number of certain digits plus one estimated digit.

4. The following rules apply to find the number of significant digits in a number.
i. All nonzero digits in the number are significant.
ii. Zeros between nonzero digits in the number are significant.
iii. Zeros to the left of the first nonzero digit in the number are not significant.
iv. When the number ends in zeros that are to the right of the decimal point,
these zeros are significant.
v. When the number ends in zeros that are not to the right of the decimal
point, these zeros are not necessarily significant.

When trailing zeros are used in large numbers, it is not clear how many, if any, of the zeros are
significant. For example, at face value the number 23,400 may have three, four, or five
significant digits, depending on whether the zeros are known with confidence. Such uncertainty
can be resolved by using scientific notation, where 2.34 × 10^4, 2.340 × 10^4, and 2.3400 × 10^4
signify that the number is known to three, four, and five significant figures, respectively.
significant that the number is known to three, four, and five significant figures, respectively.
The concept of significant figures has two important implications for our study of numerical
methods:
1. Since numerical methods yield approximate results, we must develop criteria to specify how
confident we are in our approximate result. One way to do this is in terms of significant figures.
For example we might decide that our approximation is acceptable if it is correct to three
significant figures.
2. Although quantities such as π, e, or √2 represent specific quantities, they cannot be expressed
exactly by a limited number of digits. For example, the representation
π = 3.1415926535897932384626433832795… continues ad infinitum. Because computers retain only a
finite number of significant figures, such numbers cannot be represented exactly. The omission
of the remaining significant figures is called round-off error.
Both round-off error and the use of significant figures to express our confidence in a numerical
result will be explored in detail in subsequent sections.
Notice that as indicated above, we separate the significant figures (the mantissa) from the power
of ten (the exponent); the form in which the exponent is chosen so that the magnitude of the
mantissa is less than 10 but not less than 1 is referred to as a scientific notation.

Example

Identify the number of significant digits for the following decimals

a. 0.31416 — 5 significant digits (rule i)
b. 102.3 — 4 (rule ii)
c. 0.000045 — 2 (rule iii)
d. 127 — 3 (rule i)
e. 0.0494200000 — 9 (rules iii and iv)
f. 29000 — 2, 3, 4, or 5 (rule v)
5. An approximation X' of the number X is correct to n significant digits if n is the largest
positive integer such that the absolute error
E_a(X') = |X − X'| ≤ 0.5 × 10^(S−n+1),
where S is the largest integer such that 10^S ≤ |X|.

Example: if X = 135.2469208, then X' = 135.247 approximates X correct to 6 significant digits.

Check that |X − X'| ≤ 0.5 × 10^(S−n+1): 0.0000792 ≤ 0.5 × 10^(2−6+1) = 0.0005.

6. Writing a number X in scientific notation is to put X in the form
±a × 10^n, where 1 ≤ a < 10 and n is an integer.

Example

Approximate 63459286 correct to 3 significant digits. (ans. 6.35 × 10^7)
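Rules 5 and 6 can be checked quickly in MATLAB; a sketch with illustrative variable names:

X  = 63459286;
fprintf('%.2e\n', X)                % prints 6.35e+07, i.e. 6.35 x 10^7 (3 significant digits)
Xp = 6.35e7;                        % the 3-significant-digit approximation
S  = floor(log10(abs(X)));          % largest integer with 10^S <= |X|, here S = 7
abs(X - Xp) <= 0.5*10^(S - 3 + 1)   % returns true, so Xp is correct to 3 significant digits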

1.7 Propagation of Errors in Function Evaluations

Let y = f(x) be a function that depends on x, which is in error. Let the absolute error in x be Δx.
Then we have y + Δy = f(x + Δx).

The error in y is given by Δy = f(x + Δx) − f(x).

From Taylor's series we have

f(x + Δx) = f(x) + f'(x)Δx + f''(x)Δx^2/2! + f'''(x)Δx^3/3! + …

Dropping the second and higher order terms and rearranging, we get

f(x + Δx) − f(x) = f'(x)Δx

Hence the absolute error in y can be estimated by Δy = |f'(x)|Δx.

Similarly, if y = f(x_1, x_2, x_3, …, x_n) is a function of several variables with an error Δx_i in each x_i, then

Δy = |∂y/∂x_1|Δx_1 + |∂y/∂x_2|Δx_2 + |∂y/∂x_3|Δx_3 + … + |∂y/∂x_n|Δx_n
Example
Module on Numerical Analysis I 2013EC

a. Let y = 2x^3 − 5x^2 + 2 with x = 1 and Δx = 0.001. Find Δy.

b. Let w = 5x^3 y^2 z. Given that x = 3, y = 1 and z = 2 with errors 0.003, 0.001, and 0.002
respectively, find the error in w.
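A minimal MATLAB sketch for both parts; the partial derivatives are written out by hand as assumptions of this sketch (they are not given in the text):

% (a) y = 2x^3 - 5x^2 + 2 at x = 1 with dx = 0.001
dydx = @(x) 6*x.^2 - 10*x;                     % y'(x)
dy = abs(dydx(1))*0.001                        % dy = |y'(1)|*dx = 0.004
% (b) w = 5x^3*y^2*z at (3,1,2) with errors 0.003, 0.001, 0.002
dwdx = @(x,y,z) 15*x.^2.*y.^2.*z;              % partial derivatives of w
dwdy = @(x,y,z) 10*x.^3.*y.*z;
dwdz = @(x,y,z)  5*x.^3.*y.^2;
dw = abs(dwdx(3,1,2))*0.003 + abs(dwdy(3,1,2))*0.001 + abs(dwdz(3,1,2))*0.002   % = 1.62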

CHAPTER TWO

SOLUTIONS OF ALGEBRAIC AND TRANSCENDENTAL EQUATIONS

Chapter Objectives
After studying this chapter, you should be able to:
 Locate the roots of nonlinear equations using two graphic methods.
 Apply bisection, false position, secant, fixed point iteration, and Newton-Raphson
methods to find the roots of nonlinear equations.
 Write algorithms for the methods of solving nonlinear equations.
 Draw flow charts for the methods of solving nonlinear equations.
 Discuss the advantages and disadvantages of the methods of solving non-linear
equations.
 Mention the speed of convergence for each method of solving nonlinear
equations.
 Compare the methods of solving nonlinear equations.
 Write programs for the methods of solving nonlinear equations.

INTRODUCTION
A frequently occurring problem in science and engineering is the root finding problem: given
an equation of the form f(x) = 0, where f(x) is algebraic or transcendental or a combination of
them, the value of x satisfying the equation is called the root or solution of the equation, or the
zero of f(x). The equation may involve a single independent variable x, or there may be a coupled
system of nonlinear equations in two independent variables (x, y). So in this unit we shall
discuss different numerical methods used to solve equations of the form f(x) = 0.

We have seen that an expression of the form f(x) = a_n x^n + a_{n−1} x^{n−1} + … + a_1 x + a_0, where the a's are
constants with a_n ≠ 0 and n ∈ Z^+, is called a polynomial in x of degree n, and the equation f(x) = 0
is called an algebraic equation of degree n. If f(x) contains some other functions such as exponential,
trigonometric, logarithmic, etc., then f(x) = 0 is called a transcendental equation.

Examples of transcendental equations can be found in many applications. In the theory of


diffraction of light, we need the roots of the equation x−tanx=0. In the calculation of planetary
orbits, we need the roots of Kepler’s equation x−a sinx=b for various values of a∧b .

Remark: For a function of a single independent variable y=f (x ), a point x=α is called a root or
a zero of f (x) if the value of the function is zero at the point, meaning f ( α )=0.

Since locating the zeros of functions has been an active area of study for several hundred years,
numerous methods have been developed. In this unit, we begin with four simple methods that are
quite useful: the bisection method, fixed point method, Newton’s method and the secant method.

Theorem (Intermediate value theorem)


If f is continuous on [a , b] and k is a number between f (a) and f (b), then there exists a
number c in (a , b) such that f ( c ) =k .
Theorem (Location theorem)
If f is continuous on[a , b], f ( a ) f ( b ) <0 , then there exists a number c in (a , b) such that f ( c ) =0
.
Example

Identify the interval(s) that contains a root of the equation

x^4 − 2x^3 − 4x^2 + 4x + 4 = 0, using the intermediate value theorem.

Solution

x                                   −4    −3    −2    −1    0    1    2    3    4
f(x) = x^4 − 2x^3 − 4x^2 + 4x + 4  308    91    12    −1    4    3   −4    7   84
From the table we can say that x 4 −2 x 3−4 x 2+ 4 x +4=0 has a solution in the intervals

[ −2 ,−1 ] , [−1,0 ] ,[1,2] & [2,3].
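The table can be reproduced with a short MATLAB sketch that scans consecutive integers for a sign change (location theorem):

f = @(x) x.^4 - 2*x.^3 - 4*x.^2 + 4*x + 4;
for a = -4:3
    if f(a)*f(a+1) < 0                        % sign change => a root in (a, a+1)
        fprintf('root in [%d, %d]\n', a, a+1)
    end
end
% prints the intervals [-2,-1], [-1,0], [1,2] and [2,3]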

2.1 The Bisection Methods

One of the first numerical methods developed to find the root of nonlinear equation f ( x )=0 was

the bisection method (binary-search method). The method is based on the Intermediate Value

Theorem stated as below. It is the simplest and most robust method for computing the root of a

continuous real-valued function defined on a bounded interval of the real line.

Intermediate Value Theorem

An equation f ( x )=0, where f ( x ) is a real continuous function, has at least one root between
x l∧x u if f ( x l ) f (x u)<0 .

Note that if f (x l) f (x u )>0, there may or may not be any root between x l∧x u. If f ( x l) f ( x u )<0, then
there may be more than one root between x l∧x u. So the theorem only guarantees one root
between x l∧x u.

Procedure for the Bisection Method

The steps to apply the bisection method to find the root of the equation f ( x )=0 are

1. Choose x l∧x u as two guesses for the root such that f ( x l) f (x u )<0.
2. Estimate the root, x_m, of the equation f(x) = 0 as the mid-point of x_l and x_u:
x_m = (x_l + x_u)/2
3. Now check the following
a. If f ( x l) f (x m )<0 , then the root lies between x l∧x m ; then
x l=x l∧x u= x m
b. If f ( x u) f (x m )<0 , the root lies between x m∧x u; then x l=x m∧x u= x u
c. If f ( x l ) f ( x m ) =0; then the root is x m. Stop the iteration if this is true.
4. Find the new estimate of the root, x_m = (x_l + x_u)/2, and find the absolute error as
|ε_a| = |x_m^new − x_m^old|

5. If |ε_a| ≤ TOL then stop the iteration; otherwise go to step 3.


Note: Let α be the real root of the equation f(x) = 0 and let the initial estimates of the root be
a and b. Then the error of the bisection method after the nth iteration is given by
ε_n = (b − a)/2^n

Since lim_{n→∞} ε_n = lim_{n→∞} (b − a)/2^n = 0, the bisection method always converges.

Note: if a number is correct to n decimal places, then the maximum error in that approximation is
given by (1/2) × 10^(−n).

Note: we can know at what iteration the numerical solution converges in the bisection method using
the inequality

ε_n = (b − a)/2^n ≤ Tol = (1/2) × 10^(−n)

Example

At what iteration does the numerical solution of x^3 − x − 1 = 0 on the interval [1, 2] converge to
two decimal places of accuracy?

Solution

Tol = (1/2) × 10^(−2) = 0.005 = 1/200

(b − a)/2^n = (2 − 1)/2^n = 1/2^n

(b − a)/2^n ≤ Tol ⇒ 1/2^n ≤ 1/200 ⇒ 200 ≤ 2^n ⇒ log_2 200 ≤ n ⇒ n ≥ 7.6439

By rounding n up to the next integer we get n = 8.

So the solution converges at 8th iteration.
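The same count can be obtained directly in MATLAB (a one-line sketch of the inequality above):

a = 1; b = 2; tol = 0.5e-2;              % two decimal places of accuracy
n = ceil(log2((b - a)/tol))              % smallest n with (b-a)/2^n <= tol, gives n = 8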

Example

Using the bisection method, find the numerical solution of

x^3 − x − 1 = 0 on the interval [1, 2] up to two decimal places of accuracy.

Solution

iteration    a        b       x_m = (a+b)/2    f(a)       f(b)       f(x_m)     ε_a = (b−a)/2^n
    1        1        2          1.5           −1          5          0.875        0.5
    2        1        1.5        1.25          −1          0.875     −0.297        0.25
    3        1.5      1.25       1.375          0.875     −0.297      0.2246       0.125
    4        1.25     1.375      1.313         −0.297      0.2246    −0.0494       0.0625
    5        1.375    1.313      1.344          0.2246    −0.0494     0.0837       0.03125
    6        1.313    1.344      1.329         −0.0494     0.0837     0.0183       0.015625
    7        1.313    1.329      1.321         −0.0494     0.0183    −0.0158       0.0078125
    8        1.329    1.321      1.325          0.0183    −0.0158     0.0012       0.00390625

Matlab program

f=@ (x) x.^4-x-10;


a=-2;b=-1;tol=0.001;imax=100;
if f(a)*f(b)>0
disp('Bisection method fails\n')
else
disp('iteration a b (xm)solution f(xm) Tolerance')
for i=1:imax
xm=(a+b)/2;
toli=(b-a)/2;
fxm=f(xm);
fprintf('%3i %11.6f %11.6f %11.6f %11.6f %11.6f\n',i,a, b, xm, fxm, toli)
if toli<tol
break
end
if f(a)*f(xm)<0
b=xm;
else
a=xm;
end
end
end

iteration      a           b        (xm)solution     f(xm)      Tolerance
    1      -2.000000   -1.000000    -1.500000      -3.437500    0.500000
    2      -2.000000   -1.500000    -1.750000       1.128906    0.250000
    3      -1.750000   -1.500000    -1.625000      -1.402100    0.125000
    4      -1.750000   -1.625000    -1.687500      -0.203354    0.062500
    5      -1.750000   -1.687500    -1.718750       0.445466    0.031250
    6      -1.718750   -1.687500    -1.703125       0.116807    0.015625
    7      -1.703125   -1.687500    -1.695313      -0.044326    0.007813
    8      -1.703125   -1.695313    -1.699219       0.035976    0.003906
    9      -1.699219   -1.695313    -1.697266      -0.004241    0.001953
   10      -1.699219   -1.697266    -1.698242       0.015851    0.000977

Example: Find the root of the following equations.

a. x log_10 x = 1.2, up to two decimal places of accuracy (ans. x ≈ 2.74)
No of iteration App value of x
1 2.500000000000
2 2.750000000000
3 2.625000000000
4 2.687500000000
5 2.718750000000
6 2.734375000000
7 2.742187500000
8 2.738281250000
9 2.740234375000
10 2.741210937500
11 2.740722656250

b. x 3−x−1=0 on the interval [ 1,2 ] up to two decimal places accuracy


No of iteration App value of x
1 1.500000000000
2 1.250000000000
3 1.375000000000
4 1.312500000000
5 1.343750000000
6 1.328125000000
7 1.320312500000
8 1.324218750000
9 1.326171875000
10 1.325195312500
11 1.324707031250
12 1.324951171875
13 1.324829101563
14 1.324768066406
15 1.324737548828
16 1.324722290039
17 1.324714660645
18 1.324718475342
19 1.324716567993
20 1.324717521667
21 1.324717998505
(ans. X=1.32472)

2.2. Fixed Point Iteration (Iteration) Method


A number α is called a fixed point of a function ϕ ( x ) if ϕ ( α )=α . The mathematical problem of
finding values of x satisfying x=ϕ( x ) is called the fixed point iteration problem.
To find the root of the equation f(x) = 0 by fixed point iteration (successive approximations), we
write the given equation in the form x = φ(x). Now we assume the first estimate of the root to be x_0.
The nth approximation of the root is given by
x_n = φ(x_{n−1}), n = 1, 2, 3, …
Procedures for iteration methods to find the root of f ( x )=0
1. Take an initial guess x 0

2. Find the next approximation x 1 using the formula x 1=ϕ ( x 0 )

3. Find the successive approximation using the formula x n=ϕ ( x n−1 )

4. Stop the iteration if |x i+1−x i|≤ TOL

Theorem: if

i. α be a root of f ( x )=0 which is equivalent to x=ϕ ( x )


ii. I ,be any interval containing the point x=α
iii. |ϕ' ( x )|< 1, ∀ x ∈ I
Then x n=ϕ ( x n−1 ) will converge to the root α provided the initial guess x 0 is chosen in I .

Matlab program

% solve 1 - 3x + cos(x) = 0 on 0 < x < pi/2 by fixed point iteration, x = g(x)
f=@(x) 1-3*x+cos(x); g=@(x) (1+cos(x))/3; h=@(x) -sin(x)/3;   % h = g'
a=0; tol=0.000005; imax=100;
if abs(h(0.5))>1
    disp('Fixed point method does not converge')
else
    disp('No of iteration   App value of x')
    for i=1:imax
        x=g(a);
        fprintf('%6i %.12f\n',i,x)
        if abs(x-a)<tol
            break
        end
        a=x;            % use the latest value as the next guess
    end
end

Example

Find the root of the following equations

a. x^3 − x − 1 = 0          b. cos x = 3x − 1

Solution

a. x^3 − x − 1 = 0 ⇒ x = x^3 − 1 = φ(x)  or  x = (x + 1)^(1/3) = φ(x)

So we have to choose which φ(x) to use by checking |φ'(x)| < 1 for all x ∈ I.

Take I = [1, 2].

Let φ(x) = x^3 − 1 ⇒ φ'(x) = 3x^2, which is not less than 1 for all x ∈ I.

Let φ(x) = (x + 1)^(1/3) ⇒ φ'(x) = 1/(3(x + 1)^(2/3)) < 1 for all x ∈ I.

So we have to use the formula

x_n = (x_{n−1} + 1)^(1/3) = φ(x_{n−1}), n = 1, 2, …

a. % solve x^3 - x - 1 = 0 on 0 < x < 2 by fixed point iteration, x = g(x)
f=@(x) x.^3-x-1; g=@(x) (x+1).^(1/3); h=@(x) (1/3)*(x+1).^(-2/3);   % h = g'
a=1; tol=0.000005; imax=100;
if abs(h(2))>1
    disp('Fixed point method does not converge')
else
    disp('No of iteration   App value of x')
    for i=1:imax
        x=g(a);
        fprintf('%6i %.12f\n',i,x)
        if abs(x-a)<tol
            break
        end
        a=x;
    end
end

No of iteration App value of x

1 1.259921049895

2 1.312293836683

3 1.322353819139

4 1.324268744552

5 1.324632625251

6 1.324701748510

7 1.324714878441

8 1.324717372436

b.
No of iteration App value of x

1 0.666666666667

2 0.595295753592

3 0.609327563382

4 0.606677683188

5 0.607182245997

6 0.607086320465

7 0.607104562823
8 0.607101093830

2.3 Newton-Raphson Method


Unlike the root of bisection method the root of Newton-Raphson method is not bracketed. In
fact, only one initial guess of the root is needed to get the iterative process started to find the root
of an equation. The method hence falls in the category of open methods. Convergence in open
methods is not guaranteed but if the method does converge, it does so much faster than the
bracketing methods.

The Newton-Raphson method is by far the most popular numerical method for approximating
roots of functions. The method assumes that the function f ( x) is differentiable in the
neighborhood of the root and that the derivative is not zero in that neighborhood.

Derivation of the formula

The Newton-Raphson method is based on the principle that if the initial guess of the root of
f ( x )=0 is at x 0, then if one draws the tangent to the curve at ( x 0 , f (x 0 ) ), the point x 1 where
the tangent crosses the x−¿ axis is an improved estimate of the root(fig below).

Where N=x 0 , Q=x 1 ,T =x 2 , P=(x 0 , f ( x 0 ) ) and R=(x 1 , f ( x1 ) )

Using the definition of the slope of the tangent line through a point ( x 0 , f ( x 0 ) ) on the curve
y=f ( x ), we have:

Slope = f'(x_0) = (f(x_0) − 0)/(x_0 − x_1), which gives x_1 = x_0 − f(x_0)/f'(x_0)

The value x_1 is then accepted as a new approximation to the root. The point (x_1, f(x_1)) can be
taken as a new point to draw a tangent through. Its intersection with the x-axis, given by
x_2 = x_1 − f(x_1)/f'(x_1),
is also accepted as a new approximation to the root. This process can be
repeated over and over, leading to the iteration method given by the Newton-Raphson
formula:
x_{n+1} = x_n − f(x_n)/f'(x_n), n = 0, 1, 2, …,
which is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0.

The procedure of the Newton-Raphson Method

The steps of the Newton-Raphson method to find the root of an equation f ( x )=0 are:

1. Evaluate f ' (x) symbolically


2. Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as
x_{i+1} = x_i − f(x_i)/f'(x_i)

3. Find the absolute error |ε_a| as: |ε_a| = |x_{i+1} − x_i|

4. If |ε_a| ≤ TOL then stop the iteration. Otherwise go to step 2.
Convergence criterion of the Newton-Raphson method

Let φ(x_n) = x_{n+1} = x_n − f(x_n)/f'(x_n) ⟹ φ(x) = x − f(x)/f'(x),
which gives φ'(x) = f(x)f''(x)/[f'(x)]^2.
The iteration converges if |φ'(x)| < 1.
∴ Newton's formula will converge if

|f(x)f''(x)| < [f'(x)]^2

Note: Newton’s method has a quadratic convergence.

Proof
Suppose x_n differs from the root α by a small quantity ε_n, so that
x_n = α + ε_n and x_{n+1} = α + ε_{n+1}.
Then x_{n+1} = x_n − f(x_n)/f'(x_n) becomes α + ε_{n+1} = α + ε_n − f(α + ε_n)/f'(α + ε_n), i.e.
ε_{n+1} = ε_n − f(α + ε_n)/f'(α + ε_n)

Expanding f(α + ε_n) and f'(α + ε_n) using Taylor's theorem,

f(α + ε_n) = f(α) + ε_n f'(α) + ε_n^2 f''(α)/2! + ε_n^3 f'''(α)/3! + …

f'(α + ε_n) = f'(α) + ε_n f''(α) + ε_n^2 f'''(α)/2! + …

so that

ε_{n+1} = ε_n − [f(α) + ε_n f'(α) + ε_n^2 f''(α)/2! + ε_n^3 f'''(α)/3! + …] / [f'(α) + ε_n f''(α) + ε_n^2 f'''(α)/2! + …]

Since f(α) = 0 we have

ε_{n+1} = ε_n − [ε_n f'(α) + ε_n^2 f''(α)/2! + ε_n^3 f'''(α)/3! + …] / [f'(α) + ε_n f''(α) + ε_n^2 f'''(α)/2! + …]

Neglecting terms of third and higher degree in ε_n, we get

ε_{n+1} ≈ ε_n^2 f''(α) / (2 f'(α))

This shows that the error at each step is proportional to the square of the previous error, and as
such the convergence is quadratic. Thus the Newton-Raphson method has second-order convergence.
Matlab program

% Newton-Raphson method for f(x) = exp(-x) - x, with g = f'
f=@(x) exp(-x)-x; g=@(x) -exp(-x)-1;
a=0; tol=0.0005; imax=100;
if g(a)==0
    disp('Newton method fails')
else
    disp('No of iteration   App value of x')
    for i=1:imax
        x=a-f(a)/g(a);          % Newton-Raphson update
        fprintf('%6i %.12f\n',i,x)
        if abs(x-a)<tol
            break
        end
        a=x;
    end
end

Example

Use the Newton-Raphson iteration method to estimate the root of the ff equations.

a. cos x=2 x up to 5 decimal place accuracy and use x 0=0.5 (ans. X=0.45018)

Solution
No of iteration App value of x
1 0.450626693077
2 0.450183647578
3 0.450183611295

b. e^(−x) − x = 0 up to 4 decimal place accuracy and use x_0 = 0 (ans. x = 0.5671)


No of iteration App value of x
1 0.500000000000
2 0.566311003197
3 0.567143165035
4 0.567143290410

2.4 The Secant Method


The Newton-Raphson method of solving a nonlinear equation f(x) = 0 is given by the iterative
formula x_{i+1} = x_i − f(x_i)/f'(x_i) …………………………………(1)

One of the drawbacks of the Newton-Raphson method is that you have to evaluate the derivative
of the function. With the availability of symbolic manipulators such as Matlab this process has
become more convenient. However, it still can be a laborious process, and even intractable if the
function is derived as a part of a numerical scheme. To overcome these drawbacks, the derivative
of the function, f'(x), is approximated as
f'(x_i) = (f(x_i) − f(x_{i−1}))/(x_i − x_{i−1}) ……………………………………………………..(2)
Substituting Equation (2) in Equation (1) we get
x_{i+1} = x_i − f(x_i)(x_i − x_{i−1})/(f(x_i) − f(x_{i−1})) …………………………………………………(3)
Equation( 3 ) is called the secant method. This method now requires two initial guess, but
unlike the bisection method, the two initial guesses do not need to bracket the root of the
equation. The secant method is an open method and may or may not converge. However
when secant method converges, it will typically converge faster than the bisection
method. However, since the derivative is approximated as given by Equation ( 2 ), it
typically converges slower than the Newton-Raphson method.

Procedure for Secant Method to find the Root of an equation


1. Choose the interval[ x 0 , x 1 ] in which f ( x )=0 has a root, where x 1> x0 .
2. Find the next approximation x_2 of the required root using the formula:
x_2 = x_1 − f(x_1)(x_1 − x_0)/(f(x_1) − f(x_0))
3. Find the successive approximations of the required root using the formula:
x_{i+1} = x_i − f(x_i)(x_i − x_{i−1})/(f(x_i) − f(x_{i−1}))

4. Stop when |∈a|=|x i+1−x i|≤TOL


Matlab program

% secant method for f(x) = x^2 + 4*sin(x), starting from x1 and x2
f=@(x) x.^2+4*sin(x); x1=-1; x2=-2; tol=0.00005; imax=100;
if f(x2)-f(x1)==0
    disp('secant method diverges')
else
    disp('No of iteration   App value of x')
    for i=1:imax
        x=x2-((x2-x1)/(f(x2)-f(x1)))*f(x2);   % secant update
        fprintf('%6i %.12f\n',i,x)
        if abs(x-x2)<tol
            break
        end
        x1=x2;      % keep the two most recent approximations
        x2=x;
    end
end

Example

Apply the secant method for the ff equations to find the roots.

a. x 3−5 x 2−17 x+ 20=0 up to 5 decimal point accuracy (ans. x=0.95818)


No of iteration App value of x
1 0.952380952381
2 0.958163362110
3 0.958184498758
b. x 2+ 4 sin x=0 up to 5 decimal point accuracy (ans. X=-1.93375)
No of iteration App value of x
1 -1.867038861133
2 -1.931354568387
3 -1.933670835937
4 -1.933750900536
5 -1.933753664037

CHAPTER THREE

SOLUTIONS OF SYSTEMS OF LINEAR EQUATIONS


Chapter Objectives
After studying this chapter, you should be able to:
 Mention the two set of methods that are used to solve system of linear equations.
 Discuss the properties of solutions of system of linear equations.
 List the direct methods of solving system of linear equations.
 Solve system of linear equations using Gaussian Elimination method.
 Explain the advantages of pivoting and partial pivoting techniques
 Solve system of linear equations using Gaussian-Jordan method.
 Solve system of linear equations using LUD Method.
 Solve special systems, like tri-diagonal systems.
 Write algorithms for each of the direct methods.
 Draw flowcharts for each of the direct methods.
 Write a computer program for each of the direct methods.
 Mention the advantages and disadvantages of each of the direct methods.
 List down the indirect methods.
 Identify the sufficient condition for the direct methods to converge.
 Solve system of equations by using Gauss Jacobi method.
 Solve system of linear equations using Gauss Seidel method.
 Write algorithms for the indirect methods.
 Draw flowcharts for the indirect methods.
 Mention the advantages and disadvantages of the indirect methods


 Compare the indirect methods with the direct methods.

3.1. Introduction
Many physical systems can be modeled by a set of linear equations, which describe relationships
between system variables. In simple cases, there are two or three variables; in complex systems
(for example, in a linear model of the economy of a country) there may be several hundred
variables. Linear systems also arise in connection with many problems of numerical analysis.
Examples of these are the solution of partial differential equations by finite difference methods,
statistical regression analysis, and the solution of eigenvalue problems (see, for example, Gerald
and Wheatley (1994)).
Hence there arises a need for rapid and accurate methods for solving systems of linear
equations. The two kinds of methods of solving systems of equations that are commonly known are the
direct methods and the indirect (or iterative) methods. The direct method is based on the elimination of
variables to transform the set of equations to a triangular form; the elimination is completed in a finite number
of steps, resulting in the exact solution, and thus the amount of computation involved can be
specified in advance. The method is independent of the accuracy desired. The indirect methods
always begin with an approximate solution and obtain an improved solution with each step of
the iteration, but would require an infinite number of steps to obtain an exact solution in the
absence of round-off errors. The accuracy of the solution, unlike the direct method, depends on
the number of iterations performed. Usually iterative methods are used for sparse matrices,
whereas for dense matrices we use direct methods.
Numerical methods for solving linear algebraic systems may be divided into two types: direct
and iterative.

 Direct methods are those which, in the absence of round-off or other errors, yield the
exact solution in a finite number of elementary arithmetic operations.
 Iterative methods start with an initial vector x^(0) and generate a sequence of approximate
solution vectors {x^(k)} which converge to the exact solution vector x as the number of
iterations increases.

3.2. Direct Methods for Solving System of Linear Equations


We will now develop the back-substitution algorithm, which is useful for solving a linear
system of equations that has an upper-triangular coefficient matrix. This will be incorporated in
the algorithm for solving a general linear system
Definition: An n × n matrix A = (a_ij) is called upper-triangular provided that the elements
satisfy a_ij = 0 whenever i > j. The n × n matrix A = (a_ij) is called lower-triangular provided
that a_ij = 0 whenever i < j.
We will develop a method for constructing the solution to upper-triangular linear system of
equations and leave the investigation of lower-triangular systems to the reader.
If A is an upper-triangular matrix, then AX = B is said to be an upper-triangular system of linear
equations and has the form

a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1
          a_22 x_2 + … + a_2n x_n = b_2
                        ⋮
                         a_nn x_n = b_n

To solve this system of equations, where a_kk ≠ 0 for k = 1, 2, ..., n, we start from the last equation:
since it involves only x_n, solving it first gives
x_n = b_n / a_nn
Now x_n is known and it can be used in the next-to-last equation:
x_{n−1} = (b_{n−1} − a_{n−1,n} x_n) / a_{n−1,n−1}
Now x_n and x_{n−1} are used to find x_{n−2}:
x_{n−2} = (b_{n−2} − a_{n−2,n−1} x_{n−1} − a_{n−2,n} x_n) / a_{n−2,n−2}

Once the values x_n, x_{n−1}, …, x_{k+1} are known, the general step is

x_k = (b_k − Σ_{j=k+1}^{n} a_{kj} x_j) / a_{kk},  for k = n−1, n−2, …, 1

The uniqueness of the solution is easy to see. So, in general, the back-substitution algorithm is
stated as below:
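A minimal MATLAB sketch of this back-substitution step (the function name backsub is illustrative and would go in its own file), assuming U is upper-triangular with nonzero diagonal entries:

function x = backsub(U, b)
% solve U*x = b by back substitution
n = length(b);
x = zeros(n,1);
x(n) = b(n)/U(n,n);
for k = n-1:-1:1
    x(k) = (b(k) - U(k,k+1:n)*x(k+1:n))/U(k,k);
end
end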

A general set of m linear equations in n unknowns is given by:

a 11 x1 + a12 x 2+ …+a1 n x n=b1

a 21 x1 + a22 x 2+ …+a 2n x n=b 2

⋮ ⋮ ⋮ ⋮

a m 1 x 1+ am 2 x2 +…+ amn xn =bm

Which can be written in the matrix product form as:

[ a_11  a_12  ⋯  a_1n ] [ x_1 ]   [ b_1 ]
[ a_21  a_22  ⋯  a_2n ] [ x_2 ] = [ b_2 ]
[  ⋮     ⋮         ⋮  ] [  ⋮  ]   [  ⋮  ]
[ a_m1  a_m2  ⋯  a_mn ] [ x_n ]   [ b_m ]

Or simply [ A ] [ X ] =[ B ] where [ A ] is called the coefficient matrix, [ B ] is called the constant matrix
and [ X ] is called the solution matrix of the system.

            [ a_11  a_12  ⋯  a_1n  ⋮  b_1 ]
[ A ⋮ B ] = [ a_21  a_22  ⋯  a_2n  ⋮  b_2 ]   is called the augmented matrix.
            [  ⋮     ⋮         ⋮   ⋮   ⋮  ]
            [ a_m1  a_m2  ⋯  a_mn  ⋮  b_m ]

A system of equations [ A ] [ X ] =[ B ] is consistent if there is a solution, and it is inconsistent if there


is no solution. However, a consistent system of equations may have a unique or infinitely many
solutions.

There are four types of direct methods. These are:

a. Gaussian Elimination Methods


b. Gauss-Jordan Elimination Methods
c. Tridiagonal matrix method
d. LU Decomposition Method

3.2.1. Gaussian Elimination Methods (find the echelon form of the augmented matrix)

In this section we develop a scheme for solving a general system Ax = B of n × n system of


equations. The aim is to construct an equivalent upper-triangular system Ux = Y that can be
solved by using back-substitution. During transformation of a system to upper triangular form,
one or more of the following elementary operations are used at every step:
1. Interchanging of two equations.
2. Multiplication of an equation by a non-zero constant;
3. Subtraction from one equation some nonzero multiple of another equation;
Mathematically speaking, it should be clear to the student that performing elementary operations
on a system of linear equations leads to equivalent systems with the same solutions. This
statement requires proof which may be found as a theorem in books on linear algebra. It forms
the basis of all elimination methods for solving systems of linear equations.
It consists of two major steps. These are:

 Forward Elimination and


 Back Substitution
 Forward Elimination of unknowns: In this step the unknowns are eliminated in each
equation starting with the first equation. This way the system of equations can be reduced
to upper triangular system of equations.
 Back Substitution: Now the equations are solved starting from the last equation as it has
only one unknown.

Example
Solve the following S.L.E by using G.E.M

a. x − 3y + z = 4             b. 2x + y + z = 10
   2x − 8y + 8z = −2             3x + 2y + 3z = 18
   −6x + 3y − 15z = 9            x + 4y + 9z = 16
Solution

a. x − 3y + z = 4
   2x − 8y + 8z = −2
   −6x + 3y − 15z = 9

Step 1: Forward Elimination

i. To eliminate x from the 2nd and 3rd equations multiply the 1st equation by −2 and 6
and add to the 2nd and 3rd equation respectively.

x − 3y + z = 4            x − 3y + z = 4
−2y + 6z = −10      ⇒     −y + 3z = −5
−15y − 9z = 33            −5y − 3z = 11
ii. To eliminate the y term in the last equation multiply the 2 nd equation by −5 and add
it to the 3rd equation. The resulting new system of equations becomes

x − 3y + z = 4
−y + 3z = −5
−18z = 36

Step 2: Back substitution

From the 3rd equation we get z=−2 , substitute this into the 2nd equation yields y=−1. Using both
of these results in the first equation gives x=3

Hence the solution of the above system is given by ( x , y , z )=(3 ,−1 ,−2)

Or we can find the echelon form of the augmented matrix of the system of equations. That is

[ 1  −3    1    4 ]
[ 0  −1    3   −5 ]
[ 0   0  −18   36 ]
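A MATLAB sketch of the two steps (forward elimination without pivoting, then back substitution) applied to example (a); it assumes the pivots A(k,k) remain nonzero:

A = [1 -3 1; 2 -8 8; -6 3 -15]; b = [4; -2; 9];
n = length(b);
for k = 1:n-1                            % forward elimination
    for i = k+1:n
        m = A(i,k)/A(k,k);               % multiplier
        A(i,k:n) = A(i,k:n) - m*A(k,k:n);
        b(i) = b(i) - m*b(k);
    end
end
x = zeros(n,1);                          % back substitution
x(n) = b(n)/A(n,n);
for k = n-1:-1:1
    x(k) = (b(k) - A(k,k+1:n)*x(k+1:n))/A(k,k);
end
x                                        % returns (3, -1, -2)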

3.2.2. Gauss-Jordan Elimination Methods (find the rref of the augmented matrix)

The general procedure for Gauss-Jordan Elimination can be summarized in the following steps:

1. Write the augmented matrix of the system.


2. Write the augmented matrix in RREF
3. Stop the process in step 2 if you obtain a row whose elements are all zeros except the last
one on the right. In that case, the system is inconsistent and has no solutions. Otherwise,
finish step 2 and read the solutions of the system from the final matrix.

Example
Solve the following S.L.E using G.J.E.M

a. x + y + z = 5              b. 2x + 3y + 4z = 19
   2x + 3y + 5z = 8              x + 2y + z = 4
   4x + 5z = 2                   3x − y − z = 9

a. → (3, 4, −2)               b. → (3.7778, −2.1111, 4.4444)
Self-assessment exercise
A civil engineer builds 3 types of buildings; each type requires metal, cement and
wood in the following composition:

Product type    Metal (tonnes)    Cement (tonnes)    Wood (tonnes)
A               5                 1                  1
B               2                 4                  2
C               1                 2                  5

If he/she has 12 tonnes of metal, 15 tonnes of cement, and 20 tonnes of wood in total, how many
buildings of each type can be constructed at most?
[Hint: let x, y, and z be the number of buildings of type A, B, and C respectively.]

Comparison of Gauss-Jordan and Gaussian Elimination


The method of Gaussian elimination is in general more efficient than Gauss-Jordan elimination
in that it involves fewer operations of addition and multiplication. It is during the back
substitution that Gaussian elimination picks up this advantage. Particularly in larger systems of
equations, many more operations are saved in Gaussian elimination during back substitution. The
reduction in the number of operations not only saves time on a computer but also increases the
Module on Numerical Analysis I 2013EC

accuracy of the final answer. With large systems, the method of Gauss-Jordan elimination
involves approximately 50% more arithmetic operations than does Gaussian elimination.
Gauss-Jordan elimination, on the other hand, has the advantage of being more straightforward
for hand computations. It is superior for solving small systems.
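For small systems such as example (a) of this section, the Gauss-Jordan result can be checked in MATLAB with the built-in rref (reduced row echelon form of the augmented matrix); a minimal sketch:

A = [1 1 1; 2 3 5; 4 0 5]; b = [5; 8; 2];
R = rref([A b])                 % R = [1 0 0 3; 0 1 0 4; 0 0 1 -2]
x = R(:, end)                   % the last column gives the solution (3, 4, -2)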

3.2.3 Tri-diagonal matrix method


Thomas algorithm for tri-diagonal system

Definition:
Consider the system of equations defined by

b_1 x_1 + c_1 x_2 = d_1
a_2 x_1 + b_2 x_2 + c_2 x_3 = d_2
a_3 x_2 + b_3 x_3 + c_3 x_4 = d_3
⋮
a_n x_{n−1} + b_n x_n = d_n

Or, in matrix notation, consider the system of linear simultaneous algebraic equations given by

Ax = d

where

A = [ b_1  c_1                         ]
    [ a_2  b_2  c_2                    ]
    [      a_3  b_3   ⋱                ]
    [            ⋱    ⋱    c_{n−1}     ]
    [                a_n    b_n        ]

is a tridiagonal matrix, x = [x_1, x_2, …, x_n]^t and d = [d_1, d_2, …, d_n]^t.

Note:
An n × nmatrix A is called tridiagonal matrix if a ij=0whenever |i− j|>1.

These equations can be solved by using Thomas algorithm which is described in three steps as
shown below:

Step 1: Set y1 = b1 and compute

        yi = bi − (ai ci−1)/yi−1 ,   i = 2, 3, …, n

Step 2: Set z1 = d1/b1 and compute

        zi = (di − ai zi−1)/yi ,   i = 2, 3, …, n

Step 3: Set xn = zn and compute

        xi = zi − (ci xi+1)/yi ,   i = n−1, n−2, …, 1
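The three steps above translate directly into the short Matlab sketch below (the function name thomas and the argument order are illustrative choices, not prescribed by the module):

function x = thomas(a, b, c, d)
% Thomas algorithm for a tridiagonal system.
% a(2..n): sub-diagonal, b(1..n): diagonal, c(1..n-1): super-diagonal, d(1..n): right-hand side.
n = length(b);
y = zeros(n,1); z = zeros(n,1); x = zeros(n,1);
y(1) = b(1);                 % Step 1
z(1) = d(1)/b(1);            % Step 2
for i = 2:n
    y(i) = b(i) - a(i)*c(i-1)/y(i-1);
    z(i) = (d(i) - a(i)*z(i-1))/y(i);
end
x(n) = z(n);                 % Step 3: back substitution
for i = n-1:-1:1
    x(i) = z(i) - c(i)*x(i+1)/y(i);
end
end

For Example 1 below, thomas([0 2 1 1], [3 -3 2 -1], [-1 2 5], [5 5 10 1]) should return approximately (2, 1, 2, 1); the first entry of a is unused and passed as 0.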

Remark: For illustration, consider the 4 × 4 tridiagonal system of equations

   [ a11  a12   0    0  ] [ x1 ]   [ d1 ]
   [ a21  a22  a23   0  ] [ x2 ] = [ d2 ]
   [  0   a32  a33  a34 ] [ x3 ]   [ d3 ]
   [  0    0   a43  a44 ] [ x4 ]   [ d4 ]

It can be written as

a 11 x1 + a12 x 2=d 1

a 21 x1 + a22 x 2+ a23 x 3=d 2

a 32 x2 + a33 x 3 +a34 x 4=d 3

a 43 x 3 +a 44 x 4 =d 4

Example:

1. Solve the following equations by Thomas algorithm

3 x 1−x 2=5

2 x1 −3 x 2 +2 x3 =5

x 2+ 2 x 3 +5 x 4=10

x 3−x 4 =1

Solution:
Here

   [ 3  −1   0   0 ] [ x1 ]   [  5 ]        [a2, a3, a4] = [2, 1, 1]
   [ 2  −3   2   0 ] [ x2 ] = [  5 ]        [b1, b2, b3, b4] = [3, −3, 2, −1]
   [ 0   1   2   5 ] [ x3 ]   [ 10 ]        [c1, c2, c3] = [−1, 2, 5]
   [ 0   0   1  −1 ] [ x4 ]   [  1 ]        [d1, d2, d3, d4] = [5, 5, 10, 1]
Step 1: Set y1 = 3 and compute  yi = bi − (ai ci−1)/yi−1 ,  i = 2, 3, 4:

        y2 = b2 − a2 c1 / y1 = −3 − (2)(−1)/3 = −7/3

        y3 = b3 − a3 c2 / y2 = 2 − (1)(2)/(−7/3) = 20/7

        y4 = b4 − a4 c3 / y3 = −1 − (1)(5)/(20/7) = −55/20

Step 2: Set z1 = d1/b1 = 5/3 and compute  zi = (di − ai zi−1)/yi ,  i = 2, 3, 4:

        z2 = (d2 − a2 z1)/y2 = (5 − 2(5/3))/(−7/3) = −5/7

        z3 = (d3 − a3 z2)/y3 = (10 − (1)(−5/7))/(20/7) = 75/20

        z4 = (d4 − a4 z3)/y4 = (1 − (1)(75/20))/(−55/20) = 1

Step 3: x4 = z4 = 1 and compute  xi = zi − (ci xi+1)/yi ,  i = 3, 2, 1:

        x3 = z3 − c3 x4 / y3 = 75/20 − (5)(1)/(20/7) = 2

        x2 = z2 − c2 x3 / y2 = −5/7 − (2)(2)/(−7/3) = 1

        x1 = z1 − c1 x2 / y1 = 5/3 − (−1)(1)/3 = 2

2. Solve the following tri-diagonal system of equations using the Thomas algorithm
2 x1 + x 2=1
3 x 1+2 x 2+ x3 =2
x 2+ 2 x 3 +2 x 4=−1
x 3+ 4 x 4 =−3
Ans: x 1=1 , x 2=−1 , x 3=1 , x 4 =−1.

3.2.4 LU Decomposition Methods


Some linear equations AX=B are relatively easy to solve. For example, if A is a lower triangular
matrix, then elements of X can be computed recursively using forward substitution. Similarly if
A is an upper triangular matrix, then the elements of X can be computed recursively using
backward substitution. A third method for the solution of general systems of linear algebraic
equations is the LU decomposition method. The objective of this method is to find a lower
triangular factor L and an upper triangular factor U such that the system of equations can be
transformed according to
A·x = b   →   (L·U)·x = A*·x = b*

The matrix A* in the above equation is the matrix A after row exchanges have been made to
allow the factors L and U to be computed accurately; the vector b* is the vector b after an
identical set of row exchanges. A decomposition in which each diagonal element lii of L has a
unit value is known as the Doolittle method; one in which each diagonal element uii of U has a
unit value is known as the Crout method. Another method in which corresponding diagonal
elements lii and uii are equal to each other is known as the Cholesky method. Most linear
equations encountered in practice do not have a triangular coefficient matrix. In such cases, the
linear equation is often best solved using the L-U factorization (L-U decomposition) method,
which decomposes the coefficient matrix into the product of lower and upper triangular matrices,
allowing the linear equation to be solved by a combination of forward and backward substitution.
The L-U factorization method involves two phases:

 In the first factorization phase, Gaussian Elimination is used to factor the matrix A into
the product A=LU of a lower triangular matrix L and an upper triangular matrix U.
 In the solution phase of the L-U factorization method, the factored linear equations
AX=(LU)X=L(UX)=B is solved by first solving LY=B for Y using forward substitution,
and then solving UX=Y for X using backward substitution.
To decompose A as A=LU, consider an nxn matrix A given below

        [ a11  a12  …  a1n ]   [ l11   0   …   0  ] [ u11  u12  …  u1n ]
   A =  [ a21  a22  …  a2n ] = [ l21  l22  …   0  ] [  0   u22  …  u2n ]
        [  ⋮    ⋮    ⋱   ⋮  ]   [  ⋮    ⋮   ⋱   ⋮  ] [  ⋮    ⋮    ⋱   ⋮  ]
        [ an1  an2  …  ann ]   [ ln1  ln2  …  lnn ] [  0    0   …  unn ]

To produce a unique solution it is convenient to choose either uii = 1 or lii = 1, i = 1, 2, …, n.

When we choose

a. lii = 1, the method is called Doolittle’s method

b. uii = 1, the method is called Crout’s method

Consider a 3 × 3 matrix A. We can decompose A using Crout’s method as follows
   [ a11  a12  a13 ]   [ l11   0    0  ] [ 1  u12  u13 ]
   [ a21  a22  a23 ] = [ l21  l22   0  ] [ 0   1   u23 ]
   [ a31  a32  a33 ]   [ l31  l32  l33 ] [ 0   0    1  ]

                       [ l11      l11 u12            l11 u13               ]
                     = [ l21   (l21 u12 + l22)   (l21 u13 + l22 u23)       ]
                       [ l31   (l31 u12 + l32)   (l31 u13 + l32 u23 + l33) ]

We can find, therefore, the elements of the matrices L and U by equating the above two matrices.
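The equating process can be automated. The following Matlab sketch of Crout's decomposition for a general n × n matrix, followed by forward and back substitution, is an illustration only (the function name crout_solve is an assumption, and no pivoting is performed):

function x = crout_solve(A, b)
% Crout's method: A = L*U with unit diagonal on U, then solve LY = b and UX = Y.
n = size(A,1);
L = zeros(n); U = eye(n);
for j = 1:n
    for i = j:n                          % column j of L
        L(i,j) = A(i,j) - L(i,1:j-1)*U(1:j-1,j);
    end
    for k = j+1:n                        % row j of U
        U(j,k) = (A(j,k) - L(j,1:j-1)*U(1:j-1,k)) / L(j,j);
    end
end
y = zeros(n,1); x = zeros(n,1);
for i = 1:n                              % forward substitution: L y = b
    y(i) = (b(i) - L(i,1:i-1)*y(1:i-1)) / L(i,i);
end
for i = n:-1:1                           % back substitution: U x = y (unit diagonal)
    x(i) = y(i) - U(i,i+1:n)*x(i+1:n);
end
end

Applied to the worked example below, crout_solve([2 1 -1; 1 3 2; 3 -2 -4], [5; 5; 3]) should reproduce the solution (1, 2, −1).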

Example

Solve the following S.L.E using Crout’s LU decomposition method.

{
2 x+ y−z=5
a. x +3 y +2 z=5
3 x −2 y −4 z=3
b. The currents I1, I2, and I3 occurring in a closed circuit having three mesh points are given by:

{
−I 1+3 I 2=5
3 I 1+ I 2 +2 I 3=3
−11 I 2−7 I 3=−15

Solution

        [ 2   1  −1 ]          [ 5 ]
   A =  [ 1   3   2 ]    B =   [ 5 ]
        [ 3  −2  −4 ]          [ 3 ]

        [ 2   1  −1 ]   [ l11   0    0  ] [ 1  u12  u13 ]
   A =  [ 1   3   2 ] = [ l21  l22   0  ] [ 0   1   u23 ]
        [ 3  −2  −4 ]   [ l31  l32  l33 ] [ 0   0    1  ]

                        [ l11      l11 u12            l11 u13               ]
                      = [ l21   (l21 u12 + l22)   (l21 u13 + l22 u23)       ]
                        [ l31   (l31 u12 + l32)   (l31 u13 + l32 u23 + l33) ]

Equating corresponding entries:

   l11 = 2 ,   l11 u12 = 1 ⟹ u12 = 1/2 ,   l11 u13 = −1 ⟹ u13 = −1/2

   l21 = 1 ,  l31 = 3 ,   l21 u12 + l22 = 3 ⟹ (1)(0.5) + l22 = 3 ⟹ l22 = 2.5 = 5/2

   l21 u13 + l22 u23 = 2 ⟹ (1)(−0.5) + 2.5 u23 = 2 ⟹ u23 = 1

   l31 u12 + l32 = −2 ⟹ (3)(0.5) + l32 = −2 ⟹ l32 = −3.5 = −7/2

   l31 u13 + l32 u23 + l33 = −4 ⟹ (3)(−0.5) + (−3.5)(1) + l33 = −4 ⟹ l33 = 1

Hence l11 = 2, l21 = 1, l22 = 2.5, l31 = 3, l32 = −3.5, l33 = 1 and u12 = 0.5, u13 = −0.5, u23 = 1.

        [ 2    0    0 ]          [ 1  0.5  −0.5 ]
   L =  [ 1   2.5   0 ]    U =   [ 0   1     1  ]
        [ 3  −3.5   1 ]          [ 0   0     1  ]

   AX = (LU)X = L(UX) = B

Let UX = Y. Then LY = B gives

   2 y1 = 5 ,   y1 + 2.5 y2 = 5 ,   3 y1 − 3.5 y2 + y3 = 3

   ⟹ y1 = 2.5 ,  y2 = 1 ,  y3 = −1

Now UX = Y gives

   x + 0.5 y − 0.5 z = 2.5 ,   y + z = 1 ,   z = −1

   ⟹ z = −1 ,  y = 2 ,  x = 2.5 − 1 − 0.5 = 1

S.S = {(1, 2, −1)}

3.3 Iterative Methods for Solving System of Linear Equations


 Because of round-off errors, direct methods become less efficient than iterative
methods when they are applied to large systems. In addition, the amount of storage space
required for iterative solutions on a computer is far less than the one required for
direct methods when the coefficient matrix of the system is sparse. Thus, especially
for sparse matrices, iterative methods are more attractive than direct methods.
 For the iterative solution of a system of equations, one starts with an arbitrary starting
vectors x (0 ) and computes a sequence of iterates x (m ) for m=1 , 2 ,3 , …
 Iterative methods are guaranteed to converge when the coefficient matrix of the system is
diagonally dominant.

Note: an n × n matrix A is diagonally dominant if |aii| > Σ_{j=1, j≠i}^{n} |aij| for each i.

Under the category of iterative method, we shall describe the following two methods:

i. Gauss-Jacobi’s method
ii. Gauss-Seidel method

3.3.1 Gauss-Jacobi’s method


Let us consider again a system of n × n linear equations with n unknowns:

a 11 x1 + a12 x 2+ …+a1 n x n=b1

a 21 x1 + a22 x 2+ …+a 2n x n=b 2 …………………………………(1)

⋮ ⋮ ⋮ ⋮

a n1 x 1+ an 2 x 2 +…+ ann x n=bn

From equation (1) we get

   x1 = (1/a11) ( b1 − (a12 x2 + a13 x3 + … + a1n xn) )

   x2 = (1/a22) ( b2 − (a21 x1 + a23 x3 + … + a2n xn) )

      ⋮

   xn = (1/ann) ( bn − (an1 x1 + an2 x2 + … + an n−1 xn−1) )

Let x1^(0), x2^(0), …, xn^(0) be the initial approximations of the unknowns x1, x2, …, xn.

Let x1^(n), x2^(n), …, xn^(n) be the nth iteration of the given S.L.E; then the (n+1)th iteration is given by:

   x1^(n+1) = (1/a11) ( b1 − (a12 x2^(n) + a13 x3^(n) + … + a1n xn^(n)) )

   x2^(n+1) = (1/a22) ( b2 − (a21 x1^(n) + a23 x3^(n) + … + a2n xn^(n)) )

      ⋮

   xn^(n+1) = (1/ann) ( bn − (an1 x1^(n) + an2 x2^(n) + … + an n−1 xn−1^(n)) )

The process is continued till convergence is secured.

Note1: in the absence of any better estimates, the initial approximations are taken as

   x1^(0) = 0 ,  x2^(0) = 0 ,  … ,  xn^(0) = 0

2: we assume that the diagonal terms a 11 , a22 ,… , a nn are all nonzero and are the largest
coefficients of x 1 , x 2 , … , x n respectively so that convergence is assured. If some of the diagonal
elements are zeros, we may have to rearrange the equations.

3: stop the iteration if |xi^(n+1) − xi^(n)| ≤ TOL for i = 1, 2, 3, …, n.
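A minimal Matlab sketch of the Jacobi iteration with the stopping rule of note 3 is given below; it is an illustration only (the function name jacobi and the tolerance handling are assumptions, not part of the module):

function x = jacobi(A, b, tol, maxit)
% Gauss-Jacobi iteration: every component is updated from the previous iterate only.
n = length(b);
x = zeros(n,1);                           % initial guess (note 1)
for it = 1:maxit
    xnew = zeros(n,1);
    for i = 1:n
        s = A(i,[1:i-1, i+1:n]) * x([1:i-1, i+1:n]);
        xnew(i) = (b(i) - s) / A(i,i);
    end
    if max(abs(xnew - x)) <= tol          % stopping rule (note 3)
        x = xnew; return
    end
    x = xnew;
end
end

For system (a) of the example below, jacobi([20 1 -2; 3 20 -1; 2 -3 20], [17; -18; 25], 0.0005, 50) should reproduce the tabulated iterates and converge to about (1, −1, 1).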

Example

Solve the following S.L.E using G.J.M up to 3 decimal place accuracy.

a.  20x + y − 2z = 17          b.  5x + y + z + u = 4
    3x + 20y − z = −18             x + 7y + z + u = 12
    2x − 3y + 20z = 25             x + y + 6z + u = −5
                                   x + y + z + 4u = −6
Solution
a. The iteration formulas are

   x^(n+1) = (1/20) [ 17 − y^(n) + 2 z^(n) ]

   y^(n+1) = (1/20) [ −18 − 3 x^(n) + z^(n) ]

   z^(n+1) = (1/20) [ 25 − 2 x^(n) + 3 y^(n) ]

   Let (x^(0), y^(0), z^(0)) = (0, 0, 0).

   n      x^(n+1)           y^(n+1)           z^(n+1)

   0      17/20 = 0.85      −18/20 = −0.9     25/20 = 1.25

   1      1.02              −0.965            1.03

   2      1.00125           −1.0015           1.00325

   3      1.0004            −1.000025         0.99965

   4      0.99996625        −1.0000775        0.99995625

   Tol = (1/2) × 10^(−3) = 0.0005

|x 4 −x3|=0.00043375 ≤ 0.0005

|y 4 − y 3|=0.0000525≤ 0.0005

|z 4 −z 3|=0.00030625 ≤ 0.0005
Therefore the solution is (0.99996625 ,−1.0000775,0.99995625)≈ (1 ,−1,1)

3.3.2 Gauss-Seidel method

 Gauss-Seidel is a modification of the Gauss-Jacobi method. Because each new value can be
immediately stored in the location that held the old value, the storage requirement for x
with the Gauss-Seidel method is half what it would be with the Jacobi method, and the rate of
convergence is more rapid.
 The Gauss-Seidel method is a refinement of the Jacobi method that usually (but not always)
gives more rapid convergence. The latest value of each variable is substituted at each step in
the iterative process. This method like the Jacobi method converges if the coefficient matrix
A is diagonally dominant. So given an nxn system of equations the general iteration process
can be written as
Let us consider again the S.L.E

a 11 x1 + a12 x 2+ …+a1 n x n=b1

a 21 x1 + a22 x 2+ …+a 2n x n=b 2 …………………………………(1)

⋮ ⋮ ⋮ ⋮

a n1 x 1+ an 2 x 2 +…+ ann x n=bn

From equation (1) we get

   x1 = (1/a11) ( b1 − (a12 x2 + a13 x3 + … + a1n xn) )

   x2 = (1/a22) ( b2 − (a21 x1 + a23 x3 + … + a2n xn) )

      ⋮

   xn = (1/ann) ( bn − (an1 x1 + an2 x2 + … + an n−1 xn−1) )

Let x1^(0), x2^(0), …, xn^(0) be the initial approximations of the unknowns x1, x2, …, xn.

Let x1^(n), x2^(n), …, xn^(n) be the nth iteration of the given S.L.E; then the (n+1)th iteration is given by:

   x1^(n+1) = (1/a11) ( b1 − (a12 x2^(n) + a13 x3^(n) + … + a1n xn^(n)) )

   x2^(n+1) = (1/a22) ( b2 − (a21 x1^(n+1) + a23 x3^(n) + … + a2n xn^(n)) )

      ⋮

   xn^(n+1) = (1/ann) ( bn − (an1 x1^(n+1) + an2 x2^(n+1) + … + an n−1 xn−1^(n+1)) )
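For comparison with the Jacobi sketch given earlier, here is an equally minimal Gauss-Seidel function (an illustrative sketch with assumed names; it presumes a diagonally dominant matrix):

function x = gauss_seidel(A, b, tol, maxit)
% Gauss-Seidel iteration: each new component is used immediately within the sweep.
n = length(b);
x = zeros(n,1);
for it = 1:maxit
    xold = x;
    for i = 1:n
        s = A(i,[1:i-1, i+1:n]) * x([1:i-1, i+1:n]);   % uses already-updated x(1..i-1)
        x(i) = (b(i) - s) / A(i,i);
    end
    if max(abs(x - xold)) <= tol
        return
    end
end
end

Applied to system (c) of the example below, gauss_seidel([10 -2 -1 -1; -2 10 -1 -1; -1 -1 10 -2; -1 -1 -2 10], [3; 15; 27; -9], 0.0005, 50) should reproduce the tabulated iterates and converge to about (1, 2, 3, 0).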

Example

Solve the following S.L.E using G.S.M up to 3 decimal place accuracy.

a.  20x + y − 2z = 17       b.  5x + y + z + u = 4        c.  10x − 2y − z − w = 3
    3x + 20y − z = −18          x + 7y + z + u = 12           −2x + 10y − z − w = 15
    2x − 3y + 20z = 25          x + y + 6z + u = −5           −x − y + 10z − 2w = 27
                                x + y + z + 4u = −6           −x − y − 2z + 10w = −9

Solution (for system c):

   x^(n+1) = (1/10) [ 3 + 2 y^(n) + z^(n) + w^(n) ]

   y^(n+1) = (1/10) [ 15 + 2 x^(n+1) + z^(n) + w^(n) ]

   z^(n+1) = (1/10) [ 27 + x^(n+1) + y^(n+1) + 2 w^(n) ]

   w^(n+1) = (1/10) [ −9 + x^(n+1) + y^(n+1) + 2 z^(n+1) ]

   n      x^(n+1)       y^(n+1)        z^(n+1)        w^(n+1)

   0      3/10 = 0.3    1.56           2.886          −0.1368

   1      0.88692       1.952304       2.956562       −0.0247652

   2      0.983640      1.989907       2.992401       −0.004165

   3      0.996805      1.998184       2.996521       −0.001196

Hence the solution of system c is approximately (x, y, z, w) = (1, 2, 3, 0).

Matlab program for the Gauss-Seidel method

Example: solve the following system using a Matlab program

   4x1 + x2 + 2x3 = −3
   2x1 + 5x2 − x3 = 5
   −3x1 − x2 + x3 = 0

k=1; x1=0; x2=0; x3=0;
disp('k    x1       x2       x3')
fprintf('%2.0f %-8.5f %-8.5f %-8.5f\n', k, x1, x2, x3)
for k=1:10
    % Gauss-Seidel updates: each new value is used immediately
    x1=(-3-(x2+2*x3))/4;
    x2=(5-(2*x1-x3))/5;
    x3=(3*x1+x2);
    fprintf('%2.0f %-8.5f %-8.5f %-8.5f\n', k, x1, x2, x3)
end
 k   x1        x2        x3
 1   0.00000   0.00000   0.00000
 1  -0.75000   1.30000  -0.19000
 2  -0.98000   1.35400  -0.31720
 3  -0.92990   1.30852  -0.29624
 4  -0.92901   1.31236  -0.29494
 5  -0.93062   1.31326  -0.29572
 6  -0.93046   1.31304  -0.29567
 7  -0.93043   1.31304  -0.29565
 8  -0.93044   1.31304  -0.29565
 9  -0.93043   1.31304  -0.29565
10  -0.93043   1.31304  -0.29565

Comparison of Gaussian Elimination and Gauss-Seidel


Limitations: Gaussian elimination is a finite method and can be used to solve any system of linear
equations. The Gauss-Seidel method converges only for special systems of equations; thus it can
be used only for those systems.

Efficiency: The efficiency of a method is a function of the number of arithmetic operations
(addition, subtraction, multiplication, and division) involved in each method.
For a system of n equations in n variables, where the solution is unique and the value of n is large,
Gaussian elimination involves about 2n^3/3 arithmetic operations to solve the problem, while Gauss-
Seidel requires approximately 2n^2 arithmetic operations per iteration.
Therefore if the number of iterations is less than or equal to n/3, the iterative method requires
fewer arithmetic operations.
Accuracy: In general, when Gauss-Seidel is applicable, it is more accurate than Gaussian
elimination.

Storage: Iterative methods are, in general, more economical in the core-storage requirements of a
computer.
Systems of Non-Linear Equations

Newton’s Method for Solving System of Non-Linear Equations

Now we consider the solution of simultaneous non-linear equations by Newton’s method.

Consider the system

{f ( x , y )=0
g ( x , y )=0
(1)

Let (x 0 , y 0 ) be an initial approximation to the root of the system and (x 0 +h , y 0 +k ) be the root of
the system given by (1). Then we must have

{f (x 0 +h , y 0 +k )=0
g ( x 0+ h , y 0+ k )=0
(2)

Let us assume that f and g are differentiable. Expanding (2) by Taylor series, we obtain

   f(x0 + h, y0 + k) = f0 + h (∂f/∂x0) + k (∂f/∂y0) + … = 0
                                                                     (3)
   g(x0 + h, y0 + k) = g0 + h (∂g/∂x0) + k (∂g/∂y0) + … = 0

Neglecting the second and higher order terms and retaining only the linear terms of (3), we
obtain

   h (∂f/∂x0) + k (∂f/∂y0) = −f0
                                                                     (4)
   h (∂g/∂x0) + k (∂g/∂y0) = −g0

The matrix form of (4) is given by

   [ ∂f/∂x0   ∂f/∂y0 ] (h)      (f0)            [ ∂f/∂x0   ∂f/∂y0 ] (x1 − x0)      (f0)
   [ ∂g/∂x0   ∂g/∂y0 ] (k)  = − (g0)     ⇒      [ ∂g/∂x0   ∂g/∂y0 ] (y1 − y0)  = − (g0)

where

   f0 = f(x0, y0) ,  g0 = g(x0, y0) ,
   ∂f/∂x0 = (∂f/∂x) evaluated at (x0, y0) ,  ∂f/∂y0 = (∂f/∂y) at (x0, y0) ,
   ∂g/∂x0 = (∂g/∂x) at (x0, y0) ,  ∂g/∂y0 = (∂g/∂y) at (x0, y0).

Solving (4) for h and k , the next approximation of the root is given by

{ x 1=x 0 +h
y 1= y 0 +k

The above process is repeated to desired degree of accuracy.

Suppose we have the following system of non-linear equations

{
f ( x , y , z )=0
g ( x , y , z )=0 (5)
r ( x , y , z )=0

Let (x0, y0, z0) be an initial approximation to the root of the system and (x0 + h, y0 + k, z0 + l) be
the root of the system given by (5). Then we must have

   f(x0 + h, y0 + k, z0 + l) = 0
   g(x0 + h, y0 + k, z0 + l) = 0                (6)
   r(x0 + h, y0 + k, z0 + l) = 0

Let us assume that f, g and r are differentiable. Expanding (6) by Taylor series, we obtain

   f(x0+h, y0+k, z0+l) = f0 + h (∂f/∂x0) + k (∂f/∂y0) + l (∂f/∂z0) + … = 0
   g(x0+h, y0+k, z0+l) = g0 + h (∂g/∂x0) + k (∂g/∂y0) + l (∂g/∂z0) + … = 0        (7)
   r(x0+h, y0+k, z0+l) = r0 + h (∂r/∂x0) + k (∂r/∂y0) + l (∂r/∂z0) + … = 0

Neglecting the second and higher order terms and retaining only the linear terms of (7),
we obtain

   h (∂f/∂x0) + k (∂f/∂y0) + l (∂f/∂z0) = −f0
   h (∂g/∂x0) + k (∂g/∂y0) + l (∂g/∂z0) = −g0        (8)
   h (∂r/∂x0) + k (∂r/∂y0) + l (∂r/∂z0) = −r0

In matrix form,

   J0 (h, k, l)ᵗ = −(f0, g0, r0)ᵗ ,   equivalently   J0 (x1 − x0, y1 − y0, z1 − z0)ᵗ = −(f0, g0, r0)ᵗ ,

where

        [ ∂f/∂x0  ∂f/∂y0  ∂f/∂z0 ]
   J0 = [ ∂g/∂x0  ∂g/∂y0  ∂g/∂z0 ]
        [ ∂r/∂x0  ∂r/∂y0  ∂r/∂z0 ]

   ⟹ (x1, y1, z1)ᵗ = (x0, y0, z0)ᵗ − inv(J0) (f0, g0, r0)ᵗ

The (k+1)th iteration formula is given by

   (x_{k+1}, y_{k+1}, z_{k+1})ᵗ = (x_k, y_k, z_k)ᵗ − inv(J_k) (f_k, g_k, r_k)ᵗ ,   for k ≥ 0

Solving (8) for h, k and l, the next approximation of the root is given by

   x1 = x0 + h ,   y1 = y0 + k ,   z1 = z0 + l

The above process is repeated to desired degree of accuracy.

Example

Solve     f(x, y) = x² + y − 20x + 40 = 0
          g(x, y) = x + y² − 20y + 20 = 0

Solution

Let x 0= y 0 =0 be the initial approximation to the root

   f = x² + y − 20x + 40  ⇒  ∂f/∂x = 2x − 20 ,  ∂f/∂y = 1

   g = x + y² − 20y + 20  ⇒  ∂g/∂x = 1 ,  ∂g/∂y = 2y − 20

   f0 = f(x0, y0) = f(0, 0) = 40 ,   g0 = g(x0, y0) = g(0, 0) = 20

   ∂f/∂x0 = −20 ,  ∂f/∂y0 = 1 ,  ∂g/∂x0 = 1 ,  ∂g/∂y0 = −20

The linear equations are

   h (∂f/∂x0) + k (∂f/∂y0) = −f0             −20h + k = −40
                                       ⇒                        ⇒  h = 2.055 ,  k = 1.103
   h (∂g/∂x0) + k (∂g/∂y0) = −g0              h − 20k = −20

The next approximation is given by

   x1 = x0 + h = 2.055 ,   y1 = y0 + k = 1.103

For the next iteration, f, g and the partial derivatives are re-evaluated at (x1, y1) and new
corrections h and k are obtained from the linear equations, giving approximately

   x2 ≈ 2.326 ,   y2 ≈ 1.186

Continuing in the same way until we get the desired degree of accuracy, the iterates converge to
approximately

   x ≈ 2.33 ,   y ≈ 1.19
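A short Matlab sketch of this two-variable Newton iteration, written specifically for the f and g above, can be used to check the iterates; the starting point, tolerance and variable names are illustrative choices:

f  = @(x,y) x.^2 + y - 20*x + 40;
g  = @(x,y) x + y.^2 - 20*y + 20;
J  = @(x,y) [2*x - 20, 1; 1, 2*y - 20];    % Jacobian of (f, g)
v  = [0; 0];                               % initial approximation (x0, y0)
for k = 1:10
    F = [f(v(1), v(2)); g(v(1), v(2))];
    d = -J(v(1), v(2)) \ F;                % corrections (h, k)
    v = v + d;
    if max(abs(d)) < 1e-6, break, end
end
disp(v')                                   % approximately (2.33, 1.19)

The first pass reproduces h = 2.055 and k = 1.103 obtained by hand above.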

Example

1. Solve using Newton’s method the following system of nonlinear algebraic equations

a.  x1³ − 2x2 − 2 = 0
    x1³ − 5x3² + 7 = 0      with x1^(0) = 2, x2^(0) = 1, x3^(0) = 2, correct to three decimal places.
    x2 x3² − 1 = 0

b.  x² + 4y² = 9
    18y − 14x² = −45        with x0 = 1 and y0 = −1, correct to six decimal places.

Using Matlab

% solution of system of nonlinear equation
F1=@(x,y)x^3-2*y-2;
F2=@(x,z)x^3-5*z^2+7;
F3=@(y,z)y*z^2-1;
F1x=@(x)3*x;   F1y=-2;        F1z=0;
F2x=@(x)3*x^2; F2y=0;         F2z=@(z)-10*z;
F3x=0;         F3y=@(z)z^2;   F3z=@(y,z)2*y*z;
Jacob=@(x,y,z)30*x*z^3+12*x^2*y*z;
xi=2; yi=1; zi=2; Err=0.0001;
for i=1:8
    Jac=Jacob(xi,yi,zi);
    Delx=(F3(yi,zi)*F1z*F2y-F3(yi,zi)*F1y*F2z(zi)+F2(xi,zi)*F1y*F3z(yi,zi)...
        -F2(xi,zi)*F1z*F3y(zi)-F1(xi,yi)*F2y*F3z(yi,zi)+F1(xi,yi)*F2z(zi)*F3y(zi))/Jac;
    Dely=(F3(yi,zi)*F1x(xi)*F2z(zi)-F3(yi,zi)*F1z*F2x(xi)-F2(xi,zi)*F1x(xi)*F3z(yi,zi)...
        +F2(xi,zi)*F1z*F3x+F1(xi,yi)*F2x(xi)*F3z(yi,zi)-F1(xi,zi)*F2z(zi)*F3x)/Jac;
    Delz=(F3(yi,zi)*F1y*F2x(xi)-F3(yi,zi)*F1x(xi)*F2y+F2(xi,zi)*F1x(xi)*F3y(zi)...
        -F2(xi,zi)*F1y*F3x-F1(xi,yi)*F2x(xi)*F3y(zi)+F1(xi,yi)*F2y*F3x)/Jac;
    xip1=xi+Delx;
    yip1=yi+Dely;
    zip1=zi+Delz;
    Errx=abs((xip1-xi)/xi);
    Erry=abs((yip1-yi)/yi);
    Errz=abs((zip1-zi)/zi);
    fprintf('i=%2.0f x=%-7.4f y=%-7.4f z=%-7.4f Error in x=%-7.4f Error in y=%-7.4f Error in z=%-7.4f\n',...
        i,xip1,yip1,zip1,Errx,Erry,Errz)
    if Errx<Err && Erry<Err && Errz<Err
        break
    else
        xi=xip1; yi=yip1; zi=zip1;
    end
end
Example: Solve the following system using the Matlab program above

   x³ − 2y = 2
   x³ − 5z² = −7
   y z² = 1

Solution

i=1  x=1.3056  y=0.9167  z=1.3333  Error in x=0.3472  Error in y=0.0833  Error in z=0.3333

i=2  x=1.4726  y=0.4397  z=1.4226  Error in x=0.1279  Error in y=0.5203  Error in z=0.0670

i=3  x=1.4301  y=0.5029  z=1.4084  Error in x=0.0288  Error in y=0.1436  Error in z=0.0100

i=4  x=1.4469  y=0.4986  z=1.4162  Error in x=0.0117  Error in y=0.0086  Error in z=0.0055

i=5  x=1.4404  y=0.5006  z=1.4134  Error in x=0.0045  Error in y=0.0040  Error in z=0.0020

i=6  x=1.4429  y=0.4998  z=1.4145  Error in x=0.0017  Error in y=0.0016  Error in z=0.0008

i=7  x=1.4420  y=0.5001  z=1.4141  Error in x=0.0007  Error in y=0.0006  Error in z=0.0003

i=8  x=1.4424  y=0.5000  z=1.4143  Error in x=0.0003  Error in y=0.0002  Error in z=0.000

CHAPTER FOUR

INTERPOLATION
After studying this chapter, you should be able to:
 Define finite differences
 Construct finite difference table

 Explain the influence of round off errors


 Define finite difference operators
 Find finite differences of functions
 Explain the relation between and among the finite difference operators
 Explain how the finite difference table of polynomial functions behave
 Define interpolation.
 Mention the uses of interpolation.
 Derive Newton’s interpolation formulae.
 Solve problems using Newton’s interpolation formulae.
 Derive the Lagrange interpolation formula.
 Solve problems using the Lagrange interpolation method.
 Derive the divided difference interpolation formula.
 Solve problems using the divided difference interpolation method.
 Write algorithms for the Newton interpolation method, Lagrange interpolation method and
divided difference interpolation method.
 Write C++ programs for the Newton interpolation method, Lagrange interpolation method and
divided difference interpolation method.

4.1 Introduction
Given the set of tabular values ( x 0 , y 0 ) , ( x 1 , y 1 ) , ( x 2 , y 2 ) , … , ( xn , y n ) satisfying the relation

y=f ( x ) where the explicit nature of f (x) is not known, it is required to find a simpler function,
say ϕ ( x ), such that f (x) and ϕ ( x ) agree at the set of tabulated points. Such a process is called
interpolation. If ϕ ( x ) is a polynomial, then the process is called polynomial interpolation and
ϕ ( x ) is called the interpolating polynomial. Similarly, different types of interpolation arise
depending on whether ϕ ( x ) is a trigonometric function, exponential functions, etc.

Numerical interpolation approximates functions and we approximate functions for one or several
of the following reasons:

 A large number of important mathematical functions may only be known through tables
of their values.
 Some function may be known to exist but are computationally too complex to manipulate
numerically.
 Some function may be known but the solution of the problem in which they appear may
not have an obvious mathematical expression to work with.

Some of the methods of interpolation that will be considered in this unit include Newton’s
Forward and Backward difference interpolation formulae, Newton’s divided difference
interpolation formula and the Lagrangian interpolation.

4.2. Finite Difference


Assume that we have a table of values (xi, yi), i = 0, 1, 2, …, n of any function y = f(x), the values
of x being equally spaced, i.e., xi = x0 + ih, i = 0, 1, 2, …, n. Suppose that we are required to recover
the values of f (x) for some intermediate values of x , or to obtain the derivative of f (x) for some
x in the range x 0 ≤ x ≤ x n. The methods for the solution to these problems are based on the
concept of the differences of a function which we now proceed to define.

4.2.1. Forward Differences


If y 0 , y 1 , y 2 ,… , y n denotes a set of values of y , then y 1− y 0 , y 2− y 1 , … , y n − y n−1 are called the
differences of y . Denoting these differences by ∆ y 0 , ∆ y 1 , ∆ y 2 , … , ∆ y n−1 respectively, we have

∆ y 0 = y 1− y 0 , ∆ y 1= y 2− y 1 , …, ∆ y n−1= y n− y n−1

Where ∆ is called the forward difference operator and ∆ y 0 , ∆ y 1 , ∆ y 2 , … are called first ordered
forward differences. The differences of first ordered forward differences are called second
ordered forward differences and are denoted by ∆²y0, ∆²y1, ∆²y2, … . Similarly, one can define
3rd, 4th, …, nth order forward differences.

   ∆²y0 = ∆y1 − ∆y0 = y2 − 2y1 + y0

   ∆³y0 = ∆²y1 − ∆²y0 = y3 − 3y2 + 3y1 − y0

   ∆⁴y0 = ∆³y1 − ∆³y0 = y4 − 4y3 + 6y2 − 4y1 + y0

      ⋮

   ∆ⁿy0 = yn − nC1 yn−1 + nC2 yn−2 − … + (−1)ⁿ y0

Table4.1 Forward Difference Table

x y=f (x ) ∆ ∆2 ∆3 ∆4 ∆5

x0 y0

∆ y0

2
x1 y1 ∆ y0
3
∆ y1 ∆ y0
4
x2 y2 2
∆ y1 ∆ y0
5
∆ y2 ∆ 3 y1 ∆ y0

x3 y3 2
∆ y2
4
∆ y1

∆ y3 3
∆ y2

x4 y4 2
∆ y3
∆ y4

x5 y5

Example

Given f ( 0 )=3, f ( 1 ) =12, f ( 2 ) =81, f ( 3 )=200, f ( 4 ) =100,a nd f ( 5 ) =8.

a. Find ∆ 5 f (0) b. ∆ 2 f (2) c. ∆ 4 f (1)


Solution
   x    y = f(x)      ∆         ∆²        ∆³         ∆⁴       ∆⁵

   0      3
                       9
   1     12                     60
                      69                 −10
   2     81                     50                  −259
                     119                −269                  755
   3    200                   −219                   496
                    −100                 227
   4    100                      8
                     −92
   5      8

Hence ∆⁵f(0) = 755,  ∆²f(2) = −219,  ∆⁴f(1) = 496.

4.2.2. Backward Differences


The differences y 1− y 0 , y 2− y 1 , … , y n − y n−1 are called first backward differences if they are
denoted by ∇ y 1 , ∇ y 2 , ∇ y 3 , … , ∇ y n respectively, so that

∇ y 1= y 1− y 0 , ∇ y 2= y 2− y 1 , …, ∇ y n= y n− y n−1

Where ∇ is called the backward difference operator. In a similar way, one can define backward
differences of higher orders. Thus we obtain:

   ∇²y2 = ∇y2 − ∇y1 = y2 − 2y1 + y0

   ∇³y3 = ∇²y3 − ∇²y2 = y3 − 3y2 + 3y1 − y0

   ∇⁴y4 = ∇³y4 − ∇³y3 = y4 − 4y3 + 6y2 − 4y1 + y0

      ⋮

   ∇ⁿyn = ∇ⁿ⁻¹yn − ∇ⁿ⁻¹yn−1 = yn − nC1 yn−1 + nC2 yn−2 − … + (−1)ⁿ y0

Table4.2 Backward Difference Table

x y=f (x ) ∇ ∇
2

3

4

5

x0 y0

∇ y1

x1 y1 ∇2 y2
3
∇ y2 ∇ y3

x2 y2 2
∇ y3 ∇4 y4

∇ y3 3
∇ y4 ∇ 5 y5

x3 y3 ∇2 y 4
4
∇ y5

∇ y4 ∇ 3 y5

x4 y4 2
∇ y5
∇ y5

x5 y5

Example

Construct the backward difference table for y=log 10 x given below

x 10 20 30 40 50

y 1 1.3010 1.4771 1.6021 1.6990

4.2.3. Central Differences


The central difference operator δ is defined by the relations:

   δf(x) = f(x + h/2) − f(x − h/2) ,  which is equivalent to  δy_{n−1/2} = yn − yn−1 .

Similarly, higher-order central differences can be defined as

   δ²y1 = δy_{3/2} − δy_{1/2} ,   δ³y_{3/2} = δ²y2 − δ²y1 ,   δ⁴y2 = δ³y_{5/2} − δ³y_{3/2}   and so on.

Table 4.3 Central Difference Table

x y=f ( x ) δ δ
2
δ
3
δ
4
δ
5

x0 y0

y1 δ y1
2
2
x1 δ y1

δ y3
2 3
y2 δ y3
2

x2 2
δ y2
4
3
δ y2
δ y5
δ y5 2
y3 2

5
x3 2
δ y3 δ y5
2

δ y7 4
δ y3
2
3
δ y7
y4 2

x4 2
δ y4

δ y9
2
x5 y5

Example

1
Construct central difference table for f ( x )= , x ∈ [ 3,3.5 ] ∧h=0.1
x

x y=f ( x ) δ δ
2
δ
3
δ
4
δ
5

3 0.3333

δ y 1 =-0.0107
2

3.1 0.3225 2
δ y 1 =0.0006

δ y 3 =-0.0101
2
δ 3 y 3 =0
2
3.2 0.3125 2
δ y 2 =0.0006
4
3
δ y 2 =0
δ y 5 =0
δ y5 =- 2
2

3.3 0.3030 0.0095 5


2
δ y 3 =0.0006 δ y 5 =-0.001
2
4
δ y 3 =-0.0001
δ y7 =- δ3 y7
2
=-
2

3.4 0.2941 0.0089 0.0001


2
δ y4
=0.0005

3.5 0.2857 δ y9 =-
2

0.0084

4.2.4. The shift Operator (E)


The operator E is called shift operator or displacement or translation operator. It shows the
operation of increasing the argument value x by its interval of differencing h so that:

Ef ( x )=f ( x +h) in the case of a continuous variable x , and

E y x = y x+1 in the case of a discrete variable.

Similarly, E²f(x) = E f(x + h) = f(x + 2h).

Powers of the operator (positive or negative) are defined in a similar manner:

   Eⁿ f(x) = f(x + nh) ,   Eⁿ yx = y_{x+nh}

4.2.5. Averaging Operator ( μ)


Averaging operator which is denoted by μ is defined by

   μf(x) = [ f(x + h/2) + f(x − h/2) ] / 2 = [ E^(1/2) f(x) + E^(−1/2) f(x) ] / 2

Hence   μ = ( E^(1/2) + E^(−1/2) ) / 2

4.2.6. Differential Operator ( D )

Differential operator which is denoted by D is defined by

   Ef(x) = f(x + h) = f(x) + h f′(x) + h²f″(x)/2! + h³f‴(x)/3! + …
         = f(x) + hD f(x) + (hD)² f(x)/2! + …

   Ef(x) = [ 1 + hD + (hD)²/2! + (hD)³/3! + … ] f(x)

   E = 1 + hD + (hD)²/2! + (hD)³/3! + …

   E = e^(hD)

   ln(E) = hD  ⟹  D = ln(E)/h

Hence

1. E = 1 + ∆ = e^(hD)

2. D = ln(1 + ∆)/h = (1/h)[ ∆ − ∆²/2 + ∆³/3 − ∆⁴/4 + … ]

Relationship between the operators

1. ∆f(x) = f(x + h) − f(x)

   f(x + h) − f(x) = Ef(x) − f(x) = (E − 1) f(x)

   ∆f(x) = (E − 1) f(x)  ⟹  ∆ = E − 1

2. ∇f(x) = f(x) − f(x − h) = f(x) − E⁻¹ f(x)

   ∇f(x) = (1 − E⁻¹) f(x)  ⟹  ∇ = 1 − E⁻¹

3. δf(x) = f(x + h/2) − f(x − h/2) = E^(1/2) f(x) − E^(−1/2) f(x)

   δ = E^(1/2) − E^(−1/2)

Example

1. Show that  μ = ( 1 + δ²/4 )^(1/2)

Solution

   1 + δ²/4 = 1 + ( E^(1/2) − E^(−1/2) )²/4 = ( E + 2 + E⁻¹ )/4 = ( ( E^(1/2) + E^(−1/2) )/2 )² = μ²

   ⟹ ( 1 + δ²/4 )^(1/2) = ( E^(1/2) + E^(−1/2) )/2 = μ

2. Evaluate the following

a. ∆ tan⁻¹x

b. (∆/E)(3x − 4)

c. ∆( eˣ log 2x )

d. ∆( x² / cos 2x )

Solution

a. ∆ tan⁻¹x = tan⁻¹(x + h) − tan⁻¹x

   Let tan⁻¹(x + h) = A and tan⁻¹x = B. Then

   tan(A − B) = (tan A − tan B)/(1 + tan A tan B) = (x + h − x)/(1 + x(x + h)) = h/(1 + x(x + h))

   A − B = ∆ tan⁻¹x = tan⁻¹( h / (1 + x(x + h)) )

b. (∆/E)(3x − 4) = (1/E)( ∆(3x − 4) ) = E⁻¹( ∆(3x − 4) ) = E⁻¹( 3(x + h) − 4 − (3x − 4) )

   (∆/E)(3x − 4) = E⁻¹(3h) = 3h

Exercise

Prove that

a. δ = ∇(1 − ∇)^(−1/2)

b. ∆∇ = ∆ − ∇ = δ²

c. ∆ + ∇ = ∆/∇ − ∇/∆

d. δ( E^(1/2) + E^(−1/2) ) = ∆( E⁻¹ + 1 )

e. ∇ = 1 − e^(−hD)

4.3. Interpolation with Equally Spaced points


4.3.1. Newton’s Forward Interpolation Formula
Let the function y=f ( x ) take the values y 0 , y 1 , … , y n corresponding to the values x 0 , x 1 , … , x n
of x . Let these values of x be equi-spaced such that x i=x 0 +ih (i=1,2,3 , …). Assuming y (x ) to
be polynomial of the nth degree in x such that

y ( x 0 )= y 0 , y ( x 1 ) = y 1 , … , y ( xn ) = y n

Let the polynomial representation of( x 0 , y 0 ) ,(x1 , y 1) ,… ,( x n , y n) be

y ( x ) =a0 + a1 ( x−x 0 ) + a2 ( x−x 0 )( x −x1 ) + a3 ( x−x 0 )( x −x1 ) ( x−x 2 ) +…+ an ( x−x 0 )( x−x1 ) … ( x−x n−1 )
………………………..(*)

Putting x=x 0 and solve for a 0 we get a 0= y 0

Putting x=x 1 and solve for a 1 we get

   y(x1) = a0 + a1 (x1 − x0)

   ⟹ y1 = y0 + a1 h

   ⟹ a1 = (y1 − y0)/h = ∆y0 / h

In a similar manner we get  a2 = ∆²y0 / (2!h²),  a3 = ∆³y0 / (3!h³),  …,  an = ∆ⁿy0 / (n!hⁿ).

Substituting these values of the ai's in (*) we obtain

   y(x) = y0 + (∆y0/h)(x − x0) + (∆²y0/2!h²)(x − x0)(x − x1) + (∆³y0/3!h³)(x − x0)(x − x1)(x − x2) + …
              + (∆ⁿy0/n!hⁿ)(x − x0)(x − x1)…(x − xn−1)

Now let P = (x − x0)/h. Then

   (x − xn)/h = (x − x0 + x0 − xn)/h = (x − x0)/h − (xn − x0)/h = P − n

So (*) can be written in the form

   y(x) = y0 + P ∆y0 + P(P−1)/2! ∆²y0 + P(P−1)(P−2)/3! ∆³y0 + … + P(P−1)(P−2)…(P−n+1)/n! ∆ⁿy0

which is called Newton’s Forward Interpolation Formula.



We use Newton’s Forward Interpolation Formula at the beginning of the interval

Example

1
a. Construct the difference table for f ( x )= , x ∈ [ 2,2.5 ] ∧h=0.1
x
b. Using NFIF Find i. f ( 2.25 ) ii. f ( 2.35 )
Solution

a.

x y=f (x ) ∆ ∆2 ∆3 ∆4 ∆5

2 0.5

-0.024

2.1 0.476 0.003

-0.021 -0.002

2.2 0.455 0.001 0.003

-0.020 0.001 −0.005

2.3 0.435 0.002 −0.002

-0.018 -0.001

2.4 0.417 0.001

-0.017

2.5 0.400

b i.  P = (x − x0)/h ,  for h = 0.1, x = 2.25 and x0 = 2.2  ⟹  P = (2.25 − 2.2)/0.1 = 0.5

   f(x) = y(x) = y0 + P ∆y0 + P(P−1)/2! ∆²y0 + P(P−1)(P−2)/3! ∆³y0 + … + P(P−1)(P−2)…(P−n+1)/n! ∆ⁿy0

   ⟹ y(2.25) = 0.455 + (0.5)(−0.020) + [0.5(0.5−1)/2!](0.002) + [0.5(0.5−1)(0.5−2)/3!](−0.001) + …

   y(2.25) ≅ 0.4446875

The exact value is y(2.25) = 1/2.25 = 0.4444… .
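The forward-difference table and the interpolated value can be checked with the Matlab sketch below; it is a generic illustration (the function name nfif is an assumption), and because it uses all tabulated points starting from the first one, its value at 2.25 is close to, but not identical with, the truncated hand computation above that starts the table at x0 = 2.2.

function yp = nfif(x, y, xp)
% Newton's forward interpolation at the point xp, for equally spaced x.
n = length(y);
h = x(2) - x(1);
D = zeros(n);                       % forward difference table
D(:,1) = y(:);
for j = 2:n
    D(1:n-j+1, j) = diff(D(1:n-j+2, j-1));
end
P = (xp - x(1)) / h;
yp = D(1,1);
term = 1;
for j = 1:n-1
    term = term * (P - (j-1)) / j;  % P(P-1)...(P-j+1)/j!
    yp = yp + term * D(1, j+1);
end
end

Example usage: nfif(2.0:0.1:2.5, [0.5 0.476 0.455 0.435 0.417 0.400], 2.25).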

Example

Find the 4th degree polynomial which takes the following values

x 0 2 4 6 8 10

y 1 2 1 10 9 ?

Solution

   P = (x − 0)/2 = x/2

   y(x) = 1 + 7x − (65/12)x² + (5/4)x³ − x⁴/12

Example

From the table estimate the number of students who obtained marks b/n 40 and 45.

Mark 30-40 40-50 50-60 60-70 70-80

No.of student 31 42 51 35 31

Solution

First we prepare the cumulative frequency table as follows

Mark less than (x) 40 50 60 70 80

No.of student 31 73 124 159 190



Prepare a forward difference table as follows

x yx ∆ yx ∆ yx
2 3
∆ yx ∆ yx
4

40 31

42

50 73 9

51 -25

60 124 -16 37

35 12

70 159 -4

31

80 190

We shall find y 45 that is number of students with mark less than 45.

x−x 0 5
Taking x 0=40 , x=45 we have p= = =0.5
h 10

Using NFIF we get



   y45 = y40 + P ∆y40 + P(P−1)/2! ∆²y40 + P(P−1)(P−2)/3! ∆³y40 + P(P−1)(P−2)(P−3)/4! ∆⁴y40

   y45 = 31 + (0.5)(42) + [0.5(0.5−1)/2!](9) + [0.5(0.5−1)(0.5−2)/3!](−25) + [0.5(0.5−1)(0.5−2)(0.5−3)/4!](37)

   y45 = 47.87

The number of students with marks less than 45 is 47.87 i.e.,48.

But the number of students with marks less than 40 is 31.

Hence the number of students getting marks b/n 40 and 45 is 48-31=17.

Finding missing terms

Example

Determine the missing entry in the table below

   X     0     1     2     3     4

   Y     1     4    17     ?    97

Solution

Let y be a polynomial of degree 3.

Hence ∆⁴y0 = 0  ⟹  (E − 1)⁴ y0 = ( E⁴ − 4E³ + 6E² − 4E + 1 ) y0 = 0

   ⟹ E⁴y0 − 4E³y0 + 6E²y0 − 4Ey0 + y0 = 0

   ⟹ y4 − 4y3 + 6y2 − 4y1 + y0 = 0

   ⟹ 97 − 4y3 + 6(17) − 4(4) + 1 = 0  ⟹  y3 = 46

Example

Find the missing entries in the following table

X 0 1 2 3 4 5

Y 1 ? 11 28 ? 116

y0 = 1, y2 = 11, y3 = 28, and y5 = 116. Since four values are known, we assume y = f(x) is a
polynomial of degree three.

Hence ∆⁴y0 = 0 and ∆⁵y0 = 0

   ⟹ ( E⁴ − 4E³ + 6E² − 4E + 1 ) y0 = 0  and  ( E⁵ − 5E⁴ + 10E³ − 10E² + 5E − 1 ) y0 = 0

   ⟹ y4 − 4y3 + 6y2 − 4y1 + y0 = 0  and  y5 − 5y4 + 10y3 − 10y2 + 5y1 − y0 = 0

   ⟹ y4 − 4y1 = 45  and  −5y4 + 5y1 = −285

   ⟹ y1 = 4 and y4 = 61

4.3.2. Newton’s Backward Interpolation Formula


To interpolate a value of f ( x) near to the end of the tabular values we use the following
polynomial

y ( x ) =a0 + a1 ( x−x n ) + a2 ( x−x n) ( x− xn−1 ) + a3 ( x −xn ) ( x−x n−1 ) ( x−x n−2 ) +…+ an ( x−x n )( x− xn−1 ) … ( x−x 1 )
……………….(**)

Now let us find a0, a1, …, an.

Putting x = xn and solving for a0 we get a0 = yn.

Putting x = xn−1 and solving for a1 we get

   y(xn−1) = a0 + a1 (xn−1 − xn)

   ⟹ yn−1 = yn − a1 h

   ⟹ a1 = (yn−1 − yn)/(xn−1 − xn) = (yn − yn−1)/(xn − xn−1) = ∇yn / h

In a similar manner we get  a2 = ∇²yn / (2!h²),  a3 = ∇³yn / (3!h³),  …,  an = ∇ⁿyn / (n!hⁿ).

Let P = (x − xn)/h. Then

   (x − xn−i)/h = (x − xn + xn − xn−i)/h = (x − xn)/h + (xn − xn−i)/h = P + i

   ⟹ (x − xn)/h = P ,  (x − xn)(x − xn−1)/h² = P(P + 1) ,  (x − xn)(x − xn−1)(x − xn−2)/h³ = P(P + 1)(P + 2) , and so on.

Substituting these values of the ai's and P + i in (**) in their respective places, we obtain

   y(x) = yn + P ∇yn + P(P+1)/2! ∇²yn + P(P+1)(P+2)/3! ∇³yn + … + P(P+1)…(P+(n−1))/n! ∇ⁿyn

Example

The population of a town in the decennial census was as given below. Estimate the population for the year 1925.

Year, x 1891 1901 1911 1921 1931

Population, y 46 66 81 93 101
(thousands)

Solution

First we construct the difference table

x y=f (x ) ∇ ∇
2

3

4

1891 46

∇ y 1 =20

1901 66

2
∇ y 2 =-5

1911 81 ∇ y 2 =15 3
∇ y3 =2
4
2
∇ y 3 =-3 ∇ y 4 =-3

1921 93 ∇ y 3 =12 ∇ 3 y 4 =-1

2
∇ y 4 =-4
1931 101 ∇ y 4 =8

   P = (x − xn)/h = (1925 − 1931)/10 = −0.6

   y(x) = yn + P ∇yn + P(P+1)/2! ∇²yn + P(P+1)(P+2)/3! ∇³yn + P(P+1)(P+2)(P+3)/4! ∇⁴yn + …

   y(1925) = 101 + (−0.6)(8) + [(−0.6)(−0.6+1)/2!](−4) + [(−0.6)(−0.6+1)(−0.6+2)/3!](−1)
             + [(−0.6)(−0.6+1)(−0.6+2)(−0.6+3)/4!](−3)

   y(1925) ≅ 96.8368 thousands ≅ 96,837

Exercise

1. The table below gives the value of tan x for 0 . 10 ≤ x ≤0 . 30

x 0.10 0.15 0.20 0.25 0.30

y = tan x    0.1003    0.1511    0.2027    0.2553    0.3093

Find

a. tan0.12 b. tan0.26 c. tan0.5


2. Find the number of men getting wages b/n Rs.10 and Rs.15 from the following data:

Wages in Rs. 0-10 10-20 20-30 30-40



Frequency 9 30 35 42

4.4. Interpolation with unequally spaced point


When the values of the argument are not at equally spaced then we use two such formulae for
interpolation.

1. Lagrange’s interpolation formula


2. Newton’s divided difference formula

4.4.1. Lagrange’s Interpolation Formula


Let y 0=f ( x 0 ) , y 1 =f ( x 1 ) , … , y n =f ( x n ) be(n+1) entries of a function y=f ( x ). Let p( x ) be a
polynomial of degree n corresponding to the arguments x 0 , x 1 , x 2 , … , x n which can be written as:

Pn ( x ) =A 0 ( x−x 1 )( x −x2 ) … ( x−x n ) + A1 ( x−x 0 ) ( x−x 2 ) … ( x−x n ) +…+ An ( x−x 0 ) ( x−x 1 ) … ( x−x n−1)

………………..( 1 )

Where A0 , A 1 , … , A nare constants to be determined.

The constants A0 , A 1 , … , A n, will be determined by considering the tabulated function y=f (x )


and the polynomial function P( x ) agree at the set of tabulated points.

Putting x=x 0 , x 1 , x 2 , … , x n in( 1 ) successively, we get the following.

For x=x 0 , y 0= A0 ( x 0−x 1 )( x0 −x 2) … ( x 0−x n )

y0
That is A0 =
( x 0 −x1 ) ( x 0−x 2 ) … ( x 0−x n )

For x=x 1 , y 1= A 1 ( x 1−x 0 ) ( x 1−x 2 ) … ( x 1−x n )

y1
That is A1=
( x 1−x 0 ) ( x 1−x 2 ) … ( x 1−x n )

Similarly

For x=x n , y n=A n ( x n−x 0 ) ( x n−x 2) … ( x n−x n−1 )



yn
That is An =
( x n− x0 ) ( x n−x 2 ) … ( x n−x n−1 )

Substituting the values of A0 , A 1 , … , A n in equation ( 1 ), we get

   Pn(x) = [ (x − x1)(x − x2)…(x − xn) / ((x0 − x1)(x0 − x2)…(x0 − xn)) ] y0
         + [ (x − x0)(x − x2)…(x − xn) / ((x1 − x0)(x1 − x2)…(x1 − xn)) ] y1 + …
         + [ (x − x0)(x − x1)…(x − xn−1) / ((xn − x0)(xn − x1)…(xn − xn−1)) ] yn        …………(2)

This is called Lagrange’s interpolation formula, which can be written in the general form:

   Pn(x) = Σ_{i=0}^{n} Li(x) fi

where

   Li(x) = Π_{j=0, j≠i}^{n} (x − xj)/(xi − xj)
         = (x − x0)…(x − xi−1)(x − xi+1)…(x − xn) / [ (xi − x0)(xi − x1)…(xi − xi−1)(xi − xi+1)…(xi − xn) ] ,
   (i = 0, 1, 2, …, n)

are individually polynomials of degree n in x and are called the Lagrange interpolation
coefficients.

Note: Li ( x j )= {
1if i= j
0 if i ≠ j

Note: The Lagrange’s interpolation formula can also be used to split the given function into
partial fractions.

Dividing both sides of equation (2) by (x − x0)(x − x1)(x − x2)…(x − xn) we get

   f(x)/[(x − x0)(x − x1)…(x − xn)]
      = [ y0 / ((x0 − x1)(x0 − x2)…(x0 − xn)) ] · 1/(x − x0)
      + [ y1 / ((x1 − x0)(x1 − x2)…(x1 − xn)) ] · 1/(x − x1) + …
      + [ yn / ((xn − x0)(xn − x1)…(xn − xn−1)) ] · 1/(x − xn)

Example

Use Lagrange’s method of interpolation to find the unique polynomial P( x ) of degree 2 such
that : P ( 1 )=1 , P ( 3 )=27 , P ( 4 )=64 .

Solution

   x0 = 1, x1 = 3, x2 = 4  and  y0 = 1, y1 = 27, y2 = 64

   P2(x) = [ (x − x1)(x − x2) / ((x0 − x1)(x0 − x2)) ] y0 + [ (x − x0)(x − x2) / ((x1 − x0)(x1 − x2)) ] y1
         + [ (x − x0)(x − x1) / ((x2 − x0)(x2 − x1)) ] y2

   P2(x) = [ (x − 3)(x − 4) / ((1 − 3)(1 − 4)) ](1) + [ (x − 1)(x − 4) / ((3 − 1)(3 − 4)) ](27)
         + [ (x − 1)(x − 3) / ((4 − 1)(4 − 3)) ](64)

   ⟹ P2(x) = 8x² − 19x + 12
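The same computation can be scripted. The following Matlab sketch of the Lagrange formula is a generic illustration (the function name lagrange_interp is an assumption):

function yp = lagrange_interp(x, y, xp)
% Evaluate the Lagrange interpolating polynomial through (x(i), y(i)) at xp.
n = length(x);
yp = 0;
for i = 1:n
    Li = 1;
    for j = [1:i-1, i+1:n]
        Li = Li * (xp - x(j)) / (x(i) - x(j));   % Lagrange coefficient Li(xp)
    end
    yp = yp + Li * y(i);
end
end

For example, lagrange_interp([1 3 4], [1 27 64], 2) returns 6, the value of P2(x) = 8x² − 19x + 12 at x = 2.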

Exercise

1. The percentages of criminals for different age group are given below. Determine the
percentage number of criminals less than 35 year using the Lagrange’s formula.

Age % No. of
criminals

Under25 year 52

Under30year 67.3

Under40 year 84.1

Under50 year 94.4

Answer 77.405

2. using Lagrange’s formula express the function


x 2 +6 x−1
as a sum of partial fractions.
( x 2−1 ) ( x −4 )( x−6 )
[hint tabulate the value of f ( x )=x 2+ 6 x−1 for x=−1,1,4,6 ]

Solution

   f(x) = x² + 6x − 1 ,  x0 = −1, x1 = 1, x2 = 4, x3 = 6  and  y0 = −6, y1 = 6, y2 = 39, y3 = 71

   (x² + 6x − 1)/[(x² − 1)(x − 4)(x − 6)] = (x² + 6x − 1)/[(x + 1)(x − 1)(x − 4)(x − 6)]

      = [ y0 / ((x0 − x1)(x0 − x2)(x0 − x3)) ] · 1/(x − x0) + [ y1 / ((x1 − x0)(x1 − x2)(x1 − x3)) ] · 1/(x − x1)
      + [ y2 / ((x2 − x0)(x2 − x1)(x2 − x3)) ] · 1/(x − x2) + [ y3 / ((x3 − x0)(x3 − x1)(x3 − x2)) ] · 1/(x − x3)

   (x² + 6x − 1)/[(x + 1)(x − 1)(x − 4)(x − 6)]
      = (3/35)·1/(x + 1) + (1/5)·1/(x − 1) − (13/10)·1/(x − 4) + (71/70)·1/(x − 6)

Inverse Interpolation

In interpolation, we estimate the missing value of the function y = f(x) corresponding to a value of x
intermediate between two given values. In inverse interpolation, we interpolate the argument x
corresponding to an intermediate value y of the entry.

From

   y = [ (x − x1)(x − x2)…(x − xn) / ((x0 − x1)(x0 − x2)…(x0 − xn)) ] y0
     + [ (x − x0)(x − x2)…(x − xn) / ((x1 − x0)(x1 − x2)…(x1 − xn)) ] y1 + …
     + [ (x − x0)(x − x1)…(x − xn−1) / ((xn − x0)(xn − x1)…(xn − xn−1)) ] yn

by interchanging x and y we get

   x = [ (y − y1)(y − y2)…(y − yn) / ((y0 − y1)(y0 − y2)…(y0 − yn)) ] x0
     + [ (y − y0)(y − y2)…(y − yn) / ((y1 − y0)(y1 − y2)…(y1 − yn)) ] x1 + …
     + [ (y − y0)(y − y1)…(y − yn−1) / ((yn − y0)(yn − y1)…(yn − yn−1)) ] xn

This is called the Inverse Interpolation formula.

Example
The following table gives the values of y corresponding to certain values of x. Find the value of x
when
y = 167.59789 by applying Lagrange’s inverse interpolation formula.

x 1 2 5 7

y 1 12 117 317

Solution:
Here x0 = 1, x1 = 2, x2 = 5, x3 = 7, y0 = 1, y1 = 12, y2 = 117, y3 = 317 and y = 167.59789.
By using the inverse Lagrange’s formula we can get x = 5.65238 when y = 167.59789
4.4.2. Newton’s General Interpolation Formula with Divided Difference
Lagrange’s interpolation formula has the disadvantage that if any other interpolation point were
added, the interpolation coefficient will have to be recalculated.

This labor of recomputing the interpolation coefficients is saved by using Newton’s divided
differences interpolation formula.

Let the function y=f ( x ) take the values y 0 , y 1 , … , y n corresponding to the values x 0 , x 1 , … , x n
which are not equally spaced. The difference of the function values with respect to the difference
of the arguments is called divided differences.

   [x0, x1] = (y1 − y0)/(x1 − x0) ,  [x1, x2] = (y2 − y1)/(x2 − x1) ,  … ,  [xn−1, xn] = (yn − yn−1)/(xn − xn−1)

are called first order divided differences.

   [xn−1, xn, xn+1] = ( [xn, xn+1] − [xn−1, xn] ) / (xn+1 − xn−1)
                    = ( (yn+1 − yn)/(xn+1 − xn) − (yn − yn−1)/(xn − xn−1) ) / (xn+1 − xn−1)

is called the second order divided difference for the arguments xn−1, xn, xn+1.

In general, the nth order divided difference for the arguments x0, x1, …, xn is given by

   [x0, x1, …, xn] = ( [x1, x2, …, xn] − [x0, x1, …, xn−1] ) / (xn − x0)

Properties of Divided Differences

1. Divided differences are symmetric with respect to the arguments i.e., independent of the
order of arguments.
i.e.,[ x 0 , x 1 ]=[ x 1 , x 0 ] , [ x n−1 , x n , x n +1 ]=[ x n+1 , x n−1 , x n ] = [ x n , x n+1 , x n−1 ].

2. The nth order divided differences of a polynomial of nth degree are constant

Newton’s Divided Difference Formula

Let y 0 , y 1 , … , y nbe the values of y=f (x ) corresponding to the arguments x 0 , x 1 , … , x n.

Then from the definition of divided differences, we have

   [x, x0] = (y − y0)/(x − x0)

   ⟹ y = y0 + (x − x0)[x, x0]   …………..(i)

   [x, x0, x1] = ( [x, x0] − [x0, x1] ) / (x − x1)   ⟹   [x, x0] = [x0, x1] + (x − x1)[x, x0, x1]

Substituting this value of [x, x0] in (i), we get

   y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x, x0, x1]   …………….(ii)

   [x, x0, x1, x2] = ( [x, x0, x1] − [x0, x1, x2] ) / (x − x2)

   ⟹ [x, x0, x1] = [x0, x1, x2] + (x − x2)[x, x0, x1, x2]

Substituting this value of [x, x0, x1] in (ii), we get

   y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)( [x0, x1, x2] + (x − x2)[x, x0, x1, x2] )

   y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x0, x1, x2] + (x − x0)(x − x1)(x − x2)[x, x0, x1, x2]

Proceeding in this manner, we get

   y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x0, x1, x2]
         + (x − x0)(x − x1)(x − x2)[x0, x1, x2, x3] + …
         + (x − x0)(x − x1)(x − x2)…(x − xn−1)[x0, x1, …, xn]
         + (x − x0)(x − x1)(x − x2)…(x − xn)[x, x0, x1, …, xn]

which is called Newton’s general interpolation formula with divided differences.

Table 4.4: Table of Divided Differences

x y 1st DD 2nd DD 3rd DD 4th DD 5th DD

y0

x0 [ x0 , x1 ]
y1 [ x0 , x1 , x2 ]
x1 [ x1 , x2 ] [ x0 , x1 , x2 , x3 ]
y2 [ x1 , x2 , x3 ] [ x0 , x1 , x2 , x3 , x4 ]
x2 [ x2 , x3 ] [ x1 , x2 , x3 , x4 ] [ x0 , x1 , x2 , x3 , x4 , x5 ]
y3 [ x2 , x3 , x4 ] [ x1 , x2 , x3 , x4 , x5 ]
x3 [ x2 , x3 , x4 , x5 ]
[ x3 , x4 ]
y4
[ x 3 , x 4 , x5 ]
x4
[ x 4 , x5 ]
y5

x5

Example

Apply Newton’s divided difference formula find f (8) if

f ( 1 ) =3 , f ( 3 )=31 , f ( 6 ) =223 , f ( 10 )=1011 , f ( 11 )=1343

Solution

x y=f ( x ) 1st DD 2nd DD 3rd DD 4th DD

1 3

31−3
=14
3−1

3 31 64−14
6−1
19−10
=10
223−31 10−1
=64
6−3
=1
6 223 1−1
=0
11−1
197−64
10−3
1011−223
27−19
10−6 =19
1011 11−3
10
=197
=1

332−197
11−6
1343−1011
11−10 =27
11 1343
=332

y= y 0+ ( x−x 0 ) [ x 0 , x1 ] + ( x−x 0 ) ( x−x 1 ) [ x0 , x 1 , x 2 ] + ( x−x 0 )( x −x1 ) ( x−x 2 ) [ x0 , x 1 , x 2 , x 3 ]

+¿ ( x−x 0 ) ( x−x 1 ) ( x−x 2 )( x −x3 ) [ x 0 , x 1 , x 2 , x 3 , x 4 ]

y=3+ ( x−1 ) 14 + ( x−1 ) ( x −3 ) 10+ ( x −1 )( x−3 )( x−6 ) 1+ ( x−1 ) ( x−3 ) ( x−6 ) ( x−10 ) 0

y=3+ ( x−1 ) 14 + ( x−1 ) ( x −3 ) 10+ ( x −1 )( x−3 )( x−6 ) 1

y ( 8 )=3+ ( 8−1 ) 14+ ( 8−1 ) ( 8−3 ) 10+ ( 8−1 ) ( 8−3 )( 8−6 ) 1=521
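A compact Matlab sketch of Newton's divided-difference interpolation is given below as an illustration (the function name newton_dd is an assumption); it reproduces f(8) = 521 for the data of this example.

function yp = newton_dd(x, y, xp)
% Newton's divided-difference interpolation at the point xp.
n = length(x);
c = y(:);                                  % c(k) becomes the divided difference [x(1),...,x(k)]
for j = 2:n
    for i = n:-1:j
        c(i) = (c(i) - c(i-1)) / (x(i) - x(i-j+1));
    end
end
yp = c(n);
for i = n-1:-1:1                           % nested (Horner-like) evaluation
    yp = c(i) + (xp - x(i)) * yp;
end
end

Example usage: newton_dd([1 3 6 10 11], [3 31 223 1011 1343], 8) returns 521.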

Example:

From the following data estimate the number of persons having incomes b/n 2000 and 2500:

Income Below 500 500-1000 1000-2000 2000-3000 3000-4000

No. of 6000 4250 3600 1500 650


persons

Solution

First we prepare the cumulative frequency table as follows

Income less than (x)    500      1000      2000      3000      4000

No. of persons          6000     10250     13850     15350     16000

We have to draw a Newton’s divided difference interpolation table.

X Y 1st DD 2nd DD 3rd DD 4th DD

500 6000

8.5

1000 10250 -0.00326

3.6 0.00000084

2000 13850 -0.00105 0

1.5 0.000000208

3000 15350 -0.000425

0.65

4000 16000

y= y 0+ ( x−x 0 ) [ x 0 , x1 ] + ( x−x 0 ) ( x−x 1 ) [ x0 , x 1 , x 2 ] + ( x−x 0 )( x −x1 ) ( x−x 2 ) [ x0 , x 1 , x 2 , x 3 ]

+¿ ( x−x 0 ) ( x−x 1 ) ( x−x 2 )( x −x3 ) [ x 0 , x 1 , x 2 , x 3 , x 4 ]

y=6000+ ( x−500 ) 8 .5+ ( x−500 )( x−1000 )(−0 . 00326 ) + ( x−500 )( x−1000 ) ( x−2000 ) 0 . 00000084

+¿ 0

y=6000+ ( 2500−500 ) 8 . 5+ ( 2500−500 )( 2500−1000 )(−0. 00326 )+ ( 2500−500 ) ( 2500−1000 ) ( 2500−2000 ) 0 . 00

+¿ 0

y=6000+ ( 2000 ) 8. 5+ ( 2000 )( 1500 ) (−0 . 00326 ) +¿

( 2000 ) ( 1500 )( 500 ) 0 . 00000084+ 0

y 2500 =14480

That means the number of persons having income less than 2500 is approximately 14675 (the rounded
divided differences used above give 14480).

The number of persons having income less than 2000 is 13850.

Hence the number of persons having income between 2000 and 2500 is approximately 14675 − 13850 = 825.



CHAPTER FIVE

NUMERICAL DIFFERENTIATION AND INTEGRATION


Chapter Objectives
After studying this chapter, you should be able to:
 Derive the formula for numerical differentiation.
 Find the derivatives of functions using a numerical method.
 Derive a formula for the trapezoidal rule.
 Set an error bound for the trapezoidal rule.
 Solve definite integrals using the trapezoidal rule.
 Derive a formula for Simpson’s rule.
 Set an error bound for Simpson’s rule.
 Solve definite integrals using Simpson’s rule.
5.1 Introduction
Calculus is the mathematics of change. Because engineers and scientists must continuously deal
with systems and processes that change, calculus is an essential tool of their profession.

There are so many methods available to find the derivative and definite integration of a function.
But when we have a complicated function or a function given in tabular form, then we use
numerical methods. In this unit, we shall be concerned with the problem of numerical
differentiation and integration.

The function to be differentiated or integrated will typically be in one of the following three
forms:

1. A simple continuous function such as a polynomial, an exponential or a trigonometric


function.
2. A complicated continuous function that is difficult or impossible to differentiate or
integrate directly.
3. A tabulated function where values of x and f (x) are given at a number of discrete points

In the first case, the derivative or integral of a simple function may be evaluated analytically
using calculus. For the second case, analytical solutions are often impractical, and sometimes
impossible, to obtain. In these instances, as well as in the third case of discrete data, approximate
methods must be employed.

5.2. Numerical Differentiation


The method of obtaining the derivatives of a function using a numerical technique is known as
numerical differentiation. There are essentially two situations where numerical differentiation is
required.

1. The function values are known but the function is unknown, such functions are called
tabulated function.
2. The function to be differentiated is complicated and, therefore, it is difficult to
differentiate.

The choice of the formula is the same as the choice for interpolation. If the derivative at a
point near the beginning of a set of values given by a table is required then we use Newton
forward interpolation formula, and if the same is required at a point near the end of the set of
given tabular values, then we use Newton backward interpolation formula. The central
difference formulae (Bessel’s and Stirling’s) are used to calculate values for points near the middle
of the set of given tabular values. If the values of x are not equally spaced, we use Newton’s
divided difference interpolation formula to get the required value of the derivative.

5.2.1. Derivatives Using Newton’s Forward Interpolation Formula


Newton’s forward interpolation is given by

   y(x) = y0 + P ∆y0 + P(P−1)/2! ∆²y0 + P(P−1)(P−2)/3! ∆³y0 + … + P(P−1)(P−2)…(P−n+1)/n! ∆ⁿy0   ……(i)

where P = (x − x0)/h.

Differentiating equation (i) with respect to P we get:

   dy/dP = ∆y0 + (2P − 1)/2! ∆²y0 + (3P² − 6P + 2)/3! ∆³y0 + …

and

   dy/dx = (dy/dP)(dP/dx) = (1/h) dy/dP

Therefore

   dy/dx = (1/h)[ ∆y0 + (2P − 1)/2! ∆²y0 + (3P² − 6P + 2)/3! ∆³y0 + … ]   …………(ii)

At x = x0, P = 0; therefore putting P = 0 in (ii) we get

   [dy/dx]_{x=x0} = (1/h)[ ∆y0 − (1/2)∆²y0 + (1/3)∆³y0 − (1/4)∆⁴y0 + (1/5)∆⁵y0 − (1/6)∆⁶y0 + … ]

Differentiating (ii) again w.r.t. x, we get

   d²y/dx² = d/dP(dy/dx) · dP/dx = (1/h) d/dP(dy/dx)

           = (1/h²)[ ∆²y0 + (P − 1)∆³y0 + (6P² − 18P + 11)/12 ∆⁴y0 + … ]   …………..(iii)

Putting P = 0 in (iii) we get:

   [d²y/dx²]_{x=x0} = (1/h²)[ ∆²y0 − ∆³y0 + (11/12)∆⁴y0 − (5/6)∆⁵y0 + (137/180)∆⁶y0 − … ]

Similarly,

   [d³y/dx³]_{x=x0} = (1/h³)[ ∆³y0 − (3/2)∆⁴y0 + … ]
5.2.2. Derivatives Using Newton’s Backward Differences Formula
Newton’s backward interpolation formula is given by:

   y(x) = yn + P ∇yn + P(P+1)/2! ∇²yn + P(P+1)(P+2)/3! ∇³yn + … + P(P+1)…(P+(n−1))/n! ∇ⁿyn   ……(i)

where P = (x − xn)/h.

Differentiating both sides of equation (i) w.r.t. P, we get:

   dy/dP = ∇yn + (2P + 1)/2! ∇²yn + (3P² + 6P + 2)/3! ∇³yn + (4P³ + 18P² + 22P + 6)/4! ∇⁴yn + …

and

   dy/dx = (dy/dP)(dP/dx) = (1/h) dy/dP

   dy/dx = (1/h)[ ∇yn + (2P + 1)/2! ∇²yn + (3P² + 6P + 2)/3! ∇³yn + (4P³ + 18P² + 22P + 6)/4! ∇⁴yn + … ]   ……(ii)

Putting P = 0 in (ii) we get:

   [dy/dx]_{x=xn} = (1/h)[ ∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + (1/5)∇⁵yn + (1/6)∇⁶yn + … ]   ………(iii)

Differentiating (ii) w.r.t. x we get:

   d²y/dx² = d/dP(dy/dx) · dP/dx = (1/h) d/dP(dy/dx)

   d²y/dx² = (1/h²)[ ∇²yn + (6P + 6)/6 ∇³yn + (12P² + 36P + 22)/24 ∇⁴yn + … ]   ………(iv)

Putting x = xn, P = 0 in equation (iv) we get:

   [d²y/dx²]_{x=xn} = (1/h²)[ ∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + (137/180)∇⁶yn + … ]

Similarly,

   [d³y/dx³]_{x=xn} = (1/h³)[ ∇³yn + (3/2)∇⁴yn + … ]
Example

From the following table, find

   i.   [dy/dx] at x = 1.1
   ii.  [d²y/dx²] at x = 1.1
   iii. [dy/dx] at x = 1.5

   x    1.0     1.1     1.2     1.3     1.4     1.5     1.6

   y    7.989   8.403   8.781   9.129   9.451   9.750   10.031

Solution

Since the approximation is required near the beginning and the end of the table, we apply Newton’s
forward difference formula for i and ii, and Newton’s backward difference formula for iii.

Forward difference table:

x y ∆ ∆2 ∆3 ∆4 ∆5 ∆6

1.0 7.989

0.414

1.1 8.403 −0.036

0.378 0.006

1.2 8.781 -0.030 -0.002

0.348 0.004 0.001

1.3 9.129 -0.026 -0.001 -0.001 -0.002

0.322 0.003

1.4 9.451 -0.023 -0.002

0.299 0.005

1.5 9.750 -0.018

0.281

1.6 10.031

We have

i.  [dy/dx]_{x=x0} = (1/h)[ ∆y0 − (1/2)∆²y0 + (1/3)∆³y0 − (1/4)∆⁴y0 + (1/5)∆⁵y0 − (1/6)∆⁶y0 + … ]

    Here h = 0.1 and x0 = 1.1:

    [dy/dx]_{x=1.1} = (1/0.1)[ 0.378 − (1/2)(−0.030) + (1/3)(0.004) − (1/4)(−0.001) + (1/5)(−0.001) − (1/6)(0) … ]

    [dy/dx]_{x=1.1} = 3.9435

ii. [d²y/dx²]_{x=x0} = (1/h²)[ ∆²y0 − ∆³y0 + (11/12)∆⁴y0 − (5/6)∆⁵y0 … ]

    [d²y/dx²]_{x=1.1} = (1/(0.1)²)[ (−0.030) − (0.004) + (11/12)(−0.001) − (5/6)(−0.001) … ]

    [d²y/dx²]_{x=1.1} ≈ −3.408

Backward difference table:

x y=f (x ) ∇ ∇
2

3

4

5

6

1.0 7.989

0.414=∇ y 1

1.1 8.403 −0.036=∇ 2 y 2

0.378¿ ∇ y 2 0.006 ¿ ∇ 3 y 3

1.2 8.781 -0.030¿ ∇ 2 y 3 -0.002 ¿ ∇ 4 y 4

0.348¿ ∇ y3 0.004¿ ∇ 3 y 4 0.001¿ ∇ 5 y 5

1.3 9.129 -0.002=


-0.026¿ ∇ 2 y 4 -0.001¿ ∇ 4 y 5

0.322¿ ∇ y 4 3
0.003¿ ∇ y 5 -0.001¿ ∇ 5 y 6 ∇6 y6

1.4 9.451
-0.023¿ ∇ 2 y 5 -0.002¿ ∇ 4 y 6

0.299¿ ∇ y5 0.005¿ ∇ 3 y 6
1.5 9.750
-0.018¿ ∇ 2 y 6
0.281¿ ∇ y 6
1.6 10.031

iii. [dy/dx]_{x=1.5}

    Here h = 0.1 and xn = 1.5:

    [dy/dx]_{x=xn} = (1/h)[ ∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + (1/5)∇⁵yn + (1/6)∇⁶yn + … ]

    [dy/dx]_{x=1.5} = (1/0.1)[ (0.299) + (1/2)(−0.023) + (1/3)(0.003) + (1/4)(−0.001) + (1/5)(0.001) + (1/6)(0) ]

    [dy/dx]_{x=1.5} ≈ 2.8845

Similarly, using

    [d²y/dx²]_{x=xn} = (1/h²)[ ∇²yn + ∇³yn + (11/12)∇⁴yn + (5/6)∇⁵yn + (137/180)∇⁶yn + … ]

    [d²y/dx²]_{x=1.5} = (1/(0.1)²)[ (−0.023) + (0.003) + (11/12)(−0.001) + (5/6)(0.001) + (137/180)(0) ] ≈ −2.008
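These hand computations can be checked with a short Matlab sketch that builds the forward-difference table and applies the formulas at x0 = 1.1 (an illustration; variable names are assumptions). Only the first three differences are used, since the higher ones are dominated by rounding, as the error remarks below explain.

x = 1.0:0.1:1.6;
y = [7.989 8.403 8.781 9.129 9.451 9.750 10.031];
h = 0.1;
n = length(y);
D = zeros(n); D(:,1) = y(:);              % forward-difference table
for j = 2:n
    D(1:n-j+1, j) = diff(D(1:n-j+2, j-1));
end
d = D(2, 2:4);                            % first three differences at x0 = 1.1
dy1 = (1/h)  *(d(1) - d(2)/2 + d(3)/3);   % about  3.943
dy2 = (1/h^2)*(d(2) - d(3));              % about -3.4
fprintf('dy/dx(1.1) ~ %.4f,  d2y/dx2(1.1) ~ %.3f\n', dy1, dy2)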

Errors in numerical differentiation


It must be recognized that numerical differentiation is subject to considerable error; the basic
difficulty is that, while f(x) − Pn(x) may be small, the differences f′(x) − Pn′(x) and f″(x) − Pn″(x),
etc., may be very large. In geometrical terms, although two curves may be close together, they
may differ considerably in slope, variation in slope, etc.

It should also be noted that all these formulae involve division of a combination of differences
(which are prone to loss of significance or cancellation errors, especially if h is small) by a
positive power of h. Consequently, if we want to keep round-off errors down, we should use a
large value of h. On the other hand, it can be shown that the truncation error is approximately
proportional to h^p, where p is a positive integer, so that h must be sufficiently small for the
truncation error to be tolerable. We are in a cleft stick and must compromise with some optimum
choice of h. In brief, large errors may occur in numerical differentiation based on direct
polynomial approximation, so that an error check is always advisable. There are alternative
methods, based on polynomials, which use more sophisticated procedures such as least-squares
or mini-max, and other alternatives involving other basis functions (for example,
trigonometric functions). However, the best policy is probably to use numerical differentiation
only when it cannot be avoided!
Example
We estimate the values of f′(0.1) and f″(0.1) for f(x) = eˣ, using tabulated data. If we use the
above formulae, we obtain estimates of both derivatives (ignoring fourth and higher differences).

Since f′(0.1) = f″(0.1) = f(0.1) = 1.10517, it is obvious that the second result is much less
accurate (due to round-off errors).
Checkpoint
How are formulas for the derivatives of a function obtained from interpolation formulae?
Why is the accuracy of the usual numerical differentiation process not necessarily increased if the
argument interval is reduced?

When should numerical differentiation be used?

EXERCISES
1. Derive formulae involving backward differences for the first and second derivatives of a
function.
2. A function f(x) is tabulated for x = 1.00(0.05)1.30 to 5D:

a. Estimate the values of f'(1.00) and f"(1.00), using Newton's forward difference formula.
b. Estimate f'(1.30) and f"(1.30), using Newton's backward difference formula.
3. Use Taylor series to find the truncation errors in the formulae:

5.3. Numerical Integration


Like numerical differentiation, we need to seek the help of numerical integration techniques for
the following reasons.

1. Functions do not possess closed form solution.


2. Closed form solutions exist but these solutions are complex and difficult to use for
calculations.
3. Data for variables are available in the form of a table, but no mathematical relationship
b/n them is known as is often the case with experimental data.

5.3.1. Newton-Cotes Quadrature Formula


Let y = f(x) be a function, where y takes the values y_0, y_1, y_2, …, y_n at x_0, x_1, x_2, …, x_n. We want to find the value of

I = ∫_a^b f(x) dx

Let the interval of integration (a, b) be divided into n equal subintervals of width h = (b − a)/n, so that x_0 = a, x_1 = x_0 + h, x_2 = x_0 + 2h, …, x_n = x_0 + nh = b. Therefore

I = ∫_a^b f(x) dx = ∫_{x_0}^{x_0+nh} f(x) dx …………………(i)

We can approximate f(x) by Newton's forward interpolation formula, which is given by:

y(x) = y_0 + P ∆y_0 + [P(P−1)/2!] ∆^2 y_0 + [P(P−1)(P−2)/3!] ∆^3 y_0 + … + [P(P−1)(P−2)…(P−n+1)/n!] ∆^n y_0

where P = (x − x_0)/h, so that dP = (1/h) dx, i.e. dx = h dP.

Therefore equation (i) becomes

I = ∫_a^b f(x) dx = h ∫_0^n [ y_0 + P ∆y_0 + (P(P−1)/2!) ∆^2 y_0 + (P(P−1)(P−2)/3!) ∆^3 y_0 + … ] dP

I = nh [ y_0 + (n/2) ∆y_0 + (n(2n−3)/12) ∆^2 y_0 + (n(n−2)^2/24) ∆^3 y_0 + … up to (n + 1) terms ] …(ii)

This formula is called the Newton-Cotes quadrature formula.

5.3.2. The Trapezoidal Rule


The trapezoidal rule is the simplest practical numerical integration method. It is based on the principle of finding the area of a trapezium: the curve y = f(x) is replaced by a straight line. Typically, we approximate the area A under the curve y = f(x) between the ordinates at x_0 and x_1 by A ≅ (h/2)(y_0 + y_1), where

y_0 = f(x_0), y_1 = f(x_1) and h is the distance between x_0 and x_1.


Derivation of the trapezoidal formula



Putting n = 1 in equation (ii) and taking the curve y = f(x) through (x_0, y_0) and (x_1, y_1) as a polynomial of degree one, so that differences of order higher than one vanish, we get:

∫_{x_0}^{x_0+h} f(x) dx = h( y_0 + (1/2) ∆y_0 ) = (h/2)( 2y_0 + (y_1 − y_0) ) = (h/2)( y_0 + y_1 )

Similarly, for the next subintervals (x_0 + h, x_0 + 2h), (x_0 + 2h, x_0 + 3h), …, we get:

∫_{x_0+h}^{x_0+2h} f(x) dx = (h/2)( y_1 + y_2 ), ∫_{x_0+2h}^{x_0+3h} f(x) dx = (h/2)( y_2 + y_3 ), … , ∫_{x_0+(n−1)h}^{x_0+nh} f(x) dx = (h/2)( y_{n−1} + y_n )

Adding the above integrals, we get:

I = ∫_a^b f(x) dx ≅ (h/2)[ y_0 + 2( y_1 + y_2 + … + y_{n−1} ) + y_n ],

which is known as the trapezoidal rule.
2 2
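As a quick illustration, the composite trapezoidal rule is just a weighted sum of the ordinates. The following MATLAB sketch uses the integrand 1/(1 + x^2) on [0, 6] with h = 1, chosen here only because it reappears in the worked example later in this section:

>> f = @(x) 1./(1 + x.^2);                 % sample integrand (for illustration only)
>> h = 1; x = 0:h:6; y = f(x);             % equally spaced ordinates y0, ..., yn
>> I_trap = (h/2)*( y(1) + 2*sum(y(2:end-1)) + y(end) )   % about 1.4108

MATLAB's built-in trapz(x, y) returns the same value, since it applies exactly these composite trapezoidal weights.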

5.3.3. Simpson's 1/3 Rule


Putting n = 2 in equation (ii) and taking the curve y = f(x) through (x_0, y_0), (x_1, y_1) and (x_2, y_2) as a polynomial of degree two, so that differences of order higher than two vanish, we get:

∫_{x_0}^{x_0+2h} f(x) dx = 2h( y_0 + ∆y_0 + (1/6) ∆^2 y_0 ) = (2h/6)[ 6y_0 + 6( y_1 − y_0 ) + ( y_2 − 2y_1 + y_0 ) ] = (h/3)( y_0 + 4y_1 + y_2 )

Similarly,

∫_{x_0+2h}^{x_0+4h} f(x) dx = 2h( y_2 + ∆y_2 + (1/6) ∆^2 y_2 ) = (h/3)( y_2 + 4y_3 + y_4 ), … ,

∫_{x_0+(n−2)h}^{x_0+nh} f(x) dx = 2h( y_{n−2} + ∆y_{n−2} + (1/6) ∆^2 y_{n−2} ) = (h/3)( y_{n−2} + 4y_{n−1} + y_n )

Adding the above integrals, we get:

I = ∫_a^b f(x) dx ≅ (h/3)[ y_0 + 4( y_1 + y_3 + … + y_{n−1} ) + 2( y_2 + y_4 + … + y_{n−2} ) + y_n ],

which is known as Simpson's one-third rule.



Note: To use Simpson's one-third rule, the given interval of integration must be divided into an even number of subintervals.
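For computation, the composite one-third rule is again a weighted sum of the ordinates. A minimal MATLAB sketch, using for illustration the same integrand 1/(1 + x^2) on [0, 6] with n = 6 subintervals:

>> f = @(x) 1./(1 + x.^2);
>> n = 6; a = 0; b = 6; h = (b - a)/n;     % n must be even for the 1/3 rule
>> x = a:h:b; y = f(x);
>> I_simp13 = (h/3)*( y(1) + 4*sum(y(2:2:end-1)) + 2*sum(y(3:2:end-2)) + y(end) )   % about 1.3662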

5.3.4. Simpson's 3/8 Rule


Putting n = 3 in equation (ii) and taking the curve y = f(x) through (x_0, y_0), (x_1, y_1), (x_2, y_2) and (x_3, y_3) as a polynomial of degree three, so that differences of order higher than three vanish, we get:

∫_{x_0}^{x_0+3h} f(x) dx = 3h( y_0 + (3/2) ∆y_0 + (3/4) ∆^2 y_0 + (1/8) ∆^3 y_0 )

= (3h/8)[ 8y_0 + 12( y_1 − y_0 ) + 6( y_2 − 2y_1 + y_0 ) + ( y_3 − 3y_2 + 3y_1 − y_0 ) ]

= (3h/8)( y_0 + 3y_1 + 3y_2 + y_3 )

Similarly,

∫_{x_0+3h}^{x_0+6h} f(x) dx = (3h/8)( y_3 + 3y_4 + 3y_5 + y_6 ), … , ∫_{x_0+(n−3)h}^{x_0+nh} f(x) dx = (3h/8)( y_{n−3} + 3y_{n−2} + 3y_{n−1} + y_n )

Adding the above integrals, we get:

I = ∫_a^b f(x) dx ≅ (3h/8)[ ( y_0 + y_n ) + 3( y_1 + y_2 + y_4 + y_5 + … + y_{n−2} + y_{n−1} ) + 2( y_3 + y_6 + … + y_{n−3} ) ],

which is known as Simpson's three-eighths rule.

Note: To use Simpson's three-eighths rule, the given interval of integration must be divided into a number of subintervals n that is a multiple of 3.
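Similarly, the composite three-eighths rule is a weighted sum of the ordinates. A possible MATLAB sketch, again using the illustrative integrand 1/(1 + x^2) on [0, 6] with n = 6 (a multiple of 3):

>> f = @(x) 1./(1 + x.^2);
>> n = 6; a = 0; b = 6; h = (b - a)/n;
>> x = a:h:b; y = f(x);
>> w = [1 3 3 2 3 3 1];                    % composite 3/8 weights for n = 6
>> I_simp38 = (3*h/8)*sum(w.*y)            % about 1.3571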

5.3.5. Boole's Rule
Putting n = 4 in equation (ii) and taking the curve y = f(x) through (x_0, y_0), (x_1, y_1), (x_2, y_2), (x_3, y_3) and (x_4, y_4) as a polynomial of degree four, so that differences of order higher than four vanish, we get:

I = ∫_a^b f(x) dx ≅ (2h/45)[ 7( y_0 + y_n ) + 32( y_1 + y_3 + y_5 + … + y_{n−1} ) + 12( y_2 + y_6 + … + y_{n−2} ) + 14( y_4 + y_8 + … + y_{n−4} ) ],

which is known as Boole's rule (here the number of subintervals n is a multiple of 4).

5.3.6. Weddle's Rule
Putting n = 6 in equation (ii) and taking the curve y = f(x) through (x_0, y_0), (x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4), (x_5, y_5) and (x_6, y_6) as a polynomial of degree six, so that differences of order higher than six vanish, we get:

I = ∫_a^b f(x) dx ≅ (3h/10)[ y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + y_6 ],

which is known as Weddle's rule. (For a composite application, n must be a multiple of 6; the pattern of weights repeats, with the junction ordinates y_6, y_12, … receiving the combined weight 1 + 1 = 2.)
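In code, Weddle's rule for n = 6 reduces to a fixed weight vector. A small MATLAB sketch, where the integrand 1/(1 + x^2) on [0, 6] is chosen only because it reappears in the example below:

>> f = @(x) 1./(1 + x.^2);
>> h = 1; x = 0:h:6; y = f(x);
>> w = [1 5 1 6 1 5 1];                    % Weddle's weights for one panel of six subintervals
>> I_weddle = (3*h/10)*sum(w.*y)           % about 1.3735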

Example

Evaluate I = ∫_0^6 dx/(1 + x^2) by using:

a. Trapezoidal rule
b. Simpson’s 1/3 rule
c. Simpson’s 3/8 rule
d. Boole’s rule
e. Weddle’s rule
Solution

Divide the interval (0, 6) into six parts, each of width h = 1. The values of f(x) = 1/(1 + x^2) are:

f(0) = 1, f(1) = 0.5, f(2) = 0.2, f(3) = 0.1, f(4) = 0.0588, f(5) = 0.0385 and f(6) = 0.027.

a. By the trapezoidal rule:

∫_0^6 dx/(1 + x^2) = (h/2)[ y_0 + 2( y_1 + y_2 + y_3 + y_4 + y_5 ) + y_6 ]

= (1/2)[ 1 + 2( 0.5 + 0.2 + 0.1 + 0.0588 + 0.0385 ) + 0.027 ]

= 1.4108
b. By Simpson’s 1/3 rule
6

∫ 1+dxx 2 = h3 [ y 0 +4 ( y 1+ y3 + y 5 ) + 2 ( y 2 + y 4 ) + y 6 ]
0
Module on Numerical Analysis I 2013EC

1
¿
3
[ 1+4 ( 0.5+0.1+0.0385 ) +2 ( 0.2+0.0588 ) +0.027 ] =1.3662
c. By Simpson’s 3/8 rule
6

∫ 1+dxx 2 = 38h [ ( y 0 + y 6 ) +3 ( y 1 + y 2+ y 4 + y 5 ) +2 ( y 3 ) ]
0

3
¿
8
[ ( 1+0.027 ) +3 ( 0.5+ 0.2+ 0.0588+0.0385 ) +2 ( 0.1 ) ]=1.3571
d. By Boole’s 3/8 rule
6

∫ 1+dxx 2 = 245h [ 7 ( y 0+ y 6 ) +32 ( y 1+ y 3 + y 5 ) +12 ( y 2 ) +14 ( y 4 ) ]


0

2
¿
45
[7 ( 1+0.027 )+ 32 ( 0.5+ 0.1+ 0.0385 )+ 12 ( 0.2 ) +14 ( 0.588 ) ]

e. By Weddle’s 3/8 rule


6

∫ 1+dxx 2 = 310h [ y 0+5 y 1 + y 2 +6 y 3 + y 4 +5 y 5 +2 y 6 ]


0

3
¿
10
[ 1+5 ( 0.5 ) +0.2+6 ( 0.1 ) +0.0588+5 ( 0.0385 ) +2 ( 0.027 ) ]
¿ 1.3735 number of persons having income b/n 2000 and 2500 is
14675−13850=825
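These hand values can be checked against MATLAB and against the exact value ∫_0^6 dx/(1 + x^2) = arctan 6 ≈ 1.4056 (a quick sketch using the built-in trapz):

>> f = @(x) 1./(1 + x.^2);
>> x = 0:1:6; y = f(x);
>> I_trap = trapz(x, y)                    % composite trapezoidal rule, about 1.4108
>> I_exact = atan(6)                       % exact value, about 1.4056

With only seven ordinates none of the rules is highly accurate here, and the higher-order rules are not automatically better, since the integrand varies rapidly near x = 0 while h = 1 is comparatively large.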

Numerical double integration


The double integral I = ∫_c^d ∫_a^b f(x, y) dx dy is evaluated numerically by two successive integrations in the x and y directions, considering one variable at a time. Repeated application of the trapezoidal rule (or of Simpson's rule) yields a formula for I.

1. Trapezoidal rule: Dividing the interval (a , b) into n equal sub-intervals each of length h ,
and the interval (c , d) into m equal sub intervals each of length k , we have:
x i=x 0 +ih, x 0=a , x n=b
y j= y 0 + jk , y 0=c , y m=d
Using trapezoidal rule in both directions, we get

I = (h/2) ∫_c^d [ f(x_0, y) + f(x_n, y) + 2{ f(x_1, y) + f(x_2, y) + … + f(x_{n−1}, y) } ] dy

= (hk/4)[ ( f_{0,0} + f_{n,0} + f_{0,m} + f_{n,m} ) + 2( Σ_{i=1}^{n−1} ( f_{i,0} + f_{i,m} ) + Σ_{j=1}^{m−1} ( f_{0,j} + f_{n,j} ) ) + 4 Σ_{i=1}^{n−1} Σ_{j=1}^{m−1} f_{i,j} ]   …(∗)

where f_{i,j} = f(x_i, y_j).
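Formula (∗) is easy to evaluate in MATLAB by tabulating f on the grid and applying trapz in each direction. A minimal sketch, using for illustration the integrand 1/(x + y) on 1 ≤ x, y ≤ 2 with h = k = 0.25 from Example 1 below:

>> f = @(x, y) 1./(x + y);
>> h = 0.25; k = 0.25;
>> x = 1:h:2; y = 1:k:2;
>> [X, Y] = meshgrid(x, y);                % grid of nodes (x_i, y_j)
>> F = f(X, Y);                            % matrix of values f_ij
>> I = trapz(y, trapz(x, F, 2))            % trapezoidal rule in x, then in y

Since trapz applies exactly the composite trapezoidal weights, this reproduces the repeated rule (∗).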

The computational molecule of the method (∗) for n = m = 1 and n = m = 2 can be written as follows.

Simpson’s rule:
If it is undesirable (for example, when using tables) to increase the subdivision of an interval
a<x<b, in order to improve the accuracy of a quadrature, one alternative is to use an
approximating polynomial of higher degree. The integration formula, based on a quadratic
(i.e., parabolic) approximation is called Simpson's Rule. It is adequate for most purposes that
one is likely to encounter in practice.
x j ≤x≤x j +2h
Simpson's Rule gives for

A parabolic arc is fitted to the curve y = f(x) at the three tabular points
x j ≤x≤x j +2h
Hence, if N/2 = (b - a)/2 is even, one obtains Simpson's Rule:
Module on Numerical Analysis I 2013EC

where

Integration by Simpson's Rule involves computation of a finite sum of given values of the
integrand f, as in the case of the trapezoidal rule. Simpson's Rule is also effective for
implementation on a computer; a single direct application in a hand calculation usually gives
sufficient accuracy
For a given integrand f, it is quite appropriate to computer program increased interval
subdivision, in order to achieve a required accuracy, while for hand calculations an error bound
may again be useful.
x j ≤x≤x j + 2h
Let the function f(x) have in the Taylor expansion

Then

One may reformulate the quadrature rule for


x j ≤x≤x j + 2h by replacing fj+2 = f (j+1 +
h) and fj = f (xj+1 - k) by its Taylor series; thus

A comparison of these two versions shows that the truncation error is



We divide the interval (a, b) into 2n equal sub-intervals each of length h, and the interval (c, d) into 2m equal sub-intervals each of length k. Then, applying Simpson's rule in both directions, we get

I = ∫_c^d ∫_a^b f(x, y) dx dy

I = (hk/9) [ { f_{0,0} + 4 Σ_{i=1}^{n} f_{2i−1,0} + 2 Σ_{i=1}^{n−1} f_{2i,0} + f_{2n,0} }

+ 4 Σ_{j=1}^{m} { f_{0,2j−1} + 4 Σ_{i=1}^{n} f_{2i−1,2j−1} + 2 Σ_{i=1}^{n−1} f_{2i,2j−1} + f_{2n,2j−1} }

+ 2 Σ_{j=1}^{m−1} { f_{0,2j} + 4 Σ_{i=1}^{n} f_{2i−1,2j} + 2 Σ_{i=1}^{n−1} f_{2i,2j} + f_{2n,2j} }

+ { f_{0,2m} + 4 Σ_{i=1}^{n} f_{2i−1,2m} + 2 Σ_{i=1}^{n−1} f_{2i,2m} + f_{2n,2m} } ]

where h and k are the spacings in the x and y directions respectively, and

h = (b − a)/(2n), k = (d − c)/(2m)

x_i = x_0 + ih, i = 1, 2, …, 2n − 1

y_j = y_0 + jk, j = 1, 2, …, 2m − 1

x_0 = a, x_{2n} = b, y_0 = c, y_{2m} = d
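In code, the double-Simpson sum is an outer product of the one-dimensional weight vectors (1, 4, 2, …, 2, 4, 1) in each direction. A minimal MATLAB sketch, using for illustration the integrand 1/(xy) over 4 ≤ x ≤ 4.4, 2 ≤ y ≤ 2.6 from Example 2 below, with one double interval in each direction (2n = 2m = 2):

>> f = @(x, y) 1./(x.*y);
>> a = 4; b = 4.4; c = 2; d = 2.6; n = 1; m = 1;
>> h = (b - a)/(2*n); k = (d - c)/(2*m);
>> x = a:h:b; y = c:k:d;
>> wx = [1 4 1]; wy = [1 4 1];             % Simpson weights for 2n = 2m = 2
>> [X, Y] = meshgrid(x, y);
>> F = f(X, Y);
>> I = (h*k/9) * (wy * F * wx')            % weighted double sum, about 0.0250

For finer grids, the weight vectors would simply be extended to [1 4 2 4 … 2 4 1].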

The computational molecule for m = n = 1 and m = n = 2 can be written as follows.



Example

1. Using the trapezoidal rule, evaluate I = ∫_1^2 ∫_1^2 dx dy/(x + y), taking h = k = 0.25 so that n = m = 4.

2. Apply Simpson's rule to evaluate I = ∫_2^2.6 ∫_4^4.4 dx dy/(xy), taking h = 0.2 and k = 0.3, so that each range is divided into two equal subintervals.
Solution

1. Since h = 0.25, along the x-axis we have the nodes 1, 1.25, 1.5, 1.75, 2; since k = 0.25, along the y-axis we also have 1, 1.25, 1.5, 1.75, 2. With f(x, y) = 1/(x + y), the repeated trapezoidal rule (∗) gives

I = (hk/4)[ f(1,1) + f(1,2) + f(2,1) + f(2,2)
+ 2{ f(1,1.25) + f(1,1.5) + f(1,1.75) + f(2,1.25) + f(2,1.5) + f(2,1.75) + f(1.25,1) + f(1.5,1) + f(1.75,1) + f(1.25,2) + f(1.5,2) + f(1.75,2) }
+ 4{ f(1.25,1.25) + f(1.25,1.5) + f(1.25,1.75) + f(1.5,1.25) + f(1.5,1.5) + f(1.5,1.75) + f(1.75,1.25) + f(1.75,1.5) + f(1.75,1.75) } ],

with hk/4 = 1/64, which gives I ≈ 0.3407.
2. I = ∫_2^2.6 ∫_4^4.4 dx dy/(xy).

Since h = 0.2, along the x-axis we have the nodes 4, 4.2, 4.4; since k = 0.3, along the y-axis we have 2, 2.3, 2.6. Applying Simpson's rule in both directions, with the centre node (x_i, y_j) = (4.2, 2.3),

I = (hk/9)[ ( f_{i−1,j−1} + f_{i−1,j+1} + f_{i+1,j−1} + f_{i+1,j+1} ) + 4( f_{i−1,j} + f_{i,j−1} + f_{i,j+1} + f_{i+1,j} ) + 16 f_{i,j} ]

= (0.06/9)[ f(4,2) + f(4,2.6) + f(4.4,2) + f(4.4,2.6) + 4{ f(4,2.3) + f(4.2,2) + f(4.2,2.6) + f(4.4,2.3) } + 16 f(4.2,2.3) ]

Therefore I = ∫_2^2.6 ∫_4^4.4 dx dy/(xy) ≅ 0.0250.
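Both results can be checked in MATLAB with the built-in adaptive integrator integral2 (a quick sketch):

>> f1 = @(x, y) 1./(x + y);
>> I1 = integral2(f1, 1, 2, 1, 2)          % about 0.3398; the trapezoidal estimate above is 0.3407
>> f2 = @(x, y) 1./(x.*y);
>> I2 = integral2(f2, 4, 4.4, 2, 2.6)      % about 0.0250, agreeing with Simpson's rule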
Exercise

Evaluate the double integral

I = ∫_1^2 ∫_0^1 [ 2xy / ( (1 + x^2)(1 + y^2) ) ] dx dy

using

a. The trapezoidal rule with h=k =0.25


b. The Simpson’s rule with h=k =0.25

Solution

a.
>> x=0:0.25:1;

>> y=1:0.25:2;

>> [X,Y]=meshgrid(x,y);

>> F = 2*X.*Y./((1 + X.^2).*(1 + Y.^2));   % element-wise operations; note that X and Y are capital letters

>> I=trapz(y,trapz(x,F,2))

I = 0.3123
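For part (b), a possible MATLAB sketch applies the composite Simpson weights (1, 4, 2, 4, 1) in each direction for h = k = 0.25:

>> x = 0:0.25:1; y = 1:0.25:2;
>> [X,Y] = meshgrid(x,y);
>> F = 2*X.*Y./((1 + X.^2).*(1 + Y.^2));
>> w = [1 4 2 4 1];                        % Simpson's 1/3 weights for four subintervals
>> h = 0.25; k = 0.25;
>> I = (h*k/9)*(w*F*w')

This gives I ≈ 0.3177, close to the exact value of the integral (about 0.3176).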

