Lesson 3 Math 311


Lesson 3: Interpolation and Curve Fitting

Interpolation – the process of computing values for a tabulated function at points not in the table.

Curve fitting – is the process of constructing a curve, or mathematical function, that has the best fit to a
series of data points, possibly subject to constraints. Curve fitting can involve
either interpolation, where an exact fit to the data is required, or smoothing, in which a
"smooth" function is constructed that approximately fits the data.

A. The Vandermonde Method

The Vandermonde method is the most straight-forward method which may be used to find an
interpolating polynomial. The substitution of the points into the desired polynomial yields a system
of linear equations in the coefficients of the polynomial which may then be solved using Gaussian
elimination or LU decomposition.

Suppose we have 3 points (2, 5), (3, 6), (7, 4) and we want to fit a quadratic polynomial through
these points. The general form of a quadratic polynomial is $p(x) = c_1 x^2 + c_2 x + c_3$. Thus, if we were
to simply evaluate p(x) at these three points, we get three equations:

$p(2) = 4c_1 + 2c_2 + c_3 = 5$
$p(3) = 9c_1 + 3c_2 + c_3 = 6$
$p(7) = 49c_1 + 7c_2 + c_3 = 4$

This, however, is a system of equations, which in matrix form reads:

$$\begin{bmatrix} 4 & 2 & 1 \\ 9 & 3 & 1 \\ 49 & 7 & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 5 \\ 6 \\ 4 \end{bmatrix}$$

Therefore, we may draw the obvious generalization that given n points (x1, y1), (x2, y2), ..., (xn, yn), we
can find a polynomial of degree n − 1 which passes through those points by doing the following:

1. Writing down the general polynomial of degree n − 1,

$$p(x) = c_1 x^{n-1} + c_2 x^{n-2} + \cdots + c_{n-1} x + c_n$$

2. Evaluating the polynomial at the points x1, ..., xn, and
3. Solving the resulting system of linear equations.

Rather than performing all of these operations, we will simply write down the problem in the form:

Vc = y
Where:
y = the vector of y values
c = the vector of coefficients,
V = the Vandermonde matrix
Vandermonde Matrix
- A matrix with the terms of a geometric progression in each row, i.e., an m × n matrix of the
form:

$$V = \begin{bmatrix} a_1^{n-1} & \cdots & a_1^2 & a_1 & 1 \\ a_2^{n-1} & \cdots & a_2^2 & a_2 & 1 \\ a_3^{n-1} & \cdots & a_3^2 & a_3 & 1 \\ \vdots & \ddots & \vdots & \vdots & \vdots \\ a_m^{n-1} & \cdots & a_m^2 & a_m & 1 \end{bmatrix}$$

Where a1, a2, ⋯ correspond to x1, x2, ⋯ respectively

Example 1
Find interpolating polynomial passing three points (1, -6), (2, 2), and (4, 12) using Vandermonde
matrix.

Example 2
Add another point (3, −10) to the data in Example 1.
Answer: $f(x) = 9x^3 - 64x^2 + 137x - 88$
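As a quick illustration, here is a minimal sketch of the Vandermonde approach (assuming NumPy is available) for the four points of Examples 1 and 2. `np.vander` builds the matrix with decreasing powers, matching the form of V above:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 3.0])    # data from Examples 1 and 2
y = np.array([-6.0, 2.0, 12.0, -10.0])

V = np.vander(x)                       # 4x4 Vandermonde matrix, decreasing powers
c = np.linalg.solve(V, y)              # coefficients, highest power first
print(c)                               # expect [9, -64, 137, -88]
```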

B. Lagrange Polynomial
In numerical analysis, Lagrange polynomials are used for polynomial interpolation. They are the simplest
way to exhibit the existence of an interpolating polynomial for unevenly spaced data.

The Lagrange interpolating polynomial is the polynomial f(x) of degree ≤ (n − 1) that passes through the
points P1(x1, y1), P2(x2, y2), P3(x3, y3), ..., Pn(xn, yn) and is given by:
$$f(x) = \sum_{j=1}^{n} f_j(x)$$

Where:

$$f_j(x) = y_j \prod_{\substack{k=1 \\ k \ne j}}^{n} \frac{x - x_k}{x_j - x_k}$$
Written explicitly:

$$f(x) = y_1 \frac{(x - x_2)(x - x_3) \cdots (x - x_n)}{(x_1 - x_2)(x_1 - x_3) \cdots (x_1 - x_n)} + y_2 \frac{(x - x_1)(x - x_3) \cdots (x - x_n)}{(x_2 - x_1)(x_2 - x_3) \cdots (x_2 - x_n)} + \cdots + y_n \frac{(x - x_1)(x - x_2) \cdots (x - x_{n-1})}{(x_n - x_1)(x_n - x_2) \cdots (x_n - x_{n-1})}$$

Example
Find the interpolating polynomial passing through the given points (1, −6), (2, 2), (4, 12), and
(3, −10).
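For illustration, here is a minimal sketch of the formula above (the helper name `lagrange_eval` is my own). It evaluates the interpolant at a query point rather than expanding the polynomial symbolically:

```python
def lagrange_eval(xs, ys, xq):
    """Evaluate the Lagrange interpolating polynomial at xq."""
    total = 0.0
    n = len(xs)
    for j in range(n):
        term = ys[j]                   # y_j times the product over k != j
        for k in range(n):
            if k != j:
                term *= (xq - xs[k]) / (xs[j] - xs[k])
        total += term
    return total

xs = [1.0, 2.0, 4.0, 3.0]
ys = [-6.0, 2.0, 12.0, -10.0]
print(lagrange_eval(xs, ys, 2.0))      # reproduces the data point: 2.0
```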
C. Newton’s Divided Difference Interpolating Polynomial
Newton’s divided difference interpolation formula is an interpolation technique used when the
spacing between the x-values is not necessarily uniform.

General Form: nth-order polynomial to fit (n+1) data points


$$f(x) = \sum_{j=0}^{n} b_j\, n_j(x), \quad \text{where } b_j = [y_0, y_1, \cdots, y_j] \text{ and } n_j(x) = \prod_{i=0}^{j-1} (x - x_i)$$

$$f(x) = b_0 + b_1 (x - x_0) + b_2 (x - x_0)(x - x_1) + \cdots + b_n (x - x_0)(x - x_1) \cdots (x - x_{n-1})$$

Where:
$b_0 = f(x_0)$
$b_1 = f[x_1, x_0]$
$b_2 = f[x_2, x_1, x_0]$
⋮
$b_n = f[x_n, x_{n-1}, \cdots, x_1, x_0]$

The bracket function is known as divided difference:

1st-order difference: $f[x_1, x_0] = \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}$

2nd-order difference: $f[x_2, x_1, x_0] = \dfrac{f[x_2, x_1] - f[x_1, x_0]}{x_2 - x_0}$

nth-order difference: $f[x_n, x_{n-1}, \cdots, x_1, x_0] = \dfrac{f[x_n, x_{n-1}, \cdots, x_2, x_1] - f[x_{n-1}, x_{n-2}, \cdots, x_1, x_0]}{x_n - x_0}$

Therefore:

$$f(x) = f(x_0) + (x - x_0)\, f[x_1, x_0] + (x - x_0)(x - x_1)\, f[x_2, x_1, x_0] + \cdots + (x - x_0)(x - x_1) \cdots (x - x_{n-1})\, f[x_n, x_{n-1}, \cdots, x_1, x_0]$$
Newton’s Divided Difference Table:

Where:
$$f[x_1, x_0] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}, \qquad f[x_2, x_1] = \frac{f(x_2) - f(x_1)}{x_2 - x_1}, \qquad f[x_3, x_2] = \frac{f(x_3) - f(x_2)}{x_3 - x_2}, \quad \text{etc.}$$

Example 1
Find an interpolating polynomial using the given set of points: (1, −6), (2, 2), (4, 12), and (3, −10).

Example 2
Find an interpolating polynomial using the given set of points:

x | 1.6 | 2 | 2.5 | 3.2 | 4 | 4.5
y | 2   | 8 | 14  | 15  | 8 | 2
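The divided-difference table maps naturally onto a short program. Below is a minimal sketch (function names are my own) that computes the coefficients $b_j$ in place and evaluates the Newton form by nested multiplication, using the data of Example 2:

```python
def divided_differences(xs, ys):
    """Return the Newton coefficients b_j = f[x_0, ..., x_j]."""
    n = len(xs)
    table = list(ys)                   # zeroth-order differences
    coeffs = [table[0]]
    for order in range(1, n):
        # update backwards so table[i] becomes the order-th difference
        for i in range(n - 1, order - 1, -1):
            table[i] = (table[i] - table[i - 1]) / (xs[i] - xs[i - order])
        coeffs.append(table[order])
    return coeffs

def newton_eval(xs, coeffs, xq):
    """Evaluate the Newton form at xq by nested multiplication."""
    result = coeffs[-1]
    for j in range(len(coeffs) - 2, -1, -1):
        result = result * (xq - xs[j]) + coeffs[j]
    return result

xs = [1.6, 2.0, 2.5, 3.2, 4.0, 4.5]
ys = [2.0, 8.0, 14.0, 15.0, 8.0, 2.0]
b = divided_differences(xs, ys)
print(newton_eval(xs, b, 2.0))         # reproduces the data point: 8.0
```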
D. Least Square Regression
If the data show a linear relationship between the x and y variables, then the line that best fits the
relationship is called the regression line.

Least squares regression line – the line that makes the vertical distances from the data points to the
line as small as possible. It is called “least squares” because the best line of fit is the
one that minimizes the sum of the squares of the errors.

Least squares method – a form of mathematical regression analysis used to determine the line of
best fit for a set of data, providing a visual demonstration of the relationship between the
data points. Each data point represents the relationship between a known independent
variable and an unknown dependent variable.

The goal of the least-squares method is to find a good estimate of the parameters of a function, f(x),
that fits a set of data. The least-squares method requires that the estimated function deviate as
little as possible from the data in the sense of the 2-norm.
General classification of least square method:
1. Linear
2. Non-linear

Further classification:
1. ordinary least squares (OLS)
2. weighted least squares (WLS)
3. alternating least squares (ALS)
4. partial least squares (PLS)

To fit a set of data best, the least-squares method minimizes the sum of squared residuals (also
called the Sum of Squared Errors, SSE):

$$S = \sum_{i=1}^{n} r_i^2$$

Where:

$r_i$ = the residual, the difference between the actual point and the regression line:

$$r_i = y_i - f(x_i)$$

$f(x_i)$ = the model function, i.e., the regression line


Linear Least Square Method:

Equation of the line: $y = mx + b$

For the least-squares regression line corresponding to n points:

$$m = \frac{n \sum (xy) - \sum x \sum y}{n \sum (x^2) - \left(\sum x\right)^2}, \qquad b = \frac{\sum y - m \sum x}{n}$$

Example:
The following are 8 data points showing the relationship between the number of fishermen and
the amount of fish (in thousand pounds) they catch per day. Determine the least-squares
regression line.

Number of Fishermen | Fish Caught
18 | 39
14 | 9
 9 | 9
10 | 7
 5 | 8
22 | 35
14 | 36
12 | 22
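A direct translation of the slope and intercept formulas above into code, applied to the example data (a minimal sketch; variable names are my own):

```python
xs = [18, 14, 9, 10, 5, 22, 14, 12]    # number of fishermen
ys = [39, 9, 9, 7, 8, 35, 36, 22]      # fish caught (thousand pounds)

n = len(xs)
sum_x = sum(xs)
sum_y = sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_x2 = sum(x * x for x in xs)

m = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - m * sum_x) / n
print(f"y = {m:.4f}x + {b:.4f}")       # roughly y = 2.0808x - 6.4255
```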
Lesson 4: Numerical Differentiation and Integration

Numerical differentiation is the process of finding the numerical value of a derivative of a given
function at a given point. In general, numerical differentiation is more difficult than numerical
integration. This is because while numerical integration requires only good continuity properties of the
function being integrated, numerical differentiation requires more complicated properties such as
Lipschitz classes.

There are many applications where derivatives need to be computed numerically. The simplest
approach simply uses the definition of the derivative:

$$f'(x) \approx \frac{f(x + h) - f(x)}{h}$$

for some small numerical value of h.
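As a tiny sketch (the function name is my own), this forward-difference approximation can be coded directly:

```python
import math

def forward_diff(f, x, h=1e-6):
    """Approximate f'(x) using the definition of the derivative."""
    return (f(x + h) - f(x)) / h

print(forward_diff(math.sin, 0.0))     # close to cos(0) = 1
```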

Numerical integration is the approximate computation of an integral using numerical techniques. The
numerical computation of an integral is sometimes called quadrature. Ueberhuber (1997, p. 71) uses the
word "quadrature" to mean numerical computation of a univariate integral, and "cubature" to mean
numerical computation of a multiple integral.

The most straightforward numerical integration technique uses the Newton-Cotes formulas (also called
quadrature formulas), which approximate a function tabulated at a sequence of regularly
spaced intervals by various degree polynomials. If the endpoints are tabulated, then the 2- and 3-point
formulas are called the trapezoidal rule and Simpson's rule, respectively. The 5-point formula is
called Boole's rule. A generalization of the trapezoidal rule is Romberg integration, which can yield
accurate results for many fewer function evaluations.

If the functions are known analytically instead of being tabulated at equally spaced intervals, the best
numerical method of integration is called Gaussian quadrature. By picking the abscissas at which to
evaluate the function, Gaussian quadrature produces the most accurate approximations possible.
However, given the speed of modern computers, the additional complication of the Gaussian
quadrature formalism often makes it less desirable than simply brute-force calculating twice as many
points on a regular grid (which also permits the already computed values of the function to be re-used).
An excellent reference for Gaussian quadrature is Hildebrand (1956).

Modern numerical integration methods based on information theory have been developed to simulate
information systems such as computer-controlled systems, communication systems, and control systems,
since in these cases the classical methods (which are based on approximation theory) are not as
efficient (Smith 1974).
A. Newton-Cotes Integration Formula
Newton-Cotes formulas are the most common numerical integration schemes. They are based on
the strategy of replacing a complicated function or tabulated data with an approximating function
that is easy to integrate:

$$I = \int_a^b f(x)\,dx \cong \int_a^b f_n(x)\,dx$$

Where $f_n(x)$ = a polynomial of the form:

$$f_n(x) = a_0 + a_1 x + \cdots + a_{n-1} x^{n-1} + a_n x^n$$

Where n = order of the polynomial

Fig a: 1st order polynomial (straight line)


Fig b: 2nd order polynomial (parabola)

Closed and open forms of the Newton-Cotes formulas are available. In the closed forms, the data
points at the beginning and end of the limits of integration are known. The open forms
have integration limits that extend beyond the range of the data.
Fig a: Closed form
Fig b: Open form

B. Trapezoidal Rule
The trapezoidal rule is the first of the Newton-Cotes closed integration formulas. It is a
technique for approximating the definite integral

$$\int_a^b f(x)\,dx$$

It corresponds to the case where the polynomial is of first-order:

$$I = \int_a^b f(x)\,dx \cong \int_a^b f_1(x)\,dx$$

From the linear interpolation formula:

$$f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}(x - x_0)$$

Then:

$$f_1(x) = f(a) + \frac{f(b) - f(a)}{b - a}(x - a)$$

Therefore:

$$I = \int_a^b \left[ f(a) + \frac{f(b) - f(a)}{b - a}(x - a) \right] dx$$

$$= \int_a^b \frac{f(a)(b - a) + [f(b) - f(a)](x - a)}{b - a}\, dx$$

$$= \int_a^b \left[ \frac{b\,f(a) - a\,f(b)}{b - a} + \frac{f(b) - f(a)}{b - a}\, x \right] dx$$

$$I = \left[ \frac{f(b) - f(a)}{b - a}\, \frac{x^2}{2} + \frac{b\,f(a) - a\,f(b)}{b - a}\, x \right]_a^b$$

$$= \frac{f(b) - f(a)}{b - a}\, \frac{b^2 - a^2}{2} + \frac{b\,f(a) - a\,f(b)}{b - a}\,(b - a)$$

$$= \frac{[f(b) - f(a)](b + a)}{2} + b\,f(a) - a\,f(b)$$

$$= \frac{b\,f(b) - b\,f(a) + a\,f(b) - a\,f(a) + 2b\,f(a) - 2a\,f(b)}{2}$$

$$= \frac{b\,f(b) - a\,f(a) + b\,f(a) - a\,f(b)}{2}$$

$$= \frac{(b - a)\,f(b) + (b - a)\,f(a)}{2}$$

$$\boxed{I = (b - a)\,\frac{f(a) + f(b)}{2}} \quad \text{→ Trapezoidal Rule}$$

Geometrically, the trapezoidal rule is equivalent to approximating the area of the trapezoid
under the straight line connecting f(a) and f(b), i.e.:

$$I \cong \text{width} \times \text{average height} \cong (b - a) \times \text{average height}$$
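The boxed formula is a one-liner in code. A minimal sketch (names are my own), applied to the polynomial used in the worked example later in this section:

```python
def trapezoid(f, a, b):
    """Single-application trapezoidal rule: I = (b - a)(f(a) + f(b))/2."""
    return (b - a) * (f(a) + f(b)) / 2.0

f = lambda x: 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5
print(trapezoid(f, 0.0, 0.8))          # crude one-segment estimate, about 0.1728
```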
Error of Trapezoidal Rule:

$$E_t = -\frac{1}{12}\, f''(\xi)\,(b - a)^3$$

Where:
$E_t$ = local truncation error
$\xi$ = a value somewhere in the interval from a to b
Multiple-Application Trapezoidal Rule:
One way to improve the accuracy of the trapezoidal rule is to divide the integration interval from
a to b into a number of segments and apply the method to each segment:

$$h = \frac{b - a}{n}$$

where n = desired number of segments

If a and b are designated x₀ and xₙ, then the total integral is:

$$I = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \cdots + \int_{x_{n-1}}^{x_n} f(x)\,dx$$

$$= h\,\frac{f(x_0) + f(x_1)}{2} + h\,\frac{f(x_1) + f(x_2)}{2} + \cdots + h\,\frac{f(x_{n-1}) + f(x_n)}{2}$$

This simplifies into the multiple-application trapezoidal rule:

$$I = \frac{h}{2} \left[ f(x_0) + 2 \sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]$$
General form: Multiple-Segment Trapezoidal Rule

General form: Approximate Error of Truncation

$$\bar{f}'' \cong \frac{\sum_{i=1}^{n} f''(\xi_i)}{n} \qquad \Rightarrow \qquad \sum_{i=1}^{n} f''(\xi_i) \cong n\,\bar{f}''$$

Substituting into $E_t = -\frac{1}{12} f''(\xi)(b - a)^3$, applied segment by segment with segment width $(b - a)/n$, therefore:

$$E_a = -\frac{(b - a)^3}{12 n^2}\, \bar{f}''$$

Where:

$$\bar{f}'' \cong \frac{\int_a^b f''(x)\,dx}{b - a}$$

is the average second derivative, each $\xi_i$ being a point located in segment i.

Example:
a. Use the two-segment trapezoidal rule to estimate the integral of

$$f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5$$

from a = 0 to b = 0.8.
b. Repeat using four segments.
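A minimal sketch of the composite rule (names are my own) that carries out parts (a) and (b):

```python
def trapezoid_multi(f, a, b, n):
    """Multiple-application trapezoidal rule with n equal segments."""
    h = (b - a) / n
    total = f(a) + f(b)                # endpoints weighted once
    for i in range(1, n):
        total += 2 * f(a + i * h)      # interior points weighted twice
    return (h / 2) * total

f = lambda x: 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5
print(trapezoid_multi(f, 0.0, 0.8, 2))   # two segments, about 1.0688
print(trapezoid_multi(f, 0.0, 0.8, 4))   # four segments, about 1.4848
```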

C. Simpson’s Rule
Another way to obtain a more accurate estimate of an integral is to use higher-order
polynomials to connect the points. The resulting formulas are known as Simpson’s rules.

C.1. Simpson’s 1/3 Rule


Results when a second-order interpolating polynomial is substituted into the Newton-Cotes
integration formula:

$$I = \int_a^b f(x)\,dx \cong \int_a^b f_2(x)\,dx$$

If a and b are designated as x₀ and x₂ and $f_2(x)$ is represented by a second-order Lagrange
polynomial, the integral becomes:

$$I = \int_{x_0}^{x_2} \left[ \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\, f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\, f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\, f(x_2) \right] dx$$
After integration and algebraic manipulation, the result is:

$$I = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] \quad \text{→ Simpson’s 1/3 Rule}$$

Since $h = \frac{b - a}{2}$, therefore:

$$I = \frac{b - a}{6} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right]$$

Where $a = x_0$, $b = x_2$, and $x_1$ = the point midway between a and b, which is:

$$x_1 = \frac{a + b}{2}$$

Simpson’s Truncation Error:

$$E_t = -\frac{1}{90}\, h^5 f^{(4)}(\xi)$$

Where $\xi$ lies somewhere in the interval from a to b.

With $h = \frac{b - a}{2}$, then:

$$E_t = -\frac{(b - a)^5}{2880}\, f^{(4)}(\xi)$$

Note: Simpson’s 1/3 rule is more accurate than the trapezoidal rule.
Multiple-Application Simpson’s 1/3 Rule
As with the trapezoidal rule, the accuracy of Simpson’s 1/3 rule can be improved by dividing the
integration interval into an even number of segments of equal width (i.e., an odd number of points):

$$I = \frac{h}{3} \left[ f(x_0) + 4 \sum_{i=1,3,5}^{n-1} f(x_i) + 2 \sum_{j=2,4,6}^{n-2} f(x_j) + f(x_n) \right], \qquad h = \frac{b - a}{n}$$

Note: Simpson’s multiple-segment formula is applicable only if there is an even number of segments.

Truncation error for the multiple-application Simpson’s 1/3 rule:

$$E_t = -\frac{(b - a)^5}{180\, n^4}\, \bar{f}^{(4)}$$
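A minimal sketch of the multiple-application 1/3 rule (names are my own); note the guard enforcing an even number of segments:

```python
def simpson13_multi(f, a, b, n):
    """Multiple-application Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of segments")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # odd-indexed interior points get weight 4, even-indexed get weight 2
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return (h / 3) * total
```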

C.2. Simpson’s 3/8 Rule

Single application (four points, three segments of width $h = \frac{b - a}{3}$):

$$I = \frac{3h}{8} \left[ f(x_0) + 3 f(x_1) + 3 f(x_2) + f(x_3) \right] \quad \text{→ Simpson’s 3/8 Rule}$$

Multiple segment:

$$I = \frac{3h}{8} \left[ f(x_0) + 3 \sum_{i \ne 3,6,\ldots} f(x_i) + 2 \sum_{i = 3,6,\ldots} f(x_i) + f(x_n) \right]$$

where the first sum runs over the interior points whose index is not a multiple of 3, and the second
over the interior points whose index is a multiple of 3.
Simpson’s 1/3 vs. 3/8 Rule:
- The 1/3 rule uses three points and two segments
- The 3/8 rule uses four points and three segments

Example:
Determine the approximate integral using Simpson’s 1/3 and 3/8 rules of:

$$f(x) = 0.2 + 25x - 200x^2 + 675x^3 - 900x^4 + 400x^5$$

from a = 0 to b = 0.8. Given that the exact integral is 1.640533, compare the actual error with
the truncation-error estimate.
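A minimal sketch comparing the two single-application rules on this integral (function names are my own; the exact value 1.640533 is taken from the problem statement):

```python
f = lambda x: 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5

def simpson13(f, a, b):
    """Single-application Simpson's 1/3 rule."""
    h = (b - a) / 2
    return (h / 3) * (f(a) + 4*f(a + h) + f(b))

def simpson38(f, a, b):
    """Single-application Simpson's 3/8 rule."""
    h = (b - a) / 3
    return (3*h/8) * (f(a) + 3*f(a + h) + 3*f(a + 2*h) + f(b))

exact = 1.640533
i13 = simpson13(f, 0.0, 0.8)           # about 1.367467
i38 = simpson38(f, 0.0, 0.8)           # about 1.519170
print(i13, exact - i13)                # actual error of the 1/3 rule
print(i38, exact - i38)                # actual error of the 3/8 rule
```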
