
Numerical Methods

(CENG-3081)

Instructor: Mastewal Shumet (Ph.D. Candidate)


Lecturer, School of Civil & Environmental Engineering.
Member of the Road and Transport Engineering Chair
Addis Ababa Institute of Technology
Addis Ababa University
masteshumet@gmail.com
mastewal.shumet@aait.edu.et
mastewal.shumet@aau.edu.et
Numerical Methods
(CENG-3081)
Chapter 5
CURVE FITTING

Introduction To Curve Fitting
• Curve fitting is a fundamental concept in
numerical methods where a curve is constructed
to best fit a set of data points.
• The goal of curve fitting is to create a
mathematical model that approximates the
relationship between the variables in the data.
Data points:
• These are the given set of points (𝑥𝑖 , 𝑦𝑖 ), where 𝑥𝑖 is the independent variable and 𝑦𝑖 is the dependent variable. The task is to find a curve that passes through, or comes close to, these points.
CURVE FITTING
• This chapter describes techniques for fitting curves to discrete data in order to obtain intermediate estimates.

• There are two general approaches to curve fitting:


– Data exhibit a significant degree of scatter: the strategy is to derive a single curve that represents the general trend of the data.
– Data are very precise: the strategy is to pass a curve, or a series of curves, through each of the points.

• In engineering two types of applications are encountered:


– Trend analysis: predicting values of the dependent variable; this may include extrapolation beyond the data points or interpolation between them.
– Hypothesis testing: comparing an existing mathematical model with measured data.

Mathematical Background
Simple Statistics
• In the course of an engineering study, if several measurements are made of a particular quantity, additional insight can be gained by summarizing the data in one or more well-chosen statistics that convey as much information as possible about specific characteristics of the data set.
• These descriptive statistics are most often selected to
represent
– The location of the center of the distribution of the data,
– The degree of spread of the data.

• Arithmetic mean. The sum of the individual data
points (yi) divided by the number of points (n).

$$\bar{y} = \frac{\sum y_i}{n}, \qquad i = 1, \ldots, n$$
• Standard deviation. The most common measure of spread for a sample.

$$s_y = \sqrt{\frac{S_t}{n-1}} \qquad \text{or} \qquad s_y^2 = \frac{\sum y_i^2 - \left(\sum y_i\right)^2 / n}{n-1}$$
where
$$S_t = \sum (y_i - \bar{y})^2$$
• Variance. Representation of spread by the square of
the standard deviation.

$$s_y^2 = \frac{\sum (y_i - \bar{y})^2}{n-1}$$
where n − 1 is the number of degrees of freedom.

• Coefficient of variation. Quantifies the spread of the data relative to its mean:
$$\text{c.v.} = \frac{s_y}{\bar{y}} \times 100\%$$

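As a quick numerical check of these definitions, here is a minimal Python sketch (the sample values are invented for illustration, and NumPy is assumed to be available):

```python
# Minimal sketch of the descriptive statistics above.
# NumPy is assumed to be available; the sample values are invented.
import numpy as np

y = np.array([6.395, 6.435, 6.485, 6.495, 6.505, 6.515, 6.555, 6.625])

n = len(y)
ybar = y.sum() / n                # arithmetic mean
St = ((y - ybar) ** 2).sum()      # total sum of squares about the mean
sy = np.sqrt(St / (n - 1))        # standard deviation
var = St / (n - 1)                # variance
cv = sy / ybar * 100              # coefficient of variation, in percent

print(f"mean = {ybar:.4f}, s_y = {sy:.4f}, s_y^2 = {var:.6f}, c.v. = {cv:.2f}%")
```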
Least Squares Regression

Linear Regression
• Fitting a straight line to a set of paired
observations: (x1, y1), (x2, y2),…,(xn, yn).
$$y = a_0 + a_1 x + e$$
where a1 is the slope, a0 the intercept, and e the error (residual) between the model and the observations.

Criteria for a “Best” Fit
• Minimize the sum of the residual errors for all
available data:
$$\sum_{i=1}^{n} e_i = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)$$
where n = total number of points.


• However, this is an inadequate criterion, as is the sum of the absolute values:
$$\sum_{i=1}^{n} \left|e_i\right| = \sum_{i=1}^{n} \left|y_i - a_0 - a_1 x_i\right|$$

• Best strategy is to minimize the sum of the squares of
the residuals between the measured y and the y
calculated with the linear model:
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left(y_{i,\text{measured}} - y_{i,\text{model}}\right)^2 = \sum_{i=1}^{n} \left(y_i - a_0 - a_1 x_i\right)^2$$

• Yields a unique line for a given set of data.

Least-Squares Fit of a Straight Line
Setting the partial derivatives of Sr with respect to each coefficient equal to zero:
$$\frac{\partial S_r}{\partial a_0} = -2 \sum \left(y_i - a_0 - a_1 x_i\right) = 0$$
$$\frac{\partial S_r}{\partial a_1} = -2 \sum \left[\left(y_i - a_0 - a_1 x_i\right) x_i\right] = 0$$
which gives
$$0 = \sum y_i - \sum a_0 - \sum a_1 x_i$$
$$0 = \sum x_i y_i - \sum a_0 x_i - \sum a_1 x_i^2$$
Noting that $\sum a_0 = n a_0$, these become the normal equations, which can be solved simultaneously:
$$n a_0 + \left(\sum x_i\right) a_1 = \sum y_i$$
$$\left(\sum x_i\right) a_0 + \left(\sum x_i^2\right) a_1 = \sum x_i y_i$$
Solving for the slope and intercept:
$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}$$
$$a_0 = \bar{y} - a_1 \bar{x}$$
where $\bar{x}$ and $\bar{y}$ are the mean values of x and y.
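A minimal Python sketch of the slope and intercept formulas above (the (x, y) data are invented for illustration; NumPy assumed):

```python
# Sketch of the least-squares slope/intercept formulas above.
# NumPy is assumed; the (x, y) data are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])

n = len(x)
a1 = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x**2).sum() - x.sum() ** 2)
a0 = y.mean() - a1 * x.mean()     # a0 = ybar - a1 * xbar

print(f"best-fit line: y = {a0:.4f} + {a1:.4f} x")
```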
“Goodness” of our fit
If
• St is the total sum of the squares around the mean for the dependent variable y, and
• Sr is the sum of the squares of the residuals around the regression line,
then St − Sr quantifies the improvement, or error reduction, obtained by describing the data with a straight line rather than as an average value.
$$r^2 = \frac{S_t - S_r}{S_t}$$
where r² is the coefficient of determination and r = √(r²) is the correlation coefficient.
• For a perfect fit, Sr = 0 and r = r² = 1, signifying that the line explains 100 percent of the variability of the data.
• For r = r² = 0, Sr = St and the fit represents no improvement.

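A short sketch of these goodness-of-fit measures, reusing the same invented data and straight-line fit as the previous sketch (NumPy assumed):

```python
# Sketch of St, Sr and r^2 for the straight-line fit
# (same invented data and formulas as the previous sketch; NumPy assumed).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([0.5, 2.5, 2.0, 4.0, 3.5, 6.0, 5.5])

n = len(x)
a1 = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x**2).sum() - x.sum() ** 2)
a0 = y.mean() - a1 * x.mean()

St = ((y - y.mean()) ** 2).sum()      # spread about the mean
Sr = ((y - a0 - a1 * x) ** 2).sum()   # spread about the regression line
r2 = (St - Sr) / St                   # coefficient of determination

print(f"St = {St:.4f}, Sr = {Sr:.4f}, r^2 = {r2:.4f}, r = {np.sqrt(r2):.4f}")
```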
Polynomial Regression
• Some engineering data is poorly represented
by a straight line.
• For these cases, a curve is better suited to the data.
• The least squares method can readily be
extended to fit the data to higher-order
polynomials.

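As an illustration of extending least squares to a higher-order polynomial, a brief sketch using NumPy's polyfit on invented data (polyfit minimizes the same Sr as the normal-equations approach for the chosen order):

```python
# Sketch of fitting a 2nd-order polynomial by least squares.
# NumPy assumed; the data are invented. np.polyfit minimizes the same Sr
# as the normal-equations approach for the chosen polynomial order.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])

a2, a1, a0 = np.polyfit(x, y, deg=2)   # coefficients, highest order first
print(f"y = {a0:.4f} + {a1:.4f} x + {a2:.4f} x^2")
```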
General Linear Least Squares
$$y = a_0 z_0 + a_1 z_1 + a_2 z_2 + \cdots + a_m z_m + e$$
where $z_0, z_1, \ldots, z_m$ are m + 1 basis functions. In matrix form:
$$\{Y\} = [Z]\{A\} + \{E\}$$
• [Z] − matrix of the calculated values of the basis functions at the measured values of the independent variable
• {Y} − observed values of the dependent variable
• {A} − unknown coefficients
• {E} − residuals
The sum of the squares of the residuals,
$$S_r = \sum_{i=1}^{n} \left(y_i - \sum_{j=0}^{m} a_j z_{ji}\right)^2,$$
is minimized by taking its partial derivative with respect to each of the coefficients and setting the resulting equation equal to zero.
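A sketch of general linear least squares with an arbitrary choice of basis functions (1, x, and sin x here, chosen only for illustration; NumPy assumed). np.linalg.lstsq returns the least-squares solution of [Z]{A} ≈ {Y}, which is equivalent to solving the normal equations:

```python
# Sketch of general linear least squares with user-chosen basis functions
# (here 1, x and sin x, picked purely for illustration; NumPy assumed).
import numpy as np

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([1.1, 2.2, 3.4, 3.9, 4.1, 4.0, 3.6])

# Columns of [Z]: each basis function evaluated at the measured x values
Z = np.column_stack([np.ones_like(x), x, np.sin(x)])

# Least-squares solution of [Z]{A} ~ {Y}, equivalent to the normal equations
A, *_ = np.linalg.lstsq(Z, y, rcond=None)
print("coefficients a0, a1, a2:", A)
```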
Linearization of Nonlinear Relationships
• Linear regression is based on a straight-line model; many nonlinear relationships can be transformed into a linear form so that linear regression can still be applied.
• For example, the power model y = a·x^b is linearized by taking the logarithm of both sides, log y = log a + b log x, so a plot of log y versus log x gives a straight line with slope b and intercept log a.
Linearization of a Power Equation (Example)

x 1 2 3 4 5
y 0.5 1.7 3.4 5.7 8.4

Solution
x y
1 0.5 0.000 -0.301 0.000 0.000 0.501
2 1.7 0.301 0.230 0.091 0.069 1.687
3 3.4 0.477 0.531 0.228 0.254 3.432
4 5.7 0.602 0.756 0.362 0.455 5.681
5 8.4 0.699 0.924 0.489 0.646 8.398
Sum 2.079 2.141 1.169 1.424 0.501

-0.300 1.752
0.501
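A short sketch that reproduces the tabulated values above by regressing log y on log x (NumPy assumed):

```python
# Sketch reproducing the table above: fit y = a * x**b by regressing
# log10(y) on log10(x) (NumPy assumed).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.5, 1.7, 3.4, 5.7, 8.4])

X, Y = np.log10(x), np.log10(y)
n = len(x)
b = (n * (X * Y).sum() - X.sum() * Y.sum()) / (n * (X**2).sum() - X.sum() ** 2)
log_a = Y.mean() - b * X.mean()
a = 10 ** log_a

print(f"slope b = {b:.3f}, intercept log a = {log_a:.3f}, a = {a:.3f}")
print("fitted y:", np.round(a * x**b, 3))   # compare with the last column
```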
MULTIPLE LINEAR REGRESSION
• A useful extension of linear regression is the case where y is a linear function of two or more independent variables, e.g. y = a₀ + a₁x₁ + a₂x₂ + e for two variables x₁ and x₂.
Determine the Multiple Linear Equation Constants
𝒚 = 𝒂𝒐 + 𝒂𝟏 𝒙𝟏 + 𝒂𝟐 𝒙𝟐

𝑥1 𝑥2 y
0 0 5
2 1 10
2.5 2 9
1 3 0
4 6 3
7 2 27

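A sketch for this example, assuming NumPy: build the matrix whose columns are 1, x1, and x2 and solve the least-squares problem for a0, a1, a2 (for this particular data the fit is exact):

```python
# Sketch for the example above (NumPy assumed): columns 1, x1, x2 and a
# least-squares solve for a0, a1, a2.
import numpy as np

x1 = np.array([0.0, 2.0, 2.5, 1.0, 4.0, 7.0])
x2 = np.array([0.0, 1.0, 2.0, 3.0, 6.0, 2.0])
y  = np.array([5.0, 10.0, 9.0, 0.0, 3.0, 27.0])

Z = np.column_stack([np.ones_like(x1), x1, x2])
(a0, a1, a2), *_ = np.linalg.lstsq(Z, y, rcond=None)

print(f"y = {a0:.3f} + {a1:.3f} x1 + {a2:.3f} x2")   # recovers 5 + 4 x1 - 3 x2 (exact fit)
```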
Interpolation
• Used to estimate intermediate values between precise
data points.
• The most common method used is the polynomial
interpolation.
• For n+1 data points, there is a unique polynomial of
order n that passes through all the points.
f(x) = a₀ + a₁x + a₂x² + … + aₙxⁿ
• Newton and Lagrange interpolating polynomials are the
most widely used methods

Newton's Divided Difference Interpolating Polynomials
• Connecting two data points with a straight line gives linear interpolation; by similar triangles,
$$\frac{f_1(x) - f(x_0)}{x - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$
which rearranges to
$$f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0)$$
This can be written as f₁(x) = b₀ + b₁(x − x₀), with
$$b_0 = f(x_0), \qquad b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$

Cont’d
• The smaller the interval between the data points, the better the approximation.
• Introducing curvature into the estimate generally improves on the straight-line approximation.
Second-order (quadratic) interpolating polynomial:
f₂(x) = b₀ + b₁(x − x₀) + b₂(x − x₀)(x − x₁)
• b₀ can be found by setting x = x₀
• b₁ can be found by setting x = x₁
• b₂ can be found by setting x = x₂

$$b_0 = f(x_0), \qquad b_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$$
$$b_2 = \frac{\dfrac{f(x_2) - f(x_1)}{x_2 - x_1} - \dfrac{f(x_1) - f(x_0)}{x_1 - x_0}}{x_2 - x_0}$$
General form of Newton's Interpolating Polynomials
• This can be generalized to fit an nth-order polynomial to n + 1 data points. The nth-order polynomial is
$$f_n(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + \cdots + b_n(x - x_0)(x - x_1)\cdots(x - x_{n-1})$$
• The data points can be used to evaluate the coefficients b₀, b₁, b₂, …, bₙ:
$$b_0 = f(x_0)$$
$$b_1 = f[x_1, x_0]$$
$$b_2 = f[x_2, x_1, x_0]$$
$$\vdots$$
$$b_n = f[x_n, x_{n-1}, \ldots, x_1, x_0]$$
Cont’d
• The first finite divided difference is given by
$$f[x_i, x_j] = \frac{f(x_i) - f(x_j)}{x_i - x_j}$$
• The second finite divided difference is
$$f[x_i, x_j, x_k] = \frac{f[x_i, x_j] - f[x_j, x_k]}{x_i - x_k}$$
• Similarly, the nth finite divided difference is
$$f[x_n, x_{n-1}, \ldots, x_1, x_0] = \frac{f[x_n, x_{n-1}, \ldots, x_1] - f[x_{n-1}, x_{n-2}, \ldots, x_0]}{x_n - x_0}$$

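A compact Python sketch of Newton's divided-difference interpolation: the first function builds the coefficients b₀, b₁, …, bₙ from a divided-difference table, and the second evaluates the nested polynomial. The data points in the usage example are placeholders; used with the bracketing points selected in the rocket examples that follow, the same routine reproduces the estimates worked out there.

```python
# Compact sketch of Newton's divided-difference interpolation (pure Python).
# newton_coefficients builds b0, b1, ..., bn from a divided-difference table;
# newton_eval evaluates the nested polynomial. The usage data are placeholders.
def newton_coefficients(x, y):
    n = len(x)
    b = list(y)                              # b[j] starts as f(x_j)
    for j in range(1, n):                    # build j-th divided differences
        for i in range(n - 1, j - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (x[i] - x[i - j])
    return b                                 # [b0, b1, ..., bn]

def newton_eval(x, b, xq):
    result, product = b[0], 1.0
    for i in range(1, len(b)):
        product *= (xq - x[i - 1])
        result += b[i] * product
    return result

# Placeholder usage: quadratic through (1,1), (2,3), (4,11), evaluated at x = 3
xs, ys = [1.0, 2.0, 4.0], [1.0, 3.0, 11.0]
coeffs = newton_coefficients(xs, ys)
print(coeffs, newton_eval(xs, coeffs, 3.0))
```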
Example (First Order Polynomial)
• The upward velocity of a rocket is given as a function of time in the table below.

t (s)    v (m/s)
0        0
10       227.04
15       362.78
20       517.35
22.5     602.67
30       901.67

• Determine the value of the velocity at t = 16 seconds using first-order polynomial interpolation by Newton's divided-difference polynomial method.
Solution
• For linear interpolation, the velocity is given by
v(t) = b₀ + b₁(t − t₀)
• Since we want to find the velocity at 𝑡 = 16 , and we are using a
first order polynomial, we need to choose the two data points that
are closest to 𝑡 = 16 that also bracket 𝑡 = 16 to evaluate it.
• The two points are 𝑡 = 15 and 𝑡 = 20.
• Then
𝑡0 = 15, 𝑣 𝑡0 = 362.78
𝑡1 = 20, 𝑣 𝑡1 = 517.35
This gives
$$b_0 = v(t_0) = 362.78$$
$$b_1 = \frac{v(t_1) - v(t_0)}{t_1 - t_0} = \frac{517.35 - 362.78}{20 - 15} = 30.914$$
so
$$v(16) = 362.78 + 30.914\,(16 - 15) = 393.69 \ \text{m/s}$$
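A quick arithmetic check of this first-order result in plain Python (values taken from the table):

```python
# Quick check of the first-order result (plain Python, values from the table)
t0, v0 = 15.0, 362.78
t1, v1 = 20.0, 517.35

b0 = v0
b1 = (v1 - v0) / (t1 - t0)        # 30.914
v16 = b0 + b1 * (16.0 - t0)       # 393.69 m/s
print(b1, v16)
```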
Example (Quadratic Polynomial)
• The upward velocity of a rocket is given as a function of time in the table below.

t (s)    v (m/s)
0        0
10       227.04
15       362.78
20       517.35
22.5     602.67
30       901.67

• Determine the value of the velocity at t = 16 seconds using second-order polynomial interpolation by Newton's divided-difference polynomial method.
Solution
• For Quadratic interpolation, the velocity is given by
v(t) = b₀ + b₁(t − t₀) + b₂(t − t₀)(t − t₁)
• Since we want to find the velocity at 𝑡 = 16 , and we are using a
second order polynomial, we need to choose the three data points
that are closest to 𝑡 = 16 that also bracket 𝑡 = 16 to evaluate it.
• The three points are 𝑡 = 10, t = 15 and 𝑡 = 20.
• Then
𝑡0 = 10, 𝑣 𝑡0 = 227.04
𝑡1 = 15, 𝑣 𝑡1 = 362.78
𝑡2 = 20, 𝑣 𝑡2 = 517.35
This gives
$$b_0 = v(t_0) = 227.04$$
$$b_1 = \frac{v(t_1) - v(t_0)}{t_1 - t_0} = \frac{362.78 - 227.04}{15 - 10} = 27.148$$
Solution Cont’d
v(t) = b₀ + b₁(t − t₀) + b₂(t − t₀)(t − t₁), with
t₀ = 10, v(t₀) = 227.04
t₁ = 15, v(t₁) = 362.78
t₂ = 20, v(t₂) = 517.35
This gives
$$b_2 = \frac{\dfrac{v(t_2) - v(t_1)}{t_2 - t_1} - \dfrac{v(t_1) - v(t_0)}{t_1 - t_0}}{t_2 - t_0} = \frac{\dfrac{517.35 - 362.78}{20 - 15} - \dfrac{362.78 - 227.04}{15 - 10}}{20 - 10} = 0.37660$$
so
$$v(t) = 227.04 + 27.148\,(t - t_0) + 0.3766\,(t - t_0)(t - t_1)$$
$$v(16) = 227.04 + 27.148\,(16 - 10) + 0.3766\,(16 - 10)(16 - 15) = 392.19 \ \text{m/s}$$
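And a quick check of the quadratic result in plain Python (values from the table):

```python
# Quick check of the quadratic result (plain Python, values from the table)
t0, v0 = 10.0, 227.04
t1, v1 = 15.0, 362.78
t2, v2 = 20.0, 517.35

b0 = v0
b1 = (v1 - v0) / (t1 - t0)                      # 27.148
b2 = ((v2 - v1) / (t2 - t1) - b1) / (t2 - t0)   # 0.37660
v16 = b0 + b1 * (16 - t0) + b2 * (16 - t0) * (16 - t1)
print(b2, v16)                                  # ~392.19 m/s
```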
Reading Assignment
• Errors in Newton's interpolating polynomials
• Lagrange interpolating polynomials
• Spline interpolation (linear, quadratic & cubic)

