The Art of Polynomial Interpolation


STUART MURPHY
The Art of Polynomial Interpolation by Stuart Murphy is licensed under a Creative
Commons Attribution-NonCommercial 4.0 International License, except where
otherwise noted.
Contents

Introduction
Techniques
Chapter One - Elimination (Substitution) Interpolation
Chapter One – Practice Exercises
Chapter Two - Newton's Divided Difference Interpolation
Chapter Two – Practice Exercises
Chapter Three - Quadratic Spline Interpolation
Chapter Three - Practice Exercises
Chapter Four - Least Squares Regression
Chapter Four – Practice Exercise
Chapter Five - Measuring the Least Squares Fit/Exponential Least Squares Regression
Chapter Five - Practice Exercise
Chapter Six - Approximation with Taylor Series
Chapter Six – Practice Exercise
Chapter Seven - Taylor Series Remainder Test
Chapter Seven - Practice Exercise
Solutions to Selected Practice Exercises
Acknowledgements
About the Author
Versioning History


Introduction
The inspiration for this text grew out of a simple question that
emerged over a number of years of teaching math to Middle School,
High School and College students.
Practically speaking, what is the origin of a particular
polynomial?
So much time is spent analyzing, factoring, simplifying and
graphing polynomials that it is easy to lose sight of the fact that
polynomials have a wealth of practical uses. Exploring the
techniques of interpolating data allows us to view the development
and birth of a polynomial. This text is focused on laying a foundation
for understanding and applying several common forms of
polynomial interpolation. The principal goals of the text are:

1. Break down the process of developing polynomials to
demonstrate and give the student a feel for the process and
meaning of developing estimates of the trend(s) a collection of
data may represent.
2. Introduce basic matrix algebra to assist students with
understanding the process without getting bogged down in
purely manual calculations. Some manual calculations have
been included, however, to assist with understanding the
concept.
3. Assist students in building a basic foundation allowing them to
add additional techniques, of which there are many, not
covered in this text.

What this text is not:


It is not a comprehensive survey of interpolation techniques.
The techniques presented are ones the author believes will
provide a basic understanding of polynomial interpolation that
students can build upon. There are many flavors and sub-flavors of
interpolation and I encourage students who are interested to check
them out.
It is not a lesson in using interpolation apps:
Quite the opposite. By working through the calculations, the
student is better equipped to understand how and why these
techniques work.
What is polynomial interpolation?
We experience information in discrete ways.
Typically, it comes from measurements or observations. However,
what we often want to do is look at a continuous process the data
represents; all at once or at least at any point we choose. While
we cannot represent a continuous process with a single number
we can do so with an equation. Graphically, this equation could represent
a single point (not usually that interesting), a straight line (degree
one polynomial), a curved line (degree two polynomial) that we often
call a quadratic equation or parabola, or some higher degree polynomial
that, graphically, often begins to look like a repeating wave.
When dealing with data the specific numbers always represent
a snapshot. For example, if we measure rainfall and wind speed each
day for a year, we have a collection of data points that compare
rainfall to wind speed. We might ask if there is a relationship
between wind speed and rainfall. For example, hurricane force
winds are usually accompanied by heavy rainfall. It would be nice
to develop an equation that can predict rainfall when high winds
are expected. Normally someone analyzing this data would plot the
points on the x, y coordinate plane. In this text, the sample data used
to illustrate the various interpolation methods will be plotted in this
way.
Can the math stand alone? Most certainly not. The challenge
for someone utilizing interpolation techniques is to apply expertise
and experience to determine the most appropriate polynomial
structure. In other words, is the model most likely to accurately (or
at least reasonably) produce useful estimated values? This is what I
mean by the “Art” of polynomial interpolation.
Interpolation uses a known set of independent and dependent

values to estimate other dependent values, typically along a
continuous line represented by a polynomial. Technically if you use
the model to identify additional data points outside of the range of
the given points this is known as extrapolation. Our focus will be on
interpolation within the given range.
Adaptations of the techniques we explore have been used in pre-
computer times to generate tables of trig or log values used in
applications such as navigation. Nowadays they are adapted for use
by computers and calculators and they are an important part of the
tool kit researchers use to predict future events such as emerging
storm tracks, climate change, political elections, changing
demographics, spread of disease, and so forth.
We will explore five Interpolation techniques: Elimination
(Substitution), Newton’s Divided Difference, Splines, Least Squares
and Taylor Series.

Techniques

A Brief Explanation of the Techniques Presented in This Text

A) Elimination (Substitution) (or solving a linear system of n equations with n unknowns)

Essentially this technique utilizes a process known to high school
Algebra One students: elimination (or substitution). This allows for
solving for n unknown variables in a set of n equations.

B) Newton’s Divided Difference

Newton’s Divided Difference interpolation has many applications.


Historically it and similar techniques have been used to develop
trigonometric and logarithmic tables.
An important advantage is that if you start out with a handful of
known points plotted on a coordinate plane you can decide on an
appropriate degree polynomial that would be representative of the
general trend. A benefit is that any new given points can easily be
added one at a time thus increasing the degree of the polynomial for
each new point added, without having to start over.

C) Splines

Spline interpolation is a great technique to employ if there are
certain discrete points that modify the nature of the trend. For
example, the trajectory of a rocket launch could be broken into
segments: Launch to Stage One separation, interval to Stage Two
separation, a major scheduled course adjustment and so on. Each
of the resulting intervals could be represented by a separate
polynomial. Spline interpolation creates just such separate
polynomials while at the same time recognizing the continuity
inherent in the event and building that into the resulting set of
equations that collectively represent the event. A variation of this
would be a single spline developed in a sub-interval of the domain
that is of particular interest.

D) Least Squares Regression

Polynomial least squares regression is useful for fitting a
polynomial, such as a quadratic equation, to many data points,
ensuring that each point influences the result in such
a manner that the resulting polynomial is considered a best fit for
that set of data.

E) Taylor Series Polynomial

A technique that employs the Taylor Series to develop a
polynomial that approximates the actual function at and near a
given domain value. It does not require a set of data points. The
major limitation is that it works only for a limited class of functions.

Note there are plenty of applications that will provide the desired
results very quickly. However, this textbook is meant to assist
students with an understanding of the computations and reasons
for them. Included are the manual calculations with explanation as
well as basic Matrix commands that students can use to mirror the
manual calculations.
Let’s look at a simple example.
Assumption: The faster a car is driven the lower the fuel
efficiency.

Sample Vehicle Fuel Efficiency Measurements

X (Miles per Hour) Y (Miles per Gallon)


MPH MPG
45 43
55 42
65 38

75 32

Figure 1 – Comparing Linear to Quadratic Interpolation Methods


The above plot suggests two likely scenarios. The question is:
Which more accurately represents what is really happening?
Linear: Pick two points that seem reasonable and draw a straight
line (red) through them.
Quadratic: Someone else looking at the data might conclude the
curved line (blue) is more reasonable and accurate.
Visually we would conclude that the quadratic is mathematically

a better fit because the curve is significantly closer to the given
data points. However, it is important to remember that while this is
true, an automotive expert applying expertise and experience may
conclude that in fact the linear interpolation is more meaningful
or that more data points are needed. We want to keep in mind
that the “Art” component is what has to be applied to determine
what degree polynomial and which technique will provide the best
approximation.

Chapter One - Elimination
(Substitution) Interpolation
A common method for solving the resulting system of equations
is using linear algebra and matrix math. However, neither are
necessary to illustrate this technique and apply it to a practical
problem. We will use elimination to solve the example below. While
I think it is important students experience how basic algebra works
for interpolation, they will quickly see that unless the numbers
are small and simple this particular technique quickly becomes
unwieldy for large values generated during the process.
For example:

Sample Vehicle Fuel Efficiency Measurements

X (Miles per Hour) Y (Miles per Gallon)


MPH MPG
45 43

55 42
65 38
75 32

Apply expertise and experience to create a polynomial that will
reasonably predict the fuel efficiency of the particular vehicle used
to gather the above data.
Step one: Deciding that a quadratic equation looks like the best
fit, we select the first, second and fourth points to construct a
second degree (quadratic) polynomial.
Step Two: Even though the result will be a quadratic equation we
are able to use straightforward linear techniques of elimination and
substitution. The reason for this is that we are not trying to find x
and y. The three points we selected already give us those. Instead,
we are trying to create the quadratic in standard form, y = ax² + bx + c,
by solving for the unknown constants a, b and c.
Step Three: Let's create three quadratic equations with the same
three unknowns a, b, c, replacing x and y in each with the actual data
point values.

Eq. one: (45, 43) ———–> 2025a + 45b + c = 43

Eq. two: (55, 42) ———–> 3025a + 55b + c = 42

Eq. three: (75, 32) ———–> 5625a + 75b + c = 32

Step Four: The elimination process:

45(Eq. two) – 55(Eq. one):
[136125a + 2475b + 45c = 1890] – [111375a + 2475b + 55c = 2365]
Eq. four: 24750a - 10c = -475     b is eliminated

55(Eq. three) – 75(Eq. two):
[309375a + 4125b + 55c = 1760] – [226875a + 4125b + 75c = 3150]
Eq. five: 82500a - 20c = -1390     b is eliminated

Conduct elimination on the resulting two equations with two
unknowns to eliminate c.

2(Eq. four) – Eq. five:
[49500a - 20c = -950] – [82500a - 20c = -1390]
______________________
Eq. six: -33000a = 440     c is eliminated

Solving gives a = -440/33000 ≈ -0.0133.

Plugging the resulting value of a into Eq. four allows us to solve for c:
24750(-0.0133...) - 10c = -475, so c = 14.5.

Step Five: Substitute a and c into any of the original equations
to find b:
2025(-0.0133...) + 45b + 14.5 = 43, so b ≈ 1.2333.

Our interpolated polynomial is:

y = -0.0133x² + 1.2333x + 14.5

For students looking for a less manual process here is the setup
using matrix math to run the calculations in a spreadsheet.



Figure 1.1 The Matrix Math formula

Figure 1.2 – Setup of Solution in Matrix Notation
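For readers who prefer a scripted alternative to the spreadsheet matrix functions, the following is a minimal sketch in Python with NumPy (my own illustration, not part of the original text) that solves the same 3-by-3 system for a, b and c:

import numpy as np

# Coefficient matrix: one row of [x^2, x, 1] for each chosen speed x = 45, 55, 75
A = np.array([
    [45**2, 45, 1],
    [55**2, 55, 1],
    [75**2, 75, 1],
], dtype=float)

# Observed MPG values for those speeds
y = np.array([43, 42, 32], dtype=float)

# Solve A @ [a, b, c] = y, the same system the elimination steps solved by hand
a, b, c = np.linalg.solve(A, y)
print(a, b, c)   # roughly -0.0133, 1.2333, 14.5

This mirrors the spreadsheet setup pictured in Figures 1.1 and 1.2.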


Let's look at a value between 45 and 55 and see how well our
polynomial estimates a reasonable value.

It is recommended that the original points also be plugged into


the equation as a check.
Plotting the result on our graph shows that this is
indeed a very good estimate.



Figure 1.3 – Line graph displaying the results of the Quadratic Interpolation



Chapter One – Practice
Exercises

1a)

The owner of the ABC Children's Party Company has offered a
limited menu of pricing options depending on the number of
children attending the party. The available prices are included in the
table below:

ABC Children's Party Company

Maximum children attending the party | Cost per Child | Total Cost of Party
10 | $37 | $370
25 | $28 | $700
50 | $22 | $1100
100 | $15 | $1500

The prices cover the cost plus acceptable profit and have worked
well in the past. To improve the company's competitiveness, the
owner would like to offer more flexible pricing that is specific to the
actual number of children. She would like to develop a cubic (3rd
degree) polynomial that will generate the unit price when she inputs
the expected number of children attending the party. To develop
this polynomial the student must use the algebraic technique of
substitution (elimination) discussed in this chapter.
(Solution Given)

1b)

This exercise offers practice in using basic matrix commands, either
manually or in a spreadsheet program, to solve n equations in n
unknowns.
(Solution given for 2nd to 5th row of data)
Given the following data points, develop a polynomial that will
interpolate any value of p(x) on the given interval, for the bracketed
points. It will result in a third-degree polynomial:



Exercise 1b Sample Data Point

x y or f(x)
-4 12
[-1.75] [-2]

[1] [-3.7]
[3.3] [-1.4]
[6.9] [4]
7 3.9
9.1 6


Tables are provided to assist students.

Figure 1.4 Guide for students


1c)

Select any three data points from the above table and develop a 2nd
degree (quadratic) Polynomial.



Chapter Two - Newton's
Divided Difference
Interpolation
A quick word regarding Divided Difference. The title might suggest
that derivatives are involved, and in a way that would be correct. The
good news is that knowledge of derivatives is not necessary for this
technique. However, students should be familiar with the concept
of slope, slope-intercept form and how slope is calculated since the
process utilizes the change in the dependent variable (commonly
known as y or f(x)) divided by the change in the independent variable
(commonly known as x).
Students may have already encountered the Divided Difference
technique in high school algebra when asked to analyze a set of
data to determine the non-linear (usually quadratic) equation that
produced the dependent variable, as the following example
illustrates.

Example

Given the following set of x values, determine the quadratic (2nd
degree polynomial) that correctly produces the corresponding y
values. Show in standard form:

Sample Data

x y
-2 25.2
-1 11.3

0 2
1 -2.7
2 -2.8


Solution

This simplified use of Newton's Divided Difference works because
one of the x values is zero and there is a uniform distance of one
between each x value.



Figure 2.1 Simplified use of Newton’s Divided Difference


Since the 2nd divided differences are all the same (4.6), this tells us that
there is a quadratic solution
with 2a = 4.6, so a = 2.3.
By plugging in the x, y values (0, 2) we can easily solve for c as
follows: a(0)² + b(0) + c = 2.
Or simply c = 2. Now that we know a and c we plug those
in using one of the other points, such as (1, -2.7), and solve for b as
follows: 2.3(1)² + b(1) + 2 = -2.7, which simplifies to b = -7.
Resulting in the solution equation of y = 2.3x² - 7x + 2,
which works for all given points and approximates everything in
between.
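As a quick machine check of this simplified method, here is a small Python sketch (my own illustration; the book works the differences by hand) that computes the first and second differences for the table above:

ys = [25.2, 11.3, 2, -2.7, -2.8]          # y values for x = -2, -1, 0, 1, 2

first = [b - a for a, b in zip(ys, ys[1:])]
second = [b - a for a, b in zip(first, first[1:])]

print(first)    # roughly [-13.9, -9.3, -4.7, -0.1]
print(second)   # roughly [4.6, 4.6, 4.6]; a constant 2nd difference means 2a = 4.6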
Newton’s Divided Difference Interpolation generalizes the above
process. The given points no longer have to be in any particular
order and the x values do not have to be spaced at uniform intervals;
offering a welcome flexible technique.

The Generalized Process

Using Newton's Divided Difference approach, let's develop a
polynomial that takes a limited number of data points (think points
plotted on the coordinate plane) and fits them to a polynomial that is
continuous across the interval.
This method is an iterative process that allows us to begin with
one point. We can then add additional data points at our discretion,
especially if we believe they will produce a better, more
representative, polynomial.
Each time we add a point the resulting polynomial increases by a
degree resulting in a polynomial of degree one less than the number
of points included in the interpolation process.
(x1, y1): Constant Function: 
(x1, y1), (x2, y2): Linear Function: f_1(x) = a_1x + C
(x1, y1), (x2, y2), (x3, y3): Quadratic Function: 

.
.
.



resulting in an (n-1)th degree function:

The following example illustrates the iterative process and


demonstrates its validity.

I) The Constant Solution

The Constant Solution

x f(x)
-2 3

f(x) = 3 is the constant solution.

II) The Linear Solution: By adding a second point



we move to a straight-line solution

The Linear Solution

x f(x)
-2 3

-1 -4


This is accomplished by preserving the constant solution f(x) = 3
while adding a linear component a1(x + 2) that works for all
points on the straight line passing through both given points, as
follows:

f(x) = 3 + a1(x + 2)

This added component will not alter the solution for x = -2 while introducing
the appropriate linear structure (degree one polynomial).
Solving for a1 ensures f(x) will satisfy both points
and everything on the line passing through the two given points.

f(-1) = 3 + a1(-1 + 2) = -4
3 + a1 = -4
a1 = -7

Thus f(x) = 3 - 7(x + 2).
Simplifying, f(x) = -7x - 11; since this is valid slope
intercept form, we have a linear solution.
Checking the 1st point: f(-2) = -7(-2) - 11 = 3; it checks.
Checking the 2nd point: f(-1) = -7(-1) - 11 = -4; it checks.

III) The Quadratic Solution – 2nd degree polynomial

The Quadratic Solution

x f(x)
-2 3
-1 -4
3 6




Adding a third point allows for the development of a quadratic
(2nd degree) equation. We repeat the process with the same goal:
preserving the constant solution at the first point and the linear
solution for the first two points. The newly added third point will be
satisfied by the previous linear solution plus the added quadratic
component:

f(x) = 3 - 7(x + 2) + a2(x + 2)(x + 1)

The added component a2(x + 2)(x + 1) ensures this new solution works for
previous points as well as establishing a valid quadratic form.

Remember f(-2) = 3 and f(-1) = -4.

Solving for the constant a2 at the new point (3, 6):
3 - 7(3 + 2) + a2(3 + 2)(3 + 1) = 6

Plug in and simplify: 3 - 35 + 20a2 = 6, so a2 = 1.9, giving
f(x) = 3 - 7(x + 2) + 1.9(x + 2)(x + 1) = 1.9x² - 1.3x - 7.2

As a check we will plug in our three given values of x to verify it
produces the corresponding given y values.

Check One: f(-2) = 1.9(4) - 1.3(-2) - 7.2 = 3

Check Two: f(-1) = 1.9(1) - 1.3(-1) - 7.2 = -4

Check Three: f(3) = 1.9(9) - 1.3(3) - 7.2 = 6

We have engaged in an iterative process. Utilizing generalized
notation for the above, we conducted three iterations, with an
additional point added at each iteration.
Single point (x1, y1): f(x) = C, the
constant solution.
Second point added (x2, y2): f(x) = C + a1(x - x1);
solving for a1 gives the
linear solution.
Third point added (x3, y3): f(x) = C + a1(x - x1) + a2(x - x1)(x - x2);
solving for a2 gives the
quadratic solution.
Each new iteration builds upon and preserves the previous
solutions.
In general, the solution polynomial can continue to be increased
one degree at a time, solving for each new variable, as long as
additional points become available. This results in the following
general form:

f(x) = C + a1(x - x1) + a2(x - x1)(x - x2) + ... + an(x - x1)(x - x2)...(x - xn)

Normally it is best to select the lowest order polynomial that
is reasonable. Higher order polynomials can introduce unwanted
error.



The table approach below offers a convenient methodology for
manually calculating the constants. It lends itself to adding
additional points as needed without having to start over.
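For readers who want to check the table by machine, here is a minimal Python sketch (my own illustration; the book builds the table by hand or in a spreadsheet) that computes the divided-difference coefficients for the three points used above and evaluates the resulting Newton polynomial:

def divided_difference_coeffs(xs, ys):
    # Build the divided-difference table column by column and keep its top diagonal
    table = [list(ys)]
    n = len(xs)
    for level in range(1, n):
        prev = table[-1]
        col = [(prev[i + 1] - prev[i]) / (xs[i + level] - xs[i]) for i in range(n - level)]
        table.append(col)
    return [col[0] for col in table]        # C, a1, a2, ...

def newton_eval(coeffs, xs, x):
    # Evaluate C + a1(x - x1) + a2(x - x1)(x - x2) + ...
    total, product = 0.0, 1.0
    for i, c in enumerate(coeffs):
        total += c * product
        product *= (x - xs[i])
    return total

xs = [-2, -1, 3]
ys = [3, -4, 6]
coeffs = divided_difference_coeffs(xs, ys)
print(coeffs)                        # roughly [3, -7.0, 1.9]
print(newton_eval(coeffs, xs, 3))    # reproduces y = 6 at the third point

Adding a fourth point only appends one more column and one more coefficient, which is exactly the "no starting over" advantage described above.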

The following Table Methodology illustrates and simplifies the
above process.

Figure 2.2 Table Methodology


Starting at the right-hand column we backtrack diagonally left
and up (circled in red). Backtracking left and downward would have
produced the same simplified equation (circled in green).

This produces the following results:

f(x) = 3 - 7(x + 2) + 1.9(x + 2)(x + 1)

Simplifying:

f(x) = 1.9x² - 1.3x - 7.2

This satisfies the three given points as well as any interpolated
points between the minimum and maximum value of x. Because it is
a continuous function, it also produces extrapolated points beyond
the range. These extrapolated points may or may not be valid for
any particular situation being analyzed. That is part of the “Art” of
interpolation which relies on the experience and expertise of the
one studying a particular phenomenon.

The Sin function – An interesting example

One of the neat things we can use interpolation for is to create a
polynomial that provides reasonable estimates of the sin (or cos) of
an angle for any given measure. In fact, the numbers we will use are
so small and simple that even the Elimination (Substitution) approach
will easily produce the desired result.

The Sine function illustrated on the coordinate plane



Figure 2.3 Sine Function Graph



Figure 2.4 Estimating sin wave – Newton’s Divided Difference Table


Simplifying the resulting equation produces:



Figure 2.5 An approximation of sin value



Chapter Two – Practice
Exercises

2a)

While the owner in exercise 1a) was happy with the results of using
elimination/substitution, she was curious to see if the results would
differ using Newton’s Divided Difference (NDD) interpolation. You
have decided to assist her by generating a cubic polynomial using
NDD. (Solution given) The data is:

ABC Children's Party Company

Maximum children attending the Cost per Total Cost of


party Child Party
10 $37 $370

25 $28 $700
50 $22 $1100

100 $15 $1500


2b)

Using the same seven data points from the previous chapter select
three data points and plug into the grid below to produce a
quadratic solution. Simplify the resulting polynomial and put in
standard form. Note solution given for the three bracketed points.
(Solution given)

Seven Data Points

x y or f(x)

-6.2 -8
[-3] [-7]
-1.5 -2.2

[1] [0.7]
3.5 3

4.25 5
[7.9] [11]


Exercise 2b Answer Grid

- x f(x) 1st divided difference 2nd divided difference

- -

- - - - -
- - - - -

- - - - -
- - - - -
- - - - -




2c)

Add an additional data point and develop a 3rd degree (cubic)
polynomial. Compare this to the solution from 2a) and decide
whether or not it improves the interpolation. Note student answers
may vary.



Chapter Three - Quadratic
Spline Interpolation
This technique offers several advantages over other techniques. It
produces a smooth curve over the interval being studied while at
the same time offering a distinct polynomial for each subinterval
(known as Splines). Secondly, it eliminates some of the problems
inherent in trying to fit a single higher order polynomial, which can
actually produce misleading estimates by being too precise.
One disadvantage that we quickly discover is that the resulting
set of polynomials can be taxing to solve manually using techniques
such as elimination/substitution, Gauss-Jordan reduction or
Cramer’s rule. Fortunately, many applications including most
spreadsheet programs allow us to solve the resulting system, easily
producing the family of equations.
Let's begin with a simple case that the student can choose to solve
manually to gain an understanding of the process. The matrix
operations are shown as well.

Spline Example

Figure 3.1 Spline Example


Instead of one equation we could have an equation representing
the interval [2,5] and a second equation representing [5,7]. The key is that the
point in the middle contributes to both equations, creating a
connection that ensures a smooth handoff from the first to the
second equation. The general form is:

f1(x) = a1x² + b1x + c1 on [2,5] and f2(x) = a2x² + b2x + c2 on [5,7]

Since we want to solve for the six constants in a proper linear
fashion, we need four more equations. To find them we employ the
connection at x = 5. Since each equation satisfies two endpoints,
this allows us to double the number of equations as follows:

4a1 + 2b1 + c1 = 1
25a1 + 5b1 + c1 = 8
25a2 + 5b2 + c2 = 8
49a2 + 7b2 + c2 = 3
We now have four equations. The fifth equation we can develop
at the point (5,8) known as an internal knot. Note the two endpoints
are sometimes referred to as external knots.
If we take the derivative of the two equations at x = 5, we know
they have to be equal because the slope has to be the same at that
point. We can set them equal to each other and simplify. This results
in:

2a1(5) + b1 = 2a2(5) + b2

Rearrange:

10a1 + b1 - 10a2 - b2 = 0

We now have five of the six equations:

4a1 + 2b1 + c1 = 1
25a1 + 5b1 + c1 = 8
25a2 + 5b2 + c2 = 8
49a2 + 7b2 + c2 = 3
10a1 + b1 - 10a2 - b2 = 0

a1 = 0
This is the sixth equation; see explanation below.
The sixth equation is based on the assumption that the line
leaving the endpoint is a straight line. The quadratic component
zeros out, thus our sixth equation is simply a1 = 0. The other
endpoint would have produced a2 = 0, which would have worked
equally well. We'll use these six equations and solve with matrix
operations.



Constants Displayed in Matrix Form

a1 b1 c1 a2 b2 c2 y
4 2 1 0 0 0 1
25 5 1 0 0 0 8

0 0 0 25 5 1 8
0 0 0 49 7 1 3
10 1 0 -10 -1 0 0
1 0 0 0 0 0 0
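As a cross-check, here is a minimal Python/NumPy sketch (my own illustration; the book carries out the equivalent matrix inversion in a spreadsheet) that solves the six-equation system above:

import numpy as np

# Columns are a1, b1, c1, a2, b2, c2, matching the table above
A = np.array([
    [ 4,  2, 1,   0,  0, 0],   # first spline through (2, 1)
    [25,  5, 1,   0,  0, 0],   # first spline through (5, 8)
    [ 0,  0, 0,  25,  5, 1],   # second spline through (5, 8)
    [ 0,  0, 0,  49,  7, 1],   # second spline through (7, 3)
    [10,  1, 0, -10, -1, 0],   # equal slopes at the internal knot x = 5
    [ 1,  0, 0,   0,  0, 0],   # a1 = 0 (straight line at the left endpoint)
], dtype=float)

y = np.array([1, 8, 8, 3, 0, 0], dtype=float)

a1, b1, c1, a2, b2, c2 = np.linalg.solve(A, y)
print(a1, b1, c1)   # coefficients of the spline on [2, 5]
print(a2, b2, c2)   # coefficients of the spline on [5, 7]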

For illustrative purposes a detailed flow of the matrix operation is


offered below:



Figure 3.2 Matrix Operation Flow



Figure 3.3 Two Spline Interpolation Equations



Example – Space Launch Data

The following combines a general explanation of the technique


along with a specific example. We will use the following launch
data for the Saturn 5 rocket. Note this data was pulled from readily
available data for several launches and in fact does not represent
any one launch. The data tracks a hypothetical Saturn 5 from launch
until third stage shutdown shortly before entering earth orbit.

Space Launch Data

x (time in minutes) y (velocity in 1000ft per second)


0 1
1 2
2.5 9

3 9.2
4 10

5 12
6 14.5

7 17
8 20
8.75 23

9 23.5
10 24

11 25.5

11.25 25.9
11.5 25.9



Selected Interval Points (knots)

x y Interval
1 2 start of first interval
2.5 9 1st stage separation

8.75 23 2nd stage separation


11.25 25.9 3rd stage shutdown

Figure 3.4 Plot of Launch-Knots Identified


Since we have selected four data points (knots), we create three
quadratic spline equations, each with three unknowns:

f1(x) = a1x² + b1x + c1
f2(x) = a2x² + b2x + c2
f3(x) = a3x² + b3x + c3

We want to solve for the nine unknowns. However, with only
three equations we need to create six additional equations in order
to apply one of the standard techniques for solving n equations in n
unknowns.
Notice that each equation is a solution for two of the knots, as
shown in Figure 3.4. This allows us to split each spline equation into
two equations, providing a total of six equations as follows:

a1(1)² + b1(1) + c1 = 2
a1(2.5)² + b1(2.5) + c1 = 9
a2(2.5)² + b2(2.5) + c2 = 9
a2(8.75)² + b2(8.75) + c2 = 23
a3(8.75)² + b3(8.75) + c3 = 23
a3(11.25)² + b3(11.25) + c3 = 25.9

We are getting closer. We will now create two more equations
using basic knowledge of the derivative and the fact that two pairs
of equations are solutions for the two interior knots. This works
because the first derivative of each equation in a pair will have the
same slope at the common data point (knot).
This is not a course in calculus, so I will simply show the first
derivatives for each pair to obtain our additional equations.

2a1x + b1 = 2a2x + b2 at x = 2.5, which rearranges to
5a1 + b1 - 5a2 - b2 = 0, the
seventh equation

2a2x + b2 = 2a3x + b3 at x = 8.75, which rearranges to
17.5a2 + b2 - 17.5a3 - b3 = 0,
the eighth equation


For our ninth equation we recognize that at each endpoint the
resulting line extending beyond the interval is a straight line. Since
this eliminates the quadratic component, we can simply make our
ninth equation be:
a1 = 0

We now have our nine equations with nine unknowns. Figure 3.5
below includes the nine equations.



Figure 3.5 The Nine Equations


Gathering the equations and squaring the quadratic variables
results in the following nine equations with nine unknowns. The x
variables are replaced with the x-value from the related knot.

a1(1) + b1(1) + c1 = 2
a1(6.25) + b1(2.5) + c1 = 9
a2(6.25) + b2(2.5) + c2 = 9
a2(76.56) + b2(8.75) + c2 = 23
a3(76.56) + b3(8.75) + c3 = 23
a3(126.56) + b3(11.25) + c3 = 25.9
5a1 + b1 - 5a2 - b2 = 0
17.5a2 + b2 - 17.5a3 - b3 = 0
a1 = 0

It would be a cumbersome task to solve the above system by hand.


Instead, we will put the data in matrix form and solve.

Nine Equations Solved with Matrix Math

1 1 1 0 0 0 0 0 0 a1 2
6.25 2.5 1 0 0 0 0 0 0 b1 9

0 0 0 6.25 2.5 1 0 0 0 c1 9
0 0 0 76.56 8.75 1 0 0 0 a2 23
0 0 0 0 0 0 76.56 8.75 1 b2 23
0 0 0 0 0 0 126.56 11.25 1 c2 25.9
5 1 0 -5 -1 0 0 0 0 a3 0

0 0 0 17.5 1 0 -17.5 -1 0 b3 0
1 0 0 0 0 0 0 0 0 c3 0

Plug the above into a spreadsheet and apply matrix operations as


follows:



Figure 3.6 Nine Equations Solved with Matrix Math


The calculations produced three polynomials, one for each
interval between knots.

These equations produce reasonable estimates for the overall


flight pattern as shown in figure 3.7.

Figure 3.7 A solution for a space launch



Direct Method Cubic Interpolation

Cubic interpolation takes us to the next level and is a common
method for developing an equation that approximates f(x) for a
particular value of x, as well as the neighborhood on either side made
up of the four closest given data points. It is well suited if we want
to interpolate for a particular interval of x. This will not provide a
family of polynomials that satisfy the domain of the function. Rather
it provides a single cubic polynomial that gives us a good picture
of what is happening at and near a particular point of interest. This
approach allows us to set up and solve a single cubic equation. The
principal limitation is that it is not valid for the entire domain of
x, only the four closest points. Since we often only want to look at
a limited range, the benefits of a significant reduction in algebraic
manipulation outweigh the limitation.
We will use our table of data from the previous example.



Saturn 5 Rocket Launch Data

x (time in minutes) y (velocity in 1000 ft per second)


0 1
1 2

2.5 9
3 9.2
4 10
5 12
6 14.5

7 17
8 20
8.75 23
9 23.5
10 24

11 25.5
11.25 25.9
11.5 25.9

Let's say we want to estimate the velocity when x = 5.85
minutes. We check the points on either side to determine the four
closest values to 5.85 (shown in red).



Closest Values to x=5.85

Checking Distances Four Data Points


5.85 - 3 = 2.85 -
5.85 - 4 = 1.85 (4,10)

5.85 - 5 = 0.85 (5,12)


6 - 5.85 = 0.15 (6,14.5)
7 - 5.85 = 1.15 (7,17)
8 - 5.85 = 2.15 -

Other Data Points From Example

x (time in minutes) y (velocity in 1000 ft per second)


4 10
5 12

6 14.5
7 17

Utilizing the standard form for a cubic polynomial allows us to
quickly set up four equations with four unknowns. Remember we
are not finding x and y; we already know those. Rather, our
unknowns are the constants a, b, c, d.



  
64a + 16b + 4c + d = 10
125a + 25b + 5c + d = 12
216a + 36b + 6c + d = 14.5
343a + 49b + 7c + d = 17

Using high school algebra (elimination/substitution), Gauss-Jordan
reduction or some other method, solve for the four
unknowns. Below is the setup using matrix math to solve the
cubic polynomial in a spreadsheet program.
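Here is a minimal Python/NumPy sketch of the same direct-method setup (my own illustration; the book uses spreadsheet matrix functions for this step):

import numpy as np

# The four given points closest to x = 5.85
xs = np.array([4, 5, 6, 7], dtype=float)
ys = np.array([10, 12, 14.5, 17], dtype=float)

# Vandermonde-style matrix: each row is [x^3, x^2, x, 1]
A = np.vander(xs, 4)

# Solve for the cubic coefficients a, b, c, d
a, b, c, d = np.linalg.solve(A, ys)

# Interpolate the velocity at x = 5.85 minutes
x = 5.85
print(a * x**3 + b * x**2 + c * x + d)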

Figure 3.8 Solved Cubic Polynomial via Spreadsheet Program


Resulting Equation:
y = -0.0833x³ + 1.5x² - 6.4167x + 17, for
the interval [4, 7].
Let's see how well our cubic polynomial fits when plotted against
all the given points plus x = 5.85.



Figure 3.9 Graph of Cubic Interpolation


Notice that the solution provides the best estimate in the


neighborhood of the closest points.



Chapter Three - Practice
Exercises

3a)

Using the data from the Saturn launch example in chapter three
calculate the family of quadratic splines for the following different
Selected Interval Data Points (knots) and compare to the example.

Saturn 5 Rocket Launch Data

x (time in minutes) y (velocity in 1000 ft per second)


0 1

1 2
2.5 9
3 9.2

4 10
5 12
6 14.5

7 17
8 20

8.75 23
9 23.5
10 24
11 25.5

11.25 25.9

11.5 25.9

Selected Interval Points (knots)

x y Interval

1 2 start of first interval


2.5 9 1st stage separation
8.75 23 2nd stage separation
11.25 25.9 3rd stage shutdown

3b)

Using the table in 3a) for time = 7.5, conduct a Direct Method Cubic
Interpolation. Show the resulting polynomial in standard form and
graph the solution manually or with your favorite graphing tool.
(Solution given)



Chapter Four - Least Squares
Regression
This technique is often used when many points of data are involved
and the analyst would like the resulting polynomial to be influenced
by all the identified points. The degree of the Interpolated
polynomial should be selected ahead of time based on the expertise
of the analyst. As a general rule of thumb, the lowest degree
polynomial that appears to fit is the better choice. So, one might fit a
quadratic or cubic solution to a large number of points which could
run to dozens or even hundreds of points. The result will always be
considered mathematically a best fit to the data.
To gain an understanding of the underlying principle and process
we will begin with a simple data set consisting of five points.

Scenario

A helium balloon that gathers meteorological data is released. For
each mile it rises, the distance it travels downrange is also recorded.
The data is recorded in the following table.

Altitude and Downrange

Altitude - x miles Downrange - y miles


1 2
2 3

3 5
4 5
5 4

Figure 4.1 Data points for a Helium Balloon




Let’s begin with the simplest model – the straight line. We want
to find a best fit linear equation that minimizes the sum of the
distances between the actual and interpolated values of y for a given
value of x.
1) A generalized linear equation, y = ax + b, will serve as our
starting point.
2) It is easy to see that with a little rearranging we have an
equation that lends itself to finding that minimum distance
mentioned above: y - (ax + b) = 0
We will square this equation so that the resulting differences in
distance are always positive, as we are not interested in the direction
of the difference but the sum of the differences.
Since we want the sum of these squared equations, we have the
following for this example:

Σ [y - (ax + b)]², summed over the five data points.

Interestingly, by squaring we obtain a
quadratic expression, which will be useful in finding a linear solution.
In fact, it will allow us to create two partial derivative equations, one for
each of the constants we are trying to solve for, in this case a and b. This
will result in two linear equations in two unknowns which we can
solve using elimination/substitution or more advanced techniques
such as matrix computations. And because they are upward facing
quadratics, we minimize by setting each partial derivative to zero.

1) ∂/∂a: Σ -2x[y - (ax + b)] = 0



2) ∂/∂b: Σ -2[y - (ax + b)] = 0

Next, we simplify each equation by distributing the summation


notation. And, since they are equal to zero, we simply divide out the
-2. We now have two equations in two unknowns a,b.

Simplify 1): aΣx² + bΣx = Σxy

Simplify 2): aΣx + 5b = Σy

We now have two equations in two unknowns a, b. Let’s calculate


the various sums and plug in.



I) Plug in the sums to set up the two equations as follows:

One: 63 = 55a + 15b

Two: 19 = 15a + 5b

II) Rearrange:

One: 55a + 15b = 63

Two: 15a + 5b = 19

III) Apply substitution/elimination to solve for a, b:

a = 0.6, b = 2

We now have a polynomial that can interpolate
values in the interval [1,5]:

y = 0.6x + 2

Figure 4.2 Graph of Linear Solution



As we can see, the linear solution offers an estimate that is closer
to some of the given points than others. Can we do better by
generating a curved line? (2nd degree polynomial)

The Quadratic Solution

The challenge is to expand on the above technique and apply it to
develop the best fit quadratic equation.
In the linear case, our goal was to solve two equations in two
unknowns. Now we want to solve three equations in three
unknowns. The unknowns are the constants of our quadratic
equation in standard form: y = ax² + bx + c.
Rearranging the standard form, we develop the Least Squares
Summation equation:

Σ [y - (ax² + bx + c)]²

Now we take partial derivatives with respect to each of the three
constants a, b, c as follows:

a —> Σ -2x²[y - (ax² + bx + c)] = 0

b —> Σ -2x[y - (ax² + bx + c)] = 0

c —> Σ -2[y - (ax² + bx + c)] = 0

Simplify by dividing out the -2 and distributing the summation
notation:

aΣx⁴ + bΣx³ + cΣx² = Σx²y
aΣx³ + bΣx² + cΣx = Σxy
aΣx² + bΣx + 5c = Σy

Let's calculate the additional sums needed. We already
calculated some of the sums for the linear equation.
These are: Σx = 15, Σy = 19, Σx² = 55, Σxy = 63

Additional sums: Σx³ = 225, Σx⁴ = 979, Σx²y = 239

Plugging in shows the three equations in three unknowns.
Rearranging:

979a + 225b + 55c = 239
225a + 55b + 15c = 63
55a + 15b + 5c = 19

Solving manually or using spreadsheet software, the following
equation is obtained:

y = -0.4286x² + 3.1714x - 1

This is the interpolation polynomial that generates a curved line
(parabola) that is the best fit for the five given data points, and it
estimates y values for any other point within the interval.
Matrix operations simplify the calculations.
Note: multiplying the transpose by the matrix produces the
summations in n equations with n unknowns. This holds true no
matter how many data points are involved.
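Here is a minimal sketch of that transpose trick in Python with NumPy (my own illustration; the book performs it with spreadsheet matrix functions), fitting the best fit quadratic to the five balloon points:

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 5, 4], dtype=float)

# Design matrix: one row of [x^2, x, 1] per data point
X = np.column_stack([x**2, x, np.ones_like(x)])

# Normal equations: (X^T X) coeffs = X^T y reproduces the summation equations above
coeffs = np.linalg.solve(X.T @ X, X.T @ y)
print(coeffs)   # roughly [-0.429, 3.171, -1.0]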

Figure 4.3 Matrix Operations Simplifying Calculations



Figure 4.4 Graph of Quadratic Solution


Visually, the quadratic is a better fit than the linear solution.


In the next section we’ll show how to measure the goodness of the
fit quantitatively.



Chapter Four – Practice
Exercise

4a)

Use weekly closing data for the Dow Jones Industrial Average and
run a Least Squares Regression to produce a 3rd degree (cubic)
interpolation polynomial. Plot the data on a chart for a visual
representation. Solution given uses data from January 2020
through July 2021, during height of the COVID-19 pandemic.
(Solution given)

Chapter Five - Measuring the
Least Squares Fit/Exponential
Least Squares Regression

How Well Does the Linear Polynomial Fit the Data?

It is natural and useful to ask: how good a predictor is the resulting
polynomial for the given values of x? In other words, how close
do the predicted values of y come to the actual values of y for a
particular value of x?
Let’s look at the chart for the linear regression we calculated (red
dotted line) in Chapter Four. The length of red vertical lines between
the actual and predicted values tells us how good the fit is. The
smaller the red lines (closer), the better the fit.

Figure 5.1 The Linear Fit


However, simply measuring each distance and adding them


together presents some problems. We want to eliminate direction
because the negatives and positives tend to cancel each other out.
An easy way to do this is measure each distance and then square the
result. Hence the name Least Squared Regression.
Next, we need a baseline or something to compare our summed
squared regression. It turns out a horizontal line passing through
the mean of the y values offers us a worst-case scenario. In other
words, the distance between the given y and the horizontal line
is essentially no fit. So we add the given y values and divide by
5 (the number of data points in this example): the mean is 19/5 = 3.8,
shown in
green above. The closer the predicted value is to the actual value,
and the farther it is from the mean value, the better our prediction.
Using the data above we will conduct a (Squared Regression)
analysis to gauge numerically how well the linear and quadratic
polynomials fit the data.

Squared Regression Analysis

x | y | Generated y value (0.6x + 2) | Squared difference between actual and generated

1 | 2 | 2.6 | 0.36
2 | 3 | 3.2 | 0.04
3 | 5 | 3.8 | 1.44
4 | 5 | 4.4 | 0.36
5 | 4 | 5.0 | 1

Sum of squared differences: 3.2

However, to put this in perspective we need to add a column and


calculate the sum of the squared distance between the actual values
of y and the mean value of y.

Squared Regression Analysis with Total Differences

x | y | Generated y value | Squared difference, actual vs. generated | Squared difference, actual vs. mean

1 | 2 | 2.6 | 0.36 | 3.24
2 | 3 | 3.2 | 0.04 | 0.64
3 | 5 | 3.8 | 1.44 | 1.44
4 | 5 | 4.4 | 0.36 | 1.44
5 | 4 | 5.0 | 1 | 0.04

Sums: 3.2 | 6.8

By taking the ratio of the sum of our squared error to the sum
of the No-Fit values and subtracting from one, we get a number
(a percent) that tells us how good our fit is in terms that are
understandable:

1 - (3.2 / 6.8) ≈ 0.53

The value of 53% suggests that this may not be the best fit.

Let’s calculate for the quadratic fit to see if it is a better fit.

Squared Regression Analysis with Different Generated y Values

x | y | Generated y value | Squared difference, actual vs. generated | Squared difference, actual vs. mean

1 | 2 | 1.7428 | 0.06615184 | 3.24
2 | 3 | 3.6284 | 0.39488656 | 0.64
3 | 5 | 4.6568 | 0.11778624 | 1.44
4 | 5 | 4.828 | 0.029584 | 1.44
5 | 4 | 4.142 | 0.020164 | 0.04

Sums: 0.6286 | 6.8

1 - (0.6286 / 6.8) ≈ 0.91, or 91%.

The quadratic is a better fit than the straight line. However, part
of the “Art” of interpolation means the analyst still has to decide
which is more meaningful and representative of the situation being
analyzed.
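If you would rather let the computer tally these columns, here is a minimal Python sketch (my own illustration; the book's tables are worked by hand or in a spreadsheet) of the same goodness-of-fit ratio for both models:

import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([2, 3, 5, 5, 4], dtype=float)

def fit_ratio(predicted, actual):
    # 1 - (sum of squared errors) / (sum of squared distances from the mean)
    ss_err = np.sum((actual - predicted) ** 2)
    ss_mean = np.sum((actual - actual.mean()) ** 2)
    return 1 - ss_err / ss_mean

linear = 0.6 * x + 2                          # linear fit from Chapter Four
quadratic = -3/7 * x**2 + 111/35 * x - 1      # quadratic fit from Chapter Four (as reconstructed above)
print(fit_ratio(linear, y))      # roughly 0.53
print(fit_ratio(quadratic, y))   # roughly 0.91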

Exponential Least Squares Regression

An important interpolation is one involving exponential
polynomials. It has many applications in finance, biochemistry, and
radioactive decay.
We will focus on the standard form using the constant e. This
is known as the natural number or Euler’s number value. Its
importance lies in the fact that it represents the fundamental rate
of growth shared by continually growing processes. One example is
continuous compounding of money in a savings account.
 
The form of the polynomial is y = Ae^(rx).
In this, we can think of r as the rate, with a positive value indicating
growth and a negative value indicating decay, and A as the
y intercept.
Graphically it looks like this (A and r are both set to 1):

Figure 5.2 Exponential Growth


The form y = Ae^(rx) does not lend itself to directly calculating an
interpolative polynomial, in part because the standard least squares
approach does not apply directly to this type of continuous and ever
accelerating growth.
Since we already know how to deal with standard polynomials
that can be solved using linear techniques such as matrix arithmetic,
our goal is to eliminate e, solve for r and A, then plug the results back
into the original form.
Since we are dealing with the natural number e, we can convert
the above to a linear function by taking the natural log of both sides
as follows:

ln y = ln(Ae^(rx)) = ln A + rx

When we rearrange, we have a linear equation in slope intercept
form:

ln y = rx + ln A
Let’s use the following sample set of data points and use Matrix
math to develop the interpolated data:

Interpolated Data

x | actual y | ln y
-1 0.4 -0.916
0 1.1 0.095

1 2.62 0.963
2 8.1 2.092
3 24.03 3.179
4 57.9 4.059
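A minimal Python sketch of this log-transform fit (my own illustration, assuming NumPy; the book does the equivalent with spreadsheet matrix operations) might look like:

import numpy as np

x = np.array([-1, 0, 1, 2, 3, 4], dtype=float)
y = np.array([0.4, 1.1, 2.62, 8.1, 24.03, 57.9], dtype=float)

# Linearize: ln y = r*x + ln A
ln_y = np.log(y)

# Least squares fit of a straight line to (x, ln y)
X = np.column_stack([x, np.ones_like(x)])
r, ln_A = np.linalg.solve(X.T @ X, X.T @ ln_y)

A = np.exp(ln_A)
print(r, A)                   # rate and leading coefficient
print(A * np.exp(r * 2))      # interpolated value, close to the given point (2, 8.1)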

Figure 5.3 Matrix Math Solution


Figure 5.4 Graph of a line of fit for exponential function


This resulted in a very good fit.

Chapter Five - Practice
Exercise

5a)

Measure the accuracy of the fit from Exercise 4a, i.e. find the goodness-of-fit ratio.
(Solution given)



Chapter Six - Approximation
with Taylor Series
While this text is not about calculus, I believe it is important for
students to become familiar with approximation using Taylor Series.
References to derivatives are necessary but the actual derivatives in
the examples will be given.
A way to think about Taylor Series polynomials is that they are
simply a polynomial of any degree you wish to use that
approximates a function being studied. Similar to Newton’s divided
difference we start with the simplest approximation, the constant.
Let's call our approximation p(x). We will let a be a
particular point on the x-axis that will be the center of our
approximation. The approximation improves the closer the value of
x is to a. The function we are approximating is f(x).
For a horizontal straight line at a particular point, we can say an
approximation polynomial is p(x) = f(a).
Suppose we choose a point x = a; the graph might look
something like:

Figure 6.1 The Horizontal Straight Line Estimator


At x = a, the horizontal line is an excellent approximation.
We can say that p(x) = f(a), which is a constant.
Clearly, once we move away from a in either direction it turns out
the constant does not serve us very well.
Our next step is adding a linear component while still retaining
the constant, which means we now have a polynomial that allows
us to adjust the slope of the line. Let's try



 
p(x) = f(a) + f'(a)(x - a), where f'(a) is the first
derivative of the function at a.

By adding the linear component, we can think of f'(a) as
the slope. This improves our approximation:

Figure 6.2 The Linear Solution


By adding the linear component, we can see how the picture
improves at the point x = a, because we now have a tangent line
(representing the slope at a). It is definitely an improvement over
the constant, as our approximation is pretty good as long as we stay
near a.
So far, we have brought to bear a constant value and the slope.
Because Taylor series allows us to add higher degree terms to our
polynomial, we can now bring to bear the effect of concavity to the
approximation. Think of concavity as adding curviness to what so
far has been a straight line.
Let’s add a quadratic (second degree) and cubic (third degree)
component to our polynomial. These will introduce the curviness
by adjusting the line at any given x value up or down. Figure 6.3
illustrates the effect of higher order polynomials.

Figure 6.3 Effect of Higher Order Polynomials



We can see that as each higher-level component is added, the
approximation improves the farther we travel from the point x = a.
Quadratic:
p(x) = f(a) + f'(a)(x - a) + c2(x - a)²
Cubic:
p(x) = f(a) + f'(a)(x - a) + c2(x - a)² + c3(x - a)³

We could continue this indefinitely:
p(x) = c0 + c1(x - a) + c2(x - a)² + c3(x - a)³ + c4(x - a)⁴ + ...
From here we will develop the general form of the Taylor series
employing basic algebra.
This is done iteratively by solving one constant at a time. We set
a = 0, since in fact all Taylor polynomials either start with a = 0
or include the (x - a) adjustment, so that in effect the center will
always equal zero.
We will solve for a fourth-degree polynomial. This will be enough
to demonstrate the general pattern of the Taylor series. To solve
for each constant, we repeatedly differentiate and evaluate at x = a = 0, as
follows:

f(x) = c0 + c1x + c2x² + c3x³ + c4x⁴

At x = 0, every term after the first vanishes, so c0 = f(0).

Next we take the first derivative of both sides:

f'(x) = c1 + 2c2x + 3c3x² + 4c4x³

Again x = 0, so we are left with c1 = f'(0).

The second derivative of both sides:
f''(x) = 2c2 + 6c3x + 12c4x²
Since x = 0, we're left with c2 = f''(0)/2.

The third derivative of both sides:
f'''(x) = 6c3 + 24c4x
Since x = 0, we're left with c3 = f'''(0)/6.

The fourth derivative:
f''''(x) = 24c4, so c4 = f''''(0)/24.

Plugging in the solution for the four constants produces the
general form:

p(x) = f(0)/0! + [f'(0)/1!]x + [f''(0)/2!]x² + [f'''(0)/3!]x³ + [f''''(0)/4!]x⁴

Normally we don't show the denominators when they are simply
one. However, I've done so to illustrate the emerging pattern.
Remember 0! = 1 and 1! = 1. This allows us to observe that the
denominators are really successive factorials.

...



Sin Function

Let’s use an actual example to illustrate the process. Some things


to remember. Taylor Series approximation only works for certain
functions; typically, those that are continuous, repeatedly
differentiable and irrational. They are also known as transcendental
functions. Trig functions such as sin and cos, as well as exponential
and logarithmic functions, imperfect roots, along with several other
categories work well. Suppose we have been assigned a project to
create our own App that will generate sin values.
We will focus on the mechanics of the process. For students who
would like to delve deeper into Taylor Series there are a wealth of
texts and videos available.
Step One: Select the function to be approximated. For this
example, we will choose the sin function. It is well suited for Taylor
Series approximation. It is continuous over the real numbers and it
is repeatedly differentiable.
Step Two: Select an x value that we want to center our
approximation around. It turns out 0 degrees is an easy value to work
with as we differentiate sin.
Step Three: Repeatedly differentiate sin until the desired final
degree of our Taylor Polynomial is reached. In this example we
arbitrarily decided a ninth degree Taylor polynomial will produce
Sin values accurate enough to meet our needs. Note we will work
with radians as the angle measure.
Derivatives:
f(x) = sin x, f'(x) = cos x, f''(x) = -sin x, f'''(x) = -cos x, f''''(x) = sin x, ...
(the pattern repeats every four derivatives). Evaluated at x = 0 these give
0, 1, 0, -1, 0, 1, 0, -1, 0, 1.


Step Four: Plug in our derivatives into the general form of the
Taylor polynomial:

Since every other term has zero in the numerator, we can drop
these and condense p(x). Further, since a = 0, we can simplify the
binomials.
The resulting Taylor Series polynomial is:

p(x) = (1/1!)x + (-1/3!)x³ + (1/5!)x⁵ + (-1/7!)x⁷ + (1/9!)x⁹
We have a relatively simple polynomial we can program into our
app to produce values of sin for angles between 0 and 90 degrees.
Since sin is periodic, we can program in computations that give us
the reference angle for angles greater than 90 or less than 0 degrees.
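Here is what that polynomial looks like as code, in a minimal Python sketch (my choice of language; the book simply describes programming it into an app), compared against the library sine:

import math

def taylor_sin(x):
    # Ninth-degree Taylor (Maclaurin) approximation of sin; x is in radians
    return (x
            - x**3 / math.factorial(3)
            + x**5 / math.factorial(5)
            - x**7 / math.factorial(7)
            + x**9 / math.factorial(9))

for degrees in (0, 18, 22.5, 30, 45, 72, 90):
    radians = math.radians(degrees)
    print(degrees, math.sin(radians), taylor_sin(radians))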
Step Five: We are now ready to test p(x) for various angles
between 0 and 90 degrees. Since it is easier to work with Radians,
I’ve included a conversion for students not familiar with them. f(x)
is generated from an app precise to 15 decimal positions. p(x) is our
Taylor approximation.



Step 5 of Taylor Approximation

Degrees | Radians | f(x) | p(x)

0 | 0.000000 | 0.000000000000000 | 0.000000000000000
18 | 0.314159 | 0.309016994374947 | 0.309016994375021
22.5 | 0.392699 | 0.382683432365090 | 0.382683432365947
30 | 0.523599 | 0.500000000000000 | 0.500000000000000
45 | 0.785398 | 0.707106781186547 | 0.707106782936867
72 | 1.256637 | 0.951056516295154 | 0.951056822327524
90 | 1.570796 | 1.000000000000000 | 1.000003542584290

p(x) provides an excellent approximation out to at least six


decimal places for the values of x we tested. The symmetry and
reflectivity properties of the sin function will allow us to generate
values for angles less than 0 and greater than 90 degrees.



Figure 6.4 Graph of Taylor Approximation


Note: the difference is slight enough that the lines appear to overlap on the
graph.



Chapter Six – Practice
Exercise

6a)

Replicate the above example (sin) for the cos. Compare the resulting
graph to the one for sin.
(Solution given)



Chapter Seven - Taylor Series
Remainder Test
A formal way to test the accuracy of a Taylor polynomial
approximation is to employ the Taylor Remainder test. By adding
a remainder term to our Taylor polynomial approximation, we in
effect convert it into an equation.
Our function becomes f(x) = pn(x) + rn(x),
where pn(x) is the Taylor polynomial and rn(x) is the remainder. This
remainder term becomes the difference between f(x) at a
particular point and pn(x) at that same value of x.
In the above example we ran our polynomial out to the ninth-
degree term. The remainder
actually looks like the next higher degree term:

r9(x) = [f^(10)(c) / 10!] x¹⁰, where c is
between a and x.

The question we ask is what value for c should we use. The answer
in this case is to solve the remainder twice for the endpoints of
the range we are interested in. In this case c lies
between a = 0 and x = π/2.

This will provide a range of possible values between r9(0) and r9(π/2).

For c = 0: r9(x) = [-sin(0) / 10!] x¹⁰ = 0

For c = π/2: r9(x) = [-sin(π/2) / 10!] x¹⁰ = -x¹⁰ / 10!

We drop the negative as it's a matter of distance, not direction.

This bounds the possible error of our approximation:

|r9(x)| ≤ x¹⁰ / 10!, which at x = π/2 is about 0.000025.
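To see the bound numerically, here is a minimal Python sketch (my own check, not part of the original text) comparing the actual error of the ninth-degree sine polynomial with this remainder bound at x = π/2:

import math

def taylor_sin(x):
    # Ninth-degree Taylor approximation of sin from Chapter Six
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(5))

x = math.pi / 2
actual_error = abs(math.sin(x) - taylor_sin(x))
remainder_bound = x**10 / math.factorial(10)

print(actual_error)      # roughly 3.5e-06
print(remainder_bound)   # roughly 2.5e-05, so the bound safely covers the actual error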



Chapter Seven - Practice
Exercise

7a)

Conduct the Taylor Remainder Test on your solution for Practice


Problem 6a.
(Solution given)

Solutions to Selected Practice
Exercises

Solution to Exercise One Practice Problems

Exercise 1a)

ABC Children's Party Company

Maximum children attending the Cost per Total Cost of


party Child Party
10 $37 $370
25 $28 $700
50 $22 $1100
100 $15 $1500

The four equations in four unknowns:

Equations in Table Form

a b c d cost
1000 100 10 1 37
15625 625 25 1 28

125000 2500 50 1 22
1000000 10000 100 1 15

Figure 8.1 Matrix Setup for Exercise 1a


Resulting Pricing Polynomial:
p(x) ≈ -0.0000852x³ + 0.0162x² - 1.0854x + 46.31


Figure 8.2 Exercise 1b




Solution to Exercise Two Practice Problems

2a)

Newton’s Divided Difference Table is populated as follows:



Newton's Divided Difference Table

x: 10, 25, 50, 100
y: 37, 28, 22, 15
1st divided differences: -0.6, -0.24, -0.14
2nd divided differences: 0.009, 0.00133
3rd divided difference: -0.0000852

p(x) = 37 - 0.6(x - 10) + 0.009(x - 10)(x - 25) - 0.0000852(x - 10)(x - 25)(x - 50)

Simplifies to:
p(x) ≈ -0.0000852x³ + 0.0162x² - 1.0854x + 46.31


2b)

2b Table

x y or f(x)
-6.2 -8

-3 -7
-1.5 -2.2
1 0.7
3.5 3
4.25 5

7.9 11



2b Difference Table

x: -3, 1, 7.9
f(x): -7, 0.7, 11
1st divided differences: (0.7 - (-7)) / (1 - (-3)) = 1.925, (11 - 0.7) / (7.9 - 1) ≈ 1.493
2nd divided difference: (1.493 - 1.925) / (7.9 - (-3)) ≈ -0.040

p(x) = -7 + 1.925(x + 3) - 0.040(x + 3)(x - 1)

Simplifies to: -0.040x² + 1.845x – 1.105

Solution to Chapter Three Practice Exercises

Exercise 3b)



Figure 8.3 Exercise 3b



Solution to Chapter Four Practice Exercise

4a)

The Setup:
Abbreviated List of weekly Dow Jones closing averages:

Weekly Closing Averages

Week | Actual | x³ | x² | x | 1 | Interpolation

1 28,583.68 1.00 1.00 1.00 1 28,416.89149


2 28,939.67 8.00 4.00 2.00 1 28,034.20169
3 29,196.04 27.00 9.00 3.00 1 27,677.60694
4 28,722.85 64.00 16.00 4.00 1 27,346.5493

- - - - - - -
- - - - - - -

- - - - - - -
78 34,292.29 474,552.00 6,084.00 78.00 1 34,491.0287

79 34,577.37 493,039.00 6,241.00 79.00 1 34,485.14307


80 34,888.79 512,000.00 6,400.00 80.00 1 34,462.39157
81 34,511.99 531,441.00 6,561.00 81.00 1 34,422.21628

82 35,058.52 551,368.00 6,724.00 82.00 1 34,364.05925


83 35,084.53 571,787.00 6,889.00 83.00 1 34,287.36256



Figure 8.4 Matrix Solution 4a



Figure 8.5 Graph of Weekly DJIA



Solution to Chapter Five Practice Exercises

Step One:
1a) Find the difference between each actual value and its
associated value generated by the interpolative polynomial. Square
the result.
1b) Find the difference between each actual value and the Mean of
the actual values. Square the result.
Step Two:
2a) Sum the results from 1a
2b) Sum the results from 1b
Step Three:
Divide 2a by 2b subtracting the result from 1.
Answer:  

Solution to Chapter Six Practice Exercises

6a)

Select the function to be approximated: the cos function, centered at
x = 0.
Derivatives of cos:
f(x) = cos x, f'(x) = -sin x, f''(x) = -cos x, f'''(x) = sin x, f''''(x) = cos x, ...
Evaluated at x = 0 these give 1, 0, -1, 0, 1, 0, -1, 0, 1.


Plug derivatives into the general form of the Taylor polynomial:

Every other term has zero in the numerator, so we can drop these
and condense p(x). Further, since a = 0, we can simplify the binomials.

p(x) = 1/0! + (-1/2!)x² + (1/4!)x⁴ + (-1/6!)x⁶ + (1/8!)x⁸
f(x) is generated from an app precise to 15 decimal positions. p(x)
is the Taylor approximation for Cosine.

Figure 8.6 Speech Bubble



Taylor Approximation for Cosine

Degrees | Radians | f(x) | p(x)

0 | 0.000000 | 1.000000000000000 | 1.000000000000000
18 | 0.314159 | 0.951056516295154 | 0.951056516297732
22.5 | 0.392699 | 0.923879532511287 | 0.923879532535293
30 | 0.523599 | 0.866025403784439 | 0.866025404210352
45 | 0.785398 | 0.707106781186548 | 0.707106805683294
72 | 1.256637 | 0.309016994374947 | 0.309019668329804
90 | 1.570796 | 0.000000000000000 | 0.000000000000000



Solution to Chapter Seven Practice Exercise

7a)

r8(x) = [f^(9)(c) / 9!] x⁹ = [-sin(c) / 9!] x⁹, where c is
between a and x.

Solving the remainder twice for c = 0 and c = π/2
will provide a range of possible values:

For c = 0: r8(x) = [-sin(0) / 9!] x⁹ = 0

For c = π/2: r8(x) = [-sin(π/2) / 9!] x⁹ = -x⁹ / 9!

Drop the negative as it is a matter of distance, not direction.

This gives us an error possibility of at most x⁹ / 9!, which at x = π/2 is
about 0.00016.


ACKNOWLEDGEMENTS
Content Editor: Darius Rub, MS University at Buffalo
Content Editor: Charlene Cardinale, Math and Science educator
Cover Design: Denise J. Murphy-Rohr, Graphic Designer
Copy Editor: Christina Riehman-Murphy, Open & Affordable
Educational Resources Librarian, Penn State Libraries

About the Author

Figure 10.1 Stuart Murphy

Stuart Murphy spent a number of years working in the insurance


industry, managing and implementing health plans for commercial
and government entities. During this time, he also served as a
registered lobbyist.
Over the years Stu has taught middle, high school, and college
level math; as well as COBOL and Assembler. Stu currently teaches
middle school mathematics.
He and his wife Sharon have three children and eight
grandchildren. They make their home in Pennsylvania.



Versioning History
This page will provide a record of edits and changes made to this
book since its initial publication in July 2022. Whenever edits or
updates are made, we make the required changes in the text and
provide a record and description of those changes here. If the
change is minor, the version number increases by 0.1. However, if
the edits involve substantial updates, the version number goes up to
the next full number.
If you find an error in this book, please contact
smurph11@gmail.com or cer20@psu.edu. We will make the
necessary changes, and update this Versioning History page to
reflect the edits made.

Version | Date | Change Details

1.1 | July 2022 | Initial Publication

