01 Computational Methods For Numerical Analysis With R - 1
R Function 1.1
The Fibonacci sequence
reader to open the text and read functions. The small size is designed to
help example implementations fit in the head of the reader. As a result, error
handling and potential evaluation options are not included, unless convenient
or necessary. In some cases, smaller independent components were moved into
new functions. This provides additional flexibility to the implementation and
improves the readability of the functions.
The functions included in this book, and some others, are bundled into a
package called cmna, standing for Computational Methods for Numerical Anal-
ysis. This package provides the algorithms, implementations, documentation,
and testing. The package is available through standard distribution channels.
> library(cmna)
The call to library makes the package available for use. Despite bundling
these functions into a single package, the functions provided do not form
a unified whole. That is, the functions provide a baseline for understanding
the implementation and are not designed to provide a comprehensive solution for
numerical applications. Where possible, functions with similar applications are
given similar interfaces. This is not to say the functions were not designed.
But they are not designed to work together in any meaningful way.
Because of these design choices, it is also important to remember the func-
tions implemented here are not robust. That is, they do not perform basic
error handling and they are unlikely to fail gracefully when presented with
a problem. This is the tradeoff from the simple implementations given. They
explain the fundamental method from a numerical standpoint. Handling
unusual circumstances and errors should be a routine part of any program, but
because such handling is not included here, some implementations may not fail well.
As such, error handling is addressed only when necessary, such as when a function
could silently return an incorrect answer rather than simply terminate.
The functions in cmna are not robust and should not be used for actual ap-
plied numerical analysis. Applied numerical analysis, such as an architectural
model, relies on well-tested and well-understood numerical models that handle
errors. R includes robust and tested functions for numerical analysis. Because
the functions we develop are not robust enough for everyday use, when appro-
priate, these functions are described and demonstrated to show how to solve
actual numerical analysis problems in R. This allows you to move along and
actually use the demonstrated solution rather than fixing the cmna version to
meet more demanding requirements.
1.1.3 Efficiency
The first constraint we will look at is efficiency. Programmers generally work
very hard to make sure their computers do not have to. Every gain in effi-
ciency enables more interesting problems to be solved. Because programs for
numerical analysis are run so many times, increased efficiency decreases the
overall runtime, as well. Efficiency comprises two areas. The first is time com-
plexity, focusing on how long it takes for a program to execute. Programs that
require more steps for larger inputs are said to have higher time complexity.
The second is space complexity, measuring how much storage, even intermedi-
ate, is necessary to execute the program. Programs that require more memory
for larger inputs are said to have higher space complexity.[1] We are going to
use less formal methods focusing on the count of operations a function must
perform to complete and return a value. For instance, examine this loop:
> count <- 0
> m <- 10
> for(i in 1:m)
+ count <- count + 1
> count
[1] 10
This is a fancy way to perform a loop 10 times. The loop runs from 1 to m,
so its internals execute 10 times. To verify this, we increment the count
variable on each pass and print its final value, which is m. Of course, we can
also nest loops, and this increases the number of times the loop's internals,
the parts within the innermost loop, run:
> count <- 0
> m <- 10
> n <- 7
> for(i in 1:m)
+ for(j in 1:n)
+ count <- count + 1
> count
[1] 70
In this case, we have two loops. The outer loop runs from 1 to m and the
inner loop executes from 1 to n. The inner loop executes n times for each time
through the outer loop. Therefore, the loop’s innermost internals execute mn
times. These are straightforward examples. Slightly less straightforward is an
example where the outer loop executes from 1 to m. The inner loop executes
from 1 to i where i represents the iteration of the outer loop:
> count <- 0
> m <- 10
> for(i in 1:m)
+ for(j in 1:i)
+ count <- count + 1
> count
[1] 55
[1] For a textbook treatment, I recommend Introduction to Algorithms by Cormen et al.
(2009) or Algorithms by Sedgewick and Wayne (2011). For a less formal approach, an
Internet search for “time complexity,” “space complexity,” or “big-O notation” is sufficient.
isPrime <- function(n) {
    for(i in 2:sqrt(n))
        if(n %% i == 0)
            return(FALSE)
    return(TRUE)
}
R Function 1.2
Test an integer for primality
The inner loop's internals execute 1 time the first time through the outer loop,
2 times the second time, and so on up to m times the mth time through the
loop. Accordingly, the inner loop's internals execute k times, where
k = Σ_{i=1}^{m} i = m(m + 1)/2.
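We can check that closed form directly against the loop count; this is a small illustrative snippet, not from the text:

```r
# Count the inner-loop executions and compare to the closed form
m <- 10
count <- 0
for(i in 1:m)
    for(j in 1:i)
        count <- count + 1
count == m * (m + 1) / 2   # TRUE: both sides are 55
```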
Each of these grows at least as quickly as the input does. However, that
is not always the case for a function. Consider the function isPrime, listed
in Function 1.2. This function performs a basic primality test by testing the
input for division by a successive number of potential divisors.
When testing for primality, it is only necessary to test the integers from
2 to the last integer less than or equal to √n. This function still grows, but
at a much slower rate. Further, the relative rate of increase decreases as n
increases. Also, in the case of isPrime, √n is the maximum number of times
through the loop. If the number is not prime, an integer less than √n will
successfully divide n and the loop will terminate early.
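Collected into a runnable form with example calls (the calls are illustrative and not from the text):

```r
# Function 1.2, collected into a self-contained form
isPrime <- function(n) {
    for(i in 2:sqrt(n))
        if(n %% i == 0)
            return(FALSE)
    return(TRUE)
}

isPrime(101)   # TRUE: no integer from 2 to 10 divides 101
isPrime(100)   # FALSE: the loop terminates early at i = 2
```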
Of course, within a loop, it is possible for complex operations to be less
obvious. In this example,
> count <- 0
> m <- 10
> x <- 1:100
> y <- rep(1, 100)
> for(i in 1:m) {
+ y <- y * x
+ count <- count + 1
+ }
> count
[1] 10
the value of count is 10. However, many more multiplications happened within
the loop. The loop itself contains an element-wise multiplication, one
multiplication for each element of the vectors x and y. As each is 100 elements
long, this loop actually contains 1000 multiplications. Despite there being only
one for loop within the example as stated, it includes more operations than
the last example, with its explicitly nested loops.
Understanding what is happening inside a loop is critical to estimating
how long it will take to execute. If the loop calls other external functions,
each of those will have their operations to perform. Likely, these functions are
opaque to us, and we will not necessarily know what they are doing and how
they do it.
Despite these speed analyses, R is not a fast programming language. Its
internal operations are fast enough for most purposes. If speed were the most
important consideration, we would use C or Fortran. As compiled languages,
they execute as machine code directly on the hardware. R programs, like
Python, MATLAB®, and other interpreted programming languages, do not run
directly on the hardware, but are instead translated and executed by the
language’s runtime engine.
But we use languages like R, MATLAB®, or Python for other reasons. In
the case of R, the reasons might be access to libraries created by other
programmers, the statistical facilities, or its accessibility. And understanding
how to estimate the number of operations necessary to complete a task is
language-neutral. So if one implemented a C version of the prime tester in
Function 1.2, it would still require √n tests.
and aggregate data types. The first data type is for Boolean values. These
can take on two values, either TRUE or FALSE, and are intended to represent their
logical counterparts. These values, returned from some functions, are often
passed to functions to set a flag that affects the operation of the function.
Like many programming languages, R can treat these Boolean values like they
are numbers. The Boolean value of TRUE is converted to 1 and the Boolean
value of FALSE is converted to 0:
> TRUE == -1; TRUE == 0; TRUE == 1; TRUE == 3.14
[1] FALSE
[1] FALSE
[1] TRUE
[1] FALSE
> FALSE == -1; FALSE == 0; FALSE == 1; FALSE == 3.14
[1] FALSE
[1] TRUE
[1] FALSE
[1] FALSE
This is different from many programming languages, however. With R, all
values other than 0 (FALSE) and 1 (TRUE) are not equal to either Boolean
value. In addition, TRUE and FALSE are ordered such that TRUE is greater than
FALSE.
> TRUE < FALSE; TRUE > FALSE
[1] FALSE
[1] TRUE
In addition, these Boolean values can be operated on by many functions within
R. For instance, because TRUE can be treated as 1, you can use the sum function to
count the number of TRUE values in a vector. Other functions work as expected,
too. For instance, the following example creates a vector of five Boolean values
of which three are true. In addition to the function sum, the functions mean and
length return the expected values:
> x <- c(TRUE, FALSE, TRUE, FALSE, TRUE)
> sum(x); mean(x); length(x)
[1] 3
[1] 0.6
[1] 5
In a similar way, we can use this to count the number of entries in an array
that are equal to a certain number:
> x <- c(1, 2, 3, 4, 3, 1, 2, 3, 3, 4, 1, 3, 4, 3)
> sum(x == 3)
[1] 6
Or, extending the model, we can count how many entries are less than 3:
> sum(x < 3)
[1] 5
R provides two principal data types for storing numbers. The first of these is
the integer. In R, integers are simple integer data types, which do not
provide for any decimal component of a number. They are whole numbers and
are treated as such. There are no numbers in between these integers and the
distance between each integer is 1 unit. The mechanics and implications of this
are further explored in Section 2.2.1. While having no fractional component to
a number may seem like an unnecessary hindrance, the data type is available
for use when special circumstances encourage it. For instance, we might use an
integer to represent the year, as the fractional component can be represented
separately, as a month and day count, which are also integers.
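As a quick illustration, not from the text, an integer can be entered directly by appending the L suffix to a literal:

```r
# The L suffix marks a literal as integer rather than numeric
year <- 2024L
is.integer(year)    # TRUE
is.integer(2024)    # FALSE: an undecorated literal is numeric
```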
For those applications which require a fractional component to the number,
there is also the numeric data type. The numeric data type is the base data
type. It is the default data type when new numerical data is entered and most
computations return numeric types. It is stored as a floating point number
within the computer and the mechanics and implications of that are explored
in Section 2.2.2. Floating point numbers are used to represent any number
with a fractional component, such as a distance measurement or simply 1/2.
R provides built-in functions we can use to test the type of a variable
and to convert among data types. We sometimes call the conversion process
coercion. First, the function is.numeric will return true if the argument given
is numeric and the function is.integer will return true if the argument is an
integer. Both functions return false, otherwise:
> x <- 3.14
> is.numeric(x)
[1] TRUE
> is.integer(x)
[1] FALSE
The is.numeric and is.integer functions test the internal data storage
type, not the actual value, so testing 3 for integer status returns false, as a raw
number, when entered, is numeric by default:
> is.integer(3)
[1] FALSE
Similar test functions exist for Boolean and many other data types within
R. Conversion is almost equally simple. The functions as.numeric and
as.integer will convert a number to the named data type. At the most
extreme, these conversion features will extract numerical data from strings.
Suppose the variable x is a string containing "3.14" and we wish to treat
it as a number:
> x <- "3.14"
> as.numeric(x)
[1] 3.14
> as.integer(x)
[1] 3
The as.integer function converts any number to an integer. This is done by
truncating the fractional part, which is equivalent to rounding toward zero. If the
argument to the function is, for instance, 3.9, the conversion to an integer will
still yield 3. Data type conversion is not the best method for rounding
fractional numbers in R. There is a powerful round function that is designed to
address place selection and other aspects of rounding rules.
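The contrast is easy to see at the console; these calls are illustrative and not from the text:

```r
# Truncation with as.integer versus rounding with round
as.integer(3.9)     # 3: the fractional part is discarded
as.integer(-3.9)    # -3: truncation moves toward zero
round(3.9)          # 4: round to the nearest integer
round(3.14159, 2)   # 3.14: round to a chosen number of places
```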
Like the family of functions for data type testing, there is a family of
conversion functions that can convert among many different data types within
R. For instance, we can convert a numerical value to a string using the function
as.character, which converts a number to its decimal representation, suitable
for display.
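A small illustrative round trip, not from the text:

```r
# Round-tripping between a number and its string representation
as.character(3.14)   # "3.14"
as.numeric("3.14")   # 3.14
```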
Integer and numeric data can usually be freely mixed and matched. R will,
unless otherwise directed, convert the internal representations into whatever
form is most appropriate for the data being represented. This is almost always
going to be the numeric data type, and for almost any purpose, that is the
best option for us, as well. The numeric data type is versatile enough and
capable enough to handle our common data storage needs. In addition, as
R’s principal data type, any function we will want to use will understand it
implicitly.
In addition to these data types, R supports a number of nonnumerical
data types. Strings are shown in the last example. R supports two data types
called factors and ordered factors, which are simply label types. The labels can
be considered as categories which are explicitly unordered. The ordered data
types are factors with an inherent ordering. These are frequently used in R to
support statistical applications. R also supports a number of date formats and
other specialized aggregate types. Generally, these data types are not often
used in purely numerical problems, but may be the end result of an applied
problem.
$b
[1] 4
> (z2 <- list(s = "test", nine = 9))
$s
[1] "test"
$nine
[1] 9
In addition, lists are recursive. That is, we can embed one list within another,
whereas vectors concatenate.
[[1]]$b
[1] 4
[[2]]
[[2]]$s
[1] "test"
[[2]]$nine
[1] 9
We will use lists for some return values, when it is important to return a result
in more than one part. But otherwise, we will not use lists for mathematical
purposes.
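A small sketch, not from the text, of the contrast between nesting and concatenation, reusing the z1 and z2 lists from above:

```r
# Lists nest when embedded; vectors concatenate
z1 <- list(a = 3, b = 4)
z2 <- list(s = "test", nine = 9)
z <- list(z1, z2)       # a list holding two lists
length(z)               # 2: each sublist stays intact
length(c(1:2, 3:4))     # 4: the vectors flatten into one
```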
The second important aggregate data type is the matrix. Matrices are the
heart of linear algebra, and because linear algebra underlies much of numerical
analysis, as we shall see, the matrix is the core data type we will use. Inside
of R and other numerical analysis platforms, very fast linear algebra routines
accomplish many of the calculations we rely on.
There are several ways to create a matrix in R. The first we will look at is
the matrix function.
> (A <- matrix(1:12, 3, 4))
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
The matrix function takes three principal options. The first is a list of data
elements to populate the matrix. The second is the number of rows and the
third is the number of columns. The number of elements should equal the
number of rows times the number of columns. If the number of columns is
omitted, the matrix function will assume a sensible value, if possible, such as
the total number of elements divided by the number of rows given.
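For instance, omitting the column count lets matrix infer it; this example is illustrative and not from the text:

```r
# When the column count is omitted, matrix() infers it
A <- matrix(1:12, 3)   # 12 elements in 3 rows implies 4 columns
dim(A)                 # 3 4
```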
As shown in the display of matrix A above, each row and column is numbered.
We can address individual rows and columns of a matrix using those
numbers. We can also address individual elements by specifying both the row
and column. The syntax is similar to that for a vector:
> A[2,3]
[1] 8
> A[2,]
[1] 2 5 8 11
> A[,3]
[1] 7 8 9
Note the specification for a row includes a number, a comma, and nothing else. In
contrast, a column specification begins with a comma, followed by the column
number. In each case, there is an empty slot on one side of the comma, and the
returned type for both is a vector. This syntax also matches the column and
row labels shown in the displayed matrix A. Finally, specifying both the row
and column yields a single element.
In addition to assembling a matrix using the matrix function, it is possible
to create a matrix from vectors. Two functions, rbind and cbind will append
vectors as rows and columns, respectively, to existing matrices. If the existing
matrix is instead another vector, the two vectors shall become a matrix:
> x1 <- 1:3
> x2 <- 4:6
> cbind(x1, x2)
x1 x2
[1,] 1 4
[2,] 2 5
[3,] 3 6
> rbind(x1, x2)
[,1] [,2] [,3]
x1 1 2 3
x2 4 5 6
Note in the cbind example, each column is named, and in the rbind example,
each row is named. In both cases, each is named after the source vector.
However, all can be addressed through numerical addressing.
Another important operation on matrices is transposition. Transposition
mirrors a matrix across the main diagonal. To transpose a matrix in R, use
the t function. Using the example of A above:
> (A)
[,1] [,2] [,3] [,4]
[1,] 1 4 7 10
[2,] 2 5 8 11
[3,] 3 6 9 12
> t(A)
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
[3,] 7 8 9
[4,] 10 11 12
Of course, we can use the t function on the output to return the original
matrix.
The R function t will also transpose a vector. That is, it will convert a
“horizontal” vector into a “vertical” vector and vice versa. However, the method
is not necessarily intuitive, because a vertical vector is really stored as a single-
column matrix. Using t on a vector will first return a matrix with 1 row
and a column for each element of the vector. A second call to the t function
will then transpose the new matrix into a vertical vector. Using the example
variable x, from above:
> t(x)
[,1] [,2] [,3] [,4]
[1,] 1 0 1 0
> t(t(x))
[,1]
[1,] 1
[2,] 0
[3,] 1
[4,] 0
As may be clear from these examples, vectors and matrices in R are
closely related. In linear algebra, the vector is silently treated as a single-
dimensional matrix, in either orientation. This simplifies the rules for multi-
plying matrices by vectors and is suitable for most applications. However,
in R, a vector is not a single-dimensional matrix. A vector is a distinct object
that can be coerced into a single-dimensional matrix. Frequently, however, a
vector can be treated as a single-dimensional matrix, provided element ad-
dressing is not used. If element addressing is being used on a vector, it can be
advantageous to convert it to a single-dimensional matrix for clarity.
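One way to make that coercion explicit is the as.matrix function; this sketch is illustrative and not from the text:

```r
# Coercing a vector into an explicit single-column matrix
v <- 1:3
m <- as.matrix(v)
dim(v)     # NULL: a plain vector carries no dimensions
dim(m)     # 3 1: three rows, one column
m[2, 1]    # 2: element addressing now uses row and column
```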
In addition to vectors and matrices, there is another important aggregate
data type in R, the data frame. Usually, a matrix is expected to have the
same data type for each element. In our examples, every element has been
numeric. A data frame will usually have many different data types, such as
Boolean objects, timestamps, or even strings. Though implemented internally
as specialized lists, data frames are two-dimensional objects, like a matrix,
without the expectation of homogeneous data types.
Like lists, we may use data frames to store return values conveniently,
but will not often use them otherwise. Data frames can be created with the
data.frame function, which works similarly to the matrix function. In addi-
tion, the cbind and rbind functions will work with data frames in the expected
way.
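A brief sketch, not from the text, showing mixed column types and row appending:

```r
# A data frame holds columns of differing types
df <- data.frame(name = c("a", "b"),
                 value = c(1.5, 2.5),
                 keep = c(TRUE, FALSE))
dim(df)                                  # 2 3, two rows by three columns
df2 <- rbind(df, list("c", 3.5, TRUE))   # append a row, as with matrices
nrow(df2)                                # 3
```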
Finally, aggregate data structures present a different question relating to
missing data. Data could be missing for a variety of reasons, such as an ob-
servation was not possible or was lost. In some cases, data may be removed
intentionally for different purposes. In the simplest case, we can simply declare
the data was not available.
R provides a special value, NA, that represents missing data. Testing the
value NA against any other value, including itself, yields a result of NA. Because
of this propagating NA, R provides a function to test for NA values, the is.na
function.
> NA == NA
[1] NA
> NA == 1
[1] NA
> natest <- c(1, 2, NA, 4, 5)
> is.na(natest)
[1] FALSE FALSE TRUE FALSE FALSE
We might reasonably infer the missing value is 3, simply by guessing. How-
ever, R does not know what value belongs there and uses the missing value
placeholder.
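Many functions accept an na.rm argument to skip missing values before computing; a small sketch, not from the text:

```r
# NA propagates through computations unless explicitly removed
natest <- c(1, 2, NA, 4, 5)
sum(natest)                  # NA: the missing value propagates
sum(natest, na.rm = TRUE)    # 12: NA entries are dropped first
natest[!is.na(natest)]       # 1 2 4 5
```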
Using vectors and matrices, R provides all of the tools necessary to im-
plement linear algebra. With linear algebra, R can also provide a full suite
for numerical analysis. As we move forward, we will introduce more concepts
about vectors and matrices as we need them. In particular, Section 3.1 will
provide much more information about working with vectors and matrices,
including basic linear algebra operations.
naivesum <- function(x) {
    s <- 0
    n <- length(x)
    for(i in 1:n)
        s <- s + x[i]
    return(s)
}
R Function 1.3
The naïve approach to summing a vector
Figure 1.1
Assembly of partial sums in pairwise addition: x1 and x2 combine into
p1 = x1 + x2, x3 and x4 combine into p2 = x3 + x4, and the final sum
is p1 + p2.
pwisesum <- function(x) {
    n <- length(x)
    if(n == 1)
        return(x)
    m <- floor(n / 2)
    return(pwisesum(x[1:m]) + pwisesum(x[(m + 1):n]))
}
R Function 1.4
The pairwise summation algorithm
added to the result of another call to the pairwise function, creating a partial
sum. The partial sums filter up through the tree of recursive function calls
and the final result is the total of all these partial sums, each added as pairs.
This is shown in Figure 1.1.
We call this the pairwise summation algorithm. This function works because
of the associative property of addition and the relative simplicity of the
overall operation. A complete implementation of the pairwise summation algorithm
is presented in Function 1.4. From the user’s viewpoint, the pwisesum
function acts just like the naivesum function. Assuming the value of x is
unchanged:
> pwisesum(x)
[1] 4.5
The actual implementation given in Function 1.4 makes some allowances for
potential real-world implementation problems. Most importantly, it is unlikely
the number of elements in the vector x will be a power of 2. As a result,
the division of labor between the two recursive calls uses the floor function
to establish the integers below and above the one-half demarcation point for
finding the split. In practice, this has no effect on the implementation or the
execution.
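Collected into a self-contained form, we can confirm the floor-based split handles lengths that are not powers of 2 (the calls are illustrative and not from the text):

```r
# Function 1.4 collected into a self-contained form
pwisesum <- function(x) {
    n <- length(x)
    if(n == 1)
        return(x)
    m <- floor(n / 2)
    return(pwisesum(x[1:m]) + pwisesum(x[(m + 1):n]))
}

pwisesum(1:10)   # 55: the split handles a length that is not a power of 2
```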
kahansum <- function(x) {
    comp <- s <- 0
    n <- length(x)
    for(i in 1:n) {
        y <- x[i] - comp
        t <- s + y
        comp <- (t - s) - y
        s <- t
    }
    return(s)
}
R Function 1.5
The Kahan summation algorithm
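Collected into a self-contained, commented form with example calls (the calls are illustrative and not from the text):

```r
# Function 1.5 collected into a self-contained, commented form
kahansum <- function(x) {
    comp <- s <- 0
    for(i in 1:length(x)) {
        y <- x[i] - comp      # correct the next addend
        t <- s + y            # add it into the running sum
        comp <- (t - s) - y   # recover the low-order error
        s <- t
    }
    return(s)
}

kahansum(1:100)          # 5050
kahansum(rep(0.1, 10))   # compensates the rounding error of summing 0.1 ten times
```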
return(y)
}
R Function 1.6
The algebraic approach to evaluating polynomials
We will represent this polynomial as a vector and use it for examples in this
subsection:
> f <- c(30, -19, -15, 3, 1)
In the naïve approach, we will evaluate the polynomial in roughly the same
manner as we would in a high school algebra class. We will substitute the value
of x into the function, exponentiate it as appropriate, and then multiply by
the coefficient. Each term is then summed and the value is returned. The naïve
approach is implemented in R in the function naivepoly, which is given
as Function 1.6.
The implementation opens by creating a vector, y, with the same length
as the length of the input value, x, and setting each element to 0. This allows
the naivepoly function to accept and return evaluations of a polynomial at
a vector of x locations.
Then the function loops over each element of the coefficient vector, from
return(y)
}
R Function 1.7
A better algebraic approach to polynomial evaluation
the first to the nth, in that order. For each time through the loop, which is
the variable i in the function, the algorithm executes a single addition and i
multiplications. The i multiplications are composed of i - 1 multiplications
in the exponentiation and one more for the coefficient. For a polynomial of
degree n, that is n additions and Σ_{i=1}^{n} i = (n^2 + n)/2 multiplications. The
number of multiplications grows at the rate of the square of the degree of the
polynomial.
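Since only the tail of Function 1.6 survives above, here is a sketch of naivepoly consistent with this description; it is an assumed reconstruction, not the book's exact listing:

```r
# Naive polynomial evaluation: coefs[i] multiplies x^(i - 1)
naivepoly <- function(x, coefs) {
    y <- rep(0, length(x))          # one result per point in x
    for(i in 1:length(coefs))
        y <- y + coefs[i] * x^(i - 1)
    return(y)
}

naivepoly(c(-1, 0, 1), c(30, -19, -15, 3, 1))   # 32 30 0
```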
For the function f(x) constructed before, evaluation at a point x reveals
its simple use. Consider evaluating f(-1), f(0), and f(1) using the naivepoly
function:
> x <- c(-1, 0, 1)
> naivepoly(x, f)
[1] 32 30 0
We can increase the efficiency of polynomial evaluation. For instance, each
time through the loop, we find the exponentiation of x. But for each coefficient,
a_i, the associated value of the exponentiation, x^i, is the product series,

x^i = Π_{j=1}^{i} x. (1.5)

For x^i, there are i instances of x multiplied together. But for x^(i-1), there is
one fewer x, and so on down to x^0 = 1. We can use this by caching the value
of each multiplication and storing it for the next iteration.
Function 1.7, betterpoly, implements this. It uses a variable cached.x to
keep the running value of the product series available for the next iteration of
the loop. Compared to the function naivepoly, there are two multiplications
and a single addition for every time through the loop. For a polynomial of
degree n, there are 2n multiplications.
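Again only the tail of Function 1.7 survives above; here is a sketch of betterpoly consistent with the description, an assumed reconstruction rather than the book's exact listing:

```r
# Caching x^i across iterations: two multiplications per loop pass
betterpoly <- function(x, coefs) {
    y <- rep(0, length(x))
    cached.x <- 1                  # holds x^(i - 1) at step i
    for(i in 1:length(coefs)) {
        y <- y + coefs[i] * cached.x
        cached.x <- cached.x * x   # one more factor of x for the next pass
    }
    return(y)
}

betterpoly(c(-1, 0, 1), c(30, -19, -15, 3, 1))   # 32 30 0
```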
This is a substantial improvement over naivepoly. For a polynomial of
Introduction to Numerical Analysis 23
return ( y )
}
R Function 1.8
Horner’s rule for polynomial evaluation
[1] 32 30 0
Like the implementation of betterpoly, horner includes an unnecessary step.
On the first iteration of the loop, x is multiplied by the partially complete
sum, which had been initialized to 0. But similar to other cases, the extra step
is permitted here to simplify the overall implementation. In addition, if a con-
ditional if-then test were included, it would be executed on every iteration of
the loop, likely at greater computational expense than the single unnecessary
multiplication.
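Only the tail of Function 1.8 appears above; here is a sketch of Horner's rule consistent with this discussion, an assumed reconstruction rather than the book's exact listing:

```r
# Horner's rule: n multiplications and n additions for degree n
horner <- function(x, coefs) {
    y <- rep(0, length(x))
    for(i in length(coefs):1)
        y <- y * x + coefs[i]   # fold in one coefficient per pass
    return(y)
}

horner(c(-1, 0, 1), c(30, -19, -15, 3, 1))   # 32 30 0
```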
These types of code simplification steps usually require some analysis like
this to determine whether they improve the program’s execution and read-
ability. In some cases, the gains in readability may be far outweighed by the
speed gains available from reducing simplicity. Many factors go into
these decisions, including future code reuse and the business requirements of
the operating environment.
There is no built-in function or function set that provides polynomial eval-
uation. However, the polynom package includes functions for polynomials and
polynomial division. The polynom package includes dedicated objects for stor-
ing polynomials, including specialized access functions. The functions included
here do not create a class for managing internal data. Accordingly, the func-
tions in the polynom package are more robust than the implementation pro-
vided here.
return(x)
}
R Function 1.9
The nth root algorithm
> 65536^(1/4)
[1] 16
> 1000^(1/3)
[1] 10
> pi^(.5)
[1] 1.772454
Having this operation available as a primitive operator provides a substantial
benefit over the tolerance-limited solution provided by the nth root algorithm.
Comments
From here, we have established some basic information and introduced numer-
ical analysis. We have distinguished what we do in numerical analysis from
symbolic computing, the realm of both computer algebra systems and pen and
paper. And we have also introduced the basic data types and structure that
provide R with the power and elegance necessary to do numerical analysis.
With a basic toolkit, we finally took a look at a number of algorithms to solve
common problems in mathematics.
Using these tools, we are going to explore a number of problems that arise
in applied mathematics across fields. Before we get to more algorithms, we
need to talk about error and what it means for the computer to be wrong.
The computer can be wrong for a lot of reasons, and some of them we can
limit. In other cases, there is little we can do other than recognize the problem
and brace ourselves.
The algorithms for division and summation range from straightforward
to very complex, and for good reason. Each has a place and an application.
Sometimes an algorithm for a numerical problem will be very fast, but only
apply in a subset of cases. Or it may produce a result that is not as accurate.
Numerical analysis then moves from mathematics to an art, as we select the
algorithms and tools necessary to meet our needs for the least amount of
computing effort. Sometimes meeting this requirement, however, requires more
human and programming effort.
For instance, review the isPrime function in Function 1.2. In this function,
given a number n, we test every integer between 2 and √n, inclusive, in the
search for divisors. However, this process is painfully more intense than need
be. For instance, after only a moment of thinking, we realize that if 2 is not a
divisor of n, then no multiple of 2 will be a divisor of n, either. Accordingly, we
should probably not check any even number except 2. And while that alone
would halve the number of tests to complete the evaluation, it is only a partial
help. After all, the same logic extends that if 3 is not a divisor, neither is any
multiple of 3, and so on with 5, and 7, and beyond. A better algorithm than presented
would remove unnecessary and repetitive evaluations.
These sorts of analyses are the core of numerical analysis. We want to find
the best possible algorithm to suit the task at hand, but the task goes beyond
the problem as stated. It includes how fast a solution must be calculated, how
accurate it must be, and how much time we have to implement a solution. All
of these come together as our constraints when solving numerical problems
and each has a different aspect. Project managers often say you can have a
solution fast, cheap, or correct, provided you pick only two. The same sort of
constraints play out here.
The rest of this text will develop an understanding of error analysis, and
the meaning of both precision and accuracy. Then we will develop a set of
tools for performing tasks in linear algebra, followed by a suite of tools for
interpolation, of various types. Then we will move into calculus and discuss
algorithms for both differentiation and integration, optimization, and finally
a review of methods for differential equations.
Exercises
1. What is the circumference of a 10-meter-diameter circle symboli-
cally? What is that to 3 decimal places? Which one is correct?
2. Describe an application for which a numerical solution were prefer-
able to a symbolic solution. Describe an application where symbolic
solutions are preferable to numerical.
3. If an unsigned variable were stored using 16 bits, what is the largest
number it could hold?
4. Using R’s internal trigonometric functions, implement a secant func-
tion.
5. Using the exponentiation operator, create a function that finds the
nth root of any number.
6. Function 1.1 provides a recursive implementation of the Fibonacci
sequence. Rewrite the function to provide a result without recursion.
What are the relative strengths and weaknesses of each implemen-
tation?
7. Implement a product function in R without using recursion.
8. Implement a product function in R using recursion.
9. Find the Horner polynomial expansion of the Fibonacci polynomial,
F6(x) = x^5 + 4x^3 + 3x.
10. Reimplement Function 1.7 to eliminate the calculation of cached.x
on the final iteration of the loop.