Getting Started
To run the labs in this book, you will need two things:
• An installation of Python 3, which is the specific version of Python used in the labs.
• Access to Jupyter, a very popular Python interface that runs code through a file called
a notebook.
You can download and install Python 3 by following the instructions available at anaconda.com.
There are a number of ways to get access to Jupyter.
Please see the Python resources page on the book website statlearning.com for up-to-date
information about getting Python and Jupyter working on your computer.
You will need to install the ISLP package, which provides access to the datasets and custom-built functions that we provide. Inside a macOS or Linux terminal type pip install ISLP; this also installs most other packages needed in the labs. The Python resources page has a link to the ISLP documentation website.
To run this lab, download the file Ch2-statlearn-lab.ipynb from the Python resources page. Now run the following code at the command line: jupyter lab Ch2-statlearn-lab.ipynb.
If you're using Windows, you can use the start menu to access Anaconda, and follow the links. For example, to install ISLP and run this lab, you can run the same code above in an Anaconda shell.
Basic Commands
In this lab, we will introduce some simple Python commands. For more resources about
Python in general, readers may want to consult the tutorial at docs.python.org/3/tutorial/.
Like most programming languages, Python uses functions to perform operations. To run a
function called fun, we type fun(input1,input2), where the inputs (or arguments) input1
and input2 tell Python how to run the function. A function can have any number of inputs. For
example, the print() function outputs a text representation of all of its arguments to the
console.
The following command will provide information about the print() function.
print?
Signature: print(*args, sep=' ', end='\n', file=None, flush=False)
Docstring:
Prints the values to a stream, or to sys.stdout by default.

sep
  string inserted between values, default a space.
end
  string appended after the last value, default a newline.
file
  a file-like object (stream); defaults to the current sys.stdout.
flush
  whether to forcibly flush the stream.
Type: builtin_function_or_method
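For instance, the sep and end arguments described above can be passed as keywords; a minimal sketch:

```python
# Default behavior: values separated by a space, followed by a newline.
print(3, 4, 5)

# Custom separator and line ending via the sep and end keyword arguments.
print(3, 4, 5, sep=', ', end='.\n')   # prints "3, 4, 5."
```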
Python can also be used to perform basic arithmetic:
3 + 5
8
In Python, textual data is handled using strings. For instance, "hello" and 'hello' are
strings. We can concatenate them using the addition + symbol.
"hello" + " " + "world"
'hello world'
A string is actually a type of sequence: this is a generic term for an ordered list. The three most
important types of sequences are lists, tuples, and strings.
We introduce lists now.
The following command instructs Python to join together the numbers 3, 4, and 5, and to save
them as a list named x. When we type x, it gives us back the list.
x = [3, 4, 5]
x
[3, 4, 5]
Note that we used the brackets [] to construct this list.
We will often want to add two sets of numbers together. It is reasonable to try the following
code, though it will not produce the desired results.
y = [4, 9, 7]
x + y
[3, 4, 5, 4, 9, 7]
The result may appear slightly counterintuitive: why did Python not add the entries of the lists
element-by-element? In Python, lists hold arbitrary objects, and are added using
concatenation. In fact, concatenation is the behavior that we saw earlier when we entered
"hello" + " " + "world".
This example reflects the fact that Python is a general-purpose programming language. Much
of Python's data-specific functionality comes from other packages, notably numpy and
pandas. In the next section, we will introduce the numpy package. See
docs.scipy.org/doc/numpy/user/quickstart.html for more information about numpy.
import numpy as np
In the previous line, we named the numpy module np, an abbreviation for easier referencing.
In numpy, an array is a generic term for a multidimensional set of numbers. We use the
np.array() function to define x and y, which are one-dimensional arrays, i.e. vectors.
x = np.array([3, 4, 5])
y = np.array([4, 9, 7])
Note that if you forgot to run the import numpy as np command earlier, then you will
encounter an error in calling the np.array() function in the previous line. The syntax
np.array() indicates that the function being called is part of the numpy package, which we
have abbreviated as np.
Since x and y have been defined using np.array(), we get a sensible result when we add them
together. Compare this to our results in the previous section, when we tried to add two lists
without using numpy.
x + y
array([ 7, 13, 12])
In numpy, matrices are typically represented as two-dimensional arrays, and vectors as one-
dimensional arrays. {While it is also possible to create matrices using np.matrix(), we will
use np.array() throughout the labs in this book.} We can create a two-dimensional array as
follows.
x = np.array([[1, 2], [3, 4]])
x
array([[1, 2],
       [3, 4]])
The object x has several attributes, or associated objects. To access an attribute of x, we type
x.attribute, where we replace attribute with the name of the attribute. For instance, we
can access the ndim attribute of x as follows.
x.ndim
2
x.dtype
dtype('int64')
The entries of x are integers, so the array has dtype int64. If we instead include a decimal point in one of the entries, the array will store floating point numbers:
np.array([[1, 2], [3.0, 4]]).dtype
dtype('float64')
Typing fun? will cause Python to display documentation associated with the function fun, if it
exists. We can try this for np.array().
np.array?
Docstring:
array(object, dtype=None, *, copy=True, order='K', subok=False,
      ndmin=0, like=None)

Create an array.

Parameters
----------
object : array_like
    An array, any object exposing the array interface, an object whose
    ``__array__`` method returns an array, or any (nested) sequence.
    If object is a scalar, a 0-dimensional array containing object is
    returned.
dtype : data-type, optional
    The desired data-type for the array. If not given, NumPy will try to
    use a default ``dtype`` that can represent the values (by applying
    promotion rules when necessary.)
copy : bool, optional
    If true (default), then the object is copied. Otherwise, a copy will
    only be made if ``__array__`` returns a copy, if obj is a nested
    sequence, or if a copy is needed to satisfy any of the other
    requirements (``dtype``, ``order``, etc.).
order : {'K', 'A', 'C', 'F'}, optional
    Specify the memory layout of the array. If object is not an array, the
    newly created array will be in C order (row major) unless 'F' is
    specified, in which case it will be in Fortran order (column major).
    If object is an array the following holds.

    ===== ========= ===================================================
    order no copy   copy=True
    ===== ========= ===================================================
    'K'   unchanged F & C order preserved, otherwise most similar order
    'A'   unchanged F order if input is F and not C, otherwise C order
    'C'   C order   C order
    'F'   F order   F order
    ===== ========= ===================================================

Returns
-------
out : ndarray
    An array object satisfying the specified requirements.

See Also
--------
empty_like : Return an empty array with shape and type of input.
ones_like : Return an array of ones with shape and type of input.
zeros_like : Return an array of zeros with shape and type of input.
full_like : Return a new array with shape of input filled with value.
empty : Return a new uninitialized array.
ones : Return a new array setting values to one.
zeros : Return a new array setting values to zero.
full : Return a new array of given shape filled with value.

Notes
-----
When order is 'A' and ``object`` is an array in neither 'C' nor 'F' order,
and a copy is forced by a change in dtype, then the order of the result is
not necessarily 'C' as expected. This is likely a bug.

Examples
--------
>>> np.array([1, 2, 3])
array([1, 2, 3])

Upcasting:

>>> np.array([1, 2, 3.0])
array([ 1.,  2.,  3.])

Minimum dimensions 2:

>>> np.array([1, 2, 3], ndmin=2)
array([[1, 2, 3]])

Type provided:

>>> np.array([1, 2, 3], dtype=complex)
array([ 1.+0.j,  2.+0.j,  3.+0.j])

>>> x = np.array([(1,2),(3,4)],dtype=[('a','<i4'),('b','<i4')])
>>> x['a']
array([1, 3])
This documentation indicates that we could create a floating point array by passing a dtype
argument into np.array().
np.array([3, 4, 5], dtype=float).dtype
dtype('float64')
The array x is two-dimensional. We can find out the number of rows and columns by looking at
its shape attribute.
x.shape
(2, 2)
A method is a function that is associated with an object. For instance, given an array x, the
expression x.sum() sums all of its elements, using the sum() method for arrays. The call
x.sum() automatically provides x as the first argument to its sum() method.
x = np.array([1, 2, 3, 4])
x.sum()
10
We could also sum the elements of x by passing in x as an argument to the np.sum() function.
x = np.array([1, 2, 3, 4])
np.sum(x)
10
As another example, the reshape() method returns a new array with the same elements as x,
but a different shape. We do this by passing in a tuple in our call to reshape(), in this case
(2, 3). This tuple specifies that we would like to create a two-dimensional array with 2 rows
and 3 columns. {Like lists, tuples represent a sequence of objects. Why do we need more than
one way to create a sequence? There are a few differences between tuples and lists, but perhaps
the most important is that elements of a tuple cannot be modified, whereas elements of a list
can be.}
x = np.array([1, 2, 3, 4, 5, 6])
print('beginning x:\n', x)
x_reshape = x.reshape((2, 3))
print('reshaped x:\n', x_reshape)
beginning x:
[1 2 3 4 5 6]
reshaped x:
[[1 2 3]
[4 5 6]]
The previous output reveals that numpy arrays are specified as a sequence of rows. This is called
row-major ordering, as opposed to column-major ordering.
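The reshape() method also accepts an order argument that makes this distinction explicit: order='C' (the default) fills the array row by row, while order='F' fills it column by column. A small sketch:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])

# Default 'C' (row-major) order fills each row in turn.
row_major = x.reshape((2, 3))             # [[1 2 3], [4 5 6]]

# Fortran-style 'F' (column-major) order fills each column in turn.
col_major = x.reshape((2, 3), order='F')  # [[1 3 5], [2 4 6]]

print(row_major)
print(col_major)
```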
Python (and hence numpy) uses 0-based indexing. This means that to access the top left
element of x_reshape, we type in x_reshape[0,0].
x_reshape[0, 0]
1
Similarly, x_reshape[1,2] yields the element in the second row and the third column of
x_reshape.
x_reshape[1, 2]
6
Similarly, x[2] yields the third entry of x.
Now, let's modify the top left element of x_reshape. To our surprise, we discover that the first element of x has been modified as well!
x_reshape[0, 0] = 5
print('reshaped x:\n', x_reshape)
print('x after modifying x_reshape:\n', x)
reshaped x:
[[5 2 3]
[4 5 6]]
x after modifying x_reshape:
[5 2 3 4 5 6]
Modifying x_reshape also modified x because the two objects occupy the same space in memory.
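When this aliasing is not desired, the copy() method produces an array with its own memory; a minimal sketch:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])

# copy() breaks the link between the reshaped array and x.
x_reshape = x.reshape((2, 3)).copy()

x_reshape[0, 0] = 5
print(x)          # x is unchanged: [1 2 3 4 5 6]
print(x_reshape)  # only the copy was modified
```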
We just saw that we can modify an element of an array. Can we also modify a tuple? It turns out
that we cannot --- and trying to do so introduces an exception, or error.
my_tuple = (3, 4, 5)
my_tuple[0] = 2
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[23], line 2
      1 my_tuple = (3, 4, 5)
----> 2 my_tuple[0] = 2

TypeError: 'tuple' object does not support item assignment
We now briefly mention some attributes of arrays that will come in handy. An array's shape
attribute contains its dimension; this is always a tuple. The ndim attribute yields the number of
dimensions, and T provides its transpose.
x_reshape.shape, x_reshape.ndim, x_reshape.T
((2, 3),
 2,
 array([[5, 4],
        [2, 5],
        [3, 6]]))
Notice that the three individual outputs (2,3), 2, and array([[5, 4],[2, 5], [3,6]])
are themselves output as a tuple.
We will often want to apply functions to arrays. For instance, we can compute the square root of
the entries using the np.sqrt() function:
np.sqrt(x)
array([2.23606798, 1.41421356, 1.73205081, 2.        , 2.23606798,
       2.44948975])
We can also square the elements of x:
x**2
array([25,  4,  9, 16, 25, 36])
We can compute the square roots using the same notation, raising to the power of 1/2 instead
of 2.
x**0.5
Throughout this book, we will often want to generate random data. The
np.random.normal() function generates a vector of random normal variables. We can learn
more about this function by looking at the help page, via a call to np.random.normal?. The
first line of the help page reads normal(loc=0.0, scale=1.0, size=None). This
signature line tells us that the function's arguments are loc, scale, and size. These are
keyword arguments, which means that when they are passed into the function, they can be
referred to by name (in any order). {Python also uses positional arguments. Positional
arguments do not need to use a keyword. To see an example, type in np.sum?. We see that a is
a positional argument, i.e. this function assumes that the first unnamed argument that it
receives is the array to be summed. By contrast, axis and dtype are keyword arguments: the
position in which these arguments are entered into np.sum() does not matter.} By default, this
function will generate random normal variable(s) with mean (loc) 0 and standard deviation
(scale) 1; furthermore, a single random variable will be generated unless the argument to
size is changed.
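Because these are keyword arguments, the same call can be written with the keywords in any order; a small sketch (using np.random.seed() so that the two calls draw identical values):

```python
import numpy as np

np.random.seed(0)
a = np.random.normal(loc=2, scale=3, size=4)

np.random.seed(0)
b = np.random.normal(size=4, scale=3, loc=2)  # same keywords, different order

print(np.allclose(a, b))  # True: keyword order does not matter
```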
x = np.random.normal(size=50)
x
array([-0.18962723,  1.20207255, -0.86478613,  0.50429243,  0.55645321,
        1.26167047,  0.31616865,  0.52368971,  1.80357136, -1.01148694,
       -0.52485165, -0.8346806 ,  0.83707342, -0.15457485,  0.53172306,
        0.79628956,  0.33759005,  0.76513575,  0.87745849, -0.91486334,
        0.39750749,  0.32639706,  1.05524983,  0.59909781, -0.13165899,
        2.4276038 ,  0.28324326,  0.48436309,  0.65927241,  0.8603737 ,
        1.37713031, -1.11218537, -0.82855518, -1.61992056,  0.45101216,
        0.40015777,  0.13371874, -0.06770864,  0.69602905, -0.62063845,
        0.50548887,  0.08892549, -0.12490822,  0.53680805, -0.55994584,
        0.5143117 , -1.40201733,  2.25473466,  0.03510414, -1.62086595])
The np.corrcoef() function computes the correlation matrix between x and y. Below we first create a new vector y of the same length as x, correlated with x; the off-diagonal elements of the resulting matrix give the correlation between x and y.
y = x + np.random.normal(loc=50, scale=1, size=50)
np.corrcoef(x, y)
array([[1. , 0.55079323],
[0.55079323, 1. ]])
If you're following along in your own Jupyter notebook, then you probably noticed that you got
a different set of results when you ran the past few commands. In particular, each time we call
np.random.normal(), we will get a different answer, as shown in the following example.
print(np.random.normal(scale=5, size=2))
print(np.random.normal(scale=5, size=2))
[ 3.57580813 -3.47300499]
[7.69817267 1.00727028]
In order to ensure that our code provides exactly the same results each time it is run, we can set
a random seed using the np.random.default_rng() function. This function takes an
arbitrary, user-specified integer argument. If we set a random seed before generating random
data, then re-running our code will yield the same results. The object rng has essentially all the
random number generating methods found in np.random. Hence, to generate normal data we
use rng.normal().
rng = np.random.default_rng(1303)
print(rng.normal(scale=5, size=2))
rng2 = np.random.default_rng(1303)
print(rng2.normal(scale=5, size=2))
[ 4.09482632 -1.07485605]
[ 4.09482632 -1.07485605]
The np.mean(), np.var(), and np.std() functions can be used to compute the mean,
variance, and standard deviation of arrays. These functions are also available as methods on the
arrays.
rng = np.random.default_rng(3)
y = rng.standard_normal(10)
np.mean(y), y.mean()
(-0.1126795190952861, -0.1126795190952861)
Notice that by default np.var() divides by the sample size n rather than n − 1; see the ddof argument in np.var?.
np.sqrt(np.var(y)), np.std(y)
(1.6505576756498128, 1.6505576756498128)
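A quick sketch of the ddof argument on a toy vector (the values here are invented for illustration):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])  # invented toy data; mean is 2.5

# Default ddof=0: divide the sum of squared deviations by n.
print(np.var(y))           # 1.25

# ddof=1: divide by n - 1, giving the usual unbiased sample variance.
print(np.var(y, ddof=1))   # 1.666...
```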
The np.mean(), np.var(), and np.std() functions can also be applied to the rows and columns of a matrix. To see this, we construct a 10 × 3 matrix of N(0, 1) random variables, and consider computing its row sums.
X = rng.standard_normal((10, 3))
X
Since arrays are row-major ordered, the first axis, i.e. axis=0, refers to its rows. We pass this
argument into the mean() method for the object X.
X.mean(axis=0)
X.mean(0)
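Similarly, axis=1 averages within each row. A small sketch on a matrix whose means are easy to check by hand:

```python
import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6]])

# axis=0 averages down the rows, producing one mean per column.
print(X.mean(axis=0))  # [2.5 3.5 4.5]

# axis=1 averages across the columns, producing one mean per row.
print(X.mean(axis=1))  # [2. 5.]
```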
Graphics
In Python, common practice is to use the library matplotlib for graphics. However, since
Python was not written with data analysis in mind, the notion of plotting is not intrinsic to the
language. We will use the subplots() function from matplotlib.pyplot to create a figure
and the axes onto which we plot our data. For many more examples of how to make plots in
Python, readers are encouraged to visit matplotlib.org/stable/gallery/.
In matplotlib, a plot consists of a figure and one or more axes. You can think of the figure as
the blank canvas upon which one or more plots will be displayed: it is the entire plotting
window. The axes contain important information about each plot, such as its x - and y -axis
labels, title, and more. (Note that in matplotlib, the word axes is not the plural of axis: a plot's
axes contains much more information than just the x -axis and the y -axis.)
We begin by importing the subplots() function from matplotlib. We use this function
throughout when creating figures. The function returns a tuple of length two: a figure object as
well as the relevant axes object. We will typically pass figsize as a keyword argument. Having
created our axes, we attempt our first plot using its plot() method. To learn more about it,
type ax.plot?.
from matplotlib.pyplot import subplots
fig, ax = subplots(figsize=(8, 8))
ax.plot(x, y);
Alternatively, the ax.scatter() method produces a scatter plot; its output is a PathCollection object:
fig, ax = subplots(figsize=(8, 8))
ax.scatter(x, y, marker='o')
<matplotlib.collections.PathCollection at 0x1285766f0>
In what follows, we will use trailing semicolons whenever the text that would be output is not
germane to the discussion at hand.
To label our plot, we make use of the set_xlabel(), set_ylabel(), and set_title()
methods of ax.
fig, ax = subplots(figsize=(8, 8))
ax.plot(x, y, 'o')
ax.set_xlabel("this is the x-axis")
ax.set_ylabel("this is the y-axis")
ax.set_title("Plot of X vs Y");
Having access to the figure object fig means that we can modify it and redisplay it. Here we change its size from (8, 8) to (12, 3):
fig.set_size_inches(12,3)
fig
Occasionally we will want to create several plots within a figure. This can be achieved by passing
additional arguments to subplots(). Below, we create a 2 ×3 grid of plots in a figure of size
determined by the figsize argument. In such situations, there is often a relationship between
the axes in the plots. For example, all plots may have a common x -axis. The subplots()
function can automatically handle this situation when passed the keyword argument
sharex=True. The axes object below is an array pointing to different plots in the figure.
fig, axes = subplots(nrows=2, ncols=3, figsize=(15, 5))
We now produce a scatter plot with 'o' in the second column of the first row and a scatter plot
with '+' in the third column of the second row.
axes[0,1].plot(x, y, 'o')
axes[1,2].scatter(x, y, marker='+')
fig
Type subplots? to learn more about subplots().
To save the output of fig, we call its savefig() method. The argument dpi is the dots per
inch, used to determine how large the figure will be in pixels.
fig.savefig("Figure.png", dpi=400)
fig.savefig("Figure.pdf", dpi=200);
We can continue to modify fig using step-by-step updates; for example, we can modify the
range of the x -axis, re-save the figure, and even re-display it.
axes[0,1].set_xlim([-1,1])
fig.savefig("Figure_updated.jpg")
fig
We now create some more sophisticated plots. The ax.contour() method produces a contour plot in order to represent three-dimensional data, similar to a topographical map. It takes three arguments:
• A vector of x values (the first dimension),
• A vector of y values (the second dimension), and
• A matrix whose elements correspond to the z value (the third dimension) for each pair of (x, y) coordinates.
To create x and y, we’ll use the command np.linspace(a, b, n), which returns a vector of n numbers starting at a and ending at b.
fig, ax = subplots(figsize=(8, 8))
x = np.linspace(-np.pi, np.pi, 50)
y = x
f = np.multiply.outer(np.cos(y), 1 / (1 + x**2))
ax.contour(x, y, f);
To fine-tune the output of the ax.contour() method, take a look at the help file by typing ?plt.contour.
For example, the following creates a vector of 11 evenly spaced numbers from 0 to 10:
seq1 = np.linspace(0, 10, 11)
seq1
array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
The function np.arange() returns a sequence of numbers spaced out by step. If step is not
specified, then a default value of 1 is used. Let's create a sequence that starts at 0 and ends at 10
.
seq2 = np.arange(0, 10)
seq2
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Why isn't 10 output above? This has to do with slice notation in Python. Slice notation
is used to index sequences such as lists, tuples and arrays. Suppose we want to retrieve the
fourth through sixth (inclusive) entries of a string. We obtain a slice of the string using the
indexing notation [3:6].
"hello world"[3:6]
'lo '
In the code block above, the notation 3:6 is shorthand for slice(3,6) when used inside [].
"hello world"[slice(3,6)]
'lo '
You might have expected slice(3,6) to output the fourth through seventh characters in the
text string (recalling that Python begins its indexing at zero), but instead it output the fourth
through sixth. This also explains why the earlier np.arange(0, 10) command output only the
integers from 0 to 9 . See the documentation slice? for useful options in creating slices.
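Slices also accept a step and negative indices; a few examples on a string:

```python
s = "hello world"

print(s[0:5])   # 'hello'  -- the first through fifth characters
print(s[::2])   # 'hlowrd' -- every other character
print(s[-5:])   # 'world'  -- the last five characters
```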
Indexing Data
To begin, we create a two-dimensional numpy array.
A = np.array(np.arange(16)).reshape((4, 4))
A
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15]])
Typing A[1,2] retrieves the element corresponding to the second row and third column. (As
usual, Python indexes from 0 .)
A[1,2]
6
The first number after the open-bracket symbol [ refers to the row, and the second number
refers to the column.
Indexing Rows, Columns, and Submatrices
To select multiple rows at a time, we can pass in a list specifying our selection. For instance,
[1,3] will retrieve the second and fourth rows:
A[[1,3]]
array([[ 4, 5, 6, 7],
[12, 13, 14, 15]])
To select the first and third columns, we pass in [0,2] as the second argument in the square
brackets. In this case we need to supply the first argument : which selects all rows.
A[:,[0,2]]
array([[ 0, 2],
[ 4, 6],
[ 8, 10],
[12, 14]])
Now, suppose that we want to select the submatrix made up of the second and fourth rows as
well as the first and third columns. This is where indexing gets slightly tricky. It is natural to try
to use lists to retrieve the rows and columns:
A[[1,3],[0,2]]
array([ 4, 14])
Oops --- what happened? We got a one-dimensional array of length two: numpy paired up the row and column indices, returning the entries A[1,0] and A[3,2]. The result is identical to
np.array([A[1,0],A[3,2]])
array([ 4, 14])
Similarly, the following code fails to extract the submatrix comprised of the second and fourth
rows and the first, third, and fourth columns:
A[[1,3],[0,2,3]]
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
Cell In[62], line 1
----> 1 A[[1,3],[0,2,3]]

IndexError: shape mismatch: indexing arrays could not be broadcast together with shapes (2,) (3,)
One easy way to do this is as follows. We first create a submatrix by subsetting the rows of A,
and then on the fly we make a further submatrix by subsetting its columns.
A[[1,3]][:,[0,2]]
array([[ 4, 6],
[12, 14]])
The convenience function np.ix_() allows us to extract a submatrix using lists, by creating an
intermediate mesh object.
idx = np.ix_([1,3],[0,2,3])
A[idx]
array([[ 4, 6, 7],
[12, 14, 15]])
The slice 1:4:2 captures the second and fourth items of a sequence, while the slice 0:3:2
captures the first and third items (the third element in a slice sequence is the step size).
A[1:4:2,0:3:2]
array([[ 4, 6],
[12, 14]])
Why are we able to retrieve a submatrix directly using slices but not using lists? It's because they are different Python types, and are treated differently by numpy. Slices can be used to extract objects from arbitrary sequences, such as strings, lists, and tuples, while the use of lists for indexing is more limited.
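For instance, the same slice object works on a list, a tuple, or a string, whereas indexing with a list of positions works on a numpy array but raises a TypeError on a plain Python list; a sketch:

```python
import numpy as np

s = slice(1, 3)

# Slices work on any sequence type.
print([3, 4, 5, 6][s])      # [4, 5]
print((3, 4, 5, 6)[s])      # (4, 5)
print("abcd"[s])            # 'bc'

# A list of indices works for numpy arrays...
print(np.array([3, 4, 5, 6])[[1, 3]])   # [4 6]

# ...but raises a TypeError on a plain Python list.
try:
    [3, 4, 5, 6][[1, 3]]
except TypeError as err:
    print("TypeError:", err)
```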
Boolean Indexing
In numpy, a Boolean is a type that equals either True or False (also represented as 1 and 0 ,
respectively). The next line creates a vector of 0 's, represented as Booleans, of length equal to
the first dimension of A.
keep_rows = np.zeros(A.shape[0], bool)
keep_rows
array([False, False, False, False])
We now set two of the entries to True.
keep_rows[[1,3]] = True
keep_rows
array([False,  True, False,  True])
Note that the elements of keep_rows, when viewed as integers, are the same as the values of
np.array([0,1,0,1]). Below, we use == to verify their equality. When applied to two arrays,
the == operation is applied elementwise.
np.all(keep_rows == np.array([0,1,0,1]))
True
(Here, the function np.all() has checked whether all entries of an array are True. A similar
function, np.any(), can be used to check whether any entries of an array are True.)
However, even though np.array([0,1,0,1]) and keep_rows are equal according to ==,
they index different sets of rows! The former retrieves the first, second, first, and second rows of
A.
A[np.array([0,1,0,1])]
array([[0, 1, 2, 3],
[4, 5, 6, 7],
[0, 1, 2, 3],
[4, 5, 6, 7]])
By contrast, keep_rows retrieves only the second and fourth rows of A --- i.e. the rows for which the Boolean equals True.
A[keep_rows]
array([[ 4, 5, 6, 7],
[12, 13, 14, 15]])
This example shows that Booleans and integers are treated differently by numpy.
We again make use of the np.ix_() function to create a mesh containing the second and fourth rows, and the first, third, and fourth columns. This time, we apply the function to Booleans, rather than lists.
keep_cols = np.zeros(A.shape[1], bool)
keep_cols[[0, 2, 3]] = True
idx_bool = np.ix_(keep_rows, keep_cols)
A[idx_bool]
array([[ 4,  6,  7],
       [12, 14, 15]])
We can also mix a list with an array of Booleans in the arguments to np.ix_():
idx_mixed = np.ix_([1,3], keep_cols)
A[idx_mixed]
array([[ 4,  6,  7],
       [12, 14, 15]])
For more details on indexing in numpy, readers are referred to the numpy tutorial mentioned
earlier.
Loading Data
Data sets often contain different types of data, and may have names associated with the rows or
columns. For these reasons, they typically are best accommodated using a data frame. We can
think of a data frame as a sequence of arrays of identical length; these are the columns. Entries
in the different arrays can be combined to form a row. The pandas library can be used to create
and work with data frame objects.
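This column-oriented view can be sketched by building a small data frame from a dictionary that maps column names to equal-length arrays (the columns and values here are invented for illustration):

```python
import numpy as np
import pandas as pd

# Each key becomes a column; all arrays must have the same length.
df = pd.DataFrame({'mpg': np.array([18.0, 15.0, 18.0]),
                   'cylinders': np.array([8, 8, 8])})

print(df.shape)          # (3, 2)
print(list(df.columns))  # ['mpg', 'cylinders']
```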
We will begin by reading in Auto.csv, available on the book website. This is a comma-
separated file, and can be read in using pd.read_csv():
import pandas as pd
Auto = pd.read_csv('Auto.csv')
Auto
origin name
0 1 chevrolet chevelle malibu
1 1 buick skylark 320
2 1 plymouth satellite
3 1 amc rebel sst
4 1 ford torino
.. ... ...
387 1 ford mustang gl
388 2 vw pickup
389 1 dodge rampage
390 1 ford ranger
391 1 chevy s-10
The book website also has a whitespace-delimited version of this data, called Auto.data. This
can be read in as follows:
Auto = pd.read_csv('Auto.data', delim_whitespace=True)
Both Auto.csv and Auto.data are simply text files. Before loading data into Python, it is a
good idea to view it using a text editor or other software, such as Microsoft Excel.
We now take a look at the column of Auto corresponding to the variable horsepower:
Auto['horsepower']
0 130
1 165
2 150
3 150
4 140
...
387 86
388 52
389 84
390 79
391 82
Name: horsepower, Length: 397, dtype: object
We see that the dtype of this column is object. It turns out that all values of the horsepower
column were interpreted as strings when reading in the data. We can find out why by looking at
the unique values.
np.unique(Auto['horsepower'])
array(['100', '102', '103', '105', '107', '108', '110', '112', '113',
       '115', '116', '120', '122', '125', '129', '130', '132', '133',
       '135', '137', '138', '139', '140', '142', '145', '148', '149',
       '150', '152', '153', '155', '158', '160', '165', '167', '170',
       '175', '180', '190', '193', '198', '200', '208', '210', '215',
       '220', '225', '230', '46', '48', '49', '52', '53', '54', '58',
       '60', '61', '62', '63', '64', '65', '66', '67', '68', '69',
       '70', '71', '72', '74', '75', '76', '77', '78', '79', '80',
       '81', '82', '83', '84', '85', '86', '87', '88', '89', '90',
       '91', '92', '93', '94', '95', '96', '97', '98', '?'],
      dtype=object)
We see the culprit is the value ?, which is being used to encode missing values.
To fix the problem, we must provide pd.read_csv() with an argument called na_values. Now, each instance of ? in the file is replaced with the value np.nan, which means not a number:
Auto = pd.read_csv('Auto.data',
                   na_values=['?'],
                   delim_whitespace=True)
The Auto.shape attribute tells us that the data has 397 observations, or rows, and nine
variables, or columns.
Auto.shape
(397, 9)
There are various ways to deal with missing data. In this case, since only five of the rows contain
missing observations, we choose to use the Auto.dropna() method to simply remove these
rows.
Auto_new = Auto.dropna()
Auto_new.shape
(392, 9)
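An alternative to dropping rows is to impute missing values, for example with the column mean via the fillna() method; a sketch on an invented toy series:

```python
import numpy as np
import pandas as pd

# Invented toy data standing in for a column with missing entries.
hp = pd.Series([130.0, np.nan, 150.0, np.nan, 140.0])

# Replace each missing value with the mean of the observed values.
filled = hp.fillna(hp.mean())

print(filled.isna().sum())  # 0: no missing values remain
print(filled[1])            # 140.0, the mean of 130, 150, and 140
```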
Basics of Selecting Rows and Columns
We can use Auto.columns to check the variable names.
Auto.columns
Index(['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
       'acceleration', 'year', 'origin', 'name'],
      dtype='object')
Accessing the rows and columns of a data frame is similar, but not identical, to accessing the
rows and columns of an array. Recall that the first argument to the [] method is always applied
to the rows of the array.
Similarly, passing in a slice to the [] method creates a data frame whose rows are determined
by the slice:
Auto[:3]
origin name
0 1 chevrolet chevelle malibu
1 1 buick skylark 320
2 1 plymouth satellite
We can also create a Boolean series indicating which cars were built after 1980, and pass it to the [] method to select the corresponding rows:
idx_80 = Auto['year'] > 80
Auto[idx_80]
origin name
334 1 plymouth reliant
335 1 buick skylark
336 1 dodge aries wagon (sw)
337 1 chevrolet citation
338 1 plymouth reliant
339 3 toyota starlet
340 1 plymouth champ
341 3 honda civic 1300
342 3 subaru
343 3 datsun 210 mpg
344 3 toyota tercel
345 3 mazda glc 4
346 1 plymouth horizon 4
347 1 ford escort 4w
348 1 ford escort 2h
349 2 volkswagen jetta
350 3 honda prelude
351 3 toyota corolla
352 3 datsun 200sx
353 3 mazda 626
354 2 peugeot 505s turbo diesel
355 2 volvo diesel
356 3 toyota cressida
357 3 datsun 810 maxima
358 1 buick century
359 1 oldsmobile cutlass ls
360 1 ford granada gl
361 1 chrysler lebaron salon
362 1 chevrolet cavalier
363 1 chevrolet cavalier wagon
364 1 chevrolet cavalier 2-door
365 1 pontiac j2000 se hatchback
366 1 dodge aries se
367 1 pontiac phoenix
368 1 ford fairmont futura
369 2 volkswagen rabbit l
370 3 mazda glc custom l
371 3 mazda glc custom
372 1 plymouth horizon miser
373 1 mercury lynx l
374 3 nissan stanza xe
375 3 honda accord
376 3 toyota corolla
377 3 honda civic
378 3 honda civic (auto)
379 3 datsun 310 gx
380 1 buick century limited
381 1 oldsmobile cutlass ciera (diesel)
382 1 chrysler lebaron medallion
383 1 ford granada l
384 3 toyota celica gt
385 1 dodge charger 2.2
386 1 chevrolet camaro
387 1 ford mustang gl
388 2 vw pickup
389 1 dodge rampage
390 1 ford ranger
391 1 chevy s-10
However, if we pass in a list of strings to the [] method, then we obtain a data frame containing
the corresponding set of columns.
Auto[['mpg', 'horsepower']]
mpg horsepower
0 18.0 130
1 15.0 165
2 18.0 150
3 16.0 150
4 17.0 140
.. ... ...
387 27.0 86
388 44.0 52
389 32.0 84
390 28.0 79
391 31.0 82
Since we did not specify an index column when we loaded our data frame, the rows are labeled
using integers 0 to 396.
Auto.index
Auto_re = Auto.set_index('name')
Auto_re
Auto_re.columns
Index(['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
'acceleration', 'year', 'origin'],
dtype='object')
Now that the index has been set to name, we can access rows of the data frame by name using the loc[] method of Auto:
rows = ['amc rebel sst', 'ford torino']
Auto_re.loc[rows]
As an alternative to using the index name, we could retrieve the 4th and 5th rows of Auto using the iloc[] method:
Auto_re.iloc[[3,4]]
We can also use it to retrieve the 1st, 3rd, and 4th columns of Auto_re:
Auto_re.iloc[:,[0,2,3]]
We can extract the 4th and 5th rows, as well as the 1st, 3rd and 4th columns, using a single call
to iloc[]:
Auto_re.iloc[[3,4],[0,2,3]]
Index entries need not be unique: there are several cars in the data frame named ford
galaxie 500.
Auto_re.loc['ford galaxie 500', ['mpg', 'origin']]
mpg origin
name
ford galaxie 500 15.0 1
ford galaxie 500 14.0 1
ford galaxie 500 14.0 1
The loc[] method can also accept an anonymous function, called a lambda, to select rows based on a condition. For instance, the following retrieves the weight and origin of all cars built after 1980:
Auto_re.loc[lambda df: df['year'] > 80, ['weight', 'origin']]
weight origin
name
plymouth reliant 2490 1
buick skylark 2635 1
dodge aries wagon (sw) 2620 1
chevrolet citation 2725 1
plymouth reliant 2385 1
toyota starlet 1755 3
plymouth champ 1875 1
honda civic 1300 1760 3
subaru 2065 3
datsun 210 mpg 1975 3
toyota tercel 2050 3
mazda glc 4 1985 3
plymouth horizon 4 2215 1
ford escort 4w 2045 1
ford escort 2h 2380 1
volkswagen jetta 2190 2
honda prelude 2210 3
toyota corolla 2350 3
datsun 200sx 2615 3
mazda 626 2635 3
peugeot 505s turbo diesel 3230 2
volvo diesel 3160 2
toyota cressida 2900 3
datsun 810 maxima 2930 3
buick century 3415 1
oldsmobile cutlass ls 3725 1
ford granada gl 3060 1
chrysler lebaron salon 3465 1
chevrolet cavalier 2605 1
chevrolet cavalier wagon 2640 1
chevrolet cavalier 2-door 2395 1
pontiac j2000 se hatchback 2575 1
dodge aries se 2525 1
pontiac phoenix 2735 1
ford fairmont futura 2865 1
volkswagen rabbit l 1980 2
mazda glc custom l 2025 3
mazda glc custom 1970 3
plymouth horizon miser 2125 1
mercury lynx l 2125 1
nissan stanza xe 2160 3
honda accord 2205 3
toyota corolla 2245 3
honda civic 1965 3
honda civic (auto) 1965 3
datsun 310 gx 1995 3
buick century limited 2945 1
oldsmobile cutlass ciera (diesel) 3015 1
chrysler lebaron medallion 2585 1
ford granada l 2835 1
toyota celica gt 2665 3
dodge charger 2.2 2370 1
chevrolet camaro 2950 1
ford mustang gl 2790 1
vw pickup 2130 2
dodge rampage 2295 1
ford ranger 2625 1
chevy s-10 2720 1
The lambda call creates a function that takes a single argument, here df, and returns
df['year']>80. Since it is created inside the loc[] method for the dataframe Auto_re, that
dataframe will be the argument supplied. As another example of using a lambda, suppose that
we want all cars built after 1980 that achieve greater than 30 miles per gallon:
weight origin
name
toyota starlet 1755 3
plymouth champ 1875 1
honda civic 1300 1760 3
subaru 2065 3
datsun 210 mpg 1975 3
toyota tercel 2050 3
mazda glc 4 1985 3
plymouth horizon 4 2215 1
ford escort 4w 2045 1
volkswagen jetta 2190 2
honda prelude 2210 3
toyota corolla 2350 3
datsun 200sx 2615 3
mazda 626 2635 3
volvo diesel 3160 2
chevrolet cavalier 2-door 2395 1
pontiac j2000 se hatchback 2575 1
volkswagen rabbit l 1980 2
mazda glc custom l 2025 3
mazda glc custom 1970 3
plymouth horizon miser 2125 1
mercury lynx l 2125 1
nissan stanza xe 2160 3
honda accord 2205 3
toyota corolla 2245 3
honda civic 1965 3
honda civic (auto) 1965 3
datsun 310 gx 1995 3
oldsmobile cutlass ciera (diesel) 3015 1
toyota celica gt 2665 3
dodge charger 2.2 2370 1
vw pickup 2130 2
dodge rampage 2295 1
chevy s-10 2720 1
The symbol & computes an element-wise and operation. As another example, suppose that we
want to retrieve all Ford and Datsun cars with displacement less than 300. We check
whether each name entry contains either the string ford or datsun using the
str.contains() method of the index attribute of the dataframe:
weight origin
name
ford maverick 2587 1
datsun pl510 2130 3
datsun pl510 2130 3
ford torino 500 3302 1
ford mustang 3139 1
datsun 1200 1613 3
ford pinto runabout 2226 1
ford pinto (sw) 2395 1
datsun 510 (sw) 2288 3
ford maverick 3021 1
datsun 610 2379 3
ford pinto 2310 1
datsun b210 1950 3
ford pinto 2451 1
datsun 710 2003 3
ford maverick 3158 1
ford pinto 2639 1
datsun 710 2545 3
ford pinto 2984 1
ford maverick 3012 1
ford granada ghia 3574 1
datsun b-210 1990 3
ford pinto 2565 1
datsun f-10 hatchback 1945 3
ford granada 3525 1
ford mustang ii 2+2 2755 1
datsun 810 2815 3
ford fiesta 1800 1
datsun b210 gx 2070 3
ford fairmont (auto) 2965 1
ford fairmont (man) 2720 1
datsun 510 2300 3
datsun 200-sx 2405 3
ford fairmont 4 2890 1
datsun 210 2020 3
datsun 310 2019 3
ford fairmont 2870 1
datsun 510 hatchback 2434 3
datsun 210 2110 3
datsun 280-zx 2910 3
datsun 210 mpg 1975 3
ford escort 4w 2045 1
ford escort 2h 2380 1
datsun 200sx 2615 3
datsun 810 maxima 2930 3
ford granada gl 3060 1
ford fairmont futura 2865 1
datsun 310 gx 1995 3
ford granada l 2835 1
ford mustang gl 2790 1
ford ranger 2625 1
In summary, a powerful set of operations is available to index the rows and columns of data
frames. For integer-based queries, use the iloc[] method. For string and Boolean selections,
use the loc[] method. For functional queries that filter rows, use the loc[] method with a
function (typically a lambda) in the rows argument.
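These idioms can be tried on a small self-contained example; the toy data frame below (its values and row labels are made up for illustration) stands in for Auto_re:

```python
import pandas as pd

# Toy data frame with car-style row labels (hypothetical values)
df = pd.DataFrame({'mpg': [30, 35, 14, 32],
                   'year': [80, 82, 78, 81],
                   'weight': [2100, 1900, 3600, 2000]},
                  index=['datsun 510', 'toyota starlet',
                         'ford ltd', 'honda civic'])

# Integer-based: rows 0 and 2, first two columns
print(df.iloc[[0, 2], [0, 1]])

# Boolean selection with loc[]
print(df.loc[df['year'] > 80, ['weight']])

# Functional query: the lambda receives the data frame itself
print(df.loc[lambda d: (d['year'] > 80) & (d['mpg'] > 30), ['weight']])

# String matching on the index, as in the ford/datsun example above
print(df.loc[lambda d: d.index.str.contains('datsun'), :])
```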
For Loops
A for loop is a standard tool in many languages that repeatedly evaluates some chunk of code
while varying different values inside the code. For example, suppose we loop over elements of a
list and compute their sum.
total = 0
for value in [3,2,19]:
    total += value
print('Total is: {0}'.format(total))
Total is: 24
The indented code beneath the line with the for statement is run for each value in the sequence
specified in the for statement. The loop ends either when the cell ends or when code is
indented at the same level as the original for statement. We see that the final line above, which
prints the total, is executed only once, after the for loop has terminated. Loops can be nested by
additional indentation.
total = 0
for value in [2,3,19]:
    for weight in [3, 2, 1]:
        total += value * weight
print('Total is: {0}'.format(total))
Total is: 144
Above, we summed over each combination of value and weight. We also took advantage of
the increment notation in Python: the expression a += b is equivalent to a = a + b. Besides
being a convenient notation, this can save time in computationally heavy tasks in which the
intermediate value of a+b need not be explicitly created.
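As a quick illustration of the increment notation (a sketch using only built-ins; note that for a mutable object such as a list, += updates the object in place rather than building a new one, which is the saving alluded to above):

```python
a = 3
a += 4                 # equivalent to a = a + 4
print(a)               # 7

xs = [1, 2]
ys = xs                # ys and xs refer to the same list object
ys += [3]              # in-place extend: no intermediate list is created
print(xs)              # [1, 2, 3] -- xs sees the change too
```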
Perhaps a more common task would be to sum over (value, weight) pairs. For instance, to
compute the average value of a random variable that takes on the possible values 2, 3, or 19 with
probability 0.2, 0.3, and 0.5 respectively, we would compute the weighted sum. Tasks such as this
can often be accomplished using the zip() function, which loops over a sequence of tuples.
total = 0
for value, weight in zip([2,3,19],
                         [0.2,0.3,0.5]):
    total += weight * value
print('Weighted average is: {0}'.format(total))
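To see the tuples that zip() produces, we can materialize it as a list (nothing beyond built-ins is needed):

```python
# zip() pairs up corresponding elements of the two sequences
pairs = list(zip([2, 3, 19], [0.2, 0.3, 0.5]))
print(pairs)  # [(2, 0.2), (3, 0.3), (19, 0.5)]
```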
String Formatting
In the code chunk above we also printed a string displaying the total. However, the object total
is an integer and not a string. Inserting the value of something into a string is a common task,
made simple using some of the powerful string formatting tools in Python. Many data cleaning
tasks involve manipulating and programmatically producing strings.
For example, we may want to loop over the columns of a data frame and print the percent
missing in each column. Let's create a data frame D with columns in which 20% of the entries
are missing, i.e. set to np.nan. We'll create the values in D from a normal distribution with mean
0 and variance 1 using rng.standard_normal() and then overwrite some random entries
using rng.choice().
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
A = rng.standard_normal((127, 5))
M = rng.choice([0, np.nan], p=[0.8,0.2], size=A.shape)
A += M
D = pd.DataFrame(A, columns=['food',
                             'bar',
                             'pickle',
                             'snack',
                             'popcorn'])
D[:3]
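The loop that the next paragraph discusses is not shown above; a sketch along the following lines (repeating the construction of D so the snippet runs on its own) prints the percent missing per column using a template string:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
A = rng.standard_normal((127, 5))
M = rng.choice([0, np.nan], p=[0.8, 0.2], size=A.shape)
A += M
D = pd.DataFrame(A, columns=['food', 'bar', 'pickle',
                             'snack', 'popcorn'])

# For each column, report the fraction of np.nan entries as a percent
for col in D.columns:
    template = 'Column "{0}" has {1:.2%} missing values'
    print(template.format(col, np.isnan(D[col]).mean()))
```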
We see that the template.format() method expects two arguments {0} and {1:.2%}, and
the latter includes some formatting information. In particular, it specifies that the second
argument should be expressed as a percent with two decimal digits.
Our first attempt to plot these columns directly fails with a NameError, because horsepower and mpg are columns of the data frame Auto rather than variables in our namespace:
fig, ax = subplots(figsize=(8, 8))
ax.plot(horsepower, mpg, 'o');
NameError: name 'horsepower' is not defined
We can instead use the plot.scatter() method of the data frame:
ax = Auto.plot.scatter('horsepower', 'mpg')
ax.set_title('Horsepower vs. MPG');
If we want to save the figure that contains a given axes, we can find the relevant figure by
accessing the figure attribute:
fig = ax.figure
fig.savefig('horsepower_mpg.png');
We can further instruct the data frame to plot to a particular axes object. In this case the
corresponding plot() method will return the modified axes we passed in as an argument. Note
that when we request a one-dimensional grid of plots, the object axes is similarly one-
dimensional. We place our scatter plot in the middle plot of a row of three plots within a figure.
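The code that places the scatter plot in the middle of a one-dimensional grid is not shown above; a minimal sketch (using a small made-up data frame in place of Auto, and the Agg backend so it runs without a display) looks like this:

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')            # non-interactive backend
from matplotlib.pyplot import subplots

# Stand-in for the Auto data frame (hypothetical values)
demo = pd.DataFrame({'horsepower': [130, 165, 150, 95],
                     'mpg': [18.0, 15.0, 16.0, 24.0]})

# ncols=3 returns a one-dimensional array of three axes
fig, axes = subplots(ncols=3, figsize=(15, 5))
ax = demo.plot.scatter('horsepower', 'mpg', ax=axes[1])  # the middle plot
```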
Once cylinders has been converted to a qualitative (categorical) variable, we can display it using the boxplot() method.
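The conversion and boxplot call themselves are not shown above; a sketch with a small made-up data frame (again using the Agg backend) might read:

```python
import pandas as pd
import matplotlib
matplotlib.use('Agg')
from matplotlib.pyplot import subplots

# Stand-in for the Auto data frame (hypothetical values)
demo = pd.DataFrame({'mpg': [18.0, 24.0, 30.0, 15.0, 33.0, 16.0],
                     'cylinders': [8, 4, 4, 8, 4, 8]})
# Recast cylinders as a qualitative (categorical) variable
demo['cylinders'] = demo['cylinders'].astype('category')

fig, ax = subplots(figsize=(8, 8))
demo.boxplot('mpg', by='cylinders', ax=ax);
```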
We can produce a scatterplot matrix of all the quantitative variables with pd.plotting.scatter_matrix():
pd.plotting.scatter_matrix(Auto);
We can also produce scatterplots for a subset of the variables.
pd.plotting.scatter_matrix(Auto[['mpg',
'displacement',
'weight']]);
The describe() method produces a numerical summary of each column in a data frame.
Auto[['mpg', 'weight']].describe()
mpg weight
count 392.000000 392.000000
mean 23.445918 2977.584184
std 7.805007 849.402560
min 9.000000 1613.000000
25% 17.000000 2225.250000
50% 22.750000 2803.500000
75% 29.000000 3614.750000
max 46.600000 5140.000000
Auto['cylinders'].describe()
Auto['mpg'].describe()
count 392.000000
mean 23.445918
std 7.805007
min 9.000000
25% 17.000000
50% 22.750000
75% 29.000000
max 46.600000
Name: mpg, dtype: float64