Chapter 12: More About Regression: Section 12.1 Inference For Linear Regression

Exercises: 1, 10, 12, 14, 16, 18, 20, 34, 36, 38, 40, 42, 44


Chapter 12: More About Regression
Section 12.1 Inference for Linear Regression
The Practice of Statistics, 4th edition – For AP*
STARNES, YATES, MOORE

Chapter 12: More About Regression
• 12.1 Inference for Linear Regression
• 12.2 Transforming to Achieve Linearity

Section 12.1 Inference for Linear Regression

Learning Objectives
After this section, you should be able to…
• CHECK conditions for performing inference about the slope β of the population regression line
• CONSTRUCT and INTERPRET a confidence interval for the slope β of the population regression line
• PERFORM a significance test about the slope β of a population regression line
• INTERPRET computer output from a least-squares regression analysis

Introduction

When a scatterplot shows a linear relationship between a quantitative explanatory variable x and a quantitative response variable y, we can use the least-squares line fitted to the data to predict y for a given value of x. If the data are a random sample from a larger population, we need statistical inference to answer questions like these:
• Is there really a linear relationship between x and y in the population, or could the pattern we see in the scatterplot plausibly happen just by chance?
• In the population, how much will the predicted value of y change for each increase of 1 unit in x? What's the margin of error for this estimate?

In Section 12.1, we will learn how to estimate and test claims about the slope of the population (true) regression line that describes the relationship between two quantitative variables.

Inference for Linear Regression

In Chapter 3, we examined data on eruptions of the Old Faithful geyser. Below is a scatterplot of the duration and interval of time until the next eruption for all 222 recorded eruptions in a single month. The least-squares regression line for this population of data has been added to the graph. It has slope 10.36 and y-intercept 33.97. We call this the population regression line (or true regression line) because it uses all the observations that month.

Suppose we take an SRS of 20 eruptions from the population and calculate the least-squares regression line ŷ = a + bx for the sample data. How does the slope of the sample regression line (also called the estimated regression line) relate to the slope of the population regression line?

Sampling Distribution of b

The figures below show the results of taking three different SRSs of 20 Old Faithful eruptions in this month. Each graph displays the selected points and the LSRL for that sample.

Notice that the slopes of the sample regression lines – 10.2, 7.7, and 9.5 – vary quite a bit from the slope of the population regression line, 10.36. The pattern of variation in the slope b is described by its sampling distribution.

Sampling Distribution of b

Confidence intervals and significance tests about the slope of the population regression line are based on the sampling distribution of b, the slope of the sample regression line.

Fathom software was used to simulate choosing 1000 SRSs of n = 20 from the Old Faithful data, each time calculating the equation of the LSRL for the sample. The values of the slope b for the 1000 sample regression lines are plotted. Describe this approximate sampling distribution of b.

Shape: The distribution of b-values is roughly symmetric and unimodal. A Normal probability plot of these sample regression line slopes suggests that the approximate sampling distribution of b is close to Normal.
Center: The mean of the 1000 b-values is 10.32. This value is quite close to the slope of the population (true) regression line, 10.36.
Spread: The standard deviation of the 1000 b-values is 1.31. Later, we will see that the standard deviation of the sampling distribution of b is actually 1.30.

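The same kind of simulation is easy to reproduce in software other than Fathom. Below is a minimal Python sketch; the population here is synthetic, generated to mimic the chapter's numbers (slope 10.36, intercept 33.97, σ = 6.159), not the actual Old Faithful records.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 222 eruptions mimicking the chapter's numbers:
# interval = 33.97 + 10.36(duration) plus Normal scatter with sigma = 6.159.
duration = rng.uniform(1.5, 5.0, size=222)
interval = 33.97 + 10.36 * duration + rng.normal(0, 6.159, size=222)

slopes = []
for _ in range(1000):                                   # 1000 SRSs of n = 20
    idx = rng.choice(222, size=20, replace=False)
    b, a = np.polyfit(duration[idx], interval[idx], 1)  # sample LSRL slope
    slopes.append(b)

print(np.mean(slopes), np.std(slopes))  # center and spread of the b-values
```
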
Conditions for Regression Inference

The slope b and intercept a of the least-squares line are statistics. That is, we calculate them from the sample data. These statistics would take somewhat different values if we repeated the data production process. To do inference, think of a and b as estimates of unknown parameters α and β that describe the population of interest.

Conditions for Regression Inference
Suppose we have n observations on an explanatory variable x and a response variable y. Our goal is to study or predict the behavior of y for given values of x.
• Linear The (true) relationship between x and y is linear. For any fixed value of x, the mean response µy falls on the population (true) regression line µy = α + βx. The slope β and intercept α are usually unknown parameters.
• Independent Individual observations are independent of each other.
• Normal For any fixed value of x, the response y varies according to a Normal distribution.
• Equal variance The standard deviation of y (call it σ) is the same for all values of x. The common standard deviation σ is usually an unknown parameter.
• Random The data come from a well-designed random sample or randomized experiment.

Conditions for Regression Inference

The figure below shows the regression model when the conditions are met. The line in the figure is the population regression line µy = α + βx.

The Normal curves show how y will vary when x is held fixed at different values. All the curves have the same standard deviation σ, so the variability of y is the same for all values of x. For each possible value of the explanatory variable x, the mean of the responses µ(y | x) moves along this line.

The value of σ determines whether the points fall close to the population regression line (small σ) or are widely scattered (large σ).

How to Check the Conditions for Inference

You should always check the conditions before doing inference about the regression model. Although the conditions for regression inference are a bit complicated, it is not hard to check for major violations. Start by making a histogram or Normal probability plot of the residuals and also a residual plot. Here's a summary of how to check the conditions one by one (their first letters spell the mnemonic LINER).

How to Check the Conditions for Regression Inference
• Linear Examine the scatterplot to check that the overall pattern is roughly linear. Look for curved patterns in the residual plot. Check to see that the residuals center on the "residual = 0" line at each x-value in the residual plot.
• Independent Look at how the data were produced. Random sampling and random assignment help ensure the independence of individual observations. If sampling is done without replacement, remember to check that the population is at least 10 times as large as the sample (10% condition).
• Normal Make a stemplot, histogram, or Normal probability plot of the residuals and check for clear skewness or other major departures from Normality.
• Equal variance Look at the scatter of the residuals above and below the "residual = 0" line in the residual plot. The amount of scatter should be roughly the same from the smallest to the largest x-value.
• Random See if the data were produced by random sampling or a randomized experiment.

Example: The Helicopter Experiment

Mrs. Barrett's class did a variation of the helicopter experiment on page 738. Students randomly assigned 14 helicopters to each of five drop heights: 152 centimeters (cm), 203 cm, 254 cm, 307 cm, and 442 cm. Teams of students released the 70 helicopters in a predetermined random order and measured the flight times in seconds. The class used Minitab to carry out a least-squares regression analysis for these data. A scatterplot, residual plot, histogram, and Normal probability plot of the residuals are shown below.

• Linear The scatterplot shows a clear linear form. For each drop height used in the experiment, the residuals are centered on the horizontal line at 0. The residual plot shows a random scatter about the horizontal line.
• Independent Because the helicopters were released in a random order and no helicopter was used twice, knowing the result of one observation should give no additional information about another observation.
• Normal The histogram of the residuals is unimodal and somewhat bell-shaped. In addition, the Normal probability plot is very close to linear.
• Equal variance The residual plot shows a similar amount of scatter about the residual = 0 line for the 152, 203, 254, and 442 cm drop heights. Flight times (and the corresponding residuals) seem to vary more for the helicopters that were dropped from a height of 307 cm.
• Random The helicopters were randomly assigned to the five possible drop heights.

Except for a slight concern about the equal-variance condition, we should be safe performing inference about the regression model in this setting.

Estimating the Parameters

When the conditions are met, we can do inference about the regression model µy = α + βx. The first step is to estimate the unknown parameters.

If we calculate the least-squares regression line, the slope b is an unbiased estimator of the population slope β, and the y-intercept a is an unbiased estimator of the population y-intercept α. The remaining parameter is the standard deviation σ, which describes the variability of the response y about the population regression line.

The LSRL computed from the sample data estimates the population regression line. So the residuals estimate how much y varies about the population line. Because σ is the standard deviation of responses about the population regression line, we estimate it by the standard deviation of the residuals:

s = √( Σ residuals² / (n − 2) ) = √( Σ(yᵢ − ŷᵢ)² / (n − 2) )

Example: The Helicopter Experiment

Computer output from the least-squares regression analysis on the helicopter data for Mrs. Barrett's class is shown below.

The least-squares regression line for these data is
flight time = −0.03761 + 0.0057244(drop height)

The slope β of the true regression line says how much the average flight time of the paper helicopters increases when the drop height increases by 1 centimeter. Because b = 0.0057244 estimates the unknown β, we estimate that, on average, flight time increases by about 0.0057244 seconds for each additional centimeter of drop height.

We need the intercept a = −0.03761 to draw the line and make predictions, but it has no statistical meaning in this example. No helicopter was dropped from less than 150 cm, so we have no data near x = 0. We might expect the actual y-intercept α of the true regression line to be 0 because it should take no time for a helicopter to fall no distance. The y-intercept of the sample regression line is −0.03761, which is pretty close to 0.

Our estimate for the standard deviation σ of flight times about the true regression line at each x-value is s = 0.168 seconds. This is also the size of a typical prediction error if we use the least-squares regression line to predict the flight time of a helicopter from its drop height.

The Sampling Distribution of b

Let's return to our earlier exploration of Old Faithful eruptions. For all 222 eruptions in a single month, the population regression line for predicting the interval of time until the next eruption y from the duration of the previous eruption x is µy = 33.97 + 10.36x. The standard deviation of responses about this line is given by σ = 6.159.

If we take all possible SRSs of 20 eruptions from the population, we get the actual sampling distribution of b.
Shape: Normal
Center: µb = β = 10.36 (b is an unbiased estimator of β)
Spread: σb = σ / (sx · √(n − 1)) = 6.159 / (1.083 · √(20 − 1)) ≈ 1.30

In practice, we don't know σ for the population regression line. So we estimate it with the standard deviation of the residuals, s. Then we estimate the spread of the sampling distribution of b with the standard error of the slope:
SEb = s / (sx · √(n − 1))

The Sampling Distribution of b

What happens if we transform the values of b by standardizing? Since the sampling distribution of b is Normal, the statistic
z = (b − β) / σb
has the standard Normal distribution.

Replacing the standard deviation σb of the sampling distribution with its standard error gives the statistic
t = (b − β) / SEb
which has a t distribution with n − 2 degrees of freedom.

The figure shows the result of standardizing the values in the sampling distribution of b from the Old Faithful example. Recall, n = 20 for this example. The superimposed curve is a t distribution with df = 20 − 2 = 18.

Constructing a Confidence Interval for the Slope

The slope β of the population (true) regression line µy = α + βx is the rate of change of the mean response as the explanatory variable increases. We often want to estimate β. The slope b of the sample regression line is our point estimate for β. A confidence interval is more useful than the point estimate because it shows how precise the estimate b is likely to be. The confidence interval for β has the familiar form
statistic ± (critical value) · (standard deviation of statistic)
Because we use the statistic b as our estimate, the confidence interval is
b ± t* SEb
We call this a t interval for the slope.

t Interval for the Slope of a Least-Squares Regression Line
When the conditions for regression inference are met, a level C confidence interval for the slope β of the population (true) regression line is
b ± t* SEb
In this formula, the standard error of the slope is
SEb = s / (sx · √(n − 1))
and t* is the critical value for the t distribution with df = n − 2 having area C between −t* and t*.

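Here is a short Python sketch of this computation (assuming SciPy is available), using the slope and standard error from the helicopter example that follows. Note that software can use the exact df = n − 2 critical value, while the example uses the more conservative df = 60 value from Table B, so the resulting intervals differ slightly.

```python
from scipy import stats

b, se_b, n = 0.0057244, 0.0002018, 70   # values from the Minitab output below
t_star = stats.t.ppf(0.975, df=n - 2)   # exact 95% critical value, df = 68
lo, hi = b - t_star * se_b, b + t_star * se_b
print(f"95% CI for slope: ({lo:.7f}, {hi:.7f})")
```
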
Example: The Helicopter Experiment

Earlier, we used Minitab to perform a least-squares regression analysis on the helicopter data for Mrs. Barrett's class. Recall that the data came from dropping 70 paper helicopters from various heights and measuring the flight times. We checked conditions for performing inference earlier. Construct and interpret a 95% confidence interval for the slope of the population regression line.

(A note on reading the output: R-Sq is the percent of variation in the response variable explained by the linear relationship with the explanatory variable. The correlation r measures how strongly the two variables are related; take the square root of the R-Sq value to find r.)

SEb = 0.0002018, from the "SE Coef" column in the computer output. Because the conditions are met, we can calculate a t interval for the slope β based on a t distribution with df = n − 2 = 70 − 2 = 68. Using the more conservative df = 60 from Table B gives t* = 2.000. The 95% confidence interval is
b ± t* SEb = 0.0057244 ± 2.000(0.0002018)
= 0.0057244 ± 0.0004036
= (0.0053208, 0.0061280)

We are 95% confident that the interval from 0.0053208 to 0.0061280 seconds per cm captures the slope of the true regression line relating the flight time y and drop height x of paper helicopters. If we were to repeat this experiment many times and compute confidence intervals for the regression line slope in each case, 95% of the intervals would contain the slope of the population line.

Graphing residuals on the TI-83/84

• Press STAT and choose Edit to enter the x-data into L1 and the y-data into L2.
• Press STAT, arrow to CALC, and choose option 8 (LinReg(a+bx)) to find the regression line.
• Press STAT, choose Edit, go to L3, and fill it with the regression predictions, using L1 as the x-variable.
• Go to L4 and store the residuals: L2 − L3.
• Press 2nd, then Y= (STAT PLOT), and turn on Plot 1.
• Select the scatterplot (top line, first graph type) with XList: L1 and YList: L4.
• Press ZOOM and choose 9 (ZoomStat).
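For readers working in Python instead of on a calculator, here is an equivalent residual-plot sketch; the data arrays are hypothetical placeholders for whatever you would enter into L1 and L2.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([152.0, 203.0, 254.0, 307.0, 442.0])  # hypothetical x-data (L1)
y = np.array([0.8, 1.1, 1.4, 1.7, 2.5])            # hypothetical y-data (L2)

b, a = np.polyfit(x, y, 1)      # regression line
residuals = y - (a + b * x)     # same idea as L4 = L2 - L3

plt.scatter(x, residuals)
plt.axhline(0, linestyle="--")  # the "residual = 0" line
plt.xlabel("x")
plt.ylabel("residual")
plt.show()
```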



Example: Does Fidgeting Keep You Slim?

In Chapter 3, we examined data from a study that investigated why some people don't gain weight even when they overeat. Perhaps fidgeting and other "nonexercise activity" (NEA) explains why. Researchers deliberately overfed a random sample of 16 healthy young adults for 8 weeks. They measured fat gain (in kilograms) and change in energy use (in calories) from activity other than deliberate exercise for each subject. Here are the data:

Construct and interpret a 90% confidence interval for the slope of the population regression line.

Example: Does Fidgeting Keep You Slim?

State: We want to estimate the true slope β of the population regression line relating NEA change to fat gain at the 90% confidence level.
Plan: If the conditions are met, we will use a t interval for the slope to estimate β.
• Linear The scatterplot shows a clear linear pattern. Also, the residual plot shows a random scatter of points about the "residual = 0" line.
• Independent Individual observations of fat gain should be independent if the study is carried out properly. Because researchers sampled without replacement, there have to be at least 10(16) = 160 healthy young adults in the population of interest.
• Normal The histogram of the residuals is roughly symmetric and single-peaked, so there are no obvious departures from Normality.
• Equal variance It is hard to tell from so few points whether the scatter of points around the residual = 0 line is about the same at all x-values.
• Random The subjects in this study were randomly selected to participate.

Example: Does Fidgeting Keep You Slim?

Do: We use the t distribution with 16 − 2 = 14 degrees of freedom to find the critical value. For a 90% confidence level, the critical value is t* = 1.761. So the 90% confidence interval for β is
b ± t* SEb = −0.0034415 ± 1.761(0.0007414)
= −0.0034415 ± 0.0013056
= (−0.004747, −0.002136)

Conclude: We are 90% confident that the interval from −0.004747 to −0.002136 kg per calorie captures the actual slope of the population regression line relating NEA change to fat gain for healthy young adults.

Performing a Significance Test for the Slope

When the conditions for inference are met, we can use the slope b of the sample regression line to construct a confidence interval for the slope β of the population (true) regression line. We can also perform a significance test to determine whether a specified value of β is plausible. The null hypothesis has the general form H0: β = hypothesized value. To do a test, standardize b to get the test statistic:
test statistic = (statistic − parameter) / (standard deviation of statistic)
t = (b − β0) / SEb
To find the P-value, use a t distribution with n − 2 degrees of freedom. Here are the details for the t test for the slope.

t Test for the Slope of a Least-Squares Regression Line
Suppose the conditions for inference are met. To test the hypothesis H0: β = hypothesized value, compute the test statistic
t = (b − β0) / SEb
Find the P-value by calculating the probability of getting a t statistic this large or larger in the direction specified by the alternative hypothesis Ha. Use the t distribution with df = n − 2.

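When the test statistic and P-value are not given in computer output, they are easy to compute directly. Here is a hedged Python sketch (assuming SciPy is available), using the numbers from the crying/IQ example that follows: b = 1.4929, SEb = 0.4870, n = 38.

```python
from scipy import stats

b, se_b, n = 1.4929, 0.4870, 38
t = (b - 0) / se_b                      # test statistic for H0: beta = 0
p_one_sided = stats.t.sf(t, df=n - 2)   # P(T >= t) for Ha: beta > 0
print(round(t, 2), round(p_one_sided, 3))   # 3.07 and about 0.002
```
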
Example: Crying and IQ

Infants who cry easily may be more easily stimulated than others. This may be a sign of higher IQ. Child development researchers explored the relationship between the crying of infants 4 to 10 days old and their later IQ test scores. A snap of a rubber band on the sole of the foot caused the infants to cry. The researchers recorded the crying and measured its intensity by the number of peaks in the most active 20 seconds. They later measured the children's IQ at age three years using the Stanford-Binet IQ test. A scatterplot and Minitab output for the data from a random sample of 38 infants are shown below.

Do these data provide convincing evidence that there is a positive linear relationship between crying counts and IQ in the population of infants?

Example: Crying and IQ

State: We want to perform a test of
H0: β = 0
Ha: β > 0
where β is the true slope of the population regression line relating crying count to IQ score. No significance level was given, so we'll use α = 0.05.
Plan: If the conditions are met, we will perform a t test for the slope β.
• Linear The scatterplot suggests a moderately weak positive linear relationship between crying peaks and IQ. The residual plot shows a random scatter of points about the residual = 0 line.
• Independent Later IQ scores of individual infants should be independent. Due to sampling without replacement, there have to be at least 10(38) = 380 infants in the population from which these children were selected.
• Normal The Normal probability plot of the residuals shows a slight curvature, which suggests that the responses may not be Normally distributed about the line at each x-value. With such a large sample size (n = 38), however, the t procedures are robust against departures from Normality.
• Equal variance The residual plot shows a fairly equal amount of scatter around the horizontal line at 0 for all x-values.

Example: Crying and IQ

Do: With no obvious violations of the conditions, we proceed to inference. The test statistic and P-value can be found in the Minitab output.
t = (b − 0) / SEb = (1.4929 − 0) / 0.4870 = 3.07
The Minitab output gives P = 0.004 as the P-value for a two-sided test. The P-value for the one-sided test is half of this, P = 0.002.

Conclude: The P-value, 0.002, is less than our α = 0.05 significance level, so we have enough evidence to reject H0 and conclude that there is a positive linear relationship between intensity of crying and IQ score in the population of infants.

Section 12.1 Inference for Linear Regression
Summary

In this section, we learned that…
• Least-squares regression fits a straight line to data to predict a response variable y from an explanatory variable x. Inference in this setting uses the sample regression line to estimate or test a claim about the population (true) regression line.
• The conditions for regression inference are:
  • Linear The true relationship between x and y is linear. For any fixed value of x, the mean response µy falls on the population (true) regression line µy = α + βx.
  • Independent Individual observations are independent.
  • Normal For any fixed value of x, the response y varies according to a Normal distribution.
  • Equal variance The standard deviation of y (call it σ) is the same for all values of x.
  • Random The data are produced from a well-designed random sample or randomized experiment.

Section 12.1 Inference for Linear Regression
Summary

• The slope b and intercept a of the least-squares line estimate the slope β and intercept α of the population (true) regression line. To estimate σ, use the standard deviation s of the residuals.
• Confidence intervals and significance tests for the slope β of the population regression line are based on a t distribution with n − 2 degrees of freedom.
• The t interval for the slope β has the form b ± t* SEb, where the standard error of the slope is SEb = s / (sx · √(n − 1)).
• To test the null hypothesis H0: β = hypothesized value, carry out a t test for the slope. This test uses the statistic
t = (b − β0) / SEb
• The most common null hypothesis is H0: β = 0, which says that there is no linear relationship between x and y in the population.

Chapter 12: More About Regression
• 12.1 Inference for Linear Regression
• 12.2 Transforming to Achieve Linearity

Section 12.2 Transforming to Achieve Linearity

Learning Objectives
After this section, you should be able to…
• USE transformations involving powers and roots to achieve linearity for a relationship between two variables
• MAKE predictions from a least-squares regression line involving transformed data
• USE transformations involving logarithms to achieve linearity for a relationship between two variables
• DETERMINE which of several transformations does a better job of producing a linear relationship

Introduction

In Chapter 3, we learned how to analyze relationships between two quantitative variables that showed a linear pattern. When two-variable data show a curved relationship, we must develop new techniques for finding an appropriate model. This section describes several simple transformations of data that can straighten a nonlinear pattern.

Once the data have been transformed to achieve linearity, we can use least-squares regression to generate a useful model for making predictions. And if the conditions for regression inference are met, we can estimate or test a claim about the slope of the population (true) regression line using the transformed data.

Applying a function such as the logarithm or square root to a quantitative variable is called transforming the data. We will see in this section that understanding how simple functions work helps us choose and use transformations to straighten nonlinear patterns.

Transforming with Powers and Roots

When you visit a pizza parlor, you order a pizza by its diameter—say, 10 inches, 12 inches, or 14 inches. But the amount you get to eat depends on the area of the pizza. The area of a circle is π times the square of its radius r. So the area of a round pizza with diameter x is
area = π(x/2)² = (π/4)x²
This is a power model of the form y = axᵖ with a = π/4 and p = 2.

Although a power model of the form y = axᵖ describes the relationship between x and y in this setting, there is a linear relationship between xᵖ and y. If we transform the values of the explanatory variable x by raising them to the p power and graph the points (xᵖ, y), the scatterplot should have a linear form.

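A tiny Python sketch of this idea, using the pizza example's exact power model so the transformed points fall on a perfect line:

```python
import numpy as np

x = np.array([10.0, 12.0, 14.0, 16.0])      # pizza diameters (inches)
y = (np.pi / 4) * x**2                      # areas from the power model, p = 2

slope, intercept = np.polyfit(x**2, y, 1)   # regress y on x^p
print(slope, intercept)                     # slope ~ pi/4, intercept ~ 0
```
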
Example: Go Fish!

Imagine that you have been put in charge of organizing a fishing tournament in which prizes will be given for the heaviest Atlantic Ocean rockfish caught. You know that many of the fish caught during the tournament will be measured and released. You are also aware that using delicate scales to try to weigh a fish that is flopping around in a moving boat will probably not yield very accurate results. It would be much easier to measure the length of the fish while on the boat. What you need is a way to convert the length of the fish to its weight.

Reference data on the length (in centimeters) and weight (in grams) for Atlantic Ocean rockfish of several sizes are plotted. Note the clear curved shape.

Because length is one-dimensional and weight (like volume) is three-dimensional, a power model of the form weight = a(length)³ should describe the relationship. This transformation of the explanatory variable helps us produce a graph that is quite linear.

Another way to transform the data to achieve linearity is to take the cube root of the weight values and graph the cube root of weight versus length. Note that the resulting scatterplot also has a linear form.

Once we straighten out the curved pattern in the original scatterplot, we fit a least-squares line to the transformed data. This linear model can be used to predict values of the response variable.

Example: Go Fish!

Here is Minitab output from separate regression analyses of the two sets of transformed Atlantic Ocean rockfish data.

(a) Give the equation of the least-squares regression line. Define any variables you use.
Transformation 1: weight = 4.066 + 0.0146774(length³)
Transformation 2: ∛weight = −0.02204 + 0.246616(length)

(b) Suppose a contestant in the fishing tournament catches an Atlantic Ocean rockfish that's 36 centimeters long. Use the model from part (a) to predict the fish's weight. Show your work.
Transformation 1: weight = 4.066 + 0.0146774(36³) = 688.9 grams
Transformation 2: ∛weight = −0.02204 + 0.246616(36) = 8.856, so weight = 8.856³ ≈ 694.6 grams

(c) Interpret the value of s in context.
For transformation 1, the standard deviation of the residuals is s = 18.841 grams; predictions of fish weight using this model will be off by an average of about 19 grams. For transformation 2, s = 0.12; that is, predictions of the cube root of fish weight using this model will be off by an average of about 0.12.

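A minimal Python sketch of both transformations; the length–weight arrays below are hypothetical stand-ins for the rockfish reference data.

```python
import numpy as np

length = np.array([20.0, 25.0, 30.0, 35.0, 40.0])       # cm, hypothetical
weight = np.array([115.0, 230.0, 400.0, 650.0, 960.0])  # grams, hypothetical

# Transformation 1: regress weight on length^3
b1, a1 = np.polyfit(length**3, weight, 1)
# Transformation 2: regress the cube root of weight on length
b2, a2 = np.polyfit(length, np.cbrt(weight), 1)

pred1 = a1 + b1 * 36.0**3        # predicted weight at 36 cm from model 1
pred2 = (a2 + b2 * 36.0) ** 3    # model 2 prediction, undoing the cube root
print(pred1, pred2)
```
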
Transforming with Powers and Roots

When experience or theory suggests that the relationship between two variables is described by a power model of the form y = axᵖ, you now have two strategies for transforming the data to achieve linearity.
1. Raise the values of the explanatory variable x to the p power and plot the points (xᵖ, y).
2. Take the pth root of the values of the response variable y and plot the points (x, ᵖ√y).

What if you have no idea what power to choose? You could guess and test until you find a transformation that works. Some technology comes with built-in sliders that allow you to dynamically adjust the power and watch the scatterplot change shape as you do.

It turns out that there is a much more efficient method for linearizing a curved pattern in a scatterplot. Instead of transforming with powers and roots, we use logarithms. This more general method works when the data follow an unknown power model or any of several other common mathematical models.

Transforming with Logarithms

Not all curved relationships are described by power models. Some relationships can be described by a logarithmic model of the form y = a + b log x.

Sometimes the relationship between y and x is based on repeated multiplication by a constant factor. That is, each time x increases by 1 unit, the value of y is multiplied by b. An exponential model of the form y = abˣ describes such multiplicative growth.

If an exponential model of the form y = abˣ describes the relationship between x and y, we can use logarithms to transform the data to produce a linear relationship.
y = abˣ (exponential model)
log y = log(abˣ) (taking the logarithm of both sides)
log y = log a + log(bˣ) (using the property log(mn) = log m + log n)
log y = log a + x log b (using the property log mᵖ = p log m)

Transforming with Logarithms

We can rearrange the final equation as log y = log a + (log b)x. Notice that log a and log b are constants because a and b are constants. So the equation gives a linear model relating the explanatory variable x to the transformed variable log y. Thus, if the relationship between two variables follows an exponential model, and we plot the logarithm (base 10 or base e) of y against x, we should observe a straight-line pattern in the transformed data.

If we fit a least-squares regression line to the transformed data, we can find the predicted value of the logarithm of y for any value of the explanatory variable x by substituting our x-value into the equation of the line.

To obtain the corresponding prediction for the response variable y, we have to "undo" the logarithm transformation to return to the original units of measurement. One way of doing this is to use the definition of a logarithm as an exponent:
log_b a = x  ⟺  bˣ = a

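Putting the pieces together, here is a hedged Python sketch: fit an exponential model y = abˣ by regressing ln y on x, then undo the logs. The data are synthetic values that roughly double for each unit increase in x.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(8, dtype=float)
y = 3.0 * 2.0**x * np.exp(rng.normal(0, 0.05, 8))   # roughly y = 3 * 2^x

slope, intercept = np.polyfit(x, np.log(y), 1)      # ln y = ln a + (ln b) x
a_hat, b_hat = np.exp(intercept), np.exp(slope)     # undo the logs
print(a_hat, b_hat)                                 # close to a = 3, b = 2

y_pred = a_hat * b_hat**10.0                        # prediction on original scale
```
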
Example: Moore's Law and Computer Chips

Gordon Moore, one of the founders of Intel Corporation, predicted in 1965 that the number of transistors on an integrated circuit chip would double every 18 months. This is Moore's law, one way to measure the revolution in computing. Here are data on the dates and number of transistors for Intel microprocessors:

Example: Moore's Law and Computer Chips

(a) A scatterplot of the natural logarithm (log base e, or ln) of the number of transistors on a computer chip versus years since 1970 is shown. Based on this graph, explain why it would be reasonable to use an exponential model to describe the relationship between number of transistors and years since 1970.

If an exponential model describes the relationship between two variables x and y, then we expect a scatterplot of (x, ln y) to be roughly linear. The scatterplot of ln(transistors) versus years since 1970 has a fairly linear pattern, especially through the year 2000. So an exponential model seems reasonable here.

(b) Minitab output from a linear regression analysis on the transformed data is shown below. Give the equation of the least-squares regression line. Be sure to define any variables you use.

ln(transistors) = 7.0647 + 0.36583(years since 1970)

Example: Moore's Law and Computer Chips

(c) Use your model from part (b) to predict the number of transistors on an Intel computer chip in 2020. Show your work.

ln(transistors) = 7.0647 + 0.36583(years since 1970)
= 7.0647 + 0.36583(50) = 25.3562
Since ln(transistors) = 25.3562, transistors = e^25.3562 ≈ 1.028 × 10¹¹

(d) A residual plot for the linear regression in part (b) is shown below. Discuss what this graph tells you about the appropriateness of the model.

The residual plot shows a distinct pattern, with the residuals going from positive to negative to positive as we move from left to right. But the residuals are small in size relative to the transformed y-values. Also, the scatterplot of the transformed data is much more linear than the original scatterplot. We feel reasonably comfortable using this model to make predictions about the number of transistors on a computer chip.

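A two-line check of the part (c) arithmetic in Python:

```python
import math

ln_pred = 7.0647 + 0.36583 * 50   # years since 1970 = 50 for the year 2020
print(math.exp(ln_pred))          # about 1.03e11 transistors
```
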
Power Models Again

When we apply the logarithm transformation to the response variable y in an exponential model, we produce a linear relationship. To achieve linearity from a power model, we apply the logarithm transformation to both variables. Here are the details:
1. A power model has the form y = axᵖ, where a and p are constants.
2. Take the logarithm of both sides of this equation. Using properties of logarithms,
log y = log(axᵖ) = log a + log(xᵖ) = log a + p log x
The equation log y = log a + p log x shows that taking the logarithm of both variables results in a linear relationship between log x and log y.
3. Look carefully: the power p in the power model becomes the slope of the straight line that links log y to log x.

If a power model describes the relationship between two variables, a scatterplot of the logarithms of both variables should produce a linear pattern. Then we can fit a least-squares regression line to the transformed data and use the linear model to make predictions, as in the sketch below.

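Here is a minimal Python sketch of recovering the power p from a log-log regression, using synthetic data that follow y = 2x^1.5 exactly:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 * x**1.5                           # a power model with p = 1.5

p_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
print(p_hat, np.exp(log_a_hat))            # slope ~ 1.5, back-transformed a ~ 2
```
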
Example: What's a Planet, Anyway?

On July 31, 2005, a team of astronomers announced that they had discovered what appeared to be a new planet in our solar system. Originally named UB313, the potential planet is bigger than Pluto and has an average distance of about 9.5 billion miles from the sun. Could this new astronomical body, now called Eris, be a new planet? At the time of the discovery, there were nine known planets in our solar system. Here are data on the distance from the sun (in astronomical units, AU) and period of revolution of those planets.

Describe the relationship between distance from the sun and period of revolution.

There appears to be a strong, positive, curved relationship between distance from the sun (AU) and period of revolution (years).

Example: What's a Planet, Anyway?

(a) Based on the scatterplots below, explain why a power model would provide a more appropriate description of the relationship between period of revolution and distance from the sun than an exponential model.

The scatterplot of ln(period) versus distance is clearly curved, so an exponential model would not be appropriate. However, the graph of ln(period) versus ln(distance) has a strong linear pattern, indicating that a power model would be more appropriate.

(b) Minitab output from a linear regression analysis on the transformed data (ln(distance), ln(period)) is shown below. Give the equation of the least-squares regression line. Be sure to define any variables you use.

ln(period) = 0.0002544 + 1.49986 · ln(distance)

Example: What's a Planet, Anyway?

(c) Use your model from part (b) to predict the period of revolution for Eris, which is 9,500,000,000/93,000,000 = 102.15 AU from the sun. Show your work.

ln(period) = 0.0002544 + 1.49986 · ln(102.15)
= 0.0002544 + 1.49986 · (4.6265) = 6.939
period = e^6.939 ≈ 1032 years

(d) A residual plot for the linear regression in part (b) is shown below. Do you expect your prediction in part (c) to be too high, too low, or just right? Justify your answer.

Eris's value for ln(distance) is about 4.63, which would fall at the far right of the residual plot, where all the residuals are positive. Because residual = actual y − predicted y, the residual for Eris seems likely to be positive, so we would expect our prediction to be too low.

Positive residual ⇒ prediction too low
Negative residual ⇒ prediction too high

Section 12.2 Transforming to Achieve Linearity
Summary

In this section, we learned that…
• Nonlinear relationships between two quantitative variables can sometimes be changed into linear relationships by transforming one or both of the variables. Transformation is particularly effective when there is reason to think that the data are governed by some nonlinear mathematical model.
• When theory or experience suggests that the relationship between two variables follows a power model of the form y = axᵖ, there are two transformations involving powers and roots that can linearize a curved pattern in a scatterplot.
  Option 1: Raise the values of the explanatory variable x to the power p, then look at a graph of (xᵖ, y).
  Option 2: Take the pth root of the values of the response variable y, then look at a graph of (x, ᵖ√y).

Section 12.2 Transforming to Achieve Linearity
Summary

• In a linear model of the form y = a + bx, the values of the response variable are predicted to increase by a constant amount b for each increase of 1 unit in the explanatory variable. For an exponential model of the form y = abˣ, the predicted values of the response variable are multiplied by an additional factor of b for each increase of one unit in the explanatory variable.
• A useful strategy for straightening a curved pattern in a scatterplot is to take the logarithm of one or both variables. To achieve linearity when the relationship between two variables follows an exponential model, plot the logarithm (base 10 or base e) of y against x. When a power model describes the relationship between two variables, a plot of log y (or ln y) versus log x (or ln x) should be linear.
• Once we transform the data to achieve linearity, we can fit a least-squares regression line to the transformed data and use this linear model to make predictions.
