Chapter 6



Stock & Watson (2019), Chapter 6


Linear Regression with Multiple Regressors

Brittany Almquist Lewis

FIN 560A 470A


Research Methods in Finance


Omitted Variable Bias

- Definition of omitted variable bias
  - The omitted variable is correlated with an included regressor
  - The omitted variable is a determinant of the dependent variable
- Omitted variable bias & the first least squares assumption
  - $E(u_i \mid X_i) \neq 0$
  - $\hat\beta_1 \xrightarrow{\ p\ } \beta_1 + \rho_{Xu} \dfrac{\sigma_u}{\sigma_X}$
  - $\Rightarrow \hat\beta_1$ is biased & inconsistent
- Unbiased: the expected value of the estimator equals the population
  parameter value
- Consistent: the estimator converges to the population parameter
  value as the sample size gets large.
  - $\rightarrow$ So not only does the mean of the estimator equal the
    true value of the parameter, but the mass around the mean increases
    as the sample size gets large. In other words, if you resampled and
    recalculated the $\hat\beta_i$, in large samples the $\hat\beta_i$
    you calculated would fall closer to the mean, and closer to the true
    $\beta_i$, than in small samples.
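
As a minimal simulation sketch of this result (not from the slides; all
variable names and parameter values are hypothetical): when a determinant
of $Y$ (here $Z$) is omitted and is correlated with the included regressor
$X$, the short-regression slope converges to $\beta_1 + \rho_{Xu}\sigma_u/\sigma_X$
rather than $\beta_1$, while the long regression that includes $Z$ recovers $\beta_1$.

```python
# Minimal sketch of omitted variable bias (hypothetical values throughout).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta0, beta1, gamma = 1.0, 2.0, 3.0    # assumed population coefficients

z = rng.normal(size=n)                  # omitted determinant of y
x = 0.8 * z + rng.normal(size=n)        # included regressor, correlated with z
y = beta0 + beta1 * x + gamma * z + rng.normal(size=n)

# Short regression (z omitted): the slope absorbs part of gamma's effect.
slope_short = np.polyfit(x, y, 1)[0]

# Long regression (z included): OLS recovers beta1.
X = np.column_stack([np.ones(n), x, z])
beta_long = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"short-regression slope: {slope_short:.3f}  (true beta1 = {beta1})")
print(f"long-regression slope:  {beta_long[1]:.3f}")
```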
"#$%& #' (
) *'$+& +,-$.'+ /++$0 !

12 34
5 24 893:/;1< 91;;
6

,=>?@A@B
CD:EE1< F/G:/3*1 H

3:/; 1 I 34 5 2
&

L2
K-
J
(

7 %
6

@L 7

34$+

The Population Regression Line

- In order to overcome omitted variable bias, we include regressors
  that control for the source of bias
- $E(Y_i \mid X_{1,i} = x_{1,i}, X_{2,i} = x_{2,i}) = \beta_0 + \beta_1 x_{1,i} + \beta_2 x_{2,i}$
- $\beta_0 \Rightarrow$ intercept
- $\beta_1 \Rightarrow$ slope coefficient of $X_{1,i}$
- $\beta_2 \Rightarrow$ slope coefficient of $X_{2,i}$
- $\beta_1 = \dfrac{\Delta Y}{\Delta X_1}$, holding $X_2$ constant
- $\beta_2 = \dfrac{\Delta Y}{\Delta X_2}$, holding $X_1$ constant
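
As a small numeric sketch of this interpretation (borrowing the test-score
coefficients that appear later in the deck purely as illustrative values):
moving $x_1$ by one unit while holding $x_2$ fixed moves the conditional
mean by exactly $\beta_1$.

```python
# Partial-effect interpretation of beta1 (illustrative values only).
beta0, beta1, beta2 = 686.0, -1.10, -0.65

def cond_mean(x1, x2):
    """Population regression line E(Y | X1 = x1, X2 = x2)."""
    return beta0 + beta1 * x1 + beta2 * x2

# Raise x1 by one unit, hold x2 fixed: the change equals beta1.
print(f"{cond_mean(21.0, 10.0) - cond_mean(20.0, 10.0):.2f}")   # -1.10
```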

The Population Multiple Regression Line

- $Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + u_i, \quad i = 1, \ldots, n$
- $Y_i = \beta_0 \underbrace{X_{0,i}}_{=1} + \beta_1 X_{1,i} + \beta_2 X_{2,i} + u_i, \quad i = 1, \ldots, n$
- Generalizing to $k$ regressors:
  $Y_i = \beta_0 + \beta_1 X_{1,i} + \cdots + \beta_k X_{k,i} + u_i, \quad i = 1, \ldots, n$
- $\mathrm{var}(u_i \mid X_{1,i}, \ldots, X_{k,i})$ constant $\Rightarrow$ homoskedasticity
- Otherwise $\Rightarrow$ heteroskedasticity (see the simulation sketch below)
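
A hedged simulation sketch of the distinction (the variable names and the
variance function are assumptions for illustration): with homoskedastic
errors the conditional spread of $u$ is the same everywhere, while with
heteroskedastic errors it changes with $X_1$.

```python
# Homoskedastic vs. heteroskedastic errors (illustrative simulation).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x1 = rng.uniform(0, 2, size=n)

u_homo = rng.normal(scale=1.0, size=n)      # var(u | X) constant
u_hetero = rng.normal(scale=1.0 + 2 * x1)   # var(u | X) grows with x1

for name, u in [("homoskedastic", u_homo), ("heteroskedastic", u_hetero)]:
    lo, hi = u[x1 < 0.5].std(), u[x1 > 1.5].std()
    print(f"{name:>15}: sd(u) for small x1 = {lo:.2f}, for large x1 = {hi:.2f}")
```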


The OLS Estimator

- $\displaystyle \arg\min_{b_0, b_1, \ldots, b_k} \sum_{i=1}^{n} \left(Y_i - b_0 - b_1 X_{1,i} - \cdots - b_k X_{k,i}\right)^2$
- $\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_k \Rightarrow$ OLS estimators of the
  true population values $\beta_0, \beta_1, \ldots, \beta_k$
- $\hat Y_i = \underbrace{\hat\beta_0 + \hat\beta_1 X_{1,i} + \cdots + \hat\beta_k X_{k,i}}_{\text{OLS regression line}}, \quad i = 1, \ldots, n$
- $\hat Y_i \Rightarrow$ predicted value of $Y_i$ given $X_{1,i}, \ldots, X_{k,i}$
- $\hat u_i, \ i = 1, \ldots, n \Rightarrow$ OLS residuals
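
A minimal numpy sketch of the estimator (simulated data; the names and
coefficient values are hypothetical): solve the least-squares problem
above and recover $\hat\beta$, the fitted values $\hat Y_i$, and the
residuals $\hat u_i$.

```python
# OLS as the arg-min of the sum of squared deviations (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n, k = 500, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # X0,i = 1
beta = np.array([1.0, 2.0, -0.5])                           # assumed truth
y = X @ beta + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes the SSR
y_hat = X @ beta_hat                              # predicted values
u_hat = y - y_hat                                 # OLS residuals

print("beta_hat:", beta_hat.round(3))
```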


Application to Test Scores & the Student-Teacher Ratio

- $\widehat{TestScore} = 698.9 - 2.28 \times STR$
- $\widehat{TestScore} = 686.0 - 1.10 \times STR - 0.65 \times PctEL$
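A hedged sketch of how these two regressions might be reproduced with
statsmodels. The file name caschool.csv and the column names testscr,
str, and el_pct are assumptions about how the California test-score data
could be stored locally; the slides do not specify them.

```python
# Reproducing the bivariate and multivariate test-score regressions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("caschool.csv")   # assumed local copy of the CA school data

bivariate = smf.ols("testscr ~ str", data=df).fit()
multivariate = smf.ols("testscr ~ str + el_pct", data=df).fit()

# If the data match the textbook sample, the estimates should be roughly
# 698.9 and -2.28, then 686.0, -1.10, and -0.65.
print(bivariate.params)
print(multivariate.params)
```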


The Standard Error of the Regression (SER)

- Sum of squared residuals $\Rightarrow SSR = \sum_{i=1}^{n} \hat u_i^2$
- $SER = s_{\hat u} = \sqrt{s_{\hat u}^2}$
- $s_{\hat u}^2 = \dfrac{1}{n-(k+1)} \sum_{i=1}^{n} \hat u_i^2 = \dfrac{SSR}{n-(k+1)}$
- The standard error of the regression (SER) is the square root of the
  average of the squared residuals around the line of best fit, after
  correcting for the degrees of freedom lost in estimating the $k+1$
  regression coefficients
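
A short sketch computing the SER exactly as defined above on simulated
data (all names and values are hypothetical); with standard-normal errors
the SER should land near 1.

```python
# SER with the n - (k+1) degrees-of-freedom correction (simulated data).
import numpy as np

rng = np.random.default_rng(3)
n, k = 200, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -1.0]) + rng.normal(size=n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
u_hat = y - X @ beta_hat

ssr = np.sum(u_hat**2)                 # sum of squared residuals
ser = np.sqrt(ssr / (n - (k + 1)))     # SER = s_u-hat
print(f"SSR = {ssr:.2f}, SER = {ser:.3f}")   # SER should be near 1.0
```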


The $R^2$

- $R^2 = \dfrac{ESS}{TSS} = 1 - \dfrac{SSR}{TSS}$
- Explained sum of squares $\Rightarrow ESS = \sum_{i=1}^{n} \left(\hat Y_i - \bar Y\right)^2$
- Total sum of squares $\Rightarrow TSS = \sum_{i=1}^{n} \left(Y_i - \bar Y\right)^2$
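A small sketch on simulated data confirming that the two expressions for
$R^2$ above coincide, which follows from the OLS decomposition
$TSS = ESS + SSR$ (the data and coefficients are hypothetical).

```python
# R^2 computed as ESS/TSS and as 1 - SSR/TSS (simulated data).
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat

ess = np.sum((y_hat - y.mean()) ** 2)
tss = np.sum((y - y.mean()) ** 2)
ssr = np.sum((y - y_hat) ** 2)
print(ess / tss, 1 - ssr / tss)   # identical up to rounding
```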


The adjusted $R^2$

- $\bar R^2 = 1 - \dfrac{n-1}{n-(k+1)} \dfrac{SSR}{TSS} = 1 - \dfrac{s_{\hat u}^2}{s_Y^2}$
- $s_{\hat u}^2 = \dfrac{SSR}{n-(k+1)}$ (the squared standard error of the regression)
- $s_Y^2 = \dfrac{TSS}{n-1}$ (the sample variance of $Y$)
- $\bar R^2 = 1 - \dfrac{n-1}{n-(k+1)} \left(1 - R^2\right)$
- Fit-parsimony tradeoff (see the numeric sketch below)
  - $\uparrow k \Rightarrow \uparrow R^2$ (or no decrease)
  - $\uparrow k \Rightarrow \downarrow \bar R^2$ (ceteris paribus)
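As a small numeric sketch of this tradeoff using the last formula above
(with arbitrary illustrative values of $n$, $k$, and $R^2$): even when
extra regressors raise $R^2$ slightly, the degrees-of-freedom penalty can
still pull $\bar R^2$ down.

```python
# Adjusted R^2 as a function of R^2 and k (illustrative numbers only).
n = 100

def adj_r2(r2, k):
    """Adjusted R^2 for a regression with k regressors on n observations."""
    return 1 - (n - 1) / (n - (k + 1)) * (1 - r2)

# Nine extra regressors raise R^2 from 0.80 to 0.81, yet adjusted R^2 falls.
print(adj_r2(0.80, k=1))    # ~0.798
print(adj_r2(0.81, k=10))   # ~0.789: higher R^2, lower adjusted R^2
```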



The Least Squares Assumptions for Causal Inference in Multiple Regression

- The first three assumptions are the least squares assumptions from
  univariate regression, extended to more than one regressor; multiple
  regression adds a fourth
- Assumption 1 $\Rightarrow$ the conditional distribution of $u_i$ given
  $X_{1,i}, \ldots, X_{k,i}$ has a mean of zero
- Assumption 2 $\Rightarrow$ $(X_{1,i}, \ldots, X_{k,i}, Y_i), \ i = 1, \ldots, n$ are i.i.d.
- Assumption 3 $\Rightarrow$ large outliers are unlikely
- Assumption 4 $\Rightarrow$ no perfect multicollinearity (see the
  rank-check sketch below)
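A hedged sketch of how Assumption 4 can be checked in practice (simulated
data; the collinear construction is an assumption for illustration): a
regressor that is an exact linear combination of the others and the
constant makes the design matrix rank-deficient, so the OLS coefficients
are not uniquely determined.

```python
# Detecting perfect multicollinearity via the rank of the design matrix.
import numpy as np

rng = np.random.default_rng(6)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 2 * x1 - x2 + 3          # exact linear combination of x1, x2, constant

X_ok = np.column_stack([np.ones(n), x1, x2])
X_bad = np.column_stack([np.ones(n), x1, x2, x3])

print(np.linalg.matrix_rank(X_ok), "of", X_ok.shape[1])    # full rank
print(np.linalg.matrix_rank(X_bad), "of", X_bad.shape[1])  # rank-deficient
```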


The Distribution of the OLS Estimators in Multiple Regression

- OLS estimators $\Rightarrow$ unbiased & consistent
- In large samples, the OLS estimators are jointly normally distributed
  (the distribution of each $\hat\beta_j$ converges to a normal centered
  at $\beta_j$ as $n$ gets large)
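A simulation sketch of this result (not from the slides; the sample size,
number of replications, and coefficients are arbitrary): re-estimating
$\hat\beta_1$ across many samples produces a sampling distribution
centered at $\beta_1$ that is approximately normal.

```python
# Monte Carlo sketch of the large-sample distribution of beta1-hat.
import numpy as np

rng = np.random.default_rng(7)
n, reps, beta1 = 500, 2000, 2.0
estimates = np.empty(reps)

for r in range(reps):
    x = rng.normal(size=n)
    y = 1.0 + beta1 * x + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    estimates[r] = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"mean of beta1-hat: {estimates.mean():.3f} (true value {beta1})")
print(f"sd of beta1-hat:   {estimates.std():.3f}")
# For an approximately normal distribution, ~95% of the estimates should
# fall within two standard deviations of the mean:
inside = np.abs(estimates - estimates.mean()) < 2 * estimates.std()
print(f"share within 2 sd: {inside.mean():.3f}")
```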

