Nu - Edu.kz Econometrics-I Assignment 4 Answer Key
C1. (i)
. use http://fmwww.bc.edu/ec-p/data/wooldridge/wage1.dta
------------------------------------------------------------------------------
wage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
educ | .5989651 .0512835 11.68 0.000 .4982176 .6997126
exper | .0223395 .0120568 1.85 0.064 -.0013464 .0460254
tenure | .1692687 .0216446 7.82 0.000 .1267474 .2117899
_cons | -2.872735 .7289643 -3.94 0.000 -4.304799 -1.440671
------------------------------------------------------------------------------
. predict resids, resid
. hist resids
(bin=22, start=-1.9344749, width=.67919588)
[Histogram of resids: Density on the y-axis (0 to .25), values from about -5 to 15 on the x-axis.]
(ii)
. reg lwage educ exper tenure
------------------------------------------------------------------------------
lwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
educ | .092029 .0073299 12.56 0.000 .0776292 .1064288
exper | .0041211 .0017233 2.39 0.017 .0007357 .0075065
tenure | .0220672 .0030936 7.13 0.000 .0159897 .0281448
_cons | .2843595 .1041904 2.73 0.007 .0796755 .4890435
------------------------------------------------------------------------------
. predict resids2, resid
. hist resids2
(bin=22, start=.45744607, width=.09793763)
[Histogram of resids2: Density on the y-axis (0 to 2), values from about .5 to 2.5 on the x-axis.]
(iii) The residuals from the log(wage) regression appear to be more nearly normally distributed.
Certainly the histogram in part (ii) fits under its comparable normal density better than the one in part (i),
and the histogram for the wage residuals is notably skewed to the right.
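The effect of the log transformation on skewness can be illustrated with a small simulation. This is a sketch with hypothetical lognormal "wages" (not the wage1 data): the raw series is strongly right-skewed, while its log is approximately symmetric.

```python
import math
import random

random.seed(0)

def skewness(xs):
    # Sample skewness: third central moment over variance^(3/2)
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Hypothetical right-skewed "wages"; parameters are arbitrary
wages = [random.lognormvariate(1.5, 0.6) for _ in range(50_000)]

skew_raw = skewness(wages)                        # strongly positive
skew_log = skewness([math.log(w) for w in wages])  # near zero
print("skew of wage:     ", round(skew_raw, 2))
print("skew of log(wage):", round(skew_log, 2))
```

The same qualitative pattern (large positive skew in levels, little skew in logs) is what the two histograms above display.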
C2. (i)
. use http://fmwww.bc.edu/ec-p/data/wooldridge/gpa2.dta
------------------------------------------------------------------------------
colgpa | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
hsperc | -.0135192 .0005495 -24.60 0.000 -.0145965 -.012442
sat | .0014762 .0000653 22.60 0.000 .0013482 .0016043
_cons | 1.391757 .0715424 19.45 0.000 1.251495 1.532018
------------------------------------------------------------------------------
(ii)
. reg colgpa hsperc sat in 1/2070
------------------------------------------------------------------------------
colgpa | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
hsperc | -.0127494 .0007185 -17.74 0.000 -.0141585 -.0113403
sat | .0014684 .0000886 16.58 0.000 .0012947 .0016421
_cons | 1.436017 .0977819 14.69 0.000 1.244256 1.627779
------------------------------------------------------------------------------
(iii) The ratio of the standard error using 2,070 observations to that using 4,137 observations is
about 1.31. From (5.10) we compute sqrt(4,137/2,070)=1.41, which is somewhat above the ratio of
the actual standard errors but reasonably close.
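The two ratios in part (iii) can be reproduced directly from the reported standard errors on hsperc:

```python
import math

# Standard errors on hsperc reported in parts (i) and (ii)
se_full = 0.0005495  # n = 4,137
se_half = 0.0007185  # n = 2,070

actual_ratio = se_half / se_full
theory_ratio = math.sqrt(4137 / 2070)  # (5.10): se shrinks roughly like 1/sqrt(n)

print("actual ratio:     ", round(actual_ratio, 2))  # 1.31
print("theoretical ratio:", round(theory_ratio, 2))  # 1.41
```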
O1. a. In the midterm, we showed that β̂1 = (∑ XiYi)/(∑ Xi²) is an unbiased estimator of β1. Hence, for β̃1 = β̂1 + 1/n,
E(β̃1) = β1 + E(1/n) = β1 + 1/n ≠ β1.
Therefore, β̃1 is NOT an unbiased estimator of β1.
By the law of large numbers and the properties of plim,
plim β̃1 = plim [ ((1/n)∑ XiYi)/((1/n)∑ Xi²) + 1/n ] = E(XY)/E(X²) + 0 = β1.
Therefore, β̃1 is consistent.
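The "biased but consistent" behavior of β̃1 = β̂1 + 1/n can be seen in a quick simulation. This is a sketch under assumed values (true β1 = 2, regression through the origin, standard normal X and u), not part of the assignment data:

```python
import random

random.seed(0)
TRUE_BETA1 = 2.0  # assumed true slope for the simulation

def beta_tilde(n):
    # Through-origin slope sum(XiYi)/sum(Xi^2), plus the 1/n term from part a
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [TRUE_BETA1 * x + random.gauss(0, 1) for x in xs]
    b_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return b_hat + 1.0 / n

# Bias: averaged over many small samples, E(beta_tilde) = beta1 + 1/n
avg_small = sum(beta_tilde(5) for _ in range(20_000)) / 20_000
print("bias at n=5:", round(avg_small - TRUE_BETA1, 2))  # close to 1/5

# Consistency: one large sample gives an estimate close to beta1
big = beta_tilde(200_000)
print("estimate at n=200,000:", round(big, 3))
```

The finite-sample bias matches 1/n, while the large-sample estimate collapses onto β1, exactly as the plim argument predicts.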
b. In the midterm, we showed that β̆1 = Y1/X1 is an unbiased estimator of β1. However, as n → ∞, β̆1 = Y1/X1 does not change, since it uses only the first observation. Because Y1/X1 does not converge in probability to β1, β̆1 is not a consistent estimator of β1.
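A simulation makes the part b pattern concrete: β̆1 = Y1/X1 centers on β1 (unbiased) but its spread does not shrink as n grows, because extra observations are ignored. The setup below is hypothetical (true β1 = 2, X1 drawn away from zero so the ratio is well behaved):

```python
import random
import statistics

random.seed(1)
TRUE_BETA1 = 2.0  # assumed true slope

def beta_breve(n):
    # beta-breve = Y1/X1 uses only the first of the n observations,
    # so its distribution is the same for every n
    x1 = random.uniform(1, 2)  # keep X1 bounded away from zero
    y1 = TRUE_BETA1 * x1 + random.gauss(0, 1)
    return y1 / x1

draws = [beta_breve(10_000) for _ in range(20_000)]
mean_breve = statistics.mean(draws)
sd_breve = statistics.stdev(draws)
print("mean:", round(mean_breve, 2))  # near beta1: unbiased
print("sd:  ", round(sd_breve, 2))    # stays large no matter how big n is: not consistent
```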
c. There is no direct relationship between unbiasedness and consistency, although cases like part b are
uncommon. Typically, we want our estimator, at the very minimum, to be consistent
(especially when we are working with large datasets).
O2. Observe that E(m1) = μ and E(m2) = μ. Therefore, both estimators are unbiased. In this case,
the estimator with the smaller variance is better, as it gives us more precision.
Observe that plim (1/n)∑ (Xi − X̄)ui = Cov(X, u) = 0 and plim (1/n)∑ (Xi − X̄)² = Var(X) > 0, since E(u|x) = 0 implies Cov(X, u) = 0. Therefore, plim β̈1 = β1.
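The consistency claim for the slope estimator can be checked numerically. This sketch assumes β̈1 is the usual OLS slope and uses hypothetical parameters (β0 = 1, β1 = 2) with u drawn independently of x, so that E(u|x) = 0 holds by construction:

```python
import random

random.seed(2)
BETA0, BETA1 = 1.0, 2.0  # assumed true parameters

def ols_slope(n):
    xs = [random.gauss(0, 1) for _ in range(n)]
    us = [random.gauss(0, 1) for _ in range(n)]  # E(u|x) = 0 by construction
    ys = [BETA0 + BETA1 * x + u for x, u in zip(xs, us)]
    xbar = sum(xs) / n
    num = sum((x - xbar) * y for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# The slope estimate tightens around beta1 as n grows
estimates = {n: ols_slope(n) for n in (100, 10_000, 1_000_000)}
for n, b in estimates.items():
    print(n, round(b, 3))
```

As n increases, the estimates concentrate around β1 = 2, which is the plim statement in numerical form.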