
October 31, 2017

ECON 301: ECONOMETRICS I


Assignment 4 Answer Key
Part I: End-of-Chapter 5 Questions
1. Write $y = \beta_0 + \beta_1 x_1 + u$; hence, $E(y) = \beta_0 + \beta_1 E(x_1)$, or $\mu_y = \beta_0 + \beta_1 \mu_x$, where $\mu_y = E(y)$ and $\mu_x = E(x_1)$. We can rewrite this as $\beta_0 = \mu_y - \beta_1 \mu_x$. Now, $\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}_1$. Taking the plim of this we have

$$\operatorname{plim}(\hat{\beta}_0) = \operatorname{plim}(\bar{Y} - \hat{\beta}_1 \bar{X}_1) = \operatorname{plim}(\bar{Y}) - \operatorname{plim}(\hat{\beta}_1)\operatorname{plim}(\bar{X}_1) = \mu_y - \beta_1 \mu_x = \beta_0,$$

where we use the fact that $\operatorname{plim}(\hat{\beta}_1) = \beta_1$ by the consistency of OLS, and $\operatorname{plim}(\bar{Y}) = \mu_y$ and $\operatorname{plim}(\bar{X}_1) = \mu_x$ by the law of large numbers.

C1. (i)
. use http://fmwww.bc.edu/ec-p/data/wooldridge/wage1.dta

. reg wage educ exper tenure

Source | SS df MS Number of obs = 526


-------------+---------------------------------- F(3, 522) = 76.87
Model | 2194.1116 3 731.370532 Prob > F = 0.0000
Residual | 4966.30269 522 9.51398984 R-squared = 0.3064
-------------+---------------------------------- Adj R-squared = 0.3024
Total | 7160.41429 525 13.6388844 Root MSE = 3.0845

------------------------------------------------------------------------------
wage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
educ | .5989651 .0512835 11.68 0.000 .4982176 .6997126
exper | .0223395 .0120568 1.85 0.064 -.0013464 .0460254
tenure | .1692687 .0216446 7.82 0.000 .1267474 .2117899
_cons | -2.872735 .7289643 -3.94 0.000 -4.304799 -1.440671
------------------------------------------------------------------------------
. predict resids, resid

. hist resids
[Figure: histogram of resids, density scale]
(ii)
. reg lwage educ exper tenure

Source | SS df MS Number of obs = 526


-------------+---------------------------------- F(3, 522) = 80.39
Model | 46.8741805 3 15.6247268 Prob > F = 0.0000
Residual | 101.455581 522 .194359351 R-squared = 0.3160
-------------+---------------------------------- Adj R-squared = 0.3121
Total | 148.329762 525 .28253288 Root MSE = .44086

------------------------------------------------------------------------------
lwage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
educ | .092029 .0073299 12.56 0.000 .0776292 .1064288
exper | .0041211 .0017233 2.39 0.017 .0007357 .0075065
tenure | .0220672 .0030936 7.13 0.000 .0159897 .0281448
_cons | .2843595 .1041904 2.73 0.007 .0796755 .4890435
------------------------------------------------------------------------------

. predict resids2, resid

. hist resids2
[Figure: histogram of resids2, density scale]

(iii) The residuals from the log(wage) regression appear to be more normally distributed. Certainly the histogram in part (ii) fits under its comparable normal density better than the histogram in part (i), and the histogram for the wage residuals is notably skewed to the right.
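
One way to make the visual comparison in part (iii) concrete is to overlay a normal density on each histogram. A minimal Stata sketch, assuming resids and resids2 are the residual series created above (the name() and title() labels are illustrative, not part of the original session):

* Overlay a normal density on each residual histogram and place the
* two panels side by side for comparison.
histogram resids, normal name(levels, replace) title("wage residuals")
histogram resids2, normal name(logs, replace) title("log(wage) residuals")
graph combine levels logs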
C2. (i)
. use http://fmwww.bc.edu/ec-p/data/wooldridge/gpa2.dta

. reg colgpa hsperc sat

Source | SS df MS Number of obs = 4,137


-------------+---------------------------------- F(2, 4134) = 777.92
Model | 490.606706 2 245.303353 Prob > F = 0.0000
Residual | 1303.58897 4,134 .315333567 R-squared = 0.2734
-------------+---------------------------------- Adj R-squared = 0.2731
Total | 1794.19567 4,136 .433799728 Root MSE = .56155

------------------------------------------------------------------------------
colgpa | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
hsperc | -.0135192 .0005495 -24.60 0.000 -.0145965 -.012442
sat | .0014762 .0000653 22.60 0.000 .0013482 .0016043
_cons | 1.391757 .0715424 19.45 0.000 1.251495 1.532018
------------------------------------------------------------------------------
(ii)
. reg colgpa hsperc sat in 1/2070

Source | SS df MS Number of obs = 2,070


-------------+---------------------------------- F(2, 2067) = 407.39
Model | 237.148705 2 118.574353 Prob > F = 0.0000
Residual | 601.615343 2,067 .291057253 R-squared = 0.2827
-------------+---------------------------------- Adj R-squared = 0.2820
Total | 838.764048 2,069 .405395867 Root MSE = .5395

------------------------------------------------------------------------------
colgpa | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
hsperc | -.0127494 .0007185 -17.74 0.000 -.0141585 -.0113403
sat | .0014684 .0000886 16.58 0.000 .0012947 .0016421
_cons | 1.436017 .0977819 14.69 0.000 1.244256 1.627779
------------------------------------------------------------------------------

(iii) For the coefficient on hsperc, the ratio of the standard error using 2,070 observations to that using 4,137 observations is .0007185/.0005495, or about 1.31. From (5.10) we compute sqrt(4,137/2,070) ≈ 1.41, which is somewhat above the ratio of the actual standard errors but reasonably close.
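
Both ratios can be reproduced with two display commands, using the hsperc standard errors from the tables above:

* Ratio of the actual standard errors on hsperc (about 1.31)
display .0007185/.0005495

* Ratio predicted by the sqrt(n) rule in (5.10) (about 1.41)
display sqrt(4137/2070)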

Part II – Other Questions:


O1. a. In the midterm, we showed that $\frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2}$ is an unbiased estimator of $\beta_1$. Hence, for $\tilde{\beta}_1 = \frac{\sum_{i=1}^{n} X_i Y_i}{\sum_{i=1}^{n} X_i^2} + \frac{1}{n}$,

$$E(\tilde{\beta}_1) = \beta_1 + E\!\left(\frac{1}{n}\right) = \beta_1 + \frac{1}{n} \neq \beta_1 .$$

Therefore, $\tilde{\beta}_1$ is NOT an unbiased estimator of $\beta_1$.

However, it is consistent. As $n \to \infty$, $1/n \to 0$, $\frac{1}{n}\sum_{i=1}^{n} X_i Y_i \to E(XY)$, and $\frac{1}{n}\sum_{i=1}^{n} X_i^2 \to E(X^2)$. Then, using the properties of plim, we can conclude that

$$\operatorname{plim} \tilde{\beta}_1 = \frac{\frac{1}{n}\sum_{i=1}^{n} X_i Y_i}{\frac{1}{n}\sum_{i=1}^{n} X_i^2} + \frac{1}{n} \to \frac{E(XY)}{E(X^2)} = \beta_1 .$$

Therefore, $\tilde{\beta}_1$ is consistent.
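
A small simulation illustrates the distinction. The sketch below is hypothetical: it assumes the model $y = 2x + u$ (so $\beta_1 = 2$) with standard normal x and u, and computes $\tilde{\beta}_1$ at increasing sample sizes. The estimate settles near 2 even though it carries a bias of exactly $1/n$ in every sample:

* Hypothetical simulation: btilde = sum(x*y)/sum(x^2) + 1/n is biased
* by 1/n in each sample but converges to beta1 = 2 as n grows.
clear all
set seed 101
foreach n in 10 100 1000 100000 {
    quietly {
        clear
        set obs `n'
        generate x = rnormal()
        generate y = 2*x + rnormal()
        generate xy = x*y
        generate x2 = x^2
        summarize xy, meanonly
        scalar sxy = r(sum)
        summarize x2, meanonly
        scalar sx2 = r(sum)
    }
    display "n = `n'  btilde = " sxy/sx2 + 1/`n'
}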

b. In the midterm, we showed that $\breve{\beta}_1 = \frac{Y_1}{X_1}$ is an unbiased estimator of $\beta_1$. However, $\breve{\beta}_1$ uses only the first observation, so as $n \to \infty$ it stays the same random variable; the additional observations never enter the formula. Since $Y_1 / X_1$ does not converge in probability to $\beta_1$, $\breve{\beta}_1$ is not a consistent estimator of $\beta_1$.

c. There is no direct relationship between unbiasedness and consistency, although cases like part b are uncommon in practice. Typically, we want our estimator, at the very minimum, to be consistent (especially if we are working with large datasets).

O2. Observe that E(m1) = μ and E(m2) = μ, so both estimators are unbiased. In this case, the estimator with the smaller variance is better, as it gives us more precision. Observe that

Var(m1) = (1/36)Var(Y1) + (1/9)Var(Y2) + (1/4)Var(Y3) and

Var(m2) = (1/144)Var(Y1) + (1/9)Var(Y2) + (49/144)Var(Y3),

since each observation is a random draw, uncorrelated with the other observations. Denote Var(Yi) by σ² for each i. Since Var(m1) = 7σ²/18 < 33σ²/72 = Var(m2), we prefer the estimator m1 over m2.
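
These weighted sums of squared weights are easy to check numerically; a quick Stata sketch, reporting each variance in units of σ²:

* Var(m1)/sigma^2 = 1/36 + 1/9 + 1/4 = 7/18 (about .389)
display 1/36 + 1/9 + 1/4

* Var(m2)/sigma^2 = 1/144 + 1/9 + 49/144 = 33/72 (about .458)
display 1/144 + 1/9 + 49/144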

O3. Plugging $Y_i = \beta_0 + \beta_1 X_i + u_i$ into the formula for $\ddot{\beta}_1$, we get

$$\ddot{\beta}_1 = \beta_0 \, \frac{\sum_{i=1}^{n} (X_i - \bar{X})}{\sum_{i=1}^{n} (X_i - \bar{X})^2} + \beta_1 \, \frac{\sum_{i=1}^{n} (X_i - \bar{X}) X_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2} + \frac{\sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\sum_{i=1}^{n} (X_i - \bar{X})^2} .$$

Observe that $\sum_{i=1}^{n} (X_i - \bar{X}) = 0$ and $\sum_{i=1}^{n} (X_i - \bar{X}) X_i = \sum_{i=1}^{n} (X_i - \bar{X})^2$, and that $\operatorname{plim} \frac{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X}) u_i}{\frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{\operatorname{Cov}(X, u)}{\operatorname{Var}(X)} = 0$ since $E(u|x) = 0$. Therefore, $\operatorname{plim} \ddot{\beta}_1 = \beta_1$.

$$\operatorname{plim} \ddot{\beta}_0 = \operatorname{plim} \bar{Y} - \operatorname{plim}(\ddot{\beta}_1 \bar{X}) = \mu_Y - \operatorname{plim}(\ddot{\beta}_1)\operatorname{plim}(\bar{X}) = \mu_Y - \beta_1 \mu_X = \beta_0 .$$
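
As with O1, a quick simulation can illustrate the result. This is a hypothetical sketch assuming the data-generating process $y = 1 + 2x + u$ (so $\beta_0 = 1$ and $\beta_1 = 2$) with standard normal x and u; both OLS coefficients approach their true values as n grows:

* Hypothetical consistency check: OLS slope and intercept under the
* assumed DGP y = 1 + 2x + u.
clear all
set seed 202
foreach n in 100 10000 1000000 {
    quietly {
        clear
        set obs `n'
        generate x = rnormal()
        generate y = 1 + 2*x + rnormal()
        regress y x
    }
    display "n = `n'  b1 = " _b[x] "  b0 = " _b[_cons]
}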

You might also like