Gujarati, D. and Porter, D., 2008: Basic Econometrics, 5th Edition
Summary of Chapters 3-5
Group 2, Finance 4A
CHAPTER 3: Two-Variable Regression Model: The Problem of Estimation
Method of Ordinary Least Squares (OLS)
● First, we use the SRF, which is observable:
$Y_i = \hat{\beta}_1 + \hat{\beta}_2 X_i + \hat{u}_i = \hat{Y}_i + \hat{u}_i$
● The residual is $\hat{u}_i = Y_i - \hat{Y}_i$.
1. The OLS estimators are expressed solely in terms of the observable (i.e., sample) quantities X and Y, so they are easily computed (see the sketch after this list).
2. They are point estimators; that is, given the sample, each estimator provides only a single (point) value of the relevant population parameter.
3. Once the OLS estimates are obtained from the sample data, the sample regression line can be obtained easily.
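The computation in point 1 is short enough to show directly. The following is a minimal sketch, not from the book, using made-up sample data and the OLS formulas above:

```python
import numpy as np

# Made-up sample data for illustration.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

# Deviations from the sample means: x_i = X_i - Xbar, y_i = Y_i - Ybar.
x = X - X.mean()
y = Y - Y.mean()

# OLS point estimators for Y_i = beta1 + beta2 * X_i + u_i.
beta2_hat = np.sum(x * y) / np.sum(x**2)       # slope: sum(x_i y_i) / sum(x_i^2)
beta1_hat = Y.mean() - beta2_hat * X.mean()    # intercept: Ybar - beta2_hat * Xbar

# Sample regression line and residuals u_hat_i = Y_i - Y_hat_i.
Y_hat = beta1_hat + beta2_hat * X
u_hat = Y - Y_hat

print(beta1_hat, beta2_hat)    # single (point) values for this particular sample
print(np.sum(u_hat**2))        # the residual sum of squares that OLS minimizes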
Assumptions:
1. Linear Regression Model: The regression model is linear in the parameters, though it
may or may not be linear in the variables.
$Y_i = \beta_1 + \beta_2 X_i + u_i$
2. Fixed X Values or X Values Independent of the Error Term: Values taken by the
regressor X may be considered fixed in repeated samples or they may be sampled along
with the dependent variable Y (the case of stochastic regressor).
3. Zero Mean Value of Disturbance ui: Given the value of Xi, the mean, or expected, value
of the random disturbance term ui is zero.
Assumptions (continued):
4. Homoscedasticity or Constant Variance of ui: Given the value of Xi, the variance of ui is the same (equal to σ2) for all observations.
5. No Autocorrelation between the Disturbances: Given any two X values, Xi and Xj (i ≠ j), the correlation between any two ui and uj is zero.
6. The Number of Observations n Must Be Greater than the Number of Parameters to Be Estimated.
7. The Nature of X Variables: The X values in a given sample must not all be the same; technically, var(X) must be a positive number.
Before we turn to this theorem, which provides the theoretical justification for
the popularity of OLS, we first need to consider the precision or standard
errors of the least-squares estimates.
The standard errors are computed using
$\hat{\sigma}^2 = \dfrac{\sum \hat{u}_i^2}{n - 2},$
where $\hat{\sigma}^2$ is the OLS estimator of the true but unknown $\sigma^2$, the expression $n - 2$ is known as the number of degrees of freedom (df), and $\sum \hat{u}_i^2$ is the sum of the squared residuals, or the residual sum of squares (RSS).
Note the following features of these variances:
1. The variance of $\hat{\beta}_2$ is directly proportional to $\sigma^2$ but inversely proportional to $\sum x_i^2$.
2. The variance of $\hat{\beta}_1$ is directly proportional to $\sigma^2$ and $\sum X_i^2$ but inversely proportional to $\sum x_i^2$ and the sample size n.
3. Since $\hat{\beta}_1$ and $\hat{\beta}_2$ are estimators, they will not only vary from sample to sample but, in a given sample, they are likely to be dependent on each other, this dependence being measured by the covariance between them: $\mathrm{cov}(\hat{\beta}_1, \hat{\beta}_2) = -\bar{X}\,\mathrm{var}(\hat{\beta}_2)$. (A computational sketch follows this list.)
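A sketch of how these quantities are computed, using the textbook expressions $\mathrm{var}(\hat{\beta}_2) = \sigma^2/\sum x_i^2$, $\mathrm{var}(\hat{\beta}_1) = \sigma^2 \sum X_i^2/(n \sum x_i^2)$, and the covariance formula above; the data are the same made-up sample as in the earlier sketch:

```python
import numpy as np

# Same made-up sample as the previous sketch.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(X)
x = X - X.mean()

beta2_hat = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
beta1_hat = Y.mean() - beta2_hat * X.mean()
u_hat = Y - (beta1_hat + beta2_hat * X)

# sigma2_hat = RSS / (n - 2); n - 2 is the degrees of freedom.
sigma2_hat = np.sum(u_hat**2) / (n - 2)

# Estimated variances of the OLS estimators (textbook formulas).
var_beta2 = sigma2_hat / np.sum(x**2)
var_beta1 = sigma2_hat * np.sum(X**2) / (n * np.sum(x**2))

# Standard errors are the square roots of the estimated variances.
se_beta1, se_beta2 = np.sqrt(var_beta1), np.sqrt(var_beta2)

# Dependence between the estimators: cov(beta1_hat, beta2_hat) = -Xbar * var(beta2_hat).
cov_b1_b2 = -X.mean() * var_beta2
print(se_beta1, se_beta2, cov_b1_b2)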
Properties of Least-Squares Estimators: The Gauss–Markov Theorem
As explained in Appendix A, an estimator, say the OLS estimator $\hat{\beta}_2$, is said to be a best linear unbiased estimator (BLUE) of β2 if the following hold:
1. It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model.
2. It is unbiased, that is, its average or expected value, $E(\hat{\beta}_2)$, is equal to the true value, β2.
3. It has minimum variance in the class of all such linear unbiased estimators; an unbiased estimator with the least variance is known as an efficient estimator.
Gauss–Markov Theorem: Given the assumptions of the classical linear regression model, the least-squares estimators, in the class of unbiased linear estimators, have minimum variance; that is, they are BLUE.
The Coefficient of Determination r2: A Measure of “Goodness of Fit”
Goodness of Fit
● The coefficient of determination $r^2$ measures how well the sample regression line fits the data: $r^2 = \mathrm{ESS}/\mathrm{TSS} = 1 - \mathrm{RSS}/\mathrm{TSS}$, and it lies between 0 and 1.
Summary of Chapter 3
● The OLS method does not use the (unobservable) population regression function; instead, it works with the sample regression function estimated from the chosen sample, which is observable.
● The OLS method yields the smallest possible sum of squared errors (residual sum of squares).
● The basic framework of regression analysis is the CLRM. The CLRM is based on a set of assumptions.
● Based on these assumptions, the least-squares estimators take on certain properties summarized in the Gauss–Markov theorem, which states that in the class of linear unbiased estimators, the least-squares estimators have minimum variance. In short, they are BLUE.
● The precision of OLS estimators is measured by their standard errors.
● The overall goodness of fit of the regression model is measured by the coefficient of determination, r2.
CHAPTER 4: Classical Normal Linear Regression Model (CNLRM)
● Under the assumptions of the CLRM, we were able to derive the estimators $\hat{\beta}_1$, $\hat{\beta}_2$, and $\hat{\sigma}^2$ of the model's parameters. Note that, since these are estimators, their values will change from sample to sample; they are random variables.
● In regression analysis our objective is not only to estimate the SRF but also to use it to draw inferences about the PRF. We need to find the estimators' probability distributions; otherwise, we will not be able to relate them to their true values.
The Probability Distribution of Disturbances
● Consider $\hat{\beta}_2$. As shown in Appendix 3A.2,
$\hat{\beta}_2 = \sum k_i Y_i$ (4.1.1), where $k_i = x_i / \sum x_i^2$.
● Since the X's are assumed fixed, Eq. (4.1.1) shows that $\hat{\beta}_2$ is a linear function of Yi, which is random by assumption. And since $Y_i = \beta_1 + \beta_2 X_i + u_i$, we can write (4.1.1) as
$\hat{\beta}_2 = \sum k_i(\beta_1 + \beta_2 X_i + u_i)$.
● Because ki, the betas, and Xi are all fixed, $\hat{\beta}_2$ is ultimately a linear function of ui, which is random by assumption. Therefore, the probability distribution of $\hat{\beta}_2$ (and also of $\hat{\beta}_1$) will depend on the assumption made about the probability distribution of ui.
● OLS does not make any assumption about the probabilistic nature of ui. This void can be filled if we are willing to assume that the u's follow some probability distribution.
● For reasons to be explained shortly, in the regression context it is usually assumed that the u's follow the normal distribution.
The Probability Distribution of Disturbances (continued)
● Implies that:
○ In words, the deviation between the estimated value and the true parameter value, divided by the standard deviation of the estimator, is normally distributed with mean zero and variance equal to 1.
○ Assumptions 1-7 are called the classical linear model (CLM) assumptions.
● Central limit theorem (CLT): the disturbance u is the sum of many different factors, and by the CLT the sum of many random variables is approximately normally distributed (a small simulation follows).
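A small simulation sketch of this CLT argument; the number of factors and their uniform distribution are illustrative choices, not from the book:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build each disturbance as the sum of 50 independent, clearly non-normal
# factors (uniform on [-1, 1]); by the CLT the sums are approximately normal.
n_obs, n_factors = 10_000, 50
u = rng.uniform(-1.0, 1.0, size=(n_obs, n_factors)).sum(axis=1)

# Standardize and check against N(0, 1).
z = (u - u.mean()) / u.std()
print(z.mean(), z.std())           # ~0 and ~1
print(np.mean(np.abs(z) < 1.96))   # ~0.95, as for a standard normal variate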
Properties of OLS Estimators under the Normality Assumption
With ui following the normal distribution, the OLS estimators have the following properties:
1. They are unbiased.
2. They have minimum variance. Combined with 1., this means that they are minimum-variance unbiased, or efficient, estimators.
3. They are consistent; that is, as the sample size increases indefinitely, the estimators converge to their true population values.
■ $\hat{\beta}_1$ (being a linear function of ui) is normally distributed with
● Mean: $E(\hat{\beta}_1) = \beta_1$ (4.3.1)
● Variance: $\sigma^2_{\hat{\beta}_1} = \dfrac{\sum X_i^2}{n \sum x_i^2}\,\sigma^2$ (4.3.2)
Or, more compactly, $\hat{\beta}_1 \sim N(\beta_1, \sigma^2_{\hat{\beta}_1})$.
Then, by the properties of the normal distribution, the variable Z, which is defined as
■ $Z = (\hat{\beta}_1 - \beta_1)/\sigma_{\hat{\beta}_1}$ (4.3.3)
follows the standard normal distribution, that is, a normal distribution with zero mean and unit (= 1) variance, or
● $Z \sim N(0, 1)$
Similarly, $\hat{\beta}_2$ is normally distributed, and, as in (4.3.3), $Z = (\hat{\beta}_2 - \beta_2)/\sigma_{\hat{\beta}_2} \sim N(0, 1)$.
Geometrically, the probability distributions of $\hat{\beta}_1$ and $\hat{\beta}_2$ are shown in Figure 4.1.
[Figure 4.1: The normal sampling distributions of $\hat{\beta}_1$ and $\hat{\beta}_2$.]
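A Monte Carlo sketch of these distributional results, with made-up parameter values and fixed X: repeated samples give $\hat{\beta}_2$ values centered on β2, with variance $\sigma^2/\sum x_i^2$, and standardizing them yields N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up true parameters and fixed regressor values.
beta1, beta2, sigma = 1.0, 0.5, 2.0
X = np.linspace(1.0, 20.0, 20)
x = X - X.mean()

# Re-estimate beta2_hat over many repeated samples with normal disturbances.
reps = 20_000
b2 = np.empty(reps)
for r in range(reps):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, size=X.size)
    b2[r] = np.sum(x * (Y - Y.mean())) / np.sum(x**2)

var_b2_true = sigma**2 / np.sum(x**2)
print(b2.mean())                   # ~beta2: E(beta2_hat) = beta2 (unbiasedness)
print(b2.var(), var_b2_true)       # sampling variance ~ sigma^2 / sum x_i^2
z = (b2 - beta2) / np.sqrt(var_b2_true)
print(np.mean(np.abs(z) < 1.96))   # ~0.95: Z = (beta2_hat - beta2)/sigma ~ N(0, 1)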
Properties of OLS Estimators under the Normality Assumption (continued)
● $\hat{\beta}_1$ and $\hat{\beta}_2$ have minimum variance in the entire class of unbiased estimators, whether linear or not. This result, due to Rao, is very powerful because, unlike the Gauss–Markov theorem, it is not restricted to the class of linear estimators only. Therefore, we can say that the least-squares estimators are best unbiased estimators (BUE); that is, they have minimum variance in the entire class of unbiased estimators.
Summary of Chapter 4
● The classical normal linear regression model (CNLRM) differs from the classical linear regression model (CLRM): the CLRM does not require any assumption about the probability distribution of ui; it only requires that the mean value of ui is zero and its variance is a finite constant.
● The theoretical justification for the normality assumption is the central limit theorem.
● Even without the normality assumption, under the other assumptions discussed in Chapter 3, the Gauss–Markov theorem shows that the OLS estimators are BLUE.
● With the additional assumption of normality, the OLS estimators are not only best unbiased estimators (BUE) but also follow well-known probability distributions.
● In this text we largely rely on the OLS method for practical reasons: (a) compared to ML, OLS is easy to apply; (b) the ML and OLS estimators of β1 and β2 are identical (which is true of multiple regressions too); and (c) even in moderately large samples the OLS and ML estimators of σ2 do not differ vastly.
CHAPTER 5: Interval Estimation and Hypothesis Testing
Interval Estimation
● Aspects of interval estimation: a 100(1 − α)% confidence interval for β2 takes the form
$\hat{\beta}_2 \pm t_{\alpha/2}\,\mathrm{se}(\hat{\beta}_2)$,
an interval that, in repeated sampling, will contain the true β2 with probability 1 − α.
Confidence Interval: Worked Example
● For β2, with coefficient 0.724097, critical t = 2.201 (from the t table), and standard error 0.069581:
$0.724097 \pm 2.201 \times (0.069581) \approx 0.724097 \pm 0.1540$,
so $0.570097 \le \beta_2 \le 0.878097$.
● Doing the same for β1 gives $-1.8871 \le \beta_1 \le 1.8583$.
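The interval arithmetic above can be checked in a few lines. A sketch using SciPy; the 11 degrees of freedom behind the tabulated t = 2.201 are an assumption, and the slide rounds the standard error to 0.0700, which is what yields the quoted ±0.1540:

```python
from scipy import stats

# Values from the worked example above.
beta2_hat, se_beta2 = 0.724097, 0.069581
df = 11                            # assumption: t = 2.201 is the 5% two-tail value at 11 df

t_crit = stats.t.ppf(0.975, df)    # ~2.201, the tabulated critical value
delta = t_crit * se_beta2
print(beta2_hat - delta, beta2_hat + delta)   # ~ (0.5709, 0.8773); rounding the standard
                                              # error to 0.0700 gives +/- 0.1540 instead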
Two-Sided (Two-Tail) versus One-Sided (One-Tail) Tests
● Two-sided alternative hypothesis reflects the fact that we do not have a strong a priori
or theoretical expectation about the direction in which the alternative hypothesis
should move from the null hypothesis
● One-sided alternative hypothesis reflects the fact that we have a strong a priori or
theoretical expectation (or expectations based on some previous empirical work) that
the alternative hypothesis is one-sided or unidirectional rather than two-sided
Hypothesis Testing: The Confidence Interval Approach
Decision rule: Construct a 100(1 − α)% confidence interval for β2. If the β2 under H0 falls within this confidence interval, do not reject H0; but if it falls outside this interval, reject H0.
Hypothesis Testing: The Test-of-Significance Approach
Decision rule: Reject H0 if the computed |t| value exceeds the critical t value (tα/2 for a two-tail test, tα for a one-tail test) at the chosen level of significance; otherwise do not reject H0. (A computational sketch follows the notes below.)
Notes:
● β*2 is the hypothesized numerical value of β2.
● |t | means the absolute value of t. tα or tα/2 means the critical t
value at the α or α/2 level of significance.
● df: degrees of freedom, (n − 2) for the two-variable model, (n − 3)
for the three-variable model, and so on
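A sketch of this t-test decision rule with the example's numbers; H0: β2 = 0 (i.e., β*2 = 0) and df = 11 are assumptions made for illustration:

```python
from scipy import stats

# Example values; H0: beta2 = 0 and df = 11 are assumptions for illustration.
beta2_hat, se_beta2, df = 0.724097, 0.069581, 11
beta2_star = 0.0                          # hypothesized value of beta2 under H0

t = (beta2_hat - beta2_star) / se_beta2   # computed t statistic
t_crit = stats.t.ppf(1 - 0.05 / 2, df)    # critical t_{alpha/2} for a two-tail test
p_value = 2 * stats.t.sf(abs(t), df)      # two-tail p value

# Decision rule: reject H0 if |t| exceeds the critical value.
print(abs(t) > t_crit, p_value)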
Testing the Significance of σ2: The χ2 Test
Under H0: σ2 = σ02, the statistic $\chi^2 = (n - 2)\,\hat{\sigma}^2/\sigma_0^2$ follows the χ2 distribution with n − 2 df.
Decision rule: Reject H0 if the computed χ2 value exceeds the critical χ2 value at the chosen level of significance.
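A sketch of this χ2 decision rule; n, $\hat{\sigma}^2$, and σ02 are made-up numbers for illustration, not from the book:

```python
from scipy import stats

# Made-up numbers: sample size, estimated and hypothesized error variances.
n, sigma2_hat, sigma2_0 = 13, 0.8937, 1.5
df = n - 2

chi2 = df * sigma2_hat / sigma2_0      # (n - 2) * sigma2_hat / sigma2_0 ~ chi2(n - 2) under H0
chi2_crit = stats.chi2.ppf(0.95, df)   # upper 5% critical value

# One-tail decision rule: reject H0 if the computed statistic exceeds the critical value.
print(chi2, chi2_crit, chi2 > chi2_crit)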
The F ratio provides a test statistic for the null hypothesis that the true β2 is zero. All that needs to be done is to compute the F ratio and compare it with the critical F value obtained from the F tables at the chosen level of significance, or to obtain the p value of the computed F statistic (a sketch follows).
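A sketch of the F-ratio computation for H0: β2 = 0, reusing the made-up sample from the chapter 3 sketches; in the two-variable model this F equals the square of the t statistic on $\hat{\beta}_2$:

```python
import numpy as np
from scipy import stats

# Same made-up sample as the chapter 3 sketches.
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(X)
x = X - X.mean()

beta2_hat = np.sum(x * (Y - Y.mean())) / np.sum(x**2)
beta1_hat = Y.mean() - beta2_hat * X.mean()
u_hat = Y - (beta1_hat + beta2_hat * X)

ess = beta2_hat**2 * np.sum(x**2)   # explained sum of squares, 1 df
rss = np.sum(u_hat**2)              # residual sum of squares, n - 2 df

F = (ess / 1) / (rss / (n - 2))     # F ratio for H0: beta2 = 0
p_value = stats.f.sf(F, 1, n - 2)   # p value from the F(1, n - 2) distribution
print(F, p_value)                   # reject H0 if p_value < alpha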
Evaluating the Results of Regression Analysis
Decision rule: Reject H0 if the p value is smaller than the chosen level of significance α (e.g., 5%).
Summary of Chapter 5
● Estimation and hypothesis testing are the two main branches of statistical inference about the regression parameters.
● A 100(1 − α)% confidence interval provides a range of values that, in repeated sampling, will contain the true parameter value 100(1 − α)% of the time.
● Hypothesis testing can proceed by the confidence interval approach or by the test-of-significance approach: the t test for individual coefficients, the χ2 test for σ2, and the F test for the hypothesis that β2 = 0.
● Equivalently, one can report the p value of the computed test statistic and reject H0 when the p value is smaller than the chosen level of significance α.