
Analyze the EViews report

Dependent Variable: IMPORT
Method: Least Squares
Date: 10/31/14   Time: 10:52
Sample: 1995 2013
Included observations: 19
IMPORT = C(1) + C(2)*CUS_RATES + C(3)*GDP

           Coefficient   Std. Error   t-Statistic   Prob.
C(1)       545.5627      743.5792     0.733698      0.4737
C(2)       113.7849      252.2081     0.451155      0.6579
C(3)       0.164205      0.008790     18.68045      0.0000

R-squared            0.956427    Mean dependent var      4218.547
Adjusted R-squared   0.950980    S.D. dependent var      3376.885
S.E. of regression   747.6571    Akaike info criterion   16.21571
Sum squared resid    8943859.    Schwarz criterion       16.36483
Log likelihood      -151.0492    Hannan-Quinn criter.    16.24094
F-statistic          175.5990    Durbin-Watson stat      0.717887
Prob(F-statistic)    0.000000

1. R-squared (R²).

R² = Var(Ŷ)/Var(Y) = ESS/TSS = 1 − Var(e)/Var(Y) = 1 − RSS/(n·Var(Y))

√R² = r(Y, Ŷ)
The R² statistic measures the success of the regression in predicting the values
of the dependent variable within the sample. In standard settings, R² may be
interpreted as the fraction of the variance of the dependent variable explained by the
independent variables. The statistic equals one if the regression fits perfectly, and
zero if it fits no better than the simple mean of the dependent variable. It can be
negative for a number of reasons: for example, if the regression does not include an
intercept (constant), if it contains coefficient restrictions, or if the
estimation method is two-stage least squares or ARCH. EViews computes the
(centered) R² as:
R² = 1 − ∑eᵢ² / ∑(yᵢ − ȳ)²
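As a quick sketch, the centered R² can be computed directly from residuals; the y and y_hat values below are made-up illustrative numbers, not the IMPORT series from the report:

```python
# Minimal sketch: centered R-squared from actual and fitted values.
# Data here is illustrative only.
y = [4.0, 7.0, 9.0, 12.0, 18.0]
y_hat = [4.5, 6.5, 9.5, 12.5, 17.0]

n = len(y)
y_bar = sum(y) / n
rss = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual sum of squares
tss = sum((yi - y_bar) ** 2 for yi in y)               # total sum of squares
r_squared = 1 - rss / tss
```

A perfect fit gives rss = 0 and r_squared = 1; predicting ȳ for every observation gives r_squared = 0.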
2. Adjusted R-squared (R̄²).
One problem with using R² as a measure of goodness of fit is that R² never
decreases as you add more regressors. In the extreme case, you can always obtain an R² of
one by including as many independent regressors as there are sample observations.
The adjusted R², commonly denoted R̄², penalizes R² for the addition of regressors
which do not contribute to the explanatory power of the model. The adjusted R² is
computed as:

R̄² = 1 − (1 − R²)·(n − 1)/(n − k)

n – number of observations, k – number of coefficients β.


The R̄² is never larger than R², can decrease as you add regressors, and, for
poorly fitting models, may be negative.
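Plugging the values from the report above (R² = 0.956427, n = 19, k = 3) into this penalty formula reproduces the Adjusted R-squared that EViews prints:

```python
# Adjusted R-squared from R-squared, n and k (values from the report above).
r2, n, k = 0.956427, 19, 3
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k)
# The report shows Adjusted R-squared = 0.950980.
```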

3. S.E. of regression

S.E. of regression = √( ∑e² / (n − k) )

n – number of observations, k – number of coefficients β.

4. Sum-of-Squared Residuals (RSS).


The sum-of-squared residuals can be used in a variety of statistical calculations,
and is presented separately for your convenience:
RSS = ∑eᵢ²
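The two quantities are linked: the S.E. of regression is just the RSS divided by the degrees of freedom, under a square root. A sketch with illustrative residuals (not the residuals from the report):

```python
import math

# RSS and S.E. of regression from a residual series (illustrative data, k = 2).
e = [1.0, -2.0, 0.5, -0.5, 1.0]
n, k = len(e), 2
rss = sum(ei ** 2 for ei in e)              # sum of squared residuals
se_regression = math.sqrt(rss / (n - k))    # standard error of the regression
```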
5. Log Likelihood.
EViews reports the value of the log likelihood function (assuming normally
distributed errors) evaluated at the estimated values of the coefficients. Likelihood
ratio tests may be conducted by looking at the difference between the log likelihood
values of the restricted and unrestricted versions of an equation.
The log likelihood is computed as:
l = −(n/2)·(1 + log(2π) + log(∑e²/n))
When comparing EViews output to that reported from other sources, note that
EViews does not ignore constant terms in the log likelihood.
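Evaluating this formula with the RSS and n from the report above recovers the log likelihood that EViews prints:

```python
import math

# Gaussian log likelihood from RSS and n (values from the report above).
n, rss = 19, 8943859.0
log_l = -(n / 2) * (1 + math.log(2 * math.pi) + math.log(rss / n))
# The report shows Log likelihood = -151.0492.
```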

6. P-value.

Formal definition and testing procedure

Let t be a statistic used to test some null hypothesis H₀. It is assumed that if the
null hypothesis holds, the distribution of this statistic is known.
Denote its distribution function by F(t). For a right-sided alternative, the p-value is
most often defined as:

p(t) = 1 − F(t)

For a left-sided alternative,

p(t) = F(t)

For a two-sided test, the p-value equals:

p(t) = 2·min(F(t), 1 − F(t))

If p(t) is less than the chosen significance level, the null hypothesis is rejected in favor of the alternative.
Otherwise it is not rejected.

The advantage of this approach is that it shows at which significance levels the null hypothesis
would be rejected and at which it would not be, that is, it shows the reliability of the statistical conclusions,
more precisely, the probability of error when rejecting the null hypothesis. At any significance level greater
than p(t) the null hypothesis is rejected, and at smaller levels it is not.

Clarification:
For a linear regression the p-value is given in the ANOVA table, but under
heteroskedasticity White's estimator is used in place of the standard error.
How is the p-value computed in that case?

Found it!
Using the Student-t distribution function TDIST(x; deg_freedom; tails). As
x, take the t-statistic:
First compute the t-statistic by the usual formula, only with White's
estimator in the denominator instead of the standard error.

deg_freedom – the degrees of freedom.

There is a snag with them: the formula supposedly calls for n − 1, but doing that gives
nonsense. Taking n − 2 instead gives the right result (I had something to compare against, since
I already knew one of the p-values).
tails – the number of tails; set it to 2 for a two-sided test.
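The two-sided rule above can be sketched in Python. With only the standard library, the normal distribution serves as a large-sample approximation to the Student-t; an exact version would use the t distribution with the appropriate degrees of freedom (e.g. via scipy):

```python
from statistics import NormalDist

# Two-sided p-value for a test statistic, normal approximation.
# For small samples the Student-t distribution would be more accurate.
def p_value_two_sided(t_stat):
    return 2 * (1 - NormalDist().cdf(abs(t_stat)))
```

For the C(3) coefficient in the report (t = 18.68) this gives a p-value of essentially zero, matching the reported Prob. of 0.0000; for C(2) (t = 0.451155) it gives roughly 0.65, close to the reported 0.6579 (the small gap comes from using the normal rather than the t distribution with 16 degrees of freedom).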

7. Durbin-Watson Statistic.
The Durbin-Watson statistic measures the serial correlation in the residuals. The
statistic is computed as
DW = ∑(eₜ − eₜ₋₁)² / ∑eₜ² ,

where the numerator sum runs from t = 2 to n and the denominator sum from t = 1 to n.

As a rule of thumb, if the DW statistic is less than 2, there is evidence of positive serial
correlation. The DW statistic in our output (0.72) is well below 2, indicating the
presence of positive serial correlation in the residuals. See "Serial Correlation Theory,"
beginning on page 85, for a more extensive discussion of the Durbin-Watson statistic
and the consequences of serially correlated residuals.
There are better tests for serial correlation. In “Testing for Serial Correlation”
on page 86, we discuss the Q-statistic, and the Breusch-Godfrey LM test, both of
which provide a more general testing framework than the Durbin-Watson test.
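The Durbin-Watson formula is straightforward to evaluate directly; the residual series below is illustrative only (it drifts, so it exhibits positive serial correlation and DW comes out well below 2):

```python
# Durbin-Watson statistic from a residual series (illustrative data).
e = [1.0, 0.8, 0.9, -0.2, -0.5, -0.4, 0.3]

num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))  # sum over t = 2..n
den = sum(et ** 2 for et in e)                               # sum over t = 1..n
dw = num / den
```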

8. Mean and Standard Deviation (sY) of the Dependent Variable


The mean and standard deviation of y are computed using the standard
formulae:
ȳ = (1/n)·∑yᵢ ;   s_Y = √( ∑(yᵢ − ȳ)² / (n − 1) ) ,

with both sums running from i = 1 to n.
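A sketch of both formulae on illustrative data; note the n − 1 denominator in the standard deviation (the sample, not population, formula):

```python
import math

# Sample mean and standard deviation with the n - 1 denominator
# (equivalent to statistics.mean / statistics.stdev). Illustrative data.
y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(y)
y_bar = sum(y) / n
s_y = math.sqrt(sum((yi - y_bar) ** 2 for yi in y) / (n - 1))
```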
9. Akaike Information Criterion.
The Akaike Information Criterion (AIC) is computed as:
AIC = -2l/n + 2k/n
where l is the log likelihood (see the formula above).
The AIC is often used in model selection for non-nested alternatives - smaller
values of the AIC are preferred. For example, you can choose the length of a lag
distribution by choosing the specification with the lowest value of the AIC. See
Appendix C. “Information Criteria,” on page 771, for additional discussion.

10. Schwarz Criterion.


The Schwarz Criterion (SC) is an alternative to the AIC that imposes a larger
penalty for additional coefficients:

SC = -2l/n + (k·log n)/n

11. Hannan-Quinn Criterion.


The Hannan-Quinn Criterion (HQ) employs yet another penalty function:

HQ = -2l/n + 2k·log(log n)/n
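All three criteria share the −2l/n term and differ only in the penalty; evaluating them with the log likelihood, n, and k from the report above reproduces the values EViews prints:

```python
import math

# AIC, Schwarz and Hannan-Quinn criteria (values from the report above:
# log likelihood = -151.0492, n = 19, k = 3).
log_l, n, k = -151.0492, 19, 3
aic = -2 * log_l / n + 2 * k / n
sc = -2 * log_l / n + k * math.log(n) / n
hq = -2 * log_l / n + 2 * k * math.log(math.log(n)) / n
# The report shows 16.21571, 16.36483 and 16.24094 respectively.
```

Since log n > 2 for n > 7, the Schwarz criterion penalizes extra coefficients more heavily than the AIC in any realistic sample.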

12. F-Statistic.
The F-statistic reported in the regression output is from a test of the hypothesis
that all of the slope coefficients (excluding the constant, or intercept) in a regression
are zero. For ordinary least squares models, the F-statistic is computed as:

F = ( R²/(k − 1) ) / ( (1 − R²)/(n − k) )

n – number of observations, k – number of coefficients β.
Under the null hypothesis with normally distributed errors, this statistic has an F-
distribution with k-1 numerator degrees of freedom and n-k denominator degrees of
freedom.
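Evaluating this formula with the R², n, and k from the report above recovers the F-statistic that EViews prints:

```python
# F-statistic from R-squared, n and k (values from the report above).
r2, n, k = 0.956427, 19, 3
f_stat = (r2 / (k - 1)) / ((1 - r2) / (n - k))
# The report shows F-statistic = 175.5990, with Prob(F-statistic) = 0.000000.
```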
