
WHAT IS ECONOMETRICS?

Econometrics means “economic measurement.” Although measurement is an important part of
econometrics, the scope of econometrics is much broader. Econometrics may be defined as the
social science in which the tools of economic theory, mathematics, and statistical inference are
applied to the analysis of economic phenomena.
In the field of economics, empirical analysis has become an indispensable component of
contemporary research. The purpose of studying econometrics is to verify economic theory
empirically and to measure, explain, predict, and control the economy. Its applied part gives
students the confidence to understand how economic theory and the real world are connected.

METHODOLOGY OF ECONOMETRICS
How do econometricians proceed in their analysis of an economic problem? That is, what is their
methodology? Although there are several schools of thought on econometric methodology, we
present here the traditional or classical methodology, which still dominates empirical research
in economics and other social and behavioral sciences. Broadly speaking, traditional econometric
methodology proceeds along the following lines:
1. Statement of theory or hypothesis.
2. Specification of the mathematical model of the theory.
3. Specification of the statistical, or econometric, model.
4. Obtaining the data.
5. Estimation of the parameters of the econometric model.
6. Hypothesis testing.
7. Forecasting or prediction.
8. Using the model for control or policy purposes.
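To make the workflow concrete, the sketch below walks through steps 3 through 7 for a hypothetical consumption-income relationship in Python with statsmodels. The data, variable names, and coefficient values are invented for illustration only and are not from the text.

```python
# A minimal sketch of the classical methodology, assuming a hypothetical
# consumption function: consumption = beta1 + beta2 * income + u.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Steps 1-3: theory (0 < MPC < 1), mathematical model (Y = b1 + b2*X),
# econometric model (Y = b1 + b2*X + u).
income = rng.uniform(100, 500, size=50)                      # step 4: obtain data (simulated here)
consumption = 20 + 0.7 * income + rng.normal(0, 10, size=50)

X = sm.add_constant(income)
model = sm.OLS(consumption, X).fit()                         # step 5: estimate beta1, beta2
print(model.params)                                          # estimated intercept and slope

# Step 6: hypothesis testing; p-values for H0: coefficient = 0
print(model.pvalues)

# Step 7: forecast consumption at a new income level (constant term = 1, income = 300)
print(model.predict([1, 300]))
```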

THE NATURE OF REGRESSION ANALYSIS

THE MODERN INTERPRETATION OF REGRESSION


The modern interpretation of regression is, however, quite different. Broadly speaking, we may
say that regression analysis is concerned with the study of the dependence of one variable, the
dependent variable, on one or more other variables, the explanatory variables, with a view to
estimating and/or predicting the (population) mean or average value of the former in terms of the
known or fixed (in repeated sampling) values of the latter.
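As a small illustration of estimating the mean value of the dependent variable for fixed values of the explanatory variable, the sketch below averages consumption Y at each fixed income level X. The numbers are hypothetical and chosen only to show the idea of a conditional mean E(Y | X).

```python
# Hypothetical data: for each fixed weekly income X there is a distribution
# of weekly consumption Y; regression is about the mean of that distribution.
import numpy as np

income_levels = {                        # X: weekly income
    80:  [55, 60, 65, 70, 75],           # Y: consumption of families at that income
    100: [65, 70, 74, 80, 85, 88],
    120: [79, 84, 90, 94, 98],
}

for x, ys in income_levels.items():
    print(x, np.mean(ys))                # estimate of E(Y | X = x)
```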

STATISTICAL VERSUS DETERMINISTIC RELATIONSHIPS


Regression analysis is concerned with what is known as the statistical, not deterministic,
dependence among variables. In statistical relationships among variables, we essentially deal
with random or stochastic variables, that is, variables that have probability distributions. In
functional or deterministic dependency, on the other hand, we also deal with variables, but these
variables are not random or stochastic. The dependence of crop yield on temperature, rainfall,
sunshine, and fertilizer, for example, is statistical in nature in the sense that the explanatory
variables, although certainly important, will not enable the agronomist to predict crop yield
exactly because of errors involved in measuring these variables as well as a host of other factors
(variables) that collectively affect the yield but may be difficult to identify individually. Thus,
there is some “intrinsic” or random variability in the dependent-variable crop yield that cannot be
fully explained no matter how many explanatory variables we consider.
In deterministic phenomena, on the other hand, we deal with relationships of the type, say,
exhibited by Newton’s law of gravity, which states: Every particle in the universe attracts every
other particle with a force directly proportional to the product of their masses and inversely
proportional to the square of the distance between them. Symbolically, F = k(m1 m2 / r^2), where
F = force, m1 and m2 are the masses of the two particles, r = distance, and k = constant of
proportionality.
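The sketch below contrasts the two kinds of relationship described above: a deterministic formula returns exactly the same output for the same inputs, whereas a statistical relationship includes an unobservable random term, so the outcome varies even when the measured inputs are identical. All numbers and functional forms are illustrative assumptions, not values from the text.

```python
# Deterministic vs. statistical dependence (illustrative only).
import numpy as np

def gravity(m1, m2, r, k=6.674e-11):
    return k * m1 * m2 / r**2            # deterministic: same inputs, same F

rng = np.random.default_rng(1)

def crop_yield(rainfall, fertilizer):
    # statistical: a systematic part plus a random disturbance u
    return 2.0 + 0.03 * rainfall + 0.05 * fertilizer + rng.normal(0, 0.5)

print(gravity(5.0, 10.0, 2.0))                    # always the same value
print(crop_yield(600, 40), crop_yield(600, 40))   # differs from draw to draw
```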

REGRESSION VERSUS CAUSATION


Although regression analysis deals with the dependence of one variable on other variables, it
does not necessarily imply causation. In the words of Kendall and Stuart, “A statistical
relationship, however strong and however suggestive, can never establish causal connection: our
ideas of causation must come from outside statistics, ultimately from some theory or other.” In
the crop-yield example cited previously, there is no statistical reason to assume that rainfall does
not depend on crop yield. The fact that we treat crop yield as dependent on rainfall (among other
things) is due to non-statistical considerations: Common sense suggests that the relationship
cannot be reversed, for we cannot control rainfall by varying crop yield.
The point to note is that a statistical relationship in itself cannot logically imply causation.
To ascribe causality, one must appeal to a priori or theoretical considerations.

THE MEANING OF THE TERM LINEAR


The term “linear” regression will always mean a regression that is linear in the parameters, the
β’s (that is, the parameters are raised to the first power only); it may or may not be linear in the
explanatory variables, the X’s.
[Exercise Q # 2.6 & 2.7]
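As a sketch of this distinction, the model Y = β1 + β2X^2 + u is nonlinear in the variable X but linear in the parameters, so it is still a “linear” regression and can be fitted by ordinary least squares. The data below are simulated for illustration.

```python
# Linear in the parameters, nonlinear in the variable (illustrative).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = 1.0 + 0.5 * x**2 + rng.normal(0, 2, 100)

X = sm.add_constant(x**2)                # regress Y on X^2: still linear in beta1, beta2
print(sm.OLS(y, X).fit().params)

# By contrast, a model such as Y = beta1 + exp(beta2 * X) + u is nonlinear
# in beta2 and would require nonlinear estimation methods.
```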

THE SIGNIFICANCE OF THE STOCHASTIC DISTURBANCE TERM

The disturbance term stands in for all influences on the dependent variable that the model does not capture explicitly. The main reasons for including it are:
i. Vagueness of theory
ii. Unavailability of data
iii. Core variables versus peripheral variables
iv. Intrinsic randomness in human behavior
v. Poor proxy variables
vi. Principle of parsimony
vii. Wrong functional form

[See Section 2.5]


THE SAMPLE REGRESSION FUNCTION (SRF)
Our primary objective in regression analysis is to estimate the PRF on the basis of the SRF, because in practice our analysis is usually based on a single sample drawn from some population.

Note that an estimator, also known as a (sample) statistic, is simply a rule or formula or method
that tells how to estimate the population parameter from the information provided by the sample
at hand. A particular numerical value obtained by the estimator in an application is known as an
estimate.
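The sketch below illustrates the estimator/estimate distinction with the simplest case, the sample mean: the rule (formula) is the estimator, and the particular number it returns for one sample is the estimate. The data are simulated and purely illustrative.

```python
# Estimator vs. estimate (illustrative).
import numpy as np

def sample_mean(sample):                 # the estimator: a rule, not a number
    return sum(sample) / len(sample)

rng = np.random.default_rng(3)
one_sample = rng.normal(loc=50, scale=10, size=30)
print(sample_mean(one_sample))           # the estimate: one numerical value
```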

The estimate of the PRF based on the SRF is at best an approximate one. This approximation is shown
diagrammatically in the figure below.
For X = Xi, we have one (sample) observation Y = Yi. In terms of the SRF, the observed Yi can
be expressed as
Yi = Yˆi + uˆi = βˆ1 + βˆ2Xi + uˆi
and, in terms of the PRF, as
Yi = E(Y | Xi) + ui.
Now, in the figure, Yˆi overestimates the true E(Y | Xi) for the Xi shown therein. By the
same token, for any Xi to the left of the point A, the SRF will underestimate the true PRF. But we
readily see that such over- and underestimation is inevitable because of sampling fluctuations.
The critical question now is: Granted that the SRF is but an approximation of the PRF, can we
devise a rule or a method that will make this approximation as “close” as possible? In other
words, how should the SRF be constructed so that βˆ1 is as “close” as possible to the true β1 and
βˆ2 is as “close” as possible to the true β2, even though we will never know the true β1 and β2?
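The sketch below illustrates why the SRF is only an approximation of the PRF. It fixes a true PRF with known β1 and β2, draws several samples, and fits an SRF to each: every sample yields somewhat different βˆ1 and βˆ2 because of sampling fluctuation. The parameter values and sample sizes are illustrative assumptions.

```python
# Sampling fluctuation: different samples give different SRFs (illustrative).
import numpy as np
import statsmodels.api as sm

beta1, beta2 = 17.0, 0.6                 # the true (and in practice unknown) PRF
rng = np.random.default_rng(4)

for s in range(3):                       # three independent samples
    x = rng.uniform(80, 260, 10)
    y = beta1 + beta2 * x + rng.normal(0, 5, 10)   # PRF plus random disturbance
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(f"sample {s}: beta1_hat={fit.params[0]:.2f}, beta2_hat={fit.params[1]:.2f}")
```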
