Covariance


In probability theory and statistics, covariance is a measure of the joint variability of two random variables.[1] If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values (that is, the variables tend to show similar behavior), the covariance is positive.[2] In the opposite case, when the greater values of one variable mainly correspond to the lesser values of the other (that is, the variables tend to show opposite behavior), the covariance is negative. The sign of the covariance, therefore, shows the tendency in the linear relationship between the variables. The magnitude of the covariance is the geometric mean of the variances that are in common for the two random variables. The correlation coefficient normalizes the covariance by dividing by the geometric mean of the total variances of the two random variables.

A distinction must be made between (1) the covariance of two random variables, which is a population parameter that can be seen
as a property of the joint probability distribution, and (2) the sample covariance, which in addition to serving as a descriptor of the
sample, also serves as an estimated value of the population parameter.

Definition
For two jointly distributed real-valued random variables $X$ and $Y$ with finite second moments, the covariance is defined as the expected value (or mean) of the product of their deviations from their individual expected values:[3][4]: p. 119

$$\operatorname{cov}(X, Y) = \operatorname{E}\left[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])\right]$$

where $\operatorname{E}[X]$ is the expected value of $X$, also known as the mean of $X$. The covariance is also sometimes denoted $\sigma_{XY}$ or $\sigma(X, Y)$, in analogy to variance. By using the linearity property of expectations, this can be simplified to the expected value of their product minus the product of their expected values:

$$\operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y],$$

but this equation is susceptible to catastrophic cancellation (see the section on numerical computation below).

[Figure: The sign of the covariance of two random variables X and Y]
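As a quick numerical illustration (a minimal Python/NumPy sketch; the sample data and seed are arbitrary choices, not from the cited sources), the deviation-product form and the simplified form agree when computed on the same data:

```python
import numpy as np

# Check that E[(X - E[X])(Y - E[Y])] equals E[XY] - E[X]E[Y] on sample data.
rng = np.random.default_rng(42)
x = rng.normal(size=10_000)
y = 0.5 * x + rng.normal(size=10_000)   # correlated with x by construction

deviation_form = np.mean((x - x.mean()) * (y - y.mean()))
product_form = np.mean(x * y) - x.mean() * y.mean()

print(deviation_form, product_form)      # identical up to rounding
```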

The units of measurement of the covariance $\operatorname{cov}(X, Y)$ are those of $X$ times those of $Y$. By contrast, correlation coefficients, which depend on the covariance, are a dimensionless measure of linear dependence. (In fact, correlation coefficients can simply be understood as a normalized version of covariance.)

Definition for complex random variables

The covariance between two complex random variables $Z, W$ is defined as[4]: p. 119

$$\operatorname{cov}(Z, W) = \operatorname{E}\left[(Z - \operatorname{E}[Z])\overline{(W - \operatorname{E}[W])}\right] = \operatorname{E}\left[Z\overline{W}\right] - \operatorname{E}[Z]\operatorname{E}\left[\overline{W}\right]$$
Notice the complex conjugation of the second factor in the definition.

A related pseudo-covariance can also be defined.

Discrete random variables

If the (real) random variable pair $(X, Y)$ can take on the values $(x_i, y_i)$ for $i = 1, \ldots, n$, with equal probabilities $p_i = 1/n$, then the covariance can be equivalently written in terms of the means $\operatorname{E}[X]$ and $\operatorname{E}[Y]$ as

$$\operatorname{cov}(X, Y) = \frac{1}{n} \sum_{i=1}^{n} (x_i - \operatorname{E}[X])(y_i - \operatorname{E}[Y]).$$

It can also be equivalently expressed, without directly referring to the means, as[5]

$$\operatorname{cov}(X, Y) = \frac{1}{n^2} \sum_{i=1}^{n} \sum_{j > i} (x_i - x_j)(y_i - y_j) = \frac{1}{2n^2} \sum_{i=1}^{n} \sum_{j=1}^{n} (x_i - x_j)(y_i - y_j).$$

More generally, if there are $n$ possible realizations of $(X, Y)$, namely $(x_i, y_i)$ but with possibly unequal probabilities $p_i$ for $i = 1, \ldots, n$, then the covariance is

$$\operatorname{cov}(X, Y) = \sum_{i=1}^{n} p_i (x_i - \operatorname{E}[X])(y_i - \operatorname{E}[Y]).$$
Examples
Consider three independent random variables $A, B, C$ and two constants $q, r$:

$$X = qA + B, \qquad Y = rA + C, \qquad \operatorname{cov}(X, Y) = qr\operatorname{var}(A).$$

In the special case $q = r = 1$ and $B = C = 0$, so that $X = Y = A$, the covariance between $X$ and $Y$ is just the variance of $A$, and the name covariance is entirely appropriate.

Suppose that $X$ and $Y$ have the following joint probability mass function,[6] in which the six central cells give the discrete joint probabilities $f(x, y)$ of the six hypothetical realizations $(x, y)$:

  f(x, y)     x = 5   x = 6   x = 7   f_Y(y)
  y = 8         0      0.4     0.1     0.5
  y = 9        0.3      0      0.2     0.5
  f_X(x)       0.3     0.4     0.3      1

$X$ can take on three values (5, 6 and 7) while $Y$ can take on two (8 and 9). Their means are $\mu_X = 5(0.3) + 6(0.4) + 7(0.3) = 6$ and $\mu_Y = 8(0.5) + 9(0.5) = 8.5$. Then,

$$\operatorname{cov}(X, Y) = \sum_{(x, y)} f(x, y)(x - \mu_X)(y - \mu_Y) = (0.1)(7 - 6)(8 - 8.5) + (0.3)(5 - 6)(9 - 8.5) + (0.2)(7 - 6)(9 - 8.5) = -0.1,$$

where the three cells with $f(x, y) = 0$ and the cell with $x = \mu_X$ contribute nothing.

[Figure: Geometric interpretation of the covariance example. Each cuboid is the axis-aligned bounding box of its point (x, y, f(x, y)) and the X and Y means (magenta point). The covariance is the sum of the volumes of the cuboids in the 1st and 3rd quadrants (red) minus those in the 2nd and 4th (blue).]
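The same computation can be written directly against the table; this is a small Python sketch of the worked example above (array names are mine):

```python
import numpy as np

# Covariance from the joint pmf table above.
x_vals = np.array([5.0, 6.0, 7.0])
y_vals = np.array([8.0, 9.0])
# f[i, j] = P(Y = y_vals[i], X = x_vals[j]), rows as in the table
f = np.array([[0.0, 0.4, 0.1],
              [0.3, 0.0, 0.2]])

mu_x = (f.sum(axis=0) * x_vals).sum()    # marginal of X -> 6.0
mu_y = (f.sum(axis=1) * y_vals).sum()    # marginal of Y -> 8.5

cov = sum(f[i, j] * (x_vals[j] - mu_x) * (y_vals[i] - mu_y)
          for i in range(2) for j in range(3))
print(mu_x, mu_y, cov)                   # 6.0 8.5 -0.1
```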

Properties

Covariance with itself

The variance is a special case of the covariance in which the two variables are identical (that is, in which one variable always takes the same value as the other):[4]: p. 121

$$\operatorname{cov}(X, X) = \operatorname{var}(X) \equiv \sigma^2(X) \equiv \sigma_X^2.$$

Covariance of linear combinations

If $X$, $Y$, $W$, and $V$ are real-valued random variables and $a, b, c, d$ are real-valued constants, then the following facts are a consequence of the definition of covariance:

$$\begin{aligned}
\operatorname{cov}(X, a) &= 0 \\
\operatorname{cov}(X, X) &= \operatorname{var}(X) \\
\operatorname{cov}(X, Y) &= \operatorname{cov}(Y, X) \\
\operatorname{cov}(aX, bY) &= ab\operatorname{cov}(X, Y) \\
\operatorname{cov}(X + a, Y + b) &= \operatorname{cov}(X, Y) \\
\operatorname{cov}(aX + bY, cW + dV) &= ac\operatorname{cov}(X, W) + ad\operatorname{cov}(X, V) + bc\operatorname{cov}(Y, W) + bd\operatorname{cov}(Y, V)
\end{aligned}$$

For a sequence $X_1, \ldots, X_n$ of real-valued random variables and constants $a_1, \ldots, a_n$, we have

$$\operatorname{var}\left(\sum_{i=1}^{n} a_i X_i\right) = \sum_{i=1}^{n} a_i^2 \sigma^2(X_i) + 2 \sum_{i, j\,:\, i < j} a_i a_j \operatorname{cov}(X_i, X_j) = \sum_{i, j} a_i a_j \operatorname{cov}(X_i, X_j).$$
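The bilinear expansion can be sanity-checked on data; this Python sketch (the covariance matrix and constants below are arbitrary illustrative choices) verifies the last identity for sample covariances, for which bilinearity also holds exactly:

```python
import numpy as np

# Check cov(aX + bY, cW + dV) = ac*cov(X,W) + ad*cov(X,V) + bc*cov(Y,W) + bd*cov(Y,V).
rng = np.random.default_rng(7)
n = 200_000
X, Y, W, V = rng.multivariate_normal(
    mean=np.zeros(4),
    cov=[[1.0, 0.3, 0.2, 0.0],
         [0.3, 1.0, 0.0, 0.4],
         [0.2, 0.0, 1.0, 0.1],
         [0.0, 0.4, 0.1, 1.0]],
    size=n).T
a, b, c, d = 2.0, -1.0, 0.5, 3.0

def cov(u, v):
    return np.mean((u - u.mean()) * (v - v.mean()))

lhs = cov(a * X + b * Y, c * W + d * V)
rhs = a*c*cov(X, W) + a*d*cov(X, V) + b*c*cov(Y, W) + b*d*cov(Y, V)
print(lhs, rhs)   # agree up to floating-point rounding, since sample
                  # covariance is itself bilinear in the data
```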

Hoeffding's covariance identity

A useful identity to compute the covariance between two random variables $X, Y$ is the Hoeffding's covariance identity:[7]

$$\operatorname{cov}(X, Y) = \int_{\mathbb{R}} \int_{\mathbb{R}} \left( F_{(X, Y)}(x, y) - F_X(x) F_Y(y) \right) \, dx \, dy,$$

where $F_{(X, Y)}(x, y)$ is the joint cumulative distribution function of the random vector $(X, Y)$ and $F_X(x), F_Y(y)$ are the marginals.
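As a concrete check (a Python sketch under the assumption $X = Y \sim \mathrm{Uniform}(0, 1)$, a case chosen by me, not taken from the article): here $F_{(X,Y)}(x, y) = \min(x, y)$ and $F_X(x) F_Y(y) = xy$, so the identity should recover $\operatorname{cov}(X, X) = \operatorname{var}(X) = 1/12$:

```python
import numpy as np

# Midpoint-rule approximation of Hoeffding's identity on the unit square.
n = 2000
grid = (np.arange(n) + 0.5) / n
x, y = np.meshgrid(grid, grid)
integrand = np.minimum(x, y) - x * y      # F(x, y) - F_X(x) F_Y(y)
cov = integrand.sum() / n**2              # Riemann sum over [0, 1]^2

print(cov, 1 / 12)                        # ~0.083333 vs 0.083333...
```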

Uncorrelatedness and independence

Random variables whose covariance is zero are called uncorrelated.[4]: p. 121 Similarly, the components of random vectors whose covariance matrix is zero in every entry outside the main diagonal are also called uncorrelated.

If $X$ and $Y$ are independent random variables, then their covariance is zero.[4]: p. 123 [8] This follows because under independence, $\operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y]$. The converse, however, is not generally true. For example, let $X$ be uniformly distributed in $[-1, 1]$ and let $Y = X^2$. Clearly, $X$ and $Y$ are not independent, but

$$\operatorname{cov}(X, Y) = \operatorname{cov}(X, X^2) = \operatorname{E}[X^3] - \operatorname{E}[X]\operatorname{E}[X^2] = 0 - 0 \cdot \operatorname{E}[X^2] = 0.$$
In this case, the relationship between and is non-linear, while correlation and covariance are measures of linear dependence between two random variables.
This example shows that if two random variables are uncorrelated, that does not in general imply that they are independent. However, if two variables are jointly
normally distributed (but not if they are merely individually normally distributed), uncorrelatedness does imply independence.
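The counterexample above is easy to reproduce empirically; a minimal Python sketch (sample size and seed are arbitrary):

```python
import numpy as np

# X uniform on (-1, 1) and Y = X^2 are dependent, yet their covariance vanishes.
rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
print(cov_xy)   # ~0 (exactly 0 in the limit), despite Y being a function of X
```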

Relationship to inner products

Many of the properties of covariance can be extracted elegantly by observing that it satisfies similar properties to those of an inner product:

1. bilinear: for constants $a$ and $b$ and random variables $X, Y, Z$, $\operatorname{cov}(aX + bY, Z) = a\operatorname{cov}(X, Z) + b\operatorname{cov}(Y, Z)$;
2. symmetric: $\operatorname{cov}(X, Y) = \operatorname{cov}(Y, X)$;
3. positive semi-definite: $\sigma^2(X) = \operatorname{cov}(X, X) \ge 0$ for all random variables $X$, and $\operatorname{cov}(X, X) = 0$ implies that $X$ is constant almost surely.

In fact these properties imply that the covariance defines an inner product over the quotient vector space obtained by taking the subspace of random variables with
finite second moment and identifying any two that differ by a constant. (This identification turns the positive semi-definiteness above into positive definiteness.)
That quotient vector space is isomorphic to the subspace of random variables with finite second moment and mean zero; on that subspace, the covariance is exactly
the L2 inner product of real-valued functions on the sample space.

As a result, for random variables with finite variance, the inequality

$$\left|\operatorname{cov}(X, Y)\right| \le \sqrt{\sigma^2(X)\,\sigma^2(Y)}$$

holds via the Cauchy–Schwarz inequality.

Proof: If $\sigma^2(Y) = 0$, then $Y$ is constant almost surely, so $\operatorname{cov}(X, Y) = 0$ and the inequality holds trivially. Otherwise, let random variable

$$Z = X - \frac{\operatorname{cov}(X, Y)}{\sigma^2(Y)} Y.$$

Then we have

$$0 \le \sigma^2(Z) = \operatorname{cov}\left(X - \frac{\operatorname{cov}(X, Y)}{\sigma^2(Y)} Y,\ X - \frac{\operatorname{cov}(X, Y)}{\sigma^2(Y)} Y\right) = \sigma^2(X) - \frac{(\operatorname{cov}(X, Y))^2}{\sigma^2(Y)},$$

and rearranging gives $(\operatorname{cov}(X, Y))^2 \le \sigma^2(X)\,\sigma^2(Y)$, as claimed.
Calculating the sample covariance


The sample covariances among $K$ variables based on $N$ observations of each, drawn from an otherwise unobserved population, are given by the $K \times K$ matrix $\overline{\mathbf{q}} = [q_{jk}]$ with the entries

$$q_{jk} = \frac{1}{N - 1} \sum_{i=1}^{N} \left(X_{ij} - \bar{X}_j\right)\left(X_{ik} - \bar{X}_k\right),$$

which is an estimate of the covariance between variable $j$ and variable $k$.

The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector $\mathbf{X}$, a vector whose $j$th element ($j = 1, \ldots, K$) is one of the random variables. The reason the sample covariance matrix has $N - 1$ in the denominator rather than $N$ is essentially that the population mean $\operatorname{E}(\mathbf{X})$ is not known and is replaced by the sample mean $\bar{\mathbf{X}}$. If the population mean $\operatorname{E}(\mathbf{X})$ is known, the analogous unbiased estimate is given by

$$q_{jk} = \frac{1}{N} \sum_{i=1}^{N} \left(X_{ij} - \operatorname{E}(X_j)\right)\left(X_{ik} - \operatorname{E}(X_k)\right).$$
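The $N - 1$ estimator is what standard libraries compute by default; a short Python sketch (synthetic data, names illustrative) builds the matrix from the formula and checks it against NumPy's `np.cov`:

```python
import numpy as np

# Sample covariance matrix with the N - 1 denominator, compared to np.cov
# (np.cov expects variables in rows by default, hence rowvar=False here).
rng = np.random.default_rng(5)
N, K = 500, 3
data = rng.normal(size=(N, K))            # N observations of K variables

centered = data - data.mean(axis=0)
q = centered.T @ centered / (N - 1)       # K x K matrix of entries q_jk

print(np.allclose(q, np.cov(data, rowvar=False)))   # True
```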

Generalizations

Auto-covariance matrix of real random vectors

For a vector $\mathbf{X} = \begin{bmatrix} X_1 & X_2 & \ldots & X_m \end{bmatrix}^{\mathrm{T}}$ of $m$ jointly distributed random variables with finite second moments, its auto-covariance matrix (also known as the variance–covariance matrix or simply the covariance matrix) $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ (also denoted by $\Sigma(\mathbf{X})$ or $\operatorname{cov}(\mathbf{X}, \mathbf{X})$) is defined as[9]: p. 335

$$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{cov}(\mathbf{X}, \mathbf{X}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathrm{T}}\right] = \operatorname{E}\left[\mathbf{X}\mathbf{X}^{\mathrm{T}}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm{T}}.$$

Let $\mathbf{X}$ be a random vector with covariance matrix $\Sigma$, and let $A$ be a matrix that can act on $\mathbf{X}$ on the left. The covariance matrix of the matrix-vector product $A\mathbf{X}$ is:

$$\operatorname{cov}(A\mathbf{X}, A\mathbf{X}) = \operatorname{E}\left[A\mathbf{X}\mathbf{X}^{\mathrm{T}}A^{\mathrm{T}}\right] - \operatorname{E}[A\mathbf{X}]\operatorname{E}\left[\mathbf{X}^{\mathrm{T}}A^{\mathrm{T}}\right] = A\Sigma A^{\mathrm{T}}.$$

This is a direct result of the linearity of expectation and is useful when applying a linear transformation, such as a whitening transformation, to a vector.
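The rule $\operatorname{cov}(A\mathbf{X}) = A\Sigma A^{\mathrm{T}}$ is exactly what makes whitening work; the following Python sketch (the 2×2 covariance matrix is an arbitrary illustrative choice) applies $A = \Sigma^{-1/2}$ and checks that the result has identity covariance:

```python
import numpy as np

# Whitening a correlated Gaussian vector: cov(AX) = A Sigma A^T = I
# when A = Sigma^(-1/2).
rng = np.random.default_rng(11)
sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])                 # covariance matrix of X
samples = rng.multivariate_normal([0, 0], sigma, size=500_000)

evals, evecs = np.linalg.eigh(sigma)           # symmetric eigendecomposition
A = evecs @ np.diag(evals**-0.5) @ evecs.T     # Sigma^(-1/2)

whitened = samples @ A.T                       # apply x -> A x to each row
print(np.cov(whitened, rowvar=False))          # approximately the identity
```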

Cross-covariance matrix of real random vectors

For real random vectors $\mathbf{X} \in \mathbb{R}^m$ and $\mathbf{Y} \in \mathbb{R}^n$, the $m \times n$ cross-covariance matrix is equal to[9]: p. 336

$$\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{cov}(\mathbf{X}, \mathbf{Y}) = \operatorname{E}\left[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{Y} - \operatorname{E}[\mathbf{Y}])^{\mathrm{T}}\right] = \operatorname{E}\left[\mathbf{X}\mathbf{Y}^{\mathrm{T}}\right] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{\mathrm{T}} \qquad \text{(Eq. 2)}$$

where $\mathbf{Y}^{\mathrm{T}}$ is the transpose of the vector (or matrix) $\mathbf{Y}$.

The $(i, j)$-th element of this matrix is equal to the covariance $\operatorname{cov}(X_i, Y_j)$ between the $i$-th scalar component of $\mathbf{X}$ and the $j$-th scalar component of $\mathbf{Y}$. In particular, $\operatorname{cov}(\mathbf{Y}, \mathbf{X})$ is the transpose of $\operatorname{cov}(\mathbf{X}, \mathbf{Y})$.
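A minimal Python sketch (synthetic vectors of my own choosing) estimates the cross-covariance matrix from samples and checks the transpose relationship:

```python
import numpy as np

# Cross-covariance of an R^3-valued X and an R^2-valued Y, from samples.
rng = np.random.default_rng(13)
n = 100_000
X = rng.normal(size=(n, 3))
Y = X @ rng.normal(size=(3, 2)) + rng.normal(size=(n, 2))

Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
K_xy = Xc.T @ Yc / n                      # 3 x 2 cross-covariance estimate
K_yx = Yc.T @ Xc / n                      # 2 x 3 estimate

print(np.allclose(K_xy, K_yx.T))          # True: cov(Y, X) = cov(X, Y)^T
```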

Cross-covariance sesquilinear form of random vectors in a real or complex Hilbert space

More generally let $H_1 = (H_1, \langle \cdot, \cdot \rangle_1)$ and $H_2 = (H_2, \langle \cdot, \cdot \rangle_2)$ be Hilbert spaces over $\mathbb{R}$ or $\mathbb{C}$ with $\langle \cdot, \cdot \rangle$ anti linear in the first variable, and let $\mathbf{X}, \mathbf{Y}$ be $H_1$ resp. $H_2$ valued random variables. Then the covariance of $\mathbf{X}$ and $\mathbf{Y}$ is the sesquilinear form on $H_1 \times H_2$ (anti linear in the first variable) given by

$$\operatorname{cov}(\mathbf{X}, \mathbf{Y})(h_1, h_2) = \operatorname{E}\left[\langle h_1, \mathbf{X} - \operatorname{E}[\mathbf{X}]\rangle_1 \langle \mathbf{Y} - \operatorname{E}[\mathbf{Y}], h_2\rangle_2\right].$$

Numerical computation
When $\operatorname{E}[XY] \approx \operatorname{E}[X]\operatorname{E}[Y]$, the equation $\operatorname{cov}(X, Y) = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]$ is prone to catastrophic cancellation if $\operatorname{E}[XY]$ and $\operatorname{E}[X]\operatorname{E}[Y]$ are not computed exactly and thus should be avoided in computer programs when the data has not been centered before.[10] Numerically stable algorithms should be preferred in this case.[11]
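The failure mode is easy to provoke; this Python sketch (the offset and spreads are artificial choices made to exaggerate the effect) compares the naive one-pass formula with the centered two-pass formula:

```python
import numpy as np

# Data far from the origin: the naive formula subtracts two nearly equal
# numbers of order 1e18, destroying the small true covariance; centering
# the data first avoids the cancellation.
rng = np.random.default_rng(0)
n = 1_000_000
offset = 1e9
x = offset + rng.standard_normal(n)
y = offset + 0.1 * rng.standard_normal(n)   # independent of x: true cov ~ 0

naive = np.mean(x * y) - np.mean(x) * np.mean(y)      # cancels badly
two_pass = np.mean((x - x.mean()) * (y - y.mean()))   # numerically stable

print(naive)      # can be off by orders of magnitude
print(two_pass)   # close to 0, the true covariance
```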

Comments
The covariance is sometimes called a measure of "linear dependence" between the two random variables. That does not mean the same thing as in the context of
linear algebra (see linear dependence). When the covariance is normalized, one obtains the Pearson correlation coefficient, which gives the goodness of the fit for
the best possible linear function describing the relation between the variables. In this sense covariance is a linear gauge of dependence.

Applications

In genetics and molecular biology

Covariance is an important measure in biology. Certain sequences of DNA are conserved more than others among species, and thus to study secondary and tertiary structures of proteins, or of RNA structures, sequences are compared in closely related species. If sequence changes are found or no changes at all are found in noncoding RNA (such as microRNA), sequences are found to be necessary for common structural motifs, such as an RNA loop. In genetics, covariance serves as a basis for computation of the Genetic Relationship Matrix (GRM) (aka kinship matrix), enabling inference on population structure from a sample with no known close relatives as well as estimation of the heritability of complex traits.

In the theory of evolution and natural selection, the Price equation describes how a genetic trait changes in frequency over time. The equation uses a covariance between a trait and fitness to give a mathematical description of evolution and natural selection. It provides a way to understand the effects that gene transmission and natural selection have on the proportion of genes within each new generation of a population.[12][13] The Price equation was derived by George R. Price to re-derive W. D. Hamilton's work on kin selection. Examples of the Price equation have been constructed for various evolutionary cases.

In financial economics
Covariances play a key role in financial economics, especially in modern portfolio theory and in the capital asset pricing model. Covariances among various assets'
returns are used to determine, under certain assumptions, the relative amounts of different assets that investors should (in a normative analysis) or are predicted to
(in a positive analysis) choose to hold in a context of diversification.

In meteorological and oceanographic data assimilation

The covariance matrix is important in estimating the initial conditions required for running weather forecast models, a procedure known as data assimilation. The
'forecast error covariance matrix' is typically constructed between perturbations around a mean state (either a climatological or ensemble mean). The 'observation
error covariance matrix' is constructed to represent the magnitude of combined observational errors (on the diagonal) and the correlated errors between
measurements (off the diagonal). This is an example of its widespread application to Kalman filtering and more general state estimation for time-varying systems.

In micrometeorology

The eddy covariance technique is a key atmospheric measurement technique in which the covariance between the instantaneous deviation in vertical wind speed from the mean value and the instantaneous deviation in gas concentration is the basis for calculating the vertical turbulent fluxes.

In signal processing

The covariance matrix is used to capture the spectral variability of a signal.[14]

In statistics and image processing

The covariance matrix is used in principal component analysis to reduce feature dimensionality in data preprocessing.

See also
Algorithms for calculating covariance
Analysis of covariance
Autocovariance
Covariance function
Covariance matrix
Covariance operator
Distance covariance (or Brownian covariance)
Law of total covariance
Propagation of uncertainty

References
1. Rice, John (2007). Mathematical Statistics and Data Analysis. Belmont, CA: Brooks/Cole Cengage Learning. p. 138. ISBN 978-0534-39942-9.
2. Weisstein, Eric W. "Covariance" (https://mathworld.wolfram.com/Covariance.html). MathWorld.
3. Oxford Dictionary of Statistics, Oxford University Press, 2002, p. 104.
4. Park, Kun Il (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
5. Yuli Zhang; Huaiyu Wu; Lei Cheng (June 2012). Some new deformation formulas about variance and covariance. Proceedings of 4th International Conference on Modelling, Identification and Control (ICMIC2012). pp. 987–992.
6. "Covariance of X and Y | STAT 414/415" (https://web.archive.org/web/20170817034656/https://onlinecourses.science.psu.edu/stat414/node/109). The Pennsylvania State University. Archived from the original (https://onlinecourses.science.psu.edu/stat414/node/109) on August 17, 2017. Retrieved August 4, 2019.
7. Papoulis (1991). Probability, Random Variables and Stochastic Processes. McGraw-Hill.
8. Siegrist, Kyle. "Covariance and Correlation" (http://www.randomservices.org/random/expect/Covariance.html). University of Alabama in Huntsville. Retrieved Oct 3, 2022.
9. Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.
10. Donald E. Knuth (1998). The Art of Computer Programming, volume 2: Seminumerical Algorithms, 3rd edn., p. 232. Boston: Addison-Wesley.
11. Schubert, Erich; Gertz, Michael (2018). "Numerically stable parallel computation of (co-)variance" (http://dl.acm.org/citation.cfm?doid=3221269.3223036). Proceedings of the 30th International Conference on Scientific and Statistical Database Management – SSDBM '18. Bozen-Bolzano, Italy: ACM Press: 1–12. doi:10.1145/3221269.3223036. ISBN 9781450365055. S2CID 49665540.
12. Price, George (1970). "Selection and covariance". Nature. 227 (5257): 520–521. Bibcode:1970Natur.227..520P. doi:10.1038/227520a0. PMID 5428476. S2CID 4264723.
13. Harman, Oren (2020). "When science mirrors life: on the origins of the Price equation" (https://royalsocietypublishing.org/toc/rstb/2020/375/1797). Phil. Trans. R. Soc. B. 375 (1797): 1–7. doi:10.1098/rstb.2019.0352. PMC 7133509. PMID 32146891. Retrieved 2020-05-15.
14. Sahidullah, Md.; Kinnunen, Tomi (March 2016). "Local spectral variability features for speaker verification" (https://erepo.uef.fi/handle/123456789/4375). Digital Signal Processing. 50: 1–11. doi:10.1016/j.dsp.2015.10.011.
