Normal distribution
The probability density function of the normal distribution is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$

The parameter $\mu$ is the mean or expectation of the distribution (and also its median and mode), while the parameter $\sigma^2$ is the variance. The standard deviation of the distribution is $\sigma$ (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
Normal distributions are important in statistics and are often used in the natural and social sciences
to represent real-valued random variables whose distributions are not known.[5][6] Their importance is
partly due to the central limit theorem. It states that, under some conditions, the average of many
samples (observations) of a random variable with finite mean and variance is itself a random variable
—whose distribution converges to a normal distribution as the number of samples increases.
Therefore, physical quantities that are expected to be the sum of many independent processes, such
as measurement errors, often have distributions that are nearly normal.[7]
Moreover, Gaussian distributions have some unique properties that are valuable in analytic studies.
For instance, any linear combination of a fixed collection of independent normal deviates is a normal
deviate. Many results and methods, such as propagation of uncertainty and least squares[8]
parameter fitting, can be derived analytically in explicit form when the relevant variables are normally
distributed.
A normal distribution is sometimes informally called a bell curve.[9][10] However, many other
distributions are bell-shaped (such as the Cauchy, Student's t, and logistic distributions). (For other
names, see Naming.)
The univariate probability distribution is generalized for vectors in the multivariate normal distribution
and for matrices in the matrix normal distribution.
Definitions

Standard normal distribution

The simplest case of a normal distribution is known as the standard normal distribution or unit normal distribution. This is a special case when $\mu = 0$ and $\sigma^2 = 1$, and it is described by the probability density function

$$\varphi(z) = \frac{e^{-z^2/2}}{\sqrt{2\pi}}.$$

The variable $z$ has a mean of 0 and a variance and standard deviation of 1. The density $\varphi(z)$ has its peak $1/\sqrt{2\pi}$ at $z = 0$ and inflection points at $z = +1$ and $z = -1$.

Some authors define the standard normal differently. Carl Friedrich Gauss, for example, once defined it as

$$\varphi(z) = \frac{e^{-z^2}}{\sqrt{\pi}},$$

which has a variance of $1/2$, and Stephen Stigler[12] once defined the standard normal as

$$\varphi(z) = e^{-\pi z^2},$$

which has a simple functional form and a variance of $\sigma^2 = 1/(2\pi)$.

[Infobox: Normal distribution. Notation $X \sim \mathcal{N}(\mu, \sigma^2)$;[1] parameters $\mu \in \mathbb{R}$ (mean, location) and $\sigma^2 > 0$ (variance, squared scale); support $x \in \mathbb{R}$.]
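To make the definition concrete, here is a minimal Python sketch (the function name is illustrative) that evaluates the density above and checks the stated peak and symmetry:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2) evaluated at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Standard normal: peak 1/sqrt(2*pi) at z = 0, inflection points at z = +/-1.
print(normal_pdf(0.0))                    # 0.3989422804014327 = 1/sqrt(2*pi)
print(normal_pdf(1.0), normal_pdf(-1.0))  # equal, by symmetry
```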
Alternative parameterizations
Some authors advocate using the precision $\tau$ as the parameter defining the width of the distribution, instead of the standard deviation $\sigma$ or the variance $\sigma^2$. The precision is normally defined as the reciprocal of the variance, $\tau = 1/\sigma^2$.[15] The formula for the distribution then becomes

$$f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau(x-\mu)^2/2}.$$

This choice is claimed to have advantages in numerical computations when $\sigma^2$ is very close to zero, and simplifies formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Alternatively, the reciprocal of the standard deviation $\tau' = 1/\sigma$ might be defined as the precision, in which case the expression of the normal distribution becomes

$$f(x) = \frac{\tau'}{\sqrt{2\pi}}\, e^{-(\tau')^2(x-\mu)^2/2}.$$

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, and simple approximate formulas for the quantiles of the distribution.
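A short sketch (illustrative names) showing that the precision form reproduces the usual density:

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def normal_pdf_precision(x, mu, tau):
    """Same density parameterized by precision tau = 1/sigma^2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-0.5 * tau * (x - mu) ** 2)

x, mu, sigma = 0.7, 2.0, 3.0
assert abs(normal_pdf(x, mu, sigma) - normal_pdf_precision(x, mu, 1 / sigma**2)) < 1e-12
```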
The normal distributions form an exponential family with natural parameters $\theta_1 = \mu/\sigma^2$ and $\theta_2 = -1/(2\sigma^2)$, and natural statistics $x$ and $x^2$. The dual expectation parameters for the normal distribution are $\eta_1 = \mu$ and $\eta_2 = \mu^2 + \sigma^2$.
Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter $\Phi$, is the integral

$$\Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-t^2/2}\, dt.$$
Error function
The related error function $\operatorname{erf}(x)$ gives the probability of a random variable, with normal distribution of mean 0 and variance 1/2, falling in the range $[-x, x]$. That is:

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2}\, dt.$$
These integrals cannot be expressed in terms of elementary functions, and are often said to be
special functions. However, many numerical approximations are known; see below for more.
For a generic normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the cumulative distribution function is

$$F(x) = \Phi\!\left(\frac{x-\mu}{\sigma}\right) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right].$$
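Since math.erf is available in Python's standard library, the generic CDF can be computed directly from this identity; a minimal sketch:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """F(x) = Phi((x - mu)/sigma), via the error-function identity above."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(normal_cdf(0.0))    # 0.5, by symmetry
print(normal_cdf(1.96))   # ~0.975
```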
The graph of the standard normal cumulative distribution function $\Phi$ has 2-fold rotational symmetry around the point (0, 1/2); that is, $\Phi(-x) = 1 - \Phi(x)$. Its antiderivative (indefinite integral) can be expressed as follows:

$$\int \Phi(x)\, dx = x\,\Phi(x) + \varphi(x) + C.$$
The cumulative distribution function of the standard normal distribution can be expanded by integration by parts into a series:

$$\Phi(x) = \frac{1}{2} + \varphi(x)\left[x + \frac{x^3}{3} + \frac{x^5}{3\cdot 5} + \cdots + \frac{x^{2n+1}}{(2n+1)!!} + \cdots\right],$$

where $!!$ denotes the double factorial.
An asymptotic expansion of the cumulative distribution function for large x can also be derived using
integration by parts. For more, see Error function § Asymptotic expansion.[19]
A quick approximation to the standard normal distribution's cumulative distribution function can be found by using a Taylor series approximation:

$$\Phi(x) \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \sum_{k=0}^{n} \frac{(-1)^k\, x^{2k+1}}{2^k\, k!\, (2k+1)}.$$
Recursive computation with Taylor series expansion

The recursive nature of the family of derivatives may be used to easily construct a rapidly converging Taylor series expansion using recursive entries about any point of known value of the distribution, $\Phi(x_0)$:

$$\Phi(x) = \sum_{n=0}^{\infty} \frac{\Phi^{(n)}(x_0)}{n!}(x - x_0)^n,$$

where:

$$\Phi^{(0)}(x_0) = \Phi(x_0), \qquad \Phi^{(1)}(x_0) = \varphi(x_0), \qquad \Phi^{(n)}(x_0) = -\left(x_0\,\Phi^{(n-1)}(x_0) + (n-2)\,\Phi^{(n-2)}(x_0)\right), \quad n \ge 2.$$
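A minimal Python sketch of this recursion (names are illustrative); it expands $\Phi$ about a point $x_0$ where $\Phi(x_0)$ is known:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def taylor_cdf(x, x0, cdf_x0, terms=40):
    """Phi(x) from a Taylor expansion about x0, given cdf_x0 = Phi(x0)."""
    d = [cdf_x0, phi(x0)]                 # Phi^(0) and Phi^(1) at x0
    for n in range(2, terms):
        d.append(-(x0 * d[n - 1] + (n - 2) * d[n - 2]))
    return sum(d[n] * (x - x0) ** n / math.factorial(n) for n in range(terms))

print(taylor_cdf(1.0, 0.0, 0.5))   # ~0.8413447..., i.e. Phi(1)
```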
Using the Taylor series and Newton's method for the inverse function
An application for the above Taylor series expansion is to use Newton's method to reverse the computation. That is, if we have a value for the cumulative distribution function, $\Phi(x)$, but do not know the $x$ needed to obtain the $\Phi(x)$, we can use Newton's method to find $x$, and use the Taylor series expansion above to minimize the number of computations. Newton's method is ideal for this problem because the first derivative of $\Phi(x)$ is simply the standard normal density $\varphi(x)$, which is readily available to use in the Newton's method solution.

To solve, select a known approximate solution, $x_0$, to the desired $\Phi(x)$. $x_0$ may be a value from a distribution table, or an intelligent estimate followed by a computation of $\Phi(x_0)$ using any desired means. Use this value of $x_0$ and the Taylor series expansion above to minimize computations.
Repeat the following process until the difference between the computed $\Phi(x_n)$ and the desired value, which we will call $\Phi(\text{desired})$, is below a chosen acceptably small error, such as $10^{-5}$, $10^{-15}$, etc.:

$$x_{n+1} = x_n - \frac{\Phi(x_n, x_0, \Phi(x_0)) - \Phi(\text{desired})}{\varphi(x_n)},$$

where $\Phi(x_n, x_0, \Phi(x_0))$ is the $\Phi(x_n)$ from a Taylor series solution using $x_0$ and $\Phi(x_0)$, and $\varphi(x_n)$ is the standard normal density at $x_n$.

When the repeated computations converge to an error below the chosen acceptably small value, $x$ will be the value needed to obtain a $\Phi(x)$ of the desired value, $\Phi(\text{desired})$.
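Putting the two pieces together, a hedged sketch of the Newton iteration, building on the taylor_cdf and phi helpers from the sketch above (names illustrative):

```python
def inverse_cdf(p, x0=0.0, cdf_x0=0.5, tol=1e-12, max_iter=50):
    """Find x with Phi(x) = p by Newton's method, evaluating the CDF
    with the Taylor expansion about x0 (where Phi(x0) = cdf_x0)."""
    x = x0
    for _ in range(max_iter):
        err = taylor_cdf(x, x0, cdf_x0) - p
        if abs(err) < tol:
            break
        x -= err / phi(x)   # the derivative of Phi is the density phi
    return x

print(inverse_cdf(0.975))   # ~1.95996..., the familiar 97.5% quantile
```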
Standard deviation and coverage

[Figure: For the normal distribution, the values less than one standard deviation away from the mean account for 68.27% of the set; two standard deviations from the mean account for 95.45%; and three standard deviations account for 99.73%.]
About 68% of values drawn from a normal distribution are within one standard deviation σ from the
mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three
standard deviations.[9] This fact is known as the 68–95–99.7 (empirical) rule, or the 3-sigma rule.
More precisely, the probability that a normal deviate lies in the range between $\mu - n\sigma$ and $\mu + n\sigma$ is given by

$$F(\mu + n\sigma) - F(\mu - n\sigma) = \Phi(n) - \Phi(-n) = \operatorname{erf}\!\left(\frac{n}{\sqrt{2}}\right).$$
n   p = F(μ+nσ) − F(μ−nσ)   1 − p                 1 / (1 − p)          OEIS
1   0.682 689 492 137       0.317 310 507 863     3.151 487 187 53     OEIS: A178647
2   0.954 499 736 104       0.045 500 263 896     21.977 894 5080      OEIS: A110894
3   0.997 300 203 937       0.002 699 796 063     370.398 347 345      OEIS: A270712
4   0.999 936 657 516       0.000 063 342 484     15 787.192 7673
5   0.999 999 426 697       0.000 000 573 303     1 744 277.893 62
6   0.999 999 998 027       0.000 000 001 973     506 797 345.897
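These entries can be reproduced with the error-function identity above; a quick sketch:

```python
import math

for n in range(1, 7):
    p = math.erf(n / math.sqrt(2.0))   # P(|X - mu| < n*sigma)
    print(n, p, 1.0 - p, 1.0 / (1.0 - p))
```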
Quantile function

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

$$\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).$$

For a normal random variable with mean $\mu$ and variance $\sigma^2$, the quantile function is

$$F^{-1}(p) = \mu + \sigma\,\Phi^{-1}(p) = \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2p - 1), \qquad p \in (0, 1).$$

The quantile $\Phi^{-1}(p)$ of the standard normal distribution is commonly denoted as $z_p$. These values are used in hypothesis testing, construction of confidence intervals and Q–Q plots. A normal random variable $X$ will exceed $\mu + z_p\sigma$ with probability $1 - p$, and will lie outside the interval $\mu \pm z_p\sigma$ with probability $2(1 - p)$. In particular, the quantile $z_{0.975}$ is 1.96; therefore a normal random variable will lie outside the interval $\mu \pm 1.96\sigma$ in only 5% of cases.
The following table gives the quantile $z_p$ such that $X$ will lie in the range $\mu \pm z_p\sigma$ with a specified probability $p$. These values are useful to determine tolerance intervals for sample averages and other statistical estimators with normal (or asymptotically normal) distributions.[20]
p       z_p                 p             z_p
0.90    1.644 853 626 951   0.9999        3.890 591 886 413
0.95    1.959 963 984 540   0.99999       4.417 173 413 469
0.98    2.326 347 874 041   0.999999      4.891 638 475 699
0.99    2.575 829 303 549   0.9999999     5.326 723 886 384
0.995   2.807 033 768 344   0.99999999    5.730 728 868 236
0.998   3.090 232 306 168   0.999999999   6.109 410 204 869
For small $p$, the quantile function has the useful asymptotic expansion

$$\Phi^{-1}(p) = -\sqrt{\ln\frac{1}{p^2} - \ln\ln\frac{1}{p^2} - \ln(2\pi)} + o(1).$$
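Python's standard library exposes this quantile function directly; a short check of the table's first entries:

```python
from statistics import NormalDist

z = NormalDist()            # standard normal
print(z.inv_cdf(0.95))      # ~1.6448536..., two-sided 90% coverage
print(z.inv_cdf(0.975))     # ~1.9599640..., two-sided 95% coverage
```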
Properties
The normal distribution is the only distribution whose cumulants beyond the first two (i.e., other than
the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for
a specified mean and variance.[21][22] Geary has shown, assuming that the mean and variance are
finite, that the normal distribution is the only distribution where the mean and variance calculated
from a set of independent draws are independent of each other.[23][24]
The normal distribution is a subclass of the elliptical distributions. The normal distribution is
symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable
model for variables that are inherently positive or strongly skewed, such as the weight of a person or
the price of a share. Such variables may be better described by other distributions, such as the log-
normal distribution or the Pareto distribution.
The value of the normal density is practically zero when the value lies more than a few standard
deviations away from the mean (e.g., a spread of three standard deviations covers all but 0.27% of
the total distribution). Therefore, it may not be an appropriate model when one expects a significant
fraction of outliers—values that lie many standard deviations away from the mean—and least squares
and other statistical inference methods that are optimal for normally distributed variables often
become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution
should be assumed and the appropriate robust statistical inference methods applied.
The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed random variables, whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and
infinite variance. It is one of the few distributions that are stable and that have probability density
functions that can be expressed analytically, the others being the Cauchy distribution and the Lévy
distribution.
The normal distribution with density $f(x)$ (mean $\mu$ and variance $\sigma^2 > 0$) has the following properties:

It is symmetric around the point $x = \mu$, which is at the same time the mode, the median and the mean of the distribution.[25]
It is unimodal: its first derivative is positive for $x < \mu$, negative for $x > \mu$, and zero only at $x = \mu$.
The area bounded by the curve and the $x$-axis is unity (i.e. equal to one).
Its density has two inflection points (where the second derivative of $f$ is zero and changes sign), located one standard deviation away from the mean, namely at $x = \mu - \sigma$ and $x = \mu + \sigma$.[25]

Furthermore, the density $\varphi$ of the standard normal distribution (i.e. $\mu = 0$ and $\sigma = 1$) also has the following properties:

Its first derivative is $\varphi'(x) = -x\,\varphi(x)$.
Its second derivative is $\varphi''(x) = (x^2 - 1)\,\varphi(x)$.
More generally, its $n$th derivative is $\varphi^{(n)}(x) = (-1)^n \operatorname{He}_n(x)\,\varphi(x)$, where $\operatorname{He}_n(x)$ is the $n$th (probabilist) Hermite polynomial.

The probability that a normally distributed variable $X$ with known $\mu$ and $\sigma^2$ is in a particular set can be calculated by using the fact that the fraction $Z = (X - \mu)/\sigma$ has a standard normal distribution.
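As a sketch of this standardization trick, computing $P(a < X < b)$ for $X \sim \mathcal{N}(\mu, \sigma^2)$ by reducing to the standard normal CDF:

```python
from statistics import NormalDist

def prob_in_interval(a, b, mu, sigma):
    """P(a < X < b) for X ~ N(mu, sigma^2), via Z = (X - mu)/sigma."""
    z = NormalDist()
    return z.cdf((b - mu) / sigma) - z.cdf((a - mu) / sigma)

print(prob_in_interval(-1.0, 1.0, 0.0, 1.0))   # ~0.6827, the one-sigma rule
```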
Moments

The plain and absolute moments of a variable $X$ are the expected values of $X^p$ and $|X|^p$, respectively. If the expected value $\mu$ of $X$ is zero, these parameters are called central moments; otherwise, these parameters are called non-central moments. Usually we are interested only in moments with integer order $p$.

If $X$ has a normal distribution, the non-central moments exist and are finite for any $p$ whose real part is greater than −1. For any non-negative integer $p$, the plain central moments are:[28]

$$\operatorname{E}\left[(X-\mu)^p\right] = \begin{cases} 0 & \text{if } p \text{ is odd,} \\ \sigma^p\,(p-1)!! & \text{if } p \text{ is even.} \end{cases}$$

Here $n!!$ denotes the double factorial, that is, the product of all numbers from $n$ to 1 that have the same parity as $n$.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer $p$,

$$\operatorname{E}\left[|X-\mu|^p\right] = \sigma^p\,(p-1)!! \cdot \begin{cases} \sqrt{2/\pi} & \text{if } p \text{ is odd,} \\ 1 & \text{if } p \text{ is even.} \end{cases}$$

The last formula is valid also for any non-integer $p > -1$. When the mean $\mu \neq 0$, the plain and absolute moments can be expressed in terms of the confluent hypergeometric functions ${}_1F_1$ and $U$.[29] These expressions remain valid even if $p$ is not an integer. See also generalized Hermite polynomials.
Order   Non-central moment $\operatorname{E}[X^p]$   Central moment $\operatorname{E}[(X-\mu)^p]$
1       $\mu$                                         0
2       $\mu^2 + \sigma^2$                            $\sigma^2$
3       $\mu^3 + 3\mu\sigma^2$                        0
4       $\mu^4 + 6\mu^2\sigma^2 + 3\sigma^4$          $3\sigma^4$
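A quick numerical cross-check of the central-moment formula (a sketch using plain trapezoidal quadrature over a wide interval; names illustrative):

```python
import math

def central_moment(p, sigma=1.0, mu=0.0, steps=200_000, span=12.0):
    """E[(X - mu)^p] for X ~ N(mu, sigma^2), by the trapezoidal rule."""
    a, b = mu - span * sigma, mu + span * sigma
    h = (b - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        pdf = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        total += w * (x - mu) ** p * pdf
    return total * h

print(central_moment(4))   # ~3.0 = sigma^4 * (4-1)!! for p = 4
```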
The expectation of $X$ conditioned on the event that $X$ lies in an interval $[a, b]$ is given by

$$\operatorname{E}[X \mid a < X < b] = \mu - \sigma^2\,\frac{f(b) - f(a)}{F(b) - F(a)},$$

where $f$ and $F$ respectively are the density and the cumulative distribution function of $X$. For $b = \infty$ this is known as the inverse Mills ratio. Note that above, the density $f$ of $X$ is used instead of the standard normal density as in the inverse Mills ratio, so here we have $\sigma^2$ instead of $\sigma$.
Fourier transform and characteristic function

The Fourier transform of a normal density $f$ with mean $\mu$ and variance $\sigma^2$ is[30]

$$\hat{f}(t) = \int_{-\infty}^{\infty} f(x)\, e^{-itx}\, dx = e^{-i\mu t}\, e^{-\frac{1}{2}(\sigma t)^2},$$

where $i$ is the imaginary unit. If the mean $\mu = 0$, the first factor is 1, and the Fourier transform is, apart from a constant factor, a normal density on the frequency domain, with mean 0 and variance $1/\sigma^2$.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable $X$ is closely connected to the characteristic function $\varphi_X(t)$ of that variable, which is defined as the expected value of $e^{itX}$, as a function of the real variable $t$ (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-valued variable $t$.[31] The relation between both is:

$$\varphi_X(t) = \hat{f}(-t).$$
Moment- and cumulant-generating functions

The moment generating function of a real random variable $X$ is the expected value of $e^{tX}$, as a function of the real parameter $t$. For a normal distribution with density $f$, mean $\mu$ and variance $\sigma^2$, the moment generating function exists and is equal to

$$M(t) = \operatorname{E}\left[e^{tX}\right] = e^{\mu t + \frac{1}{2}\sigma^2 t^2}.$$

For any $k$, the coefficient of $t^k/k!$ in the moment generating function (expressed as an exponential power series in $t$) is the normal distribution's expected value $\operatorname{E}[X^k]$.

The cumulant generating function is the logarithm of the moment generating function, namely

$$g(t) = \ln M(t) = \mu t + \tfrac{1}{2}\sigma^2 t^2.$$

The coefficients of this exponential power series define the cumulants, but because this is a quadratic polynomial in $t$, only the first two cumulants are nonzero, namely the mean $\mu$ and the variance $\sigma^2$.
Some authors prefer to instead work with the characteristic function $\operatorname{E}[e^{itX}] = e^{i\mu t - \frac{1}{2}\sigma^2 t^2}$ and $\ln \operatorname{E}[e^{itX}] = i\mu t - \tfrac{1}{2}\sigma^2 t^2$.
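A sketch using SymPy (a third-party library, if available) to confirm that expanding the MGF recovers the moments and that the cumulant generating function is quadratic:

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', real=True, positive=True)
M = sp.exp(mu * t + sigma**2 * t**2 / 2)        # moment generating function

# E[X] and E[X^2] are the derivatives of M at t = 0:
print(sp.diff(M, t).subs(t, 0))                 # mu
print(sp.expand(sp.diff(M, t, 2).subs(t, 0)))   # mu**2 + sigma**2

print(sp.expand(sp.log(M)))                     # mu*t + sigma**2*t**2/2
```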
Within Stein's method the Stein operator and class of a random variable $X \sim \mathcal{N}(\mu, \sigma^2)$ are $\mathcal{A}f(x) = \sigma^2 f'(x) - (x - \mu)f(x)$ and the class $\mathcal{F}$ of all absolutely continuous functions $f : \mathbb{R} \to \mathbb{R}$ such that $\operatorname{E}[|f'(X)|] < \infty$.
Zero-variance limit
In the limit when $\sigma$ approaches zero, the probability density $f(x)$ approaches zero everywhere except at $x = \mu$, where it approaches infinity, while its integral remains equal to 1. An extension of the normal distribution to the case with zero variance can be defined using the Dirac delta measure $\delta$, although the resulting random variables are not absolutely continuous and thus do not have probability density functions. The cumulative distribution function of such a random variable is then the Heaviside step function translated by the mean $\mu$, namely

$$F(x) = \mathbf{1}\{x \ge \mu\}.$$
Maximum entropy
Of all probability distributions over the reals with a specified finite mean $\mu$ and finite variance $\sigma^2$, the normal distribution $f(x; \mu, \sigma^2)$ is the one with maximum entropy.[21] To see this, let $X$ be a continuous random variable with probability density $f(x)$. The entropy of $X$ is defined as[32][33][34]

$$H(X) = -\int_{-\infty}^{\infty} f(x)\ln f(x)\, dx.$$

To maximize the entropy subject to the normalization, mean, and variance constraints, introduce Lagrange multipliers and consider the functional

$$L = -\int f(x)\ln f(x)\,dx + \lambda_0\left(\int f(x)\,dx - 1\right) + \lambda_1\left(\int x f(x)\,dx - \mu\right) + \lambda_2\left(\int (x-\mu)^2 f(x)\,dx - \sigma^2\right).$$

At maximum entropy, a small variation $\delta f(x)$ about $f(x)$ will produce a variation $\delta L$ about $L$ which is equal to 0:

$$\delta L = \int \delta f(x)\left(-\ln f(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 (x-\mu)^2\right) dx = 0.$$

Since this must hold for any small $\delta f(x)$, the factor multiplying $\delta f(x)$ must be zero, and solving for $f(x)$ yields:

$$f(x) = \exp\left(\lambda_0 - 1 + \lambda_1 x + \lambda_2 (x-\mu)^2\right).$$

The Lagrange constraints that $f(x)$ is properly normalized and has the specified mean and variance are satisfied if and only if $\lambda_0$, $\lambda_1$, and $\lambda_2$ are chosen so that

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}},$$

whose entropy is $H(X) = \tfrac{1}{2}\ln(2\pi e \sigma^2)$.
Other properties
2. If $X$ and $Y$ are jointly normal and uncorrelated, then they are independent. The requirement that $X$ and $Y$ should be jointly normal is essential; without it the property does not hold.[36][37][proof] For non-normal random variables uncorrelatedness does not imply independence.
4. The Fisher information matrix for a normal distribution w.r.t. $\mu$ and $\sigma^2$ is diagonal and takes the form

$$\mathcal{I}(\mu, \sigma^2) = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}.$$
5. The conjugate prior of the mean of a normal distribution is another normal distribution.[39] Specifically, if $x_1, \ldots, x_n$ are iid $\mathcal{N}(\mu, \sigma^2)$ and the prior is $\mu \sim \mathcal{N}(\mu_0, \sigma_0^2)$, then the posterior distribution for the estimator of $\mu$ will be

$$\mu \mid x_1, \ldots, x_n \sim \mathcal{N}\!\left(\frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\,\bar{x}}{\frac{\sigma^2}{n} + \sigma_0^2},\ \left(\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}\right)^{-1}\right)$$

(a numerical sketch of this update appears after this list).
6. The family of normal distributions not only forms an exponential family (EF), but in fact forms a
natural exponential family (NEF) with quadratic variance function (NEF-QVF). Many properties of
normal distributions generalize to properties of NEF-QVF distributions, NEF distributions, or EF
distributions generally. The NEF-QVF class comprises six families, including the Poisson, gamma, binomial, and negative binomial distributions, while many of the common families studied in probability and statistics are NEF or EF.
7. In information geometry, the family of normal distributions forms a statistical manifold with constant curvature $-1$. The same family is flat with respect to the (±1)-connections $\nabla^{(e)}$ and $\nabla^{(m)}$.[40]
8. If $X_1, \ldots, X_n$ are distributed according to $\mathcal{N}(\mu, \sigma^2)$, then $\operatorname{E}[\max_i X_i] \le \mu + \sigma\sqrt{2\ln n}$. Note that there is no assumption of independence.[41]
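As promised in property 5, a small sketch (illustrative names) of the normal-normal conjugate update:

```python
def posterior_for_mean(data, sigma2, mu0, sigma0_sq):
    """Posterior N(mu_post, var_post) for the mean of iid N(mu, sigma2) data,
    given a N(mu0, sigma0_sq) prior on mu."""
    n = len(data)
    xbar = sum(data) / n
    precision = n / sigma2 + 1.0 / sigma0_sq              # posterior precision
    mu_post = (n * xbar / sigma2 + mu0 / sigma0_sq) / precision
    return mu_post, 1.0 / precision

print(posterior_for_mean([4.8, 5.1, 5.3], sigma2=1.0, mu0=0.0, sigma0_sq=100.0))
```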
Related distributions
Central limit theorem

The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, suppose $X_1, \ldots, X_n$ are independent and identically distributed random variables with the same arbitrary distribution, zero mean, and variance $\sigma^2$, and $Z$ is their mean scaled by $\sqrt{n}$:

$$Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right).$$

Then, as $n$ increases, the probability distribution of $Z$ will tend to the normal distribution with zero mean and variance $\sigma^2$.

The theorem can be extended to variables $X_i$ that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.
Many test statistics, scores, and estimators encountered in practice contain sums of certain random
variables in them, and even more estimators can be represented as sums of random variables
through the use of influence functions. The central limit theorem implies that those statistical
parameters will have asymptotically normal distributions.
The central limit theorem also implies that certain distributions can be approximated by the normal
distribution, for example:
The Poisson distribution with parameter $\lambda$ is approximately normal with mean $\lambda$ and variance $\lambda$, for large values of $\lambda$.[42]
The chi-squared distribution $\chi^2(k)$ is approximately normal with mean $k$ and variance $2k$, for large $k$.
The Student's t-distribution $t(\nu)$ is approximately normal with mean 0 and variance 1 when $\nu$ is large.
Whether these approximations are sufficiently accurate depends on the purpose for which they are
needed, and the rate of convergence to the normal distribution. It is typically the case that such
approximations are less accurate in the tails of the distribution.
A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem; improvements of the approximation are given by the Edgeworth expansions.
This theorem can also be used to justify modeling the sum of many uniform noise sources as
Gaussian noise. See AWGN.
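A small simulation sketch of this point: sums of uniform noise quickly look Gaussian:

```python
import random
from statistics import NormalDist

# Sum 12 U(-0.5, 0.5) variables: mean 0, variance 12 * (1/12) = 1.
samples = [sum(random.uniform(-0.5, 0.5) for _ in range(12)) for _ in range(100_000)]

within_one_sigma = sum(abs(s) < 1.0 for s in samples) / len(samples)
print(within_one_sigma)                  # ~0.6827 for a standard normal
print(2 * NormalDist().cdf(1.0) - 1.0)   # exact normal value, for comparison
```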
Operations and functions of normal variables
[Figure: a: Probability density of a function cos x² of a normal variable x with μ = −2 and σ = 3. b: Probability density of a function xy of two normal variables x and y, where μx = 1, μy = 2, σx = 0.1, σy = 0.2, and ρxy = 0.8. c: Heat map of the joint probability density of two functions of two correlated normal variables x and y, where μx = −2, μy = 5, σ²x = 10, σ²y = 20, and ρxy = 0.495. d: Probability density of a function |x1| + ... + |x4| of four iid standard normal variables. These are computed by the numerical method of ray-tracing.[43]]
The probability density, cumulative distribution, and inverse cumulative distribution of any function of one or more independent or correlated normal variables can be computed with the numerical method of ray-tracing[43] (Matlab code (https://www.mathworks.com/matlabcentral/fileexchange/84973-integrate-and-classify-normal-distributions)). In the following sections we look at some special cases.
If $X$ is distributed normally with mean $\mu$ and variance $\sigma^2$, then:

$aX + b$, for any real numbers $a$ and $b$, is also normally distributed, with mean $a\mu + b$ and variance $a^2\sigma^2$. That is, the family of normal distributions is closed under linear transformations.
The absolute value of normalized residuals, $|X - \mu|/\sigma$, has a chi distribution with one degree of freedom: $|X - \mu|/\sigma \sim \chi_1$.
The square of $X/\sigma$ has the noncentral chi-squared distribution with one degree of freedom: $X^2/\sigma^2 \sim \chi_1^2(\mu^2/\sigma^2)$. If $\mu = 0$, the distribution is called simply chi-squared.
The log-likelihood of a normal variable $x$ is simply the log of its probability density function:

$$\ln f(x) = -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2 - \ln\left(\sigma\sqrt{2\pi}\right).$$

Since this is a scaled and shifted square of a standard normal variable, it is distributed as a scaled and shifted chi-squared variable.
The distribution of the variable $X$ restricted to an interval $[a, b]$ is called the truncated normal distribution.
$(X - \mu)^{-2}$ has a Lévy distribution with location 0 and scale $\sigma^{-2}$.
If $X_1$ and $X_2$ are two independent normal random variables, with means $\mu_1$, $\mu_2$ and variances $\sigma_1^2$, $\sigma_2^2$, then their sum $X_1 + X_2$ will also be normally distributed,[proof] with mean $\mu_1 + \mu_2$ and variance $\sigma_1^2 + \sigma_2^2$.

In particular, if $X$ and $Y$ are independent normal deviates with zero mean and variance $\sigma^2$, then $X + Y$ and $X - Y$ are also independent and normally distributed, with zero mean and variance $2\sigma^2$. This is a special case of the polarization identity.[44]
If $X_1$, $X_2$ are two independent normal deviates with mean $\mu$ and variance $\sigma^2$, and $a$, $b$ are arbitrary real numbers, then the variable

$$X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2 + b^2}} + \mu$$

is also normally distributed with mean $\mu$ and variance $\sigma^2$. It follows that the normal distribution is stable (with exponent $\alpha = 2$).
If $X$ and $Y$ are two independent standard normal random variables with mean 0 and variance 1, then their sum and difference are distributed normally with mean zero and variance two: $X \pm Y \sim \mathcal{N}(0, 2)$.
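A quick simulation sketch of the closure under addition:

```python
import random
from statistics import fmean, pvariance

x = [random.gauss(1.0, 2.0) for _ in range(200_000)]    # N(1, 4)
y = [random.gauss(-3.0, 1.0) for _ in range(200_000)]   # N(-3, 1)
s = [a + b for a, b in zip(x, y)]

print(fmean(s), pvariance(s))   # ~ -2 and ~5, i.e. N(mu1+mu2, sigma1^2+sigma2^2)
```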
The split normal distribution is most directly defined in terms of joining scaled sections of the density
functions of different normal distributions and rescaling the density to integrate to one. The truncated
normal distribution results from rescaling a section of a single density function.
For any positive integer $n$, any normal distribution with mean $\mu$ and variance $\sigma^2$ is the distribution of the sum of $n$ independent normal deviates, each with mean $\mu/n$ and variance $\sigma^2/n$. This property is called infinite divisibility.[49]

Conversely, if $X_1$ and $X_2$ are independent random variables and their sum $X_1 + X_2$ has a normal distribution, then both $X_1$ and $X_2$ must be normal deviates.[50]
This result is known as Cramér's decomposition theorem, and is equivalent to saying that the
convolution of two distributions is normal if and only if both are normal. Cramér's theorem implies
that a linear combination of independent non-Gaussian variables will never have an exactly normal
distribution, although it may approach it arbitrarily closely.[35]
The Kac–Bernstein theorem states that if $X$ and $Y$ are independent and $X + Y$ and $X - Y$ are also independent, then both X and Y must necessarily have normal distributions.[51][52]

More generally, if $X_1, \ldots, X_n$ are independent random variables, then two distinct linear combinations $\sum a_k X_k$ and $\sum b_k X_k$ will be independent if and only if all $X_k$ are normal and $\sum a_k b_k \sigma_k^2 = 0$, where $\sigma_k^2$ denotes the variance of $X_k$.[51]
Extensions
The notion of normal distribution, being one of the most important distributions in probability theory,
has been extended far beyond the standard framework of the univariate (that is one-dimensional)
case (Case 1). All these extensions are also called normal or Gaussian laws, so a certain ambiguity in
names exists.
The multivariate normal distribution describes the Gaussian law in the $k$-dimensional Euclidean space. A vector $X \in \mathbb{R}^k$ is multivariate-normally distributed if any linear combination of its components $\sum_{j=1}^{k} a_j X_j$ has a (univariate) normal distribution. The variance of $X$ is a $k \times k$ symmetric positive-definite matrix $V$. The multivariate normal distribution is a special case of the elliptical distributions. As such, its iso-density loci in the $k = 2$ case are ellipses and in the case of arbitrary $k$ are ellipsoids.
Rectified Gaussian distribution: a rectified version of the normal distribution, with all the negative elements reset to 0.
Complex normal distribution deals with the complex normal vectors. A complex vector $X \in \mathbb{C}^k$ is said to be normal if both its real and imaginary components jointly possess a $2k$-dimensional multivariate normal distribution. The variance-covariance structure of $X$ is described by two matrices: the variance matrix $\Gamma$, and the relation matrix $C$.
Gaussian processes are the normally distributed stochastic processes. These can be viewed as
elements of some infinite-dimensional Hilbert space H, and thus are the analogues of multivariate
normal vectors for the case k = ∞. A random element h ∈ H is said to be normal if for any
constant a ∈ H the scalar product (a, h) has a (univariate) normal distribution. The variance
structure of such Gaussian random element can be described in terms of the linear covariance
operator K: H → H. Several Gaussian processes became popular enough to have their own names:
Brownian motion;
Ornstein–Uhlenbeck process.
the q-Gaussian is an analogue of the Gaussian distribution, in the sense that it maximises the
Tsallis entropy, and is one type of Tsallis distribution. This distribution is different from the
Gaussian q-distribution above.
The split normal distribution has density proportional to a normal density with variance $\sigma_1^2$ to the left of the mode and variance $\sigma_2^2$ to the right:

$$f(x) \propto \begin{cases} \exp\left(-\dfrac{(x-\mu)^2}{2\sigma_1^2}\right), & x \le \mu, \\[6pt] \exp\left(-\dfrac{(x-\mu)^2}{2\sigma_2^2}\right), & x > \mu, \end{cases}$$

where $\mu$ is the mean and $\sigma_1^2$ and $\sigma_2^2$ are the variances of the distribution to the left and right of the mean respectively.

The mean $\operatorname{E}(X)$, variance $\operatorname{V}(X)$, and third central moment $\operatorname{T}(X)$ of this distribution have been determined.[53]
One of the main practical uses of the Gaussian law is to model the empirical distributions of many
different random variables encountered in practice. In such case a possible extension would be a
richer family of distributions, having more than two parameters and therefore being able to fit the
empirical distribution more accurately. The examples of such extensions are:
Pearson distribution — a four-parameter family of probability distributions that extend the normal
law to include different skewness and kurtosis values.
The generalized normal distribution, also known as the exponential power distribution, allows for
distribution tails with thicker or thinner asymptotic behaviors.
Statistical inference
Estimation of parameters
It is often the case that we do not know the parameters of the normal distribution, but instead want to estimate them. That is, having a sample $(x_1, \ldots, x_n)$ from a normal $\mathcal{N}(\mu, \sigma^2)$ population we would like to learn the approximate values of parameters $\mu$ and $\sigma^2$. The standard approach to this problem is the maximum likelihood method, which requires maximization of the log-likelihood function:

$$\ln \mathcal{L}(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2.$$

Taking derivatives with respect to $\mu$ and $\sigma^2$ and solving the resulting system of first order conditions yields the maximum likelihood estimates:

$$\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.$$

Then $\ln \mathcal{L}(\hat{\mu}, \hat{\sigma}^2)$ is as follows:

$$\ln \mathcal{L}(\hat{\mu}, \hat{\sigma}^2) = -\frac{n}{2}\left(\ln(2\pi\hat{\sigma}^2) + 1\right).$$
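A minimal sketch of these estimators in Python (the function name is illustrative):

```python
def normal_mle(sample):
    """Maximum likelihood estimates (mu_hat, sigma2_hat) for a normal sample."""
    n = len(sample)
    mu_hat = sum(sample) / n
    sigma2_hat = sum((x - mu_hat) ** 2 for x in sample) / n   # divisor n, not n-1
    return mu_hat, sigma2_hat

print(normal_mle([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # (5.0, 4.0)
```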
Sample mean
Estimator $\hat{\mu}$ is called the sample mean, since it is the arithmetic mean of all observations. The statistic $\bar{x}$ is complete and sufficient for $\mu$, and therefore by the Lehmann–Scheffé theorem, $\hat{\mu}$ is the uniformly minimum variance unbiased (UMVU) estimator.[54] In finite samples it is distributed normally:

$$\hat{\mu} \sim \mathcal{N}(\mu, \sigma^2/n).$$
The variance of this estimator is equal to the $\mu\mu$-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$. This implies that the estimator is finite-sample efficient. Of practical importance is the fact that the standard error of $\hat{\mu}$ is proportional to $1/\sqrt{n}$, that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations.

From the standpoint of the asymptotic theory, $\hat{\mu}$ is consistent, that is, it converges in probability to $\mu$ as $n \to \infty$. The estimator is also asymptotically normal, which is a simple corollary of the fact that it is normal in finite samples:

$$\sqrt{n}(\hat{\mu} - \mu) \xrightarrow{d} \mathcal{N}(0, \sigma^2).$$
Sample variance
The estimator $\hat{\sigma}^2$ is called the sample variance, since it is the variance of the sample $(x_1, \ldots, x_n)$. In practice, another estimator is often used instead of the $\hat{\sigma}^2$. This other estimator is denoted $s^2$, and is also called the sample variance, which represents a certain ambiguity in terminology; its square root $s$ is called the sample standard deviation. The estimator $s^2$ differs from $\hat{\sigma}^2$ by having $(n-1)$ instead of $n$ in the denominator (the so-called Bessel's correction):

$$s^2 = \frac{n}{n-1}\hat{\sigma}^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2.$$

The difference between $s^2$ and $\hat{\sigma}^2$ becomes negligibly small for large $n$'s. In finite samples however, the motivation behind the use of $s^2$ is that it is an unbiased estimator of the underlying parameter $\sigma^2$, whereas $\hat{\sigma}^2$ is biased.
The first of these expressions shows that the variance of $s^2$ is equal to $2\sigma^4/(n-1)$, which is slightly greater than the $\sigma\sigma$-element of the inverse Fisher information matrix $\mathcal{I}^{-1}$, which is $2\sigma^4/n$. Thus, $s^2$ is not an efficient estimator for $\sigma^2$, and moreover, since $s^2$ is UMVU, we can conclude that the finite-sample efficient estimator for $\sigma^2$ does not exist.
Applying the asymptotic theory, both estimators $s^2$ and $\hat{\sigma}^2$ are consistent, that is they converge in probability to $\sigma^2$ as the sample size $n \to \infty$. The two estimators are also both asymptotically normal:

$$\sqrt{n}(\hat{\sigma}^2 - \sigma^2) \simeq \sqrt{n}(s^2 - \sigma^2) \xrightarrow{d} \mathcal{N}(0, 2\sigma^4).$$
By Cochran's theorem, for normal distributions the sample mean $\hat{\mu}$ and the sample variance $s^2$ are independent, which means there can be no gain in considering their joint distribution. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between $\hat{\mu}$ and $s$ can be employed to construct the so-called t-statistic:

$$t = \frac{\hat{\mu} - \mu}{s/\sqrt{n}} \sim t_{n-1}.$$

This quantity $t$ has the Student's t-distribution with $(n-1)$ degrees of freedom, and it is an ancillary statistic (independent of the value of the parameters). Inverting the distribution of this t-statistic will allow us to construct the confidence interval for μ;[55] similarly, inverting the χ² distribution of the statistic $s^2$ will give us the confidence interval for σ²:[56]

$$\mu \in \left[\hat{\mu} - t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}},\ \hat{\mu} + t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right], \qquad \sigma^2 \in \left[\frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\ \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}}\right],$$
where $t_{k,p}$ and $\chi^2_{k,p}$ are the $p$th quantiles of the $t$- and $\chi^2$-distributions respectively. These confidence intervals are of the confidence level $1 - \alpha$, meaning that the true values $\mu$ and $\sigma^2$ fall outside of these intervals with probability (or significance level) $\alpha$. In practice people usually take $\alpha$ = 5%, resulting in the 95% confidence intervals. The confidence interval for $\sigma$ can be found by taking the square root of the interval bounds for $\sigma^2$.
Approximate formulas can be derived from the asymptotic distributions of $\hat{\mu}$ and $s^2$:

$$\mu \in \left[\hat{\mu} - |z_{\alpha/2}|\,\frac{s}{\sqrt{n}},\ \hat{\mu} + |z_{\alpha/2}|\,\frac{s}{\sqrt{n}}\right], \qquad \sigma^2 \in \left[s^2 - |z_{\alpha/2}|\,\frac{\sqrt{2}\,s^2}{\sqrt{n}},\ s^2 + |z_{\alpha/2}|\,\frac{\sqrt{2}\,s^2}{\sqrt{n}}\right].$$

The approximate formulas become valid for large values of $n$, and are more convenient for manual calculation since the standard normal quantiles $z_{\alpha/2}$ do not depend on $n$. In particular, the most popular value of $\alpha$ = 5% results in $|z_{0.025}|$ = 1.96.
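A sketch of the large-n approximate interval for μ, using the 1.96 quantile above (the function name is illustrative):

```python
import math

def approx_mean_ci(sample, z=1.96):
    """Approximate 95% confidence interval for the mean, valid for large n."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)   # Bessel-corrected
    half_width = z * math.sqrt(s2 / n)
    return xbar - half_width, xbar + half_width

print(approx_mean_ci([4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 5.1, 4.9]))
```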
Normality tests
Normality tests assess the likelihood that the given data set {x1, ..., xn} comes from a normal
distribution. Typically the null hypothesis H0 is that the observations are distributed normally with
unspecified mean μ and variance σ2, versus the alternative Ha that the distribution is arbitrary. Many
tests (over 40) have been devised for this problem. The more prominent of them are outlined below:
Diagnostic plots are more intuitively appealing but subjective at the same time, as they rely on
informal human judgement to accept or reject the null hypothesis.
Q–Q plot, also known as normal probability plot or rankit plot—is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it is a plot of points of the form (Φ⁻¹(p_k), x_(k)), where the plotting points p_k are equal to p_k = (k − α)/(n + 1 − 2α) and α is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line. (A sketch computing these plotting positions appears after this list.)
P–P plot – similar to the Q–Q plot, but used much less frequently. This method consists of plotting the points (Φ(z_(k)), p_k), where $z_{(k)} = (x_{(k)} - \hat{\mu})/\hat{\sigma}$. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).
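A minimal sketch computing Q–Q plot coordinates with the common choice α = 3/8 (Blom's plotting positions); the plotting itself is left to any charting library:

```python
from statistics import NormalDist

def qq_points(sample, a=0.375):
    """(theoretical quantile, observed value) pairs for a normal Q-Q plot."""
    xs = sorted(sample)
    n = len(xs)
    z = NormalDist()
    return [(z.inv_cdf((k - a) / (n + 1 - 2 * a)), x)
            for k, x in enumerate(xs, start=1)]

for q, x in qq_points([4.2, 5.0, 4.8, 5.5, 5.1, 4.6]):
    print(f"{q:+.3f}  {x:.1f}")   # roughly linear if the data are normal
```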
Goodness-of-fit tests:
Moment-based tests:
Jarque–Bera test
Shapiro–Wilk test: This is based on the fact that the line in the Q–Q plot has the slope of σ. The
test compares the least squares estimate of that slope with the value of the sample variance, and
rejects the null hypothesis if these two quantities differ significantly.
Anderson–Darling test
Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:
Either the mean, or the variance, or neither, may be considered a fixed quantity.
When the variance is unknown, analysis may be done directly in terms of the variance, or in terms
of the precision, the reciprocal of the variance. The reason for expressing the formulas in terms of
precision is that the analysis of most cases is simplified.
Either conjugate or improper prior distributions may be placed on the unknown variables.
An additional set of cases occurs in Bayesian linear regression, where in the basic model the data
is assumed to be normally distributed, and normal priors are placed on the regression coefficients.
The resulting analysis is similar to the basic cases of independent identically distributed data.
The formulas for the non-linear-regression cases are summarized in the conjugate prior article.
Scalar form
The following auxiliary formula is useful for simplifying the posterior update equations, which otherwise become fairly tedious:

$$a(x-y)^2 + b(x-z)^2 = (a+b)\left(x - \frac{ay + bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2.$$

This equation rewrites the sum of two quadratics in x by expanding the squares, grouping the terms in x, and completing the square. Note the following about the constant factors attached to some of the terms:

1. The factor $\frac{ay + bz}{a+b}$ has the form of a weighted average of y and z.
2. The factor $\frac{ab}{a+b} = \left(\frac{1}{a} + \frac{1}{b}\right)^{-1}$ arises from a situation where the reciprocals of quantities a and b add directly, so to combine a and b themselves, it is necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that $\frac{ab}{a+b}$ is one-half the harmonic mean of a and b.
Vector form
A similar formula can be written for the sum of two vector quadratics: If x, y, z are vectors of length k,
and A and B are symmetric, invertible matrices of size , then
where
In other words, it sums up all possible combinations of products of pairs of elements from x, with a
separate coefficient for each. In addition, since , only the sum matters for
any off-diagonal elements of A, and there is no loss of generality in assuming that A is symmetric.
Furthermore, if A is symmetric, then the form
where
For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with known variance σ², the conjugate prior distribution is also normally distributed.

This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ². Then if $x \sim \mathcal{N}(\mu, 1/\tau)$ and $\mu \sim \mathcal{N}(\mu_0, 1/\tau_0)$, we proceed as follows.

First, the likelihood function is (using the formula above for the sum of differences from the mean):

$$p(\mathbf{X} \mid \mu, \tau) = \prod_{i=1}^{n} \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{\tau}{2}(x_i - \mu)^2\right) = \left(\frac{\tau}{2\pi}\right)^{n/2} \exp\left(-\frac{\tau}{2}\left(\sum_{i=1}^{n}(x_i - \bar{x})^2 + n(\bar{x} - \mu)^2\right)\right).$$
This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:

$$\tau_0' = \tau_0 + n\tau, \qquad \mu_0' = \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$

That is, to combine n data points with total precision of nτ (or equivalently, total variance of σ²/n) and mean of values $\bar{x}$, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a precision-weighted average, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This
makes logical sense if the precision is thought of as indicating the certainty of the observations: In
the distribution of the posterior mean, each of the input components is weighted by its certainty, and
the certainty of this distribution is the sum of the individual certainties. (For the intuition of this,
compare the expression "the whole is (or is not) greater than the sum of its parts". In addition,
consider that the knowledge of the posterior comes from a combination of the knowledge of the
prior and likelihood, so it makes sense that we are more certain of it than of either of its
components.)
The above formula reveals why it is more convenient to do Bayesian analysis of conjugate priors for
the normal distribution in terms of the precision. The posterior precision is simply the sum of the
prior and likelihood precisions, and the posterior mean is computed through a precision-weighted
average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the uglier formulas

$$\sigma_0'^2 = \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}, \qquad \mu_0' = \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i.$$
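A compact sketch of this precision-based update (names illustrative; the same math as the posterior_for_mean example earlier, rephrased in precisions):

```python
def update_mean_posterior(tau0, mu0, tau, data):
    """Posterior (tau0', mu0') for the mean of N(mu, 1/tau) data with known tau,
    given a N(mu0, 1/tau0) prior on mu."""
    n = len(data)
    xbar = sum(data) / n
    tau0_post = tau0 + n * tau                            # precisions add
    mu0_post = (n * tau * xbar + tau0 * mu0) / tau0_post  # precision-weighted mean
    return tau0_post, mu0_post

print(update_mean_posterior(tau0=0.01, mu0=0.0, tau=1.0, data=[4.8, 5.1, 5.3]))
```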
For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ² is as follows:

$$p(\sigma^2 \mid \nu_0, \sigma_0^2) \propto (\sigma^2)^{-(1+\nu_0/2)} \exp\left(-\frac{\nu_0\sigma_0^2}{2\sigma^2}\right).$$

The likelihood function from above, written in terms of the variance, is:

$$p(\mathbf{X} \mid \mu, \sigma^2) = \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left(-\frac{S}{2\sigma^2}\right), \qquad \text{where } S = \sum_{i=1}^{n}(x_i - \mu)^2.$$

Then the posterior is again a scaled inverse chi-squared distribution, with updated parameters

$$\nu_0' = \nu_0 + n, \qquad \sigma_0'^2 = \frac{\nu_0\sigma_0^2 + S}{\nu_0 + n},$$

or equivalently, an inverse gamma distribution with shape $(\nu_0 + n)/2$ and scale $(\nu_0\sigma_0^2 + S)/2$.
For a set of i.i.d. normally distributed data points X of size n where each individual point x follows $x \sim \mathcal{N}(\mu, \sigma^2)$ with unknown mean μ and unknown variance σ², a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution.