Laub Chi-Square Data Fitting
• The probability of obtaining the observed data set {x, y} is the product of the individual Gaussian probabilities for each measurement:

P\{x, y\} = \prod_{i=1}^{N} P_{G,i} = \left\{ \prod_{i=1}^{N} \frac{1}{\sigma_i \sqrt{2\pi}} \right\} \exp\!\left[ -\frac{1}{2} \sum_{i=1}^{N} \left( \frac{y_i - f(x_i)}{\sigma_i} \right)^{2} \right]
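Taking the negative logarithm makes the link to the sum in the exponent explicit (a sketch of the standard maximum-likelihood step, assuming the Gaussian form above):

-\ln P\{x, y\} = \sum_{i=1}^{N} \ln\!\left( \sigma_i \sqrt{2\pi} \right) + \frac{1}{2} \sum_{i=1}^{N} \left( \frac{y_i - f(x_i)}{\sigma_i} \right)^{2}

Only the second sum depends on the fitting function f.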
• Maximizing the probability is equivalent to minimizing the sum in the exponential term of P{x, y}, i.e., the weighted sum of squared deviations ∆y_i = y_i − f(x_i).
• The chi-square statistic is defined by this sum:
\chi^2 \equiv \sum_{i=1}^{N} \left( \frac{y_i - f(x_i)}{\sigma_i} \right)^{2}
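A minimal sketch of this computation in Python (the straight-line model, the data values, and the per-point uncertainties below are hypothetical, chosen only for illustration):

```python
import numpy as np

def chi_square(y, f_x, sigma):
    """Weighted sum of squared deviations between data y and model values f(x)."""
    return np.sum(((y - f_x) / sigma) ** 2)

# Hypothetical data scattered about a straight line with known uncertainties.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
sigma = np.full_like(x, 0.5)                 # assumed measurement uncertainties
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)   # simulated measurements

f_x = 2.0 * x + 1.0                          # model evaluated at the data points
print("chi-square =", chi_square(y, f_x, sigma))
```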
What is the reduced chi-square error (χ²/ν) and why should it be equal to 1.0 for a good fit?
• The method of least squares is built on the hypothesis that the optimum description of a set of data is the one that minimizes the weighted sum of squares of the deviations, ∆y_i, between the data, y_i, and the fitting function f.
• If the fitting function accurately predicts the means of the parent distribution, then the estimated variance, s², should agree well with the variance of the parent distribution, σ², and their ratio should be close to one.
• The reduced chi-square, χ²/ν, where ν = N − m is the number of degrees of freedom for N data points and m fitted parameters, estimates exactly this ratio. This is the origin of the rule of thumb for chi-square fitting that a “good fit” is achieved when the reduced chi-square is approximately one.
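The sketch below shows the reduced chi-square for a weighted straight-line fit, reusing the hypothetical x, y, sigma arrays and the chi_square helper defined above:

```python
import numpy as np

# Weighted straight-line fit; np.polyfit takes weights w = 1/sigma for Gaussian errors.
coeffs = np.polyfit(x, y, deg=1, w=1.0 / sigma)
f_x = np.polyval(coeffs, x)

N = len(x)         # number of data points
m = len(coeffs)    # number of fitted parameters (slope and intercept)
nu = N - m         # degrees of freedom

chi2 = chi_square(y, f_x, sigma)
print("reduced chi-square =", chi2 / nu)   # should be near 1.0 for a good fit
```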
Assigning Significance to the reduced chi-square statistic
Q: the probability that a reduced chi-square value obtained by randomly sampling N observations from a Gaussian distribution is larger than the reduced chi-square value obtained by fitting a function to a data set having ν degrees of freedom (ν = 100 is typical).
• If Q is very close to 1 because χ²/ν is very near zero, then most likely the estimate of the uncertainties in the data, {σ_i}, is too large.
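One way to evaluate Q numerically is with SciPy's chi-square survival function; the χ² value and ν below are hypothetical, used only to show the call:

```python
from scipy import stats

nu = 100            # degrees of freedom of the fit (hypothetical)
chi2_value = 110.0  # chi-square returned by the fit (hypothetical)

# Survival function: probability that a chi-square variate with nu degrees of
# freedom exceeds chi2_value, i.e. Q.
Q = stats.chi2.sf(chi2_value, nu)
print("Q =", Q)

# In terms of the reduced chi-square, Q = sf(reduced * nu, nu).
reduced = chi2_value / nu
print("reduced chi-square =", reduced, "  Q =", stats.chi2.sf(reduced * nu, nu))
```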