Scott The Scott model is based on using the Normal density as a reference density for constructing histograms. If N is the number of data points, sigma is the standard deviation of the sample, and k is the number of intervals, then:
k = N^(1/3) [max(x) − min(x)] / (3.5 sigma)
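For readers who want to reproduce this interval count outside Stat::Fit, the following Python sketch (not part of Stat::Fit; NumPy assumed) applies Scott's rule to a data set:

import numpy as np

def scott_intervals(data):
    """Number of histogram intervals by Scott's rule:
    k = N**(1/3) * (max - min) / (3.5 * sigma)."""
    data = np.asarray(data, dtype=float)
    n = data.size
    sigma = data.std(ddof=1)                      # sample standard deviation
    width = 3.5 * sigma * n ** (-1.0 / 3.0)       # Scott's interval width
    k = int(np.ceil((data.max() - data.min()) / width))
    return max(k, 1)

# e.g. roughly 19 intervals for 1000 standard-normal points
print(scott_intervals(np.random.default_rng(0).normal(size=1000)))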
Manual Allows arbitrary setting of the
number of intervals, up to a limit of 1000.
The precision of the data is the number of deci-
mal places shown for the input data and all subse-
quent calculations. The default precision is 6
decimal places and is initially set on. The preci-
sion can be set between 0 and 15. Note that all
discrete data is stored as a floating point number.
Please note
While all calculations are performed at maximum
precision, the input data and calculations will be
written to file with the precision chosen here. If
the data has greater precision than the precision
here, it will be rounded when stored.
Distribution Type The type of analytical dis-
tribution can be either continuous or discrete. In
general, all distributions will be treated as either
type by default. However, the analysis may be
forced to either continuous distributions or dis-
crete distributions by checking the appropriate
box in the Input Options dialog.
In particular, discrete distributions are forced to
be distributions with integer values only. If the
input data is discrete, but the data points are mul-
tiples of continuous values, divide the data by the
smallest common denominator before attempting
to analyze it. Input truncation to eliminate small
round-off errors is also useful.
The maximum number of classes for a discrete
distribution is limited to 5000. If the number of
classes to support the input data is greater than
this, the analysis will be limited to continuous
distributions.
Most of the discrete distributions start at 0. If the
data has negative values, an offset should be
added to it before analysis.
Operate
Mathematical operations on the input data are
chosen from the Operate dialog by selecting
Input from the Menu bar and then Operate from
the Submenu.
The Operate dialog allows the choice of a single
standard mathematical operation on the input
data. The operation will affect all input data
regardless of whether a subset of input data is
selected. Mathematical overflow, underflow or
other error will cause an error message and all the
input data will be restored.
The operations of addition, subtraction, multipli-
cation, division, floor and absolute value can be
performed. The operation of rounding will round
the input data points to their nearest integer. The
data can also be sorted into ascending or descending order, or unsorted into a random mix.
Transform
Data transformations of the input data are chosen
from the Transform dialog by selecting Input
from the Menu bar and then Transform from the
submenu.
The Transform dialog allows the choice of a sin-
gle standard mathematics function to be used on
the input data. The operation will affect all input
data regardless of whether a subset of input data
is selected. Mathematical overflow, underflow or
other error will cause an error message and all the
input data will be restored.
The transform functions available are: natural
logarithm, log to base 10, exponential, cosine,
sine, square root, reciprocal, raise to any power,
difference and % change. Difference takes the
difference between adjacent data points with the
lower data point first. The total number of result-
ing data points is reduced by one. % change cal-
culates the percent change of adjacent data points
by dividing the difference, lower point first, by
the upper data point and then multiplying by 100.
The total number of data points is reduced by
one.
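As an illustration of the Difference and % change operations described above, here is a short Python sketch (not part of Stat::Fit; the sign and divisor conventions are one plausible reading of the text):

import numpy as np

def difference(x):
    """Adjacent differences, earlier point first: x[i+1] - x[i]."""
    x = np.asarray(x, dtype=float)
    return x[1:] - x[:-1]                      # result has one fewer point

def percent_change(x):
    """Percent change of adjacent points: (x[i+1] - x[i]) / x[i+1] * 100."""
    x = np.asarray(x, dtype=float)
    return (x[1:] - x[:-1]) / x[1:] * 100.0    # divide by the upper data point

print(difference([2.0, 5.0, 4.0]))             # [ 3. -1.]
print(percent_change([2.0, 5.0, 4.0]))         # [ 60. -25.]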
Filter
Filtering of the input data can be chosen from the
Filter dialog by selecting Input from the Menu
bar and then Filter from the submenu.
The Filter dialog allows the choice of a single fil-
ter to be applied to the input data, discarding data
outside the constraints of the filter. All filters
DISCARD unwanted data and change the statis-
tics. The appropriate input boxes are opened
with each choice of filter. With the exception of
the positive filter which excludes zero, all filters
are inclusive, that is, they always include num-
bers at the filter boundary.
The filters include a minimum cutoff, a maxi-
mum cutoff, both minimum and maximum cut-
offs, keeping only positive numbers (a negative
and zero cutoff), a non-negative cutoff, and a
near mean cutoff. The near mean filter excludes all data points less than the mean minus the standard deviation times the indicated multiplier, or greater than the mean plus the standard deviation times the indicated multiplier.
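A minimal Python sketch of the near mean filter described above (not part of Stat::Fit; inclusive boundaries are assumed, as stated for the other filters):

import numpy as np

def near_mean_filter(data, multiplier):
    """Keep points within mean +/- multiplier * standard deviation; discard the rest."""
    data = np.asarray(data, dtype=float)
    mean, sd = data.mean(), data.std(ddof=1)
    lo, hi = mean - multiplier * sd, mean + multiplier * sd
    return data[(data >= lo) & (data <= hi)]   # inclusive at the boundaries

sample = np.array([1.0, 2.0, 2.5, 3.0, 50.0])
print(near_mean_filter(sample, 1.0))           # the outlier 50.0 is discarded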
Repopulate
The Repopulate command allows the user to
expand rounded data about each integer. Each
point is randomly positioned about the integer
with its relative value weighted by the existing
shape of the input data distribution. If lower or
upper bounds are known, the points are restricted
to regions above and below these bounds, respec-
tively. The Repopulate command is restricted to
integer data only, and limited in range from
−1000 to +1000.
To use the repopulate function, select Input from
the Menu bar and then Repopulate from the Sub-
menu.
The following dialog will be displayed.
The new data points will have a number of deci-
mal places specified by the generated precision.
The goodness of fit tests, the Maximum Likeli-
hood Estimates and the Moment Estimates
require at least three digits to give reasonable
results. The sequence of numbers is repeatable if
the same random number stream is used (e.g.
stream 0). However, the generated numbers, and
the resulting fit, can be varied by choosing a dif-
ferent random number stream, 0-99.
Please note
This repopulation of the decimal part of the data
is not the same as the original data was or would
have been, but only represents the information
not destroyed by rounding. The parameter esti-
mates are not as accurate as would be obtained
with unrounded original data. In order to get an
estimate of the variation of fitted parameters, try
regenerating the data set with several random
number streams.
Generate
Random variates can be generated from
the Generate dialog by selecting Input
from the Menu bar and then Generate
from the submenu, or clicking on the Generate
icon.
The Generate dialog provides the choice of distri-
bution, parameters, and random number stream
for the generation of random variates from each
of the distributions covered by Stat::Fit. The
generation is limited to 8000 points maximum,
the limit of the input table used by Stat::Fit. The
sequence of numbers is repeatable for each distri-
bution because the same random number stream
is used (stream 0). However, the sequence of
numbers can be varied by choosing a different
random number stream, 0-99.
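A rough Python analogue of repeatable generation with a chosen stream (not Stat::Fit's own generator; NumPy seeds stand in for streams 0-99, and the distribution names and parameter tuples here are illustrative only):

import numpy as np

def generate_variates(dist_name, params, n=100, stream=0):
    """Draw n variates from a named distribution using a repeatable stream (0-99)."""
    rng = np.random.default_rng(seed=stream)   # same stream -> same sequence
    if dist_name == "exponential":
        minimum, beta = params
        return minimum + rng.exponential(scale=beta, size=n)
    if dist_name == "lognormal":
        minimum, mu, sigma = params
        return minimum + rng.lognormal(mean=mu, sigma=sigma, size=n)
    raise ValueError(f"unsupported distribution: {dist_name}")

a = generate_variates("exponential", (0.0, 2.5), n=5, stream=7)
b = generate_variates("exponential", (0.0, 2.5), n=5, stream=7)
assert np.allclose(a, b)                       # repeatable for the same stream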
The generator will not change existing data in the
Data Table, but will append the generated data
points up to the limit of 8000 points. In this man-
ner the sum of two or more distributions may be
tested. Sorting will not be preserved.
This generator can be used to provide a file of
random numbers for another program as well as
to test the variation of the distribution estimates
once the input data has been fit.
Input Graph
A graph of the input data can be viewed
by selecting Input from the Menu bar
and then Input Graph from the Submenu,
or clicking on the Input Graph icon.
A histogram of your data will be displayed. An
example is shown below.
If the input data in the Data Table is continuous
data, or is forced to be treated as continuous in
the Input Options dialog, the input graph will be a
histogram with the number of intervals being
given by the choice of interval type in the Input
Options. If the data is forced to be treated as dis-
crete, the input graph will be a line graph with the
number of classes being determined by the mini-
mum and maximum values. Note that discrete
data must be integer values. The data used to
generate the Input Graph can be viewed by using
the Binned Data command in the Statistics menu
(see Chapter 3).
This graph, as with all graphs in Stat::Fit, may be
modified, saved, copied, or printed with options
generally given in the Graph Style, Save As, and
Copy commands in the Graphics menu. See
Chapter 4 for information on Graph Styles.
Input Data
If the Data Table has been closed, then it can be
redisplayed by selecting Input from the menu bar
and Input Data from the submenu.
Chapter 3:
Statistical Analysis
This section describes the descriptive statistics, the statistical calculations on the input data, the distri-
bution fitting process, and the goodness of fit tests. This manual is not meant as a textbook on statisti-
cal analysis. For more information on the distributions, see Appendix: Distributions on page 55.
For further understanding, see the books referenced in the Bibliography on page 97.
Descriptive Statistics
The descriptive statistics for the input data can be
viewed by selecting Statistics on the Menu bar
and then Descriptive from the Submenu. The fol-
lowing window will appear:
The Descriptive Statistics command provides the
basic statistical observations and calculations on
the input data, and presents these in a simple
view as shown above. Please note that as long as
this window is open, the calculations will be
updated when the input data is changed. In gen-
eral, all open windows will be updated when the
information upon which they depend changes.
Therefore, it is a good idea, on slower machines,
to close such calculation windows before chang-
ing the data.
Binned Data
The histogram / class data is available by select-
ing Statistics on the Menu bar and then Binned
Data from the Submenu.
The number of intervals used for continuous data
is determined by the interval option in the Input
Options dialog. By default, this number is deter-
mined automatically from the total number of
data points. A typical output is shown below:
For convenience, frequency and relative fre-
quency are given. If the data is sensed to be dis-
crete (all integer), then the classes for the discrete
representation are also given, at least up to 1000
classes. The availability of interval or class data
can also be affected by forcing the distribution
type to be either continuous or discrete.
Because the table can be large, it is viewed best
expanded to full screen by selecting the up arrow
box in the upper right corner of the screen. A
scroll bar allows you to view the rest of the table.
This grouping of the input data is used to produce
representative graphs. For continuous data, the
ascending and descending cumulative distribu-
tions match the appropriate endpoints. The den-
sity matches the appropriate midpoints. For
discrete distributions, the data is grouped accord-
ing to individual classes, with increments of one
on the x-axis.
Independence Tests
All of the fitting routines assume that your data
are independent, identically distributed (IID), that
is, each point is independent of all the other data
points and all data points are drawn from identi-
cal distributions. Stat::Fit provides three types of
tests for independence.
The Independence Tests are chosen by selecting
Statistics on the Menu bar and then Independence
from the Submenu. The following submenu will
be shown:
Scatter Plot:
This is a plot of adjacent points in the sequence of
input data against each other. Thus each plotted
point represents a pair of data points [X(i+1), X(i)].
This is repeated for all pairs of adjacent data
points. If the input data are somewhat dependent
on each other, then this plot will exhibit that
dependence. Time series, where the current data
point may depend on the nearest previous
value(s), will show that pattern here as a struc-
tured curve rather than a seemingly independent
scatter of points. An example is shown below.
The structure of dependent data can be visualized
graphically by starting with randomly generated
data, choosing this plot, and then putting the data
in ascending order with the Input / Operate com-
mands. The position of each point is now depen-
dent on the previous points and this plot would be
close to a straight line.
Autocorrelation:
The autocorrelation calculation used here
assumes that the data are taken from a stationary
process, that is, the data would appear the same
(statistically) for any reasonable subset of the
data. In the case of a time series, this implies that
the time origin may be shifted without affecting
the statistical characteristics of the series. Thus
the variance for the whole sample can be used to
represent the variance of any subset. For a simu-
lation study, this may mean discarding an early
warm-up period (see Law & Kelton¹). In many
other applications involving ongoing series,
including financial, a suitable transformation of
the data might have to be made. If the process
being studied is not stationary, the calculation
1. 'Simulation Modeling & Analysis, Averill M.
Law, W. David Kelton, 1991, McGraw-Hill, p. 293
and discussion of autocorrelation is more com-
plex (see Box¹).
A graphical view of the autocorrelation can be
displayed by plotting the scatter of related data
points. The Scatter Plot, as previously described,
is a plot of adjacent data points, that is, of separa-
tion or lag 1. Scatter plots for data points further
removed from each other in the series, that is, for
lag j, could also be plotted, but the autocorrela-
tion is more instructive. The autocorrelation, rho,
is calculated from the equation:
rho(j) = Σ(i=1 to n−j) (x(i) − xbar)(x(i+j) − xbar) / [s² (n − j)]
where j is the lag between data points, s is the
standard deviation of the population, approxi-
mated by the standard deviation of the sample,
and xbar is the sample mean. The calculation is
carried out to 1/5 of the length of the data set
where diminishing pairs start to make the calcula-
tion unreliable.
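The calculation can be sketched in Python directly from the equation above (not Stat::Fit's implementation; NumPy assumed):

import numpy as np

def autocorrelation(x, max_lag=None):
    """Lag-j autocorrelation: sum((x[i]-xbar)*(x[i+j]-xbar)) / (s**2 * (n-j))."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if max_lag is None:
        max_lag = n // 5            # carried out to 1/5 of the data length
    xbar = x.mean()
    var = x.var()                   # variance of the sample
    d = x - xbar
    return np.array([np.sum(d[:n - j] * d[j:]) / (var * (n - j))
                     for j in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
print(autocorrelation(rng.normal(size=500), max_lag=5))  # near 0 for IID data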
The autocorrelation varies between 1 and -1,
between positive and negative correlation. If the
autocorrelation is near either extreme, the data
are autocorrelated. Note, however, that the auto-
correlation can assume finite values due to the
randomness of the data even though no signifi-
cant autocorrelation exists.
The numbers in parentheses along the x-axis are
the maximum positive and negative correlations.
For large data sets, this plot can take a while to
get to the screen. The overall screen redrawing
can be improved by viewing this plot and closing
it thereafter. The calculation is saved internally
and need not be recalculated unless the input data
changes.
Runs Tests
The Runs Test command calculates two different
runs tests for randomness of the data and displays
a view of the results. The result of each test is
either DO NOT REJECT the hypothesis that the
series is random or REJECT that hypothesis with
the level of significance given. The level of sig-
nificance is the probability that a rejected hypoth-
esis is actually true, that is, that the test rejects the
randomness of the series when the series is actu-
ally random.
A run in a series of observations is the occurrence
of an uninterrupted sequence of numbers with the
same attribute. For instance, a consecutive set of
increasing or decreasing numbers is said to pro-
vide runs up or down respectively. In particu-
lar, a single isolated occurrence is regarded as a
run of one.
The number of runs in a series of observations
indicates the randomness of those observations.
Too few runs indicate strong correlation, point to
point. Too many runs indicate cyclic behavior.
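A minimal Python sketch of counting runs above and below the median (not Stat::Fit's implementation; the treatment of points equal to the median and the rejection thresholds are not shown here):

import numpy as np

def median_runs(x):
    """Count runs of consecutive points above and below the median.
    Points exactly equal to the median are dropped, one common convention."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    signs = np.sign(x[x != med] - med)          # +1 above, -1 below the median
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    return runs

print(median_runs([3, 7, 1, 9, 2, 8, 4, 6]))    # alternating data -> many runs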
The first runs test is a median test which mea-
sures the number of runs, that is, sequences of
numbers, above and below the median (see
Brunk²). The run can be a single number above
or below the median if the numbers adjacent to it
are in the opposite direction. If there are too
many or too few runs, the randomness of the
series is rejected. This median runs test uses a
1. 'Time Series Analysis, George E. P. Box, Gwilym
M. Jenkins, Gregory C. Reinsel, 1994, Prentice-Hall
To visualize this process for continuous data,
consider the two graphs below:
The first is the normal P-P plot, the cumulative
probability of the input data versus a continuous
plot of the fitted cumulative distribution. How-
ever, for the KS test, the comparison is made
between the probability of the input data having a
value at or below a given point and the probabil-
ity of the cumulative distribution at that point.
This is represented in the second graph by com-
paring the cumulative probability for the
observed data, the straight line, with the expected
probability from the fitted cumulative distribu-
tion as square points. The KS test measures the
largest difference between these, being careful to
account for the discrete nature of the measure-
ment.
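A hedged Python sketch of the Kolmogorov-Smirnov statistic for a fitted continuous distribution (not Stat::Fit's implementation; SciPy is used here only to supply a fitted cumulative distribution):

import numpy as np
from scipy import stats

def ks_statistic(data, cdf):
    """Largest gap between the empirical CDF and a fitted CDF:
    D = max(D+, D-), with D+ = max(i/n - F(x_i)) and D- = max(F(x_i) - (i-1)/n)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    f = cdf(x)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - f)
    d_minus = np.max(f - (i - 1) / n)
    return max(d_plus, d_minus)

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=200)
print(ks_statistic(sample, stats.expon(scale=2.0).cdf))   # small D for a good fit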
Note that the KS test can be applied to discrete
data in a slightly different manner, and the result-
ing test is even more conservative than the KS
test for continuous data. Also, the test may be
further strengthened for discrete data (see
Gleser¹).
While the test statistic for the Kolmogorov-
Smirnov test can be useful, the p-value is more
useful in determining the goodness of fit. The p-
value is defined as the probability that another
sample will be as unusual as the current sample
given that the fit is appropriate. A small p-value
indicates that the current sample is highly
unlikely, and therefore, the fit should be rejected.
Conversely, a high p-value indicates that the sam-
ple is likely and would be repeated, and therefore,
the fit should not be rejected. Thus, the HIGHER
the p-value, the more likely that the fit is appro-
priate. When comparing two different fitted dis-
tributions, the distribution with the higher p-
value is likely to be the better fit regardless of the
level of significance.
Anderson Darling Test
The Anderson Darling test is a test of the good-
ness of fit of the fitted cumulative distribution to
the input data in the Data Table, weighted heavily
in the tails of the distributions. This test calcu-
lates the integral of the squared difference
between the input data and the fitted distribution,
with increased weighting for the tails of the dis-
tribution, by the equation:
Wn² = n ∫ [Fn(x) − F(x)]² / { F(x) [1 − F(x)] } dF(x)
where Wn² is the AD statistic, n is the number of data points, F(x) is the fitted cumulative distribution, and Fn(x) is the cumulative distribution of the input data.
1. "Exact Power of Goodness-of-Fit Tests of Kolmogorov Type for Discontinuous Distributions", Leon Jay Gleser, J.Am.Stat.Assoc., 80 (1985), p. 954
This can be reduced to the more useful computational equation:
Wn² = −n − (1/n) Σ(i=1 to n) (2i − 1) [ ln F(i) + ln(1 − F(n+1−i)) ]
where F(i) is the value of the fitted cumulative distribution, F(x(i)), for the ith data point (see Law & Kelton¹, Anderson & Darling²,³).
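A hedged Python sketch of the computational form above (not Stat::Fit's implementation; the clipping of the fitted CDF is a numerical guard added here):

import numpy as np
from scipy import stats

def anderson_darling(data, cdf):
    """AD statistic from the computational form:
    W2 = -n - (1/n) * sum((2i-1) * (ln F_i + ln(1 - F_{n+1-i})))."""
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    f = np.clip(cdf(x), 1e-12, 1 - 1e-12)       # guard the logs at the tails
    i = np.arange(1, n + 1)
    return -n - np.sum((2 * i - 1) * (np.log(f) + np.log(1 - f[::-1]))) / n

rng = np.random.default_rng(2)
sample = rng.normal(loc=5.0, scale=1.5, size=300)
print(anderson_darling(sample, stats.norm(loc=5.0, scale=1.5).cdf))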
The resulting test statistic is then compared to a
standard value of the AD statistic with the appro-
priate number of data points and level of signifi-
cance, usually labeled alpha. The limitations of
the AD test are similar to the Kolmogorov
Smirnov test with the exception of the boundary
conditions discussed below. The AD test is not a
limiting distribution; it is appropriate for any
sample size. While the AD test is only valid if
none of the parameters in the test have been esti-
mated from the data, it can be used for fitted dis-
tributions with the understanding that it is then a
conservative test, that is, less likely to reject the
fit in error. The validity of the AD test can be
improved for some specific distributions. These
more stringent tests take the form of a multiplica-
tive adjustment to the general AD statistic.
The goodness of fit view also reports a REJECT
or DO NOT REJECT decision for each AD test
based on the comparison between the calculated
test statistic and the standard statistic for the
given level of significance. The AD test is very
sensitive to the tails of the distribution. For this
reason, the test must be used with discretion for
many of the continuous distributions with lower
bounds and finite values at that lower bound.
The test is inaccurate for discrete distributions as
the standard statistic is not easily calculated.
While the test statistic for the Anderson Darling
test can be useful, the p-value is more useful in
determining the goodness of fit. The p-value is
defined as the probability that another sample
will be as unusual as the current sample given
that the fit is appropriate. A small p-value indi-
cates that the current sample is highly unlikely,
and therefore, the fit should be rejected. Con-
versely, a high p-value indicates that the sample
is likely and would be repeated, and therefore, the
fit should not be rejected. Thus, the HIGHER the
p-value, the more likely that the fit is appropriate.
When comparing two different fitted distribu-
tions, the distribution with the higher p-value is
likely to be the better fit regardless of the level of
significance
General
Each of these tests has its own regions of greater
sensitivity, but they all have one criterion in com-
mon. The fit and the tests are totally insensitive
for fewer than 10 data points (Stat::Fit will not
respond to less data), and will not achieve much
accuracy until 100 data points. On the order of
200 data points seems to be optimum. For large
data sets, greater than 4000 data points, the tests
can become too sensitive, occasionally rejecting a
proposed distribution when it is actually a useful
fit. This can be easily tested with the Generate
command in the Input menu.
While the calculations are being performed, a
window at the bottom of the screen shows its
progress and allows for a Cancel option at any
time.
1. "Simulation Modeling & Analysis", Averill M. Law, W. David Kelton, 1991, McGraw-Hill, p. 392
2. "A Test of Goodness of Fit", T. W. Anderson, D. A. Darling, J.Am.Stat.Assoc., 1954, p. 765
3. "Asymptotic Theory of Certain 'Goodness of Fit' Criteria Based on Stochastic Processes", T. W. Anderson, D. A. Darling, Ann.Math.Stat., 1952, p. 193
The results are shown in a table. An example is
given below:
In the summary section, the distributions you
have selected for fitting are shown along with the
results of the Goodness of Fit Test(s). The num-
bers in parentheses after the type of distribution
are the parameters and they are shown explicitly
in the detailed information, below the summary
table.
Please note
The above table shows results for the Chi-
Squared Test. The number in parentheses is the
degrees of freedom. When you want to compare
Chi-Squared from different distributions, you can
make a comparison only when they have the same
degrees of freedom.
The detailed information, following the summary
table, includes a section for each fitted distribu-
tion. This section includes:
parameter values
Chi Squared Test
Kolmogorov Smirnov Test
Anderson Darling Test
Please note
If an error occurred in the calculations, the error
message is displayed instead.
For the Chi Squared Test, the details show:
total classes [intervals]
interval type [equal length, equal probable]
net bins [reduced intervals]
chi**2 [the calculated statistic]
degrees of freedom [net bins-1 here]
alpha [level of significance]
chi**2(n, alpha) [the standard statistic]
p-value
result
For both the Kolmogorov Smirnov and Anderson
Darling tests, the details show:
data points
stat [the calculated statistic]
alpha [level of significance]
stat (n, alpha) [the standard statistic]
p-value
result
Distribution Fit - Auto::Fit
Automatic fitting of continuous dis-
tributions can be performed by
clicking on the Auto::Fit icon or by
selecting Fit from the Menu bar and then
Auto::Fit from the Submenu.
This command follows the same procedure as
previously discussed for manual fitting.
Auto::Fit will automatically choose appropriate
continuous distributions to fit to the input data,
calculate Maximum Likelihood Estimates for
those distributions, test the results for Goodness
of Fit, and display the distributions in order of
their relative rank. The relative rank is deter-
mined by an empirical method which uses effec-
tive goodness of fit calculations. While a good
rank usually indicates that the fitted distribution
is a good representation of the input data, an
absolute indication of the goodness of fit is also
given.
An example is shown below:
For continuous distributions, the Auto::Fit dialog
limits the number of distributions by choosing
only those distributions with a lower bound or by
forcing a lower bound to a specific value as in Fit
Setup. Also, the number of distributions will be
limited if the skewness of the input data is nega-
tive. Many continuous distributions with lower
bounds do not have good parameter estimates in
this situation.
For discrete distributions, the Auto::Fit dialog
limits the distributions by choosing only those
distributions that can be fit to the data. The dis-
crete distributions must have a lower bound.
The acceptance of fit usually reflects the results
of the goodness of fit tests at the level of signifi-
cance chosen by the user. However, the accep-
tance may be modified if the fitted distribution
would generate significantly more data points in
the tails of the distribution than are indicated by
the input data.
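An analogous, greatly simplified workflow can be sketched with SciPy (this is not Stat::Fit's ranking method, which is empirical; here candidate distributions are fitted by maximum likelihood and ranked only by Kolmogorov-Smirnov p-value):

import numpy as np
from scipy import stats

def auto_fit(data, candidates=("expon", "gamma", "lognorm", "weibull_min")):
    """Fit several continuous distributions by maximum likelihood and
    rank them by Kolmogorov-Smirnov p-value (higher is better)."""
    results = []
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(data)                     # MLE parameter estimates
        d, p = stats.kstest(data, name, args=params)
        results.append((p, d, name, params))
    return sorted(results, reverse=True)            # best fit first

rng = np.random.default_rng(3)
data = rng.gamma(shape=2.0, scale=3.0, size=400)
for p, d, name, params in auto_fit(data):
    print(f"{name:12s}  KS={d:.4f}  p-value={p:.3f}")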
Replication and Confidence Level Calculator
The Replications command allows the user to
calculate the number of independent data points,
or replications, of an experiment that are neces-
sary to provide a given range, or confidence
interval, for the estimate of a parameter. The con-
fidence interval is given for the confidence level
specified, with a default of 0.95. The resulting
number of replications is calculated using the t
distribution¹.
To use the Replications calculator, select Utilities
from the Menu bar and then Replications.
The following dialog will be displayed.
The expected variation of the parameter must be
specified by either its expected maximum range
or its expected standard deviation. Quite fre-
quently, this variation is calculated by pilot runs
of the experiment or simulation, but can be cho-
sen by experience if necessary. Be aware that this
is just an initial value for the required replica-
tions, and should be refined as further data are
available.
Alternatively, the confidence interval for a given
estimate of a parameter can be calculated from
the known number of replications and the
expected or estimated variation of the parameter.
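A hedged Python sketch of both calculations, using the t distribution as described above (not Stat::Fit's implementation; SciPy assumed):

import math
from scipy import stats

def replications_needed(std_dev, half_width, confidence=0.95):
    """Smallest number of replications R such that
    t(1 - alpha/2, R - 1) * s / sqrt(R) <= the desired half-width."""
    r = 2
    while True:
        t = stats.t.ppf(1 - (1 - confidence) / 2, df=r - 1)
        if t * std_dev / math.sqrt(r) <= half_width:
            return r
        r += 1

def half_width(std_dev, replications, confidence=0.95):
    """Confidence-interval half-width for a known number of replications."""
    t = stats.t.ppf(1 - (1 - confidence) / 2, df=replications - 1)
    return t * std_dev / math.sqrt(replications)

print(replications_needed(std_dev=4.0, half_width=1.0))   # replications for +/- 1.0
print(round(half_width(std_dev=4.0, replications=30), 3))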
1. "Discrete-Event System Simulation", Second Edition, Jerry Banks, John S. Carson II, Barry L. Nelson, 1996, Prentice-Hall, p. 447
Chapter 4:
Graphs
This chapter describes the types of graphs and the Graphics Style options. Graphical analysis and out-
put is an important part of Stat::Fit. The input data in the Data Table may be graphed as a histogram
or line chart and analyzed by a scatter plot or autocorrelation graph. The resulting fit of a distribution
may be compared to the input via a direct comparison, a difference plot, a Q-Q plot, and a P-P plot for
each analytical distribution chosen. The analytical distributions can be displayed for any set of param-
eters.
The resulting graphs can be modified in a variety of ways using the Graphics Style dialog in the Graph-
ics menu, which becomes active when a graph is the currently active window.
Result Graphs
A density graph of your input data and the fitted
density can be viewed by choosing Fit from the
Menu bar and then Result Graphs.
This graph displays a histogram of the input data
overlaid with the fitted densities for specific dis-
tributions.
From the next menu that appears (see above),
choose Comparison.
Quicker access to this graph is accom-
plished by clicking on the Graph icon on
the Control bar.
The graph will appear with the default settings
of the input data in a blue histogram and the fitted
data in a red polygon, as shown below.
The distribution being fit is listed in the lower
box on the right. If you have selected more than
one distribution to be fit, a list of the distributions
is given in the upper box on the right. Select addi-
tional distributions to be displayed, as compari-
sons, by clicking on the distribution name(s) in
the upper box. The additional fit(s) will be added
to the graph and the name of the distribution(s)
added to the box on the lower right. There will be
a Legend at the bottom of the graph, as shown
below:
To remove distributions from the graph, click on
the distribution name in the box on the lower
right side and it will be removed from the graphic
display.
Stat::Fit provides many options for graphs in the
Graphics Style dialog, including changes in the
graph character, the graph scales, the title texts,
the graph fonts and the graph colors.
This dialog can be activated by selecting Graph-
ics from the Menu bar and then Graphics Style
from the Submenu.
The graph remains modified as long as the docu-
ment is open, even if the graph itself is closed and
reopened. It will also be saved with the project as
modified. Note that any changes are singular to
that particular graph; they do not apply to any
other graph in that document or any other docu-
ment.
If a special style is always desired, the default
values may be changed by changing any graph to
suit, and checking the Save Apply button at the
bottom of the dialog.
Graphics Style
Graph
The Graphics Style dialog box has 5 tabs (or
pages). When you select a tab, the dialog box
changes to display the options and default set-
tings for that tab. You determine the settings for
any tab by selecting or clearing the check boxes
on the tab. The new settings take effect when
you close the dialog box. If you want your new
settings to be permanent, select Save to Default
and they will remain in effect until you wish to
change them again.
The dialog box for the graph type options is
shown below:
The Graph Type chooses between three types of
distribution functions:
Density indicates the probability density
function, f(x), for continuous random vari-
ables and the probability mass function, p(j),
for discrete random variables. Quite fre-
quently, f(x) is substituted for p(j) with the
understanding that x then takes on only inte-
ger values.
Ascending cumulative indicates the cumu-
lative distribution function, F(x), where x
can be either a continuous random variable
or a discrete random variable. F(x) is contin-
uous or discrete accordingly. F(x) varies
from 0 to 1.
Descending cumulative indicates the sur-
vival function, (1-F(x)).
Graph Type is not available for Scatter Plot,
Autocorrelation, Q-Q plot and P-P plot.
The Normalization area indicates whether the
graph represents actual counts or a relative frac-
tion of the total counts.
Frequency represents actual counts for each
interval (continuous random variable) or
class (discrete random variable).
Relative Frequency represents the relative
fraction of the total counts for each interval
(continuous random variable) or class (dis-
crete random variable).
Normalization is only available for distribu-
tion graph types, such as Comparison and
Difference.
The graph style can be modified for both the
input data and the fitted distribution. Choices
include points, line, bar, polygon, filled polygon
and histogram. For Scatter Plots, the choices are
modified and limited to: points, cross, dots.
Scale
The dialog box for Scale is shown below:
The Scale page allows the x and y axes to be
scaled in various ways, as well as modifying the
use of a graph frame, a grid, or tick marks. The
default settings for Scale allow the data and fitted
distribution to be displayed. These settings can
be changed by deselecting the default and adding
Min and Max values.
Moreover, the printed graph will maintain that
aspect ratio as will the bitmap that can be saved
to file or copied to the Clipboard.
The Frame option allows you to have a full, par-
tial or no frame around your graph. A grid can be
added to your graph in both x and y, or just a hor-
izontal or vertical grid can be displayed. Tick
marks can be selected to be inside, outside, or
absent. Both ticks and the grid can overlay the
data.
Text
The dialog box for Text is shown below:
The Text function allows you to add text to your
graph. A Main Title, x-axis and y-axis titles, and
legends can be included. Scale factors can be
added. The layout of the y-axis title can be mod-
ified to be at the top, on the side or rotated along
the side of the y-axis. Some graphs load default
titles initially.
Fonts
The dialog box for Fonts is shown below:
The Fonts page of the dialog provides font selec-
tion for the text titles and scales in the currently
active graph. The font type is restricted to TrueType fonts.
Examples of each of the regions of the Chi Squared distribution are shown above. Note that the peak of
the distribution moves away from the minimum value for increasing n, but with a much broader distri-
bution. More examples can be viewed by using the Distribution Viewer.
Discrete Uniform Distribution (min, max)
p(x) = 1 / (max − min + 1),   x = min, min+1, ..., max
min = minimum x
max = maximum x
Description Description Description Description
The Discrete Uniform distribution is a discrete distribution bounded on [min, max] with constant proba-
bility at every value on or between the bounds. Sometimes called the discrete rectangular distribution,
it arises when an event can have a finite and equally probable number of outcomes (see Johnson et al.¹).
Note that the probabilities are actually weights at each integer, but are represented by broader bars for
visibility.
1. 'Univariate Discrete Distributions, Norman L. Johnson, Samuel Kotz, Adrienne W. Kemp, 1992, John Wiley &
Sons, p. 272
Erlang Distribution (min, m, beta)
f(x) = [ (x − min)^(m−1) / ( beta^m Γ(m) ) ] exp[ −(x − min)/beta ]
min = minimum x
m = shape factor = positive integer
beta = scale factor > 0
Description Description Description Description
The Erlang distribution is a continuous distribution bounded on the lower side. It is a special case of the
Gamma distribution where the parameter, m, is restricted to a positive integer. As such, the Erlang dis-
tribution has no region where f(x) tends to infinity at the minimum value of x [m<1], but does have a
special case at m=1, where it reduces to the Exponential distribution.
The Erlang distribution has been used extensively in reliability and in queuing theory, thus in discrete
event simulation, because it can be viewed as the sum of m exponentially distributed random variables,
each with mean beta. It can be further generalized (see Johnson¹, Banks & Carson²).
1. 'Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994,
John Wiley & Sons
2. 'Discrete-Event System Simulation, Jerry Banks, John S. Carson II, 1984, Prentice-Hall
As can be seen in the previous examples, the Erlang distribution follows the Exponential distribution at
m=1, has a positive skewness with a peak near 0 for m between 2 and 9, and tends to a symmetrical dis-
tribution offset from the minimum at larger m.
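Because the Erlang variable is the sum of m exponentials with mean beta, it can be sampled directly, as in this Python sketch (not part of Stat::Fit; NumPy assumed):

import numpy as np

def erlang_variates(minimum, m, beta, size=1, stream=0):
    """Erlang(min, m, beta) variates as the sum of m exponentials with mean beta."""
    rng = np.random.default_rng(stream)
    return minimum + rng.exponential(scale=beta, size=(size, m)).sum(axis=1)

x = erlang_variates(minimum=0.0, m=3, beta=2.0, size=100000)
print(x.mean(), x.var())    # mean ~ m*beta = 6, variance ~ m*beta**2 = 12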
Exponential Distribution (min, beta)
f(x) = (1/beta) exp[ −(x − min)/beta ]
min = minimum x value
beta = scale parameter = mean
Description Description Description Description
The Exponential distribution is a continuous distribution bounded on the lower side. Its shape is always
the same, starting at a finite value at the minimum and continuously decreasing at larger x. As shown in
the examples above, the Exponential distribution decreases rapidly for increasing x.
The Exponential distribution is frequently used to represent the time between random occurrences, such
as the time between arrivals at a specific location in a queuing model or the time between failures in
reliability models. It has also been used to represent the service times of a specific operation. Further,
it serves as an explicit manner in which the time dependence on noise may be treated. As such, these
models are making explicit use of the lack of history dependence of the exponential distribution; it has
the same set of probabilities when shifted in time. Even when Exponential models are known to be
inadequate to describe the situation, their mathematical tractability provides a good starting point.
Later, a more complex distribution such as Erlang or Weibull may be investigated (see Law & Kelton¹, Johnson et al.²).
1. 'Simulation Modeling & Analysis, Averill M. Law, W. David Kelton, 1991, McGraw-Hill, p. 330
2. 'Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994,
John Wiley & Sons, p. 499
Extreme Value type 1A Distribution (tau, beta)
f(x) = (1/beta) exp[ −(x − tau)/beta ] exp( −exp[ −(x − tau)/beta ] )
tau = threshold/shift parameter
beta = scale parameter
Description Description Description Description
The Extreme Value 1A distribution is an unbounded continuous distribution. Its shape is always the
same but may be shifted or scaled to need. It is also called the Gumbel distribution.
The Extreme Value 1A distribution describes the limiting distribution of the extreme values of many
types of samples. Actually, the Extreme Value distribution given above is usually referred to as Type 1,
with Type 2 and Type 3 describing other limiting cases. If x is replaced by -x, then the resulting distri-
bution describes the limiting distribution for the least values of many types of samples. This reflected pair of distributions is sometimes referred to as Type 1A and Type 1B.
The Extreme Value distribution has been used to represent parameters in growth models, astronomy,
human lifetimes, radioactive emissions, strength of materials, flood analysis, seismic analysis, and rain-
fall analysis. It is also directly related to many learning models (see Johnson¹).
The Extreme Value 1A distribution starts below tau, is skewed in the positive direction peaking at tau, then decreasing monotonically thereafter. beta determines the breadth of the distribution.
1. 'Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995,
John Wiley & Sons
Extreme Value type 1B Distribution (tau, beta)
f(x) = (1/beta) exp[ (x − tau)/beta ] exp( −exp[ (x − tau)/beta ] )
tau = threshold/shift parameter
beta = scale parameter
Description Description Description Description
The Extreme Value 1B distribution is an unbounded continuous distribution. Its shape is always the
same but may be shifted or scaled to need.
The Extreme Value 1B distribution describes the limiting distribution of the least values of many types
of samples. Actually, the Extreme Value distribution given above is usually referred to as Type 1, with
Type 2 and Type 3 describing other limiting cases. If x is replaced by −x, then the resulting distribution describes the limiting distribution for the greatest values of many types of samples. This reflected pair of distributions is sometimes referred to as Type 1A and Type 1B. Note that the complementary distri-
bution can be used to represent samples with positive skewness.
The Extreme Value distribution has been used to represent parameters in growth models, astronomy,
human lifetimes, radioactive emissions, strength of materials, flood analysis, seismic analysis, and rain-
fall analysis. It is also directly related to many learning models (see Johnson et al.¹).
The Extreme Value 1B distribution starts below tau, is skewed in the negative direction peaking at tau, then decreasing monotonically thereafter. beta determines the breadth of the distribution.
1. "Continuous Univariate Distributions, Volume 2", Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995, John Wiley & Sons
Gamma Distribution (min, alpha, beta)
f(x) = [ (x − min)^(alpha−1) / ( beta^alpha Γ(alpha) ) ] exp[ −(x − min)/beta ]
min = minimum x
alpha = shape parameter > 0
beta = scale parameter > 0
Description Description Description Description
The Gamma distribution is a continuous distribution bounded at the lower side. It has three distinct
regions. For alpha=1, the Gamma distribution reduces to the Exponential distribution, starting at a finite value at minimum x and decreasing monotonically thereafter. For alpha<1, the Gamma distribution tends to infinity at minimum x and decreases monotonically for increasing x. For alpha>1, the Gamma distribution
is 0 at minimum x, peaks at a value that depends on both alpha and beta, decreasing monotonically
thereafter. If alpha is restricted to positive integers, the Gamma distribution is reduced to the Erlang
distribution.
Note that the Gamma distribution also reduces to the Chi-squared distribution for min=0, beta=2, and alpha=n/2. It can then be viewed as the distribution of the sum of squares of independent unit normal vari-
ables, with n degrees of freedom and is used in many statistical tests.
The Gamma distribution can also be used to approximate the Normal distribution, for large alpha, while
maintaining its strictly positive values of x [actually (x-min)].
The Gamma distribution has been used to represent lifetimes, lead times, personal income data, a popu-
lation about a stable equilibrium, interarrival times, and service times. In particular, it can represent
lifetime with redundancy (see Johnson¹, Shooman²).
Examples of each of the regions of the Gamma distribution are shown above. Note the peak of the dis-
tribution moving away from the minimum value for increasing alpha, but with a much broader distribu-
tion.
1. 'Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994,
John Wiley & Sons, p. 343
2. 'Probabilistic Reliability: An Engineering Approach, Martin L. Shooman, 1990, Robert E. Krieger
Geometric Distribution (p)
p(x) = p (1 − p)^x,   x = 0, 1, 2, ...
p = probability of occurrence
Description Description Description Description
The Geometric distribution is a discrete distribution bounded at 0 and unbounded on the high side. It is
a special case of the Negative Binomial distribution. In particular, it is the direct discrete analog for the
continuous Exponential distribution. The Geometric distribution has no history dependence, its proba-
bility at any value being independent of a shift along the axis.
The Geometric distribution has been used for inventory demand, marketing survey returns, a ticket con-
trol problem, and meteorological models (see Johnson¹, Law & Kelton²).
Several examples with decreasing probability are shown above. Note that the probabilities are actually
weights at each integer, but are represented by broader bars for visibility.
1. 'Univariate Discrete Distributions, Norman L. Johnson, Samuel Kotz, Adrienne W. Kemp, 1992, John Wiley &
Sons, p. 201
2. 'Simulation Modeling & Analysis, Averill M. Law, W. David Kelton, 1991, McGraw-Hill, p. 366
Inverse Gaussian Distribution (min, alpha, beta)
f(x) = sqrt[ alpha / ( 2 π (x − min)³ ) ] exp[ −alpha (x − min − beta)² / ( 2 beta² (x − min) ) ]
min = minimum x
alpha = shape parameter > 0
beta = mixture of shape and scale > 0
Description Description Description Description
The Inverse Gaussian distribution is a continuous distribution with a bound on the lower side. It is
uniquely zero at the minimum x, and always positively skewed. The Inverse Gaussian distribution is
also known as the Wald distribution.
The Inverse Gaussian distribution was originally used to model Brownian motion and diffusion pro-
cesses with boundary conditions. It has also been used to model the distribution of particle size in
aggregates, reliability and lifetimes, and repair time (see Johnson¹).
Examples of Inverse Gaussian distributions are shown above. In particular, notice the drastically
increased upper tail for increasing beta.
1. 'Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994,
John Wiley & Sons, p. 290
Inverse Weibull Distribution (min, alpha, beta)
f(x) = alpha beta [ 1 / ( beta (x − min) ) ]^(alpha+1) exp{ −[ 1 / ( beta (x − min) ) ]^alpha }
min = minimum x
alpha = shape parameter > 0
beta = mixture of shape and scale > 0
Description Description Description Description
The Inverse Weibull distribution is a continuous distribution with a bound on the lower side. It is
uniquely zero at the minimum x, and always positively skewed. In general, the Inverse Weibull distribu-
tion fits bounded, but very peaked, data with a long positive tail.
The Inverse Weibull distribution has been used to describe several failure processes as a distribution of
lifetime (see Calabria & Pulcini¹). It can also be used to fit data with abnormally large outliers on the positive side of the peak.
Examples of Inverse Weibull distribution are shown above. In particular, notice the increased peaked-
ness and movement from the minimum for increasing alpha.
1. R. Calabria, G. Pulcini, 'On the maximum likelihood and least-squares estimation in the
Inverse Weibull Distribution, Statistica Applicata, Vol. 2, n.1, 1990, p.53
Johnson SB Distribution (min, lamda, gamma, delta)
f(x) = delta / [ lamda sqrt(2 π) y (1 − y) ] exp{ −(1/2) [ gamma + delta ln( y / (1 − y) ) ]² }
where y = (x − min) / lamda; min = minimum value of x
lamda = range of x above the minimum
gamma = skewness parameter
delta = shape parameter > 0
Description Description Description Description
The Johnson SB distribution is a continuous distribution that has both upper and lower finite bounds, similar
to the Beta distribution. The Johnson SB distribution, together with the Lognormal and the Johnson SU
distributions, are transformations of the Normal distribution and can be used to describe most naturally
occurring unimodal sets of data. However, the Johnson SB and SU distributions are mutually exclusive,
each describing data in specific ranges of skewness and kurtosis. This leaves some cases where the nat-
ural boundedness of the population cannot be matched.
The family of Johnson distributions have been used in quality control to describe non-normal processes,
which can then be transformed to the Normal distribution for use with standard tests. As can be seen in
the following examples, the Johnson SB distribution goes to zero at both of its bounds, with gamma controlling the skewness and delta controlling the shape. The distribution can be either unimodal or bimodal (see Johnson et al.¹ and N. L. Johnson²).
1. "Continuous Univariate Distributions, Volume 1", Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 34
2. N. L. Johnson, 'Systems of frequency curves generated by methods of translation,
Biometrika, Vol. 36, 1949, p. 149
Johnson SU Distribution (xi, lamda, gamma, delta)
f(x) = delta / [ lamda sqrt(2 π) sqrt(y² + 1) ] exp{ −(1/2) [ gamma + delta ln( y + sqrt(y² + 1) ) ]² }
where y = (x − xi) / lamda
lamda = range of x above the minimum
gamma = skewness parameter
delta = shape parameter > 0
Description Description Description Description
The Johnson SU distribution is an unbounded continuous distribution. The Johnson SU distribution,
together with the Lognormal and the Johnson SB distributions, can be used to describe most naturally
occurring unimodal sets of data. However, the Johnson SB and SU distributions are mutually exclusive,
each describing data in specific ranges of skewness and kurtosis. This leaves some cases where the nat-
ural boundedness of the population cannot be matched.
The family of Johnson distributions have been used in quality control to describe non-normal processes,
which can then be transformed to the Normal distribution for use with standard tests.
The Johnson SU distribution can be used in place of the notoriously unstable Pearson IV distribution,
with reasonably good fidelity over the most probable range of values.
As can be seen in the examples above, the Johnson SU distribution is one of the few unbounded distributions that can vary its shape, with gamma controlling the skewness and delta controlling the shape. The
scale is controlled by lamda, gamma, and delta (see Johnson et al.¹ and N. L. Johnson²).
1. "Continuous Univariate Distributions, Volume 1", Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 34
2. N. L. Johnson, 'Systems of frequency curves generated by methods of translation,
Biometrika, Vol. 36, 1949, p. 149
Logistic Distribution (alpha, beta)
f(x) = exp[ −(x − alpha)/beta ] / ( beta { 1 + exp[ −(x − alpha)/beta ] }² )
alpha = shift parameter
beta = scale parameter > 0
Description Description Description Description
The Logistic distribution is an unbounded continuous distribution which is symmetrical about its mean
(and shift parameter), alpha. As shown in the example above, the shape of the Logistic distribution is very
much like the Normal distribution, except that the Logistic distribution has broader tails.
The Logistic function is most often used as a growth model; for populations, for weight gain, for busi-
ness failure, etc. The Logistic distribution can be used to test for the suitability of such a model, with
transformation to get back to the minimum and maximum values for the Logistic function. Occasion-
ally, the Logistic function is used in place of the Normal function where exceptional cases play a larger
role (see Johnson¹).
1. 'Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995,
John Wiley & sons, p.113
Log-Logistic Distribution (min, p, beta)
f(x) = p [ (x − min)/beta ]^(p−1) / ( beta { 1 + [ (x − min)/beta ]^p }² )
min = minimum x
p = shape parameter > 0
beta = scale parameter > 0
Description Description Description Description
The Log-Logistic distribution is a continuous distribution bounded on the lower side. Like the Gamma
distribution, it has three distinct regions. For p=1, the Log-Logistic distribution resembles the Exponen-
tial distribution, starting at a finite value at minimum x and decreasing monotonically thereafter. For
p<1, the Log-Logistic distribution tends to infinity at minimum x and decreases monotonically for
increasing x. For p>1, the Log-Logistic distribution is 0 at minimum x, peaks at a value that depends on
both p and beta, decreasing monotonically thereafter.
By definition, the natural logarithm of a Log-Logistic random variable is a Logistic random variable,
and can be related to the included Logistic distribution in much the same way that the Lognormal distri-
bution can be related to the included Normal distribution. The parameters for the included Logistic dis-
tribution, Lalpha and Lbeta, are given in terms of the Log-Logistic parameters, LLp and LLbeta, by
Lalpha = ln(LLbeta)
Lbeta = 1/LLp
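A quick numerical check of this relationship, sketched with Python's scipy (which names the Log-Logistic family fisk); it is only an illustration, not part of Stat::Fit.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    p, beta = 2.5, 3.0                               # Log-Logistic shape and scale, min = 0
    x = stats.fisk(p, scale=beta).rvs(size=100_000, random_state=rng)
    loc, scale = stats.logistic.fit(np.log(x))       # fit a Logistic to ln(x)
    print(loc, np.log(beta))                         # Lalpha ~ ln(LLbeta)
    print(scale, 1.0 / p)                            # Lbeta ~ 1/LLp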
The Log-Logistic distribution is used to model the output of complex processes such as business failure,
product cycle time, etc. (see Johnson [1]).
Note for p=1, the Log-Logistic distribution decreases more rapidly than the Exponential distribution but
has a broader tail. For large p, the distribution becomes more symmetrical and moves away from the
minimum.
1. Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995, John Wiley & Sons, p. 151
Lognormal Distribution (min, mu, sigma)
min = minimum x
mu = mean of the included Normal
sigma = standard deviation of the included Normal

f(x) = exp( -[ ln(x - min) - mu ]^2 / (2 sigma^2) ) / [ (x - min) sigma sqrt(2 pi) ]

Description
The Lognormal distribution is a continuous distribution bounded on the lower side. It is always 0 at
minimum x, rising to a peak that depends on both mu and sigma, then decreasing monotonically for increasing
x.
By definition, the natural logarithm of a Lognormal random variable is a Normal random variable. Its
parameters are usually given in terms of this included Normal.
The Lognormal distribution can also be used to approximate the Normal distribution, for small sigma, while
maintaining its strictly positive values of x [actually (x-min)].
The Lognormal distribution is used in many different areas including the distribution of particle size in
naturally occurring aggregates, dust concentration in industrial atmospheres, the distribution of minerals
present in low concentrations, duration of sickness absence, physicians' consulting time, lifetime distributions in reliability, distribution of income, employee retention, and many applications modeling weight, height, etc. (see Johnson [1]).
The Lognormal distribution can provide very peaked distributions for increasing sigma, indeed, far more
peaked than can be easily represented in graphical form.
1. Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 207
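The defining log relationship is easy to verify numerically; the sketch below uses Python's numpy (illustration only, with arbitrary example parameters).

    import numpy as np

    rng = np.random.default_rng(0)
    xmin, mu, sigma = 5.0, 1.2, 0.4
    x = xmin + rng.lognormal(mean=mu, sigma=sigma, size=100_000)
    logged = np.log(x - xmin)          # ln(x - min) should be Normal(mu, sigma)
    print(logged.mean(), logged.std())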
Negative Binomial Distribution (p,k)
x = number of trials to get k events...
p = probability of event = [0,1]
k = number of desired events = positive integer

p(x) = [ (k + x - 1)! / ( x! (k - 1)! ) ] p^k (1 - p)^x

Description
The Negative Binomial distribution is a discrete distribution bounded on the low side at 0 and
unbounded on the high side. The Negative Binomial distribution reduces to the Geometric Distribution
for k=1. The Negative Binomial distribution gives the total number of trials, x to get k events (fail-
ures...), each with the constant probability, p, of occurring.
The Negative Binomial distribution has many uses; some occur because it provides a good approxima-
tion for the sum or mixing of other discrete distributions. By itself, it is used to model accident statistics, birth-and-death processes, market research and consumer expenditure, lending library data, biometrics data, and many others (see Johnson [1]).
Several examples with increasing k are shown above. With smaller probability, p, the number of classes
is so large that the distribution is best plotted as a filled polygon. Note that the probabilities are actually
weights at each integer, but are represented by broader bars for visibility.
1. Univariate Discrete Distributions, Norman L. Johnson, Samuel Kotz, Adrienne W. Kemp, 1992, John Wiley & Sons, p. 223
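The reduction to the Geometric distribution at k=1 can be checked with Python's scipy (illustration only; note that scipy's nbinom counts failures before the k-th event, so its Geometric counterpart must be shifted to start at 0).

    import numpy as np
    from scipy import stats

    p = 0.3
    x = np.arange(0, 10)
    nb = stats.nbinom(1, p).pmf(x)       # Negative Binomial with k = 1
    geo = stats.geom(p).pmf(x + 1)       # Geometric, shifted to start at 0
    print(np.allclose(nb, geo))          # True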
Normal Distribution (mu, sigma)
mu = shift parameter
sigma = scale parameter = standard deviation
Description
The Normal distribution is an unbounded continuous distribution. It is sometimes called a Gaussian dis-
tribution or the bell curve. Because of its property of representing an increasing sum of small, indepen-
dent errors, the Normal distribution finds many, many uses in statistics. It is wrongly used in many
situations. Possibly, the most important test in the fitting of analytical distributions is the elimination of
the Normal distribution as a possible candidate (see Johnson [1]).
The Normal distribution is used as an approximation for the Binomial distribution when the values of n,
p are in the appropriate range. The Normal distribution is frequently used to represent symmetrical
data, but suffers from being unbounded in both directions. If the data is known to have a lower bound,
it may be better represented by suitable parameterization of the Lognormal, Weibull or Gamma distribu-
tions. If the data is known to have both upper and lower bounds, the Beta distribution can be used,
although much work has been done on truncated Normal distributions (not supported in Stat::Fit).
The Normal distribution, shown above, has the familiar bell shape. It is unchanged in shape with changes in mu or sigma.
1. Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 80
f(x) = exp( -(x - mu)^2 / (2 sigma^2) ) / ( sigma sqrt(2 pi) )
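The Binomial approximation mentioned above can be illustrated with the following scipy sketch (not part of Stat::Fit); the continuity correction of 0.5 is a standard refinement.

    import numpy as np
    from scipy import stats

    n, p = 100, 0.4
    mu, sigma = n * p, np.sqrt(n * p * (1 - p))
    x = np.arange(25, 56)
    exact = stats.binom(n, p).pmf(x)
    approx = stats.norm(mu, sigma).cdf(x + 0.5) - stats.norm(mu, sigma).cdf(x - 0.5)
    print(np.max(np.abs(exact - approx)))   # small when n*p and n*(1-p) are large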
Pareto Distribution (min, alpha)
min = minimum x
alpha = scale parameter > 0
Description
The Pareto distribution is a continuous distribution bounded on the lower side. It has a finite value at
the minimum x and decreases monotonically for increasing x. A Pareto random variable is the exponen-
tial of an Exponential random variable, and possesses many of the same characteristics.
The Pareto distribution has, historically, been used to represent the income distribution of a society. It is
also used to model many empirical phenomena with very long right tails, such as city population sizes,
occurrence of natural resources, stock price fluctuations, size of firms, brightness of comets, and error
clustering in communication circuits (see Johnson [1]).
The shape of the Pareto curve changes slowly with alpha, but the tail of the distribution increases dramatically with decreasing alpha.
1. Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 607
f(x) = alpha min^alpha / x^(alpha + 1)
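The statement that a Pareto random variable is the exponential of an Exponential random variable can be checked numerically; the sketch below uses Python's numpy and scipy (illustration only).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    xmin, alpha = 2.0, 3.0
    y = rng.exponential(scale=1.0 / alpha, size=100_000)   # Exponential with rate alpha
    x = xmin * np.exp(y)                                    # should be Pareto(min, alpha)
    b, loc, scale = stats.pareto.fit(x, floc=0)
    print(b, alpha)        # recovered shape ~ alpha
    print(scale, xmin)     # recovered scale ~ min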
Pearson 5 Distribution (min, alpha, beta)
min = minimum x
alpha = shape parameter > 0
beta = scale parameter > 0
Description
The Pearson 5 distribution is a continuous distribution with a bound on the lower side. The Pearson 5
distribution is sometimes called the Inverse Gamma distribution due to the reciprocal relationship
between a Pearson 5 random variable and a Gamma random variable.
The Pearson 5 distribution is useful for modeling time delays where some minimum delay value is
almost assured and the maximum time is unbounded and variably long, such as time to complete a diffi-
cult task, time to respond to an emergency, time to repair a tool, etc. Similar space situations also exist
such as manufacturing space for a given process (see Law & Kelton [1]).
The Pearson 5 distribution starts slowly near its minimum and has a peak slightly removed from it, as
shown above. With decreasing alpha, the peak gets flatter (see vertical scale) and the tail gets much broader.
1. Simulation Modeling & Analysis, Averill M. Law, W. David Kelton, 1991, McGraw-Hill, p. 339
f(x) = exp( -beta / (x - min) ) / [ beta Gamma(alpha) ( (x - min)/beta )^(alpha + 1) ]
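The reciprocal relationship to the Gamma distribution can be verified numerically; the sketch below uses Python's numpy and scipy (scipy's name for the Pearson 5 family is invgamma; illustration only).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, beta = 4.0, 2.0
    g = rng.gamma(shape=alpha, scale=1.0 / beta, size=100_000)
    x = 1.0 / g                                   # reciprocal of a Gamma variate
    a_hat, loc, scale = stats.invgamma.fit(x, floc=0)
    print(a_hat, alpha)    # ~ alpha
    print(scale, beta)     # ~ beta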
Pearson 6 Distribution (min, beta, p, q)
x > min
min in (-inf, inf)
beta > 0
p > 0
q > 0
Description
The Pearson 6 distribution is a continuous distribution bounded on the low side. The Pearson 6 distribu-
tion is sometimes called the Beta distribution of the second kind due to the relationship of a Pearson 6
random variable to a Beta random variable. When min=0, beta=1, p=nu1/2, q=nu2/2, the Pearson 6 distribution reduces to the F distribution of nu1, nu2, which is used for many statistical tests of goodness of fit (see Johnson [1]).
f(x) = [ (x - min)/beta ]^(p - 1) / { beta B(p, q) [ 1 + (x - min)/beta ]^(p + q) }
Like the Gamma distribution, it has three distinct regions. For p=1, the Pearson 6 distribution resembles
the Exponential distribution, starting at a finite value at minimum x and decreasing monotonically
thereafter. For p<1, the Pearson 6 distribution tends to infinity at minimum x and decreases monotoni-
cally for increasing x. For p>1, the Pearson 6 distribution is 0 at minimum x, peaks at a value that
depends on both p and q, decreasing monotonically thereafter.
The Pearson 6 distribution appears to have found little direct use, except in its reduced form as the F dis-
tribution where it serves as the distribution of the ratio of independent estimators of variance and pro-
vides the final test for the analysis of variance.
The three regions of the Pearson 6 distribution are shown above. Also note that the distribution becomes sharply peaked just off the minimum for increasing q.
1. Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995, John Wiley & Sons, p. 322
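The reduction to the F distribution can be checked with scipy (where the Pearson 6 with min=0, beta=1 is called betaprime); this is an illustration only, not part of Stat::Fit.

    import numpy as np
    from scipy import stats

    nu1, nu2 = 6, 10
    x = np.linspace(0.05, 5.0, 200)
    pearson6_pdf = stats.betaprime(nu1 / 2, nu2 / 2).pdf(x)
    # Scaling an F(nu1, nu2) variate by nu1/nu2 gives the same density.
    f_scaled_pdf = stats.f(nu1, nu2).pdf(x * nu2 / nu1) * nu2 / nu1
    print(np.allclose(pearson6_pdf, f_scaled_pdf))   # True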
Poisson Distribution (lambda)
lambda = rate of occurrence
Description
The Poisson distribution is a discrete distribution bounded at 0 on the low side and unbounded on the
high side. The Poisson distribution is a limiting form of the Hypergeometric distribution.
The Poisson distribution finds frequent use because it represents the infrequent occurrence of events
whose rate is constant. This includes many types of events in time or space such as arrivals of telephone
calls, defects in semiconductor manufacturing, defects in all aspects of quality control, molecular distri-
butions, stellar distributions, geographical distributions of plants, shot noise, etc. It is an important starting point in queuing theory and reliability theory [1]. Note that the time between arrivals (defects) is Exponentially distributed, which makes this distribution a particularly convenient starting point even when the process is more complex.
The Poisson distribution peaks near lambda and falls off rapidly on either side. Note that the probabilities are
actually weights at each integer, but are represented by broader bars for visibility.
1. Univariate Discrete Distributions, Norman L. Johnson, Samuel Kotz, Adrienne W. Kemp, 1992, John Wiley & Sons, p. 151
p(x) = e^(-lambda) lambda^x / x!
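The connection to Exponential inter-arrival times can be demonstrated by simulation; the numpy sketch below (illustration only) builds a Poisson process from Exponential gaps and counts events per unit interval.

    import numpy as np

    rng = np.random.default_rng(0)
    lam, horizon = 4.0, 10_000
    gaps = rng.exponential(scale=1.0 / lam, size=int(2 * lam * horizon))
    arrivals = np.cumsum(gaps)
    arrivals = arrivals[arrivals < horizon]
    counts = np.bincount(arrivals.astype(int), minlength=horizon)
    print(counts.mean(), counts.var())    # both ~ lambda, as expected for Poisson counts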
Power Function Distribution (min, max, alpha)
min = minimum value of x
max = maximum value of x
alpha = shape parameter > 0
Description
The Power Function distribution is a continuous distribution that has both upper and lower finite bounds, and is a special case of the Beta distribution with q=1 (see Johnson et al. [1]). The Uniform distribution is a special case of the Power Function distribution with alpha=1.
As can be seen from the examples above, the Power Function distribution can approach zero or infinity
at its lower bound, but always has a finite value at its upper bound. Alpha controls the value at the lower
bound as well as the shape.
1. Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995, John Wiley & Sons, p. 210
f(x) = alpha (x - min)^(alpha - 1) / (max - min)^alpha
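Both special cases mentioned above can be confirmed with scipy, where the Power Function family appears as powerlaw; the sketch below is an illustration only.

    import numpy as np
    from scipy import stats

    alpha, lo, hi = 2.5, 1.0, 4.0
    x = np.linspace(lo, hi, 200)
    pf = stats.powerlaw(alpha, loc=lo, scale=hi - lo).pdf(x)
    be = stats.beta(alpha, 1.0, loc=lo, scale=hi - lo).pdf(x)     # Beta with q = 1
    un = stats.uniform(lo, hi - lo).pdf(x)
    print(np.allclose(pf, be))                                    # True
    print(np.allclose(stats.powerlaw(1.0, loc=lo, scale=hi - lo).pdf(x), un))  # alpha = 1 is Uniform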
Rayleigh Distribution (min, sigma)
min = minimum x
sigma = scale parameter > 0
Description
The Rayleigh distribution is a continuous distribution bounded on the lower side. It is a special case of
the Weibull distribution with alpha = 2 and beta/sqrt(2) = sigma. Because of the fixed shape parameter,
the Rayleigh distribution does not change shape although it can be scaled.
The Rayleigh distribution is frequently used to represent lifetimes because its hazard rate increases lin-
early with time, e.g. the lifetime of vacuum tubes. This distribution also finds application in noise prob-
lems in communications (see Johnson et al. [1] and Shooman [2]).
1. Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 456
2. Probabilistic Reliability: An Engineering Approach, Martin L. Shooman, 1990, Robert E. Krieger, p. 48
f(x) = [ (x - min) / sigma^2 ] exp( -(x - min)^2 / (2 sigma^2) )
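The Weibull special case can be verified directly with scipy (illustration only, not part of Stat::Fit).

    import numpy as np
    from scipy import stats

    sigma = 2.0
    x = np.linspace(0.0, 10.0, 200)
    ray = stats.rayleigh(scale=sigma).pdf(x)                            # min = 0
    wei = stats.weibull_min(2.0, scale=sigma * np.sqrt(2.0)).pdf(x)     # alpha = 2, beta = sigma*sqrt(2)
    print(np.allclose(ray, wei))   # True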
Triangular Distribution (min, max, mode)
min = minimum x
max = maximum x
mode = most likely x

f(x) = 2 (x - min) / [ (max - min)(mode - min) ]   for min <= x <= mode
f(x) = 2 (max - x) / [ (max - min)(max - mode) ]   for mode < x <= max

Description
The Triangular distribution is a continuous distribution bounded on both sides.
The Triangular distribution is often used when no or little data is available; it is rarely an accurate repre-
sentation of a data set (see Law & Kelton [1]). However, it is employed as the functional form of regions
). However, it is employed as the functional form of regions
for fuzzy logic due to its ease of use.
The Triangular distribution can take on very skewed forms, as shown above, including negative skew-
ness. For the exceptional cases where the mode is either the min or max, the Triangular distribution
becomes a right triangle.
1. Simulation Modeling & Analysis, Averill M. Law, W. David Kelton, 1991, McGraw-Hill, p. 341
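The skewed forms are easy to generate; the numpy sketch below (illustration only) shows the negative skewness obtained when the mode lies near the maximum.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.triangular(left=0.0, mode=9.0, right=10.0, size=100_000)
    print(stats.skew(x))   # negative, since the mode is close to the maximum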
Uniform Distribution (min, max)
min = minimum x
max = maximum x
Description
The Uniform distribution is a continuous distribution bounded on both sides. Its density does not
depend on the value of x. It is a special case of the Beta distribution. It is frequently called the rectan-
gular distribution (see Johnson [1]). Most random number generators provide samples from the Uniform
distribution on (0,1) and then convert these samples to random variates from other distributions.
The Uniform distribution is used to represent a random variable with constant likelihood of being in any
small interval between min and max. Note that the probability of either the min or max value is 0; the
end points do NOT occur. If the end points are necessary, try the sum of two opposing right Triangular
distributions.
1. Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995, John Wiley & Sons, p. 276
f(x) = 1 / (max - min)
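The conversion of Uniform(0,1) samples into other variates is typically done by inverting the target cumulative distribution; the numpy sketch below (illustration only) generates Exponential variates that way.

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(0.0, 1.0, size=100_000)    # Uniform(0,1) samples
    lam = 2.0
    x = -np.log(1.0 - u) / lam                 # inverse of the Exponential CDF
    print(x.mean())                            # ~ 1/lam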
Weibull Distribution (min, alpha, beta)
min = minimum x
alpha = shape parameter > 0
beta = scale parameter > 0

f(x) = (alpha/beta) [ (x - min)/beta ]^(alpha - 1) exp( -[ (x - min)/beta ]^alpha )

Description
The Weibull distribution is a continuous distribution bounded on the lower side. Because it provides
one of the limiting distributions for extreme values, it is also referred to as the Frechet distribution and
the Weibull-Gnedenko distribution. Unfortunately, the Weibull distribution has been given various
functional forms in the many engineering references; the form above is the standard form given in
Johnson [1].
1. Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons, p. 628
Like the Gamma distribution, it has three distinct regions. For alpha=1, the Weibull distribution is reduced to the Exponential distribution, starting at a finite value at minimum x and decreasing monotonically thereafter. For alpha<1, the Weibull distribution tends to infinity at minimum x and decreases monotonically for increasing x. For alpha>1, the Weibull distribution is 0 at minimum x, peaks at a value that depends on both alpha and beta, decreasing monotonically thereafter. Uniquely, the Weibull distribution has negative skewness for alpha>3.6.
The Weibull distribution can also be used to approximate the Normal distribution for alpha=3.6, while
maintaining its strictly positive values of x [actually (x-min)], although the kurtosis is slightly smaller
than 3, the Normal value.
The Weibull distribution derived its popularity from its use to model the strength of materials, and has
since been used to model just about everything. In particular, the Weibull distribution is used to repre-
sent wearout lifetimes in reliability, wind speed, rainfall intensity, health related issues, germination,
duration of industrial stoppages, migratory systems, and thunderstorm data (see Johnson [1] and Shooman [2]).
1. ibid.
2. Probabilistic Reliability: An Engineering Approach, Martin L. Shooman, 1990, Robert E. Krieger, p. 190
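The near-symmetry around alpha = 3.6 can be seen from the skewness; the scipy sketch below (illustration only) shows the sign change as alpha increases.

    from scipy import stats

    for alpha in (1.0, 2.0, 3.6, 5.0):
        skew = stats.weibull_min(alpha).stats(moments='s')
        print(alpha, float(skew))   # passes through ~0 near alpha = 3.6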
Bibliography
An Introduction to Mathematical Statistics, H. D. Brunk, 1960, Ginn & Co.
Continuous Univariate Distributions, Volume 1, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1994, John Wiley & Sons
Continuous Univariate Distributions, Volume 2, Norman L. Johnson, Samuel Kotz, N. Balakrishnan, 1995, John Wiley & Sons
Discrete Event System Simulation, Jerry Banks, John S. Carson II, 1984, Prentice-Hall
Introductory Statistical Analysis, Donald L. Harnett, James L. Murphy, 1975, Addison-Wesley
Kendall's Advanced Theory of Statistics, Volume 1 - Distribution Theory, Alan Stuart & J. Keith Ord, 1994, Edward Arnold
Kendall's Advanced Theory of Statistics, Volume 2, Alan Stuart & J. Keith Ord, 1991, Oxford University Press
Seminumerical Algorithms, Volume 2, Donald E. Knuth, 1981, Addison-Wesley
Simulation Modeling & Analysis, Averill M. Law, W. David Kelton, 1991, McGraw-Hill
Univariate Discrete Distributions, Norman L. Johnson, Samuel Kotz, Adrienne W. Kemp, 1992, John Wiley & Sons
Statistical Distributions, Second Edition, Merran Evans, Nicholas Hastings, Brian Peacock, 1993, John Wiley & Sons
Index
A
absolute value 13
AD statistic 28, 29
analytical distributions 22, 25, 84
Anderson Darling test 25, 28, 29, 30
ascending cumulative 35
Auto 22, 31, 46
autocorrelation 20, 21, 33, 35
B
Beta distribution 56, 57, 84, 87, 93
binned data 16, 19
Binomial distribution 58, 59, 84
Box Plot 38
C
Chi Squared Distribution 60
chi squared test 25, 26, 27, 51
classes 11, 12, 16, 19, 25, 26, 30
comparison graph 26, 34, 37
continuous data 9, 11, 16, 19, 25, 26, 28
continuous distributions 11, 12, 22, 24, 25, 29,
31
cumulative distribution 19, 27, 28, 29, 35, 39,
52
D
Data Table 9, 50
density 35
descending cumulative 35
descriptive statistics 18, 50
difference graph 38, 52
discrete data 11, 12, 16, 25, 28
discrete distribution 11, 12, 19, 22, 29, 58, 62,
70, 82, 89
Discrete Uniform distribution 62
distribution fit 23, 51
Distribution Graph 38
distribution viewer 40
E
Erlang distribution 63, 68
Exponential distribution 63, 64, 65, 68, 78, 79,
85, 88, 94
Export 46
Export of Empirical Distributions 46
Extreme Value distribution 66
Extreme Value type 1B Distribution 67
F
file output 46
filter 14
fit setup 23, 26, 27, 51
fonts 36, 37, 45
frequency 35
G
Gamma distribution 63, 68, 69, 78, 84, 86, 88,
94
generate 15, 29, 50
Geometric distribution 70, 82
goodness of fit 11, 22, 23, 25, 26, 27, 28, 29, 31,
51, 87
graph colors 37
graph scale 36
graph text 36
graphics 16, 33, 34, 35, 41, 46
graphics style 33, 34, 35
H
histogram 11, 16, 19, 26, 33, 34, 35, 37
I
independence tests 20
input data 16
input graph 16, 50
input options 11, 16, 19, 25
intervals 9, 11, 12, 16, 19, 25, 26, 27, 30
Inverse gaussian distribution 71
Inverse Weibull Distribution 72
J
Johnson SB Distribution 73
Johnson SU Distribution 75
K
Kolmogorov Smirnov test 25, 27, 29
kurtosis 95
L
level of significance 21, 24, 26, 27, 28, 29, 30
Logistic distribution 51, 77
Log-Logistic distribution 78
Lognormal distribution 80, 81
lower bound 29, 56, 84
Lower Bounds 11
M
manual data entry 6, 9
maximum likelihood 23, 24
maximum likelihood estimates 23, 31
mean 20, 21, 65, 77, 80
mode 92
moments 23, 24, 25
N
Negative Binomial distribution 82
Normal distribution 51, 52, 58, 69, 77, 79, 80,
84, 95
normalization 35
O
operate 12, 20
P
Pareto distribution 85
Pearson 5 distribution 86
Pearson 6 distribution 87, 88
Poisson distribution 89
Power Function Distribution 90
P-P Plot 39, 53
precision 11, 12
print 44, 45
printer set-up 45
p-value 28
Q
Q-Q Plot 39, 52
R
random number stream 15
random variates 15, 46, 93
rank 31
Rayleigh Distribution 91
reciprocal 86
relative frequency 19, 26
Repopulate 14
report 44, 45
result graphs 25, 34, 52
runs test 21, 22
S
scatter plot 20, 21, 35, 51
Scott 12
skewness 24, 25, 64, 92, 95
standard deviation 21, 80, 84
statistics 16, 18, 19, 20, 21, 23, 26, 27, 50
Sturges 11, 25
T
test statistic 25, 26, 27, 28, 29
transform 13
Triangular distribution 92, 93
truncation 12
U
Uniform distribution 46, 56, 93
V
variance 20, 88
W
Weibull distribution 53, 65, 84, 94, 95