Distribution Free Methods
Distribution free methods do not rely on assumptions about the form of the population data distribution(s) and do not require interval level measurement. They estimate an attribute of a population, test hypotheses about that attribute, its relationship with some other attribute, or differences on that attribute across populations, across time, or across related constructs.
1. The Binomial Test procedure compares an observed proportion of cases to the proportion expected under a binomial distribution with a specified probability parameter. The observed proportion is defined either by the number of cases having the first value of a dichotomous variable or by the number of cases at or below a given cut point on a scale variable. By default, the probability parameter for both groups is 0.5, although this may be changed. To change the probability, you enter a test proportion for the first group; the probability for the second group is then 1 minus the probability for the first group. Additionally, descriptive statistics and/or quartiles for the test variable may be displayed.
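As a rough sketch of the same comparison, recent versions of scipy provide a binomial test; the counts and test proportion below are made-up values:

from scipy import stats

# Hypothetical data: 60 of 100 cases take the first value of a dichotomous variable,
# tested against the default probability parameter of 0.5.
result = stats.binomtest(k=60, n=100, p=0.5, alternative="two-sided")

print(result.pvalue)            # two-sided p-value
print(result.proportion_ci())   # confidence interval for the observed proportion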
2. The Chi-Square Test procedure tabulates a variable into categories and tests the hypothesis that the observed frequencies do not differ from their expected values (a short example follows the list below). The Chi-Square Test allows you to:
Include all categories of the test variable, or limit the test to a specific range.
Use standard or customized expected values.
Obtain descriptive statistics and/or quartiles on the test variable.
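A minimal sketch of the same idea with scipy, using made-up observed counts and customized expected values (the two totals must match):

import numpy as np
from scipy import stats

observed = np.array([18, 22, 30, 30])   # observed counts in four categories
expected = np.array([25, 25, 25, 25])   # customized expected counts (same total as observed)

stat, p_value = stats.chisquare(f_obs=observed, f_exp=expected)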
Runs Test
The runs test is a statistical procedure that examines whether a string of data occurs in a random order, given a specific distribution. It analyzes the occurrence of similar events that are separated by events that are different.
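The runs test is not part of base scipy, so the sketch below hand-codes the usual large-sample (normal-approximation) version around the sample median, using the standard Wald–Wolfowitz formulas for the expected number of runs and its variance:

import numpy as np
from scipy import stats

def runs_test(x, cutoff=None):
    # Classify each observation as above (+) or below (-) the cutoff
    # (the sample median by default); observations equal to the cutoff are dropped.
    x = np.asarray(x, dtype=float)
    if cutoff is None:
        cutoff = np.median(x)
    signs = np.sign(x - cutoff)
    signs = signs[signs != 0]
    n_pos = np.sum(signs > 0)
    n_neg = np.sum(signs < 0)
    n = n_pos + n_neg
    runs = 1 + np.sum(signs[1:] != signs[:-1])                # number of runs of like signs
    mean_runs = 2.0 * n_pos * n_neg / n + 1.0                 # expected runs under randomness
    var_runs = (2.0 * n_pos * n_neg * (2.0 * n_pos * n_neg - n)) / (n ** 2 * (n - 1))
    z = (runs - mean_runs) / np.sqrt(var_runs)
    p_value = 2.0 * stats.norm.sf(abs(z))                     # two-sided normal approximation
    return runs, z, p_value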
Kolmogorov–Smirnov Test
The Kolmogorov–Smirnov test (K–S test) is a nonparametric test for the equality of continuous, one-
dimensional probability distributions that can be used to compare a sample with a reference probability
distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). The Kolmogorov–
Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the
cumulative distribution function of the reference distribution, or between the empirical distribution
functions of two samples. The null distribution of this statistic is calculated under the null hypothesis
that the samples are drawn from the same distribution (in the two-sample case) or that the sample is
drawn from the reference distribution (in the one-sample case). In each case, the distributions
considered under the null hypothesis are continuous distributions but are otherwise unrestricted.
The two-sample K–S test is one of the most useful and general nonparametric methods for comparing
two samples, as it is sensitive to differences in both location and shape of the empirical cumulative
distribution functions of the two samples.
Siegel–Tukey Test
In statistics, the Siegel–Tukey test, named after Sidney Siegel and John Tukey, is a non-parametric test
which may be applied to data measured at least on an ordinal scale. It tests for differences in scale
between two groups.
The test is used to determine if one of two groups of data tends to have more widely dispersed values
than the other. In other words, it determines whether one of the two groups tends to deviate from the
center of the ordinal scale, sometimes to the right and sometimes to the left.
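The Siegel–Tukey test is not shipped with scipy, so the following is only a sketch: it assigns the alternating Siegel–Tukey ranks to the pooled, sorted data (ignoring ties) and then feeds those ranks into a Mann–Whitney U test, which is the usual way the test is carried out:

import numpy as np
from scipy import stats

def siegel_tukey(a, b):
    # Pool the two samples and sort them.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    pooled = np.concatenate([a, b])
    order = np.argsort(pooled, kind="mergesort")
    n = pooled.size

    # Alternating ranks: 1 to the smallest value, 2-3 to the two largest,
    # 4-5 to the next two smallest, 6-7 to the next two largest, and so on.
    st_rank_for_sorted_pos = np.empty(n, dtype=float)
    lo, hi, rank = 0, n - 1, 1
    take_low, take_one = True, True
    while lo <= hi:
        count = 1 if take_one else 2
        for _ in range(count):
            if lo > hi:
                break
            if take_low:
                st_rank_for_sorted_pos[lo] = rank
                lo += 1
            else:
                st_rank_for_sorted_pos[hi] = rank
                hi -= 1
            rank += 1
        take_low = not take_low
        take_one = False

    # Map the Siegel-Tukey ranks back to the original observations.
    st_ranks = np.empty(n, dtype=float)
    st_ranks[order] = st_rank_for_sorted_pos
    ranks_a, ranks_b = st_ranks[:a.size], st_ranks[a.size:]

    # With the relabelled ranks, the rank-sum machinery tests for a
    # difference in spread rather than in location.
    return stats.mannwhitneyu(ranks_a, ranks_b, alternative="two-sided")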
Parametric Methods
Parametric statistical procedures, by contrast, rely on assumptions about the shape of the distribution
in the underlying population (e.g., that it is normal) and about the form or parameters of the
population distribution from which the sample was drawn.
Paired t test
This function gives a paired Student t test, confidence intervals for the difference between a pair of
means and, optionally, limits of agreement for a pair of samples (Armitage and Berry, 1994; Altman,
1991).
The paired t test provides a hypothesis test of the difference between population means for a pair of
random samples whose differences are approximately normally distributed. Please note that a pair of
samples, each of which is not drawn from a normal distribution, often yields differences that are
normally distributed.
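A minimal sketch with scipy, using made-up before/after measurements; the confidence interval for the mean difference is computed from the paired differences:

import numpy as np
from scipy import stats

# Hypothetical before/after measurements on the same eight subjects.
before = np.array([102.0, 98.0, 110.0, 105.0, 99.0, 101.0, 97.0, 103.0])
after = np.array([100.0, 95.0, 108.0, 101.0, 98.0, 100.0, 96.0, 100.0])

t_stat, p_value = stats.ttest_rel(before, after)   # paired Student t test

diff = before - after
ci = stats.t.interval(0.95, diff.size - 1,
                      loc=diff.mean(), scale=stats.sem(diff))   # 95% CI for the mean difference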
Single sample t test
This function gives a single sample Student t test with a confidence interval for the mean difference.
The single sample t method tests the null hypothesis that the population mean is equal to a specified
value. If this value is zero (or not entered), then the confidence interval for the sample mean is given.
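For example, with scipy (the sample values and the hypothesised mean of 5.0 are made up):

from scipy import stats

sample = [5.1, 4.9, 5.6, 5.2, 4.8, 5.4, 5.0, 5.3]
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)   # H0: population mean equals 5.0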
Unpaired t test
The unpaired t method tests the null hypothesis that the population means of two independent, random
samples drawn from approximately normally distributed populations are equal.
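A corresponding sketch with scipy, again on made-up data:

from scipy import stats

group1 = [23.1, 21.4, 25.0, 22.8, 24.3, 23.7]
group2 = [20.2, 19.8, 22.1, 21.0, 20.5, 19.4]

# Student's unpaired t test assumes equal variances; equal_var=False gives Welch's variant.
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)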