What Statistical Analysis Should I Use?
Sunday, June 4, 2017
Introduction
For a useful general guide see:
Policy: Twenty tips for interpreting scientific claims. Nature News & Comment.
William J. Sutherland, David Spiegelhalter and Mark Burgman.
Nature, Volume 503, Pages 335-337 (21 November 2013).
Index End
3
Introduction
These examples are loosely based on a UCLA tutorial sheet. All can be realised via the syntax window; the appropriate menu selections are also indicated. The guidelines on APA reporting style are motivated by Using SPSS for Windows and Macintosh: Analyzing and Understanding Data, Samuel B. Green and Neil J. Salkind.
Index End
4
About the A data file
Most of the examples in this document will use a data file called A, high
school and beyond. This data file contains 200 observations from a
sample of high school students with demographic information about the
students, such as their gender (female), socio-economic status (ses) and
ethnic background (race). It also contains a number of scores on
standardized tests, including tests of reading (read), writing (write),
mathematics (math) and social studies (socst).
5
About the A data file
Syntax:-
display dictionary
/VARIABLES id female race ses schtyp prog read write math science socst.
Variable  Position  Label                  Value labels
id        1
female    2                                .00 Male; 1.00 Female
race      3                                1.00 Hispanic; 2.00 Asian; 3.00 African-Amer; 4.00 White
ses       4                                1.00 Low; 2.00 Middle; 3.00 High
schtyp    5         type of school         1.00 Public; 2.00 Private
prog      6         type of program        1.00 General; 2.00 Academic; 3.00 Vocation
read      7         reading score
write     8         writing score
math      9         math score
science   10        science score
socst     11        social studies score
Index End
One sample t-test
A one sample t-test allows us to test whether a sample mean (of a
normally distributed interval variable) significantly differs from a
hypothesized value. For example, using the A data file, say we wish to
test whether the average writing score (write) differs significantly
from 50. Test variable writing score (write), Test value 50. We can do
this as shown below.
Syntax:- t-test
/testval=50
/variable=write.
8
One sample t-test
One-Sample Test (Test Value = 50)
writing score: 95% Confidence Interval of the Difference, Lower 1.4533, Upper 4.0967.
The sample has a significantly higher mean on the writing test than 50. This is consistent with the reported confidence interval for the difference (1.45, 4.10), that is (51.45, 54.10) for the mean itself, which excludes 50; the mid-point of the interval is, of course, the mean.
Information point: Confidence interval. Crichton, N. Journal of Clinical Nursing, 8(5), 618, 1999.
10
One sample t-test
Effect Size Statistics
SPSS supplies all the information necessary to compute an effect size, d ,
given by:
d = Mean Difference / SD
where the mean difference and standard deviation are reported in the SPSS
output. We can also compute d from the t value by using the equation
d = t / √N
where N is the total sample size. d evaluates the degree that the mean on
the test variable differs from the test value in standard deviation units.
Potentially, d can range in value from negative infinity to positive infinity. If
d equals 0, the mean of the scores is equal to the test value. As d deviates
from 0, we interpret the effect size to be stronger. What is a small versus a
large d is dependent on the area of investigation. However, d values of .2, .5,
and .8, regardless of sign, are by convention interpreted as small, medium,
and large effect sizes, respectively.
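As a quick arithmetic check, using the t and N reported in the APA section below (no additional output is assumed): d = t / √N = 4.14 / √200 ≈ 0.29, the value quoted there as a medium effect.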
11
One sample t-test
An APA Results Section
A one-sample t test was conducted to evaluate whether the mean of the
writing scores was significantly different from 50, the accepted mean. The
sample mean of 52.78 ( SD = 9.48) was significantly different from 50, t
(199) = 4.14, p < .001. The 95% confidence interval for the writing scores
mean ranged from 51.45 to 54.10. The effect size d of .29 indicates a
medium effect.
Index End
12
One sample median test
A one sample median test allows us to test whether a sample median
differs significantly from a hypothesized value. We will use the same
variable, write, as we did in the one sample t-test example above, but
we do not need to assume that it is interval and normally distributed
(we only need to assume that write is an ordinal variable).
Syntax:- nptests
/onesample test (write) wilcoxon(testvalue=50).
Index End
18
Binomial test
A one sample binomial test allows us to test whether the proportion of
successes on a two-level categorical dependent variable significantly
differs from a hypothesized value. For example, using the A data file, say
we wish to test whether the proportion of females (female) differs
significantly from 50%, i.e., from .5. We can do this as shown below.
Either
Menu selection:- Analyze > Nonparametric Tests > One Sample
Or
Menu selection:- Analyze > Nonparametric Tests > Legacy Dialogs > Binomial
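Either way, the dialogs can be replaced by the syntax window; a minimal syntax sketch for the same test (an assumption, not shown in the source) is:
Syntax:- npar tests
/binomial (.50) = female.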
26
Binomial test
Binomial Test
female   Group 1   Male     N 91    Observed Prop. .46    Test Prop. .50    Exact Sig. (2-tailed) .229
         Group 2   Female   N 109   Observed Prop. .54
         Total              N 200   Observed Prop. 1.00
27
Binomial test
An APA Results Section
Index End
28
Chi-square goodness of fit
A chi-square goodness of fit test allows us to test whether the observed
proportions for a categorical variable differ from hypothesized
proportions. For example, let's suppose that we believe that the general
population consists of 10% Hispanic, 10% Asian, 10% African American
and 70% White folks. We want to test whether the observed
proportions from our sample differ significantly from these
hypothesized proportions. Note this example employs input data
(10, 10, 10, 70), in addition to A.
Menu selection:- At present the drop down menus cannot provide this
analysis.
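Since the drop-down menus cannot provide this analysis, the syntax window is used. A sketch using the hypothesised proportions above (the command itself is not reproduced in the source) is:
Syntax:- npar tests
/chisquare = race
/expected = 10 10 10 70.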
29
Chi-square goodness of fit
race
               Observed N   Expected N   Residual
hispanic       24           20.0         4.0
asian          11           20.0         -9.0
african-amer   20           20.0         .0
white          145          140.0        5.0
Total          200

Test Statistics (race)
Chi-Square 5.029a; df 3; Asymp. Sig. .170
a. 0 cells (.0%) have expected frequencies less than 5. The minimum expected cell frequency is 20.0.

These results show that the racial composition in our sample does not differ significantly from the hypothesized values that we supplied (chi-square with three degrees of freedom = 5.029, p = 0.170).
Index End
30
Two independent samples
t-test
An independent samples t-test is used when you want to compare the
means of a normally distributed interval dependent variable for two
independent groups. For example, using the A data file, say we wish to
test whether the mean for write is the same for males and females.
Menu selection:- Analyze > Compare Means > Independent Samples T test
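A syntax sketch equivalent to this menu selection (an assumption; only the menu route is shown in the source) is:
Syntax:- t-test groups = female(0 1)
/variables = write.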
Levene's test
In statistics, Levene's test is an inferential statistic used to assess the
equality of variances in different samples. Some common statistical
procedures assume that variances of the populations from which different
samples are drawn are equal. Levene's test assesses this assumption. It tests
the null hypothesis that the population variances are equal (called
homogeneity of variance or homoscedasticity). If the resulting p-value of
Levene's test is less than some critical value (typically 0.05), the obtained
differences in sample variances are unlikely to have occurred based on random
sampling from a population with equal variances. Thus, the null hypothesis of
equal variances is rejected and it is concluded that there is a difference
between the variances in the population.
37
Two independent samples t-
test - Effect Size Statistic
Eta squared, η², may be computed. An η² ranges in value from 0 to 1. It is interpreted as the proportion of variance of the test variable that is a function of the grouping variable. A value of 0 indicates that the difference in the mean scores is equal to 0, whereas a value of 1 indicates that the sample means differ, and the test scores do not differ within each group (i.e., perfect replication). You can compute η² with the following equation:

η² = t² / (t² + N1 + N2 - 2)

What is a small versus a large η² is dependent on the area of investigation. However, η² of .01, .06, and .14 are, by convention, interpreted as small, medium, and large effect sizes, respectively.
38
Two independent samples t-
test - An APA Results Section
An independent-samples t test was conducted to evaluate the hypothesis that the mean writing score was gender dependent. The test was significant, t(198) = -3.656, p < .0005. Male students scored lower (M = 50.12, SD = 10.31), on average, than females (M = 54.99, SD = 8.13). The 95% confidence interval for the difference in means was quite wide, ranging from -7.44 to -2.30. The eta squared index indicated that 7% (η² = .066) of the variance of the writing score was explained by gender.
Index End 39
Wilcoxon-Mann-Whitney
test
The Wilcoxon-Mann-Whitney test is a non-parametric analog to the
independent samples t-test and can be used when you do not assume that the
dependent variable is a normally distributed interval variable (you only assume
that the variable is at least ordinal). You will notice that the SPSS syntax for
the Wilcoxon-Mann-Whitney test is almost identical to that of the
independent samples t-test. We will use the same data file (the A data file)
and the same variables in this example as we did in the independent t-test
example above and will not assume that write, our dependent variable, is
normally distributed.
Nadim Nachar
Tutorials in Quantitative Methods for Psychology 2008 4(1) 13-20
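A syntax sketch for this test (an assumption; the source proceeds via the dialogs) is:
Syntax:- npar tests
/m-w = write by female(0 1).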
Wilcoxon-Mann-Whitney
test
Ranks
writing score   female   N     Mean Rank   Sum of Ranks
                male     91    85.63       7792.00
                female   109   112.92      12308.00
                Total    200

Test Statistics (grouping variable: female)
writing score: Mann-Whitney U 3606.000; Wilcoxon W 7792.000; Z -3.329; Asymp. Sig. (2-tailed) .001

The results suggest that there is a statistically significant difference between the underlying distributions of the write scores of males and the write scores of females (z = -3.329, p = 0.001).
46
Wilcoxon-Mann-Whitney test
- An APA Results Section
Index End
47
Chi-square test
(Contingency table)
A chi-square test is used when you want to see if there is a relationship
between two categorical variables. It is equivalent to the correlation
between nominal variables.
48
Chi-square test
(Contingency table)
In SPSS, the chisq option is used on the statistics subcommand of the
crosstabs command to obtain the test statistic and its associated p-
value. Using the A data file, let's see if there is a relationship between
the type of school attended (schtyp) and students' gender (female).
Remember that the chi-square test assumes that the expected value for
each cell is five or higher. This assumption is easily met in the examples
below. However, if this assumption is not met in your data, please see
the section on Fisher's exact test, below.
Either
Menu selection:- Analyze > Tables > Custom Tables
Syntax:- crosstabs
/tables=schtyp by female
/statistic=chisq phi. 49
Chi-square test
Select chi-squared. Alternately, the legacy Crosstabs dialog (Analyze > Descriptive Statistics > Crosstabs) may be used.
Case Processing Summary
type of school * female: Valid 200 (100.0%), Missing 0 (.0%), Total 200 (100.0%)

type of school * female Crosstabulation (Count)
                 Male   Female   Total
public           77     91       168
private          14     18       32
Total            91     109      200

Chi-Square Tests
                         Value   df   Asymp. Sig. (2-sided)
Pearson Chi-Square       .047a   1    .828
Continuity Correctionb   .001    1    .981

These results indicate that there is no statistically significant relationship between the type of school attended and gender (chi-square with one degree of freedom = 0.047, p = 0.828).
56
Chi-square test
An APA Results Section
57
Chi-square test
Let's look at another example, this time looking at the relationship
between gender (female) and socio-economic status (ses). The point of
this example is that one (or both) variables may have more than two
levels, and that the variables do not have to have the same number of
levels. In this example, female has two levels (male and female) and ses
has three levels (low, medium and high).
Syntax:- crosstabs
/tables=female by ses
/statistic=chisq phi.
58
Chi-square test
Case Processing Summary
female * ses: Valid 200 (100.0%), Missing 0 (.0%), Total 200 (100.0%)

female * ses Crosstabulation (Count)
          low   middle   high   Total
male      15    47       29     91
female    32    48       29     109
Total     47    95       58     200

Chi-Square Tests
                               Value    df   Asymp. Sig. (2-sided)
Pearson Chi-Square             4.577a   2    .101
Likelihood Ratio               4.679    2    .096
Linear-by-Linear Association   3.110    1    .078
N of Valid Cases               200
a. 0 cells (.0%) have expected count less than 5. The minimum expected count is 21.39.

Again we find that there is no statistically significant relationship between the variables (chi-square with two degrees of freedom = 4.577, p = 0.101). Note the absence of Fisher's Exact Test!
59
Chi-square test
An APA Results Section
Index End
60
Phi coefficient
Phi is a measure of association that adjusts the chi-square statistic by the sample size. The phi coefficient is the equivalent of the correlation between nominal variables.
Select Phi
61
Phi coefficient
The p values for the two tests are identical.

φ = √(χ² / n) = √(4.577 / 200) = 0.151   (χ² = 4.577, n = 200)
Index End
62
Fisher's exact test
The Fisher's exact test is used when you want to conduct a chi-square
test but one or more of your cells has an expected frequency of five or
less. Remember that the chi-square test assumes that each cell has an
expected frequency of five or more, but the Fisher's exact test has no
such assumption and can be used regardless of how small the expected
frequency is. In SPSS you can only perform a Fisher's exact test on a
2x2 table, and these results are presented by default. Please see the
results from the chi-square example above.
63
Fisher's exact test
A simple web search should reveal specific tools developed for different size tables.
64
Fisher's exact test
For larger examples you might try
Algorithm 643
66
One-way ANOVA
67
One-way ANOVA
68
One-way ANOVA
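The ANOVA table below can be obtained from the menus (Analyze > Compare Means > One-Way ANOVA); a syntax sketch (an assumption, not shown in the source) is:
Syntax:- oneway write by prog.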
ANOVA: writing score
                 Sum of Squares   df    Mean Square   F        Sig.
Between Groups   3175.698         2     1587.849      21.275   .000
Within Groups    14703.177        197   74.635
Total            17878.875        199
69
One-way ANOVA
To see the mean of write for each level of program type, use the Means procedure (Analyze > Compare Means > Means); a syntax sketch follows.
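Syntax (a sketch, not shown in the source):- means tables = write by prog.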
70
One-way ANOVA
71
One-way ANOVA
72
One-way ANOVA
Case Processing Summary
writing score * type of program: Included 200 (100.0%), Excluded 0 (.0%), Total 200 (100.0%)

Report: writing score
type of program   Mean      N     Std. Deviation
general           51.3333   45    9.39778
academic          56.2571   105   7.94334
vocation          46.7600   50    9.31875
Total             52.7750   200   9.47859
From this we can see that the students in the academic program have
the highest mean writing score, while students in the vocational
program have the lowest. For a more detailed analysis refer to
Bonferroni for pairwise comparisons .
73
One-way ANOVA
For η² we need to run a general linear model, an alternative approach.
74
One-way ANOVA
75
One-way ANOVA
76
One-way ANOVA
An APA Results Section
Index End 77
Kruskal Wallis test
The Kruskal Wallis test is used when you have one independent
variable with two or more levels and an ordinal dependent variable.
In other words, it is the non-parametric version of ANOVA and a
generalized form of the Mann-Whitney test method since it permits
two or more groups. We will use the same data file as the one way
ANOVA example above (the A data file) and the same variables as in
the example above, but we will not assume that write is a normally
distributed interval variable.
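A syntax sketch for this test (an assumption; the source proceeds via the dialogs) is:
Syntax:- npar tests
/k-w = write by prog(1,3).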
Kruskal Wallis test
Ranks
writing score   type of program   N     Mean Rank
                general           45    90.64
                academic          105   121.56
                vocation          50    65.14
                Total             200

Test Statistics (Kruskal Wallis Test; grouping variable: type of program)
writing score: Chi-Square 34.045; df 2; Asymp. Sig. .000

If some of the scores receive tied ranks, then a correction factor is used, yielding a slightly different value of chi-squared. With or without ties, the results indicate that there is a statistically significant difference (p < .0005) among the three types of program.
82
Kruskal Wallis test
An APA Results Section
Index End
83
Paired t-test
A paired (samples) t-test is used when you have two related
observations (i.e., two observations per subject) and you want to see if
the means on these two normally distributed interval variables differ
from one another. For example, using the A data file we will test
whether the mean of read is equal to the mean of write.
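A syntax sketch (an assumption, not shown in the source) is:
Syntax:- t-test pairs = read with write (paired).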
Paired t-test
An APA Results Section
A paired-samples t test was conducted to evaluate whether reading and writing scores were related. The results indicated that the mean score for writing (M = 52.78, SD = 9.48) was not significantly greater than the mean score for reading (M = 52.23, SD = 10.25), t(199) = -.87, p = .39. The standardized effect size index, d, was .06. The 95% confidence interval for the mean difference between the two ratings was -1.78 to .69.

Recall d = t / √N.
Index End
88
Wilcoxon signed rank sum
test
The Wilcoxon signed rank sum test is the non-parametric version of a
paired samples t-test. You use the Wilcoxon signed rank sum test when
you do not wish to assume that the difference between the two
variables is interval and normally distributed (but you do assume the
difference is ordinal). We will use the same example as above, but we will
not assume that the difference between read and write is interval and
normally distributed.
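A syntax sketch (an assumption, not shown in the source) is:
Syntax:- npar tests
/wilcoxon = write with read (paired).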
90
Wilcoxon signed rank sum
test
Select Wilcoxon
91
Wilcoxon signed rank sum
test
Ranks
reading score - writing score    N     Mean Rank   Sum of Ranks
Negative Ranksa                  97    95.47       9261.00
Positive Ranksb                  88    90.27       7944.00
Tiesc                            15
Total                            200
a. reading score < writing score
b. reading score > writing score
c. reading score = writing score

Test Statistics (Wilcoxon Signed Ranks Test)
reading score - writing score: Z -.903 (based on positive ranks); Asymp. Sig. (2-tailed) .366

The results suggest that there is not a statistically significant difference (p = 0.366) between read and write.
Index End
92
Sign test
If you believe the differences between read and write were not ordinal but could merely be classified as positive and negative, then you may want to consider a sign test in lieu of a signed rank test. The sign test answers the question "How often?", whereas other tests answer the question "How much?". Again, we will use the same variables in this example and assume that this difference is not ordinal.
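A syntax sketch (an assumption, not shown in the source) is:
Syntax:- npar tests
/sign = read with write (paired).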
93
Sign test
94
Sign test
Select Sign
95
Sign test
Frequencies
writing score - reading score    N
Negative Differencesa            88
Positive Differencesb            97
Tiesc                            15
Total                            200
a. writing score < reading score
b. writing score > reading score
c. writing score = reading score

Test Statistics (Sign Test)
writing score - reading score: Z -.588; Asymp. Sig. (2-tailed) .556

We conclude that no statistically significant difference was found (p = 0.556).
96
Sign test
An APA Results Section
A Wilcoxon signed ranks test was conducted to evaluate whether reading and writing scores differed. The results indicated a non-significant difference, z = -.59, p = .56. The mean of the negative ranks was 95.47 and that of the positive ranks was 90.27.
Index End
97
McNemar test
McNemar's test is a statistical test used on paired nominal data. It is applied to 2×2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is marginal homogeneity). For k groups use Cochran's Q test.
You would perform McNemar's test if you were interested in the marginal
frequencies of two binary outcomes. These binary outcomes may be the same
outcome variable on matched pairs (like a case-control study) or two outcome
variables from a single group. Continuing with the A dataset used in several
above examples, let us create two binary outcomes in our dataset: himath and
hiread. These outcomes can be considered in a two-way contingency table.
The null hypothesis is that the proportion of students in the himath group is
the same as the proportion of students in hiread group (i.e., that the
contingency table is symmetric).
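The slides do not show how himath and hiread are created; a sketch, assuming a cut point of 60 on each score (the cut point is an assumption, not stated in the source), is:
Syntax:- compute himath = (math >= 60).
compute hiread = (read >= 60).
execute.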
CROSSTABS
/TABLES=himath BY hiread
/STATISTICS=MCNEMAR
/CELLS=COUNT.
Select McNemar
McNemar test
Case Processing Summary
himath * hiread: Valid 200 (100.0%), Missing 0 (.0%), Total 200 (100.0%)

himath * hiread Crosstabulation (Count)
               hiread .00   hiread 1.00   Total
himath .00     135          21            156
himath 1.00    18           26            44
Total          153          47            200

Chi-Square Tests
McNemar Test: Exact Sig. (2-sided) .749a
N of Valid Cases: 200
a. Binomial distribution used.

McNemar's chi-square statistic suggests that there is not a statistically significant difference in the proportion of students in the himath group and the proportion of students in the hiread group.

Alternately, accessing the command directly:
105
McNemar test
Menu selection:- Analyze > Nonparametric Tests > Legacy Dialogs
> 2 Related Samples
106
McNemar test
107
McNemar test
108
McNemar test
McNemar's chi-square statistic suggests that there is not a statistically significant difference in the proportion of students in the himath group and the proportion of students in the hiread group.
109
McNemar test
An APA Results Section
Proportions of students scoring high in math and reading were .22 and .24, respectively. A McNemar test, which evaluates differences among related proportions, was not significant, χ²(1, N = 200) = .10, p = .75.
Index End
110
Cochrans Q test
In the analysis of two-way randomized block designs where the response
variable can take only two possible outcomes (coded as 0 and 1), Cochran's Q
test is a non-parametric statistical test to verify whether k treatments have
identical effects. Your data for the k groups came from the same participants
(i.e. the data were paired).
You would perform Cochran's Q test if you were interested in the marginal frequencies of three or more binary outcomes. Continuing with the A dataset used in several examples above, let us create three binary outcomes in our dataset: himath, hiread and hiwrite. The null hypothesis is that the proportion of students in each group is the same.
NPAR TESTS
/COCHRAN=himath hiread hiwrite
/MISSING LISTWISE
/METHOD=EXACT TIMER(5).
112
Cochrans Q test
First transform
113
Cochrans Q test
115
Cochrans Q test
116
Cochrans Q test
117
Cochrans Q test
Necessary for summary
118
Cochrans Q test
An APA Results Section
Proportions of students scoring high in math, reading and writing were .22, .24, and .25, respectively. A Cochran test, which evaluates differences among related proportions, was not significant, χ²(2, N = 200) = .60, p = .74. The Kendall coefficient of concordance was .002.
Index End
119
Cochrans Q test
When you find a significant effect, you need to do a post-hoc test (as you do for ANOVA). For Cochran's Q test: run multiple McNemar's tests and adjust the p values with a Bonferroni correction (a method used to address the problem of multiple comparisons; it over-corrects for Type I error).
Index End
120
About the B data file
We have an example data set called B, which is used in Roger E.
Kirk's book Experimental Design: Procedures for Behavioral
Sciences (Psychology) (ISBN 0534250920).
121
About the B data file
Index End
122
One-way repeated
measures ANOVA
You would perform a one-way repeated measures analysis of variance if you
had one categorical independent variable and a normally distributed interval
dependent variable that was repeated at least twice for each subject. This
is the equivalent of the paired samples t-test, but allows for two or more
levels of the categorical variable. This tests whether the mean of the
dependent variable differs by the categorical variable. In data set B, y (y1
y2 y3 y4) is the dependent variable, a is the repeated measure (a name you
assign) and s is the variable that indicates the subject number.
Menu selection:- Analyze > General Linear Model > Repeated Measures
Syntax:- glm y1 y2 y3 y4
/wsfactor a(4).
123
One-way repeated
measures ANOVA
124
One-way repeated
measures ANOVA
125
One-way repeated
measures ANOVA
126
One-way repeated
measures ANOVA
Finally
127
One-way repeated
measures ANOVA
Loads of output!!

Within-Subjects Factors (Measure: MEASURE_1)
a   Dependent Variable
1   y1
2   y2
3   y3
4   y4

Tests of Within-Subjects Effects (Measure: MEASURE_1)
Source                           Type III Sum of Squares   df       Mean Square   F        Sig.
a         Sphericity Assumed     49.000                    3        16.333        11.627   .000
          Greenhouse-Geisser     49.000                    1.859    26.365        11.627   .001
          Huynh-Feldt            49.000                    2.503    19.578        11.627   .000
          Lower-bound            49.000                    1.000    49.000        11.627   .011
Error(a)  Sphericity Assumed     29.500                    21       1.405
          Greenhouse-Geisser     29.500                    13.010   2.268
          Huynh-Feldt            29.500                    17.520   1.684
          Lower-bound            29.500                    7.000    4.214

Multivariate Tests (b. Design: Intercept; Within Subjects Design: a)
Effect a   Pillai's Trace       .754    F 5.114a   Hypothesis df 3.000   Error df 5.000   Sig. .055
           Wilks' Lambda        .246    F 5.114a   Hypothesis df 3.000   Error df 5.000   Sig. .055
           Hotelling's Trace    3.068   F 5.114a   Hypothesis df 3.000   Error df 5.000   Sig. .055
           Roy's Largest Root   3.068   F 5.114a   Hypothesis df 3.000   Error df 5.000   Sig. .055
a. Exact statistic

Tests of Within-Subjects Contrasts (Measure: MEASURE_1)
Source              Type III Sum of Squares   df   Mean Square   F        Sig.
a         Linear      44.100                  1    44.100        19.294   .003
          Quadratic   4.500                   1    4.500         3.150    .119
          Cubic       .400                    1    .400          .800     .401
Error(a)  Linear      16.000                  7    2.286
          Quadratic   10.000                  7    1.429
          Cubic       3.500                   7    .500

Mauchly's Test of Sphericity (Measure: MEASURE_1)
Within Subjects Effect a: Mauchly's W .339, Approx. Chi-Square 6.187, df 5, Sig. .295
129
One-way repeated
measures ANOVA
An APA Results Section
A one-way within-subjects ANOVA was conducted with the factor being the number of years married and the dependent variable being y. The means and standard deviations for scores are presented above. The results for the ANOVA indicated a significant time effect, Wilks's Λ = .25, F(3, 21) = 11.63, p < .0005, multivariate η² = .75 (1 - Λ).
Index End
130
Bonferroni for pairwise
comparisons
This is a minor extension of the
previous analysis.
Menu selection:-
Analyze
> General Linear Model
> Repeated Measures
Syntax:-
GLM y1 y2 y3 y4
/WSFACTOR=a 4 Polynomial
/METHOD=SSTYPE(3)
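The syntax above appears truncated on the slide. A plausible continuation that requests the Bonferroni-adjusted pairwise comparisons (an assumption, not shown in the source) would be:
/EMMEANS=TABLES(a) COMPARE ADJ(BONFERRONI)
/CRITERIA=ALPHA(.05)
/WSDESIGN=a.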
132
Bonferroni for pairwise
comparisons
133
Bonferroni for pairwise
comparisons
The results in the earlier Tests of Within-Subjects Effects table (for example the Huynh-Feldt row, p < .0005) tell us that there is an overall significant difference in means, but they do not tell us where those differences occurred.
135
Bonferroni for pairwise
comparisons
The table provides four variants of the F test. Wilks' lambda is the most commonly reported. Usually the same substantive conclusion emerges from any variant. For these data, we conclude that none of the effects is significant (p = 0.055). See the next slide.
136
Bonferroni for pairwise
comparisons
Wilks' lambda is the easiest to understand and therefore the most frequently used. It has a good balance between power and assumptions. Wilks' lambda can be interpreted as the multivariate counterpart of a univariate R-squared; that is, it indicates the proportion of generalized variance in the dependent variables that is accounted for by the predictors.
Correct Use of Repeated Measures Analysis of Variance. E. Park, M. Cho and C.-S. Ki. Korean J. Lab. Med. 2009 29 1-9.
Information Point: Wilks' lambda. Nicola Crichton. Journal of Clinical Nursing, 9, 381, 2000.
138
About the C data file
140
Repeated measures logistic
regression
While the drop down menus can be employed to set the arguments it is
simpler to employ the syntax window.
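A syntax sketch consistent with the GEE output that follows (binomial distribution, logit link, subject variable id, exchangeable working correlation; the exact command is an assumption, not shown in the source) is:
Syntax:- GENLIN highpulse (REFERENCE=LAST) BY diet
/MODEL diet DISTRIBUTION=BINOMIAL LINK=LOGIT
/REPEATED SUBJECT=id CORRTYPE=EXCHANGEABLE.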
For completeness the drop down menu saga is shown, some 9 slides!
Repeated measures logistic
regression
Loads of output!!

Model Information
Dependent Variable: highpulse (the procedure models .00 as the response, treating 1.00 as the reference category)
Probability Distribution: Binomial
Link Function: Logit
Subject Effect: id
Working Correlation Matrix Structure: Exchangeable

Case Processing Summary
Included 90 (100.0%), Excluded 0 (.0%), Total 90 (100.0%)

Correlated Data Summary
Number of Levels (Subject Effect id): 30
Number of Subjects: 30
Number of Measurements per Subject: Minimum 3, Maximum 3
Correlation Matrix Dimension: 3

Categorical Variable Information
Dependent Variable highpulse: .00 63 (70.0%), 1.00 27 (30.0%), Total 90 (100.0%)
Factor diet: 2.00 45 (50.0%), 1.00 45 (50.0%), Total 90 (100.0%)

Goodness of Fit (Dependent Variable: highpulse; Model: (Intercept), diet)
Quasi Likelihood under Independence Model Criterion (QIC): 113.986
Corrected Quasi Likelihood under Independence Model Criterion (QICC): 111.340
(Computed using the full log quasi-likelihood function; information criteria are in small-is-better form.)

Tests of Model Effects (Type III)
Source        Wald Chi-Square   df   Sig.
(Intercept)   8.437             1    .004
diet          1.562             1    .211

Parameter Estimates (95% Wald Confidence Interval)
Parameter     B        Std. Error   Lower    Upper
(Intercept)   1.253    .4328        .404     2.101
[diet=2.00]   -.754    .6031        -1.936   .428
[diet=1.00]   0a       .            .        .
(Scale)       1
151
Repeated measures logistic
regression
Parameter Estimates (Hypothesis Test)
Parameter     Wald Chi-Square   df   Sig.
(Intercept)   8.377             1    .004
[diet=2.00]   1.562             1    .211
[diet=1.00]   .                 .    .
(Scale)
Dependent Variable: highpulse; Model: (Intercept), diet
a. Set to zero because this parameter is redundant.
Index End
152
Factorial ANOVA
A factorial ANOVA has two or more categorical independent variables
(either with or without the interactions) and a single normally distributed
interval dependent variable. For example, using the A data file we will look
at writing scores (write) as the dependent variable and gender (female)
and socio-economic status (ses) as independent variables, and we will
include an interaction of female by ses. Note that in SPSS, you do not
need to have the interaction term(s) in your data set. Rather, you can
have SPSS create it/them temporarily by placing an asterisk between the
variables that will make up the interaction term(s). For the approach
adopted here, this step is automatic. However see the syntax example
below.
153
Factorial ANOVA
Alternate
Syntax:- UNIANOVA write BY female ses
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/CRITERIA=ALPHA(0.05)
/DESIGN=female ses female*ses.
154
Factorial ANOVA
155
Factorial ANOVA
156
Factorial ANOVA
These results indicate that the overall model is statistically significant (F = 5.666, p < 0.0005). The variables female and ses are also statistically significant (F = 16.595, p < 0.0005 and F = 6.611, p = 0.002, respectively). However, note that the interaction between female and ses is not statistically significant (F = 0.133, p = 0.875).
Index End
157
Friedman test
You perform a Friedman test when you have one within-subjects
independent variable with two or more levels and a dependent variable
that is not interval and normally distributed (but at least ordinal). We will
use this test to determine if there is a difference in the reading, writing
and math scores. The null hypothesis in this test is that the distribution
of the ranks of each type of score (i.e., reading, writing and math) are the
same. To conduct a Friedman test, the data need to be in a long format (
see the next topic).
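A syntax sketch (an assumption, not shown in the source) is:
Syntax:- npar tests
/friedman = read write math.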
158
Friedman test
159
Friedman test
160
Friedman test
Ranks
                        Mean Rank
reading score           1.96
writing score           2.04
math score              2.01

Test Statistics (Friedman Test)
N 200; Chi-Square .645; df 2; Asymp. Sig. .724

Friedman's chi-square has a value of 0.645 and a p-value of 0.724 and is not statistically significant. Hence, there is no evidence that the distributions of the three types of scores are different.
Index End
161
Reshaping data
This example illustrates a wide data file and reshapes it into long form.
Consider the data containing the kids and their heights at one year of age
(ht1) and at two years of age (ht2).
FAMID BIRTH HT1 HT2
This is called a wide format since the heights are wide. We may want the
data to be long, where each height is in a separate observation.
162
Reshaping data
FAMID   BIRTH   AGE    HT
1.00    1.00    1.00   2.80
1.00    1.00    2.00   3.40
1.00    2.00    1.00   2.90
1.00    2.00    2.00   3.80
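One way to perform this reshape in SPSS is the VARSTOCASES command; a sketch (an assumption, not shown in the source) is:
Syntax:- varstocases
/make ht from ht1 ht2
/index = age(2)
/keep = famid birth.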
Index End
163
Ordered logistic regression
Ordered logistic regression is used when the dependent variable is
ordered, but not continuous. For example, using the A data file we will
create an ordered variable called write3. This variable will have the
values 1, 2 and 3, indicating a low, medium or high writing score. We do
not generally recommend categorizing a continuous variable in this way;
we are simply creating a variable to use for this example.
Ordered logistic regression
We will use gender (female), reading score (read) and social studies score
(socst) as predictor variables in this model. We will use a logit link and on
the print subcommand we have requested the parameter estimates, the
(model) summary statistics and the test of the parallel lines assumption.
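A syntax sketch matching this description (assuming write3 has already been created; the command itself is not reproduced in the source) is:
Syntax:- plum write3 with female read socst
/link = logit
/print = parameter summary tparallel.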
170
Ordered logistic regression
171
Ordered logistic regression
172
Ordered logistic regression
173
Ordered logistic regression
The results indicate that the overall model is statistically
significant (p < .0005), as are each of the predictor
variables (p < .0005). There are two thresholds for this
model because there are three levels of the outcome
variable. We also see that the test of the proportional
odds assumption is non-significant (p= 0.563). One of the
assumptions underlying ordinal logistic (and ordinal probit)
regression is that the relationship between each pair of
outcome groups is the same. In other words, ordinal logistic
regression assumes that the coefficients that describe the
relationship between, say, the lowest versus all higher
categories of the response variable are the same as those
that describe the relationship between the next lowest
category and all higher categories, etc. This is called the
proportional odds assumption or the parallel regression
assumption. Because the relationship between all pairs of
groups is the same, there is only one set of coefficients
(only one model). If this was not the case, we would need
different models (such as a generalized ordered logit
model) to describe the relationship between each pair of
outcome groups.
Index End
174
Factorial logistic regression
A factorial logistic regression is used when you have two or more categorical
independent variables but a dichotomous dependent variable. For example,
using the A data file we will use female as our dependent variable, because it
is the only dichotomous variable in our data set; certainly not because it is
common practice to use gender as an outcome variable. We will use type of
program (prog) and school type (schtyp) as our predictor variables. Because
prog is a categorical variable (it has three levels), we need to create dummy
codes for it. SPSS will do this for you by making dummy codes for all
variables listed after the keyword with. SPSS will also create the interaction
term; simply list the two variables that will make up the interaction
separated by the keyword by.
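Following the description above, a syntax sketch (an assumption, not shown in the source) is:
Syntax:- logistic regression female with prog schtyp prog by schtyp.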
176
Factorial logistic regression
177
Factorial logistic regression
Use Ctrl with left mouse key to select two variables then >a*b> for the
product term.
178
Factorial logistic regression
179
Factorial logistic regression
180
Factorial logistic regression
Loads of output!!
181
Factorial logistic regression
Index End
182
Correlation
A correlation (Pearson correlation) is useful when you want to see the
relationship between two (or more) normally distributed interval variables.
For example, using the A data file we can run a correlation between two
continuous variables, read and write.
Syntax:- correlations
/variables = read write.
183
Correlation
184
Correlation
Select Pearson
185
Correlation
In the first example above, we see that the correlation between read
and write is 0.597. By squaring the correlation and then multiplying by
100, you can determine what percentage of the variability is shared,
0.597 when squared is .356409, multiplied by 100 would be 36%. Hence
read shares about 36% of its variability with write.
186
Correlation
As a rule of thumb use the following guide for the absolute value of correlation (r):
.00-.19 very weak
.20-.39 weak
.40-.59 moderate
.60-.79 strong
.80-1.0 very strong
Syntax:- correlations
/variables = female write.
188
Correlation
In the output for the second example, we can see the correlation between write and female is 0.256. Squaring this number yields .065536, meaning that female shares approximately 6.5% of its variability with write.
189
Correlation
An APA Results Section For
A Table Of Correlations
Correlation coefficients were computed among the five self-concept
scales. Using the Bonferroni approach to control for Type I error
across the 10 correlations, a p value of less than .005 (.05/10 = .005)
was required for significance. The results of the correlational analyses
presented in the table show that 7 out of the 10 correlations were
statistically significant and were greater than or equal to .35 (the
critical value at p = .005 and N-2 degrees of freedom).
Index End
190
Simple linear regression
Simple linear regression allows us to look at the linear relationship
between one normally distributed interval predictor and one normally
distributed interval outcome variable. For example, using the A data file,
say we wish to look at the relationship between writing scores (write) and
reading scores (read); in other words, predicting write from read.
Syntax:- regression
/missing listwise
/statistics coeff outs ci(95) r anova
/criteria=pin(.05) pout(.10)
/noorigin
/dependent write
/method=enter read.
192
Simple linear regression
193
Simple linear regression
194
Simple linear regression
We see that the relationship between write and read is positive (.552) and based on the t-value (10.47) and p-value (<0.0005), we would conclude this relationship is statistically significant. Hence, we would say there is a statistically significant positive linear relationship between reading and writing.
195
Simple linear regression
An APA Results Section
The 95% confidence interval for the slope, .448 to .656, does not contain the value of zero, and therefore reading score is significantly related to writing score. The correlation between the reading and writing scores was .60. Approximately 36% of the variance of the writing score was accounted for by its linear relationship with the reading score.
Index End
Non-parametric correlation
A Spearman correlation is used when one or both of the variables are
not assumed to be normally distributed and interval (but are assumed to
be ordinal). The values of the variables are converted to ranks and then
correlated. In our example, we will look for a relationship between read
and write. We will not assume that both of these variables are normal
and interval.
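A syntax sketch (an assumption, not shown in the source) is:
Syntax:- nonpar corr
/variables = read write
/print = spearman.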
197
Non-parametric correlation
198
Non-parametric correlation
Select Spearman
199
Non-parametric correlation
The results suggest that the relationship between read and write (ρ = 0.617, p < 0.0005) is statistically significant.
200
Non-parametric correlation
Spearman's correlation works by calculating Pearson's correlation on the ranked values of the data. Ranking (from low to high) is obtained by assigning a rank of 1 to the lowest value, 2 to the next lowest and so on. Thus the p value is only correct if there are no ties in the data. In the event that ties occur an exact calculation should be employed. SPSS does not do this; however, the estimated value is usually reliable enough.
Index End
201
Simple logistic regression
Logistic regression assumes that the outcome variable is binary (i.e.,
coded as 0 and 1). We have only one variable in the A data file that is
coded 0 and 1, and that is female. We understand that female is a silly
outcome variable (it would make more sense to use it as a predictor
variable), but we can use female as the outcome variable to illustrate
how the code for this command is structured and how to interpret the
output. The first variable listed after the logistic command is the
outcome (or dependent) variable, and all of the rest of the variables
are predictor (or independent) variables. In our example, female will be
the outcome variable, and read will be the predictor variable. As with
ordinary least squares regression, the predictor variables must be
either dichotomous or continuous; they cannot be categorical.
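A syntax sketch matching this description (an assumption, not shown in the source) is:
Syntax:- logistic regression female with read.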
203
Simple logistic regression
204
Simple logistic regression
Loads of output!!
205
Simple logistic regression
The results indicate that reading score (read) is not a statistically significant predictor of gender (i.e., being female), Wald = 0.562, p = 0.453. Likewise, the test of the overall model is not statistically significant, likelihood ratio chi-squared = 0.564, p = 0.453.
Index End
206
Multiple regression
Multiple regression is very similar to simple regression, except that in
multiple regression you have more than one predictor variable in the
equation. For example, using the A data file we will predict writing score
from gender (female), reading, math, science and social studies (socst)
scores.
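A syntax sketch, mirroring the simple regression syntax used earlier (an assumption, not shown in the source):
Syntax:- regression
/missing listwise
/statistics coeff outs ci(95) r anova
/dependent write
/method = enter female read math science socst.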
207
Multiple regression
208
Multiple regression
209
Multiple regression
Variables Entered/Removed
Model 1: Variables Entered: social studies score, female, science score, ... (Method: Enter)

Coefficients (Dependent Variable: writing score)
                        Unstandardized B   Std. Error   Standardized Beta   t       Sig.
(Constant)              6.139              2.808                            2.186   .030
female                  5.493              .875         .289                6.274   .000
reading score           .125               .065         .136                1.931   .055
math score              .238               .067         .235                3.547   .000
science score           .242               .061         .253                3.986   .000
social studies score    .229               .053         .260                4.339   .000
210
Multiple regression
An APA Results Section
A multiple regression analysis was conducted to evaluate how well gender, reading, math, science and social studies scores predicted writing score. The predictors were these five indices, while the criterion variable was the writing score. The linear combination of measures was significantly related to the score, F(5, 194) = 58.6, p < .0005. The sample multiple correlation coefficient was .78, indicating that approximately 60% of the variance of the writing score in the sample can be accounted for by the linear combination of the other measures. The table presents indices to indicate the relative strength of the variables.
211
Multiple regression -
Alternatives
There are problems with stepwise model selection procedures, these notes
are a health warning.
Various algorithms have been developed for aiding in model selection. Many
of them are automatic, in the sense that they have a stopping rule
(which it might be possible for the researcher to set or change from a
default value) based on criteria such as value of a t-statistic or an F-
statistic. Others might be better termed semi-automatic, in the sense
that they automatically list various options and values of measures that
might be used to help evaluate them.
Caution: Different regression software may use the same name (e.g., Forward Selection or Backward Elimination) to designate different algorithms. Be sure to read the documentation to find out just what the algorithm does in the software you are using; in particular, whether it has a stopping rule or is of the semi-automatic variety.
212
Multiple regression -
Alternatives
The reasons for not using a stepwise procedure are as follows.
213
Multiple regression -
Alternatives
Stepwise regressions are nevertheless important for three reasons.
First, to emphasise that there is a considerable problem in choosing a
model out of so many, so considerable that a variety of automated
procedures have been devised to help. Second to show that while
purely statistical methods of choice can be constructed, they are
unsatisfactory. And third, because they are fairly popular ways of
avoiding constructive thinking about model selection, you may well come
across them. You should know that they exist and roughly how they
work.
Good P.I. and Hardin J.W., Common Errors in Statistics (and How
to Avoid Them), 4th Edition, Wiley, 2012, p. 3.
Good P.I. and Hardin J.W., Common Errors in Statistics (and How
to Avoid Them), 4th Edition, Wiley, 2012, p. 152.
215
Multiple regression -
Alternatives
We do not recommend such stopping rules for routine use since
they can reject perfectly reasonable sub-models from further
consideration. Stepwise procedures are easy to explain,
inexpensive to compute, and widely used. The comparative
simplicity of the results from stepwise regression with model
selection rules appeals to many analysts. But, such algorithmic
model selection methods must be used with caution.
216
Multiple regression -
Alternatives
Stopping stepwise: Why stepwise and similar selection methods are bad, and what you should use.
Pitt M.A., Myung I.J. and Zhang S. 2002. Toward a method for selecting among computational models for cognition. Psychol. Rev. 109 472-491 DOI: 10.1037/0033-295X.109.3.472
218
Multiple regression -
Alternatives
What strategies might we adopt?
219
Multiple regression -
Alternatives
Many definitions of heuristics exist. Shah and Oppenheimer (2008) proposed that all heuristics rely on effort reduction by one or more of the following:
Budescu, D. V. 1993 Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114, 542-551 DOI: 10.1037/0033-2909.114.3.542.
Johnson, J. W. 2000 A heuristic method for estimating the relative weight of predictor variables in multiple regression. Multivariate Behavioral Research, 35, 1-19 DOI: 10.1207/S15327906MBR3501_1.
LeBreton, J. M., Ployhart, R. E. and Ladd, R. T. 2004 A Monte Carlo comparison of relative importance methodologies. Organizational Research Methods, 7, 258-282 DOI: 10.1177/1094428104266017.
Tonidandel, S., and LeBreton, J. M. 2011 Relative importance analyses: A useful supplement to multiple regression analyses. Journal of Business and Psychology, 26, 1-9 DOI: 10.1007/s10869-010-9204-3.
222
Analysis of covariance
223
Analysis of covariance
224
Analysis of covariance
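The output below comes from the GLM procedure with write as the dependent variable, prog as the factor and read as the covariate; a syntax sketch (an assumption, not shown in the source) is:
Syntax:- glm write by prog with read.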
Between-Subjects Factors
type of program: 1.00 general N = 45; 2.00 academic N = 105; 3.00 vocation N = 50

Tests of Between-Subjects Effects (Dependent Variable: writing score)
Source            Type III Sum of Squares   df    Mean Square   F        Sig.
Corrected Model   7017.681a                 3     2339.227      42.213   .000
Intercept         4867.964                  1     4867.964      87.847   .000
read              3841.983                  1     3841.983      69.332   .000
prog              650.260                   2     325.130       5.867    .003
Error             10861.194                 196   55.414
Total             574919.000                200
Corrected Total   17878.875                 199
a. R Squared = .393 (Adjusted R Squared = .383)

The results indicate that even after adjusting for the reading score (read), the writing scores still differ significantly by program type (prog), F = 5.867, p = 0.003.
Index End
225
Multiple logistic regression
Multiple logistic regression is like simple logistic regression, except that
there are two or more predictors. The predictors can be interval variables or
dummy variables, but cannot be categorical variables. If you have categorical
predictors, they should be coded into one or more dummy variables. We have
only one variable in our data set that is coded 0 and 1, and that is female. We
understand that female is a silly outcome variable (it would make more sense
to use it as a predictor variable), but we can use female as the outcome
variable to illustrate how the code for this command is structured and how
to interpret the output. The first variable listed after the logistic
regression command is the outcome (or dependent) variable, and all of the
rest of the variables are predictor (or independent) variables (listed after
the keyword with). In our example, female will be the outcome variable, and
read and write will be the predictor variables.
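A syntax sketch matching this description (an assumption, not shown in the source) is:
Syntax:- logistic regression female with read write.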
227
Multiple logistic regression
228
Multiple logistic regression
Loads of output!!

Case Processing Summary (Unweighted Cases)
Selected Cases: Included in Analysis 200 (100.0), Missing Cases 0 (.0), Total 200 (100.0); Unselected Cases 0 (.0); Total 200 (100.0)
a. If weight is in effect, see classification table for the total number of cases.

Dependent Variable Encoding
male 0; female 1

Block 0 Classification Table (a. Constant is included in the model; b. The cut value is .500)
Observed              Predicted male   Predicted female   Percentage Correct
Step 0   male         0                91                 .0
         female       0                109                100.0
         Overall Percentage                               54.5

Variables in the Equation
Step 0   Constant   B .180   S.E. .142   Wald 1.616   df 1   Sig. .204   Exp(B) 1.198

Block 1: Method = Enter
Omnibus Tests of Model Coefficients
Step 1: Step Chi-square 27.819, df 2, Sig. .000; Block 27.819, 2, .000; Model 27.819, 2, .000

Model Summary
Step 1: -2 Log likelihood 247.818a; Cox & Snell R Square .130; Nagelkerke R Square .174
a. Estimation terminated at iteration number 4 because parameter estimates changed by less than .001.

Classification Table (a. The cut value is .500)
Observed              Predicted male   Predicted female   Percentage Correct
Step 1   male         54               37                 59.3
         female       30               79                 72.5
         Overall Percentage                               66.5

Variables in the Equation
Step 1a   read       B -.071    S.E. .020   Wald 13.125   df 1   Sig. .000   Exp(B) .931
          write      B .106     S.E. .022   Wald 23.075   df 1   Sig. .000   Exp(B) 1.112
          Constant   B -1.706   S.E. .923   Wald 3.414    df 1   Sig. .065   Exp(B) .182
a. Variable(s) entered on step 1: read, write.
229
Multiple logistic regression
Variables in the Equation
Step 1a   read       B -.071    S.E. .020   Wald 13.125   df 1   Sig. .000   Exp(B) .931
          write      B .106     S.E. .022   Wald 23.075   df 1   Sig. .000   Exp(B) 1.112
          Constant   B -1.706   S.E. .923   Wald 3.414    df 1   Sig. .065   Exp(B) .182
a. Variable(s) entered on step 1: read, write.
Index End
230
Discriminant analysis
Discriminant analysis is used when you have one or more normally
distributed interval independent variable(s) and a categorical dependent
variable. It is a multivariate technique that considers the latent
dimensions in the independent variables for predicting group membership
in the categorical dependent variable. For example, using the A data file,
say we wish to use read, write and math scores to predict the type of
program a student belongs to (prog).
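A syntax sketch (an assumption, not shown in the source) is:
Syntax:- discriminant
/groups = prog(1,3)
/variables = read write math.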
231
Discriminant analysis
232
Discriminant analysis
233
Discriminant analysis
Loads of output!!

Analysis Case Processing Summary (Unweighted Cases)
Valid 200 (100.0); Excluded: missing or out-of-range group codes 0, at least one missing discriminating variable 0, both 0; Total 200 (100.0)

Wilks' Lambda
Test of Function(s)   Wilks' Lambda   Chi-square   df   Sig.
1 through 2           .734            60.619       6    .000
2                     .995            .888         2    .641

Standardized Canonical Discriminant Function Coefficients
                 Function 1   Function 2
reading score    .273         -.410
writing score    .331         1.183
math score       .582         -.656

Clearly, the SPSS output for this procedure is quite lengthy, and it is beyond the scope of this page to explain all of it. However, the main point is that two canonical variables are identified by the analysis, the first of which seems to be more related to program type than the second.
Index End
235
One-way MANOVA
MANOVA (multivariate analysis of variance) is like ANOVA, except that
there are two or more dependent variables. In a one-way MANOVA,
there is one categorical independent variable and two or more dependent
variables. For example, using the A data file, say we wish to examine the
differences in read, write and math broken down by program type (prog).
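A syntax sketch (an assumption, not shown in the source) is:
Syntax:- glm read write math by prog.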
236
One-way MANOVA
237
One-way MANOVA
238
One-way MANOVA
Between-Subjects Factors
type of program: 1.00 general N = 45; 2.00 academic N = 105; 3.00 vocation N = 50

Multivariate Tests (c. Design: Intercept + prog)
Effect                              Value    F           Hypothesis df   Error df   Sig.
Intercept   Pillai's Trace          .978     2883.051a   3.000           195.000    .000
            Wilks' Lambda           .022     2883.051a   3.000           195.000    .000
            Hotelling's Trace       44.355   2883.051a   3.000           195.000    .000
            Roy's Largest Root      44.355   2883.051a   3.000           195.000    .000
prog        Pillai's Trace          .267     10.075      6.000           392.000    .000
            Wilks' Lambda           .734     10.870a     6.000           390.000    .000
            Hotelling's Trace       .361     11.667      6.000           388.000    .000
            Roy's Largest Root      .356     23.277b     3.000           196.000    .000
a. Exact statistic
b. The statistic is an upper bound on F that yields a lower bound on the significance level.

Concentrate on the third table, Tests of Between-Subjects Effects (Type III Sum of Squares, df, Mean Square):
Intercept:        math score 453421.258, 1, 453421.258
prog:             reading score 3716.861, 2, 1858.431; writing score 3175.698, 2, 1587.849; math score 4002.104, 2, 2001.052
Error:            reading score 17202.559, 197, 87.323; writing score 14703.177, 197, 74.635; math score 13463.691, 197, 68.344
Total:            reading score 566514.000, 200; writing score 574919.000, 200; math score 571765.000, 200
Corrected Total:  reading score 20919.420, 199; writing score 17878.875, 199; math score 17465.795, 199
239
One-way MANOVA
Concluding output table.
241
Multivariate multiple
regression
242
Multivariate multiple
regression
243
Multivariate multiple
regression
Multivariate Tests
Effect                              Value   F         Hypothesis df   Error df   Sig.
Intercept   Pillai's Trace          .030    3.019a    2.000           194.000    .051
            Wilks' Lambda           .970    3.019a    2.000           194.000    .051
            Hotelling's Trace       .031    3.019a    2.000           194.000    .051
            Roy's Largest Root      .031    3.019a    2.000           194.000    .051
female      Pillai's Trace          .170    19.851a   2.000           194.000    .000
            Wilks' Lambda           .830    19.851a   2.000           194.000    .000
            Hotelling's Trace       .205    19.851a   2.000           194.000    .000
            Roy's Largest Root      .205    19.851a   2.000           194.000    .000
math        Pillai's Trace          .160    18.467a   2.000           194.000    .000
            Wilks' Lambda           .840    18.467a   2.000           194.000    .000
            Hotelling's Trace       .190    18.467a   2.000           194.000    .000
            Roy's Largest Root      .190    18.467a   2.000           194.000    .000
science     Pillai's Trace          .166    19.366a   2.000           194.000    .000
            Wilks' Lambda           .834    19.366a   2.000           194.000    .000
            Hotelling's Trace       .200    19.366a   2.000           194.000    .000
            Roy's Largest Root      .200    19.366a   2.000           194.000    .000
socst       Pillai's Trace          .221    27.466a   2.000           194.000    .000

Tests of Between-Subjects Effects (Type III Sum of Squares, df, Mean Square)
Corrected Model:  writing score 10620.092a, 4, 2655.023; reading score 12219.658b, 4, 3054.915
Intercept:        writing score 202.117, 1, 202.117; reading score 55.107, 1, 55.107
female:           writing score 1413.528, 1, 1413.528; reading score 12.605, 1, 12.605
math:             writing score 714.867, 1, 714.867; reading score 1025.673, 1, 1025.673
science:          writing score 857.882, 1, 857.882; reading score 946.955, 1, 946.955
socst:            writing score 1105.653, 1, 1105.653; reading score 1475.810, 1, 1475.810
Error:            writing score 7258.783, 195, 37.225; reading score 8699.762, 195, 44.614
Total:            writing score 574919.000, 200; reading score 566514.000, 200
244
Multivariate multiple
regression
Concluding table.
Tests of Between-Subjects Effects
Source            Dependent Variable   F        Sig.
Corrected Model   writing score        71.325   .000
                  reading score        68.474   .000
Intercept         writing score        5.430    .021
                  reading score        1.235    .268
female            writing score        37.973   .000
                  reading score        .283     .596
math              writing score        19.204   .000
                  reading score        22.990   .000
science           writing score        23.046   .000
                  reading score        21.225   .000
socst             writing score        29.702   .000
                  reading score        33.079   .000
a. R Squared = .594 (Adjusted R Squared = .586)
b. R Squared = .584 (Adjusted R Squared = .576)

These results show that all of the variables in the model have a statistically significant relationship with the joint distribution of write and read.
Index End
Canonical correlation
Canonical correlation is a multivariate technique used to examine the
relationship between two groups of variables. For each set of variables, it
creates latent variables and looks at the relationships among the latent
variables. It assumes that all variables in the model are interval and
normally distributed. SPSS requires that each of the two groups of
variables be separated by the keyword with. There need not be an equal
number of variables in the two groups (before and after the with). In this
case {read, write} with {math, science}.
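One route to canonical correlations in SPSS is the MANOVA procedure with a DISCRIM subcommand; a minimal syntax sketch for this example (the exact subcommands are an assumption, not taken from the slides) is:
manova read write with math science
 /discrim all alpha(1)
 /print=sig(eigen dim).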
Suppose you have given a group of students two tests of ten questions
each and wish to determine the overall correlation between these two
tests. Canonical correlation finds a weighted average of the questions
from the first test and correlates this with a weighted average of the
questions from the second test. The weights are constructed to maximize
the correlation between these two averages. This correlation is called the
first canonical correlation coefficient.
You can create another set of weighted averages unrelated to the first
and calculate their correlation. This correlation is the second canonical
correlation coefficient. This process continues until the number of
canonical correlations equals the number of variables in the smallest
group. 247
Canonical correlation
In statistics, canonical-correlation analysis is a way of making sense of cross-covariance matrices. If we have two vectors X = (X1, ..., Xn) and Y = (Y1, ..., Ym) of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of the Xi and Yj which have maximum correlation with each other (Härdle and Léopold 2007).
T. R. Knapp notes that virtually all of the commonly encountered parametric tests of significance can be treated as special cases of canonical-correlation analysis, which is the general procedure for investigating the relationships between two sets of variables. The method was first introduced by Harold Hotelling in 1936.
249
Canonical correlation
The output shows the linear combinations corresponding to the first canonical correlation. At the bottom of the output are the two canonical correlations. These results indicate that the first canonical correlation is .7728.
250
Canonical correlation
The F-test in this output tests the hypothesis that the first canonical correlation is not equal to zero. Clearly, F = 56.4706 is statistically significant. However, the second canonical correlation of .0235 is not statistically significantly different from zero (F = 0.1087, p = 0.742).
Index End 251
Factor analysis
Factor analysis is a form of exploratory multivariate analysis that is used either to reduce the number of variables in a model or to detect relationships among variables. All variables involved in the factor analysis need to be interval and are assumed to be normally distributed. The goal of the analysis is to try to identify factors which underlie the variables. There may be fewer factors than variables, but there may not be more factors than variables. For our example, let's suppose that there are some common factors underlying the various test scores. We will include subcommands for varimax rotation and a plot of the eigenvalues. We will use a principal components extraction and will retain two factors.
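A minimal syntax sketch matching that description (principal components extraction, two factors retained, varimax rotation, eigenvalue plot; the exact subcommand layout is assumed):
factor
 /variables read write math science socst
 /criteria factors(2)
 /extraction pc
 /rotation varimax
 /plot eigen.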
253
Factor analysis
254
Factor analysis
255
Factor analysis
256
Factor analysis
Communalities
                         Initial    Extraction
reading score            1.000      .736
writing score            1.000      .704
math score               1.000      .750
science score            1.000      .849
social studies score     1.000      .900
Extraction Method: Principal Component Analysis.

Communality (which is the opposite of uniqueness) is the proportion of variance of the variable (i.e., read) that is accounted for by all of the factors taken together; a very low communality can indicate that a variable may not belong with any of the factors.

Total Variance Explained
             Initial Eigenvalues                          Extraction Sums of Squared Loadings
Component    Total     % of Variance    Cumulative %      Total
1            3.381     67.616           67.616            3.381
2            .557      11.148           78.764            .557
3            .407      8.136            86.900
4            .356      7.123            94.023
5            .299      5.977            100.000
257
Factor analysis
The scree plot may be useful in determining how many factors to retain.
258
Factor analysis
Component Matrix(a)
                         Component
                         1          2
reading score            .858       -.020
writing score            .824       .155
math score               .844       -.195
science score            .801       -.456
social studies score     .783       .536
Extraction Method: Principal Component Analysis.
a. 2 components extracted.

From the component matrix table, we can see that all five of the test scores load onto the first factor, while all five tend to load not so heavily on the second factor. The purpose of rotating the factors is to get the variables to load either very high or very low on each factor. In this example, because all of the variables loaded onto factor 1 and not on factor 2, the rotation did not aid in the interpretation, leaving the results no easier to interpret.

Rotated Component Matrix(a)
                         Component
                         1          2
reading score            .650       .559
writing score            .508       .667
math score               .757       .421
Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a. Rotation converged in 3 iterations.
260
Normal probability
Following a thorough review, a compelling rationalization is offered for the widely noted robustness of the fixed-effects ANOVA to non-normality.
It has often been reported that violation of the normality assumption should be of little concern.
262
Normal probability plot
Tools for Assessing Normality include
Histogram and Boxplot
Normal Quantile Plot (also called Normal
Probability Plot)
264
Normal probability plot
Analyze
> Descriptive Statistics
> Explore
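A syntax sketch (the choice of the reading score from data file A is an assumption; the NPPLOT keyword requests normal Q-Q plots and the accompanying tests of normality):
examine variables=read
 /plot boxplot histogram npplot
 /statistics descriptives
 /missing listwise
 /nototal.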
265
Normal probability plot
266
Normal probability plot
267
Normal probability plot
269
Normal probability plot
271
Normal probability plot
272
Normal probability plot
Graphs
> Legacy Dialogs
> Histogram
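A syntax sketch (again assuming the reading score; the NORMAL keyword overlays a normal curve on the histogram):
graph
 /histogram(normal)=read.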
273
Normal probability plot
274
Normal probability plot
275
Normal probability plot
Graphs
> Legacy Dialogs
> Line
276
Normal probability plot
277
Normal probability plot
278
Normal probability plot
280
Normal probability plot
281
Index End
Skewness
The skewness is the third centralised normalised moment.
If skewness is positive, the data are positively skewed or skewed right, meaning
that the right tail of the distribution is longer than the left. If skewness is
negative, the data are negatively skewed or skewed left, meaning that the left tail
is longer.
If skewness = 0, the data are perfectly symmetrical. But a skewness of exactly zero
is quite unlikely for real-world data, so how can you interpret the skewness number?
Bulmer (1979) suggests this rule of thumb:
If skewness is less than -1 or greater than +1, the distribution is highly skewed.
If skewness is between -1 and -1/2 or between +1/2 and +1, the distribution is moderately skewed.
If skewness is between -1/2 and +1/2, the distribution is approximately symmetric.
282
Skewness
But what do I mean by "too much for random chance to be the explanation"? To answer that, you need to divide the sample skewness G1 by the standard error of skewness (SES) to get the test statistic, which measures how many standard errors separate the sample skewness from zero:
test statistic: ZG1 = G1 / SES, where SES = sqrt( 6n(n - 1) / ((n - 2)(n + 1)(n + 3)) )
If ZG1 < -2, the population is very likely skewed negatively (though you don't know by how much).
If ZG1 is between -2 and +2, you can't reach any conclusion about the skewness of the population: it might be symmetric, or it might be skewed in either direction.
If ZG1 > +2, the population is very likely skewed positively (though you don't know by how much).
The syntax is
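A minimal sketch, assuming the transformed variables listed in the table below (SPSS reports the skewness statistic and its standard error; the Ratio column is computed by hand as Statistic / Std. Error):
descriptives variables=x3 x2 x rt_x ln_x x_1 x_2 x_3
 /statistics=skewness.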
284
Skewness
Descriptive Statistics
             N      Skewness Statistic      Std. Error      Ratio
x3 90 2.208 0.254 8.69
x2 90 1.885 0.254 7.42
x 90 1.550 0.254 6.10
rt_x 90 1.380 0.254 5.43
ln_x 90 1.208 0.254 4.76
x_1 90 -0.864 0.254 -3.40
x_2 90 -0.526 0.254 -2.07
x_3 90 -0.203 0.254 -0.80
285
Index End
Kurtosis
The kurtosis is the fourth centralised normalised moment.
The question is similar to the question about skewness, and the answers
are similar too. You divide the sample excess kurtosis by the standard
error of kurtosis (SEK) to get the test statistic, which tells you how
many standard errors the sample excess kurtosis is from zero:
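With G2 denoting the sample excess kurtosis, the standard formulas are
SEK = 2 * SES * sqrt( (n^2 - 1) / ((n - 3)(n + 5)) )
test statistic: ZG2 = G2 / SEK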
286
Kurtosis
The critical value of ZG2 is approximately 2. (This is a two-tailed test of excess kurtosis ≠ 0 at approximately the 0.05 significance level.)
If ZG2 < -2, the population very likely has negative excess kurtosis (kurtosis < 3, platykurtic), though you don't know how much.
If ZG2 is between -2 and +2, you can't reach any conclusion about the kurtosis: excess kurtosis might be positive, negative, or zero.
If ZG2 > +2, the population very likely has positive excess kurtosis (kurtosis > 3, leptokurtic), though you don't know how much.
287
Kurtosis
The rules for determining the type of distribution based on skewness and kurtosis may, however, vary among statisticians. Evans (2007), for instance, suggested that a distribution with a skewness value greater than 1 or less than -1 could be considered highly skewed. Those with a skewness value between 0.5 and 1 or between -1 and -0.5 are said to have a moderately skewed distribution, whereas a value between -0.5 and 0.5 indicates relative symmetry. Brown (1997), on the other hand, proposed that practitioners make use of the standard error of skewness (SES) and standard error of kurtosis (SEK) in deciding whether the tested data could be assumed to come from a normal distribution. He suggested that the data could be assumed to be normally distributed if the skewness and kurtosis values lie within the ranges ±2 SES and ±2 SEK, respectively. Some practitioners favour one approach and some favour the other. Nonetheless, skewness and kurtosis do not provide conclusive information about normality. Hence, it is always good practice to supplement the skewness and kurtosis coefficients with other methods of testing normality, such as graphical methods and formal tests of normality.
J.R. Evans, Statistics, data analysis and decision making, 3rd edition, Prentice Hall, pp. 60, 2007.
J.D. Brown, Skewness and kurtosis, Shiken: JALT Testing & Evaluation SIG Newsletter, vol. 1, pp. 18-20, 1997.
288
Kurtosis
It is available via Analyze > Descriptive Statistics > Descriptives
The syntax is
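A minimal sketch, assuming the same transformed variables as in the skewness example (the Ratio column is again computed by hand):
descriptives variables=x3 x2 x rt_x ln_x x_1 x_2 x_3
 /statistics=kurtosis.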
289
Kurtosis
Descriptive Statistics
             N      Kurtosis Statistic      Std. Error      Ratio
x3 90 4.776 0.503 9.50
x2 90 3.356 0.503 6.67
x 90 2.162 0.503 4.30
rt_x 90 1.651 0.503 3.28
ln_x 90 1.201 0.503 2.39
x_1 90 0.480 0.503 0.95
x_2 90 -0.001 0.503 0.00
x_3 90 -0.256 0.503 -0.51
290
Index End
Does It Really Matter?
Student's t test, and more generally the ANOVA F test, are robust to non-normality (Fayers 2011).
However...
Nevertheless, there are certain generalities that can be used to direct your efforts, as certain types of data typically respond to particular transformations. For example, square-root transforms are often appropriate for count data, which tend to follow Poisson distributions. Arcsine (sin^-1) transforms are used for data that are percentages or proportions, and tend to fit binomial distributions. Log and square-root transforms are part of a larger class of transforms known as the ladder of powers.
294
Tukey's ladder of powers
Transform > Compute Variable
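A syntax sketch of the ladder-of-powers transformations (the variable names match the tables that follow; it assumes the original variable x is strictly positive):
compute x3 = x**3.
compute x2 = x**2.
compute rt_x = sqrt(x).
compute ln_x = ln(x).
compute x_1 = 1/x.
compute x_2 = 1/(x**2).
compute x_3 = 1/(x**3).
execute.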
[Figure: histograms of the ladder-of-powers transforms of x (panels include x^0 (log), x^(1/2), x^(-1), x^(-2) and x^(-3)); vertical axis: Frequency.]
297
Tukey's ladder of powers
[Figure: normal probability plots (Normal - 95% CI) for x^3, x^2, x, x^0 (log), x^(1/2), x^(-1), x^(-2) and x^(-3); vertical axis: Percent.]
298
Tukey's ladder of powers
299
Tukey's ladder of powers
To test for normality in SPSS
you can perform a Kolmogorov-
Smirnov Test,
Analyze
> Nonparametric tests
> Legacy Dialogs
> 1-Sample Kolmogorov-
Smirnov Test
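A syntax sketch testing each transformed variable against a normal distribution (variable names as above):
npar tests
 /k-s(normal)=x3 x2 x rt_x ln_x x_1 x_2 x_3
 /missing analysis.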
300
Tukey's ladder of powers
301
Tukey's ladder of powers
302
Tukey's ladder of powers
One-Sample Kolmogorov-Smirnov Test
                                        x3            x2           x          rt_x      ln_x      x-1       x-2       x-3
N                                       90            90           90         90        90        90        90        90
Normal Parameters(a)   Mean             1061244.5667  10158.4111   99.7000    9.9599    4.5924    .0102     .0001     .0000
                       Std. Deviation   567251.11996  3309.53301   14.85847   .71097    .13698    .00130    .00003    .00000
Most Extreme           Absolute         .255          .221         .192       .178      .163      .133      .105      .079
Differences            Positive         .255          .221         .192       .178      .163      .063      .060      .054
                       Negative         -.173         -.139        -.108      -.094     -.080     -.133     -.105     -.079
Kolmogorov-Smirnov Z                    2.422         2.099        1.821      1.684     1.544     1.263     .993      .745
Asymp. Sig. (2-tailed)                  .000          .000         .003       .007      .017      .082      .278      .635
a. Test distribution is Normal.
The Asymp. Sig. (2-tailed) value is also known as the p-value. This tells you the probability of obtaining results at least as extreme as these if the null hypothesis were actually true, so a small value casts doubt on the null hypothesis.
303
Tukey's ladder of powers
Despite the scaling, the log transform spoils the final plot.
304
Tukey's ladder of powers
You are seeking the most normal data visually; probably one of the x_1, x_2 or x_3 transforms.
305
Tukey's ladder of powers
Many statistical methods require that the numeric variables you are working with have an approximately normal distribution. In reality, this is often not the case. One of the most common departures from normality is skewness, in particular, right skewness.
307
Tukey's Ladder of Powers
In general, if the normal distribution fits the data, then the plotted points will roughly form a straight line. In addition, the plotted points will fall close to the fitted line. Also, the Anderson-Darling statistic will be small, and the associated p-value will be larger than the chosen α-level (usually 0.05). So the test rejects the hypothesis of normality when the p-value is less than or equal to α.
308
Tukey's Ladder of Powers
To test for normality in SPSS you can perform a Kolmogorov-Smirnov Test.
309
Tukey's Ladder of Powers
The hypotheses are: H0, the data follow a normal distribution; H1, the data do not follow a normal distribution.
Allen, I.E., and Seaman, C.A. (2007). Likert scales and data analyses.
Quality Progress, 40(7), 64-65.
Dawes J. (2008) Do data characteristics change according to the number of scale points used? An experiment using 5 point, 7 point and 10 point scales. International Journal of Market Research, 51(1), 61-77.
315
Likert Scale
It may be better if the scale contains a neutral midpoint (Tsang 2012). This decision (an odd versus an even scale) depends on whether respondents should be forced to take a side by excluding the neutral position, as an even scale does.
316
Likert Scale
An odd number of points allow people to select a middle option. An
even number forces respondents to take sides. An even number is
appropriate when you want to know what direction the people in the
middle are leaning. However, forcing people to choose a side, without
a middle point, may frustrate some respondents (Wong et al. 1993).
317
Likert Scale
Since they have no neutral point, even-numbered Likert scales force
the respondent to commit to a certain position (Brown, 2006) even if
the respondent may not have a definite opinion.
There are some researchers who prefer scales with 7 items or with
an even number of response items (Cohen, Manion, and Morrison,
2000).
318
Likert Scale
The change of response order in a Likert-type scale altered
participant responses and scale characteristics. Response order is the
order in which options of a Likert-type scale are offered (Weng 2000).
320
Winsorize
To Winsorize the data, tail values are set equal to some specified
percentile of the data. For example, for a 90% Winsorization, the
bottom 5% of the values are set equal to the value corresponding to
the 5th percentile while the upper 5% of the values are set equal to
the value corresponding to the 95th percentile.
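A syntax sketch of a 90% Winsorization of the pulse variable (the cut-offs 62 and 88 stand in for the 5th and 95th percentiles and are purely illustrative, as is the new name pulse_w90; the slides that follow do the same via the menus):
freq var pulse /format = notable /percentiles = 5 95.
recode pulse (lowest thru 62=62) (88 thru highest=88) (else=copy) into pulse_w90.
execute.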
321
Winsorize
The pulse data from data set C is
employed.
322
Winsorize
Select statistics
323
Winsorize
324
Winsorize
325
Winsorize
Note the percentiles and enter them into the next slide.
326
Winsorize
Transform > Compute Variable
327
Winsorize
Choose a
sensible new
name
328
Winsorize
Then Add
329
Winsorize
Then Add
330
Winsorize
Retain all
other values
Then Add
331
Winsorize
332
Winsorize
To check your results
333
Winsorize
OK
334
Winsorize
As desired.
335
Winsorize
Syntax:- freq var pulse /format = notable /percentiles = 5 95.
336
Winsorize
Trimming and Winsorization: A review
W. J. Dixon and K. K. Yuen
Statistische Hefte
June 1974, Volume 15, Issue 2-3, pp 157-170
337
Winsorize
Winsorisation for estimates of change
Daniel Lewis
Papers presented at the ICES-III, June 18-21, 2007, Montreal, Quebec, Canada paper
Outliers are a common problem in business surveys which, if left untreated, can have a
large impact on survey estimates. For business surveys in the UK Office for National
Statistics (ONS), outliers are often treated by modifying their values using a treatment
known as Winsorisation. The method involves identifying a cut-off for outliers. Any
values lying above the cut-offs are reduced towards the cut-off. The cut-offs are
derived in a way that approximately minimises the Mean Square Error of level estimates.
However, for many surveys estimates of change are more important. This paper looks at
a variety of methods for Winsorising specifically for estimates of change. The measure
of change investigated is the difference between two consecutive estimates of total.
The first step is to derive potential methods for Winsorising this type of change. Some
of these methods prove more practical than others. The methods are then evaluated,
using change estimates derived by taking the difference between two regular
Winsorised level estimates as a comparison.
338
Winsorize
Speaking Stata: Trimming to taste
Cox, N.J.
Stata Journal 2013 13(3) 640-666
Trimmed means are means calculated after setting aside zero or more
values in each tail of a sample distribution. Here we focus on trimming
equal numbers in each tail. Such trimmed means define a family or
function with mean and median as extreme members and are attractive
as simple and easily understood summaries of the general level
(location, central tendency) of a variable. This article provides a
tutorial review of trimmed means, emphasizing the scope for trimming
to varying degrees in describing and exploring data. Detailed remarks
are included on the idea's history, plotting of results, and confidence
interval procedures.
341
General Linear Models
Index End
342
Centre Data
Sometimes to facilitate analysis it is necessary to centre a data set,
for instance to give it a mean of zero. In this case consider the
reading score in data set A.
343
Centre Data
Why center? Different authors have made different recommendations
regarding the centring of independent variables. Some have
recommended mean-centring (i.e., subtracting the mean from the value
of the original variable so that it has a mean of 0); others z-
standardization (which does the same, and then divides by the
standard deviation, so that it has a mean of 0 and a standard deviation
of 1); others suggest leaving the variables in their raw form. In truth,
with the exception of cases of extreme multi-collinearity (which may
arise in multiple-regression/correlation), the decision does not make
any major difference. For instance, the p value for an interaction term and any subsequent interaction plot should be identical whichever way it is done (Dalal and Zickar 2012; Kromrey and Foster-Johnson 1998).
Dalal, D. K. and Zickar, M. J. 2012 Some common myths about centering predictor variables in moderated multiple regression and polynomial regression. Organizational Research Methods, 15, 339-362. DOI: 10.1177/1094428111430540
Kromrey, J. D. and Foster-Johnson, L. 1998 Mean centering in moderated multiple regression: Much ado about nothing. Educational and Psychological Measurement, 58, 42-67. DOI: 10.1177/0013164498058001005
344
Centre Data
To check we have achieved our
goal we generate descriptive
statistics.
345
Centre Data
To check we have achieved our
goal we generate descriptive
statistics.
Syntax
EXAMINE VARIABLES=read
/PLOT BOXPLOT STEMLEAF
/COMPARE GROUPS
/STATISTICS
DESCRIPTIVES
/CINTERVAL 95
/MISSING LISTWISE
/NOTOTAL.
346
Centre Data
Note, the mean is 53.23.
347
Centre Data
To create a column of means.
348
Centre Data
To create a column of means.
Syntax
AGGREGATE
/OUTFILE=*
MODE=ADDVARIABLES
/BREAK=
/read_mean=MEAN(read).
349
Centre Data
Finally compute the desired variable.
350
Centre Data
Finally compute the
desired variable.
Syntax
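A minimal sketch of the computation (the new variable name read_centred is illustrative):
compute read_centred = read - read_mean.
execute.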
353
Correlation - Comparison
Several procedures that use summary data to test hypotheses about
Pearson correlations and ordinary least squares regression
coefficients have been described in various books and articles. No
single resource describes all of the most common tests.
Furthermore, many of these tests have not yet been implemented in
popular statistical software packages such as SPSS. The article
(next slide) describes all of the most common tests and provide
SPSS programs to perform them. When they are applicable, the
code also computes 100(1 - α)% confidence intervals corresponding
to the tests. For testing hypotheses about independent regression
coefficients, they demonstrate one method that uses summary data
and another that uses raw data (i.e., Potthoff analysis). When the
raw data are available, the latter method is preferred, because use
of summary data entails some loss of precision due to rounding.
354
Correlation - Comparison
SPSS and SAS programs for comparing Pearson correlations and
OLS regression coefficients
DOI: 10.3758/s13428-012-0289-7
For the code, use the version at the DOI above, not the earlier version on the journal web site.
355
Index End
Sobel Test
The Sobel test will assess whether a mediator (see more extensive notes on
Mediation on the main web page) carries the influence of an independent
variable to a dependent variable.
The Sobel test works well only in large samples. It is recommended that you use this test only if you have no access to the raw data. If you have the raw data,
bootstrapping offers a much better alternative that imposes no distributional
assumptions. Consult Preacher and Hayes (2004, 2008) for details and easy-to-
use macros that run the necessary regression analyses for you:
Preacher, K. J., & Hayes, A. F. (2008) Asymptotic and resampling strategies
for assessing and comparing indirect effects in multiple mediator models
Behavior Research Methods, 40, 879-891 DOI: 10.3758/BRM.40.3.879.
Preacher, K. J., & Hayes, A. F. (2004) SPSS and SAS procedures for
estimating indirect effects in simple Mediation models Behavior Research
Methods, Instruments, & Computers, 36, 717-731 DOI: 10.3758/BF03206553.
See "How can I perform a Sobel test on a single mediation effect in SPSS?". But it's not simple!
356
Index End
Structural Equation Modelling
This tutorial begins with an overview of structural equation modeling (SEM) that
includes the purpose and goals of the statistical analysis as well as terminology
unique to this technique. I will focus on confirmatory factor analysis (CFA), a
special type of SEM. After a general introduction, CFA is differentiated from
exploratory factor analysis (EFA), and the advantages of CFA techniques are
discussed. Following a brief overview, the process of modeling will be discussed
and illustrated with an example using data from a HIV risk behavior evaluation
of homeless adults (Stein & Nyamathi, 2000). Techniques for analysis of
nonnormally distributed data as well as strategies for model modification are
shown. The empirical example examines the structure of drug and alcohol use
problem scales. Although these scales are not specific personality constructs,
the concepts illustrated in this article directly correspond to those found when
analyzing personality scales and inventories. Computer program syntax and
output for the empirical example from a popular SEM program (EQS 6.1; Bentler,
2001) are included.
Bentler, P.M. (2001). EQS 6 structural equations program manual. Encino, CA:
Multivariate Software.
359
Does It Always Matter?
"we slavishly lean on the crutch of significance testing because, if we didn't, much of psychology would simply fall apart." If he was right, then significance testing is tantamount to psychology's dirty little secret.
360
Does It Always Matter?
The first rule of performing a project
361
Does It Always Matter? Probably!
Estimation based on effect sizes, confidence intervals, and meta-
analysis usually provides a more informative analysis of empirical
results than does statistical significance testing, which has long
been the conventional choice in psychology. The sixth edition of the
American Psychological Association Publication Manual now
recommends that psychologists should, wherever possible, use
estimation and base their interpretation of research results on point
and interval estimates.
362
Does It Always Matter? Probably!
The debate is ongoing!
Savalei V. and Dunn E. 2015 Is the call to abandon p-values the red
herring of the replicability crisis? Frontiers in Psychology 6:245.
DOI: 10.3389/fpsyg.2015.00245
364
Index