Meta-analysis of ratios of sample variances
LUKE A. PRENDERGAST
Department of Mathematics and Statistics, La Trobe University, Melbourne, Australia, 3086
ROBERT G. STAUDTE
Department of Mathematics and Statistics, La Trobe University, Melbourne, Australia, 3086
ABSTRACT. When conducting a meta-analysis of standardized mean differences
(SMDs), it is common to assume equal variances in the two arms of each study.
This leads to Cohen’s d estimates for which interpretation is simple. However,
this simplicity should not be used as a justification for the assumption of equal
variances in situations where evidence may suggest that it is incorrect. Until
now, researchers have either used an F-test for each individual study as a justification for the equality of variances or perhaps even conveniently ignored such tools altogether. In this paper we propose using a meta-analysis of F-test statistics to estimate the ratio of variances prior to the combination of SMDs. This procedure allows some studies to be included that might otherwise be omitted following individual fixed-level tests for unequal variances, whose rejections sometimes occur even when
the assumption of equal variances holds.
well as associated confidence intervals, can be used as guidance as to whether
the assumption of equal variances is violated. The estimators considered include
variance stabilization transformations (VST) of the F -test statistics as well as
MLE estimators. The VST approaches enable the use of QQ-plots to visually inspect for violations of equal variances while the MLE estimator easily allows for
the introduction of a random effect. When there is evidence of unequal variances,
this work provides a means to formally justify the use of less common methods
such as log ratio of means when studies are measured on a different scale.
Key words: F -test; normalization; variance stabilization; maximum likelihood estimation;
test for unequal variances
1 Introduction
The F-distribution (see, e.g., Chapters 27 and 30 of [5]) is encountered in many applications of statistics, including the testing of equality of variances, analysis of variance (ANOVA)
and more broadly a test for the overall model in linear regression analyses. A problem that
may be of interest to many is how best to combine the evidence from several independent
F -distributed random variables to improve power. For example, suppose that we have K independent studies reporting F -statistics (or summary measures that can be used to calculate
an F -statistic) for the testing of the same hypotheses. Rather than relying on K individual
hypothesis tests, we seek to combine these F -statistics to obtain a single measure that can
be used to test the hypotheses with increased power or to obtain an improved estimation of
an associated parameter.
Of particular interest here is a meta-analysis for standardized mean differences (SMDs). In
each study there are two populations with means µ1 and µ2 and the same variance σ 2 , and the
population SMD is δ = (µ1 − µ2)/σ. An estimate of δ is Cohen's d [3], denoted d = (x̄1 − x̄2)/sp, where x̄1 and x̄2 are the sample means of the n1 and n2 observations sampled from each of the populations and s²p is the pooled sample variance estimate of σ². Cohen [3] suggested a 'rule of thumb' for the SMD where 0.2 is considered small, 0.5 medium and 0.8 large
and values of d from studies with data collected on different measurement scales can be
readily compared. It is important here to ensure that the assumption of equal variances is
justified, or at least that there is a lack of evidence to suggest that the population variances
are not equal, and one possibility is to conduct an F -test for equality of variances [5, p.323]
in each study. Our interest here is in the setting where more than one study is available and in whether the assumption of equal variances is justified in general.
While many tests can be reasonably robust to even moderate departures from equal
variances (e.g. ANOVA), this is an entirely different scenario. Here, the equal variances assumption is used to obtain a simple-to-interpret measure, δ, or its variations. If the variances
are not equal, then the pooled estimate sp may not resemble the standard deviation of either arm
and the resulting estimate d may be misleading. We emphasize this point in the next section.
In Section 2 we will provide a motivating example before discussing the transformation
of F -statistics in Section 3. Maximum likelihood estimators are discussed in Section 4 where
both fixed effect and random effects models are considered. Simulations are provided in
Section 5 that assess the performance of estimators of the ratio of variances. In Section
6 we reconsider the example in Section 2 and carry out the meta-analysis for F -statistics.
Concluding remarks are provided in Section 7.
2 A motivating example
Thakkinstian et al. [14] reported on 15 studies, with the data for 13 of these given in Table 1 below. Since two of the 15 studies were not used in their meta-analysis of SMDs, which we consider shortly, we have not included these studies in our table.
              BB                      Bb                      bb                      Bb/bb
 Study   n    Mean   SD        n    Mean   SD        n    Mean   SD        n    Mean   SD
 1       7    0.970  0.160     35   1.040  0.170     34   1.000  0.190     69   1.020  0.180
 2       2    1.077  0.011     14   1.083  0.099     7    1.099  0.171     21   1.088  0.123
 3       15   1.007  0.158     36   1.047  0.227     40   1.003  0.166     76   1.024  0.197
 4       12   0.980  0.150     19   0.970  0.120     18   1.000  0.130     37   0.985  0.124
 5       38   0.880  0.160     134  0.870  0.110     96   0.860  0.130     230  0.866  0.119
 6       77   0.906  0.153     276  0.932  0.136     196  0.924  0.128     472  0.929  0.133
 7       8    0.870  0.090     43   0.860  0.160     52   0.890  0.150     95   0.876  0.155
 8       107  0.870  0.180     306  0.870  0.160     175  0.870  0.150     481  0.870  0.156
 9       71   0.810  0.253     219  0.846  0.186     120  0.897  0.136     339  0.864  0.172
 10      46   1.034  0.177     98   1.024  0.137     56   1.041  0.122     154  1.030  0.132
 11      27   0.863  0.152     72   0.871  0.167     62   0.929  0.124     134  0.898  0.151
 12      25   0.924  0.145     34   0.951  0.138     21   0.944  0.131     55   0.948  0.134
 13      19   0.651  0.078     59   0.718  0.070     24   0.723  0.083     83   0.719  0.074

Table 1: Data from Table 2 of [14] (excluding two studies that were not part of their meta-analysis). Sample sizes (n), means and standard deviations (SD) for the three genotype groups and for the combined grouping of Bb and bb (Bb/bb).
There are three groups for each of the 13 studies; namely BB, Bb and bb, which refer to genotypes of the individuals studied. The estimate is the mean spinal Bone Mass Density (BMD) for premenopausal women, and the question is whether there is a difference in mean BMD with respect to genotype. In their meta-analysis of SMDs, Thakkinstian et al. [14] compare the BB group with the combined group of Bb and bb, which has the additional benefit of alleviating some small sample sizes that may undermine the assumed normality of the estimated SMDs required for the meta-analysis (for more on meta-analysis of SMDs, including normality of the estimates, see, e.g., Borenstein et al. [2]). However, some small sample sizes are still present for the BB group. The summary statistics for this combined group are also included in Table 1 and are our focus here.
For comparison of the variances between the BB and Bb/bb groups, we take the ratio of
the estimated variances giving, to two decimal places,
0.79, 0.01, 0.64, 1.46, 1.82, 1.33, 0.34, 1.33, 2.18, 1.81, 1.01, 1.17, 1.13
and we refer to these values as f1 , . . . , f13 respectively. In the scientific literature it is common
to assume that BMD is normally distributed (see, e.g., [6]). Under this assumption of normality of the underlying populations, if the true variances are equal between the two groups then the distribution of the ith ratio is F_{n_{i1}-1, n_{i2}-1} (i = 1,...,13), where n_{i1} and n_{i2} are the sample sizes of the BB and Bb/bb groups in the ith study. We reject equality of variances in the test for the ith study if f_i is either too small or too large. Below we provide the p-values for each of the 13 tests:
0.839, 0.140, 0.358, 0.380, 0.009, 0.084, 0.132, 0.051, 0.000, 0.008, 0.909, 0.622, 0.687
with the fifth, sixth, eighth, ninth and tenth studies providing some evidence of unequal
variances. Clearly the smallest of the p-values is very small, but this may suggest that a
trait specific to this study has resulted in unequal variances. A question one should ask in such a situation is whether a meta-analysis of the SMDs is sufficiently robust to departures from equal variances. In this particular example, it appears that the data across studies are measured on the same scale. Consequently, the raw mean difference could be used instead of the SMD, which is often used not only for interpretative reasons but also to combine results from studies for which data have been measured on different scales. When this is not the case, another option is to consider the ratio of means or, to be precise, the log ratio of means [4] (when the means are all of the same sign, as they are here), or to use Glass's effect size, which uses the variance estimate from one group only.
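As a concrete illustration of how these quantities are obtained, the following R sketch (our own code; the vectors below simply hold the Table 1 summary data for the BB and Bb/bb groups) computes the study-specific variance ratios f_i and the two-sided F-test p-values quoted above.

```r
# Summary data for the BB and Bb/bb groups (sample sizes and SDs from Table 1)
n1  <- c(7, 2, 15, 12, 38, 77, 8, 107, 71, 46, 27, 25, 19)          # BB
sd1 <- c(0.160, 0.011, 0.158, 0.150, 0.160, 0.153, 0.090, 0.180,
         0.253, 0.177, 0.152, 0.145, 0.078)
n2  <- c(69, 21, 76, 37, 230, 472, 95, 481, 339, 154, 134, 55, 83)  # Bb/bb
sd2 <- c(0.180, 0.123, 0.197, 0.124, 0.119, 0.133, 0.155, 0.156,
         0.172, 0.132, 0.151, 0.134, 0.074)

f   <- sd1^2 / sd2^2            # ratios of sample variances f_1, ..., f_13
df1 <- n1 - 1; df2 <- n2 - 1    # degrees of freedom for each study

# Two-sided p-values for the usual F-test of equal variances
p <- 2 * pmin(pf(f, df1, df2), 1 - pf(f, df1, df2))
round(f, 2); round(p, 3)
```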
                                            ρ
 Est.   .15²/.15²   .15²/.2²   .1²/.2²   .1²/.3²   .2²/.15²   .2²/.1²   .3²/.1²
 SMD    0.956       0.981      0.998     1.000     0.955      0.924     0.916
 MD     0.961       0.954      0.955     0.957     0.968      0.959     0.942
 MR     0.962       0.953      0.957     0.963     0.966      0.957     0.935

Table 2: Simulated coverage probabilities for interval estimators of the SMD, Mean Difference (MD) and Mean Ratio (MR) for varying choices of ρ.
In Table 2 we provide simulated coverage probabilities (1000 trials) for interval estimators for Cohen's d (SMD), the raw mean difference (MD) and the log ratio of means (MR). Sample sizes were chosen equal to those for the comparison between the BB and Bb/bb groups, and data were sampled from normal distributions with equal means (both 1.0) and ρ = σ₁²/σ₂² as shown in the table. The results were obtained using the metafor R package [15] with a restricted maximum likelihood estimate for the variance of an assumed random effect. As can be seen in the table, while the coverage probabilities for the mean difference and log ratio of means are typically close to nominal (0.95), the coverage for the SMD is sensitive to ρ, with close to nominal coverage only for two choices of ρ, one of which is the equal-variance case. This single example highlights some dangers of assuming equal variances, and a more exhaustive search for examples would no doubt yield further evidence. It should be pointed out that, while the log ratio of means can be used when study variables are on different scales and the means are all of the same sign, in practice it is very common to use Cohen's d, which is perhaps even expected depending on the discipline.
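To make the design behind Table 2 concrete, the sketch below (our own code, not the authors' original script) estimates a single cell of the coverage table: data are generated with equal means and a chosen pair of standard deviations, a random effects meta-analysis is fitted with metafor, and the proportion of intervals covering the true effect (zero on each scale) is recorded.

```r
library(metafor)

# Sample sizes as in the BB vs Bb/bb comparison (Table 1)
n1 <- c(7, 2, 15, 12, 38, 77, 8, 107, 71, 46, 27, 25, 19)
n2 <- c(69, 21, 76, 37, 230, 472, 95, 481, 339, 154, 134, 55, 83)

coverage <- function(measure = "SMD", sd1 = 0.15, sd2 = 0.2, trials = 1000) {
  hits <- 0
  for (i in seq_len(trials)) {
    x <- lapply(n1, rnorm, mean = 1, sd = sd1)   # group 1 samples, one per study
    y <- lapply(n2, rnorm, mean = 1, sd = sd2)   # group 2 samples, one per study
    dat <- escalc(measure = measure,
                  m1i = sapply(x, mean), sd1i = sapply(x, sd), n1i = n1,
                  m2i = sapply(y, mean), sd2i = sapply(y, sd), n2i = n2)
    fit <- rma(yi, vi, data = dat, method = "REML")
    # true effect is zero for SMD, MD and ROM (log ratio of means) when means are equal
    hits <- hits + (fit$ci.lb <= 0 && fit$ci.ub >= 0)
  }
  hits / trials
}

# e.g. coverage("SMD"), coverage("MD"), coverage("ROM") for the .15^2/.2^2 column
```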
 n1        10     50     75     100    125    150    190
 Mean d    0.505  0.544  0.576  0.606  0.649  0.694  0.849
 SD d      0.210  0.131  0.134  0.145  0.169  0.212  0.514

Table 3: Means and standard deviations (from 10,000 iterations) of simulated Cohen's d for n1 observations generated from N(1.1, 0.12²) and n2 = 200 − n1 observations from the N(1.0, 0.2²) distribution.
The simulation reported in Table 2 considered only the case of equal means. When the means are not equal, interpretation of Cohen's d becomes difficult since, when the variances are not equal, the estimate is highly dependent on the sample sizes. As an example we consider a total sample size of 200 = n1 + n2 and focus on the estimated Cohen's d when n1 = 10, 50, 75, 100, 125, 150 and 190. For each choice of n1, we generate n1 values from N(1.1, 0.12²) and n2 = 200 − n1 values from the N(1.0, 0.2²) distribution. This is repeated
10,000 times and in Table 3 we report the average and standard deviations of the estimates.
We can see from the reported means that, on average, the estimated ds vary from moderate
to large depending on the sample sizes. Additionally, the reported standard deviations also
suggest that it would not be unusual to observe small to very large estimates again depending
on the sample size allocations. As this example highlights, interpretation of Cohen’s d when
the group standard deviations are not equal can be problematic.
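A minimal sketch of the Table 3 experiment (our own code, under the stated design) is given below; it reproduces the dependence of the average d on the sample-size split.

```r
set.seed(1)
n_total <- 200
res <- sapply(c(10, 50, 75, 100, 125, 150, 190), function(n1) {
  d <- replicate(10000, {
    x  <- rnorm(n1, 1.1, 0.12)
    y  <- rnorm(n_total - n1, 1.0, 0.2)
    sp <- sqrt(((n1 - 1) * var(x) + (n_total - n1 - 1) * var(y)) / (n_total - 2))
    (mean(x) - mean(y)) / sp        # Cohen's d
  })
  c(mean = mean(d), sd = sd(d))
})
res   # columns correspond to n1 = 10, 50, ..., 190
```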
Returning to our example data, what we would like to know is whether these results provide persistent evidence of unequal variances in general and, as a consequence, mean that one should use SMDs with caution, if at all. Rather than leave this as a subjective judgement, we provide a more formal approach to answering this question, which may benefit researchers by providing at least some formal justification for a lack of evidence, or as a means to move away from the assumption of equal variances and adapt the meta-analysis accordingly. Of course, one option is to not use Cohen's d at all. However, given its widespread use and interpretation, this is unlikely to occur.
3 Variance stabilization of F statistics
An obvious way to query whether the assumption of equal variances is justified across K
studies is to consider the ratio of the estimated variances. However, with sample sizes often
varying greatly among studies, interpreting the estimated ratios (both collectively and individually) is not straightforward. Our first focus will therefore be on re-scaling the estimated
ratios of variances so that they are all on an easily interpreted scale.
Variance stabilization transformations seek to both normalize and scale an estimate; in
this case we are seeking an approximate variance of one. The benefits can include improved
coverage of confidence intervals and a transformed test statistic that is easy to interpret. Variance stabilization of commonly encountered estimators continues to lead to improved inference, as evidenced by recent research. For example, Kulinskaya et al. [8] and Prendergast and Staudte [13] consider variance stabilization of the difference of binomial proportions using a transformation from Kulinskaya et al. [7]. When combining evidence from several studies, variance stabilization can be particularly useful given the niceties of dealing with normally distributed statistics. For some recent examples see [8, 9, 10]. For more discussion on some of the advantages of variance stabilization see Morgenthaler and Staudte [11]. In this section we describe variance stabilization of ratios of estimated variances, which are assumed to be F-distributed when the data have been sampled from two independent normal distributions. Simulations will follow in Section 5.
Given S ∼ Fν1 ,ν2 , the goal is to find a variance stabilizing transformation (VST) T =
T (S) for which T ∼ N (0, 1) approximately. For a single study such a transformation is
not necessary since probabilities and percentiles from the F distribution are easily obtained.
However, one small advantage in the single study setting is that the transformed statistic is
immediately interpretable due to the distribution being free of the degrees of freedom. This
small advantage for a single study leads to a much bigger advantage for multiple studies. In
what follows we explore four possible VSTs that may be used to achieve such advantages.
3.1 Some possible transformations
Let Φ−1 denote the inverse cumulative distribution function (inverse cdf) for the standard
normal distribution and let F denote the cdf for the Fν1 ,ν2 distribution. Then, as pointed out
by a referee, a possible transformation is Φ−1 [F (S)] ∼ N (0, 1) when S ∼ Fν1 ,ν2 . While this
transformation can be obtained computationally, other simple transformations also exist and
we discuss these now.
We now consider four competing VSTs of S ∼ F_{ν1,ν2}. For our first two transformations, the technical details are provided in the Appendix. These transformations are
$$T_1(S) = \frac{1}{\sqrt{c_1}}\left[\ln\!\left(\frac{S}{E[S]}\right) + \frac{c_1}{2}\right] \qquad (1)$$
$$T_2(S) = \frac{2}{\sqrt{c_2}}\,\ln\!\left(\frac{\sqrt{S} + \sqrt{S + \nu_2/\nu_1}}{\sqrt{E[S]} + \sqrt{E[S] + \nu_2/\nu_1}}\right) + \frac{\sqrt{c_2}\,(2\nu_1 + \nu_2 - 2)}{4\sqrt{\nu_1^2 + 2\nu_1\nu_2 - 2\nu_1}} \qquad (2)$$
where c_1 = 2(ν_1 + ν_2 − 2)/[ν_1(ν_2 − 4)], c_2 = 2/(ν_2 − 4) and E[S] = ν_2/(ν_2 − 2) is the mean of the F_{ν1,ν2} distribution for ν_2 > 2. Both of these transformations have approximate mean zero and variance one.
The next transformation that we consider is Paulson's normalizing transformation [12] of the F_{ν1,ν2} statistic, given as
$$T_3(S) = \left[\left(1 - \frac{2}{9\nu_2}\right)S^{1/3} - \left(1 - \frac{2}{9\nu_1}\right)\right]\left(\frac{2S^{2/3}}{9\nu_2} + \frac{2}{9\nu_1}\right)^{-1/2}. \qquad (3)$$
As was the case with T1 and T2, a re-centering of T3 could also be carried out so that its expected value is closer to zero. However, for this transformation the adjustment is very small and our simulations reveal that it is not necessary. In fact, this transformation is quite remarkable in that, when compared with Φ⁻¹[F(S)], the results are almost identical, so that T3 provides a simple alternative. For example, when simulating 1000 observations from the F_{ν1,ν2} distribution for all choices of ν1 and ν2 in 5,...,100, the minimum correlation between the data transformed using each of the two transformations was 0.99969.
The final transformation we describe is a slight variation of the VST introduced in Chapter 23 of Kulinskaya et al. [7]. While it is more complicated than the other transformations, an advantage it has over T1 and T2 is that it exists for all degrees of freedom. It is also more applicable in general than the other transformations since it can be applied to non-central F-distributed test statistics. Consider a test statistic S_λ for which S_λ ∼ F_{ν1,ν2}(λ), where λ is the non-centrality parameter. Let ∆ = F⁻¹_{ν1,ν2}(0.5), where F⁻¹_{ν1,ν2} denotes the inverse cumulative distribution function, so that ∆ is the median of the F_{ν1,ν2} (i.e. with λ = 0; the central F) distribution. Let
$$S_\lambda^* = \begin{cases} S_\lambda, & S_\lambda > \Delta \\ F^{-1}_{\nu_1,\nu_2}\left[1 - F_{\nu_1,\nu_2}(S_\lambda)\right], & S_\lambda \le \Delta \end{cases} \qquad (4)$$
where F_{ν1,ν2} is the cumulative distribution function of S_λ.
For sign(S_λ; ∆) = 1 if S_λ > ∆ and −1 otherwise, the transformation function is
$$T_4(S_\lambda;\nu_1,\nu_2) = \frac{\nu_2}{\nu_2+1}\,\mathrm{sign}(S_\lambda;\Delta)\sqrt{\frac{\nu_2}{2}}\left[\cosh^{-1}\!\left(\frac{\nu_1 S_\lambda^* + \nu_2}{\sqrt{\nu_1\nu_2\Delta + \nu_2^2}}\right) - \cosh^{-1}\!\left(\sqrt{\nu_1\Delta/\nu_2 + 1}\right)\right]. \qquad (5)$$
This VST is that of Kulinskaya et al. [7, p.199], but multiplied by ν2/(ν2 + 1). We have used this adjustment because our extensive simulations reveal that it improves performance when ν2 is small. Kulinskaya et al. [7] show that their transformation is approximately normally distributed with variance one even when the degrees of freedom are small, and that it has mean 0 under the null and mean increasing in λ under alternatives. They also provide applications to ANOVA F-tests.
While we have referred to these transformations as VSTs, this is strictly only true in
general for T1 for any ρ. The other transformations are VSTs for the special case of ρ = 1
although the transformations are still useful for any ρ in what follows.
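For reference, T1 and T3, together with the referee-suggested Φ⁻¹[F(S)], can be coded directly in R as below. This is only a sketch under the definitions above (function names are ours); T2 and T4 can be coded in the same way.

```r
# T1 from (1): requires nu2 > 4 so that c1 is defined
T1 <- function(S, nu1, nu2) {
  c1 <- 2 * (nu1 + nu2 - 2) / (nu1 * (nu2 - 4))
  ES <- nu2 / (nu2 - 2)
  (log(S / ES) + c1 / 2) / sqrt(c1)
}

# T3 from (3): Paulson's normalizing transformation
T3 <- function(S, nu1, nu2) {
  num <- (1 - 2 / (9 * nu2)) * S^(1/3) - (1 - 2 / (9 * nu1))
  den <- sqrt(2 * S^(2/3) / (9 * nu2) + 2 / (9 * nu1))
  num / den
}

# Referee-suggested normalization via the probability integral transform
Tpit <- function(S, nu1, nu2) qnorm(pf(S, nu1, nu2))

# Quick check on a few simulated central F values
S <- rf(5, df1 = 20, df2 = 30)
cbind(T1 = T1(S, 20, 30), T3 = T3(S, 20, 30), Tpit = Tpit(S, 20, 30))
```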
3.2 Q-Q plots to assess violations of the assumption of equal variances
Returning to our meta-analysis of SMDs, where we want a meta-analysis of variance ratios, assume that we have K independent F-distributed random variables denoted S1,...,SK with potentially different degrees of freedom (denoted ν_{k1} and ν_{k2} for the kth study) and population variances σ²_{k1} and σ²_{k2}. When σ²_{k1} ≠ σ²_{k2}, the S_k's are distributed as (σ²_{k1}/σ²_{k2}) F_{ν_{k1},ν_{k2}} random variables.
Below, let T_i denote one of the transformations T1, T2, T3 and T4 given in Section 3.1. We have
$$Z_k = T_i(S_k;\nu_{k1},\nu_{k2}) \overset{\text{approx.}}{\sim} N(0,1) \quad \text{if } \sigma^2_{k1} = \sigma^2_{k2} \text{ for each } k$$
where the closeness of the approximation may depend on the VST and the degrees of freedom. Since the S_k's are independent, if the variances within the K studies are in fact equal, then the ordered Z_k's should resemble data randomly generated from a standard normal distribution. Let Z_{[1]} ≤ ... ≤ Z_{[K]} denote the ordered Z_k's. Then one could visually assess possible violations of the equal variance assumption by utilizing Quantile-Quantile (Q-Q) plots of
$$Z_{[k]} \ \text{ versus } \ \Phi^{-1}\!\left(\frac{k - 0.5}{K}\right)$$
where Φ⁻¹(p) = z_p is the p × 100th percentile of the standard normal distribution such that P(Z ≤ z_p) = p for Z ∼ N(0,1).

Figure 1: Q-Q plots of the ordered VST transformed ratios for the BB vs Bb/bb data in Table 1. Plots A, B, C and D are for the transformations in (1), (2), (3) and (5) respectively.
Returning to our motivating example of Section 2, in Figure 1 we provide the Q-Q plots for each of the transformations T1,...,T4. All of the plots are similar, with only minor differences, and a more in-depth comparison of the VSTs and their ability to achieve approximate standard normality will be explored later. All of the Q-Q plots suggest potential problems with the assumption of equal variances, with a trend towards larger than expected Z_k's. While these Q-Q plots are informative and easy to obtain, in this example and others some researchers would like an accompanying test for unequal variances. This is introduced in the next section. Those who prefer to interpret a meta-analysis of effect sizes, in this case the ratio of variances, can skip to the section after that. Both methodologies are based on the same combination of transformed F-test statistics.
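A sketch of how such a Q-Q plot can be produced from the summary data follows; it reuses the T3 function and the f, df1 and df2 vectors from the earlier sketches (all names are ours).

```r
# Z_k = T3(S_k), where S_k is the ratio of sample variances for study k
Z <- T3(f, df1, df2)
K <- length(Z)

plot(qnorm((1:K - 0.5) / K), sort(Z),
     xlab = "Standard Normal Quantiles", ylab = "T(S)",
     main = "Q-Q plot of transformed variance ratios")
abline(0, 1, lty = 2)   # reference line for exact standard normality
```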
3.3 Omnibus tests for unequal variances
Continuing with the notation introduced in the previous section, two meta-analytic test statistics that may be used to combine the evidence are
$$Z = \sqrt{K}\times\bar{Z} = \frac{1}{\sqrt{K}}\sum_{k=1}^K Z_k \overset{\text{approx.}}{\sim} N(0,1) \qquad (6)$$
and
$$X^2 = \sum_{k=1}^K Z_k^2 \overset{\text{approx.}}{\sim} \chi^2_K \qquad (7)$$
to test H_0: σ²_{k1} = σ²_{k2} for every k = 1,...,K.
The two test statistics above apply an equal weighting to each study. However, another possibility would be to apply a greater weighting to studies with smaller estimator variability on the non-transformed scale. Hence, weighted versions of the above are
$$Z_w = \frac{1}{\sqrt{\sum_{k=1}^K w_k^2}}\sum_{k=1}^K w_k Z_k \overset{\text{approx.}}{\sim} N(0,1) \qquad (8)$$
and
$$X_w^2 = \sum_{k=1}^K w_k Z_k^2 \qquad (9)$$
where w_1,...,w_K are nonzero weights with \sum_{k=1}^K w_k = 1. Under the assumption of σ²_{k1} = σ²_{k2}, the distribution of X_w² is not straightforward, although some approximating distributions do exist [see, e.g., 16]. However, for what we require shortly, it is quite efficient to simply use Monte-Carlo simulation to accurately obtain what is needed. In terms of the weights there are many possibilities, and we leave further guidance on this until the next section.
As expected, if the population variances are not equal across the studies, then the evidence against equal variances is expected to grow with increasing K (the number of studies). Note that there is no requirement that σ²_{k1} ≠ σ²_{k2} across all k = 1,...,K simultaneously. Given that the distribution of Z in (6) (and also Z_w) does not depend on the number of studies, it is immediately interpretable as a measure of evidence against equal variances and, if Z > 0, in favor of σ²_{k1} ≥ σ²_{k2} for all k with strict inequality for at least one k (or similarly for σ²_{k1} < σ²_{k2} if Z < 0). However, one is usually not willing or able to restrict the alternative to such a 'one-sided' possibility, so the second, χ²-type test statistic may be more appropriate. In a meta-analysis it is widely believed that differences between studies might lead to different study parameters of interest, and random effects can be used to describe such differences. It is therefore possible that differences between studies could result in different magnitudes of the ratios of variances. Therefore the test statistic in (7) is more powerful when some studies have ratios greater than one and others less than one. A p-value for testing against the hypothesis of equal variances can be computed as P(X > X²), where X ∼ χ²_K and X² is the observed value of the statistic in (7). A p-value associated with the test statistic in (9) is not exactly determinable. However, to obtain an accurate approximation via simulation, one simply needs to randomly generate M values of X_w² under the null by computing Xw, where X is an M × K matrix of randomly generated χ²_1 values and w is the column vector of weights. The simulated p-value is then simply the proportion of values in the resulting vector that are greater than the observed test statistic.
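The statistics (6)-(9) and the Monte-Carlo p-value for X_w² can be computed along the following lines (a sketch only; the transformed values are obtained with the T3 function and the f, df1 and df2 vectors from the earlier sketches, and equal weights are used purely as a simple default).

```r
Zk <- T3(f, df1, df2)                      # VST-transformed ratios
w  <- rep(1 / length(Zk), length(Zk))      # equal weights summing to one (a simple default)

Z    <- sum(Zk) / sqrt(length(Zk))         # statistic (6)
X2   <- sum(Zk^2)                          # statistic (7)
Zw   <- sum(w * Zk) / sqrt(sum(w^2))       # statistic (8)
Xw2  <- sum(w * Zk^2)                      # statistic (9)

p_Z  <- 2 * pnorm(-abs(Z))                               # two-sided p-value for (6)
p_X2 <- pchisq(X2, df = length(Zk), lower.tail = FALSE)  # chi-square p-value for (7)

# Monte-Carlo p-value for (9): M draws of X_w^2 under the null
M <- 1e5
null_draws <- matrix(rchisq(M * length(Zk), df = 1), nrow = M) %*% w
p_Xw2 <- mean(null_draws > Xw2)
```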
3.4 Estimates of an assumed fixed ratio of variances
In this section we provide our first insights into estimating an assumed fixed ratio of variances. While the following estimators are motivated by the VSTs, it should be noted that there is a strong link with maximum likelihood estimators (MLEs), which we consider in the next section. The MLEs also allow for the simple introduction of a random effect, and this will also be considered in the next section.
Suppose that σ²_{k1}/σ²_{k2} = ρ for all k = 1,...,K. That is, the ratio of variances is assumed fixed across all studies, where ρ = 1 is assumed when utilizing SMDs. In this section we consider combining evidence across all studies to obtain point and interval estimates for ρ. Under the assumption that data are sampled from independent normal distributions, we have that S_k/ρ ∼ F_{n_{k1}-1, n_{k2}-1} so that, from Section 3.3,
$$T_i(S_k/\rho;\nu_{k1},\nu_{k2}) \overset{\text{approx.}}{\sim} N(0,1)$$
and subsequently
$$Z_{iw} = \frac{1}{\sqrt{\sum_{k=1}^K w_k^2}}\sum_{k=1}^K w_k\, T_i(S_k/\rho;\nu_{k1},\nu_{k2}) \overset{\text{approx.}}{\sim} N(0,1). \qquad (10)$$
Let c_{k1} denote the kth study-specific c_1 used in (1), and let E_0(S_k) = E(S_k/ρ) = ν_{k2}/(ν_{k2} − 2) denote the expected value of S_k under the null hypothesis of equal variances, which is free of ρ. Consequently, Z_{iw} is a pivotal quantity for ρ, and an approximate (1 − α) × 100% confidence interval for ρ can be obtained by solving
$$|Z_{iw}| = z_{1-\alpha/2}$$
where z1−α/2 is the (1 − α/2)100% percentile of the N (0, 1) distribution.
For illustrative purposes we shall focus our attention on T1 from (1). For this simple VST,
formulae for the interval estimates of ρ can be derived as closed-form expressions which will
aid in discussion. Estimates for the other transformations can be obtained through simple
computational root-solving. Using the definition of T1 , simple rearranging of (10) leads to
$$\left(\sum_{k=1}^K \frac{w_k}{\sqrt{c_{k1}}}\right)^{-1}\sum_{k=1}^K \frac{w_k}{\sqrt{c_{k1}}}\left[\ln(S_k) + \ln\!\left(\frac{n_{k2}-3}{n_{k2}-1}\right) + \frac{c_{k1}}{2}\right] \overset{\text{approx.}}{\sim} N\!\left[\ln(\rho),\, V\right] \qquad (11)$$
where
$$V = \left(\sum_{k=1}^K \frac{w_k}{\sqrt{c_{k1}}}\right)^{-2}\sum_{k=1}^K w_k^2.$$
Therefore, the left-hand side of (11) is an estimator of ln(ρ) with variance V. Consequently, we denote this estimator by \widehat{\ln(\rho)}, and a (1 − α) × 100% confidence interval for ln(ρ) is \widehat{\ln(\rho)} ± z_{1−α/2}\sqrt{V}. Finally, we exponentiate to obtain our confidence interval for ρ, given as
$$\left(\exp\!\left[\widehat{\ln(\rho)} - z_{1-\alpha/2}\sqrt{V}\right],\; \exp\!\left[\widehat{\ln(\rho)} + z_{1-\alpha/2}\sqrt{V}\right]\right) \qquad (12)$$
where \widehat{\ln(\rho)} and V are given in (11). This confidence interval is simple to compute, which is an advantage of the T1 VST. For the other VSTs, the estimates can be obtained computationally which, using a package such as R, is a simple task.
So far in this section we have not provided any guidance as to suitable choices for the weights w_1,...,w_K. In the next section, we show that the choice of
$$w_k = \frac{1}{\sqrt{c_{k1}}}, \quad k = 1,\ldots,K \qquad (13)$$
in (11) results in an MLE estimator of ρ (with an equivalent variance V) based on the log of the ratio of sample variances. This will therefore provide motivation for the choice of weights. However, future work may include different choices of weights that may, for example, be used to provide some degree of robustness in the estimation to protect against outliers.
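The closed-form estimator in (11) with the weights in (13), and the interval (12), can be computed directly. A sketch (our own code, reusing the f, df1 and df2 vectors from the earlier sketches, with df2 = n_{k2} − 1) is given below.

```r
# Study-specific c1 values and the weights (13)
ck1 <- 2 * (df1 + df2 - 2) / (df1 * (df2 - 4))   # requires df2 > 4
wk  <- 1 / sqrt(ck1)

# Estimator (11) of log(rho); note (n_k2 - 3)/(n_k2 - 1) = (df2 - 2)/df2
a       <- wk / sqrt(ck1)
log_rho <- sum(a * (log(f) + log((df2 - 2) / df2) + ck1 / 2)) / sum(a)
V       <- sum(wk^2) / sum(a)^2

exp(log_rho)                                        # point estimate of rho
exp(log_rho + c(-1, 1) * qnorm(0.975) * sqrt(V))    # 95% interval (12)
```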
4 Meta-analysis based on maximum likelihood estimation
In the previous section we considered meta-analysis for the ratio of variances following a
variance stabilization of the F -test statistics. Another possibility is to consider MLEs for the
ratio of variances. We will firstly consider MLE estimation based on the true distribution
of the ratio of sample variances which allows for estimation of an assumed fixed ratio across
studies. We will then consider an extension to allow for a simple random effect on the ratio
with estimation based on the approximate normal distribution for the log of the ratio of
sample variances. Throughout we will continue to use the notation of the previous sections.
4.1 MLE estimation based on the re-scaled F distribution
Let g(f; ν_1, ν_2) denote the probability density function of the F distribution with degrees of freedom ν_1 and ν_2. Then, under the assumption that the data are sampled from independent normal distributions, it is simple to verify that the probability density function for S_k is
$$f(s) = \frac{1}{\rho_k}\, g(s/\rho_k;\, n_{k1}-1,\, n_{k2}-1) \qquad (14)$$
where ρ_k = σ²_{k1}/σ²_{k2}.
It is not difficult to obtain point and interval MLE estimates for a fixed ρ_k = ρ (k = 1,...,K) based on the likelihood function above, using routine optimization functions within a package such as R together with a numerically computed Hessian matrix.
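A sketch of this fixed-ρ maximum likelihood fit based on (14), using optim with a numerical Hessian (our own code; it again reuses the f, df1 and df2 vectors defined earlier), is:

```r
# Negative log-likelihood for a common rho across studies, based on (14)
negll <- function(log_rho, fvals, df1, df2) {
  rho <- exp(log_rho)                       # work on the log scale so that rho > 0
  -sum(df(fvals / rho, df1, df2, log = TRUE) - log(rho))
}

fit <- optim(par = 0, fn = negll, fvals = f, df1 = df1, df2 = df2,
             method = "BFGS", hessian = TRUE)

rho_hat <- exp(fit$par)                     # MLE of rho
se_log  <- sqrt(1 / fit$hessian[1, 1])      # SE of log(rho) from the numerical Hessian
exp(fit$par + c(-1, 1) * qnorm(0.975) * se_log)   # approximate 95% CI for rho
```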
4.2 MLE estimation based on the approximate normal distribution
The simple form of T1 from Section 3 provides an opportunity to obtain estimates for the ratio of variances based on approximate normality of the log of the ratio of sample variances. For simplicity, throughout let ω = ln(ρ).
4.2.1 Fixed effect model
Using the fact that T_1(S_k) is approximately N(0,1) distributed, we have
$$\ln(S_k) \overset{\text{approx.}}{\sim} N(\mu_k,\, c_{k1}) \qquad (15)$$
where μ_k = ω + ln{ν_{k2}/(ν_{k2} − 2)} − c_{k1}/2 and c_{k1} is defined as c_1 in (1), but specific to the kth study.
Under the assumption of a common fixed ρ = ρ_k across all studies, the log-likelihood function obtained from (15), ignoring constant terms, is
$$l(s_1,\ldots,s_K;\omega) = -\frac{1}{2}\sum_{k=1}^K \frac{1}{c_{k1}}\left[\ln(s_k) - \omega - \ln\{\nu_{k2}/(\nu_{k2}-2)\} + c_{k1}/2\right]^2. \qquad (16)$$
Solving l′(s_1,...,s_K; ω) = 0 for ω = ln(ρ) gives the MLE
$$\hat{\omega}_{MLE} = \left(\sum_{k=1}^K \frac{1}{c_{k1}}\right)^{-1}\sum_{k=1}^K \frac{1}{c_{k1}}\left[\ln(s_k) + \ln\!\left(\frac{n_{k2}-3}{n_{k2}-1}\right) + \frac{c_{k1}}{2}\right] \qquad (17)$$
which is identical to the estimate obtained from (11) when the weights w_k are chosen to be 1/√c_{k1}. A closed-form solution for the variance of the MLE also exists and is given as
$$V^*_{MLE} = \frac{1}{-l''(s_1,\ldots,s_K;\omega)} = \left(\sum_{k=1}^K \frac{1}{c_{k1}}\right)^{-1}$$
which is equivalent to V from (11) when the weights are chosen to be 1/√c_{k1}. This leads to an approximate (1 − α) × 100% confidence interval for ρ of
$$\left(\exp\!\left[\hat{\omega}_{MLE} - z_{1-\alpha/2}\sqrt{V^*_{MLE}}\right],\; \exp\!\left[\hat{\omega}_{MLE} + z_{1-\alpha/2}\sqrt{V^*_{MLE}}\right]\right). \qquad (18)$$
4.2.2 Random effects model
As is done for classic random effects models for effect sizes, instead of assuming a fixed ρ across all studies, we can introduce a random effect to explain unmeasurable heterogeneity among the ρ_k's. From (15), a simple random effect model assumes
$$\ln(S_k) \overset{\text{approx.}}{\sim} N(\mu_k,\, c_{k1} + \tau^2) \qquad (19)$$
where τ² is the variance of an assumed central normal additive random effect for ln(S_k). While closed-form solutions for the MLEs of ρ and τ have not been obtained, computationally they are not too difficult to achieve due to the niceties of dealing with the normal distribution.
For Y_k = ln(S_k) ∼ N(μ_k, c_{k1} + τ²), ignoring constants, a log-likelihood function is
$$l(y_1,\ldots,y_K;\omega,\tau) = -\frac{1}{2}\sum_{k=1}^K\left[\ln(c_{k1}+\tau^2) + \frac{(y_k-\mu_k)^2}{c_{k1}+\tau^2}\right]. \qquad (20)$$
Taking the first derivative of (20) with respect to both ω and τ², the MLEs (denoted \hat{\omega}_{MLE} and \hat{\tau}^2_{MLE}) are the solutions to
$$\frac{\partial l}{\partial\omega} = \sum_{k=1}^K \frac{y_k-\mu_k}{c_{k1}+\tau^2} = 0 \quad\text{and}\quad \frac{\partial l}{\partial\tau^2} = \frac{1}{2}\sum_{k=1}^K\left[\frac{(y_k-\mu_k)^2}{(c_{k1}+\tau^2)^2} - \frac{1}{c_{k1}+\tau^2}\right] = 0.$$
To ensure that the estimate of τ is positive, we found that the best approach was to re-write the likelihood in terms of ln(τ) and estimate this parameter; exponentiating then gives the estimate of τ.
We can also derive approximate variances for the MLEs (details are in Appendix B),
$$V_{MLE,\omega} = \left(\sum_{k=1}^K \frac{1}{c_{k1}+\hat{\tau}^2_{MLE}}\right)^{-1}, \qquad V_{MLE,\tau^2} = 2\left(\sum_{k=1}^K \frac{1}{(c_{k1}+\hat{\tau}^2_{MLE})^2}\right)^{-1}. \qquad (21)$$
When constructing intervals using the appropriate percentile from the normal distribution, our simulation studies (to be discussed in the next section) showed that the intervals were too narrow. Therefore our suggested approximate (1 − α) × 100% confidence interval for ρ is
$$\left(\exp\!\left[\hat{\omega}_{MLE} - t_{K-1,1-\alpha/2}\sqrt{V_{MLE,\omega}}\right],\; \exp\!\left[\hat{\omega}_{MLE} + t_{K-1,1-\alpha/2}\sqrt{V_{MLE,\omega}}\right]\right) \qquad (22)$$
where t_{K−1,1−α/2} is the (1 − α/2) × 100% percentile of the t_{K−1} distribution, which results in intervals with improved coverage.
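A sketch of the random effects fit, maximizing (20) over (ω, ln τ) and using the first of the approximate variances in (21) with the t-based interval (22), is given below (our own code; it reuses the f, df2 and ck1 objects from the earlier sketches).

```r
yk  <- log(f)
mu0 <- log(df2 / (df2 - 2)) - ck1 / 2        # mu_k minus omega, per (15)

negll_re <- function(par) {                  # par = (omega, log(tau))
  omega <- par[1]; tau2 <- exp(par[2])^2
  v <- ck1 + tau2
  0.5 * sum(log(v) + (yk - omega - mu0)^2 / v)   # negative of (20)
}

fit       <- optim(c(0, log(0.1)), negll_re, method = "BFGS")
omega_hat <- fit$par[1]
tau_hat   <- exp(fit$par[2])

V_omega <- 1 / sum(1 / (ck1 + tau_hat^2))    # first variance in (21)
K <- length(yk)
exp(omega_hat + c(-1, 1) * qt(0.975, df = K - 1) * sqrt(V_omega))   # interval (22)
```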
5 Simulation Studies
In this section we consider several simulation studies. The first focuses on the performance of the transformations in achieving approximate standard normality; this is followed by an assessment of performance when the methods are applied in a meta-analysis setting.
5.1 Simulation 1: Approximate standard normality of the transformed F-statistics under the assumption of equal variances
In Section 3 we noted that a correction to the T4 transformation function can be useful in achieving approximate standard normality. This was discovered by considering contour plots of the size of the tests (tail probabilities) from Section 3.3 under the assumption of equal variances. We provide some examples of these plots here for T1, T2, T3 and T4 (Figure 2).
The plots depict the simulated size of the test, which is the proportion of times that Z² exceeded the cutoff 1.96² for a nominal size of α = 5%, where Z is from (6) but for a single study; that is, it is simply the VST-transformed test statistic using each of T1 (Plot A) through to T4 (Plot D). While all the transformations perform well across most of the degrees of freedom combinations (since the empirical size is close to nominal), the Paulson transformation T3 is quite remarkable in the sense that it consistently results in an excellent empirical size across all of the choices. As noted previously, T3(S) is almost identical to Φ⁻¹[F(S)] for S ∼ F_{ν1,ν2}. The white space in Plots A and B is for when T1 and T2 are not defined, which occurs when ν2 ≤ 4. VST T4 is defined for all degrees of freedom and therefore does not suffer from this problem, but for consistency T3 is the overall best performer when it comes to size. It should be pointed out that, when the degrees of freedom are small,
transformations such as T1 are sensitive to which group is used in the numerator and which in the denominator of the ratio. So from a purely testing point of view, T3 is the safest option. However, this is for a single study only and, in a meta-analysis, the problem is only likely to arise if there is a persistently small sample size in one group across several studies.

Figure 2: Contour plots for the empirical size of the test following variance stabilization when the nominal size is α = 0.05. Plots A, B, C and D are for the transformations in (1), (2), (3) and (5) respectively. 10,000 iterations were used in the simulation for each combination of degrees of freedom.
Similar results were also found when we considered sizes of α = 0.01 and α = 0.1 (not
shown). While all of the VSTs performed well, it was the Paulson transformation that
consistently provided excellent results across all combinations for the degrees of freedom.
We now consider the performance of the transformation function in achieving standard
normality over varying ν1 and ν2 .
Figure 3: Histograms of 10,000 simulated values from the Fν1 ,ν2 distribution following transformation using each of the VSTs T1 , . . . , T4 . The black line denotes the standard normal
probability density function for comparison.
In Figure 3 we provide histograms of 10,000 observations generated from the F_{ν1,ν2} distribution after application of each of the VSTs T1, T2, T3 and T4. The combinations of ν1 and ν2 are taken from {15, 30} and the black line depicts the standard normal density function. All VSTs achieve at least approximate normality for these combinations of degrees of freedom. However, it is the T3 and T4 VSTs that are the best performers, followed by T1 provided both degrees of freedom are not too small. We also tried smaller degrees of freedom (e.g. combinations from {10, 20}; not shown) and the performance of T2 diminished the most. T1 still did a reasonable job of normalizing the data, and T3 and T4 performed exceptionally well. When increasing the degrees of freedom, all methods did very well in normalizing F-statistics.
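The empirical sizes underlying Figure 2 can be approximated with a few lines of R; the sketch below (our own code, reusing the T3 function from the earlier sketch) does this for a single combination of degrees of freedom.

```r
# Empirical size of the single-study test |T3(S)| > 1.96 under equal variances
set.seed(2)
empirical_size <- function(nu1, nu2, iters = 10000, alpha = 0.05) {
  Z <- T3(rf(iters, nu1, nu2), nu1, nu2)
  mean(Z^2 > qnorm(1 - alpha / 2)^2)
}
empirical_size(15, 30)    # should be close to the nominal 0.05
```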
5.2 Simulation 2: Meta-analysis for estimation of ρ
In this section we consider a meta-analysis for the estimation of the ratio of variances (ρ). We consider both an assumed fixed ρ across all studies and also a random effects model that allows ρ to vary across the studies. Simulations are used to obtain approximate values for bias, actual coverage probability (CP) and confidence interval width (W) for ρ, as well as the bias in estimating the variance term associated with the random effect for the log of the ratio of variances.
In Table 4 we report the results for data simulated from K = 13 studies where the sample
sizes are chosen to be the same as those for comparison between the BB and Bb/bb groups
from Table 1. For simplicity we consider four estimators. Firstly we consider the interval
estimators for ρ based on the T1 and T3 transformations (similar results were achieved for
T2 and T4 and are therefore omitted) with weights according to (13). For T1 , this is then
also identical to the MLE estimators of ρ based on the log transformation that is reported in
Section 4.2.1. Given that our simulations have shown that T3 can be a slightly better transformation function to achieve approximate normality, we have chosen this transformation
for comparison to see if slightly better performance can be gained. The other two methods
are the MLE estimator based on the re-scaled F -distribution detailed in Section 4.1 and the
MLE estimator based on the random effects model introduced in Section 4.2.2. Throughout
we will refer to these methods as T1 , T3 , F E and RE respectively.
For each of the methods highlighted above, we consider three choices for the variance parameter (τ²) of the random effects model. The first choice, τ² = 0, equates to the fixed effects model for which T1, T3 and FE are well suited, and here τ is not estimated. The other choices are τ² = 0.2² and τ² = 0.4², scenarios favoring the RE approach. The data are simulated according to the assumed data model (not the approximate normal model) in that the ratios of sample variances are sampled from the re-scaled F-distribution.
When τ = 0, all approaches achieve very close to nominal coverage of 0.95, small bias in estimating ρ and similar interval widths. The RE estimator has an approximate bias of around 0.05 for τ, which is somewhat expected given that the minimum possible estimate of τ is 0. The result is a slightly conservative interval for ρ.
                   T1                  T3                  FE                  RE
 τ     ρ      Bias   CP    W      Bias   CP    W      Bias   CP    W      Bias   CP    W     Biasτ
 0.0   0.20   0.00   0.95  0.06   0.00   0.96  0.06   0.00   0.96  0.06   0.00   0.98  0.07   0.05
       0.50   0.00   0.95  0.15   0.00   0.95  0.15  -0.00   0.95  0.15   0.00   0.97  0.17   0.04
       0.80   0.01   0.95  0.24   0.01   0.96  0.24   0.01   0.96  0.24   0.01   0.98  0.28   0.05
       1.00   0.01   0.95  0.30   0.01   0.95  0.30   0.01   0.95  0.30   0.01   0.97  0.35   0.04
       1.25   0.00   0.96  0.37   0.00   0.95  0.37  -0.00   0.95  0.37  -0.00   0.98  0.45   0.06
       1.50   0.00   0.95  0.44   0.00   0.95  0.44  -0.00   0.95  0.44  -0.00   0.97  0.54   0.06
       2.00  -0.00   0.94  0.59  -0.00   0.95  0.59  -0.01   0.95  0.59  -0.01   0.97  0.71   0.05
       5.00   0.02   0.95  1.48   0.02   0.95  1.48   0.01   0.96  1.47   0.01   0.98  1.77   0.05
 0.2   0.20   0.00   0.82  0.06   0.00   0.83  0.06   0.00   0.83  0.06   0.00   0.94  0.08  -0.05
       0.50   0.01   0.82  0.15   0.01   0.83  0.15   0.01   0.81  0.15   0.00   0.93  0.21  -0.05
       0.80   0.00   0.83  0.24   0.01   0.84  0.24   0.01   0.83  0.24   0.00   0.94  0.34  -0.06
       1.00   0.00   0.83  0.29   0.01   0.83  0.30   0.01   0.83  0.30   0.00   0.94  0.43  -0.05
       1.25  -0.00   0.83  0.37   0.00   0.83  0.37   0.01   0.84  0.37  -0.00   0.94  0.53  -0.05
       1.50   0.02   0.82  0.44   0.02   0.82  0.45   0.03   0.82  0.45   0.01   0.93  0.64  -0.05
       2.00   0.01   0.84  0.59   0.02   0.84  0.60   0.03   0.84  0.60   0.01   0.94  0.84  -0.05
       5.00   0.03   0.83  1.48   0.04   0.83  1.49   0.07   0.83  1.49   0.03   0.94  2.10  -0.06
 0.4   0.20   0.00   0.63  0.06   0.01   0.62  0.06   0.01   0.61  0.06   0.00   0.93  0.12  -0.05
       0.50   0.01   0.61  0.15   0.02   0.61  0.15   0.03   0.59  0.16   0.01   0.94  0.31  -0.04
       0.80   0.01   0.63  0.24   0.02   0.64  0.24   0.04   0.63  0.25   0.00   0.94  0.50  -0.04
       1.00   0.02   0.63  0.30   0.03   0.63  0.31   0.06   0.60  0.31   0.01   0.94  0.62  -0.05
       1.25   0.00   0.65  0.37   0.02   0.65  0.38   0.06   0.62  0.39   0.00   0.94  0.78  -0.04
       1.50   0.02   0.62  0.45   0.04   0.62  0.46   0.08   0.61  0.47   0.02   0.92  0.92  -0.06
       2.00   0.03   0.61  0.60   0.06   0.62  0.61   0.11   0.60  0.63   0.01   0.93  1.24  -0.04
       5.00   0.08   0.62  1.49   0.15   0.63  1.53   0.29   0.61  1.57   0.04   0.93  3.07  -0.06

Table 4: K = 13 studies. Simulated comparisons between T1, T3, the fixed effects model estimator based on the F-distribution (FE) and the random effects model estimator (RE) for bias, coverage probability (CP) and confidence interval width (W) for 1,000 simulated runs for various choices of ρ. The bias for the estimator of τ (Biasτ) is also included for the RE model. The number of studies and sample sizes chosen are set equal to those for the comparison between the BB and Bb/bb groups from Table 1.
In the presence of a random effect (i.e. when τ = 0.2 or 0.4 in this simulation), we observe much lower than nominal coverages for T1, T3 and FE. While the bias in estimating ρ is often small, these estimators assume that τ = 0, which results in a smaller variance for the estimators and subsequently intervals that are too narrow. The FE method typically performs worst with respect to the bias in estimating ρ, indicating that the transformations can adequately isolate the component ρ from the unmodelled random effect. Close to nominal coverage is achieved when using the RE estimator, although τ tends to be underestimated.
We now repeat the simulation from Table 4 but this time we assume twice as many studies
(i.e. K = 26) and report the results in Table 5. To achieve this, the K = 13 sample sizes
were repeated. For τ = 0 we observe excellent coverage for the interval estimators of ρ, and the intervals are, on average, narrower. When τ > 0, poor coverage is again observed
for all methods except for the RE estimator where excellent coverage is obtained. The bias
in estimating τ has also decreased.
Finally, in Table 6 we repeat the simulation again for 26 studies, but this time we double
all of the sample sizes. We see the same patterns of coverage as before but with typically
narrower intervals. T1 does a good job at estimating ρ in the presence of a random effect
with small bias reported although the coverage is poor. The RE method has again achieved
very good results and the bias in estimating τ has decreased.
6 Bone mass density example continued
We now illustrate how the theory and simulations above inform and affect an analysis of the bone mineral density data from Section 2 when comparing the BB and Bb/bb groups from Table 1. For simplicity we focus only on the Paulson transformation T3.
In Plot A of Figure 4 we provide the forest plot for the meta-analysis of the ratio of variances. The study-specific intervals are those arising from the standard F-test and the estimates are simply the ratios of estimated variances. The estimate and interval labeled "Overall" arise from the meta-analysis based on ML estimation of the random effects model (see Section 4.2.2). As can be seen from the forest plot, there is a persistent suggestion that the true ratio is greater than one, with two of the studies with smaller estimates having high estimator variability and many of the studies with larger estimates having more precision. This is reflected in the large overall estimate of 1.36 with corresponding 95% confidence interval (1.12, 1.66). The estimated variance of the random effect is small (τ̂² = 0.035, with standard error 0.04). When the fixed effect model was assumed, the ML estimate of ρ was 1.45 with approximate 95% confidence interval (1.26, 1.68). These results should lead to a rejection of the use of SMDs for this analysis, given that the required assumption of equal variances is, in general, violated.
                   T1                  T3                  FE                  RE
 τ     ρ      Bias   CP    W      Bias   CP    W      Bias   CP    W      Bias   CP    W     Biasτ
 0.0   0.20  -0.00   0.94  0.04  -0.00   0.94  0.04  -0.00   0.94  0.04  -0.00   0.96  0.05   0.04
       0.50   0.00   0.96  0.10   0.00   0.96  0.10   0.00   0.96  0.10   0.00   0.98  0.12   0.04
       0.80   0.00   0.95  0.17   0.00   0.95  0.17   0.00   0.95  0.17   0.00   0.97  0.18   0.04
       1.00   0.00   0.94  0.21   0.00   0.95  0.21   0.00   0.95  0.21   0.00   0.97  0.23   0.04
       1.25   0.01   0.95  0.26   0.01   0.95  0.26   0.00   0.94  0.26   0.00   0.97  0.29   0.04
       1.50   0.00   0.94  0.31   0.00   0.94  0.31   0.00   0.94  0.31   0.00   0.96  0.34   0.04
       2.00  -0.00   0.94  0.41  -0.00   0.95  0.42  -0.00   0.95  0.41  -0.00   0.96  0.46   0.04
       5.00   0.01   0.95  1.04   0.01   0.95  1.04   0.01   0.95  1.04   0.02   0.97  1.15   0.04
 0.2   0.20   0.00   0.84  0.04   0.00   0.84  0.04   0.00   0.83  0.04  -0.00   0.94  0.06  -0.03
       0.50   0.00   0.83  0.10   0.00   0.84  0.10   0.01   0.83  0.10   0.00   0.93  0.14  -0.04
       0.80  -0.00   0.83  0.17   0.00   0.84  0.17   0.01   0.82  0.17  -0.00   0.94  0.23  -0.03
       1.00   0.00   0.83  0.21   0.01   0.82  0.21   0.01   0.81  0.21  -0.00   0.94  0.29  -0.03
       1.25   0.01   0.83  0.26   0.01   0.83  0.26   0.02   0.83  0.26   0.01   0.94  0.36  -0.03
       1.50  -0.00   0.83  0.31   0.01   0.84  0.31   0.02   0.84  0.32  -0.00   0.92  0.42  -0.04
       2.00   0.01   0.85  0.42   0.02   0.85  0.42   0.03   0.84  0.42   0.01   0.94  0.57  -0.04
       5.00  -0.01   0.83  1.04   0.01   0.83  1.04   0.05   0.81  1.05  -0.01   0.92  1.42  -0.04
 0.4   0.20   0.00   0.61  0.04   0.00   0.60  0.04   0.01   0.58  0.04   0.00   0.94  0.08  -0.02
       0.50   0.00   0.61  0.10   0.01   0.61  0.11   0.03   0.58  0.11   0.00   0.94  0.21  -0.02
       0.80   0.01   0.61  0.17   0.02   0.61  0.17   0.05   0.58  0.18   0.01   0.94  0.34  -0.02
       1.00   0.01   0.61  0.21   0.03   0.60  0.21   0.06   0.58  0.22   0.01   0.93  0.43  -0.02
       1.25   0.00   0.60  0.26   0.02   0.60  0.27   0.06   0.57  0.27   0.00   0.94  0.53  -0.02
       1.50   0.01   0.63  0.31   0.03   0.64  0.32   0.08   0.61  0.33   0.00   0.93  0.63  -0.03
       2.00   0.03   0.61  0.42   0.06   0.61  0.43   0.12   0.55  0.44   0.02   0.93  0.85  -0.02
       5.00   0.04   0.62  1.05   0.12   0.61  1.07   0.28   0.57  1.10   0.02   0.94  2.12  -0.02

Table 5: K = 26 studies. Simulated comparisons between T1, T3, the fixed effects model estimator based on the F-distribution (FE) and the random effects model estimator (RE) for bias, coverage probability (CP) and confidence interval width (W) for 1,000 simulated runs for various choices of ρ. The bias for the estimator of τ (Biasτ) is also included for the RE model. The number of studies is twice that used in Table 4, where the sample sizes have been repeated twice.
The forest plot is simple to create using the R package metafor [15], since it includes functionality for the creation of forest plots that requires only the point and interval estimates.
                   T1                  T3                  FE                  RE
 τ     ρ      Bias   CP    W      Bias   CP    W      Bias   CP    W      Bias   CP    W     Biasτ
 0.0   0.20  -0.00   0.96  0.02  -0.00   0.96  0.02  -0.00   0.97  0.02  -0.00   0.97  0.02   0.02
       0.50   0.00   0.94  0.05   0.00   0.94  0.05   0.00   0.94  0.05   0.00   0.96  0.05   0.02
       0.80   0.00   0.96  0.07   0.00   0.96  0.07   0.00   0.96  0.07   0.00   0.97  0.08   0.02
       1.00   0.00   0.95  0.09   0.00   0.95  0.09   0.00   0.95  0.09   0.00   0.97  0.10   0.02
       1.25  -0.00   0.95  0.11  -0.00   0.95  0.11  -0.00   0.95  0.11  -0.00   0.97  0.13   0.02
       1.50   0.00   0.95  0.14   0.00   0.95  0.14   0.00   0.95  0.14   0.00   0.96  0.15   0.02
       2.00  -0.00   0.95  0.18  -0.00   0.95  0.18  -0.00   0.95  0.18  -0.00   0.97  0.20   0.02
       5.00  -0.01   0.94  0.46  -0.01   0.94  0.46  -0.01   0.94  0.46  -0.01   0.96  0.50   0.02
 0.2   0.20   0.00   0.58  0.02   0.00   0.58  0.02   0.00   0.57  0.02   0.00   0.95  0.04  -0.01
       0.50   0.00   0.56  0.05   0.00   0.57  0.05   0.01   0.56  0.05   0.00   0.93  0.10  -0.01
       0.80  -0.00   0.57  0.07   0.00   0.56  0.07   0.01   0.56  0.07  -0.00   0.95  0.16  -0.01
       1.00   0.00   0.56  0.09   0.00   0.55  0.09   0.01   0.55  0.09   0.00   0.93  0.20  -0.01
       1.25  -0.00   0.55  0.11   0.00   0.55  0.12   0.01   0.55  0.12  -0.00   0.93  0.25  -0.01
       1.50  -0.00   0.53  0.14   0.00   0.54  0.14   0.01   0.54  0.14  -0.00   0.93  0.30  -0.01
       2.00   0.00   0.58  0.18   0.01   0.58  0.18   0.02   0.56  0.18   0.00   0.93  0.39  -0.01
       5.00   0.02   0.57  0.46   0.04   0.57  0.46   0.08   0.56  0.46   0.01   0.91  0.98  -0.02
 0.4   0.20   0.00   0.35  0.02   0.01   0.33  0.02   0.01   0.29  0.02   0.00   0.93  0.07  -0.01
       0.50   0.00   0.32  0.05   0.01   0.33  0.05   0.03   0.31  0.05   0.00   0.95  0.17  -0.02
       0.80   0.01   0.33  0.07   0.02   0.32  0.07   0.05   0.29  0.08   0.00   0.93  0.28  -0.01
       1.00   0.00   0.33  0.09   0.02   0.31  0.09   0.05   0.30  0.10   0.00   0.94  0.35  -0.01
       1.25   0.01   0.34  0.12   0.03   0.35  0.12   0.07   0.34  0.12   0.00   0.95  0.43  -0.01
       1.50   0.00   0.32  0.14   0.03   0.34  0.14   0.08   0.32  0.14  -0.00   0.96  0.52  -0.01
       2.00   0.01   0.33  0.18   0.04   0.32  0.19   0.10   0.31  0.19   0.01   0.93  0.69  -0.02
       5.00   0.03   0.33  0.46   0.11   0.34  0.47   0.21   0.34  0.48   0.01   0.96  1.74  -0.01

Table 6: Large sample, K = 26 studies. Simulated comparisons between T1, T3, the fixed effects model estimator based on the F-distribution (FE) and the random effects model estimator (RE) for bias, coverage probability (CP) and confidence interval width (W) for 1,000 simulated runs for various choices of ρ. The bias for the estimator of τ (Biasτ) is also included for the RE model. The number of studies is twice that used in Table 4, where the sample sizes have been repeated twice and have been doubled.
As seen in the forest plot, it is possible that this highly significant result is due to the very large estimated ratio of variances from Study 9. In Plot B we explore this by iteratively adding studies based on the magnitude of their transformed statistics.

Figure 4: Results of the meta-analysis for the BB vs Bb/bb data in Table 1. Plot A depicts the forest plot for the analysis where the measure of interest is the ratio of variances; the study-specific estimates and 95% intervals are: Study 1: 0.79 [0.30, 3.91]; Study 2: 0.01 [0.00, 7.90]; Study 3: 0.64 [0.31, 1.66]; Study 4: 1.46 [0.61, 4.50]; Study 5: 1.82 [1.16, 3.13]; Study 6: 1.33 [0.96, 1.92]; Study 7: 0.34 [0.14, 1.43]; Study 8: 1.33 [1.00, 1.81]; Study 9: 2.18 [1.54, 3.21]; Study 10: 1.81 [1.16, 3.00]; Study 11: 1.01 [0.59, 1.97]; Study 12: 1.17 [0.61, 2.45]; Study 13: 1.13 [0.59, 2.57]; Overall: 1.36 [1.12, 1.66]. The meta-analytic interval and estimate for ρ are given in the last row, where the MLE assuming the random effects model was used. Plot B displays meta-analysis p-values for the test for equal variances when studies are added incrementally according to the magnitude of the |Zk|'s.

The p-value in Plot B for k∗ = 1 is associated with the test based only on the value from Study 11 (the study with
the smallest Zk) and, not surprisingly, it is not significant. For k∗ = 2, the test is run again, but this time with both studies 11 and 1 (the two studies with the smallest |Zk|), and so on. Here the weights used are calculated based only on the included studies.
There is a local minimum at k ∗ = 5 before three negative Zk ’s are included which reduces
the evidence against H0. These studies are associated with small weights when considering all K = 13 studies. However, when considering only k∗ = 8 studies, their weights are comparatively larger, allowing these studies to strongly influence the test. From here the larger studies are
included and all with large positive Zk ’s. Because of the sample sizes for these studies, they
provide comparatively much more evidence for unequal variances. We can see that there is
very strong evidence for unequal variances even if Study 9 was ignored. This means it would
not simply suffice to remove Study 9 if SMDs were to be used since there is further evidence
of a violation of equal variances.
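The incremental procedure behind Plot B can be sketched as follows (our own code, reusing the Zk values and the ck1 vector from earlier sketches). Studies are ordered by |Zk|, the weights in (13) are renormalized over the included studies (one plausible choice; the paper does not spell out the exact weights used in Plot B), and the weighted statistic (9) with its Monte-Carlo p-value is recomputed as each study is added.

```r
ord <- order(abs(Zk))                       # studies ordered by |Z_k|
pvals <- sapply(seq_along(Zk), function(kstar) {
  idx <- ord[1:kstar]
  wk  <- 1 / sqrt(ck1[idx]); wk <- wk / sum(wk)   # weights based only on included studies
  Xw2 <- sum(wk * Zk[idx]^2)
  draws <- matrix(rchisq(1e4 * kstar, df = 1), ncol = kstar) %*% wk
  mean(draws > Xw2)                         # Monte-Carlo p-value for (9)
})
plot(seq_along(pvals), pvals, xlab = "k*", ylab = "P-value", type = "b")
```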
7 Discussion
For a meta-analysis comparing the means of two independent groups, a common approach is to consider SMDs, which requires that the population variances of the two groups within each study are equal. However, when the population variances are not equal, the validity of the meta-analysis should be queried and the interpretation of the estimated overall effect may be unreliable. In this paper we have shown that a meta-analysis of the ratio of variances can be used to assess the validity of the equal variances assumption and consequently provide researchers with a justification to move towards other, perhaps less common, effect measures. Simple Q-Q plots of transformed ratios of sample variances allow visual inspection for departures from this assumption. Alternatively, we also introduced meta-estimates of the ratio of variances in both the fixed effect and random effects settings. Simulations show that the estimators exhibit very good properties, thus supporting their use in practice.
A Technical details for the first two VSTs
The technical details for VSTs T1 and T2 are provided here. The mean and variance of S ∼ F_{ν1,ν2} are (see, e.g., p.326 of [5])
$$E[S] = \frac{\nu_2}{\nu_2-2} \quad \text{for } \nu_2 > 2,$$
$$\mathrm{Var}[S] = \frac{2\nu_2^2(\nu_1+\nu_2-2)}{\nu_1(\nu_2-2)^2(\nu_2-4)} \quad \text{for } \nu_2 > 4.$$
The variance of S can be written as a function of its mean, Var[S] = g(E[S]), in a number of ways, including:
$$g_1(t) = c_1 t^2 \quad \text{where } c_1 = \frac{2(\nu_1+\nu_2-2)}{\nu_1(\nu_2-4)},$$
$$g_2(t) = c_2\, t\left(t + \frac{\nu_2}{\nu_1}\right) \quad \text{where } c_2 = \frac{2}{\nu_2-4}.$$
This leads (see page 32 of [1]) to the respective VSTs h_1(x) = \ln(x)/\sqrt{c_1} and h_2(x) = 2\ln\{2(\sqrt{x} + \sqrt{x + \nu_2/\nu_1})\}/\sqrt{c_2}. Using the approximation E[h(S)] \approx h(E[S]) + h''(E[S])\,\mathrm{Var}[S]/2, we obtain
$$E[h_1(S)] = \frac{1}{\sqrt{c_1}}\left[\ln(E[S]) - \frac{\mathrm{Var}[S]}{2(E[S])^2}\right] = \frac{1}{\sqrt{c_1}}\ln(E[S]) - \frac{\sqrt{c_1}}{2},$$
$$E[h_2(S)] = \frac{2}{\sqrt{c_2}}\ln\!\left[2\left(\sqrt{E[S]} + \sqrt{E[S] + \frac{\nu_2}{\nu_1}}\right)\right] - \frac{\sqrt{c_2}\,(2\nu_1+\nu_2-2)}{4\sqrt{\nu_1^2+2\nu_1\nu_2-2\nu_1}},$$
which leads to (1) and (2) after re-centering to achieve approximate mean zero.
B Variances for the random effects model MLE
For simplicity, when needed let v = τ². Using the notation from Section 4.2.2,
$$\frac{\partial^2 l}{\partial\omega^2} = -\sum_{k=1}^K \frac{1}{c_{k1}+\tau^2},$$
$$\frac{\partial^2 l}{\partial v^2} = \sum_{k=1}^K\left[\frac{1}{2(c_{k1}+\tau^2)^2} - \frac{(y_k-\mu_k)^2}{(c_{k1}+\tau^2)^3}\right],$$
$$\frac{\partial^2 l}{\partial\omega\,\partial v} = -\sum_{k=1}^K \frac{y_k-\mu_k}{(c_{k1}+\tau^2)^2}.$$
Taking the expectation with respect to the random ln(S_1),...,ln(S_K) gives E[\partial^2 l/\partial v^2] = -(1/2)\sum_{k=1}^K (c_{k1}+\tau^2)^{-2} and E[\partial^2 l/(\partial\omega\,\partial v)] = 0. By inverting the negative of the information matrix, we then have approximate variances for the MLEs.
References
[1] Bickel, P J, & Doksum, K A. 1977. Mathematical statistics: Basic ideas and selected
topics. San Francisco: Holden–Day.
[2] Borenstein, M, Hedges, L V, Higgins, J P T, & Rothstein, H R. 2009. Introduction to Meta-Analysis. Chichester, West Sussex: Wiley.
[3] Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd edn. Routledge.
[4] Hedges, L V, Gurevitch, J A, & Curtis, P S. 1999. The meta-analysis of response
ratios in experimental ecology. Ecology, 80(4), 1150–1156. DOI: 10.2307/177062.
[5] Johnson, NL, Kotz, S, & Balakrishnan, N. 1995. Continuous univariate distributions. Vol. 2. New York: John Wiley & Sons.
[6] Kanis, J A. 2002. Diagnosis of osteoporosis and assessment of fracture risk. Lancet,
359, 1929–36. DOI: 10.1016/S0140-6736(02)08761-5.
[7] Kulinskaya, E, Morgenthaler, S, & Staudte, R G. 2008. Meta analysis: a
Guide to Calibrating and Combining Statistical Evidence. Wiley Series in Probability and
Statistics. Chichester: John Wiley & Sons, Ltd.
[8] Kulinskaya, E, Morgenthaler, S, & Staudte, R G. 2010. Variance stabilizing
the difference of two binomial proportions. American Statistician, 64(4), 350–356. DOI:
10.1198/tast.2010.09080.
[9] Malloy, M J, Prendergast, L A, & Staudte, R G. 2011. Comparison of methods
for fixed effect meta-regression of standardized differences of means. Electronic Journal of
Statistics, 5, 83–101. DOI: 10.1214/11-EJS598.
[10] Malloy, M J, Prendergast, L A, & Staudte, R G. 2013. Transforming the
model t: Random effects meta-analysis with stable weights. Statistics in Medicine, 32(11),
1842–1864. DOI: 10.1002/sim.5666.
[11] Morgenthaler, S, & Staudte, R G. 2012. Advantages of variance stabilization. Scandinavian Journal of Statistics, 39(4), 714–728. DOI: 10.1111/j.1467-9469.2011.00768.x.
[12] Paulson, F. 1942. An approximate normalization of the analysis of variance distribution. Annals of Mathematical Statistics, 13, 233–235.
[13] Prendergast, L A, & Staudte, R G. 2014. Better than you think: interval estimators of the difference of binomial proportions. Journal of Statistical Planning and
Inference, 148, 38–48. DOI: 10.1016/j.jspi.2013.11.012.
[14] Thakkinstian, A, D'Este, C, Eisman, J, Nguyen, T, & Attia, J. 2004. Meta-Analysis of Molecular Association Studies: Vitamin D Receptor Gene Polymorphisms and BMD as a Case Study. Journal of Bone and Mineral Research, 19, 419–428.
[15] Viechtbauer, W. 2010. Conducting meta-analyses in R with the metafor package.
Journal of Statistical Software, 36(3), 1–48. DOI: 10.18637/jss.v036.i03.
[16] Yuan, K H, & Bentler, P M. 2010. Two simple approximations to the distributions
of quadratic forms. British Journal of Mathematical and Statistical Psychology, 63(2),
273–291. DOI: 10.1348/000711009X449771.