Converting Among Effect Sizes
Introduction
Converting from the log odds ratio to d
Converting from d to the log odds ratio
Converting from r to d
Converting from d to r
INTRODUCTION
Earlier in this Part we discussed the case where different study designs were used
to compute the same effect size. For example, studies that used independent groups
and studies that used matched groups were both used to yield estimates of the
standardized mean difference, g. There is no problem in combining these estimates
in a meta-analysis since the effect size has the same meaning in all studies.
Consider, however, the case where some studies report a difference in means,
which is used to compute a standardized mean difference. Others report a difference
in proportions, which is used to compute an odds ratio. And others report a correlation.
All the studies address the same broad question, and we want to include them
in one meta-analysis. Unlike the earlier case, we are now dealing with different
indices, and we need to convert them to a common index before we can proceed.
The question of whether or not it is appropriate to combine effect sizes from
studies that used different metrics must be considered on a case-by-case basis. The
key issue is that it only makes sense to compute a summary effect from studies that
we judge to be comparable in relevant ways. If we would be comfortable combining
these studies if they had used the same metric, then the fact that they used different
metrics should not be an impediment.
For example, suppose that several randomized controlled trials start with the
same measure, on a continuous scale, but some report the outcome as a mean
and others dichotomize the outcome and report it as success or failure. In this
case, it may be highly appropriate to transform the standardized mean differences
and the odds ratios to a common metric and then combine them across studies.

[Figure 7.1 Converting among effect sizes: log odds ratio, standardized mean difference (Cohen's d), Fisher's z, and bias-corrected standardized mean difference (Hedges' g).]
By contrast, observational studies that report correlations may be substantially
different from observational studies that report odds ratios. In this case, even if
there is no technical barrier to converting the effects to a common metric, it may be
a bad idea from a substantive perspective.
In this chapter we present formulas for converting between an odds ratio and d, or
between d and r. By combining formulas it is also possible to convert from an odds
ratio, via d, to r (see Figure 7.1). In every case the formula for converting the effect
size is accompanied by a formula to convert the variance.
When we convert between different measures we make certain assumptions
about the nature of the underlying traits or effects. Even if these assumptions do
not hold exactly, the decision to use these conversions is often better than the
alternative, which is to simply omit the studies that happened to use an alternate
metric. This would involve loss of information, and possibly the systematic loss of
information, resulting in a biased sample of studies. A sensitivity analysis to
compare the meta-analysis results with and without the converted studies would
be important.
Figure 7.1 outlines the mechanism for incorporating multiple kinds of data in the
same meta-analysis. First, each study is used to compute an effect size and variance
in its native index: the log odds ratio for binary data, d for continuous data, and r for
correlational data. Then, we convert all of these indices to a common index, which
would be either the log odds ratio, d, or r. If the final index is d, we can move from
there to Hedges’ g. This common index and its variance are then used in the
analysis.
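Where the common index is d, the final step in Figure 7.1 (moving from d to Hedges' g) can be sketched in code. The short Python function below is a minimal illustration rather than part of this chapter's formulas; it assumes the small-sample correction factor J = 1 - 3/(4df - 1) with df = n1 + n2 - 2 introduced earlier in this Part, and the function name and example numbers are hypothetical.

```python
def d_to_hedges_g(d, v_d, n1, n2):
    """Apply the small-sample correction to Cohen's d to obtain Hedges' g.

    Assumes J = 1 - 3 / (4 * df - 1) with df = n1 + n2 - 2, so that
    g = J * d and V_g = J**2 * V_d (as presented earlier in this Part).
    """
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d, j ** 2 * v_d

# Illustrative numbers only: d = 0.50, V_d = 0.0205, n1 = n2 = 50.
g, v_g = d_to_hedges_g(0.50, 0.0205, 50, 50)  # g is roughly 0.496
```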
and

$$
V_{\mathrm{LogOddsRatio}} = \frac{3.1416^2}{3} \times 0.0205 = 0.0676.
$$
To employ this transformation we assume that the continuous data have the logistic
distribution.
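As a minimal sketch of this conversion in Python: the variance factor 3.1416²/3 matches the worked value above, while the companion point-estimate formula LogOddsRatio = d × π/√3 is assumed here (it is not shown in this excerpt), and the function name is hypothetical.

```python
import math

def d_to_log_odds_ratio(d, v_d):
    """Convert d (variance v_d) to a log odds ratio (and its variance),
    assuming the underlying continuous data follow a logistic distribution."""
    log_or = d * math.pi / math.sqrt(3)   # assumed point-estimate formula
    v_log_or = v_d * math.pi ** 2 / 3     # same factor as the worked variance above
    return log_or, v_log_or

# With V_d = 0.0205 the variance comes out near 0.067, consistent
# (up to rounding) with the value shown above.
```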
CONVERTING FROM r TO d
We convert from a correlation (r) to a standardized mean difference (d) using
$$
d = \frac{2r}{\sqrt{1 - r^2}}. \tag{7.5}
$$
The variance of d computed in this way (converted from r) is
$$
V_d = \frac{4V_r}{\left(1 - r^2\right)^3}. \tag{7.6}
$$
In applying this conversion we assume that the continuous data used to compute r
have a bivariate normal distribution and that the two groups are created by
dichotomizing one of the two variables.
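A minimal Python sketch of equations (7.5) and (7.6); the function name and the example numbers are purely illustrative.

```python
import math

def r_to_d(r, v_r):
    """Convert a correlation r (variance v_r) to a standardized mean
    difference d (variance v_d) using equations (7.5) and (7.6)."""
    d = 2 * r / math.sqrt(1 - r ** 2)
    v_d = 4 * v_r / (1 - r ** 2) ** 3
    return d, v_d

# Illustrative numbers: r = 0.50 with V_r = 0.0058 gives d of about 1.155.
d, v_d = r_to_d(0.50, 0.0058)
```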
CONVERTING FROM d TO r
We can convert from a standardized mean difference (d) to a correlation (r) using
$$
r = \frac{d}{\sqrt{d^2 + a}} \tag{7.7}
$$

where a is a correction factor for cases where n1 ≠ n2,

$$
a = \frac{(n_1 + n_2)^2}{n_1 n_2}. \tag{7.8}
$$
The correction factor (a) depends on the ratio of n1 to n2, rather than the
absolute values of these numbers. Therefore, if n1 and n2 are not known precisely,
use n1 = n2, which will yield a = 4. The variance of r computed in this way
(converted from d) is
$$
V_r = \frac{a^2 V_d}{\left(d^2 + a\right)^3}. \tag{7.9}
$$
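Following the same pattern, a minimal Python sketch of equations (7.7) to (7.9); the function name is hypothetical, and the fallback a = 4 for unknown group sizes follows the suggestion above.

```python
import math

def d_to_r(d, v_d, n1=None, n2=None):
    """Convert a standardized mean difference d (variance v_d) to a
    correlation r (variance v_r) using equations (7.7) to (7.9).
    If the group sizes are unknown, n1 = n2 is assumed, so a = 4."""
    if n1 is None or n2 is None:
        a = 4.0
    else:
        a = (n1 + n2) ** 2 / (n1 * n2)
    r = d / math.sqrt(d ** 2 + a)
    v_r = a ** 2 * v_d / (d ** 2 + a) ** 3
    return r, v_r

# Illustrative numbers with unknown group sizes (so a = 4).
r, v_r = d_to_r(1.1547, 0.0550)  # r is roughly 0.50
```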
SUMMARY POINTS
● If all studies in the analysis are based on the same kind of data (means, binary, or correlational), the researcher should select an effect size based on that kind of data.
● When some studies use means, others use binary data, and others use correlational data, we can apply formulas to convert among effect sizes.
● Studies that used different measures may differ from each other in substantive ways, and we need to consider this possibility when deciding if it makes sense to include the various studies in the same analysis.