2010 Evidence-Based Radiology: Why and How
DOI 10.1007/s00330-009-1574-4
HEALTH ECONOMY
Francesco Sardanelli
Myriam G. Hunink
Fiona J. Gilbert
Giovanni Di Leo
Gabriel P. Krestin
M. G. Hunink
Program for Health Decision Science,
Harvard School of Public Health,
Boston, MA, USA
F. J. Gilbert
Aberdeen Biomedical Imaging Centre,
University of Aberdeen,
Aberdeen, AB25 2ZD, UK
Introduction
Over the past three decades, the medical community has
increasingly supported the principle that clinical practice
should be based on the critical evaluation of the results
through quality- and relevance-filtered secondary publications (meta-analyses, systematic reviews and guidelines).
This principle, a clinical practice based on the results (the
evidence) given by research, has engendered a
discipline, evidence-based medicine (EBM), which is
increasingly expanding into healthcare and bringing a
striking change in teaching, learning, clinical practice and
decision making by physicians, administrators and policy
makers. EBM has entered radiology with a relative delay,
but a substantial impact of this approach is expected in the
near future.
The aim of this article is to provide an overview of EBM
in relation to radiology and to define a policy for this
principle in the European radiological community.
What is EBM?
Evidence-based medicine, also referred to as evidence-based healthcare or evidence-based practice [1], has been
defined as the systematic application of the best evidence
to evaluate the available options and decision making in
clinical management and policy settings, i.e. integrating
clinical expertise with the best available external clinical
evidence from research [2].
This concept is not new. The basis for this way of
thinking was developed in the nineteenth century (Pierre C.
A. Louis) and during the twentieth century (Ronald A.
Fisher, Austin Bradford Hill, Richard Doll and Archie
Cochrane). However, it was not until the second half of the
last century that the Canadian School led by Gordon Guyatt
and Dave L. Sackett at McMaster University (Hamilton,
Ontario, Canada) promoted the tendency to guide clinical
practice using the best results (the evidence) produced by
scientific research [2-4]. This approach was subsequently
also refined by the Centre for Evidence-Based Medicine
(CEBM) at the University of Oxford, England [1, 5].
Dave L. Sackett said that evidence-based medicine is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients".
For any diagnostic technology we should ask: Does it work? For whom? At what cost? How does it compare with alternatives? These questions concern the highest levels of the efficacy hierarchy, level 5 (patient outcomes) and level 6 (societal impact), and are typically addressed by decision analysis,
combining intermediate outcome measures such as sensitivity and specificity obtained from published studies and
meta-analyses with the long-term consequences of true-positive, false-positive, true-negative and false-negative outcomes. Different diagnostic
or therapeutic alternatives are visually represented by
means of a decision tree and dedicated statistical methods
are used (e.g. Markov model, Monte Carlo simulation) [7,
65]. This method is typically used for cost-effectiveness
analysis.
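As an illustration of how a decision tree is folded back into expected costs and utilities, the following Python sketch compares two hypothetical test-and-treat strategies. All probabilities, costs and utilities below are invented placeholders, not values from the studies cited in this article.

```python
# Minimal decision-analytic sketch of folding back a decision tree.
# All probabilities, costs and utilities are hypothetical placeholders.

def fold_back(p_disease, sens, spec, cost_test, outcomes):
    """Expected cost and expected utility of a test-and-treat strategy.

    outcomes maps (disease_present, test_positive) to (cost, utility)
    for the long-term consequences of each branch of the tree.
    """
    branches = [
        (p_disease * sens,             (True,  True)),   # true positive
        (p_disease * (1 - sens),       (True,  False)),  # false negative
        ((1 - p_disease) * (1 - spec), (False, True)),   # false positive
        ((1 - p_disease) * spec,       (False, False)),  # true negative
    ]
    exp_cost = cost_test + sum(p * outcomes[b][0] for p, b in branches)
    exp_utility = sum(p * outcomes[b][1] for p, b in branches)
    return exp_cost, exp_utility

# Hypothetical long-term consequences per branch: (cost, utility).
outcomes = {
    (True,  True):  (5000, 0.90),  # disease treated after a correct diagnosis
    (True,  False): (9000, 0.55),  # disease missed, treated late
    (False, True):  (2000, 0.95),  # unnecessary work-up, no lasting harm
    (False, False): (0,    0.95),  # healthy and correctly reassured
}

# Two hypothetical imaging strategies differing in accuracy and test cost.
cost_a, util_a = fold_back(0.10, 0.95, 0.85, 400, outcomes)
cost_b, util_b = fold_back(0.10, 0.85, 0.90, 100, outcomes)

# Incremental cost-effectiveness ratio of strategy A over strategy B.
icer = (cost_a - cost_b) / (util_a - util_b)
print(f"A: cost {cost_a:.0f}, utility {util_a:.4f}")
print(f"B: cost {cost_b:.0f}, utility {util_b:.4f}")
print(f"ICER of A vs B: {icer:.0f} per utility unit gained")
```

A Markov model extends this one-shot tree by letting patients cycle through health states over time, and Monte Carlo simulation propagates uncertainty in the input estimates.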
This approach was evaluated over the 20-year period starting in 1985, when the first article on cost-effectiveness analysis in medical imaging was published; the review included
111 radiology-related articles [66]. The average number of
studies increased from 1.6 per year (1985-1995) to 9.4 per
year (1996-2005). Eighty-six studies were performed to
evaluate diagnostic imaging technologies and 25 were
performed to evaluate interventional imaging technologies.
Ultrasonography (35%), angiography (32%), MR imaging
(23%) and CT (20%) were evaluated most frequently. Using
a seven-point scale, from 1=low to 7=high, the mean quality
score was 4.2±1.1 (mean ± standard deviation), without
significant improvement over time. Note that quality was
measured according to US recommendations for cost-effectiveness analyses, which are not identical to European
standards, and the power to demonstrate an improvement
was limited [67]. The authors concluded that improvement
in the quality of analyses is needed [66].
A simple way to appraise the intrinsic difficulty in HTA
of radiological procedures is to compare radiological with
pharmacological research. After the chemical discovery of an
active molecule, development, cell and animal testing, and
phase I and phase II studies are carried out by industry
with the cooperation of only a few clinicians. In this long
phase (commonly about 10 years), the
majority of academic institutions and large hospitals are not
involved. When clinicians are involved in phase III studies,
i.e. large randomized trials for registration, the aims are
already at level 5 (outcome impact). Radiologists have to
climb four levels of impact before reaching the outcome level.
We can imagine a world in which new radiologic
procedures are also tested for cost-effectiveness or patient
outcome endpoints before entering routine clinical practice, but the real world is different and we have much more
technology-driven research from radiologists than radiologist-driven research on technology.
Several countries have well-developed strategies for HTA.
In the UK the government funds an HTA programme where
topics are prioritised and work is commissioned in relevant
areas. In Italy, the Section of Economics in Radiology of the
Italian Society of Medical Radiology has connections with the
Italian Society of HTA for dedicated research projects.
Research groups competitively bid to undertake this work
and close monitoring is undertaken to ensure value for money.
Radiologists in the USA have formed the American College
of Radiology Imaging Network (ACRIN) (www.ACRIN.
org) to perform such studies. In Europe, EIBIR (http://www.
Table 2 Levels of evidence for studies of diagnostic performance, ranked by study type from 1a (highest) through 1b, 1c, 2a, 2b, 3a and 3b to 4 and 5 (lowest)
Source: Centre for Evidence-Based Medicine, Oxford, UK (http://www.cebm.net/index.aspx?o=1025; accessed 24 Feb 2008); Dodd et al.
[33]; with modifications
*An examination is a "SnNout" when its negative result excludes the possibility of the presence of the disease (when a test has a very high
sensitivity, a negative result rules out the diagnosis); it is instead a "SpPin" when its positive result definitely confirms the presence of the disease
(when a test has a very high specificity, a positive result rules in the diagnosis) [33]
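The SnNout/SpPin logic of the footnote can be illustrated numerically with Bayes' rule; the pre-test probability, sensitivity and specificity below are hypothetical values chosen for the example.

```python
# Illustration of the SnNout/SpPin rules with hypothetical numbers:
# post-test probability of disease after a positive or negative result.

def post_test_prob(prevalence, sens, spec, positive):
    """Bayes' rule for a binary test result."""
    if positive:
        tp = prevalence * sens              # true positives
        fp = (1 - prevalence) * (1 - spec)  # false positives
        return tp / (tp + fp)
    fn = prevalence * (1 - sens)            # false negatives
    tn = (1 - prevalence) * spec            # true negatives
    return fn / (fn + tn)

prev = 0.20  # hypothetical pre-test probability

# SnNout: very high sensitivity -> a negative result nearly rules out disease.
p_neg = post_test_prob(prev, sens=0.99, spec=0.70, positive=False)

# SpPin: very high specificity -> a positive result nearly rules in disease.
p_pos = post_test_prob(prev, sens=0.70, spec=0.99, positive=True)

print(f"P(disease | negative, sens=0.99) = {p_neg:.3f}")  # about 0.004
print(f"P(disease | positive, spec=0.99) = {p_pos:.3f}")  # about 0.946
```

Even with a 20% pre-test probability, the highly sensitive test leaves well under 1% probability of disease after a negative result, while the highly specific test raises the probability above 90% after a positive one.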
Smidt et al. [73] evaluated diagnostic accuracy articles published in 2000 in 12 journals with an impact factor of 4 or higher using the 25-item STARD checklist. Only 41% of articles reported more
than 50% of STARD items, while no article reported more
than 80%. A flow chart of the study was presented in only two
articles. The mean number of reported STARD items was
11.9. The authors concluded: "Quality of reporting in
diagnostic accuracy articles published in 2000 is less than
optimal, even in journals with high impact factor" [73].
The relatively low quality of studies on diagnostic
performance is a relevant threat to the successful
implementation of EBR. Hopefully, the adoption of the
STARD requisites will improve the quality of radiological
studies but the process seems to be very slow [11], as
demonstrated also by the recent study by Wilczynski [74].
Other shared rules are available for articles reporting the
results of randomized controlled trials (the CONSORT
statement [75], recently extended to trials assessing non-pharmacological treatments [76]) and of meta-analyses (the
QUOROM statement [77]).
In particular, systematic reviews and meta-analyses in
radiology should evaluate the study validity for specific
issues, as pointed out by Dodd et al. [33]: detailed imaging
methods; level of excellence of both imaging and reference
standard; adequacy of technology generation; level of ionizing
radiation; viewing conditions (hard versus soft copy).
Levels of evidence
The need to evaluate the relevance of the various studies in
relation to the reported level of evidence generated a hierarchy
of the levels of evidence based on study type and design.
According to the Centre for Evidence-Based Medicine
(Oxford, UK), studies on diagnostic performance can be
ranked on a five-level scale, from 1 to 5 (Table 2). Based
on similar scales, four grades of recommendation, from
A to D, can be distinguished (Table 3).
However, we should consider that multiple different
classifications of the levels of evidence and of the grades of
recommendation are in use today. The same grade of recommendation can be represented in different systems using
capital letters, Roman or Arabic numerals, etc., generating
confusion and possible errors in clinical practice.
A new approach to evidence classification has been
recently proposed by the GRADE working group [78] with
special attention paid to the definition of standardized
criteria for releasing and applying clinical guidelines. The
GRADE system states the need for an explicit declaration
of the methodological core of a guideline, with particular
regard to: quality of evidence, relative importance, risk-benefit balance and value of the incremental benefit for
each outcome. This method, apparently complex, finally
provides four simple levels of evidence: high, when further
research is thought unlikely to modify the level of
confidence of the estimated effect; moderate, when further
research is thought likely to modify the level of confidence of the estimated effect; low, when further research is very likely to do so; and very low, when any estimate of effect is very uncertain.
Table 3 Grades of recommendation
A: Level 1 studies
B: Consistent level 2 or 3 studies or extrapolations* from level 1 studies
C: Consistent level 4 studies or extrapolations* from level 2 or 3 studies
D: Level 5 studies or low-quality or inconclusive studies of any level
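As a minimal sketch, the level-to-grade mapping described for Table 3 can be written as a simple lookup. This is only an illustration: the real grading also weighs the consistency of the studies and extrapolation from higher levels, which this function ignores.

```python
# Sketch of a level-to-grade lookup (grades A to D), driven only by the
# weakest supporting study; consistency and extrapolation are not modelled.

def grade_of_recommendation(levels):
    """Map the levels of the supporting studies (1 = best, 5 = weakest)
    to a grade of recommendation from A to D."""
    worst = max(levels)
    if worst == 1:
        return "A"   # level 1 studies
    if worst in (2, 3):
        return "B"   # consistent level 2 or 3 studies
    if worst == 4:
        return "C"   # consistent level 4 studies
    return "D"       # level 5 or inconclusive studies

print(grade_of_recommendation([1, 1]))     # A
print(grade_of_recommendation([1, 2, 3]))  # B
print(grade_of_recommendation([2, 4]))     # C
print(grade_of_recommendation([5, 1]))     # D
```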
Subspecialty groups within EuroAIM and ESR subspecialty societies should collaborate on writing evidence-based guidelines for the appropriate use of imaging
technology.
Redirection of European radiological research
We propose elaborating a strategy to redirect the European
radiological research of primary studies (original research)
towards purposes defined on the basis of EBR methodology.
In particular:
Conclusions
European radiologists need to embrace EBM. Our specialty
will benefit greatly from the improvement in practice that
will result from this more rigorous approach to all aspects
of our work. Wherever radiologists are involved in
producing guidelines, refereeing manuscripts, publishing
work or undertaking research, cognizance of EBR
principles should be maintained. If we can make this
step-by-step change in our approach, we will improve
radiology for future generations and our patients. EBR
should be promoted by ESR and all the European
subspecialty societies.
Acknowledgement We sincerely thank Professor Yves Menu
(Department of Radiology, Saint Antoine Hospital, Paris) for his
suggestions regarding the subsection "EBR at the ECR".
References
1. Malone DE (2007) Evidence-based practice in radiology: an introduction to the series. Radiology 242:12-14
2. Evidence-Based Radiology Working Group (2001) Evidence-based radiology: a new approach to the practice of radiology. Radiology 220:566-575