Health Policy Series

Quality improvement initiatives take many forms, from the creation of standards for health professionals, health technologies and health facilities, to audit and feedback, and from fostering a patient safety culture to public reporting and paying for quality. For policy-makers who struggle to decide which initiatives to prioritise for investment, understanding the potential of different quality strategies in their unique settings is key.

This volume, developed by the Observatory together with OECD, provides an overall conceptual framework for understanding and applying strategies aimed at improving quality of care. Crucially, it summarizes available evidence on different quality strategies and provides recommendations for their implementation. This book is intended to help policy-makers to understand concepts of quality and to support them to evaluate single strategies and combinations of strategies.

Quality of care is a political priority and an important contributor to population health. This book acknowledges that "quality of care" is a broadly defined concept, and that it is often unclear how quality improvement strategies fit within a health system, and what their particular contribution can be. This volume elucidates the concepts behind multiple elements of quality in healthcare policy (including definitions of quality, its dimensions, related activities, and targets), quality measurement and governance and situates it all in the wider context of health systems research. By so doing, this book is designed to help policy-makers prioritize and align different quality initiatives and to achieve a comprehensive approach to quality improvement.
The editors
Reinhard Busse, Professor, Head of Department, Department of Health Care Management, Berlin University of Technology, and European Observatory on Health Systems and Policies
Niek Klazinga, Head of the OECD Health Care Quality Indicator Programme, Organisation for
Economic Co-operation and Development, and Professor of Social Medicine, Academic Medical
Centre, University of Amsterdam
The Observatory is a partnership hosted by the WHO Regional Office for Europe, which includes
other international organizations (the European Commission, the World Bank); national and regional
governments (Austria, Belgium, Finland, Ireland, Norway, Slovenia, Spain, Sweden, Switzerland,
the United Kingdom and the Veneto Region of Italy); other health system organizations (the French
National Union of Health Insurance Funds (UNCAM), the Health Foundation); and academia (the
London School of Economics and Political Science (LSE) and the London School of Hygiene & Tropical
Medicine (LSHTM)). The Observatory has a secretariat in Brussels and it has hubs in London (at LSE
and LSHTM) and at the Technical University of Berlin.
Improving healthcare quality
in Europe
Characteristics, effectiveness and
implementation of different strategies
Edited by:
Reinhard Busse
Niek Klazinga
Dimitra Panteli
Wilm Quentin
Keywords:
QUALITY ASSURANCE, HEALTH CARE - methods
DELIVERY OF HEALTH CARE - standards
OUTCOME AND PROCESS ASSESSMENT (HEALTH CARE)
COST-BENEFIT ANALYSIS
HEALTH POLICY
© World Health Organization (acting as the host organization for, and secretariat of, the European
Observatory on Health Systems and Policies) and OECD (2019)
All rights reserved. The European Observatory on Health Systems and Policies welcomes requests for
permission to reproduce or translate its publications, in part or in full.
Address requests about publications to: Publications, WHO Regional Office for Europe, UN City,
Marmorvej 51, DK-2100 Copenhagen Ø, Denmark
Alternatively, complete an online request form for documentation, health information, or for
permission to quote or translate, on the Regional Office web site (http://www.euro.who.int/
pubrequest).
The designations employed and the presentation of the material in this publication do not imply the
expression of any opinion whatsoever on the part of the European Observatory on Health Systems
and Policies concerning the legal status of any country, territory, city or area or of its authorities, or
concerning the delimitation of its frontiers or boundaries. Indeed, this document, as well as any data
and map included herein, is without prejudice to the status of or sovereignty over any territory, to the
delimitation of international frontiers and boundaries and to the name of any territory, city or area.
Dotted lines on maps represent approximate border lines for which there may not yet be full agreement.
The mention of specific companies or of certain manufacturers’ products does not imply that they are
endorsed or recommended by the European Observatory on Health Systems and Policies or OECD
in preference to others of a similar nature that are not mentioned. Errors and omissions excepted, the
names of proprietary products are distinguished by initial capital letters.
All reasonable precautions have been taken by the European Observatory on Health Systems and
Policies to verify the information contained in this publication. However, the published material is
being distributed without warranty of any kind, either express or implied. The responsibility for the
interpretation and use of the material lies with the reader. In no event shall the European Observatory
on Health Systems and Policies be liable for damages arising from its use. The opinions expressed and
arguments employed herein are solely those of the authors and do not necessarily reflect the official
views of the OECD or of its member countries or the decisions or the stated policies of the European
Observatory on Health Systems and Policies or any of its partners.
Foreword
from the OECD
Policy-makers and care providers share with patients a key concern: ensuring that
people using health services receive the best possible care, which is care that is
safe, effective and responsive to their needs. Yet large variation in care outcomes
persists both within and across countries. For example, avoidable hospital admis-
sions for chronic conditions such as asthma and chronic obstructive pulmonary
disease, indicators of quality of primary healthcare, vary by a factor of nearly
10 between the best and worst performing OECD countries. To take another
example, thirty-day mortality after admission to hospital for acute myocardial
infarction, an indicator of quality of acute care, varies by a factor of nearly three
between Norway and Hungary.
These data signal that more should be done to improve quality, and that strate-
gies to assure and improve quality care must remain at the core of healthcare
policy in all OECD and EU countries. Luckily, policy-makers have an arsenal of
strategies at their disposal. Many such policies are simple and cheap: think, for
example, of basic hygiene policies, which are key to cutting the risk of resistant
bacteria spreading in care settings. But policy-makers also must pay close atten-
tion to selecting the mix of strategies best fitting their unique conditions and
goals. This can be tricky. Evidence about the effectiveness of specific strategies
in specific settings is available, but making an informed choice across strategies
that address the quality both of a specific healthcare service and of the system
as a whole requires more careful consideration. Likewise, policy-makers need to
carefully balance providers' intrinsic motivations for improving healthcare delivery
with external accountability and transparency of performance, and encourage
innovation without creating unnecessary administrative burdens.
Since 2003 the Organisation for Economic Co-operation and Development
(OECD) has put quality of care on centre stage, helping countries to better
benchmark Health Care Quality and Outcomes and improve quality and safety
policies. This book supports this body of knowledge and adds to the fruitful col-
laboration between OECD and the European Observatory on Health Systems
and Policies. It addresses the overall conceptual and measurement challenges and
Francesca Colombo
Head Health Division
Organisation for Economic Co-operation and Development
Paris, June 2019
Foreword
from the European Observatory
on Health Systems and Policies
The year 2018 also marked the 10th anniversary of the Observatory’s first com-
prehensive study on quality of care (Assuring the quality of health care in the
European Union: a case for action, by Helena Legido-Quigley, Martin McKee,
Ellen Nolte and Irene Glinos). The 2008 study is a well-cited resource, which
provided important conceptual foundations and a mapping of quality-related
initiatives in European countries. It highlighted the variability of practices among
countries and the vast potential for improvement. It also helped the Observatory
identify a significant unmet need for policy-makers: the availability of concen-
trated, comparable evidence that would help with prioritizing and/or aligning
different quality initiatives to achieve separate but complementary goals within
a comprehensive approach to quality improvement.
Over the years, and in line with health policy priorities, the Observatory has
carried out work on individual strategies that contribute to quality of healthcare
(for example on pharmaceutical regulation in 2004, 2016 and 2018; on human
resources for health in 2006, 2011 and 2014; on health technology assessment
in 2008; on audit and feedback in 2010; on clinical guidelines in 2013; and on
public reporting in 2014). However, because “quality of care” is usually defined
quite broadly, it is often unclear how the many organizations and movements
aiming to improve it fit within a health system and how effective (and cost-
effective) they can be. In a general effort to improve quality of care, should the
focus be on more stringent regulations for health professionals, on a mandatory,
rigorous accreditation of health provider organizations, or on financial incen-
tives in the shape of pay-for-quality payment models? While the recent work
on healthcare quality mentioned above provides vital resources to address such
challenges, it does not answer these questions directly.
To bridge this gap, the Observatory worked together with the OECD to develop
a conceptual framework for this study and to apply it for the collection, syn-
thesis and presentation of evidence. This was motivated both by the experience
of previous fruitful and successful collaboration between the two institutions
(such as in the volume Paying for Performance in Health Care: Implications for
health system performance and accountability, published in 2014) and by the
OECD’s vast expertise in developing healthcare quality indicators and compar-
ing results across countries. The latter is reflected in the Health Care Quality
Indicators (HCQI) project and the OECD’s work on international health system
performance comparisons.
Fuelled by the complementarity in roles and expertise of the Observatory and
the OECD, this study breaks new ground in seven different ways:
vii) it clarifies the links between different strategies, paving the way for a
coherent overall approach to improving healthcare quality.
The described approach fully embodies the principles underpinning the
Observatory’s work as a knowledge-broker. The Observatory was conceived
at the first European Ministerial Conference on health systems in Ljubljana in
1996, as a response to the expressed need of Member States to systematically
assess, compare and learn from health system developments and best practices
across the European region. While this study focuses primarily on the European
context, as a toolkit it can also be used by policy-makers outside Europe, reflect-
ing the OECD’s mission of promoting policies that will improve the economic
and social well-being of people around the world.
Ensuring universal access to healthcare services of high quality is a global aspi-
ration. This study joins its recent counterparts in arguing that the battle for
healthcare quality is far from won, at any level. The European Observatory on
Health Systems and Policies intends to continue its engagement in the field
for years to come to aid policy-makers in understanding this dynamic field of
knowledge and maintaining the necessary overview to navigate it.
Liisa-Maria Voipio-Pulkki
Chair, Steering Committee
European Observatory on Health Systems and Policies
List of tables,
figures and boxes
Tables
Table 1.1 Selected definitions of quality, 1980–2018 5
Table 1.2 Quality dimensions in ten selected definitions of quality, 1980–2018 10
Table 1.3 A selection of prominent quality strategies (marked in grey are the
strategies discussed in Chapters 5 to 14 of this book) 15
Table 2.1 Targets of various quality strategies 25
Table 2.2 Overview of chapter structure and topics addressed in Part II of
the book 29
Table 3.1 The purpose of quality measurement: quality assurance versus quality
improvement 36
Table 3.2 Examples of structure, process and outcome quality indicators for
different dimensions of quality 38
Table 3.3 Strengths and weaknesses of different types of indicator 41
Table 3.4 Advantages and disadvantages of composite indicators 43
Table 3.5 Information needs of health system stakeholders with regard to quality of
care 45
Table 3.6 Strengths and weaknesses of different data sources 49
Table 3.7 Potential patient risk-factors 56
Table 4.1 WHO targets for ensuring quality in healthcare 67
Table 4.2 Some examples of Council of Europe recommendations with regards to
quality in healthcare 69
Table 4.3 CEN Technical Committees on healthcare 71
Table 4.4 EU legal sources of quality and safety requirements in healthcare 75
Table 4.5 A selection of EU-funded projects on quality and/or safety 93
Table 5.1 Nurse categories and key elements of basic nursing education for
selected European countries 116
Table 5.2 Comparison of structure-based versus outcome-based educational
programmes 118
Table 5.3 Overview of national licensing exams for medical graduates in selected
European countries 122
Table 5.4 Overview of licensing and registration procedures for nurses in selected
European countries 124
Table 5.5 Key considerations and components of relicensing strategies 126
Table 5.6 Overview of methods for Continuing Medical Education 127
Table 5.7 Relicensing strategies of physicians in selected European countries 129
Table 5.8 Responsible institutions for the sanctioning of medical professionals in
selected European countries 131
Figures
Fig. 1.1 Quality is an intermediate goal of health systems 11
Fig. 1.2 Two levels of healthcare quality 12
Fig. 1.3 The link between health system performance and quality of healthcare
services 13
Fig. 2.1 Framework of the OECD Health Care Quality Indicators project 20
Fig. 2.2 The Plan-Do-Check-Act (PDCA) cycle 22
Fig. 2.3 Three major activities of different quality strategies (with examples
covered in this book) 23
Fig. 2.4 Donabedian’s Structure-Process-Outcome (SPO) framework for Quality
Assessment 24
Fig. 2.5 Comprehensive framework for describing and classifying quality
strategies 27
Fig. 4.1 An integrated international governance framework for quality in
healthcare 64
Fig. 5.1 Relationship between human resources actions and health outcomes
and focus of this chapter (highlighted in blue) 107
Fig. 5.2 Strategies for regulating health professionals (in this chapter) 110
Fig. 5.3 Various domains of skills 111
Fig. 5.4 Visualization of the medical education systems in selected European
countries and the USA/Canada 114
Fig. 6.1 Regulating pharmaceuticals along the product life-cycle 155
Fig. 6.2 Health technology regulation, assessment and management 156
Fig. 6.3 Typology of HTA processes in European countries 162
Fig. 6.4 Key principles for the improved conduct of HTA 168
Fig. 7.1 Overview and link between Eurocodes 183
Fig. 7.2 Overview of regulatory systems for healthcare buildings in European
countries 186
Fig. 7.3 Design process model by Dickerman & Barach (2008) 193
Fig. 7.4 Three-step framework for medical device – associated patient safety 195
Fig. 7.5 Future national healthcare building design quality improvement scenarios
in the UK explored by Mills et al. 196
Fig. 8.1 Key differences between external assessment strategies 206
Fig. 8.2 Generic framework for external assessment 211
Fig. 8.3 Number of ISO-certificates in health and social care, 1998–2017 215
Fig. 9.1 Influence of clinical guidelines on process and outcomes of care 237
Fig. 9.2 AWMF criteria for guideline categorization 245
Fig. 10.1 The audit and feedback cycle 269
Fig. 11.1 Reason’s accident causation model 292
Fig. 11.2 Three levels of patient safety initiatives 293
Fig. 11.3 Patient safety and Donabedian’s structure-process-outcome
framework 294
Fig. 11.4 WHO Safety improvement cycle 295
Fig. 11.5 The Safety Culture Pyramid 303
Fig. 12.1 A clinical pathway for the management of elderly inpatients with
malnutrition 314
Fig. 12.2 Clinical pathway vs. usual care, outcome: in-hospital complications 321
Boxes
Box 1.1 Reasons for (re)focusing on quality of care 3
Box 3.1 Criteria for indicators 47
Box 3.2 Seven principles to take into account when using quality indicators 58
Box 4.1 Excerpt from the Council Conclusions on Common values and
principles in European Union Health Systems (2006) 65
Box 4.2 Soft law instruments to improve quality of cancer control policies in
the EU 89
Box 5.1 Developments at EU level to ensure quality of care given the mobility of
health professionals 112
Box 5.2 Challenges in the established continuing education paradigm 141
Box 6.1 The HTA Core Model® 158
Box 6.2 European developments in HTA 160
Box 6.3 EUnetHTA recommendations for the implementation of HTA at national
level (barriers and actions to address them) 169
Box 7.1 Aspects of quality and performance and potential influences from the
built environment 180
Box 7.2 Examples of different types of specifications for building a bridge 182
Box 7.3 Quality of medical devices as part of healthcare infrastructure 195
Box 8.1 EU Regulations on certification of medical products 215
Box 8.2 Rapid review of the scientific literature 218
Box 9.1 Terminology around clinical guidelines 238
Box 9.2 Desirable attributes of clinical guidelines 239
Box 9.3 Dimensions of guideline implementability 251
Box 9.4 G-I-N principles for dealing with conflicts of interests in guideline
development 256
Box 11.1 Definitions of patient safety, adverse events and errors 290
Box 11.2 Incident reporting systems and analysis 298
Box 12.1 EPA definition of a clinical pathway 311
Box 12.2 The European Pathways Association (EPA) 315
Box 12.3 Methodology of systematic review 319
Box 13.1 Policy implications for successful public reporting 351
Box 14.1 Review methods used to inform the content of this chapter 360
Box 14.2 Structures of financial incentives within P4Q programmes 363
Box 14.3 Aspects of financial incentives that must be considered when planning
a P4Q programme 389
Box 14.4 Conclusions with respect to P4Q programme design 392
List of abbreviations
IQTIG German Institute for Quality Assurance and Transparency in Health Care
QM quality management
Max Geraedts, Professor, Institute for Health Services Research and Clinical
Epidemiology, Department of Medicine, Philipps-Universität Marburg
Oliver Groene, Vice Chairman of the Board, OptiMedis AG, and Honorary
Senior Lecturer, London School of Hygiene and Tropical Medicine, UK.
Noah Ivers, Family Physician, Chair in Implementation Science, Women’s
College Hospital, University of Toronto
Gro Jamtvedt, Dean and Professor, OsloMet – Oslo Metropolitan University,
Faculty of Health Sciences
Leigh Kinsman, Joint Chair, Professor of Evidence Based Nursing, School of
Nursing and Midwifery, University of Newcastle and Mid North Coast Local
Health District
Niek Klazinga, Head of the OECD Health Care Quality Indicator Programme,
Organisation for Economic Co-operation and Development, and Professor of
Social Medicine, Academic Medical Centre, University of Amsterdam
Oliver Komma, Outpatient Clinic Manager, MonorMED Outpatient Clinic
Anika Kreutzberg, Research Fellow, Department of Health Care Management,
Berlin University of Technology
Finn Borlum Kristensen, Professor, Department of Public Health, University
of Southern Denmark
Solvejg Kristensen, Programme Leader PRO-Psychiatry, Aalborg University
Hospital – Psychiatry, Denmark
Helena Legido-Quigley, Associate Professor, Saw Swee Hock School of Public
Health, National University of Singapore and London School of Hygiene and
Tropical Medicine
Claudia Bettina Maier, Senior Research Fellow, Department of Health Care
Management, Berlin University of Technology
Grant Mills, Senior Lecturer, The Bartlett School of Construction and Project
Management, University College London
Camilla Palmhøj Nielsen, Research Director, DEFACTUM, Central Denmark
Region
Günter Ollenschläger, Professor, Institute for Health Economics and Clinical
Epidemiology (IGKE), University Hospital Cologne
Willy Palm, Senior Adviser, European Observatory on Health Systems and
Policies
• Growing recognition of the need to align the performance of public and private healthcare
delivery in fragmented and mixed health markets
• Increasing understanding of the critical importance of trusted services for effective
preparedness for outbreaks or other complex emergencies
quality is dynamic and continuously evolving. In that sense, providers can only
be assessed against the current state of knowledge as a service that is considered
“good quality” at any given time may be regarded as “poor quality” twenty years
later in light of newer insights and alternatives.
The definition of quality by the Council of Europe included in Table 1.1, pub-
lished seven years after the IOM’s definition as part of the Council’s recommen-
dations on quality improvement systems for EU Member States, is the first to
explicitly include considerations about the aspect of patient safety. It argues that
quality of care is not only “the degree to which the treatment dispensed increases
the patient’s chances of achieving the desired results”, which basically repeats the
IOM definition, but it goes on to specify that high-quality care also “diminishes
the chances of undesirable results” (The Council of Europe, 1997). In the same
document the Council of Europe also explicitly defines a range of dimensions
of quality of care – but, surprisingly, does not include safety among them.
The final two definitions included in Table 1.1 are from the European Commission
(2010) and from WHO (2018). In contrast to those discussed so far, both of
these definitions describe quality by specifying three main dimensions or attrib-
utes: effectiveness, safety and responsiveness or patient-centredness. It is not by
chance that both definitions are similar as they were both strongly influenced by
the work of the OECD’s Health Care Quality Indicators (HCQI) project (Arah
et al., 2006; see below). These final two definitions are interesting also because
they list a number of further attributes of healthcare and healthcare systems that
are related to quality of care, including access, timeliness, equity and efficiency.
However, they note that these other elements are either “part of a wider debate”
(EC, 2010) or “necessary to realize the benefits of quality health care” (WHO,
2018), explicitly distinguishing core dimensions of quality from other attributes
of good healthcare.
In fact, the dimensions of quality of care have been the focus of considerable
debate over the past forty years. The next section focuses on this international
discussion around the dimensions of quality of care.
quality of care. However, many definitions – also beyond those shown in Table
1.2 – include attributes such as appropriateness, timeliness, efficiency, access
and equity. This is confusing and often blurs the line between quality of care
and overall health system performance. In an attempt to order these concepts,
the table classifies its entries into core dimensions of quality, subdimensions
that contribute to core dimensions of quality, and other dimensions of health
system performance.
This distinction is based on the framework of the OECD HCQI project, which
was first published in 2006 (Arah et al., 2006). The purpose of the framework
was to guide the development of indicators for international comparisons of
healthcare quality. The HCQI project selected the three dimensions of effective-
ness, safety and patient-centredness as the core dimensions of healthcare quality,
arguing that other attributes, such as appropriateness, continuity, timeliness and
acceptability, could easily be accommodated within these three dimensions. For
example, appropriateness could be mapped into effectiveness, whereas continuity
and acceptability could be absorbed into patient-centredness. Accessibility, effi-
ciency and equity were also considered to be important goals of health systems.
However, the HCQI team argued – referring to the IOM (1990) definition –
that only effectiveness, safety and responsiveness are attributes of healthcare that
directly contribute to “increasing the likelihood of desired outcomes”.
Some definitions included in Table 1.2 were developed for specific purposes and
this is reflected in their content. As mentioned above, the Council of Europe
(1997) definition was developed to guide the development of quality improvement
systems. Therefore, it is not surprising that it includes the assessment of the
process of care as an element of quality, in addition to accessibility, efficacy,
effectiveness, efficiency and patient satisfaction.
In 2001 the IOM published “Crossing the Quality Chasm”, an influential report
which specified that healthcare should pursue six major aims: it should be safe,
effective, patient-centred, timely, efficient and equitable. These six principles have
been adopted by many organizations inside and outside the United States as the
six dimensions of quality, despite the fact that the IOM itself clearly set them
out as “performance expectations” (“a list of performance characteristics that, if
addressed and improved, would lead to better achievement of that overarching
purpose. To this end, the committee proposes six specific aims for improvement.
Health care should be …”; IOM, 2001). For example, WHO (2006b) adapted
these principles as quality dimensions in its guidance for making strategic choices
in health systems, transforming the concept of timeliness into “accessibility”
to include geographic availability and progressivity of health service provision.
However, this contributed to the confusion and debate about quality versus
other dimensions of performance.
An introduction to healthcare quality: defining and explaining its role in health systems 9
Table 1.2 Quality dimensions in ten selected definitions of quality, 1980–2018

Definitions compared: Donabedian (1980); IOM (1990); Council of Europe (1997); IOM (2001); OECD (2006); WHO (2006b); EC (2010); EC (2014); WHO (2016); WHO (2018).

[The original matrix marks which definitions include which dimensions; only the row structure and per-dimension counts are recoverable here.]

Core dimensions of healthcare quality:
- Effectiveness (nine of the ten definitions)
- Safety (eight of the ten definitions)
- Responsiveness (three definitions; five others use the label "patient-centredness")

Subdimensions (related to core dimensions):
- Acceptability (one definition)
- Appropriateness (two definitions)
- Continuity
- Timeliness (three definitions)
- Satisfaction (two definitions)
- Health improvement (two definitions)

Other: assessment of care process; patient welfare; patient's preferences; integration (two definitions)

Other dimensions of health systems performance:
- Efficiency (seven definitions)
- Access (two definitions)
- Equity (six definitions)
[Fig. 1.1 Quality is an intermediate goal of health systems. Recoverable figure labels: service delivery; leadership/governance; intermediate goals/outcomes.]
It is worth noting that quality and safety are mentioned separately in the frame-
work, while most of the definitions of quality discussed above include safety as a
core dimension of quality. For more information about the relationship between
quality and safety, see also Chapter 11.
As mentioned above, Donabedian defined quality in general terms as “the abil-
ity to achieve desirable objectives using legitimate means” (Donabedian, 1980).
Combining Donabedian’s general definition of quality with the WHO building
blocks framework (Fig. 1.1), one could argue that a health system is “of high
quality” when it achieves these (overall and intermediate) goals using legitimate
means. In addition, Donabedian highlighted that it is important to distinguish
between different levels when assessing healthcare quality (Donabedian, 1988).
He distinguished between four levels at which quality can be assessed – indi-
vidual practitioners, the care setting, the care received (and implemented) by the
patient, and the care received by the community. Others have conceptualized
different levels at which policy developments with regard to quality may take
place: the health system (or “macro”) level, the organizational (“meso”) level and
the clinical (“micro”) level (Øvretveit, 2001).
While the exact definition of levels is not important, it is essential to recognize
that the definition of quality changes depending on the level at which it is
assessed. For simplicity, we condense Donabedian’s four tiers into two
conceptually distinct levels (see Fig. 1.2). The first, narrower level is the level
of health services, which may include preventive, acute, chronic and palliative
care (Arah et al., 2006). At this level, there seems to be an emerging consensus
that “quality of care is the degree to which health services for individuals and
populations are effective, safe and people-centred” (WHO, 2018).
The second level is the level of the healthcare system as a whole. Healthcare sys-
tems are “of high quality” when they achieve the overall goals of improved health,
responsiveness, financial protection and efficiency. Many of the definitions of
healthcare quality included in Table 1.2 seem to be concerned with healthcare
system quality as they include these attributes among stated quality dimensions.
However, such a broad definition of healthcare quality can be problematic in the
context of quality improvement: while it is undoubtedly important to address
access and efficiency in health systems, confusion about the focus of quality
improvement initiatives may distract attention away from those strategies that
truly contribute to increasing effectiveness, safety and patient-centredness of care.
[Fig. 1.2 Two levels of healthcare quality: healthcare system quality (performance) and healthcare service quality.]
Fig. 1.3 The link between health system performance and quality of
healthcare services

[Figure: access(ibility), incl. financial protection*, multiplied by quality (for those who receive services) yields system-wide effectiveness, i.e. population health outcomes (level and distribution) and responsiveness (level and distribution); relating these outcomes to inputs (money and/or resources) gives (allocative) efficiency, i.e. value for money: population health and/or responsiveness per input unit. Together these elements constitute health system performance.]
The framework highlights that health systems have to ensure both access to
care and quality in order to achieve the final health system goals. However, it is
important to distinguish conceptually between access and quality because very
different strategies are needed to improve access (for example, improving finan-
cial protection, ensuring geographic availability of providers) than are needed to
improve quality of care. This book focuses on quality and explores the potential
of different strategies to improve it.
Source: authors’ own compilation based on Slawomirski, Auraaen & Klazinga, 2017, and WHO, 2018.
The aims of this book are (1) to describe the use of selected strategies to date in Europe and beyond, (2) to summarize the available
evidence on their effectiveness and – where available – cost-effectiveness and the
prerequisites for their implementation, and (3) to provide recommendations to
policy-makers about how to select and actually implement different strategies.
The book is structured in three parts. Part I includes four chapters and deals with
cross-cutting issues that are relevant for all quality strategies. Part II includes ten
chapters each dealing with specific strategies. Part III focuses on overall conclu-
sions for policy-makers.
The aim of Part I is to clarify concepts and frameworks that can help policy-makers
to make sense of the different quality strategies explored in Part II. Chapter 2
introduces a comprehensive framework that enables a systematic analysis of the
key characteristics of different quality strategies. Chapter 3 summarizes different
approaches and data sources for measuring quality. Chapter 4 explores the role
of international governance and guidance, in particular at EU level, to foster
and support quality in European countries.
Part II, comprising Chapters 5 to 14, provides clearly structured and detailed
information about ten of the quality strategies presented in Table 1.3 (those
marked in grey). Each chapter in Part II follows roughly the same structure,
explaining the rationale of the strategy, exploring its use in Europe and summa-
rizing the available evidence about its effectiveness and cost-effectiveness. This
is followed by a discussion of practical aspects related to the implementation
of the strategy and conclusions for policy-makers. In addition, each chapter is
accompanied by an abstract that follows the same structure as the chapter and
summarizes the main points on one or two pages.
Finally, Part III concludes with the main findings from the previous parts of the
book, summarizing the available evidence about quality strategies in Europe and
providing recommendations for policy-makers.
References
Arah OA et al. (2006). A conceptual framework for the OECD Health Care Quality Indicators
Project. International Journal for Quality in Health Care, 18(S1):5–13.
Busse R (2017). High Performing Health Systems: Conceptualizing, Defining, Measuring and
Managing. Presentation at the “Value in Health Forum: Standards, Quality and Economics”.
Edmonton, 19 January 2017.
Carinci F et al. (2015). Towards actionable international comparisons of health system performance:
expert revision of the OECD framework and quality indicators. International Journal for
Quality in Health Care, 27(2):137–46.
Donabedian A (1980). The Definition of Quality and Approaches to Its Assessment. Vol 1.
Explorations in Quality Assessment and Monitoring. Ann Arbor, Michigan, USA: Health
Administration Press.
Donabedian A (1988). The quality of care. How can it be assessed? Journal of the American Medical
Association, 260(12):1743–8.
Donabedian A, Wheeler JR, Wyszewianski L (1982). Quality, cost, and health: an integrative
model. Medical Care, 20(10):975–92.
EC (2010). EU Actions on Patient Safety and Quality of Healthcare. European Commission,
Healthcare Systems Unit. Madrid: European Commission.
EC (2014). Communication from the Commission – On effective, accessible and resilient health
systems. European Commission. Brussels: European Commission.
EC (2016). So What? Strategies across Europe to assess quality of care. Report by the Expert
Group on Health Systems Performance Assessment. European Commission (EC). Brussels:
European Commission.
European Council (2006). Council Conclusions on Common values and principles in European
Union Health Systems. Official Journal of the European Union, C146:1–2.
Fekri O, Macarayan ER, Klazinga N (2018). Health system performance assessment in the WHO
European Region: which domains and indicators have been used by Member States for its
measurement? Copenhagen: WHO Regional Office for Europe (Health Evidence Network
(HEN) synthesis report 55).
Flodgren G, Gonçalves-Bradley DC, Pomey MP (2016). External inspection of compliance with standards for improved healthcare outcomes. Cochrane Database of Systematic Reviews, 12:CD008992. doi: 10.1002/14651858.CD008992.pub3.
Gharaveis A et al. (2018). The Impact of Visibility on Teamwork, Collaborative Communication,
and Security in Emergency Departments: An Exploratory Study. HERD: Health Environments
Research & Design Journal, 11(4):37–49.
Health Council of Canada (2013). Better health, better care, better value for all: refocussing health care reform in Canada. Toronto: Health Council of Canada.
Houle SK et al. (2012). Does performance-based remuneration for individual health care practitioners
affect patient care? A systematic review. Annals of Internal Medicine, 157(12):889–99.
IOM (1990). Medicare: A Strategy for Quality Assurance: Volume 1. Washington (DC), US:
National Academies Press.
IOM (2001). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington
(DC), US: National Academies Press.
Ivers N et al. (2014). Growing literature, stagnant science? Systematic review, meta-regression
and cumulative analysis of audit and feedback interventions in health care. Journal of General
Internal Medicine, 29(11):1534–41.
Legido-Quigley H et al. (2008). Assuring the Quality of Health Care in the European Union: A
case for action. Observatory Studies Series, 12. Copenhagen: WHO on behalf of the European
Observatory on Health Systems and Policies.
OECD (2017). Caring for Quality in Health: Lessons learnt from 15 reviews of health care quality.
OECD Reviews of Health Care Quality. Paris: OECD Publishing. Available at: http://dx.doi.
org/10.1787/9789264267787-en, accessed 9 April 2019.
Øvretveit J (2001). Quality evaluation and indicator comparison in health care. International Journal of Health Planning and Management, 16:229–41.
Papanicolas I (2013). International frameworks for health system comparison. In: Papanicolas I
& Smith P (eds.): Health system performance comparison: An agenda for policy, information
and research. European Observatory on Health Systems and Policies, Open University Press.
New York.
Slawomirski L, Auraaen A, Klazinga N (2017). The economics of patient safety. Paris: Organisation
for Economic Co-operation and Development.
The Council of Europe (1997). The development and implementation of quality improvement
systems (QIS) in health care. Recommendation No. R (97) 17 and explanatory memorandum.
Strasbourg: The Council of Europe.
WHO (2006a). Everybody’s business: Strengthening health systems to improve health outcomes:
WHO’s framework for action. Geneva: World Health Organization.
WHO (2006b). Quality of care: a process for making strategic choices in health systems. Geneva:
World Health Organization.
WHO (2016). WHO global strategy on people-centred and integrated health services. Interim
Report. Geneva: World Health Organization.
WHO (2018). Handbook for national quality policy and strategy – A practical approach for
developing policy and strategy to improve quality of care. Geneva: World Health Organization.
WHO/OECD/World Bank (2018). Delivering quality health services: a global imperative for
universal health coverage. Geneva: World Health Organization, Organisation for Economic
Co-operation and Development, and The World Bank. Licence: CC BY-NC-SA 3.0 IGO.
Chapter 2
Understanding healthcare quality
strategies: a five-lens framework
2.1 Introduction
The previous chapter defined healthcare quality as the degree to which health
services for individuals and populations are effective, safe and people-centred. In
doing so, it clarified the concept of healthcare quality and distinguished it from
health system performance. It also explained how the term “quality strategy” is
used in this book; however, it did not link the theoretical work behind under-
standing, measuring and improving healthcare quality to the characteristics of
specific quality strategies (or “initiatives”, or “activities” or “interventions”, as
they are called elsewhere; see Chapter 1).
Several conceptual frameworks exist that aim at characterizing different aspects
of quality or explaining pathways for effecting change in healthcare. However,
existing frameworks have traditionally focused on specific aspects of healthcare
quality or on particular quality improvement strategies. For example, some
frameworks have attempted to classify different types of indicator for measuring
healthcare quality (for example, Donabedian, 1966), while other frameworks have
contributed to a better understanding of the different steps needed to achieve
quality improvements (for example, Juran & Godfrey, 1999). However, no single
framework is available that enables a systematic comparison of the characteristics
of the various (and varied) quality strategies mentioned in Chapter 1 and further
discussed in Part II of this book.
To bridge this gap and facilitate a better understanding of the characteristics of
these strategies, and of how they can contribute to assessing, assuring or improv-
ing quality of care, a comprehensive framework was developed for this book
and is presented here. The framework draws on several existing concepts and
Fig. 2.1 Framework of the OECD Health Care Quality Indicators project

[Figure: a matrix crossing categories of healthcare needs (1. Primary prevention; 2. Getting better; 3. Living with illness or disability/chronic care; 4. Coping with end of life) with the quality dimensions Effectiveness, Safety and Responsiveness/patient-centredness, the latter split into Individual patient experiences and Integrated care.]
• Coping with the end of life: getting help to deal with a terminal illness
The logic behind the inclusion of these needs categories into the quality frame-
work is that patients seek different types of care depending on their needs. For
example, in order to stay healthy, patients seek preventive care, and in order to
get better, they seek acute care. Similarly, chronic care corresponds to patients’
needs of living with illness or disability, and palliative care corresponds to the
need for coping with end of life. Indicators and quality strategies have to be
planned differently for different types of services, depending on patients’ needs
and the corresponding necessary healthcare. For example, inpatient mortality is
frequently used as an indicator of quality for acute care (for example, mortality
of patients admitted because of acute myocardial infarction), but it cannot serve
as a quality indicator for palliative care, for obvious reasons.
As mentioned above, the OECD HCQI project has used this framework to define
its scope and develop indicators for the different fields in the matrix. One of the
updates included in the 2015 version of the framework (shown in Fig. 2.1) was
that the dimension of patient-centredness was split into the two areas of “indi-
vidual patient experiences” and “integrated care”. This was meant to facilitate the
creation of related indicators and reflects the international acknowledgement of
the importance of integrated care (see also Chapter 1 for a reflection on how the
proposed dimensions of healthcare quality have evolved over time). Also, in the
2015 version, the initial wording of “staying healthy” was changed to “primary
prevention” to provide a clearer distinction from “living with illness and disabil-
ity – chronic care”, as many patients living with a managed chronic condition
may consider themselves as seeking care to stay healthy (Carinci et al., 2015).
Drawing from the conceptualization behind the OECD HCQI project, the first
lens of the framework developed for this book consists of the three dimensions
of quality, i.e. effectiveness, safety and responsiveness. The second lens encom-
passes the four functions of care that correspond to the categories of patients’
healthcare needs described above, i.e. primary prevention, acute care, chronic
care and palliative care.
The Plan-Do-Check-Act (PDCA) cycle (Reed & Card, 2016) is a four-step model for implementing change that has been applied by many healthcare institutions and public
health programmes. It also provides the theoretical underpinning for several of
the quality strategies presented in Part II of the book, for example, audit and
feedback, and external assessment strategies (see Chapters 10 and 8).
The method of quality management behind the PDCA cycle originated in indus-
trial design, specifically Walter Shewhart’s and W. Edwards Deming’s description of
iterative processes for catalysing change. The PDCA cycle guides users through a
prescribed four-stage learning approach to introduce, evaluate and progressively
adapt changes aimed at improvement (Taylor et al., 2014). Fig. 2.2 presents the
four stages of the PDCA cycle as originally described by Deming.
Other quality improvement scholars have developed similar and somewhat related
concepts. For example, the Juran trilogy defines three cyclical stages of manage-
rial processes that are often used in discussions around healthcare improvement
(Juran & Godfrey, 1999), including (1) quality planning, (2) quality control, and
(3) quality improvement. On the one hand, the trilogy draws attention to the
fact that these are three separable domains or activities that can be addressed by
particular quality interventions (WHO, 2018a). On the other hand, the cyclical
conceptualization of the trilogy highlights that all three elements are necessary
and complementary if improvements are to be assured.
Similar to the Juran trilogy, WHO defined three generic domains – or areas
of focus – of quality strategies that are useful when thinking about approaches
addressing different target groups, such as professionals or providers (WHO,
2008): (1) legislation and regulation, (2) monitoring and measurement, (3)
assuring and improving the quality of healthcare services (as 3a) and healthcare
systems (as 3b). The idea behind specifying these domains was to guide national
governments in their assessment of existing approaches and identification of
necessary interventions to improve national quality strategies. A focus on these
three cornerstones of quality improvement has proven useful for the analysis of
national quality strategies (see, for instance, WHO, 2018b).
Based on these considerations, the third lens of the framework developed for
this book builds on these concepts and defines three major activities (or areas
of focus) of different quality strategies: (1) setting standards, (2) monitoring,
and (3) assuring improvements (see Fig. 2.3). Some of the strategies presented in
Part II of the book provide the basis for defining standards (for example, clini-
cal guidelines, see Chapter 9), while others focus on monitoring (for example,
accreditation and certification, see Chapter 8) and/or on assuring improvements
(for example, public reporting, see Chapter 13), while yet others address more
than one element. Focusing on the characteristic feature of each strategy in
this respect is useful as it can help clarify why it should contribute to improved
quality of care.
Fig. 2.3 [Figure: the cycle of setting standards, monitoring and assuring improvement, with example strategies mapped onto it: audit and feedback, pay for quality, public reporting, accreditation and certification.]
However, following the idea of the PDCA cycle, these three activities are concep-
tualized in the five-lens framework as a cyclical process (see Fig. 2.3). This means
that all three activities are necessary in order to achieve change. For example,
setting standards does not lead to change by itself if these standards are not also monitored and if improvements are not actively assured.
As Donabedian put it, “good structure increases the likelihood of good process, and good process increases the likelihood of a good outcome” (Donabedian, 1988). For example, the availability of the right mix of
qualified professionals at a hospital increases the likelihood that a heart surgery
will be performed following current professional standards, and this in turn
increases the likelihood of patient survival.
Accordingly, the fourth lens of the framework adopts Donabedian’s distinction
between structures, processes and outcomes. Again, this distinction is useful
because several strategies presented in Part II of this book focus more on one
of these elements than on the others. For example, regulation of professionals
focuses on the quality of inputs, while clinical guidelines focus on the quality
of care processes. Ultimately, the goal of all improvement strategies is better
outcomes; the primary mechanism for achieving this goal, however, will vary.
[Figure: the five-lens framework shown as concentric rings — quality dimensions (effectiveness, safety, responsiveness), functions of care (preventive, acute, chronic, palliative), activities (setting standards, monitoring, assuring improvements), Donabedian’s triad (structures, processes, outcomes) and targets (professionals, technologies, provider organizations, patients, payers).]
A long list of quality and safety interventions has been compiled by the OECD (Slawomirski, Auraaen & Klazinga, 2017), and Table 1.3 at the end of Chapter 1 includes 28 quality strategies; neither of those lists is exhaustive. Given the multiplicity of different
quality strategies and the various levels on which they can be implemented,
policy-makers often struggle to make sense of them and to judge their relative
effectiveness and cost-effectiveness for the purposes of prioritization.
Any book on quality strategies is inevitably selective, as it is impossible to provide
an exhaustive overview and discussion. The strategies discussed in detail in the
second part of this book were selected based on the experience of the European
Observatory on Health Systems and Policies and comprise those most frequently
discussed by policy-makers in Europe. However, this does not mean that other strategies are less important or should not be considered for implementation. In particular, the book includes only one strategy explicitly targeting patients, i.e. public
reporting (see Chapter 13). Other strategies, such as systematic measurement
of patient experience or strategies to support patient participation could poten-
tially have an important impact on increasing patient-centredness of healthcare
service provision. Similarly, the book does not place much emphasis on digital
innovations, such as electronic health records or clinical decision support systems
to improve effectiveness and safety of care, despite their potential impact on
changing service provision. Nevertheless, among the included strategies there is
at least one corresponding to each element of the five-lens framework, i.e. there
is at least one strategy concerned with payers (or providers or professionals, etc.),
one strategy concerned with structures (or processes or outcomes), and so on.
Many different categorizations of quality strategies are possible along the five
lenses of the framework described above. For the sake of simplicity, Table 2.2
categorizes the strategies discussed in the second part of the book into three
groups using lenses three and four of the five-lens framework: (1) strategies that
set standards for health system structures and inputs, (2) strategies that focus on
steering and monitoring health system processes, and (3) strategies that leverage
processes and outcomes with the aim of assuring improvements.
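Purely as an illustration of how the five lenses can serve as a categorization scheme (the lens values and strategy assignments below are simplified examples invented for this sketch, not a reproduction of Table 2.2), the idea can be expressed as a small tagging structure:

```python
# Illustrative sketch: tagging quality strategies along the five lenses.
# All assignments are simplified examples, not authoritative classifications.

strategies = {
    "clinical guidelines": {
        "dimension": ["effectiveness", "safety"],    # lens 1: quality dimensions
        "function": ["acute", "chronic"],            # lens 2: functions of care
        "activity": "setting standards",             # lens 3: activity
        "donabedian": "processes",                   # lens 4: structure/process/outcome
        "target": "professionals",                   # lens 5: target group
    },
    "accreditation": {
        "dimension": ["safety", "effectiveness"],
        "function": ["acute"],
        "activity": "monitoring",
        "donabedian": "structures",
        "target": "provider organizations",
    },
    "public reporting": {
        "dimension": ["patient-centredness"],
        "function": ["acute", "chronic"],
        "activity": "assuring improvements",
        "donabedian": "outcomes",
        "target": "patients",
    },
}

def by_activity(activity):
    """Return all strategies whose main activity (lens 3) matches."""
    return sorted(name for name, lenses in strategies.items()
                  if lenses["activity"] == activity)

print(by_activity("monitoring"))  # ['accreditation']
```

A grouping like `by_activity` corresponds to the kind of cut made in Table 2.2, where strategies are bundled by their characteristic activity rather than listed lens by lens.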
Table 2.2 also shows the common structure largely followed by all chapters in Part
II of the book. First, chapters describe the characteristic features of the quality
strategy at hand, i.e. what are its target(s) (professionals, technologies, provider
organizations, patients or payers; lens five of the framework described above)
and main activity (setting standards, monitoring or assuring improvements; lens
three). In addition, each chapter describes the underlying rationale of why the
strategy should contribute to healthcare quality by explaining how it may affect
safety, effectiveness and/or patient-centredness (lens 1) of care through changes
of structures, processes and/or outcomes (lens 4). Secondly, the chapters provide
an overview of what is being done in European countries in respect to the spe-
cific quality strategy, considering – among other things – whether the strategy
is mostly applied in preventive care, acute care, chronic care or palliative care
(lens 2). They then summarize the available evidence with regard to the strategy’s
effectiveness and cost-effectiveness, often building on existing systematic reviews
or reviews of reviews. They follow up by addressing questions of implementation,
for example, what institutional and organizational requirements are necessary
to implement the strategy. Finally, each chapter provides conclusions for policy-
makers bringing together the available evidence and highlighting the relationship
of the strategy to other strategies.
References
Arah OA et al. (2006). A conceptual framework for the OECD Health Care Quality Indicators
Project. International Journal for Quality in Health Care, 18(S1):5–13.
Ayanian JZ, Markel H (2016). Donabedian’s Lasting Framework for Health Care Quality. The New England Journal of Medicine, 375:205–7.
Carinci F et al. (2015). Towards actionable international comparisons of health system performance:
expert revision of the OECD framework and quality indicators. International Journal for
Quality in Health Care, 27(2):137–46.
Donabedian A (1966). Evaluating the quality of medical care. Milbank Quarterly, 691–729.
Donabedian A (1988). The quality of care. How can it be assessed? Journal of the American Medical
Association, 260(12):1743–8.
IOM (2001). Envisioning the National Health Care Quality Report. Washington (DC), US:
National Academies Press.
Juran JM, Godfrey A (1999). Juran’s Quality Handbook. New York: McGraw-Hill.
Reed JE, Card AJ (2016). The problem with Plan-Do-Study-Act cycles. BMJ Quality & Safety,
25:147–52.
Slawomirski L, Auraaen A, Klazinga N (2017). The economics of patient safety. Paris: Organisation
for Economic Co-operation and Development.
Taylor MJ et al. (2014). Systematic review of the application of the plan-do-study-act method to
improve quality in healthcare. BMJ Quality & Safety, 23(4):290–8.
WHO (2008). Guidance on developing quality and safety strategies with a health system approach.
Copenhagen: World Health Organization (Regional Office for Europe).
WHO (2018a). Handbook for national quality policy and strategy – A practical approach for
developing policy and strategy to improve quality of care. Geneva: World Health Organization.
WHO (2018b). Quality of care review in Kyrgyzstan. Copenhagen: World Health Organization
(Regional Office for Europe).
Chapter 3
Measuring healthcare quality
3.1 Introduction
The field of quality measurement in healthcare has developed considerably in the
past few decades and has attracted growing interest among researchers, policy-
makers and the general public (Papanicolas & Smith, 2013; EC, 2016; OECD,
2019). Researchers and policy-makers are increasingly seeking to develop more
systematic ways of measuring and benchmarking quality of care of different
providers. Quality of care is now systematically reported as part of overall health
system performance reports in many countries, including Australia, Belgium,
Canada, Italy, Mexico, Spain, the Netherlands, and most Nordic countries. At
the same time, international efforts in comparing and benchmarking quality of
care across countries are mounting. The Organisation for Economic Co-operation
and Development (OECD) and the EU Commission have both expanded their
efforts at assessing and comparing healthcare quality internationally (Carinci et
al., 2015; EC, 2016). Furthermore, a growing focus on value-based healthcare
(Porter, 2010) has sparked renewed interest in the standardization of measure-
ment of outcomes (ICHOM, 2019), and notably the measurement of patient-
reported outcomes has gained momentum (OECD, 2019).
The increasing interest in quality measurement has been accompanied and sup-
ported by the growing ability to measure and analyse quality of care, driven,
amongst others, by significant changes in information technology and associated
advances in measurement methodology. National policy-makers recognize that
without measurement it is difficult to assure high quality of service provision
in a country, as it is impossible to identify good and bad providers or good and
bad practitioners without reliable information about quality of care. Measuring
quality of care is important for a range of different stakeholders within healthcare
systems, and it builds the basis for numerous quality assurance and improve-
ment strategies discussed in Part II of this book. In particular, accreditation
and certification (see Chapter 8), audit and feedback (see Chapter 10), public
reporting (see Chapter 13) and pay for quality (see Chapter 14) rely heavily on
the availability of reliable information about the quality of care provided by
different providers and/or professionals. Common to all strategies in Part II is
that without robust measurement of quality, it is impossible to determine the
extent to which new regulations or quality improvement interventions actually
work and improve quality as expected, or if there are also adverse effects related
to these changes.
This chapter presents different approaches, frameworks and data sources used
in quality measurement as well as methodological challenges, such as risk-
adjustment, that need to be considered when making inferences about quality
measures. In line with the focus of this book (see Chapter 1), the chapter focuses
on measuring quality of healthcare services, i.e. on the quality dimensions of
effectiveness, patient safety and patient-centredness. Other dimensions of health
system performance, such as accessibility and efficiency, are not covered in this
chapter as they are the focus of other volumes about health system performance
assessment (see, for example, Smith et al., 2009; Papanicolas & Smith, 2013;
Cylus, Papanicolas & Smith, 2016). The chapter also provides examples of
quality measurement systems in place in different countries. An overview of
the history of quality measurement (with a focus on the United States) is given
in Marjoua & Bozic (2012). Overviews of measurement challenges related to
international comparisons are provided by Forde, Morgan & Klazinga (2013)
and Papanicolas & Smith (2013).
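The methodological challenge of risk-adjustment mentioned above can be illustrated with a minimal sketch of indirect standardization, in which a provider's observed mortality is compared with the mortality expected given its patients' risk profile. The risk groups, reference rates and caseload below are invented for illustration only:

```python
# Minimal sketch of indirect standardization (illustrative numbers only).
# Expected deaths = sum over risk groups of (reference mortality rate x patients).

reference_rates = {"low risk": 0.01, "medium risk": 0.05, "high risk": 0.20}

def standardized_mortality_ratio(caseload, observed_deaths):
    """Observed/expected (O/E) ratio: >1 worse than expected, <1 better."""
    expected = sum(reference_rates[group] * n for group, n in caseload.items())
    return observed_deaths / expected

# A hospital treating many high-risk patients:
caseload = {"low risk": 100, "medium risk": 100, "high risk": 100}
# expected = 100*0.01 + 100*0.05 + 100*0.20 = 26 deaths
smr = standardized_mortality_ratio(caseload, observed_deaths=20)
print(round(smr, 2))  # 0.77
```

Without this adjustment, the hospital's crude mortality of 20/300 would look worse than a hospital treating only low-risk patients, even though it performs better than expected for its case mix.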
Chapter 2 introduced a framework of five lenses for describing and classifying quality strategies. Several of these lenses are also useful for better
understanding the different aspects and contexts that need to be taken into
account when measuring healthcare quality. First, it is clear that different indica-
tors are needed to assess the three dimensions of quality, i.e. effectiveness, safety
and/or patient-centredness, because they relate to very different concepts, such
as patient health, medical errors and patient satisfaction.
Secondly, quality measurement has to differ depending on the concerned function
of the healthcare system, i.e. depending on whether one is aiming to measure
quality in preventive, acute, chronic or palliative care. For example, changes
in health outcomes due to preventive care will often be measurable only after
a long time has elapsed, while they will be visible more quickly in the area of
acute care. Thirdly, quality measurement will vary depending on the target of
the quality measurement initiative, i.e. payers, provider organizations, profes-
sionals, technologies and/or patients. For example, in some contexts it might be
useful to assess the quality of care received by all patients covered by different
payer organizations (for example, different health insurers or regions) but more
frequently quality measurement will focus on care provided by different provider
organizations. In international comparisons, entire countries will constitute
another level or target of measurement.
In addition, operationalizing quality for measurement will always require a focus
on a limited set of quality aspects for a particular group of patients. For example,
quality measurement may focus on patients with hip fracture treated in hospitals
and define aspects of care that are related to effectiveness (for example, surgery
performed within 24 hours of admission), safety (for example, anticoagulation
to prevent thromboembolism), and/or patient-centredness of care (for example,
patient was offered choice of spinal or general anaesthesia) (Voeten et al., 2018).
However, again, the choice of indicators – and also potentially of different
appraisal concepts for indicators used for the same quality aspects – will depend
on the exact purpose of measurement.
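The hip-fracture example can be made concrete with a small sketch that computes compliance rates for the three indicator types from patient-level records. The field names and records below are hypothetical, not drawn from a real dataset or from Voeten et al.:

```python
# Illustrative computation of indicator compliance rates for hip-fracture care.
# Records and field names are hypothetical.

records = [
    {"hours_to_surgery": 18, "anticoagulation": True,  "anaesthesia_choice_offered": True},
    {"hours_to_surgery": 30, "anticoagulation": True,  "anaesthesia_choice_offered": False},
    {"hours_to_surgery": 12, "anticoagulation": False, "anaesthesia_choice_offered": True},
    {"hours_to_surgery": 20, "anticoagulation": True,  "anaesthesia_choice_offered": True},
]

def compliance(records, predicate):
    """Share of patients for whom the indicator criterion was met."""
    return sum(predicate(r) for r in records) / len(records)

# Effectiveness: surgery performed within 24 hours of admission
effectiveness = compliance(records, lambda r: r["hours_to_surgery"] <= 24)
# Safety: anticoagulation given to prevent thromboembolism
safety = compliance(records, lambda r: r["anticoagulation"])
# Patient-centredness: choice of spinal or general anaesthesia offered
patient_centredness = compliance(records, lambda r: r["anaesthesia_choice_offered"])

print(effectiveness, safety, patient_centredness)  # 0.75 0.75 0.75
```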
Table 3.1 highlights the differences between quality assurance and quality improve-
ment (Freeman, 2002; Gardner, Olney & Dickinson, 2018). Measurement for
quality assurance and accountability is focused on identifying and overcoming
problems with quality of care and assuring a sufficient level of quality across
providers. Quality assurance is the focus of many external assessment strategies
(see also Chapter 8), and providers of insufficient quality may ultimately lose
their licence and be prohibited from providing care. Assuring accountability is
one of the main purposes of public reporting initiatives (see Chapter 13), and
measured quality of care may contribute to trust in healthcare services and allow
patients to choose higher-quality providers.
Quality measurement for quality assurance and accountability makes sum-
mative judgements about the quality of care provided. The idea is that “real”
differences will be detected as a result of the measurement initiative. Therefore,
a high level of precision is necessary and advanced statistical techniques may
need to be employed to make sure that detected differences between providers
are “real” and attributable to provider performance. Otherwise, measurement
will encounter significant justified resistance from providers because its potential
consequences, such as losing the licence or losing patients to other providers,
would be unfair. Appraisal concepts of indicators for quality assurance will usu-
ally focus on assuring a minimum quality of care and identifying poor-quality
providers. However, if the purpose is to incentivize high quality of care through
pay for quality initiatives, the appraisal concept will likely focus on identifying
providers delivering excellent quality of care.
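The statistical point can be illustrated with a simplified sketch of a control-limit check: whether a provider's observed rate deviates from a benchmark by more than binomial chance variation would produce at its case volume. This is a toy example with invented numbers; real assurance schemes also apply risk adjustment, hierarchical models and corrections for multiple testing:

```python
import math

# Simplified control-limit check: is a provider's complication rate outside
# the ~99.7% (3-sigma) limits expected from chance alone, given its volume?
# All numbers are illustrative.

def outside_control_limits(cases, events, benchmark_rate, sigmas=3.0):
    """True if the observed rate deviates from the benchmark by more than
    `sigmas` standard errors of a binomial proportion."""
    se = math.sqrt(benchmark_rate * (1 - benchmark_rate) / cases)
    return abs(events / cases - benchmark_rate) > sigmas * se

# Small provider: 12 events in 100 cases looks high against a 10% benchmark,
# but is well within chance variation.
print(outside_control_limits(cases=100, events=12, benchmark_rate=0.10))   # False
# Large provider: the same 12% rate over 10 000 cases is a genuine signal.
print(outside_control_limits(cases=10_000, events=1_200, benchmark_rate=0.10))  # True
```

The example shows why identical rates can warrant very different conclusions: the smaller the caseload, the wider the range of differences that chance alone can explain.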
By contrast, measurement for quality improvement is change oriented and
quality information is used at the local level to promote continuous efforts of
providers to improve their performance. Indicators have to be actionable and
hence are often more process oriented. When used for quality improvement,
quality measurement does not necessarily need to be perfect because it is only
informative. Other sources of data and local information are considered as well
in order to provide context for measured quality of care. The results of quality
measurement are only used to start discussions about quality differences and
to motivate change in provider behaviour, for example, in audit and feedback
initiatives (see Chapter 10). Freeman (2002) sums up the described differences
between quality improvement and quality assurance as follows: “Quality improve-
ment models use indicators to develop discussion further, assurance models use
them to foreclose it.”
Different stakeholders in healthcare systems pursue different objectives and as a
result they have different information needs (Smith et al., 2009; EC, 2016). For
example, governments and regulators are usually focused on quality assurance
and accountability. They use related information mostly to assure that the quality
Table 3.2 Examples of structure, process and outcome quality indicators for
different dimensions of quality
structures where health care is provided have an effect on the processes of care,
which in turn will influence patient health outcomes. Table 3.2 provides some
examples of structure, process and outcome indicators related to the different
dimensions of quality.
In general, structural quality indicators are used to assess the setting of care, such as
the adequacy of facilities and equipment, staffing ratios, qualifications of medical
staff and administrative structures. Structural indicators related to effectiveness
include the availability of staff with an appropriate skill mix, while the availability
of safe medicines and the volume of surgeries performed are considered to be
more related to patient safety. Structural indicators for patient-centredness can
include the organizational implementation of a patients’ rights charter or the
availability of patient information. Although institutional structures are certainly
important for providing high-quality care, it is often difficult to establish a clear
link between structures and clinical processes or outcomes, which reduces, to a
certain extent, the relevance of structural measures.
Process indicators are used to assess whether actions indicating high-quality
care are undertaken during service provision. Ideally, process indicators are built
on reliable scientific evidence that compliance with these indicators is related
to better health outcomes.
Likewise, structure, process and outcome indicators each have their compara-
tive strengths and weaknesses. These are summarized in Table 3.3. The strength
of structural measures is that they are easily available, reportable and verifiable
because structures are stable and easy to observe. However, the main weakness
is that the link between structures and clinical processes or outcomes is often
indirect and dependent on the actions of healthcare providers.
Process indicators are also measured relatively easily, and their interpretation is
often straightforward because there is usually no need for risk-adjustment. In addition,
poor performance on process indicators can be directly attributed to the actions
of providers, thus giving clear indication for improvement, for example, by better
adherence to clinical guidelines (Rubin, Pronovost & Diette, 2001). However,
healthcare is complex and process indicators usually focus only on very specific
procedures for a specific group of patients. Therefore, hundreds of indicators
are needed to enable a comprehensive analysis of the quality of care provided
by a professional or an institution. Relying only on a small set of process indica-
tors carries the risk of distorting service provision towards a focus on measured
areas of care while disregarding other (potentially more) important tasks that
are harder to monitor.
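As a minimal illustration of this logic, a process indicator is simply the share of eligible patients who received the recommended action. The indicator definition and patient records below are invented for illustration, not drawn from any real measure set:

```python
def process_indicator_rate(patients, eligible, received):
    """Share of eligible patients who received the recommended care."""
    elig = [p for p in patients if eligible(p)]
    if not elig:
        return None  # indicator undefined when no patient is eligible
    return sum(1 for p in elig if received(p)) / len(elig)

# Hypothetical indicator: beta-blocker prescribed after acute myocardial
# infarction (AMI), excluding patients with contraindications.
patients = [
    {"dx": "AMI", "contraindicated": False, "beta_blocker": True},
    {"dx": "AMI", "contraindicated": False, "beta_blocker": False},
    {"dx": "AMI", "contraindicated": True, "beta_blocker": False},     # excluded
    {"dx": "stroke", "contraindicated": False, "beta_blocker": False}, # not eligible
]

rate = process_indicator_rate(
    patients,
    eligible=lambda p: p["dx"] == "AMI" and not p["contraindicated"],
    received=lambda p: p["beta_blocker"],
)
print(rate)  # 0.5 – one of the two eligible patients received the drug
```

The narrow denominator in this sketch also illustrates why hundreds of such indicators are needed for a comprehensive picture: each one covers only one action for one patient group.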
Outcome indicators place the focus of quality assessments on the actual goals of
service provision. Outcome indicators are often more meaningful to patients and
policy-makers. The use of outcome indicators may also encourage innovations
in service provision if these lead to better outcomes than following established
processes of care. However, attributing health outcomes to the services provided
by individual organizations or professionals is often difficult because outcomes are
influenced by many factors outside the control of a provider (Lilford et al., 2004).
In addition, outcomes may require a long time before they manifest themselves,
which makes outcome measures more difficult to use for quality measurement
(Donabedian, 1980). Furthermore, poor performance on outcome indicators
does not necessarily provide direct indication for action as the outcomes may be
related to a range of actions of different individuals who worked in a particular
setting at a prior point in time.
Advantages
• Condense complex, multidimensional aspects of quality into a single indicator.
• Easier to interpret than a battery of many separate indicators.
• Enable assessments of progress of providers or countries over time.
• Reduce the number of indicators without dropping the underlying information base.
• Place issues of provider or country performance and progress at the centre of the policy arena.
• Facilitate communication with the general public and promote accountability.
• Help to construct/underpin narratives for lay and literate audiences.
• Enable users to compare complex dimensions effectively.

Disadvantages
• Performance on the indicator depends on methodological choices made to construct the composite.
• May send misleading messages if poorly constructed or misinterpreted.
• May invite simplistic conclusions.
• May be misused if the composite construction process is not transparent and/or lacks sound statistical or conceptual principles.
• The selection of indicators and weights could be the subject of political dispute.
• May disguise serious failings in some dimensions and increase the difficulty of identifying remedial action, if the construction process is not transparent.
• May lead to inappropriate decisions if dimensions of performance that are difficult to measure are ignored.
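Several of the disadvantages above stem from construction choices such as normalization and weighting. A minimal sketch (the provider scores and weights below are invented) shows how changing the weights alone can change which provider ranks best:

```python
# Toy composite indicator: min-max normalize each underlying indicator
# across providers, then take a weighted average (higher = better).

def minmax(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite(scores_by_indicator, weights):
    """scores_by_indicator: {indicator: [one score per provider]}."""
    n = len(next(iter(scores_by_indicator.values())))
    norm = {k: minmax(v) for k, v in scores_by_indicator.items()}
    total_w = sum(weights.values())
    return [sum(weights[k] * norm[k][i] for k in norm) / total_w
            for i in range(n)]

scores = {  # three providers, two invented quality dimensions
    "effectiveness": [0.92, 0.85, 0.70],
    "safety":        [0.60, 0.90, 0.95],
}

c_equal = composite(scores, {"effectiveness": 0.5, "safety": 0.5})
c_skewed = composite(scores, {"effectiveness": 0.8, "safety": 0.2})
print(c_equal, c_skewed)
# With equal weights the second provider ranks best; weighting
# effectiveness heavily puts the first provider on top instead.
```

That the "best" provider depends entirely on the weighting scheme is exactly why the selection of indicators and weights can become the subject of political dispute.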
Relevance
• Impact of disease or risk on health and health expenditures. What is the impact on
health and on health expenditure associated with each disease, risk or patient group?
• Importance. Are relevant stakeholders concerned about the quality problem and have
they endorsed the indicator?
• Potential for improvement. Does evidence exist that there is less-than-optimal
performance, for example, variation across providers?
• Clarity of purpose and context. Are the purpose of the indicator and the organizational
and healthcare contexts clearly described?
Scientific soundness
• Validity. Does the indicator measure what it is intended to measure? The indicator
should make sense logically and clinically (face validity); it should correlate well with
other indicators of the same aspects of the quality of care (construct validity) and should
capture meaningful (i.e. evidence-based) aspects of the quality of care (content validity).
• Sensitivity and specificity. Does the indicator detect only a few false positives and false
negatives?
• Reliability. Does the measure provide stable results across various populations and
circumstances?
• Explicitness of the evidence base. Is scientific evidence available to support the measure
(for example, systematic reviews, guidelines, etc.)?
• Adequacy of the appraisal concept. Are reference values fit for purpose, and do they
allow identification of good and bad providers?
Feasibility
• Previous experience. Is the measure in use in pilot programmes or in other countries?
• Availability of required data across the system. Can information needed for the measure
be collected in the scale and timeframe required?
• Cost or burden of measurement. How much will it cost to collect the data needed for
the measure?
• Capacity of data and measure to support subgroup analyses. Can the measure be used
to compare different groups of the population (for example, by socioeconomic status
to assess disparities)?
Meaningfulness
• Comparability. Does the indicator permit meaningful comparisons across providers,
regions and/or countries?
• User-friendliness. Is the indicator easily understood, and does it relate to things that are
important for the target audience?
• Discriminatory power. Does the indicator distinguish clearly between good and bad
performers?
Sources: Hurtado, Swift & Corrigan, 2001; Mainz, 2003; Kelley & Hurst, 2006; de Koning, Burgers &
Klazinga, 2007; Evans et al., 2009; Lüngen & Rath, 2011; IQTIG, 2018; NQF, 2019b
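To make the sensitivity and specificity criterion above concrete, an indicator that flags providers as poor-quality can be treated like a screening test and compared against a reference classification. The provider labels below are invented for illustration:

```python
# Toy sketch: sensitivity and specificity of a quality indicator that flags
# "poor-quality" providers, judged against a (hypothetical) reference standard.

def sensitivity_specificity(flagged, truly_poor):
    """Both arguments are lists of booleans, one entry per provider."""
    tp = sum(f and t for f, t in zip(flagged, truly_poor))          # true positives
    fn = sum((not f) and t for f, t in zip(flagged, truly_poor))    # false negatives
    tn = sum((not f) and (not t) for f, t in zip(flagged, truly_poor))
    fp = sum(f and (not t) for f, t in zip(flagged, truly_poor))
    return tp / (tp + fn), tn / (tn + fp)

flagged    = [True, True, False, False, False, True]   # indicator's verdict
truly_poor = [True, False, True, False, False, False]  # reference classification

sens, spec = sensitivity_specificity(flagged, truly_poor)
print(sens, spec)  # 0.5 0.5 – half the poor providers caught, half the good ones cleared
```

A low value on either measure translates directly into the unfairness discussed earlier: false positives expose good providers to unjustified consequences, while false negatives let poor-quality care go undetected.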
quality of life are not available in administrative data. The strength of adminis-
trative data is that they are comprehensive and complete with few problems of
missing data. The most important problem of administrative data is that they
are generated by healthcare providers, usually for payment purposes. This means
that coding may be influenced by the incentives of the payment system, and –
once used for purposes of quality measurement – also by incentives attached to
the measured quality of care.
also be used for monitoring and evaluation of screening programmes and esti-
mating cancer survival by follow-up of cancer patients (Bray & Parkin, 2009).
In Scandinavian countries significant efforts have gone into standardizing cancer
registries to enable cross-country comparability. Nevertheless, numerous differ-
ences persist with regard to registration routines and classification systems, which
are important when comparing time trends in the Nordic countries (Pukkala
et al., 2018).
In some countries a large number of clinical registries are used for
quality measurement. For example, in Sweden there are over a hundred clinical
quality registries, which operate on a voluntary basis: all patients must be informed
and have the right to opt out. These registries mainly cover specific diseases
and they include disease-specific data, such as severity of disease at diagnosis,
diagnostics and treatment, laboratory tests, patient-reported outcome measures,
and other relevant factors such as body mass index, smoking status or medica-
tion. Most of the clinical registries focus on specialized care and are based on
reporting from hospitals or specialized day care centres (Emilsson et al., 2015).
With increasing diffusion of electronic health records, it is possible to generate
and feed disease-specific population registries based on electronic abstraction
(Kannan et al., 2017). Potentially, this may significantly reduce the costs of data
collection for registries. Furthermore, linking data from different registries with
other administrative data sources can increasingly be used to generate datasets
that enable more in-depth analyses.
surveys can use generic tools (for example, the SF-36 or EQ-5D) or disease-
specific tools, which are usually more sensitive to change (Fitzpatrick, 2009).
The NHS in the United Kingdom requires all providers to report PROMs for
two elective procedures: hip replacement and knee replacement. Both generic
(EQ-5D and EQ VAS) and disease-specific (Oxford Hip Score, Oxford Knee
Score and Aberdeen Varicose Vein Questionnaire) instruments are used (NHS
Digital, 2019b).
Finally, several countries also use surveys of patient satisfaction in order to monitor
provider performance. However, satisfaction is difficult to compare internationally
because it is influenced by patients’ expectations about how they will be treated,
which vary widely across countries and also within countries (Busse, 2012).
quality of care or rather the quality of the wider hospital team (for example,
including anaesthesia, intensive care unit quality) or the organization and man-
agement of the hospital (for example, the organization of resuscitation teams
within hospitals) (Westaby et al., 2015). Nevertheless, with data released at the
level of the surgeon, responsibility is publicly attributed to the individual and
not to the organization.
Other examples where attributing causality and responsibility is difficult include
outcome indicators defined using time periods (for example, 30-day mortality
after hospitalization for ischemic stroke) because patients may be transferred
between different providers and because measured quality will depend on care
received after discharge. Similarly, attribution can be problematic for patients
with chronic conditions, for example, attributing causality for hospitalizations of
patients with heart failure – a quality indicator in the USA – is difficult because
these patients may see numerous providers, such as one (or more) primary care
physician(s) and specialists, for example, nephrologists and/or cardiologists.
What these examples illustrate is that attribution of quality differences to providers
is difficult. However, it is important to accurately attribute causality because it is
unfair to hold individuals or organizations accountable for factors outside their
control. In addition, if responsibility is attributed incorrectly, quality improve-
ment measures will be in vain, as they will miss the appropriate target. Therefore,
when developing quality indicators, it is important that a causal pathway can
be established between the agents under assessment and the outcome proposed
as a quality measure. Furthermore, possible confounders, such as the influence
of other providers or higher levels of the healthcare system on the outcome of
interest, should be carefully explored in collaboration with relevant stakeholders
(Terris & Aron, 2009).
Of course, the most important confounders outside the control of providers have
not yet been mentioned: patient-level clinical factors and patient preferences.
The prevalence of these factors may differ
across patient populations and influence the outcomes of care. For example,
severely ill patients or patients with multiple coexisting conditions are at risk of
having worse outcomes than healthy individuals despite receiving high-quality
care. Therefore, providers treating sicker patients are at risk of performing poorly
on measured quality of care, in particular when measured through outcome
indicators.
Risk-adjustment (sometimes called case-mix adjustment) aims to control for these
differences (risk-factors) that would otherwise lead to biased results. Almost all
outcome indicators require risk-adjustment to adjust for patient-level risk fac-
tors that are outside the control of providers. In addition, healthcare processes
may be influenced by patients’ attitudes and perceptions, which should be
taken into account.
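One widely used approach to risk-adjustment is indirect standardization: a statistical model estimates each patient's expected risk of the outcome, and the provider's observed outcomes are compared with the sum of these expected risks. The sketch below is illustrative only; the patient risks, outcomes and rates are invented:

```python
# Toy sketch of indirect standardization for risk-adjustment. Each patient has
# a predicted risk of death (e.g. from a model fitted on the whole population)
# and an observed outcome (1 = died). All numbers are invented.

def risk_adjusted_rate(observed, expected_risks, population_rate):
    """Observed/expected ratio, scaled to the population's crude rate."""
    o = sum(observed)
    e = sum(expected_risks)
    return (o / e) * population_rate

# Hypothetical hospital treating sicker-than-average patients:
observed = [1, 0, 1, 0, 0, 0, 0, 0, 0, 0]  # 2 deaths among 10 patients
expected_risks = [0.6, 0.4, 0.5, 0.3, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2]  # sum = 3.0
population_rate = 0.10  # 10% crude mortality across all providers

crude = sum(observed) / len(observed)
adjusted = risk_adjusted_rate(observed, expected_risks, population_rate)
print(crude)     # 0.2 – looks poor before adjustment
print(adjusted)  # ~0.067 – better than average once case mix is considered
```

The example shows why unadjusted comparison would penalize the provider treating sicker patients: its crude rate is twice the average, yet fewer patients died than its case mix would predict.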
3.11 Conclusion
This chapter has introduced some basic concepts and methods for the measure-
ment of healthcare quality and presented a number of related challenges. Many
different stakeholders have varying needs for information on healthcare quality
and the development of quality measurement systems should always take into
account the purpose of measurement and the needs of different stakeholders.
Quality measurement is important for quality assurance and accountability, to
make sure that providers are delivering good-quality care, but it is also vital
Box 3.2 Seven principles to take into account when using quality indicators
one provider is better than another) but if underlying data and methods are weak, users may
come to incorrect conclusions.
References
ACSQHC (2019). Indicators of Safety and Quality. Australian Commission on Safety and
Quality in Health Care (ACSQHC): https://www.safetyandquality.gov.au/our-work/
indicators/#Patientreported, accessed 21 March 2019.
Baker DW, Chassin MR (2017). Holding Providers Accountable for Health Care Outcomes.
Annals of Internal Medicine, 167(6):418–23.
Braithwaite RS (2018). Risk Adjustment for Quality Measures Is Neither Binary nor Mandatory.
Journal of the American Medical Association, 319(20):2077–8.
Bray F, Parkin DM (2009). Evaluation of data quality in the cancer registry: Principles and methods.
Part I: Comparability, validity and timeliness. European Journal of Cancer, 45(5):747–55.
Busse R (2012). Being responsive to citizens’ expectations: the role of health services in responsiveness
and satisfaction. In: McKee M, Figueras J (eds.) Health Systems: Health, wealth and societal
well-being. Maidenhead: Open University Press/McGraw-Hill.
Calhoun C (2002). Oxford dictionary of social sciences. New York: Oxford University Press.
Campbell SM et al. (2008). Quality indicators for the prevention and management of cardiovascular
disease in primary care in nine European countries. European Journal of Cardiovascular
Prevention & Rehabilitation, 15(5):509–15.
Carinci F et al. (2015). Towards actionable international comparisons of health system performance:
expert revision of the OECD framework and quality indicators. International Journal for
Quality in Health Care, 27(2):137–46.
Chan KS et al. (2010). Electronic health records and the reliability and validity of quality measures:
a review of the literature. Medical Care Research and Review, 67(5):503–27.
Cheng EM et al. (2014). Quality measurement: here to stay. Neurology Clinical Practice, 4(5):441–6.
Cylus J, Papanicolas I, Smith P (2016). Health system efficiency: how to make measurement matter
for policy and management. Copenhagen: WHO, on behalf of the European Observatory
on Health Systems and Policies.
Davies H (2005). Measuring and reporting the quality of health care: issues and evidence from
the international research literature. NHS Quality Improvement Scotland.
Donabedian A (1980). The Definition of Quality and Approaches to Its Assessment. Vol 1.
Explorations in Quality Assessment and Monitoring. Ann Arbor, Michigan, USA: Health
Administration Press.
EC (2016). So What? Strategies across Europe to assess quality of care. Report by the Expert
Group on Health Systems Performance Assessment. European Commission (EC). Brussels:
European Commission.
Emilsson L et al. (2015). Review of 103 Swedish Healthcare Quality Registries. Journal of Internal
Medicine, 277(1):94–136.
Evans SM et al. (2009). Prioritizing quality indicator development across the healthcare system:
identifying what to measure. Internal Medicine Journal, 39(10):648–54.
Fitzpatrick R (2009). Patient-reported outcome measures and performance measurement. In:
Smith P et al. (eds.) Performance Measurement for Health System Improvement: Experiences,
Challenges and Prospects. Cambridge: Cambridge University Press.
Forde I, Morgan D, Klazinga N (2013). Resolving the challenges in the international comparison
of health systems: the must do’s and the trade-offs. Health Policy, 112(1–2):4–8.
Freeman T (2002). Using performance indicators to improve health care quality in the public
sector: a review of the literature. Health Services Management Research, 15:126–37.
Fujisawa R, Klazinga N (2017). Measuring patient experiences (PREMS): Progress made by the
OECD and its member countries between 2006 and 2016. Paris: Organisation for Economic
Co-operation and Development (OECD).
Fujita K, Moles RJ, Chen TF (2018). Quality indicators for responsible use of medicines: a
systematic review. BMJ Open, 8:e020437.
Gardner K, Olney S, Dickinson H (2018). Getting smarter with data: understanding tensions in
the use of data in assurance and improvement-oriented performance management systems to
improve their implementation. Health Research Policy and Systems, 16(125).
Goddard M, Jacobs R (2009). Using composite indicators to measure performance in health
care. In: Smith P et al. (eds.) Performance Measurement for Health System Improvement:
Experiences, Challenges and Prospects. Cambridge: Cambridge University Press.
Hurtado MP, Swift EK, Corrigan JM (2001). Envisioning the National Health Care Quality
Report. Washington, DC: National Academy Press.
ICHOM (2019). Standard Sets. International Consortium for Health Outcomes Measurement
(ICHOM): https://www.ichom.org/standard-sets/, accessed 8 February 2019.
Iezzoni L (2009). Risk adjustment for performance measurement. In: Smith P et al. (eds.)
Performance Measurement for Health System Improvement: Experiences, Challenges and
Prospects. Cambridge: Cambridge University Press.
IQTIG (2018). Methodische Grundlagen V1.1.s. Entwurf für das Stellungnahmeverfahren.
Institut für Qualitätssicherung und Transparenz im Gesundheitswesen (IQTIG). Available
at: https://iqtig.org/das-iqtig/grundlagen/methodische-grundlagen/, accessed 18 March 2019.
Kannan V et al. (2017). Rapid Development of Specialty Population Registries and Quality Measures
from Electronic Health Record Data. Methods of information in medicine, 56(99):e74–e83.
Kelley E, Hurst J (2006). Health Care Quality Indicators Project: Conceptual framework paper.
Paris: Organization for Economic Co-operation and Development (OECD). Available at:
https://www.oecd.org/els/health-systems/36262363.pdf, accessed 22 March 2019.
de Koning J, Burgers J, Klazinga N (2007). Appraisal of indicators through research and evaluation
(AIRE). Available at: https://www.zorginzicht.nl/kennisbank/PublishingImages/Paginas/AIRE-
instrument/AIRE%20Instrument%202.0.pdf, accessed 21 March 2019.
Kristensen SR, Bech M, Quentin W (2015). A roadmap for comparing readmission policies
with application to Denmark, England and the United States. Health Policy, 119(3):264–73.
Kronenberg C et al. (2017). Identifying primary care quality indicators for people with serious
mental illness: a systematic review. British Journal of General Practice, 67(661):e519–e530.
Lawrence M, Olesen F (1997). Indicators of Quality in Health Care. European Journal of General
Practice, 3(3):103–8.
Lighter D (2015). How (and why) do quality improvement professionals measure performance?
International Journal of Pediatrics and Adolescent Medicine, 2(1):7–11.
Lilford R et al. (2004). Use and misuse of process and outcome data in managing performance of
acute medical care: avoiding institutional stigma. Lancet, 363(9424):1147–54.
Lohr KN (1990). Medicare: A Strategy for Quality Assurance. Washington (DC), US: National
Academies Press.
Lüngen M, Rath T (2011). Analyse und Evaluierung des QUALIFY Instruments zur Bewertung
von Qualitätsindikatoren anhand eines strukturierten qualitativen Interviews. Zeitschrift für
Evidenz, Fortbildung und Qualität im Gesundheitswesen, 105(1):38–43.
Mainz J (2003). Defining and classifying indicators for quality improvement. International Journal
for Quality in Health Care, 15(6):523–30.
Mainz J, Hess MH, Johnsen SP (2019). Perspectives on Quality: the Danish unique personal
identifier and the Danish Civil Registration System as a tool for research and quality
improvement. International Journal for Quality in Health Care (efirst): https://doi.org/10.1093/
intqhc/mzz008.
Marjoua Y, Bozic K (2012). Brief history of quality movement in US healthcare. Current reviews
in musculoskeletal medicine, 5(4):265–73.
NHS BSA (2019). Medication Safety – Indicators Specification. NHS Business Services
Authority (NHS BSA). Available at: https://www.nhsbsa.nhs.uk/sites/default/files/2019-02/
Medication%20Safety%20-%20Indicators%20Specification.pdf, accessed 21 March 2019.
NHS Digital (2019a). Indicator Methodology and Assurance Service. NHS Digital, Leeds.
Available at: https://digital.nhs.uk/services/indicator-methodology-and-assurance-service,
accessed 18 March 2019.
NHS Digital (2019b). Patient Reported Outcome Measures (PROMs). NHS Digital, Leeds.
Available at: https://digital.nhs.uk/data-and-information/data-tools-and-services/data-services/
patient-reported-outcome-measures-proms, accessed 22 March 2019.
NHS Employers (2018). 2018/19 General Medical Services (GMS) contract Quality and
Outcomes Framework (QOF). Available at: https://www.nhsemployers.org/-/media/Employers/
Documents/Primary-care-contracts/QOF/2018-19/2018-19-QOF-guidance-for-stakeholders.
pdf, accessed 21 March 2019.
NQF (2019a). Quality Positioning System. National Quality Forum (NQF). Available at:
http://www.qualityforum.org/QPS/QPSTool.aspx, accessed 19 March 2019.
NQF (2019b). Measure evaluation criteria. National Quality Forum (NQF). Available at:
http://www.qualityforum.org/measuring_performance/submitting_standards/measure_
evaluation_criteria.aspx, accessed 19 March 2019.
Oderkirk J (2013). International comparisons of health system performance among OECD
countries: opportunities and data privacy protection challenges. Health Policy, 112(1–2):9–18.
OECD (2008). Handbook on Constructing Composite Indicators: Methodology and user
guide. Organisation for Economic Co-operation and Development (OECD). Available at:
https://www.oecd.org/sdd/42495745.pdf, accessed 22 March 2019.
OECD (2010). Improving Value in Health Care: Measuring Quality. Organisation for
Economic Co-operation and Development. Available at: https://www.oecd-ilibrary.org/
docserver/9789264094819-en.pdf?expires=1545066637&id=id&accname=ocid56023174
a&checksum=1B31D6EB98B6160BF8A5265774A54D61, accessed 17 December 2018.
OECD (2017). Recommendations to OECD Ministers of Health from the High Level Reflection
Group on the future of health statistics. Strengthening the international comparison of health
system performance through patient-reported indicators. Paris: Organisation for Economic
Co-operation and Development.
OECD (2019). Patient-Reported Indicators Survey (PaRIS). Paris: Organisation for Economic
Co-operation and Development. Available at: http://www.oecd.org/health/paris.htm, accessed
8 February 2019.
OECD HCQI (2016). Definitions for Health Care Quality Indicators 2016–2017. HCQI Data
Collection. Organisation for Economic Co-operation and Development Health Care Quality
Indicators Project. Available at: http://www.oecd.org/els/health-systems/Definitions-of-Health-
Care-Quality-Indicators.pdf, accessed 21 March 2019.
Papanicolas I, Smith P (2013). Health system performance comparison: an agenda for policy,
information and research. WHO, on behalf of the European Observatory. Open University
Press, Maidenhead.
Parameswaran SG, Spaeth-Rublee B, Alan Pincus H (2015). Measuring the Quality of Mental
Health Care: Consensus Perspectives from Selected Industrialized Countries. Administration
and Policy in Mental Health, 42:288–95.
Pfaff K, Markaki A (2017). Compassionate collaborative care: an integrative review of quality
indicators in end-of-life care. BMC Palliative Care, 16:65.
Porter M (2010). What is value in health care? New England Journal of Medicine, 363(26):2477–81.
Pukkala E et al. (2018). Nordic Cancer Registries – an overview of their procedures and data
comparability. Acta Oncologica, 57(4):440–55.
Radford PD et al. (2015). Publication of surgeon specific outcome data: a review of implementation,
controversies and the potential impact on surgical training. International Journal of Surgery,
13:211–16.
Romano PS et al. (2011). Impact of public reporting of coronary artery bypass graft surgery
performance data on market share, mortality, and patient selection. Medical Care, 49(12):1118–25.
Rubin HR, Pronovost P, Diette G (2001). The advantages and disadvantages of process-based
measures of health care quality. International Journal for Quality in Health Care, 13(6):469–74.
Shwartz M, Restuccia JD, Rosen AK (2015). Composite Measures of Health Care Provider
Performance: A Description of Approaches. Milbank Quarterly, 93(4):788–825.
Smith P et al. (2009). Introduction. In: Smith P et al. (eds.) Performance Measurement for
Health System Improvement: Experiences, Challenges and Prospects. Cambridge: Cambridge
University Press.
Steinwachs DM, Hughes RG (2008). Health Services Research: Scope and Significance. Patient
Safety and Quality: An Evidence-Based Handbook for Nurses. Rockville (MD): Agency for
Healthcare Research and Quality (US).
Terris DD, Aron DC (2009). Attribution and causality in health-care performance measurement. In:
Smith P et al. (eds.) Performance Measurement for Health System Improvement: Experiences,
Challenges and Prospects. Cambridge: Cambridge University Press.
Voeten SC et al. (2018). Quality indicators for hip fracture care, a systematic review. Osteoporosis
International, 29(9):1963–85.
Westaby S et al. (2015). Surgeon-specific mortality data disguise wider failings in delivery of safe
surgical services. European Journal of Cardiothoracic Surgery, 47(2):341–5.
Chapter 4
International and EU governance
and guidance for national
healthcare quality strategies
Willy Palm, Miek Peeters, Pascal Garel, Agnieszka Daval, Charles Shaw
4.1 Introduction
This chapter deals with international frameworks and guidance to foster and
support quality strategies in European countries. As will be demonstrated in
the chapter, the legal status and binding nature of various international gov-
ernance and guidance instruments differ substantially. While some are meant
to support national quality initiatives in healthcare, others have a more direct
effect on determining quality and safety of healthcare goods and services. This
is definitely the case for measures taken at EU level to ensure free movement of
goods, persons and services.
One of the questions addressed in this chapter is how the international community
can contribute to national policies related to quality of care. Four different ways
can be distinguished, which – taken together – can be considered as defining
the four main elements of an integrated international governance framework
for quality in healthcare (Fig. 4.1):
• raising political awareness;
• sharing experience and good practices;
• developing standards and models;
• strengthening monitoring and evaluation.
framework. Instead, it will provide some examples of the kind of support they
are providing, and illustrate the complementary elements that can be observed
in their approaches.
Box 4.1 Excerpt from the Council Conclusions on Common values and
principles in European Union Health Systems (2006)
— Quality:
All EU health systems strive to provide good quality care. This is achieved in particular through the
obligation to continuous training of healthcare staff based on clearly defined national standards
and ensuring that staff have access to advice about best practice in quality, stimulating innovation
and spreading good practice, developing systems to ensure good clinical governance, and through
monitoring quality in the health system. An important part of this agenda also relates to the
principle of safety.
— Safety:
Patients can expect each EU health system to secure a systematic approach to ensuring patient
safety, including the monitoring of risk factors and adequate training for health professionals,
and protection against misleading advertising of health products and treatments.
1 http://www.isqua.org/who-we-are/30th-anniversary/timeline-1985---2015.
Health21 Target 16, Managing for quality of care, focuses on outcomes as the ultimate
measure of quality
By the year 2010, Member States should ensure that the clinical management of the health
sector, from population-based health programmes to individual patient care at the clinical level, is
oriented towards health outcomes.
16.1 The effectiveness of major public health strategies should be assessed in terms of
health outcomes, and decisions regarding alternative strategies for dealing with individual
health problems should increasingly be taken by comparing health outcomes and their cost-
effectiveness.
16.2 All countries should have a nationwide mechanism for continuous monitoring and
development of the quality of care for at least ten major health conditions, including measurement
of health impact, cost-effectiveness and patient satisfaction.
16.3 Health outcomes in at least five of the above health conditions should show a significant
improvement, and surveys should show an increase in patients’ satisfaction with the quality of
services received and heightened respect for their rights.
diabetes care and hospital infections). Other WHO activities have included the
commissioning of monographs on specific technical issues in quality, with an
emphasis on the integration of standards, measurement and improvement as a
global, cyclical and continuing activity (Shaw & Kalo, 2002).
Later, WHO also developed similar activities to facilitate and support the devel-
opment of patient safety policies and practices across all WHO Member States.
In 2004 the WHO Global Alliance for Patient Safety was launched, following a
resolution that urged countries to establish and strengthen science-based systems,
necessary for improving patients’ safety and the quality of healthcare, including
the monitoring of drugs, medical equipment and technology (WHO, 2002).
Blood:
— Recommendation No. R(95)15 on the preparation, use and quality assurance of blood components
Cancer control:
— Recommendation No. R(89)13 of the Committee of Ministers to Member States on the organization of multidisciplinary care for cancer patients
— Recommendation No. R(80)6 of the Committee of Ministers to Member States concerning cancer control
Disabilities:
— Recommendation Rec(2006)5 of the Committee of Ministers to Member States on the Council of Europe Action Plan to promote the rights and full participation of people with disabilities in society: improving the quality of life of people with disabilities in Europe, 2006–2015
Health policy, development and promotion:
— Recommendation Rec(2001)13 on developing a methodology for drawing up guidelines on best medical practices
— Recommendation No. R(97)17 of the Committee of Ministers to Member States on the development and implementation of quality improvement systems (QIS) in healthcare
Health services:
— Recommendation Rec(2006)7 of the Committee of Ministers to Member States on management of patient safety and prevention of adverse events in healthcare
— Recommendation Rec(99)21 of the Committee of Ministers to Member States on criteria for the management of waiting lists and waiting times in healthcare
— Recommendation Rec(84)20 on the prevention of hospital infections
Mental disorder:
— Recommendation Rec(2004)10 of the Committee of Ministers to Member States concerning the protection of human rights and dignity of persons with mental disorder
Palliative care:
— Recommendation Rec(2003)24 of the Committee of Ministers to Member States on the organization of palliative care
Patients’ role:
— Recommendation Rec(2000)5 of the Committee of Ministers to Member States on the development of structures for citizen and patient participation in the decision-making process affecting healthcare
— Recommendation Rec(80)4 concerning the patient as an active participant in his own treatment
Transplantation:
— Recommendation Rec(2005)11 of the Committee of Ministers to Member States on the role and training of professionals responsible for organ donation (transplant “donor co-ordinators”)
Vulnerable groups:
— Recommendation R(98)11 of the Committee of Ministers to Member States on the organization of healthcare services for the chronically ill
* Based on http://www.coe.int/t/dg3/health/recommendations_en.asp
2 https://www.edqm.eu/
70 Improving healthcare quality in Europe
Note: Four other standards exist: Medical laboratories – Requirements for quality and competence (EN
ISO 15189), Services offered by hearing aid professionals (EN 15927), Early care services for babies born
with cleft lip and/or palate (CEN/TR 16824), Services of medical doctors with additional qualification
in Homoeopathy (MDQH) – Requirements for healthcare provision by Medical Doctors with additional
qualification in Homoeopathy (EN 16872).
for the EU-wide provision of accreditation services for the marketing of goods,
should extend to services (EC, 2011). In 2013 the Joint Research Centre of the
European Commission, together with the European standards organizations,
launched an initiative, “Putting Science into Standards”, to bring the scientific
and standardization communities closer together (European Commission Joint
Research Centre, 2013). It is in that context that a pilot project was launched
to develop a voluntary European Quality Assurance Scheme for Breast Cancer
Services (BCS), as part of the European Commission’s Initiative on Breast Cancer
(ECIBC, 2014). This project demonstrates the challenges of applying concepts of
“certification” to healthcare, and of transposing standards for diagnostic services
(ISO 15189) into clinical services in Europe.
the necessary elements for periodic monitoring and evaluation (Article 168.2 para
2). The tools for developing this are commonly referred to as “soft law” (see also
Greer & Vanhercke, 2010). They include instruments such as Recommendations,
Communications, the Open Method of Coordination, high-level reflection
processes or working parties, action programmes, Joint Actions, etc. (see also
Greer et al., 2014).
These two different approaches will be further elaborated in the next sections.
First, we will explore how quality and safety are secured through EU provisions
and policies that are meant to ensure free movement and establish an internal
market. Next, we will address the more horizontal and generic EU policies with
respect to quality and safety that follow from the mandate to support, coordinate
or supplement national policies (Article 2.5 TFEU). Finally, we will draw con-
clusions on the different ways in which EU integration and policy touch upon
quality in healthcare and how the approach has evolved over time.
Pharmaceuticals
Starting in the 1960s, a comprehensive framework of EU legislation has gradu-
ally been put in place to guarantee the highest possible level of public health
with regard to medicinal products. This body of legislation is compiled in ten
volumes of “The rules governing medicinal products in the European Union”
(EudraLex). All medicinal products for human use have to undergo a licensing
procedure in order to obtain a marketing authorization. The requirements and
procedures are primarily laid down in Directive 2001/83/EC and in Regulation
(EC) No. 726/2004. More specific rules and guidelines, which facilitate the
interpretation of the legislation and its uniform application across the EU, are
compiled in volumes 3 and 4 of EudraLex. Since 1994 the European Medicines
Agency (EMA) has coordinated the scientific evaluation of the quality, safety
and efficacy of all medicinal products submitted for licensing. New pharmaceuticals can be licensed either by the EMA or by the authorities of Member States.
More details about the regulation of pharmaceuticals are provided in Chapter 6.
The EMA is also responsible for coordinating the EU “pharmacovigilance”
system for medicines. If information indicates that the benefit-risk balance of
a particular medicine has changed since authorization, competent authorities
can suspend, revoke, withdraw or change the marketing authorization. The EudraVigilance reporting system, which was further strengthened in 2010, systematically gathers and analyses suspected cases of adverse reactions to medicines. The EMA has also released good pharmacovigilance practice guidelines
(GVP) to facilitate the performance of pharmacovigilance activities in all Member
States. In addition, Commission Implementing Regulation (EU) No. 520/2012
Medical devices
For medical devices, too, EU regulation combines the dual aim of ensuring a high level of protection of human health and safety with the good functioning of the Single Market. However, the scrutiny of product safety is not yet as advanced as for pharmaceuticals (Greer et al., 2014). The legal
framework in this area was developed in the 1990s with a set of three directives
covering, respectively, active implantable medical devices (Directive 90/385/
EEC), medical devices (Directive 93/42/EEC) and in vitro diagnostic medical
devices (Directive 98/79/EC). They were supplemented subsequently by several
modifying and implementing directives, including the last technical revision
The European Commission can make sure that these measures are then applied
throughout the Union. The new rules provide for a mandatory unique device
identifier to strengthen traceability and an implant card to improve information
to patients.
3 The measures referred to in paragraph 4(a) shall not affect national provisions on the donation or medical
use of organs and blood. (Article 168, 7, in fine).
legislation. In fact, the Treaty makes a special provision for the medical and
allied and pharmaceutical professions, indicating that the progressive abolition
of restrictions (to free movement) shall be dependent upon coordination of the
conditions for their exercise in the various Member States (Article 53.2 TFEU).
Traditionally two coordination systems were combined to achieve equivalence
between qualifications from different countries. The so-called “sectoral system” is
based on a minimum harmonization of training requirements. Under this system
Member States are obliged to recognize a diploma automatically, without any individual assessment or further conditions. It applies to specific
regulated professions that are explicitly listed. Five of the seven professions falling
under the sectoral system of automatic recognition are health professions: doctors
(including specialists), nurses responsible for general care, dentists (including
specialists), midwives and pharmacists. Other health professions (for example,
specialist nurses, specialist pharmacists, psychologists, chiropractors, osteopaths,
opticians) fall under the “general system”. Because training requirements under this system were not harmonized, Member States can require certain compensating
measures to recognize a diploma from another Member State, such as an aptitude
test or an adaptation period (Peeters, 2005).
The legislative framework regarding the mutual recognition of qualifications was
revised for the first time in 2005. The various sectoral and general Directives
were merged and consolidated into Directive 2005/36/EC on the recognition
of professional qualifications. A second major revision took place in 2013 with
Directive 2013/55. This revision aimed to modernize the legal framework and to
bring it in line with the evolving labour market context. While clearly these con-
secutive revisions were aimed at making free movement of professionals simpler,
easier and quicker – not least by cutting red tape and speeding up procedures
through the use of e-government tools (cf. the European professional card) –
they were also motivated by an ambition to better safeguard public health and
patient safety with respect to health professions (Tiedje & Zsigmond, 2012).
One element has been the modernization of the minimum training requirements
under the automatic recognition procedure. Next to the specification and updat-
ing of the minimum duration of training and the knowledge as well as skills and
training subjects that have to be acquired, the possibility of adding a common
list of competences was introduced (as was done for nurses under Article 31.7).
The reform also made it possible to expand automatic recognition to professions
falling under the general system (or specialties of a sectoral profession) that are
regulated in at least one third of Member States, by developing common training principles: a detailed set of knowledge, skills and competences. However, doubts have been raised as to whether this would really improve quality and safety (cf. Council
of European Dentists, 2015). Although there is no minimum harmonization,
some would argue that the general system offers more possibilities for quality
assurance as it allows the host Member State to require compensation measures
and to more quickly respond to changes in clinical practice – in particular, the
emergence of new specialties (Peeters, McKee & Merkur, 2010). Finally, the
revised Directive also introduced an obligation for Member States to organize
continuous professional development (CPD) – at least for the sectoral profes-
sions – so that professionals can update their knowledge, skills and competences.
The revisions also strengthened the rules concerning the “pursuit” of the profes-
sion. Indeed, equivalence of standards of education and training alone does not
as such provide sufficient guarantees for good quality medical practice (Abbing,
1997). In principle, Member States can make authorization to practise subject
to certain conditions, such as the presentation of certain documents (a certifi-
cate of good standing, of physical or mental health) and/or an oath or solemn
declaration, or the applicability of national disciplinary measures.
Two main patient safety concerns prevailed in this context: language proficiency
and professional malpractice. Since communication with patients is an important
aspect of quality assurance in healthcare, inadequate language assessment and induction of incoming health professionals could compromise patient safety
(Glinos et al., 2015). Therefore, the revised professional qualification Directive
clarified that for professions with implications for patient safety, competent
authorities can carry out systematic language controls. However, this should
only take place after the recognition of the qualification and should be limited
to one official or administrative language of the host Member State. Any lan-
guage controls should also be proportionate to the activity to be pursued and
be open to appeal (Article 53). Another serious public health risk derives from
professionals “fleeing” to another country after they have been found guilty of
professional misconduct or considered unfit to practise. On several occasions
the voluntary exchange of information between Member States as foreseen in
the qualifications Directive was judged far from optimal. This is why in the last
reform the duties in terms of information exchange were strengthened with a
particular emphasis on health professionals. The revised Directive introduced a
pro-active alert mechanism under the Internal Market Information system (IMI),
with an obligation for competent authorities of a Member State to inform the
competent authorities of all other Member States about professionals who have
been banned, even temporarily, from practising.
Even if it is commonly accepted that national regulation of health profession-
als is needed to protect public health and to ensure quality of care and patient
safety, any conditions imposed should be non-discriminatory and not unduly
infringe on the principles of free movement. In the past, the European Court
of Justice found that certain national measures that were taken to protect public
prior check and require an aptitude test if there is a risk of serious damage to the
health or safety of the service recipient due to a lack of professional qualification
of the service provider (Article 7.4).
Another dimension of free movement of health services is mobility of patients.
The right of citizens to seek healthcare in another Member State was already
acknowledged by the European Court of Justice in the cases Luisi and Carbone
and Grogan.4 That this right would also apply to health services provided within
the context of statutory health systems – irrespective of the type of healthcare or
health system – was subsequently confirmed in the cases Kohll, Smits-Peerbooms
and Watts,5 which dealt with the situation of patients requesting reimbursement
for healthcare they obtained in another Member State. In this context, quality of health services first came up as a ground for justifying refusal of reimbursement. The Luxembourg government, joined by other Member States, argued that
requiring prior authorization for cross-border care was a necessary measure to
protect public health and guarantee the quality of care that was provided abroad
to its citizens. However, the Court rejected this on the grounds that the EU’s
minimum training requirements for doctors and dentists established equivalence
and required that health professionals in other Member States should be treated
equally.6 Even though a similar framework based on the mutual recognition
principle is lacking for hospitals, the Court in the Stamatelaki case (in which
the Greek authorities refused to pay for treatment in a London-based private
hospital) followed the same reasoning, arguing that private hospitals in other
Member States are also subject to quality controls and that doctors working in
those hospitals based on the professional qualifications Directive provide the
same professional guarantees as those in Greece.7
In the discussion on how to codify the case law around cross-border care, qual-
ity and patient safety gradually gained importance. Perceived national differ-
ences in quality of healthcare – and in policies to guarantee quality and patient
safety – were identified as both a driver for and an obstacle to patient mobility.
Eurobarometer surveys on cross-border health services have repeatedly dem-
onstrated that receiving better quality treatment was the second main reason
to consider travelling to another Member State for care (after treatment that is
not available at home) (EC, 2015). Long waiting times have also consistently emerged as an important motivation for patients to seek care abroad. At the
same time, lack of information about the quality of medical treatment abroad
and patient safety was considered a major deterrent for considering the option
4 Joined Cases 286/82 and 26/83, Luisi and Carbone v. Ministero del Tesoro [1984] ECR 377; Case
C-159/90, The Society for the Protection of Unborn Children Ireland Ltd v. Grogan [1991] ECR I-4685
5 Case C-158/96, Kohll v. Union des Caisses de Maladie [1998] ECR I-1931; Case C-157/99, Geraets-
Smits and Peerbooms [2001] ECR I-5473; Case C-372/04, Watts [2006] ECR I-4325
6 Case C-158/96, Kohll, para. 49
7 Case C-444/05, Stamatelaki [2007] ECR I-3185, paras. 36–7
of cross-border care. One of the main conclusions drawn from the public con-
sultation that the European Commission organized in 2006 to decide on what
Community action to take in this field was that the uncertainty that deterred
patients from seeking treatment in another Member State was not only related
to their entitlements and reimbursement but also linked to issues of quality and
safety (EC, 2006). This is also why the Commission finally opted for a broader
approach that would not only tackle the financial aspects around cross-border
care (as was initially proposed in the Services Directive) but would also address
these other uncertainties (Palm et al., 2011). Only then would patients feel suf-
ficiently confident to seek treatment across the Union.
The Directive 2011/24/EU on the application of patients’ rights in cross-border healthcare8 aims to facilitate access to safe and high-quality cross-border healthcare
(Article 1). In line with this, each Member State is given the responsibility to
ensure on their territory the implementation of common operating principles that
all EU citizens would expect to find – and structures to support them – in any
health system in the EU (Council of the European Union, 2006). This includes
in the first place the obligation for the Member State providing treatment to
guarantee cross-border patients access to good-quality care in accordance with
the applicable standards and guidelines on quality and safety (Article 4.1 (b)). In
addition, they are also entitled to relevant information to help them make rational choices (Article 4.2(a) and (b)), as well as recourse
to transparent complaint procedures and redress mechanisms (Article 4.2(c)),
and systems of professional liability insurance or similar arrangements (Article
4.2(d)). Finally, they also have the right to privacy protection with respect to
the processing of personal data (Article 4.2(e)), as well as the right to have and
to access a personal medical record (Article 4.2(f)).
In its current form the Directive does not contain any obligation for Member
States to define and implement quality and safety standards. It only states that
if such standards and guidelines exist they should also apply in the context of
healthcare provided to cross-border patients. The Commission’s initial proposal
was more ambitious: it wanted to set EU minimum requirements on quality
and safety for cross-border healthcare. However, this was considered by the
Member States as overstepping national competence to organize healthcare and
was reframed into an obligation merely to inform patients about the applicable
quality and safety standards and guidelines. Member States are also required to assist each other in implementing the Directive, in particular regarding standards and guidelines on quality and safety and the exchange of information
(Article 10.1). Under Chapter IV of the Directive Member States are encouraged
8 Directive 2011/24/EU of 9 March 2011 on the application of patients’ rights in cross-border healthcare,
OJ L88/45–65
9 See Article 3g; “healthcare provider” means any natural or legal person or any other entity legally providing
healthcare on the territory of a Member State.
pain and/or the nature of the patient’s disability at the time when the request
for authorization was made or renewed (Article 8.5). However, the European
Court of Justice made clear that the state of the health system also has to be
taken into account. In the Petru case it held that if a patient cannot get hospital
treatment in good time in his own country because of a lack of medication and
basic medical supplies and infrastructure, reimbursement of medical expenses
incurred in another Member State cannot be refused.10 But also access to more
advanced (better-quality) therapies has been a recurring point of discussion as
to whether it would justify reimbursement. In the Rindal and Slinning case the
EFTA Court held that when it is established according to international medicine
that the treatment abroad is indeed more effective, the state may no longer justify
prioritizing its own offer of treatment.11 And in the Elchinov case the European
Court of Justice stated that if the benefit basket of a country would only define
the types of treatment covered but not specify the specific method of treatment,
prior authorization could not be refused for a more advanced treatment (i.e.
proton therapy for treating an eye tumour) if this was not available within an
acceptable time period (Sokol, 2010).
Box 4.2 Soft law instruments to improve quality of cancer control policies
in the EU
A good example is the action undertaken in the field of the fight against cancer. As one of the first
areas where a specific Community initiative on health was launched, over time the political focus
gradually expanded from one that essentially promoted cooperation in research and prevention to
a more horizontal and integrated approach that covers all aspects of prevention, treatment and
follow-up of cancer as a chronic disease. Following the “Europe against Cancer” programme that
was started in 1985, the Council of Health Ministers in 2003 adopted a Council Recommendation
on cancer screening, setting out principles of best practice in the early detection of cancer and
calling for action to implement national population-based screening programmes for breast,
cervical and colorectal cancer (Council of the European Union, 2003). To ensure appropriate quality
assurance at all levels, the Commission, in collaboration with WHO’s International Agency for
Research on Cancer (IARC), produced European guidelines for quality assurance in cervical, breast and colorectal cancer screening and diagnosis. The European Partnership
for Action against Cancer (EPAAC) that was launched in 2009 also marked the identification and
dissemination of good practice in cancer-related healthcare as one of its core objectives (EC,
2009). This focus on integrated cancer care services is also reflected in the Cancer Control Joint
Action (CanCon). As a next step, under the European Commission Initiative on Breast Cancer
(ECIBC) launched in 2012, a ground-breaking project was started to develop a European quality
assurance scheme for breast cancer services (BCS) underpinned by accreditation and referring
to high-quality, evidence-based guidelines.*
* http://ecibc.jrc.ec.europa.eu/
and regional strategies for patient safety, maximize the scope for cooperation and
mutual support across the EU and improve patients’ confidence by improving
information on safety in health systems. The recommendations were mainly aimed
at fostering a patient safety culture and targeted health professionals, patients,
healthcare managers and policy-makers. Some of the measures proposed in the
Recommendation (for example, information to patients about patient safety standards, complaint mechanisms for patients harmed while receiving healthcare, and remedies and redress) were also included in the safety provisions of the 2011
cross-border healthcare Directive (see above). Two consecutive implementation
reports published in 2012 and 2014 demonstrated the significant progress that
was made in the development of patient safety policies and programmes as well
as of reporting and learning systems on adverse events (EC, 2012b, 2014c). Still,
more efforts are needed to educate and train health professionals12 and to empower patients (see also Chapter 11). This again pushed towards looking at quality aspects beyond safety alone.
The work on patient safety paved the way for broadening the scope of EU collaborative action to the full spectrum of quality in healthcare. With a somewhat
more relaxed EU mandate on health systems, which was reflected in the EU
health strategy (2008) and later confirmed in the Council Conclusion on the
reflection process on modern, responsive and sustainable health systems (2011),
the Commission could start to develop a Community framework for safe, high-
quality and efficient health services that would support Member States in making
their health systems more dynamic and sustainable through coordinated action
at EU level (Council of the European Union, 2011). Some of the preparatory
work was entrusted to the Working Group (then the Expert Group) on Patient
Safety and Quality of Healthcare, which supported the policy development by
the European Commission until 2017. The Joint Action on patient safety and quality of healthcare (PaSQ), launched in 2012, also helped to further
strengthen cooperation between EU Member States, international organizations
and EU stakeholders on issues related to quality of healthcare, including patient
safety. In 2014 the Commission’s Expert Panel on Effective Ways of Investing in
Health (EXPH) was asked to produce an opinion on the future EU agenda on
quality. The report emphasized the important role that the European Commission
can play in improving quality and safety in healthcare – either through the Health
Programme or the Research Framework Programme (see Table 4.5) – by sup-
porting the development of guidelines and sharing of good practices, boosting
research in this area, promoting education and training of both patients and
health professionals, further encouraging cooperation on HTA, collecting the
12 This was addressed under the 2010 Belgian EU Presidency: see Flottorp SA et al. (2010). Using audit and
feedback to health professionals to improve the quality and safety of health care. Copenhagen: WHO
Regional Office for Europe and European Observatory on Health Systems and Policies
necessary data, etc. It also proposed the creation of an EU Health Care Quality
Board to coordinate EU initiatives in this field and the development of an HSPA
framework to compare and measure impacts (Expert Panel on Effective Ways of
Investing in Health, 2014). In 2014, under the Italian EU Presidency, Council
Conclusions were adopted that invited the Commission and Member States to
develop a methodology of establishing patient safety standards and guidelines,
and to propose a framework for sustainable EU collaboration on patient safety
and quality of care. Moreover, the Commission was invited to propose a rec-
ommendation on information to patients on patient safety. However, the EU
activities on patient safety and quality of care were discontinued in 2015 and
to date none of the recommendations made by the Expert Panel or the Council
Conclusions has been taken forward.
Especially since the EU increased its role in monitoring the financial sustain-
ability and performance of health systems, quality of healthcare has become
embedded within the context of health system performance measurement and
improvement. In order to ensure the uptake of the health theme in the European
Semester process, the Council called for translating the concept of “access to
good quality healthcare” into operational assessment criteria (Council of the
European Union, 2013). The 2014 Commission’s Communication on modern,
responsive and sustainable health systems also marked quality as a core element
of health systems’ performance assessment (HSPA) (EC, 2014a). As a result, the Expert Group on HSPA, set up in 2014, produced a report on quality as a first strand of its work (EC, 2016). The goal of this report was not so much to compare
or benchmark quality between Member States but rather to support national
policy-makers by providing examples, tools and methodologies for implementing
or improving quality strategies. This should not only help to optimize the use
of resources but also to improve information on quality and safety as required
under Directive 2011/24.
4.4 Conclusions
This chapter has tried to show how international frameworks can help foster and
support quality initiatives in countries. The international dimension is particularly
important to raise political awareness, to share experience and practice and to
provide tools (conceptual frameworks, standards, models, assessment frameworks)
for implementing quality measures and policies at national and regional level. The
legal status and binding nature of the various international instruments differ.
Most legally binding instruments are to be found at EU level, but their prime
purpose is often facilitating free movement rather than ensuring quality. Non-binding instruments, too, have proven effective in pushing policy-makers at country level to put quality in healthcare on the political agenda.
The various international organizations have been cooperating with each other
on quality, and also complementing one another’s work. As an example, EU
pharmaceutical legislation makes direct reference to the Council of Europe’s
European Pharmacopoeia, and the European Medicines Agency (EMA) and the
European Directorate for the Quality of Medicines and Healthcare (EDQM)
work closely together.
Quality has more recently become an international political priority and has rapidly gained importance as a lever for the international community to push health system reform, complementing the objective of universal health coverage. The efforts made by international organizations, such as WHO, to
support national actors with guidance, information, practical tools and capacity
building have paid off and contributed to launching a global movement advocat-
ing for monitoring and improving quality and safety in healthcare worldwide.
Next to the importance of quality from a public health perspective, and its
close ties with fundamental patient rights, there is also an important economic
dimension. In a context of increasing mobility and cross-border exchange in
healthcare, quality can constitute both a driver of and an impediment to free
movement. This is also why, at EU level, attention to quality was initially rather indirect, serving as a precautionary measure to ensure the realization of the internal market, mostly the free movement of medical goods and health professionals.
When the European Court of Justice in 1998 had to deal with the first cases on
cross-border care, it explicitly referred to the framework of mutual recognition
of professional qualifications to dismiss the argument put forward by Member
States that differences in quality would justify denying reimbursement of medi-
cal care provided in another Member State (Ghekiere, Baeten & Palm, 2010).
However, the principle of mutual recognition, which is one of the cornerstones
of the EU Single Market as it guarantees free movement without the need to
harmonize Member States’ legislation (Ghekiere, Baeten & Palm, 2010), is not
always considered sufficient to guarantee high quality and safety standards. Moreover, attempts at EU level to submit national regulation to a proportionality test and to develop an industry-driven form of standardization in healthcare provision have met with criticism and concern from the health sector.
Hence, the awareness grew that a more integrated approach was needed for
promoting and ensuring quality in healthcare, with the case law on patient
mobility as a turning point. The 2003 High Level Process of Reflection on
Patient Mobility and Healthcare Developments in the European Union called
for more systematic attention and information exchange on quality issues as
well as for assessment of how European activities could help to improve quality
(EC, 2003). The High Level Group on Health Services and Medical Care that
started to work in 2004 further elaborated quality-related work in various areas,
International and EU governance and guidance for national healthcare quality strategies 93
which eventually also made its way into the EU’s horizontal health strategy (EC,
2007). While the focus initially concentrated on safety, it gradually broadened to include other aspects of quality, such as patient-centredness.
The added value of cross-border cooperation in specific areas, such as quality
and safety, and sharing experiences and information about approaches and good
practice is widely recognized. While it is not considered appropriate to harmonize health systems, the development of quality standards or practice guidelines at EU level was always envisaged as a way of making health systems converge further (Cucic, 2000). Even if Directive 2011/24/EU on the
application of patients’ rights in cross-border healthcare finally did not include
any obligation for Member States to introduce quality and safety standards,
it did provide a clear mandate for the European Commission to claim more
transparency around quality and safety, from which domestic patients would
also benefit (Palm & Baeten, 2011).
This illustrates how the EU’s involvement in healthcare quality has gradually
evolved. Besides a broadening of its scope, there has also been a move from merely fostering the sharing of information and best practices towards standardization and even the first signs of enforcement (Vollaard, van de Bovenkamp & Vrangbæk, 2013).
Table 4.5 A selection of EU-funded projects on quality and/or safety

HCQI (Health Care Quality Indicators), 2002– (HP; www.oecd.org/els/health-systems/health-care-quality-indicators.htm): The OECD Health Care Quality Indicators project, initiated in 2002, aims to measure and compare the quality of health service provision in different countries. An Expert Group has developed a set of quality indicators at the health systems level, which allows the impact of particular factors on the quality of health services to be assessed.

QM IN HEALTH CARE (Exchange of knowledge on quality management in healthcare), 2003–2005 (FP5): The aim of the project was to facilitate and coordinate the exchange of information and expertise on similarities and differences among European countries in national quality policies, and in research methods to assess quality management (QM) in healthcare organizations at a national level.

SImPatiE (Safety Improvement for Patients in Europe), 2005–2007 (HP): The SImPatiE project gathered a Europe-wide network of organizations, experts, professionals and other stakeholders to establish a common European set of strategies, vocabulary, indicators and tools to improve patient safety in healthcare. It focused on facilitating free movement of people and services.

MARQuIS (Methods of Assessing Response to Quality Improvement Strategies), 2005–2007 (FP6): The MARQuIS project’s main objective was to identify and compare different quality improvement policies and strategies in healthcare systems across the EU Member States and to consider their potential use for cross-border patients. Next to providing an overview of different national quality strategies, it described how hospitals in a sample of states applied them to meet the defined requirements of cross-border patients.

EUnetHTA (European network for Health Technology Assessment), 2006– (FP7, HP; www.eunethta.eu): EUnetHTA was first established as a project to create an effective and sustainable network for health technology assessment across Europe. After the successful completion of the EUnetHTA Project (2006–2008), the EUnetHTA Collaboration was launched in November 2008 in the form of a Joint Action. Under the cross-border health Directive it is being transformed into a permanent structure to help in developing reliable, timely, transparent and transferable information to contribute to HTAs in European countries.

EUNetPaS (European Network for Patient Safety), 2008–2010 (HP): The aim of this project was to encourage and improve partnership in patient safety by sharing the knowledge, experiences and expertise of individual Member States and EU stakeholders on patient safety culture, education and training, reporting and learning systems, and medication safety in hospitals.

VALUE+ (Value+ Promoting Patients’ Involvement), 2008–2010 (HP): The project’s objective was to exchange information, experiences and good practices around the meaningful involvement of patients’ organizations in EU-supported health projects at EU and national level and to raise awareness of its positive impact on patient-centred and equitable healthcare across the EU.

ORCAB (Improving quality and safety in the hospital: the link between organizational culture, burnout and quality of care), 2009–2014 (FP7): This project sought to highlight the role of hospital organizational culture and physician burnout in promoting patient safety and quality of care. It aimed to profile and monitor the specific factors of hospital organizational culture that increase burnout among physicians and their impact on patient safety and quality of care.

EuroDRG (Diagnosis-Related Groups in Europe: Towards Efficiency and Quality), 2009–2011 (FP7; www.eurodrg.eu): Based on a comparative analysis of DRG systems across 10 European countries embedded in various types of health system, this project aimed to improve knowledge on DRG-based hospital payment systems and their effect on health system performance. Since policy-makers are often concerned about the impact on quality, the project specifically assessed the relationship between costs and the quality of care.

DUQuE (Deepening our Understanding of Quality improvement in Europe), 2009–2014 (FP7; www.duque.eu): This was a research project to study the effectiveness of quality improvement systems in European hospitals. It mainly looked at the relationship between organizational quality improvement systems, organizational culture, professional involvement and patient involvement in quality management and their effect on the quality of hospital care (clinical effectiveness, patient safety and patient experience). A total of 192 hospitals from eight countries participated in the data collection. Seven measures for quality management were developed and validated.

HP = Health Programme; FP = Framework Programme for Research
References
Abbing HDCR (1997). The right of the patient to quality of medical practice and the position of
migrant doctors within the EU. European Journal of Health Law, 4:347–60.
Alarcón-Jiménez O (2015). The MEDICRIME Convention – Fighting against counterfeit
medicines. Eurohealth, 21(4):24–7.
Baeten R (2017). Was the exclusion of health care from the Services Directive a pyrrhic victory?
A proportionality test on regulation of health professions. OSE Paper Series, Opinion Paper
18. Brussels. Available at: http://www.ose.be/files/publication/OSEPaperSeries/Baeten_2017_
OpinionPaper18.pdf, accessed 3 December 2018.
Baeten R, Palm W (2011). The compatibility of health care capacity planning policies with EU
Internal Market Rules. In: Gronden JW van de et al. (eds.). Health Care and EU Law.
Bertinato L et al. (2005). Policy brief: Cross-border health care in Europe. Copenhagen: WHO
Regional Office for Europe.
Buchan J, Glinos IA, Wismar M (2014). Introduction to health professional mobility in a changing
Europe. In: Buchan J et al. (eds.). Health Professional Mobility in a Changing Europe. New
dynamics, mobile individuals and diverse responses. Observatory Studies Series. WHO
Regional Office for Europe.
Charter of fundamental rights of the European Union, 2002, Article 35.
Cluzeau F et al. (2003). Development and validation of an international appraisal instrument for
assessing the quality of clinical practice guidelines: the AGREE project. Quality and Safety
in Health Care, 12(2003):18–23.
Committee of Ministers (2001). Recommendation Rec(2001)13 of the Committee of Ministers
to Member States on developing a methodology for drawing up guidelines on best medical
practices, adopted by the Committee of Ministers on 10 October 2001 at the 768th meeting
of the Ministers’ Deputies.
Committee on Quality of Health Care in America, IoM et al. (1999). To err is human: Building
a safer health system. Washington, DC: National Academy Press.
Council of Europe (1997). The development and implementation of quality improvement systems
(QIS) in health care. Recommendation No. R(97)17 adopted by the Committee of Ministers
of the Council of Europe on 30 September 1997 and explanatory memorandum. Strasbourg:
Council of Europe.
Council of European Dentists (2015). Cf. Statement by the Council of European Dentists, Common
training principles under Directive 2005/36/EC. Available at: http://www.eoo.gr/files/pdfs/
enimerosi/common_training_principles_under_dir_2005_36_en_ced_doc_2015_023_fin_e.
pdf, accessed 3 December 2018.
Council of the European Union (2003). Council Recommendation of 2 December 2003 on
cancer screening (2003/878/EC), OJ L 327/34–37.
Council of the European Union (2006). Council Conclusions on Common values and principles
in European Union Health Systems, (2006/C 146/01). Official Journal of the European Union,
C-146:1–3. Available at: http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:C:2006:146:0001:0003:EN:PDF, accessed 3 December 2018.
Council of the European Union (2009). Council Recommendation of 9 June 2009 on patient
safety, including the prevention and control of healthcare associated infections. Official Journal
of the European Union, C151.
Council of the European Union (2011). Council conclusions “Towards modern, responsive and
sustainable health systems”. Luxembourg.
Council of the European Union (2013). Council conclusions on the “Reflection process on
modern, responsive and sustainable health systems”.
Council of the European Union (2017). Proposal for a directive of the European Parliament and
of the Council on a proportionality test before adoption of new regulation of professions –
General approach.
Cucic S (2000). European Union health policy and its implication for national convergence.
International Journal for Quality in Health Care, 12(3):224.
Directive 2005/36/EC on the recognition of professional qualifications. Official Journal of the
European Union, L255.
Directive 2011/24/EU of 9 March 2011 on the application of patients’ rights in cross-border
healthcare. Official Journal of the European Union, L88:45–65.
Directive 2013/55/EU of 20 November 2013 amending Directive 2005/36/EC on the recognition of
professional qualifications and Regulation (EU) No. 1024/2012 on administrative cooperation
through the Internal Market Information System (‘the IMI Regulation’). Official Journal of
the European Union, L354.
EC (2003). High Level Process of Reflection on Patient Mobility and Healthcare Developments
in the European Union, Outcome of the Reflection Process. HLPR/2003/16. Available at:
http://ec.europa.eu/health/ph_overview/Documents/key01_mobility_en.pdf, accessed 3
December 2018.
EC (2006). Commission Communication SEC (2006) 1195/4: Consultation regarding Community
action on health services.
EC (2007). Together for Health: A Strategic Approach for the EU 2008–2013. COM (2007) 630
final. Available at: http://ec.europa.eu/health/ph_overview/Documents/strategy_wp_en.pdf,
accessed 3 December 2018.
EC (2008b). Communication on patient safety, including the prevention and control of healthcare-
associated infections, 15 December 2008. COM (2008) 837 final.
EC (2009). Communication of 24 June 2009 on Action against Cancer: European Partnership.
COM (2009) 291 final.
EC (2011). A strategic vision for European standards: moving forward to enhance and accelerate
the sustainable growth of the European economy by 2020. COM(2011) 311. Brussels.
EC (2012a). Communication on safe, effective and innovative medical devices and in vitro
diagnostic medical devices for the benefit of patients, consumers and healthcare professionals.
COM (2012) 540 final. Luxembourg: Publications Office of the European Union.
EC (2012b). Report from the Commission to the Council on the basis of Member States’ reports
on the implementation of the Council recommendation (2009/C 151/01) on patient safety,
including the prevention and control of healthcare associated infections. COM (2012) 658 final.
EC (2014a). Communication on modern, responsive and sustainable health systems. COM
(2014) 215 final.
EC (2014c). Report from the Commission to the Council. The Commission’s Second Report to the
Council on the implementation of Council Recommendation 2009/C 151/01 on patient safety,
including the prevention and control of healthcare associated infections. COM (2014) 371.
EC (2015). Flash Eurobarometer 210. Cross-border health services in the EU, June 2007; Special
Eurobarometer 425 – Patients’ rights in cross-border healthcare in the European Union.
EC (2016). So What? Strategies across Europe to assess quality of care. Report by the Expert
Group on Health Systems Performance Assessment. European Commission (EC). Brussels:
European Commission.
ECIBC (2014). European Commission Initiative on Breast Cancer: background and concept.
Available at: https://ec.europa.eu/jrc/sites/jrcsh/files/ECIBC%20background%20and%20
concept.pdf.
European Commission Joint Research Centre (2013). Round table European Forum for Science
and Industry. “Putting Science into Standards: the example of Eco-Innovation”.
European Hospital and Healthcare Federation et al. (2016). Joint letter of 6
July 2016. Available at: http://www.epsu.org/sites/default/files/article/files/
HOPE%2BCPME%2BCED%2BEPSU%2BETUC-Letter-Standardisation-06.07.16.pdf,
accessed 2 February 2017.
Expert Panel on Effective Ways of Investing in Health (2014). Future EU Agenda on quality of
health care with a special emphasis on patient safety.
Federal Ministry of Labour, Health and Social Affairs (1998). Quality in health care: opportunities
and limits of co-operation at EU level. Report of meeting of European Union Health Ministers
on quality in healthcare. Vienna: Federal Ministry.
Federal Ministry of Social Security and Generations (2001). Quality policy in the health care
systems of the EU accession candidates. Vienna: Federal Ministry.
Ghekiere W, Baeten R, Palm W (2010). Free movement of services in the EU and health care.
In: Mossialos E et al. (eds.). Health systems governance in Europe: the role of European Union law and policy. Cambridge:
Cambridge University Press, pp. 461–508.
Glinos IA et al. (2015). How can countries address the efficiency and equity implications of
health professional mobility in Europe? Adapting policies in the context of the WHO Code
of Practice and EU freedom of movement. Policy Brief, 18:13.
Greer SL, Vanhercke B (2010). The hard politics of soft law: the case of health. In: Mossialos
E et al. (eds.). Health systems governance in Europe: the role of European Union law and policy. Cambridge: Cambridge
University Press, pp. 186–230.
Greer SL et al. (2014). Everything you always wanted to know about European Union health
policies but were afraid to ask.
Kramer DB, Xu S, Kesselheim AS (2012). Regulation of medical devices in the United States and
European Union. New England Journal of Medicine, 366(9):848–55.
Legido-Quigley H et al. (eds.) (2013). Clinical guidelines for chronic conditions in the European
Union.
Maier CB et al. (2011). Cross-country analysis of health professional mobility in Europe: the
results. In: Wismar M et al. (eds.). Health professional mobility and health systems. Evidence
from 17 European countries. Observatory Studies Series. WHO Regional Office for Europe,
p. 49.
Nys H, Goffin T (2011). Mapping national practices and strategies relating to patients’ rights.
In: Wismar M et al. (eds). Cross-Border Healthcare: Mapping and Analysing Health Systems
Diversity. Copenhagen: World Health Organization on behalf of the European Observatory
on Health Systems and Policies, pp. 159–216.
OECD (2017). Caring for quality in health: Lessons learnt from 15 reviews of health care quality.
Paris: Organisation for Economic Co-operation and Development.
Palm W, Baeten R (2011). The quality and safety paradox in the patients’ rights Directive. European
Journal of Public Health, 21(3):272–4.
Palm W, Glinos I (2010). Enabling patient mobility in the EU: between free movement and
coordination. In: Mossialos E et al. (eds.). Health systems governance in Europe: the role
of European Union law and policy. Cambridge: Cambridge University Press, pp. 509–60.
Palm W et al. (2011). Towards a renewed community framework for safe, high-quality and efficient
cross-border health care within the European Union. In: Wismar M et al. (eds.). Cross-border
health care in the European Union: Mapping and analysing practices and policies. Observatory
Studies Series. WHO Regional Office for Europe.
Peeters (2005). Free movement of medical doctors: the new Directive 2005/36/EC on recognition
of professional qualifications. European Journal of Health Law, 12:373–96.
Peeters M (2012). Free movement of patients: Directive 2011/24 on the application of patients’
rights in cross-border healthcare. European Journal of Health Law, 19:1–32.
Peeters M, McKee M, Merkur S (2010). EU law and health professionals. In: Baeten R, Mossialos
E, Hervey T (eds.). Health Systems Governance in Europe: the Role of EU Law and Policy.
Health economics, policy and management. Cambridge: Cambridge University Press, p. 599.
Reynolds M et al. (2010). European coding system for tissues and cells: a challenge unmet? Cell
and Tissue Banking, 11:353–64.
Shaw C, Kalo I (2002). A background for national quality policies in health systems. Copenhagen:
WHO Regional Office for Europe.
Shaw CD (2015). How can healthcare standards be standardised? BMJ Quality and Safety,
24:615–19.
Sokol T (2010). Rindal and Elchinov: a(n) (impending) revolution in EU law on patient mobility.
Croatian Yearbook of European Law & Policy, 6(6):167–208.
Tiedje J, Zsigmond A (2012). How to modernise the professional qualifications Directive.
Eurohealth, 18(2):18–22.
Vollaard H, van de Bovenkamp HM, Vrangbæk K (2013). The emerging EU quality of care policy:
from sharing information to enforcement. Health Policy, 111(3):226–33.
White Paper: Together for health: a strategic approach for the EU 2008–2013.
WHO (1998). Health for All in the 21st Century, WHA51/5. Available at: http://apps.who.int/gb/archive/pdf_files/WHA51/ea5.pdf, accessed 3 December 2018.
WHO (2002). World Health Assembly Resolution WHA55.18: Quality of care: patient safety.
Wismar M et al. (2011). Health professional mobility and health systems in Europe: an introduction.
In: Wismar M et al. (eds.). Health professional mobility and health systems. Evidence from
17 European countries. Observatory Studies Series. WHO Regional Office for Europe, p. 11.
Part II
Chapter 5
Regulating the input:
health professions
Summary
Health professional education and training vary considerably across countries. For most health professionals, training consists of basic
undergraduate education and subsequent specialized training combined with on-
the-job learning. Only a few countries allow physicians to practise after finishing
undergraduate studies; usually, further specialization is required before they are
allowed to deliver patient care. The pathway to becoming a medical specialist is
very different among countries in Europe and worldwide. Nursing education is even
less uniform, with varying levels of regulation. Across Europe, nursing education is
usually subdivided into basic education, which offers the qualifications required to
practise as a professional nurse, and subsequent specialty training. Developments
in the design of training contents and curricula for medical and nursing education
in recent decades have been moving towards outcome-based education. Several
national bodies responsible for designing medical education in European countries
have developed frameworks to describe the desired outcomes of medical education
and define the competencies that graduates should possess to enter the profession. Countries differ in what is required to be granted the right to practise. At a minimum, professionals have to successfully finish basic
professional education. Licensing is often combined with mandatory registration
in a health professional register. Registers serve as a tool to inform the public
as well as potential employers about the accredited qualifications and scopes of
practice of a certain health professional. Licensing and registration for doctors are
mostly regulated at national level. In the majority of countries in Europe licensing
and registration are also required in order to practise as a nurse. Closely linked
to the process of licensing and registration are schemes for actively maintaining
professional competence. Overall, continuing education for specialist doctors
has become increasingly mandatory within the European Union, with 21 countries
operating obligatory systems. In countries with voluntary structures, continuing
education is at least actively supported. Continuing education and licence renewal
is less regulated for the nursing professions. Finally, there is little consistency in
how events that call into question the competence and qualities of medical professionals are handled across Europe. Considerable diversity exists in the range of topics
addressed by regulatory bodies, with almost all covering healthcare quality and
safety, and some also exploring themes around reputation and maintaining the
public’s trust in the profession.
Evidence on the effectiveness of these strategies stems from both qualitative and quantitative research. In nursing, research has shown that the shift towards
higher education can be beneficial for patient outcomes. Advanced knowledge and
skill acquisition during undergraduate nursing training was shown to be effective in improving perceptions of competence and confidence. There is little research regarding the effects of licensing on the quality of care; what exists suggests that
additional performance points achieved in the national licensing examinations in
the US were associated with small decreases in mortality. A review of reviews on
the effectiveness of continuing medical education on physician performance and
patient health outcomes found that continuing education is effective on all fronts:
in the acquisition and retention of knowledge, attitudes, skills and behaviours as
well as in the improvement of clinical outcomes, with the effects on physician per-
formance being more consistent than those on patient outcomes across studies.
No empirical evidence of good quality was identified on the cost-effectiveness of
licensing for physicians or nurses.
[Figure: health workforce framework linking levers (competitive remuneration, non-financial incentives, systems support, safety/health of workers) to motivated and supported staff, efficiency and effectiveness, and population health]
first lens of the quality framework described in Chapter 2). Regulation refers to
laws or bylaws defining the conditions for health professionals’ minimum edu-
cational requirements, entry to practice, title protection, scope-of-practice and
other measures, such as the regulation of continuing professional development.
Countries are free to decide whether to regulate a specific profession or not. The
decision is usually based on the level of complexity of the professional’s role and
its implications for patient safety. For some health professionals, such as healthcare
assistants, countries may choose not to regulate the titles or scopes-of-practice
by law, but to entrust the assurance of quality to other governance levers, such
as employer-based mechanisms or protocols. In most European countries the
primary aim of health professional regulation is to ensure quality and safety of
care – this is mirrored in professional codes of conduct, which usually define
quality of care as their main focus (Struckmann et al., 2015).
It is important to distinguish between the different regulatory mechanisms
that can be used to ensure quality in the health and social sectors (command
and control; meta-regulation; self-regulation; and market mechanisms – see
Schweppenstedde et al., 2014). Professional self-regulation (as reflected in profes-
sional codes of conduct, for example) plays an important role in the regulation
of health professionals. The professions are often involved in defining what
standards constitute good professional practice (WHO, 2006). Striking the
optimal balance between autonomy of the profession and regulation through
command and control can be challenging. The roles of different regulatory
bodies should be well balanced with clearly defined and transparent boundaries
between them and public authorities. The interpretation of this balance differs
considerably between countries, resulting in diverse national systems of health
professional regulation.
Given the complexity of healthcare provision, the overall group of health
professionals consists of numerous individual professions with different and
complementary job profiles and skill requirements. WHO distinguishes two
main groups of health professionals: (a) health service providers, encompassing
all workers whose daily activities aim at improving health, including doctors,
nurses, pharmacists, dentists and midwives working for hospitals, medical clin-
ics or other community providers as well as for organizations outside the health
sector (for example, factories or schools); and (b) health system workers, who do
not provide health services directly but ensure that health service providers can
do their jobs, like staff working in ministries of health, managers, economists,
or specialists for information systems (WHO, 2006).
The diversity of the health workforce is in turn reflected in highly multifaceted
and complex regulatory procedures for the different professions, and this chapter
does not attempt to give an exhaustive overview of related mechanisms for all
individual health professions. Although all health professionals are essential for a
national health system to function, the following sections focus on health service
providers as the ones directly involved with the delivery of healthcare services
to the population. Specifically, this chapter looks at the regulation of physicians
and nurses as two large health professional groups. It provides an overview of
strategies that aim to regulate the acquisition and maintenance of competence
among health professionals, discussing generic systems which are well established
in many European countries but also outlining the diversity among countries
when it comes to the detailed definition and practical application of these systems.
Following the general approach of this volume, this chapter is structured as fol-
lows: it first describes strategies that regulate health professionals along with cur-
rent practice in European countries for each strategy. These include: (a) strategies
to develop professional competence (including training structure and contents,
curriculum development and the accreditation of institutions for health educa-
tion); (b) strategies that regulate the entry of physicians and nurses into their
professions (for example, licensing and registration); (c) mechanisms to maintain
competence (for example, continuing professional development); and (d) levers
to address instances when fitness to practise comes into question. The interplay
of these strategies is shown in Fig. 5.2. The chapter then summarizes available
evidence on the effectiveness and cost-effectiveness of the described strategies
and subsequently derives implications for their implementation.
Tables 5.9 and 5.10 at the end of the chapter respectively provide an overview of
the national bodies responsible for regulating physicians and nurses in selected
European countries.
Fig. 5.2 Strategies for regulating health professionals (in this chapter)
[Figure: a pathway from basic/undergraduate education (graduation/state exam, internship/practical training) through specialization/postgraduate education to entry to practice; the right to practise is either life-long or time-limited, with relicensing and/or re-registration and financial incentives/contracting supporting the maintenance of competence; violations or complaints trigger sanctions for malpractice of increasing severity (reprimand, restrictions, dismissal, infringement of the right to practise), which can be lifted after rectification or time elapsed]
[Figure: transversal skills (~core/soft skills; knowledge, competencies, abilities) as building blocks for job-specific skills, needed to perform in practice; they comprise cognitive skills (e.g. reading, writing, mathematics, use of information and communication technologies) and non-cognitive skills (e.g. problem-solving, emotional health, social behaviour, work ethic and community responsibility)]
Admission criteria for basic nursing education range from a minimum number
of general education years and selection exams to the requirement of a medical
certification (Humar & Sansoni, 2017).
For most health professionals training consists of basic undergraduate educa-
tion and subsequent specialized training combined with on-the-job learning.
Accordingly, physicians usually complete undergraduate studies in medicine at
university level and in most cases require additional (postgraduate) training at a
hospital in order to practise (WHO, 2006). The national regulation of profes-
sions in Europe is guided by the EU Directives on the recognition of professional
qualifications (see Box 5.1).
Increasing mobility of health professionals and patients across Europe challenges national
regulations of qualification standards. Specifically, in the European Union and European Economic
Area (EU/EEA) mobility is facilitated by the principle of freedom of movement. Mobility is catalysed
by growing shortages in certain health professions or rural/underserved regions and countries,
which lead national organizations to actively recruit staff to fill vacancies. Increasing professional
migration has led to the realization that broader EU-level legislative changes need to be considered
in the development of the healthcare workforce within EU Member States (Leone et al., 2016).
Several efforts at European level have aimed to ensure quality (and, therein, safety) of care in
light of the mobility of health professionals. The Bologna Process, launched in 1999, had an
enormous impact on the homogenization of education. Directive 2013/55/EU of 20 November 2013, amending Directive 2005/36/EC of the European Parliament and of the Council of 7 September 2005, forms the legal foundation for the mutual recognition of professional qualifications in EU
and EEA countries. The framework ensures that health professionals can migrate freely between
EU Member States and practise their profession. The new Directive came into effect in January
2016. It introduced the possibility for responsible authorities to have professionals undergo
language tests, a warning system to identify professionals who have been banned from practice,
and a European professional card as an electronic documentation tool to attest a professional’s
qualifications and registration status (Ling & Belcher, 2014).
However, some important issues remain under the EU framework, such as the widely variable standards for accreditation of specialist training. Additional initiatives have been working to close this gap. For example, the European Union of Medical Specialists (UEMS) was founded in 1958 as the representative organization of all medical specialists in the European Community. Its mission is to promote patient safety and quality of care through the development of standards for medical training and healthcare across Europe. In 1994 the UEMS outlined guiding principles for a European approach to postgraduate medical training, with the intention to provide a
Regulating the input: health professions 113
voluntary complement to the existing national structures and ensure the quality of training across
Europe. More recently, the UEMS established the European Council for Accreditation of Medical
Specialist Qualifications (ECAMSQ®), which developed a competence-based approach for the
assessment and certification of medical specialists’ competence across Europe. This framework is
underpinned by the European Accreditation Council for Continuing Medical Education (EACCME®),
established in 1999, which provides the mutual recognition of accreditation of EU-wide and
international continuing medical education and continuing professional development activities.
To date, initiatives such as the EACCME® or ECAMSQ® have remained voluntary, complementing
what is provided and regulated by the national authorities and/or training institutions. As such,
their added value is predicated on the recognition provided by these national bodies. The aim
of UEMS remains to encourage the harmonization of specialist training across Europe with the
ambition to promote high standards of education, and in consequence high-quality healthcare,
but also to facilitate recognition of qualifications.
[Figure: timeline of health professional education and regulation by age (18–30 years); NE = national examination, L/R = licensing/registration process]
1 Also referred to as registered nurses. Because not all countries mandate the registration of their nurses, we use the term "professional nurse" in this chapter.
116 Improving healthcare quality in Europe
Table 5.1 Nurse categories and key elements of basic nursing education
for selected European countries
Several European countries have started to move nursing education entirely to the graduate level. The UK, for
example, introduced the Bachelor’s degree as the minimum level of education
for professional nurses in 2013 (Riedel, Röhrling & Schönpflug, 2016). Austria
is considering restricting nursing education exclusively to the university level by
2024 (Bachner et al., 2018).
In turn, the increasing importance of degree-level nursing education further
triggered the specialization of nurses and the expansion of their roles. The major-
ity of European countries offer some form of specialized training with varying
titles, levels and length of education (Dury et al., 2014). Nurse specialists may
be qualified to care for certain patient groups such as chronically ill patients
(Riedel, Röhrling & Schönpflug, 2016). The number of recognized speciali-
zations (for example, theatre nursing, paediatric nursing, anaesthesia, mental
health, public health or geriatrics) can vary considerably between countries.
Also, Master’s degrees in Advanced Practice Nursing are increasingly available.
Within Europe, Finland, the Netherlands, Ireland and the UK have established
nursing roles at an advanced practice level, for example, Nurse Practitioners at
Master’s level. They have considerably expanded the scope of practice of Nurse
Practitioners, changing the professional boundaries between the medical and
nursing professions (Maier & Aiken, 2016). This change in skill-mix has impli-
cations for whether and how to regulate changes to the division of tasks and
responsibilities between professions.
In Germany the National Competence-Based Catalogue of Learning Objectives for Undergraduate Medical Education (NKLM) was released in April 2015 (Steffens et al., 2018). The catalogue defines
a total of 234 competencies and 281 subcompetencies. Primarily, the NKLM
serves as a recommendation for the restructuring of medical curricula. Medical
faculties are encouraged to compare their existing curricula with the catalogue
and gather practical experience before it becomes mandatory for medical educa-
tion in Germany.
Competency-based requirements for postgraduate medical training, on the other hand, have been implemented in only a few European countries (Weggemans et al.,
2017). In the UK specialty training programmes define standards of knowledge,
skills and behaviours according to the General Medical Council’s framework
“Good Medical Practice” (General Medical Council, 2013). In the Netherlands
assessment during postgraduate medical education is competence-based; compe-
tencies for specialty training are increasingly described at national level to ensure
all specialists possess all necessary competencies. All starting residents keep a portfolio, which documents their progression in pre-defined competency domains and forms the basis for progress evaluations. Postgraduate training in Germany
is not yet competency-based but some initiatives are under way. For instance,
the working group on post-graduate training at the German Society of Primary
Care Paediatrics defined guidelines (PaedCompenda) for educators in paediatric
medicine (Fehr et al., 2017). At the European level ECAMSQ®, established by
the UEMS (see Box 5.1), is developing a common framework for the assessment
and certification of medical specialists’ competence based on the core curricula
developed by the specialist sections of the UEMS.
Outside Europe, in 1998 the Accreditation Council for Graduate Medical Education (ACGME) in the United States defined six domains of clinical competence for graduate medical education programmes that reliably depict residents'
ability to care for patients and to work effectively in healthcare delivery systems
(Swing, 2007). The competencies were refined in 2013 alongside the definition
of milestones towards achieving them. Similarly, the Royal College of Physicians
and Surgeons of Canada introduced the Canadian Medical Educational Directives
for Specialists (CanMEDS) framework,2 which groups the abilities that physi-
cians require to effectively meet healthcare needs under seven roles (professional,
communicator, collaborator, leader, health advocate, scholar and the integrat-
ing role of medical expert). The CanMEDS framework was subsequently also
adopted in the Netherlands as of 2005–2006. In Australia the Confederation of
Postgraduate Medical Education Councils (CPMEC) launched the (outcome-
based) Australian Curriculum Framework for Junior Doctors in October 2006
(Graham et al., 2007).
2 http://www.royalcollege.ca/rcsite/canmeds/canmeds-framework-e.
Various methods, such as patient surveys, record review and patient simulation, can be used to measure professional performance.
Case-based learning: an instructional design model which addresses higher order knowledge and skill objectives (actual or authored clinical cases are created to highlight learning objectives).
Clinical experiences: address skill, knowledge, decision-making and attitudinal objectives (preceptorship or observership with an expert to gain experience).
Demonstration: involves teaching or explaining by showing how to do or use something; addresses skill or knowledge objectives (live video or audio media).
Discussion group: addresses knowledge, especially application or higher order knowledge (readings, or another experience).
Feedback: addresses knowledge and decision-making (the provision of information about an individual's performance to learners).
Lecture: addresses knowledge content (live, video, audio).
Mentor or preceptor: a personal skill-development relationship in which an experienced clinician helps a less experienced clinician; addresses higher order cognitive and technical skills; also used to teach new sets of technical skills.
Point of care: addresses knowledge and higher order cognitive objectives (decision-making); information is provided at the time of clinical need, integrated into the chart or electronic medical record.
Problem-based learning or team-based learning: a clinician-centred instructional strategy in which clinicians collaboratively solve problems and reflect on their experiences; addresses higher order knowledge objectives, meta-cognition and some skill (group work) objectives (clinical scenario/discussion).
Programmed learning: aims to manage clinician learning under controlled conditions; addresses knowledge objectives (delivery of contents in sequential steps).
Readings: address knowledge content or background for attitudinal objectives (journals, newsletters, searching online).
Role play: addresses skill, knowledge and affective objectives.
Simulation: addresses knowledge, team working, decision-making and technical skill objectives (full simulation; partial task simulation; computer simulation; virtual reality; standardized patient; role play).
Standardized patient: addresses skill and some knowledge and affective objectives; usually used for communication and physical examination skills training and assessment.
Writing and authoring: addresses knowledge and affective objectives; usually used for assessment purposes.
Source: based on Ahmed et al., 2013
Germany – aim: quality of care; maintenance of doctors' knowledge and skills; focus: lifelong learning; mandatory: yes, 250 credits over 5 years; accepted activities: general and specialty-specific CPD courses, individual learning, conference attendance, research and scientific publications, e-learning, time as visiting professional.
Hungary – aim: patient safety; focus: lifelong learning; mandatory: yes, 250 credits over 5 years; accepted activities: general and specialty-specific CPD courses, research and scientific publications, e-learning, time as visiting professional, portfolio, minimum hours of patient contact, mandatory intensive course.
Ireland – aim: maintenance of doctors' knowledge and skills; focus: lifelong learning, practice performance; mandatory: yes, 50 credits over 1 year; accepted activities: general CPD course, individual learning, conference attendance, research and scientific publications, clinical audit.
Poland – aim: maintenance of doctors' knowledge and skill; focus: lifelong learning; mandatory: yes, 200 credits over 4 years; accepted activities: general and specialty-specific CPD courses, conference attendance, teaching, research and scientific publications, e-learning.
Portugal – aim: career; focus: lifelong learning, practice performance; mandatory: no, NA over 5 years; accepted activities: portfolio.
Spain – aim: career; focus: lifelong learning, practice performance; mandatory: no, NA over 3 years; accepted activities: general CPD course, portfolio.
Source: based on Sehlbach et al., 2018
The EACCME® accredits Continuing Medical Education activities at the European level to ensure that they are free from commercial bias and follow an appropriate educational approach.
Continuing education and licence renewal are less regulated for the nursing
profession (IOM, 2010). Belgium and the UK are among the few countries
where nurses are required to demonstrate continuing education to re-register.
In Belgium general nurses have to renew registration every three years, nurse
specialists every six years (Robinson & Griffiths, 2007). In the UK nurses have to re-register annually and since 2016 they also have to revalidate their licence. The Nursing and Midwifery Council requires revalidation every three years, based on proof of participation in continuing education activities and reflection on their experiences among peers (Cylus et al., 2015). In other countries, such as the
Netherlands, the responsibility of continuing education for nurses resides with
the healthcare provider where the nurse is employed. However, nursing staff can
voluntarily record their training and professional development activities online
in the “Quality Register for Nurses” (Kwaliteitsregister). This offers individuals
the chance to compare their skills with professionally agreed standards of com-
petence (Kroneman et al., 2016).
within the hospital. In most countries disciplinary panels are mainly composed
of legal experts and health professionals in related specialties. Some countries
like Malta and the UK include lay people, while others such as Estonia, Finland,
Hungary, Slovenia and Spain use external experts (Struckmann et al., 2015).
The diverse sanctioning procedures at national level are also challenged by the
increasing mobility of health professionals. Health professionals banned from
practice may move to another country and continue practising if no adequate
cross-border control mechanisms are in place. Under the revised EU Directive
on the mutual recognition of professional qualifications (see Box 5.1), an alert
mechanism was established to enable warnings across Member States when a
health professional is banned or restricted from practice, even temporarily. This
idea came out of an earlier collaboration between competent authorities under
the “Health Professionals Crossing Borders” initiative led by the UK’s GMC.
Between the introduction of the mechanism in January 2016 and November 2017, more than 20 000 alerts were sent by competent Member State authorities, mostly pertaining to cases of professionals who were restricted or prohibited
from practice (very few alerts were related to the falsification of qualifications).
Surveyed stakeholders found the alert system appropriate for its purpose and
the Commission recognized the importance of continuous monitoring and
adaptation of its use and functionalities (European Commission, 2018a, 2018b).
and showing mixed effects, not least due to a lack of rigorous qualitative and
quantitative research.
One concept used to describe the “effectiveness” of medical education is pre-
paredness to practise. Effective education has to ensure that graduates are
prepared for the complexity and pressures of today’s practice (Monrouxe et al.,
2018). Despite constant developments in medical education, the self-perceived preparedness of doctors at various stages of their career still lags behind expectations.
Graduates feel particularly unprepared for specific tasks including prescribing,
clinical reasoning and diagnosing, emergency management or multidisciplinary
team working (Monrouxe et al., 2017; Geoghegan et al., 2017; General Medical
Council, 2014). Also, senior doctors and clinical supervisors are concerned that
patient care and safety may be negatively affected as graduate doctors are not
well enough prepared for clinical practice (Smith, Goldacre & Lambert, 2017;
Vaughan, McAlister & Bell, 2011).
The concept of preparedness can be used to measure the effect of new training
approaches, such as interactive training in small groups (for example, problem-
based or simulation-based training), compared to traditional training tech-
niques based on lectures and seminars. Empirical evidence on the effectiveness
of problem-based training approaches in undergraduate medical education is
mixed. On the one hand, some UK-based studies have shown a beneficial effect
of problem-based curricula on medical graduates’ preparedness (O’Neill et al.,
2003; Cave et al., 2009), reflected in better skills related to recognizing limi-
tations, asking for help and teamwork (Watmough, Garden & Taylor, 2006;
Watmough, Taylor & Garden, 2006). On the other hand, some more recent
studies observed no relation between the perceived, self-reported preparedness
of medical graduates and the type of training they received. For example, Illing
et al. (2013) found that junior doctors from all training types in the UK felt
prepared in terms of communication skills, clinical and practical skills, and
teamwork. They felt less prepared for areas of practice based on experiential
learning such as ward work, being on call, management of acute clinical situa-
tions, prescribing, clinical prioritization and time management, and dealing with
paperwork. Also, Miles, Kellett & Leinster (2017) found no difference in the
overall perceived preparedness for clinical practice and the confidence in skills
when comparing problem-based training with traditional discipline-based and
lecture-focused curricula. However, graduates having undergone problem-based
training felt better prepared for tasks associated with communication, teamwork
and paperwork than graduates from traditional training. Overall, more than
half of all graduates felt insufficiently prepared to deal with neurologically or
visually impaired patients, write referral letters, understand drug interactions,
manage pain and cope with uncertainty, regardless of curriculum type. Further
evidence has shown that a shift from multiple-choice-based assessment methods
mortality would be almost 30% lower in hospitals in which 60% of nurses had
Bachelor’s degrees and would care for an average of six patients compared to
hospitals in which only 30% of nurses had Bachelor’s degrees and cared for an
average of eight patients (Zander et al., 2016). Advanced knowledge and skill acquisition during undergraduate nursing training was shown to be effective
in improving perceptions of competence and confidence (Zieber & Sedgewick,
2018). In this context, the right balance between teaching hours and time for
clinical practice in nursing education is critical. For example, condensing the weekly timetable for Bachelor students in nursing in order to extend time in clinical placements was found to be related to lower academic achievement and a poorer-quality learning experience (Reinke, 2018).
Also, postgraduate-level nursing education can contribute to increased self-
perceived competence and confidence among nurses (Baxter & Edvardsson,
2018). Nurses in Master's programmes rate their competence higher than nurses
in specialist programmes (Wangensteen et al., 2018). Furthermore, academic
literacy is strongly related to the development of critical thinking skills which
in turn are of relevance for professional practice (Jefferies et al., 2018). A study
on nurse competencies in relation to evidence-based practice (which includes
components such as questioning established practices towards improving qual-
ity of care, identifying, evaluating and implementing best available evidence,
etc.) showed that the higher the level of education, the higher the perceived
competence in such approaches (Melnyk et al., 2018). Benefits of Master-level
education such as increased confidence and self-esteem, enhanced communica-
tion, personal and professional growth, knowledge and application of theory
to practice, as well as analytical thinking and decision-making may positively
affect patient care (Cotterill-Walker, 2012). However, quantitative evidence on whether Master-level nursing education makes a difference to patient outcomes is rare, and measurable, observable evaluation criteria remain to be developed.
Tan et al. (2018) recently reviewed the evidence on the effectiveness of outcome-
based education on the acquisition of competencies by nursing students. The
methodological quality of the few identified studies was moderate. Overall,
outcome-based education seemed to produce improvements in acquired nursing
competencies in terms of knowledge acquisition, skills performance, behaviour,
learning satisfaction and achieving higher order thinking processes. One study
reported contradictory, negative outcomes. The authors conclude that the cur-
rent evidence base is limited and inconclusive and more robust experimental
study designs with larger sample sizes and validated endpoints (including
patient outcomes) are needed, mirroring the findings by Onyura et al. (2016)
on medical education. In the same direction, Calvert & Freemantle (2009)
pointed out that “assessing the impact of a change in the undergraduate cur-
riculum on patient care may prove difficult, but not impossible”. They propose
clinical outcomes, with the effects on physician performance being more con-
sistent than those on patient outcomes across studies. The latter observation is
intuitive: it is methodologically more challenging to determine the extent of the
contribution of individual physicians’ actions to observed outcomes, as these are
also influenced by the healthcare system and the interdisciplinary team. Braido
et al. (2012) found that a one-year continuing education course for general
practitioners significantly improved knowledge. Training also resulted in phar-
maceutical cost containment and greater attention to diagnosis and monitoring.
More research is needed on the mechanisms of action by which different types of
continuing education affect physician performance and patient health. Although
numerous studies exist, as reviewed by Cervero & Gaines (2015), the variable
study objectives and designs hinder any generalizable conclusions. Bloom (2005)
found that interactive methods such as audit and feedback, academic detailing,
interactive education and reminders are most effective at improving performance
and outcomes. More conventional methods such as didactic presentations and
printed materials alone showed little or no beneficial effect. The superiority
of interactive, multimedia or simulation-based methods over conventional
approaches seems to be mirrored in other studies as well (Marinopoulos et al.,
2007; Mazmanian, Davis & Galbraith, 2009). At the same time, there seems to be overall agreement that a variety of strategies, or so-called "multiple exposures", is necessary to achieve the desired effects of continuing education in an optimal manner.
Table 5.10 Overview of national bodies that regulate nurses and midwives
in selected European countries
References
Ahmed K et al. (2013). The effectiveness of continuing medical education for specialist recertification.
Canadian Urological Association Journal, 7(7–8):266–72.
Aiken LH et al. (2014). Nurse staffing and education and hospital mortality in nine European
countries: a retrospective observational study. Lancet, 383(9931):1824–30.
Anand S, Bärnighausen T (2004). Human resources and health outcomes: cross-country econometric
study. Lancet, 364(9445):1603–9.
Bachner F et al. (2018). Austria: Health system review. Health Systems in Transition, 20(3):1–256.
Baxter R, Edvardsson D (2018). Impact of a critical care postgraduate certificate course on nurses’
self-reported competence and confidence: a quasi-experimental study. Nurse Education Today,
65:156–61.
Bisgaard CH et al. (2018). The effects of graduate competency-based education and mastery
learning on patient care and return on investment: a narrative review of basic anesthetic
procedures. BMC Medical Education, 18(1):154.
Bloom BS (2005). Effects of continuing medical education on improving physician clinical care
and patient health: a review of systematic reviews. International Journal of Technology Assessment
in Health Care, 21(3):380–5.
Braido F et al. (2012). Knowledge and health care resource allocation: CME/CPD course
guidelines-based efficacy. European Annals of Allergy and Clinical Immunology, 44(5):193–9.
Brown CA, Belfield CR, Field SJ (2002). Cost effectiveness of continuing professional development
in health care: a critical review of the evidence. BMJ, 324:652–5.
Busse R, Blümel M (2014). Germany: health system review. Health Systems in Transition, 16(2):1–
296.
Calvert MJ, Freemantle N (2009). Cost-effective undergraduate medical education? Journal of the
Royal Society of Medicine, 102(2):46–8.
Carraccio C et al. (2002). Shifting Paradigms: From Flexner to Competencies. Academic Medicine,
77(5):361–7.
Carraccio C et al. (2017). Building a framework of Entrustable Professional Activities, supported
by competencies and milestones, to bridge the educational continuum. Academic Medicine,
92(3):324–30.
Cave J et al. (2009). Easing the transition from student to doctor: how can medical schools help
prepare their graduates for starting work? Medical Teacher, 31(5):403–8.
Cervero RM, Gaines JK (2015). The impact of CME on physician performance and patient health
outcomes: an updated synthesis of systematic reviews. Journal of Continuing Education in the
Health Professions, 35(2):131–8.
Chevreul K et al. (2015). France: Health system review. Health Systems in Transition, 17(3):1–218.
Cooney R et al. (2017). Academic Primer Series: key papers about competency-based medical
education. Western Journal of Emergency Medicine, 18(4):713–20.
Cotterill-Walker SM (2012). Where is the evidence that master’s level nursing education makes a
difference to patient care? A literature review. Nurse Education Today, 32(1):57–64.
Cylus J et al. (2015). United Kingdom: Health system review. Health Systems in Transition,
17(5):1–125.
Diallo K et al. (2003). Monitoring and evaluation of human resources for health: an international
perspective. Human Resources for Health, 1(1):3.
Dimova A et al. (2018). Bulgaria: Health system review. Health Systems in Transition, 20(4):1–256.
Dury C et al. (2014). Specialist nurse in Europe. International Nursing Review, 61:454–62.
EFN (2012). EFN Competency Framework for Mutual Recognition of Professional Qualifications
Directive 2005/36/EC, amended by Directive 2013/55/EU. EFN Guideline to implement
Article 31 into national nurses’ education programmes. European Federation of Nurses Asso-
ciations (EFN). Available at: http://www.efnweb.be/?page_id=6897, accessed 4 January 2019.
Englander R et al. (2017). Toward a shared language for competency-based medical education.
Medical Teacher, 39(6):582–7.
European Commission (2018a). Assessment of stakeholders’ experience with the European Profes-
sional Card and the Alert Mechanism procedures. Available at: https://ec.europa.eu/docsroom/
documents/28671/attachments/1/translations/en/renditions/native, accessed 7 April 2019.
European Commission (2018b). Assessment of functioning of the European Professional Card and
the Alert Mechanism procedure. Available at: http://www.enmca.eu/system/files/epc_alerts.
pdf, accessed 7 April 2019.
Fehr F et al. (2017). Entrustable professional activities in post-licensure training in primary care
pediatrics: necessity, development and implementation of a competency-based post-graduate
curriculum. GMS Journal for Medical Education, 34(5):Doc67.
Fong S et al. (2019). Patient-centred education: how do learners’ perceptions change as they
experience clinical training? Advances in Health Sciences Education. Theory and Practice,
24(1):15–32. doi: 10.1007/s10459-018-9845-y.
Frank JR et al. (2010). Toward a definition of competency-based education in medicine: a systematic
review of published definitions. Medical Teacher, 32:631–7.
General Medical Council (2013). Good medical practice. Version from 25 March 2013. Available
at: https://www.gmc-uk.org/ethical-guidance/ethical-guidance-for-doctors/good-medical-
practice, accessed 3 December 2018.
General Medical Council (2014). The state of medical education and practice in the UK. Version
from November 2014. Available at: https://www.gmc-uk.org/about/what-we-do-and-why/
data-and-research/the-state-of-medical-education-and-practice-in-the-uk/archived-state-of-
medical-education-and-practice-in-the-uk-reports, accessed 3 December 2018.
Geoghegan SE et al. (2017). Preparedness of newly qualified doctors in Ireland for prescribing in
clinical practice. British Journal of Clinical Pharmacology, 83(8):1826–34.
Graham IS et al. (2007). Australian curriculum framework for junior doctors. Medical Journal of
Australia, 186(7):s14–s19.
Gravina EW (2017). Competency-based education and its effects on nursing education: a literature
review. Teaching and Learning in Nursing, 12:117–21.
Greiner AC, Knebel E (2003). Health professions education: a bridge to quality. Committee
on the Health Professions Education Summit; Board on Health Care Services; Institute of
Medicine. Available at: http://nap.edu/10681, accessed 3 December 2018.
Gulliford MC (2002). Availability of primary care doctors and population health in England: is
there an association? Journal of Public Health Medicine, 24:252–4.
Habicht T et al. (2018). Estonia: Health system review. Health Systems in Transition, 20(1):1–193.
Hautz SC et al. (2016). What makes a doctor a scholar: a systematic review and content analysis
of outcome frameworks. BMC Medical Education, 16:119.
Holen A et al. (2015). Medical students’ preferences for problem-based learning in relation to culture
and personality: a multicultural study. International Journal of Medical Education, 6:84–92.
Humar L, Sansoni J (2017). Bologna process and basic nursing education in 21 European countries.
Annali di Igiene: Medicina Preventiva e di Comunita, 29:561–71.
Illing JC et al. (2013). Perceptions of UK medical graduates’ preparedness for practice: a multi-
centre qualitative study reflecting the importance of learning on the job. BMC Medical
Education, 13:34.
IOM (2010). Redesigning Continuing Education in the Health Professions. Committee on
Planning a Continuing Health Care Professional Education Institute; Board on Health Care
Services; Institute of Medicine. Available at: http://nap.edu/12704, accessed 3 December 2018.
Ivers N et al. (2012). Audit and feedback: effects on professional practice and healthcare outcomes
(Review). Cochrane Database of Systematic Reviews, 2012(6):CD000259.
Jefferies D et al. (2018). The importance of academic literacy for undergraduate nursing students
and its relationship to future professional clinical practice: a systematic review. Nurse Education
Today, 60:84–91.
Koh J, Dubrowski A (2016). Merging Problem-Based Learning with Simulation-Based Learning
in the Medical Undergraduate Curriculum: the PAIRED Framework for Enhancing Lifelong
Learning. Cureus, 8(6):e647.
Kovacs E et al. (2014). Licensing procedures and registration of medical doctors in the European
Union. Clinical Medicine, 14(3):229–38.
Kroneman M et al. (2016). The Netherlands: health system review. Health Systems in Transition,
18(2):1–239.
Leach DC (2002). Competence is a habit. Journal of the American Medical Association, 287(2):243–4.
Leone C et al. (2016). Nurse migration in the EU: a moving target. Eurohealth incorporating Euro
Observer, 22(1):7–9.
Ling K, Belcher P (2014). Medical migration within Europe: opportunities and challenges. Clinical
Medicine (London), 14(6):630–2.
Maier CB, Aiken LH (2016). Task shifting from physicians to nurses in primary care in 39
countries: a cross-country comparative study. European Journal of Public Health, 26:927–34.
Marinopoulos SS et al. (2007). Effectiveness of continuing medical education. Evidence Report/
Technology Assessment (Full Report), 149:1–69.
Mazmanian PE, Davis DA, Galbraith R (2009). Continuing Medical Education Effect on Clinical
Outcomes: Effectiveness of Continuing Medical Education: American College of Chest
Physicians Evidence-Based Educational Guidelines. Chest, 135(3):49s–55s.
Melnyk BM et al. (2018). The First U.S. Study on Nurses’ Evidence-Based Practice Competencies
Indicates Major Deficits That Threaten Healthcare Quality, Safety, and Patient Outcomes.
Worldviews on Evidence Based Nursing, 15(1):16–25. doi: 10.1111/wvn.12269.
Melovitz Vasan CA et al. (2018). Analysis of testing with multiple choice versus open-ended
questions: outcome-based observations in an anatomy course. Anatomical Sciences Education,
11(3):254–61.
Miles S, Kellett J, Leinster SJ (2017). Medical graduates’ preparedness to practice: a comparison
of undergraduate medical school training. BMC Medical Education, 17(1):33.
Monrouxe LV et al. (2017). How prepared are UK medical graduates for practice? A rapid review
of the literature 2009–2014. BMJ Open, 7(1):e013656.
Monrouxe LV et al. (2018). New graduate doctors’ preparedness for practice: a multistakeholder,
multicentre narrative study. BMJ Open, 8(8):e023146.
Morcke AM, Dornan T, Eika B, (2013). Outcome (competency) based education: an exploration
of its origins, theoretical basis, and empirical evidence. Advances in Health Sciences Education,
18:851–63.
Nara N, Suzuki T, Tohda S (2011). The Current Medical Education System in the World. Journal
of Medical and Dental Sciences, 58:79–83.
Needleman J et al. (2002). Nurse-staffing levels and the quality of care in hospitals. New England
Journal of Medicine, 346:1715–22.
Norcini JJ, Lipner RS, Kimball HR (2002). Certifying examination performance and patient
outcomes following acute myocardial infarction. Medical Education, 36:853–9.
OECD (2016). Health workforce policies in OECD countries: right jobs, right skills, right places.
OECD Health Policy Studies. Paris: OECD Publishing.
OECD (2018). Feasibility study on health workforce skills assessment. Supporting health workers
achieve person-centred care. OECD Health Division team, February 2018. Available at:
http://www.oecd.org/els/health-systems/workforce.htm, accessed 3 December 2018.
O’Neill PA et al. (2003). Does a new undergraduate curriculum based on Tomorrow’s Doctors
prepare house officers better for their first post? A qualitative study of the views of pre‐
registration house officers using critical incidents. Medical Education, 37:1100–8.
Regulating the input: health professions 149
Onyura B et al. (2016). Evidence for curricular and instructional design approaches in
undergraduate medical education: an umbrella review. Medical Teacher, 38(2):150–61. doi:
10.3109/0142159X.2015.1009019.
Ousy K (2011). The changing face of student nurse education and training programmes. Wounds
UK, 7(1):70–6.
Phillips RL et al. (2017). The effects of training institution practice costs, quality, and other
characteristics on future practice. Annals of Family Medicine, 15(2):140–8.
Pijl-Zieber EM et al. (2014). Competence and competency-based nursing education: finding our
way through the issues. Nurse Education Today, 34(5):676–8.
Price T et al. (2018). The International landscape of medical licensing examinations: a typology
derived from a systematic review. International Journal of Health Policy and Management,
7(9):782–90.
Rashid A, Manek N (2016). Making primary care placements a universal feature of postgraduate
medical training. Journal of the Royal Society of Medicine, 109(12):461–2.
Reinke NB (2018). The impact of timetable changes on student achievement and learning
experiences. Nurse Education Today, 62:137–42.
Riedel M, Röhrling G, Schönpflug K (2016). Nicht-ärztliche Gesundheitsberufe. Institute
for Advanced Studies, Vienna. Research Report April 2016. Available at: http://irihs.ihs.
ac.at/4112/, accessed 3 December 2018.
Ringard Å et al. (2013). Norway: Health system review. Health Systems in Transition, 15(8):1–162.
Risso-Gill I et al. (2014). Assessing the role of regulatory bodies in managing health professional
issues and errors in Europe. International Journal for Quality in Health Care, 26(4):348–57.
Robinson S, Griffiths P (2007). Nursing education and regulation: international profiles and
perspectives. Kings College London, National Nursing Research Unit. Available at: https://www.
kcl.ac.uk/nursing/research/nnru/Publications/.../NurseEduProfiles, accessed 3 December 2018.
Ross S, Hauer K, van Melle E (2018). Outcomes are what matter: competency-based medical
education gets us to our goal. MedEdPublish, 7(2):1–5.
Rubin P, Franchi-Christopher P (2002). New edition of Tomorrow’s Doctors. Medical Teacher,
24(4):368–9.
Schostak J et al. (2010). Effectiveness of Continuing Professional Development project: a summary
of findings. Medical Teacher, 32:586–92.
Schweppenstedde D et al. (2014). Regulating Quality and Safety of Health and Social Care:
International Experiences. Rand Health Quarterly, 4(1):1.
Sehlbach C et al. (2018). Doctors on the move: a European case study on the key characteristics
of national recertification systems. BMJ Open, 8:e019963.
Sharp LK et al. (2002). Specialty board certification and clinical outcomes: the missing link.
Academic Medicine, 77(6):534–42.
Simões J et al. (2017). Portugal: Health system review. Health Systems in Transition, 19(2):1–184.
Simper J (2014). Proceedings from the second UEMS Conference on CME-CPD in Europe, 28
February 2014, Brussels, Belgium. Journal of European CME, 3(1), Article 25494.
Simpson JG et al. (2002). The Scottish doctor – learning outcomes for the medical undergradu-
ate in Scotland: a foundation for competent and reflective practitioners. Medical Teacher,
24(2):136–43.
Smith F, Goldacre MJ, Lambert TW (2017). Adequacy of postgraduate medical training: views of
different generations of UK-trained doctors. Postgraduate Medical Journal, 93(1105):665–70.
Solé M et al. (2014). How do medical doctors in the European Union demonstrate that they
continue to meet criteria for registration and licencing? Clinical Medicine, 14(6):633–9.
Steffens S et al. (2018). Perceived usability of the National Competence Based Catalogue of
Learning Objectives for Undergraduate Medical Education by medical educators at the
Hannover Medical School. GMS Journal for Medical Education, 35(2):1–12.
150 Improving healthcare quality in Europe
Struckmann V et al. (2015). Deciding when physicians are unfit to practise: an analysis of
responsibilities, policy and practice in 11 European Union member states. Clinical Medicine,
15(4):319–24.
Swing SR (2007). The ACGME outcome project: retrospective and prospective. Medical Teacher,
2007(29):648–54.
Tan K et al. (2018). The effectiveness of outcome based education on the competencies of nursing
students: a systematic review. Nurse Education Today, 64:180–9.
Touchie C, ten Cate O (2016). The promise, perils, problems and progress of competency-based
medical education. Medical Education, 50:93–100.
Tsugawa Y et al. (2017). Quality of care delivered by general internists in US hospitals who
graduated from foreign versus US medical schools: observational study. BMJ (Clinical research
edition), 356:j273.
Tsugawa Y et al. (2018). Association between physician US News & World Report medical school
ranking and patient outcomes and costs of care: observational study. BMJ (Clinical research
edition), 362:k3640.
van der Vleuten CP, Driessen EW (2014). What would happen to education if we take education
evidence seriously? Perspectives on Medical Education, 3(3):222–32.
van Rossum TR et al. (2018). Flexible competency based medical education: more efficient, higher
costs. Medical Teacher, 40(3):315–17.
van Zanten M (2015). The association between medical education accreditation and the examination
performance of internationally educated physicians seeking certification in the United States.
Perspectives on Medical Education, 4:142–5.
Vaughan L, McAlister, Bell D (2011). “August is always a nightmare”: results of the Royal College
of Physicians of Edinburgh and Society of Acute Medicine August transition survey. Clinical
Medicine, 11:322–6.
Wangensteen S et al. (2018). Postgraduate nurses’ self-assessment of clinical competence and need
for further training. A European cross-sectional survey. Nurse Education Today, 62:101–6.
Watmough S, Garden A, Taylor D (2006). Pre-registration house officers’ views on studying under
a reformed medical curriculum in the UK. Medical Education, 40(9):893–9.
Watmough S, Taylor D, Garden A (2006). Educational supervisors evaluate the preparedness of
graduates from a reformed UK curriculum to work as pre‐registration house officers (PRHOs):
a qualitative study. Medical Education, 40:995–1001.
Weggemans MM et al. (2017). The postgraduate medical education pathway: an international
comparison. GMS Journal for Medical Education, 34(5):Doc63.
WHO (2006). Human resources for health in the WHO European Region. Available at:
www.euro.who.int/__data/assets/pdf_file/0007/91474/E88365.pdf, accessed 3 December 2018.
WHO (2007). Strengthening health systems to improve health outcomes. WHO’s framework for ac-
tion. Available at: https://www.who.int/healthsystems/strategy/en/, accessed 3 December 2018.
WHO (2016). Working for health and growth. Investing in the health workforce. Report of
the High-Level Commission on Health Employment and Economic Growth. Available at:
https://www.who.int/hrh/com-heeg/reports/en/, accessed 2 December 2019.
Woodward CA (2000). Strategies for assisting health workers to modify and improve skills:
developing quality health care – a process of change. World Health Organization, Issues
in health services delivery, Discussion paper no. 1, WHO/EIP/OSD/00.1. Available at:
www.who.int/hrh/documents/en/improve_skills.pdf, accessed 3 December 2018.
Zander B et al. (2016). The state of nursing in the European Union. Eurohealth incorporating
Euro Observer, 22(1):3–6.
Zavlin D et al. (2017). A comparison of medical education in Germany and the United States:
from applying to medical school to the beginnings of residency. GMS German Medical Science,
15:Doc15.
Zieber M, Sedgewick M (2018). Competence, confidence and knowledge retention in undergraduate
nursing students – a mixed method study. Nurse Education Today, 62:16–21.
Chapter 6
Regulating the input –
Health Technology Assessment
Summary
level. While there is some convergence in national HTA systems in Europe, there
are also significant discrepancies concerning both process and methodology.
Regulation proposed by the European Commission in January 2018 opts for
mandating joint assessments of clinical elements (effectiveness and safety), while
leaving the consideration of other domains, such as economic and organizational
impact, to national authorities. The proposal has been met with criticism from
various sides regarding the limited flexibility for national assessments in light of
differing standard practices of care (which influence the choice of comparator therapies
and outcomes), the absence of an obligation for industry to submit full trial data
despite growing expectations of transparency in recent years, and the loss of
flexibility in decision-making at national level in the presence of a binding
assessment.
increased collaboration can have a considerable impact in realizing the benefits
of HTA at country level.
and continued usefulness of HTA processes. Key principles for best practice in
national HTA programmes have been defined, but are only partially applied in
reality. Recommendations on overcoming barriers for performing HTA and establishing
HTA organizational structures were issued by EUnetHTA in 2011. More recent work
(2019) shows that many good practices have been developed, mostly regarding
assessment methodology and certain key aspects of HTA processes, but consensus
on good practice is still lacking for many areas, such as defining the organizational
aspects of HTA, use of deliberative processes and measuring the impact of HTA.
[Figure: Regulating pharmaceuticals along the product life-cycle, from market access to decommissioning/disinvestment]
From the EUnetHTA website: “The HTA Core Model® is a methodological framework for production
and sharing of HTA information. It consists of three components, each with a specific purpose:
1) a standardised set of HTA questions (the ontology), which allow users to define their specific
research questions within a hierarchical structure; 2) methodological guidance to assist in
answering the research questions; and 3) a common reporting structure for presenting findings
in a standardised ‘question-answer pair’ format.”
HTA Core Model® Domains (domains 1–4 constitute the scope of rapid assessments):
1. Health problem and current use of technology
2. Description and technical characteristics of technology
3. Safety
4. Clinical effectiveness
5. Costs and economic evaluation
6. Ethical analysis
7. Organizational aspects
8. Patient and social aspects
9. Legal aspects
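The three components quoted above (a question ontology, methodological guidance and a “question-answer pair” reporting structure) can be illustrated with a minimal sketch. This is purely illustrative: the class, element IDs and question wording below are hypothetical examples and do not reproduce EUnetHTA’s actual data format.

```python
# Illustrative sketch of the Core Model's "question-answer pair" idea:
# standardised questions sit in a hierarchy of domains, and an assessment
# fills in answers that are reported as question-answer pairs.
from dataclasses import dataclass

@dataclass
class AssessmentElement:
    element_id: str   # hypothetical identifier within the ontology
    domain: str       # one of the nine Core Model domains
    question: str     # standardised HTA question
    answer: str = ""  # filled in during a specific assessment

# Users define their research questions by selecting elements
elements = [
    AssessmentElement("CUR-1", "Health problem and current use of technology",
                      "For which health conditions is the technology used?"),
    AssessmentElement("SAF-1", "Safety",
                      "What are the known harms of the technology?"),
]

# Answering a question during an assessment ...
elements[0].answer = "Condition X, as second-line treatment."

# ... and reporting findings as question-answer pairs
report = [(e.element_id, e.question, e.answer) for e in elements]
```

In the Core Model itself, assessment elements are organized in a domain–topic–issue hierarchy with standard identifiers, which is what allows HTA information produced by one agency to be shared and reused by another.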
where the first HTA institutions had started to develop in the 1980s. This
comprised a combination of scientific, practical and political steps in countries
with social insurance- or tax-based national health systems and in a region of the
world that provides conditions conducive to collaboration, namely European
integration and the European Union (EU) (Kristensen, 2012). Box
6.2 summarizes the timeline of developments on HTA at the European level.
Regarding the institutions actually producing HTA reports, one can distinguish
between agencies that serve the population of a whole nation or a region (i.e.
national or regional) and those that are integrated into single hospitals or hospital
trusts (hospital-based HTA). The focus of this chapter lies with the former, but
the possibilities and particularities of the latter have also been studied
comparatively at the European and international level (for example, Sampietro-Colom
& Martin, 2016; Gagnon et al., 2014).
European HTA organizations can be classified into two main groups: those
concentrating on the production and dissemination of HTA and those with
broader mandates, which are often related to quality of care and include but
are not limited to the production and dissemination of HTA reports (Velasco-
Garrido et al., 2008). Variation is also observed in the degree to which HTA
organizations (and their products) are linked to decision-making. This is largely
dependent on whether formalized decision-making processes exist, most often
established in relation to service coverage and reimbursement, and predominantly
for pharmaceuticals. Indeed, HTA systems evolved organically in
the majority of European countries; as a result, they differ considerably
regarding process and methodology. The varying set-up of HTA systems in Europe
has been well documented (for example, Allen et al., 2013; Allen et al., 2017;
Panteli et al., 2015; Panteli et al., 2016; Fuchs et al., 2016).
The most recent overview of European practices stems from a background docu-
ment (European Commission, 2018a) produced for the European Commission
to inform the development of regulation (European Commission, 2018b) for
strengthening EU cooperation beyond 2020. The latter included joint clinical
assessment at European level as a part of the HTA process for certain technolo-
gies in the Member States (see Box 6.2 and below). This background work found
that in the last 20 years all EU Member States have started to introduce HTA
processes at national or regional level. National legal frameworks for HTA are
already in place in 26 Member States while some Member States are only at the
initial phase of establishing HTA systems and/or have dedicated only limited
resources to HTA (European Commission, 2018a). The Commission’s work
confirmed previous findings, namely that while there is some convergence in
national HTA systems in Europe, there are also significant discrepancies.
Box 6.2 European developments in HTA (adapted from Panteli & Edwards
2018)
The European Commission has supported collaboration in HTA across countries since the early
1990s. In 2004 it set HTA as a political priority, followed by a call towards establishing a sustainable
European network on HTA. The call was answered by 35 organizations throughout Europe and
led to the introduction of the European network for Health Technology Assessment (EUnetHTA)
Project in 2005. The strategic objectives of the EUnetHTA Project were to reduce duplication
of effort, promote more effective use of resources, increase HTA input to decision-making in
Member States and the EU to increase the impact of HTA, strengthen the link between HTA and
healthcare policy-making in the EU and its Member States, and support countries with limited
experience in HTA (Kristensen et al., 2009; Banta, Kristensen & Jonsson, 2009).
In May 2008 the EUnetHTA partner organizations endorsed a proposal for a permanent collaboration.
On the basis of the project’s results, the European Commission has consistently funded a number
of continuing initiatives: the EUnetHTA Collaboration 2009, the EUnetHTA Joint Action 2010–2012,
EUnetHTA Joint Action 2 2012–2015 and EUnetHTA Joint Action 3 2016–2020. This work has
mainly focused on developing joint assessment methodologies, perhaps most importantly
the so-called Core Models for different types of technologies, but has also piloted
them in joint assessments. EUnetHTA also maintains a database of planned and ongoing
national HTA reports accessible to its member organizations.
Cross-border collaboration in HTA was anchored in EU law through Directive 2011/24/EU on the
application of patients’ rights in cross-border healthcare. According to article 15, “the Union shall
support and facilitate cooperation and the exchange of scientific information among Member
States within a voluntary network connecting national authorities or bodies responsible for health
technology assessment designated by the Member States”. The Directive sets out both the
network’s goals and activities for which additional EU funds may be requested. It also explicitly
reinforces the principle of subsidiarity, stating that adopted measures should not interfere with
Member States’ competences in deciding on the implementation of HTA findings or harmonize
any related laws or regulations at national level, while providing a basis for sustained Union
support for HTA cooperation.
particularly for the alignment of a joint HTA process with national needs and processes (European
Commission, 2018a; Kleijnen et al., 2015). This primarily concerned the timely availability of
joint assessments, the relevance of each jointly selected topic for individual HTA agencies and
difficulties with integrating jointly produced reports in national templates and procedures. The
consultation culminated in the new proposed regulation described in the main text.
[Figure: Assessment (of therapeutic value – TV – and economic value – EV) and appraisal (AP) processes of European HTA agencies, including among others NICE (England), IQWIG (Germany), HAS (France), INAMI (Belgium), HILA (Finland), AHTAPol (Poland), NCPE (Ireland), SUKL (Czech Republic), DKMA (Denmark), CFH (Netherlands) and ZZZS (Slovenia)]
that these were only partially applied in reality (Neumann et al., 2010; Stephens,
Handke & Doshi, 2012). The EUnetHTA Joint Action developed
recommendations on overcoming barriers for performing HTA and establishing HTA
organizational structures (EUnetHTA, 2011). These are summarized in Box
6.3. Clearly, a number of these principles could apply and are indeed considered
in the European Commission’s proposal for more formalized collaboration in
HTA at the European level; however, additional factors, such as better alignment
between evidentiary requirements for marketing approval and HTA, as well as
the early involvement of stakeholders in this context, could play a facilitating
role in implementing HTA as a quality assurance strategy.
Indeed, the ISPOR HTA Council Working Group issued a report on Good
Practices in HTA in early 2019, pointing out that many good practices have been
developed, mostly regarding assessment methodology and certain key aspects of
HTA processes, but consensus on good practice is still lacking for many areas,
such as defining the organizational aspects of HTA, use of deliberative processes
and measuring the impact of HTA (see Kristensen et al., 2019, and the
discussion above). These findings can help prioritize future work. Many of the areas
of priority for further HTA-related research identified by a systematic review in
2011 (Nielsen, Funch & Kristensen, 2011), including disinvestment, evidence
development for new technologies, assessing the wider effects of technology use,
determining how HTA affects decision-making, and individualized treatments,
remain on the table despite the time elapsed.
• Strengthen trust between scientists and politicians and improve the use of scientific
evidence in decision-making through continuous dialogue.
• Define a clear position for HTA with regard to the specificities of the healthcare system.
• Counteract improper or insufficient use of HTA, which may result in loss of political interest.
• Disseminate HTA products in order to demonstrate their usefulness. Use transparency to make
agreement with policy-makers easier to reach. Use different approaches to raise
politicians’ awareness that they are beneficiaries of HTA processes and products.
BARRIER: FUNDING
• Use various motivating factors to attract people to the organization and retain them,
e.g. competitive salaries, a friendly atmosphere at work, stability and prestige, and
intellectual challenges.
• Create an appropriate sense of mission.
• Invest in people, i.e. ensure appropriate external and internal training.
• Allow flexible hours or part-time working.
• Employ people with experience in other areas and allow them to work part-time.
• Foster societal mindsets that encourage capacity-building.
• Exchange staff with other institutions, involve external experts and build on the achievements of others.
References
Allen N et al. (2013). Development of archetypes for non-ranking classification and comparison
of European National Health Technology Assessment systems. Health Policy, 113(3):305–12.
Allen N et al. (2017). A Comparison of Reimbursement Recommendations by European HTA
Agencies: Is There Opportunity for Further Alignment? Frontiers in Pharmacology, 8:384.
Banta HD, Jonsson E (2006). Commentary to: Battista R. Expanding the scientific basis of HTA.
International Journal of Technology Assessment in Health Care, 22:280–2.
Banta HD, Luce BR (1993). Health Care Technology and its Assessment. Oxford, Oxford Medical
Publications (now Oxford University Press).
Banta HD, Kristensen FB, Jonsson E (2009). A history of health technology assessment at the
European level. International Journal of Technology Assessment in Health Care, 25(Suppl 1):68–73.
Kristensen FB et al. (2009). European network for Health Technology Assessment, EUnetHTA:
Planning, development, and implementation of a sustainable European network for Health
Technology Assessment. International Journal of Technology Assessment in Health Care, 25(Suppl
2):107–16.
Kristensen FB et al. (2017). The HTA Core Model® – 10 Years of Developing an International
Framework to Share Multidimensional Value Assessment. Value in Health, 20(2):244–50.
Kristensen FB et al. (2019). Identifying the need for good practices in health technology assessment:
summary of the ISPOR HTA Council Working Group report on good Practices in HTA.
Value in Health, 22(1):13–20.
Lee A, Skött LS, Hansen HP (2009). Organizational and patient-related assessments in HTAs:
State of the art. International Journal of Technology Assessment in Health Care, 25:530–6.
Luce BR et al. (2010). EBM, HTA, and CER: clearing the confusion. Milbank Quarterly,
88(2):256–76.
Moharra M et al. (2009). Systems to support health technology assessment (HTA) in member
states of the European Union with limited institutionalization of HTA. International Journal
of Technology Assessment in Health Care, 25(Suppl 2):75–83.
Neumann P et al. (2010). Are Key Principles for Improved Health Technology Assessment
Supported and Used by Health Technology Assessment Organizations? International Journal
of Technology Assessment in Health Care, 26(1):71–8.
Nielsen CP, Funch TM, Kristensen FB (2011). Health technology assessment: research trends and
future priorities in Europe. Journal of Health Services Research and Policy, 16(Suppl 2):6–15.
Nielsen CP et al. (2009). Involving stakeholders and developing a policy for stakeholder involvement
in the European Network for Health Technology Assessment, EUnetHTA. International Journal
of Technology Assessment in Health Care, 25(Suppl 2):84–91.
NIH (2017). HTA 101: IX. Monitor impact of HTA. Washington, DC: National
Information Center on Health Services Research and Health Care Technology (NICHSR).
O’Donnell JC et al. (2009). Health technology assessment: lessons learned from around the
world – an overview. Value in Health, 12(Suppl 2):S1–5.
Panteli D, Edwards S (2018). Ensuring access to medicines: how to stimulate innovation to meet
patients’ needs? Policy brief for the European Observatory on Health Systems and Policies.
Copenhagen: WHO Regional Office for Europe.
Panteli D et al. (2015). From market access to patient access: overview of evidence-based approaches
for the reimbursement and pricing of pharmaceuticals in 36 European countries. Health
Research Policy and Systems, 13:39.
Panteli D et al. (2016). Pharmaceutical regulation in 15 European countries: Review. Health
Systems in Transition, 18(5):1–118.
Pichon-Riviere M et al. (2017). Involvement of relevant stakeholders in health technology
assessment development. Background Paper. Edmonton: Health Technology Assessment
International.
Sampietro-Colom L, Martin J (eds.) (2016). Hospital-Based Health Technology Assessment.
Switzerland: Springer International Publishing.
Stephens JM, Handke B, Doshi J (2012). International survey of methods used in health technology
assessment (HTA): does practice meet the principles proposed for good research? Comparative
Effectiveness Research, 2:29–44.
Velasco-Garrido M, Zentner A, Busse R (2008). Health systems, health policy and health
technology assessment. In Velasco-Garrido M et al. (eds.). Health technology assessment
and health policy-making in Europe. Current status, challenges and potential. Copenhagen:
WHO Regional Office for Europe.
Velasco-Garrido M et al. (2008). Health technology assessment in Europe – overview of the
producers. In Velasco-Garrido M et al. (eds.). Health technology assessment and health
policy-making in Europe. Current status, challenges and potential. Copenhagen: WHO
Regional Office for Europe.
Chapter 7
Regulating the input –
healthcare facilities
Summary
its inconsistent nature, making the business case for evidence-based design in
healthcare is not always straightforward.
Box 7.1 Aspects of quality and performance and potential influences from
the built environment
Safety, including
• adapting design and improving the availability of assistive devices to avert patient falls
• using ventilation and filtration systems to control and prevent the spread of infections
• using surfaces that can be easily decontaminated
• facilitating hand washing with the availability of sinks and alcohol hand rubs
• preventing patient and provider injury
• addressing the sensitivities associated with the interdependencies of care, including
work spaces and work processes
Effectiveness, including
• ensuring the size, layout, and functions of the structure meet the diverse care needs
of patients
Source: Henriksen et al., 2007, as cited in Reiling, Hughes & Murphy, 2008
both competing bids and the success of the final project are difficult to evaluate.
In practice, decisions about healthcare infrastructure will make use of a mix of
the above approaches, and stakeholders must consider their use and integration
through the lens of diverse infrastructure evidence bases. Box 7.2 provides an
example of the three types of standards for clarity.
Construction standards in European Union Member States conform to EU-level
stipulations for construction products and engineering services: the EN Eurocodes.
The Eurocodes were requested by the European Commission and developed by
the European Committee for Standardization. They are a series of 10 European
Standards providing a framework for the design of buildings and other civil
engineering works and construction products. They are recommended to ensure
conformity with the basic requirements of the Construction Products Regulation
(Regulation 305/2011 of the European Parliament and of the Council of 9 March
2011, laying down harmonized conditions for the marketing of construction
products and repealing Council Directive 89/106/EEC); they are also the
preferred reference for technical specifications in public contracts in the European
Union. The Eurocodes apply to the structural design of all public buildings, and
refer to geotechnical considerations, fire protection and earthquake protection
design, as well as the required properties of common construction materials.
Fig. 7.1 shows the links between the different Eurocodes.
With the exception of some fundamental issues concerning fire and public
safety, the planning and design of space within healthcare buildings, and the
arrangements made for adjacencies between departments, equipment storage
and engineering services, are not amenable to highly prescriptive, European-wide
standards. The properties of construction materials and the individual
components of facilities (taps, door handles, roof tiles, flooring materials, etc.) can be
closely specified to satisfy regulations on safety and durability, but the higher
level features of healthcare facilities – the arrangement of public and staff areas,
wards, laboratories, outpatient departments, reception halls and car parks – are
influenced by local custom and tradition, financial pressures and the
preferences of those who commission, design, build and maintain the infrastructure.
Nonetheless, country-specific or regional standards and guidelines are commonly
used to orient and direct the commissioners, planners, designers and
constructors of healthcare facilities.
[Fig. 7.1: Links between the Eurocodes – EN 1990 (structural safety, serviceability and durability), EN 1991 (actions on structures), EN 1997 (geotechnical design) and EN 1998 (seismic design)]
were in place, but were rather guided by professional private expertise and an
independent national agency.
al., 2016) and this process was matched by the 2010 demise of the National
Board for Healthcare Institutions (NBHI), which had been the central authority
for setting standards and approving hospital infrastructure projects. Hospitals
in the Netherlands now have to source infrastructure capital through banks and
the financial markets, and therefore make business cases – including provision
for the quality of planning, design and construction, lifecycle costing, etc. – on
the basis of return on investment, just as any commercial organization would. In
general terms there is a trend towards decentralization of the agencies responsible
for setting and overseeing standards for design and planning.
[Figure: regulatory activities – detecting and gaining intelligence, responding and developing policy, enforcing and measuring compliance; modifying, learning, adjustment, tool development, incentivizing correct behaviour, benchmarking practice]
Finland, for example, reported a process towards more individual and independent
decision-making, using a mix of external expert advice and guidance, since the
early 1990s. Italy has seen significant devolution of responsibility for healthcare
provision to its regions since the late 1990s (Erskine et al., 2009), and this has
coincided with development of regional norms and requirements for hospital
functionality. Poland and Romania have made little change in the recent past,
and little is expected in the near future. In Hungary a number of regulations
are intended to specify only requirements, and their associated frameworks, to
encourage unencumbered architectural design, based on the joint knowledge of
the facility designer and the healthcare facility management. Northern Ireland
is undergoing increasing centralized standard setting, as a region small enough
to plan centrally.
[Table: design strategies and interventions (appropriate lighting; single-bed rooms; acuity-adaptable rooms; views of nature; noise-reducing finishes; family zone in patient rooms; decentralized supplies; ceiling lifts; carpeting) mapped against healthcare outcomes (reduced hospital-acquired infections; reduced medical errors; reduced patient falls; reduced pain; improved patient sleep; reduced patient stress; reduced depression; reduced length of stay; improved patient privacy and confidentiality; improved communication with patients and family members; improved social support; increased patient satisfaction; decreased staff injuries; decreased staff stress; increased staff effectiveness; increased staff satisfaction). Source: Ulrich et al., 2008]
located a medium distance from the nurse station, with the patient’s right side
facing the entry door (right-handed), the bed orientation located within the
room, and the hand-wash sink facing the patient” (MacAllister, Zimring &
Ryherd, 2018). Layout design, visibility and accessibility levels are the most cited
aspects of design which can affect the level of communication and teamwork
in healthcare facilities, impacting patient outcomes and efficiency (Gharaveis,
Hamilton & Pati, 2018; Gharaveis et al., 2018). In fact, a switch to decentralized
nurse stations was shown to lead to a perception of decline in nursing teamwork
(Fay, Cai & Real, 2018).
All the design elements discussed so far have some component of influencing
process of care along with redefining structures. However, purely structural
changes are also expected to have some effect on patient outcomes. For instance,
determining the best ventilation system for operating rooms can influence the
incidence of surgical site infections. A recent systematic review showed no benefit
of laminar airflow compared with conventional turbulent ventilation in reduc-
ing the risk for infection, and concluded that it should not be considered as a
preventive measure (Bischoff et al., 2017). In terms of optimizing the auditory
and visual environment for inpatients, the evidence is also generally positive but
not robust enough for unequivocal conclusions. A systematic review on noise
reduction interventions published in 2018 highlighted this dearth of reliable
studies; while concluding that noise reduction interventions are feasible in ward
settings and have potential to improve patients’ in-hospital sleep experiences,
the evidence is insufficient to support the use of such interventions at present
(Garside et al., 2018). Work on ICU rooms with windows or natural views
found no improvement in outcomes of in-hospital care for general populations
of medical and surgical ICU patients (Kohn et al., 2013). At the same time, a
systematic review focusing on the effects of environmental design on patient out-
comes and satisfaction saw that exposure to particular audio (music and natural
sounds) and visual (murals, ornamental plants, sunlight) design interventions
contributed to a decrease in patients’ anxiety, pain and stress levels (Laursen,
Danielsen & Rosenberg, 2014).
Sadler, DuBose & Zimring (2008) point out that, as a result, “central to the business case is the
need to balance one-time construction costs against ongoing operating savings
and revenue enhancements”. They also provide a comprehensive framework for
decision-makers to estimate which interventions make sense within their own
construction or renovation project and how investment to implement them will
be offset by operational gains down the road (Sadler, DuBose & Zimring, 2008).
Individual research projects have focused on balancing efficiency gains with
the intended improvement in healthcare outcomes and required investment
(see, for instance, Shikder & Price, 2011). For example, a quasi-experimental
before-and-after study of a transformation to 100% single rooms in an acute
hospital found that an all single-room hospital can cost 5% more (with higher
housekeeping and cleaning costs) but the difference is marginal over time (Maben
et al., 2015b). Operational efficiencies also improved with single patient rooms in a maternity
ward (Voigt, Mosier & Darouiche, 2018), supporting the savings assumption.
While ICU rooms with windows or natural views were not found to reduce
costs of in-hospital care, they also did not increase them (Kohn et al., 2013).
Laursen, Danielsen & Rosenberg (2014) argued that interventions to ameliorate
the auditory and visual environment for patients are arguably inexpensive and
easily implemented, and therefore feasible in most hospitals. In their framework,
Sadler, DuBose & Zimring (2008) differentiate between interventions that all
facilities can implement without investing too many resources and those requir-
ing more careful consideration. We reproduce these two clusters in Table 7.2.
Large and diverse design teams have the best chance of producing healthcare
environments that are conducive to patient safety and function as healing
environments. However, managing such teams requires strong leadership – a
theme which reappears throughout the literature on evidence-based design (see
also Anåker et al., 2016).
[Figure: the clinical and operational process at the core of a multidisciplinary design model, surrounded by, among others, architects, engineers, designers, human factors engineers, ergonomicists, behaviourists, clinicians, physicians, nurses, medical technicians, equipment specialists, information technologists, patients, visitors and the public]
Note: the tenets of this model are: multidisciplinary approach; collaboration essential; patient safety; efficiency
and effectiveness; clinical and operational process at the core; good design resonates with the people it serves.
Conceptual work on the meaning of “good quality design” for healthcare facilities
found that there were three main themes emerging from the literature regarding
perceptions of what the concept can entail: environmental sustainability and
ecological values; social and cultural interactions and values; and resilience of
engineering and building construction (Anåker et al., 2016). While the latter
two elements have been discussed in previous sections, the first theme has not
been at the forefront of this chapter’s focus. However, it is important to note that
sustainable and “green” practices should be included in new standard considera-
tions, especially given accumulating findings from the wider research agenda;
having said that, balancing sustainability considerations with the primary goal of
evidence-based design (for example, safety improvement) needs to be approached
with care (Anåker et al., 2016; Wood et al., 2016).
194 Improving healthcare quality in Europe
Medical devices, like drugs, are indispensable for healthcare. But unlike drugs, medical devices
span a vast range of different physical forms – from walking sticks and syringes to centrifuges
and Magnetic Resonance Imaging (MRI) machines. They also differ from drugs in being even more
dependent on user skills to achieve desired outcomes (or to produce undesired effects): the role
of the user and engineering support are crucial in ensuring their ultimate safety and performance.
Furthermore, many medical devices (for example, imaging and laboratory equipment) represent
durable, reusable investments in health facilities.
In Europe the current quality standards of medical devices are provided by the relevant EC
Directives, which focus on ensuring product safety and performance (see also Chapter 4).
Responsibilities are imposed on the manufacturers to comply with regulatory requirements in
designing, producing, packaging and labelling their products. The manufacturer is also required
to maintain the quality of the device in the delivery process to the consumer as well as conduct
post-market surveillance, adverse event reporting, corrective action and preventive action. The
effectiveness and safety of medical devices are also increasingly evaluated in the context of Health
Technology Assessment (see Chapter 6).
Medical device regulations govern manufacturers to ensure product safety, but they do not
extend to how devices are used, which ultimately determines patient safety. HTA usually evaluates technologies
for reimbursement purposes. An overarching framework for the management of medical devices
through their lifecycle from the perspective of quality and safety is not formally in place in Europe;
Fig. 7.4 highlights important responsibilities to be considered regarding medical device safety
in healthcare organizations (see also Chapter 11).
Mills et al. (2015a) explored such an approach for the UK context specifically,
especially in light of developments in the country (for example, constrained
resources and reorganization of the NHS), which saw a gradual departure
from traditional command-and-control arrangements (Mills et al., 2015a). The
scenarios explored in this context represent different degrees of devolution of
responsibility and are shown in Fig. 7.5. The study further highlighted the need
for adaptable, responsive standards to keep up with emerging evidence within
a learning system, stating that “there are clear opportunities for meta- and self-
regulation regimes and a mix of interventions, tools and networks that will reduce
the burden of rewriting standards … and create a wider ownership of building
design quality standards throughout the supply chain”. For the UK context, the
study authors reiterate the importance of leadership (already mentioned above)
and conclude that redefining and strengthening successful models of central
responsibility in healthcare building design quality improvement strategy, foster-
ing the development and adoption of open and dynamic standards, guidance
and tools, and supporting the development of the evidence base to underpin
tools for quality improvement are of crucial importance. Despite its context-
specific scope, this approach can be adopted in other countries, accounting for
system particularities.
[Fig. 7.5: scenarios for devolved responsibility in healthcare building design quality, including interdisciplinary learning; a wider delivery system of quality assurance based on new knowledge generated through externally funded research and its subsequent exploitation; and shared responsibility among multiple stakeholders driving improvements in healthcare building design quality while acknowledging limited resources and reduced central government command and control]
Regulating the input – healthcare facilities 197
Given the issues discussed in this chapter for the productivity and effectiveness
of healthcare infrastructure, it is helpful to know where to find the evidence
and research outcomes. Table 7.3 at the end of the chapter contains a listing of
selected, web-based information sources in English.
References
AHRQ (2007). Transforming Hospitals: Designing for Safety and Quality. AHRQ Publication
No. 07-0076-1. Rockville, Maryland, USA: Agency for Healthcare Research and Quality.
Anåker A et al. (2016). Design Quality in the Context of Healthcare Environments: A Scoping
Review. Health Environments Research & Design Journal, 10(4):136–50.
Bischoff P et al. (2017). Effect of laminar airflow ventilation on surgical site infections: a systematic
review and meta-analysis. Lancet Infectious Diseases, 17(5):553–61.
Bonuel N, Cesario S (2013). Review of the literature: acuity-adaptable patient room. Critical Care
Nursing Quarterly, 36(2):251–71.
Busse R et al. (2010). Tackling Chronic Disease in Europe: Strategies, Interventions and Challenges.
Brussels: European Observatory on Health Systems and Policies.
CHD (2018). Evidence-Based Design Accreditation and Certification (EDAC). Available at:
https://www.healthdesign.org/certification-outreach/edac, accessed on 31 September 2018.
Cheng M et al. (2019a). Medical Device Regulations and Patient Safety. In: Iadanza E (ed.).
Clinical Engineering Handbook. Amsterdam: Elsevier.
Cheng M et al. (2019b). A systems management framework for medical device safety and optimal
outcomes. In: Iadanza E (ed.). Clinical Engineering Handbook. Amsterdam: Elsevier.
Clancy CM (2013). Creating a healing environment. Health Environments Research and Design
Journal, 7(1):5–7.
Costello JM et al. (2017). Experience with an Acuity Adaptable Care Model for Pediatric Cardiac
Surgery. World Journal for Pediatric and Congenital Heart Surgery, 8(6):665–71.
Csipke E et al. (2016). Design in mind: eliciting service user and frontline staff perspectives on
psychiatric ward design through participatory methods. Journal of Mental Health, 25(2):114–21.
Deyneko A et al. (2016). Impact of sink location on hand hygiene compliance after care of patients
with Clostridium difficile infection: a cross-sectional study. BMC Infectious Diseases, 16:203.
Dickerman KN, Barach P (2008). Designing the Built Environment for a Culture and System of
Patient Safety – a Conceptual, New Design Process. In: Henriksen K et al. (eds.). Advances
in Patient Safety: New Directions and Alternative Approaches (Vol. 2: Culture and Redesign).
Rockville, Maryland, USA: Agency for Healthcare Research and Quality.
DIN (2016). Standard 13080: “Division of hospitals into functional areas and functional sections”.
Available at: https://www.din.de/en/getting-involved/standards-committees/nabau/standards/
wdc-beuth:din21:252635669/toc-2582257/download, accessed on 31 September 2018.
Erskine J et al. (2009). Strategic Asset Planning: an integrated regional health care system, Tuscany,
Italy. In: Rechel B et al. (eds.). Capital Investment in Health: Case Studies from Europe.
Copenhagen: WHO on behalf of the European Observatory on Health Systems and Policies.
EuHPN (2011). Guidelines and Standards for Healthcare Buildings: a European Health Property
Network Survey. Durham: European Health Property Network.
Scholz S, Ngoli B, Flessa S (2015). Rapid assessment of infrastructure of primary health care
facilities – a relevant instrument for health care systems management. BMC Health Services
Research, 15:183.
Shikder S, Price A (eds.) (2011). Design and Decision Making to Improve Healthcare Infrastructure.
Loughborough: Loughborough University.
Stiller A et al. (2016). Relationship between hospital ward design and healthcare-associated
infection rates: a systematic review and meta-analysis. Antimicrobial Resistance and Infection
Control, 5:51.
Suhrcke M et al. (2005). The contribution of health to the economy in the European Union.
Brussels: Office for Official Publications of the European Communities.
Taylor E, Card AJ, Piatkowski M (2018). Single-Occupancy Patient Rooms: A Systematic Review
of the Literature Since 2006. Health Environments Research & Design Journal, 11(1):85–100.
Ulrich RS et al. (2008). A Review of the Research Literature on Evidence-Based Healthcare Design.
Health Environments Research & Design Journal, 1(3):61–125.
Voigt J, Mosier M, Darouiche R (2018). Private Rooms in Low Acuity Settings: A Systematic
Review of the Literature. Health Environments Research & Design Journal, 11(1):57–74.
WHO (2009). Guidelines on Hand Hygiene in Health Care: First Global Patient Safety Challenge
Clean Care Is Safer Care. Appendix 1, Definitions of health-care settings and other related
terms. Geneva: World Health Organization.
Wood L et al. (2016). Green hospital design: integrating quality function deployment and end-
user demands. Journal of Cleaner Production, 112(1):903–13.
Zellmer C et al. (2015). Impact of sink location on hand hygiene compliance for Clostridium
difficile infection. American Journal of Infection Control, 43(4):387–9.
Chapter 8
External institutional
strategies: accreditation,
certification, supervision
Summary
[Figure: standard-setting sources (the International Organization for Standardization, accreditation bodies, legislation) and the corresponding assessment bodies (accreditation bodies, certification bodies, inspectorates)]
medical education (see Chapter 5). It focuses on generic programmes, omitting the
application of accreditation at national or European level of specific departments
(for example, of breast cancer centres) and the growing body of specialty-based
research (Lerda et al., 2014).
[Figure: elements of an accreditation programme: guidance/legislation, industry consensus and research feeding into standards; ministry of health planning; management leadership; assessors’ training]
8.3.1 Accreditation
Table 8.2 presents a list of European countries where accreditation programmes
have been introduced over the past thirty years. The table also provides infor-
mation on the year of introduction and whether programmes are voluntary
or mandatory. The table shows that most European countries have voluntary
national accreditation programmes. Only Bosnia, Bulgaria, Denmark, France
and Romania have mandatory programmes. National uptake of accreditation
programmes varies considerably, as do their governance structures. According to
findings of a survey, about one third of accreditation
programmes are run by governments, one third are independent and one third
are hybrids (Shaw et al., 2013).
The first national accreditation programme in Europe was introduced in the
UK in 1989. Many other countries followed in the 1990s: the Czech Republic,
Finland, France, the Netherlands, Poland, Portugal, Spain and Switzerland. Today,
national accreditation programmes are thriving in Bulgaria, the Czech Republic,
France, Germany, Luxembourg and Poland. In these countries the activity of
accreditation programmes has grown considerably and an increasing number of
hospitals have been accredited.
8.3.2 Certification
In general, ISO standards and certification against those standards have a long
history in Europe. There are several EU regulations concerning quality of goods
and products (see Box 8.1 and Chapter 4).
EN ISO 15224:2012 (updated in 2017) is the first ISO standard that is specific
to quality management systems in healthcare. It has a focus on clinical processes
and their risk management in order to promote good-quality healthcare. The
standard aims to adjust and specify the requirements, as well as the “product”
concept and customer perspectives in EN ISO 9001:2008 to the specific condi-
tions of healthcare, where products are mainly services and customers are mainly
patients. Before the introduction of the translated version, there was a major
variation in the interpretation of key words (for example, product, supplier and
design control) such that the standards (and thus any subsequent audit and
certification) would not appear to be consistent between countries (Sweeney &
Heaton, 2000).
Information on the status of (ISO) certification is even less available than infor-
mation on accreditation status, partly because this is less directed by national
bodies and partly because the level or setting of certification is much more
variable, for example, hospital level, service level, laboratory level, diagnostic
facility level. There is no clearing house to gather such information in Europe.
Country | Agency, organization | Introduced in | Type | Homepage
Europe | Joint Commission International | 1994 | voluntary | https://www.jointcommissioninternational.org/
Bulgaria | Accreditation of hospitals and diagnostic-consultative centres | 2000 | mandatory | n.a.
Croatia | Agency for Quality and Accreditation in Health Care | 2007 | voluntary | n.a.
Czechia | Spojená akreditační komise | 1998 | voluntary | www.sakcr.cz
Denmark | The Danish Healthcare Quality Programme (DDKM), IKAS | 2006 | mandatory | www.ikas.dk
Finland | Social and Health Quality Service (SHQS) | 1993 | voluntary | www.qualitor.fi
France | Haute Autorité de Santé (HAS) | 1996 | mandatory | www.has-sante.fr
Germany | Kooperation für Transparenz und Qualität im Gesundheitswesen (KTQ) | 2000 | voluntary | www.ktq.de
Hungary | Institute for Healthcare Quality Improvement and Hospital Engineering | under discussion | n.a. | www.emki.hu
Ireland | Health Information and Quality Authority | 2007 | voluntary | www.hiqa.ie
Lithuania | State Health-Care Accreditation Agency (SHCAA) | 1999 | voluntary | www.vaspvt.gov.lt
Luxembourg | Incitants Qualité (IQ) | 2006 | voluntary | n.a.
Netherlands | Netherlands Institute for Accreditation in Health Care (NIAZ) | 1998 | voluntary | www.niaz.nl
Poland | Program Akredytacji | 1998 | voluntary | www.cmj.org.pl
Portugal | Programa Nacional de Acreditação em Saúde | 1999 | voluntary | www.dgs.pt
Romania | Autoritatea Nationala De Management Al Calitatii In Sanatate (ANMCS) | 2011 | mandatory | https://anmcs.gov.ro/web/en
Serbia | Agency for Accreditation of Health Institutions of Serbia (AZUS) | 2008 | voluntary | www.azus.gov.rs
Slovak Republic | Slovak National Accreditation Service (SNAS) | 2002 | voluntary | http://www.snas.sk/index.php?l=en
Regulation (EC) 765/2008 defines requirements for accreditation and market surveillance
relating to the marketing of (medical) products. It aims to reduce variation between countries
and to establish uniform national bodies responsible for conformity assessment. When conformity
assessment bodies (CABs) are accredited by their national accreditation body (NAB) and a
mutual recognition agreement exists between the NABs, their certification is recognized across
national borders. The European Cooperation for Accreditation (EA) supervises national systems
to evaluate the competence of CABs throughout Europe, including peer evaluation among NABs.
EA has been formally appointed by the European Commission under Regulation (EC) 765/2008
to develop and maintain a multilateral agreement of mutual recognition based on a harmonized
accreditation infrastructure.
The Comité Européen de Normalisation (CEN) is the competent body of the European Union
(EU) and European Free Trade Area (EFTA) to develop and publish European standards, either on
request by business entities (bottom-up), or mandated by the European Commission (top-down).
Certification of management systems (such as those compliant with ISO 9001) is carried out by bodies
which are themselves accredited according to ISO/IEC 17021 by the national accreditation body, which
is in turn subject to peer evaluation by the European Cooperation for Accreditation.
Searchable lists of ISO-certified healthcare organizations are not freely avail-
able at national or European level. However, an annual survey from ISO itself
provides an overview of health and social care organizations certified in accord-
ance with the 9001 norm series (see Fig. 8.3); information on the newer norm
15224 is not available.
[Fig. 8.3: number of ISO 9001-certified health and social care organizations, 1998–2017]
Information on ISO certification is often not available. Countries where at least some hospitals
(or departments) have been ISO-certified include Bulgaria, Cyprus, Denmark,
Germany, Greece, Hungary, Poland, Slovenia and the UK. In Poland certified
(and accredited) hospitals receive higher reimbursements. In other countries (for
example, the Czech Republic, Ireland, Lithuania, Spain and the Netherlands) either
ISO-based certification schemes were developed or some healthcare organizations
in general (not explicitly hospitals) were reported to be ISO-certified.
8.3.3 Supervision
Many countries have several forms of supervision. Apart from basic supervision
as part of the licensing or authorization process that allows organizations to act
as healthcare providers, there is often additional supervision and licensing related
to fire safety, pharmacy and environmental health. Many of these supervision
and licensing functions are commonly delegated to separate agencies of national
or local government. Thus the law may be national but the supervision and
licensing may be regional or local.
No systematic overview is available of supervision arrangements in differ-
ent European countries. However, the European Partnership for Supervisory
Organisations (EPSO), an informal group of government-related organizations
enforcing or supervising health services in EU and EFTA countries, provides a list
of national member bodies (see Table 8.3). EPSO aims to support the exchange
of information and experiences in healthcare supervision and control of medical
and pharmaceutical products, instruments and devices.
However, the presence of a national supervisory organization does not neces-
sarily mean that there is a system of regular supervision and (re)licensing in a
country. In a survey of the European Accreditation Network (EAN) half of the
responding 14 countries reported either no requirement for hospital licensing or
the issue of a licence in perpetuity. The remainder reissue licences periodically,
with or without re-inspection (Shaw et al., 2010a).
In many countries the relationship between accreditation, regulation and ISO
quality systems is unclear. One notable exception is England, where an alliance
of professional associations was established in 2013 to harmonize clinical service
accreditation between the specialties in order to minimize administrative burden
and to support the regulatory function of the healthcare regulator, the Care
Quality Commission (CQC). The Alliance worked with the British Standards
Institution (BSI) to reconcile the various requirements of the CQC, the ISO
accreditation body (UKAS) and professionally led clinical review schemes. This
could be a transferable model for legally based collaboration between the health-
care regulator, ISO certification and professional peer review in other European countries.
8.4.1 Accreditation
Eight systematic reviews were identified, which included between 2 and 66 origi-
nal studies. These had been conducted mostly in the US or other non-European
countries, for example, Australia, Canada, Japan and South Africa, thus limiting
the transferability of findings to the European context. In addition, interpreta-
tion of results is complicated by the heterogeneity of accreditation schemes in
different countries.
In general, evidence on the effectiveness, let alone cost-effectiveness, of hospital
accreditation to improve quality of care is limited (see Table 8.4). Part of the
problem is that there are very few controlled studies to evaluate such effects.
It seems that accreditation has effects on the extent to which hospitals prepare
for accreditation, which in turn may have a positive effect on team culture and
generic service organization. However, whether this translates into better process
measures and improved clinical outcomes is not clearly established.
The only systematic review that aimed to assess costs and cost-effectiveness
identified six studies conducted in non-European countries. The findings give an
indication of accreditation’s costs (including preparation), ranging from 0.2 to
1.7% of total hospital expenditures per annum averaged over the accreditation
cycle of usually three years. However, the number of studies was small and none
of them carried out a formal economic evaluation. Thus, no reliable conclusions
on the cost-effectiveness of accreditation can be drawn (Mumford et al., 2013).
In addition, nine relatively recent large-scale studies, which were not included
in the systematic reviews presented above, have assessed either the effectiveness
or costs of accreditation programmes (see Table 8.5).
Seven studies assessed the effectiveness of accreditation programmes, and they
generally reported mixed results. Two studies evaluating the effects on mortality
of a German (Pross et al., 2018) and a US accreditation programme (Lam et
al., 2018) found no association between hospital accreditation and mortality.
However, a third study of a Danish accreditation programme (Falstie-Jensen,
Bogh & Johnsen, 2018) did find an association between low compliance with
accreditation standards and high 30-day mortality. Findings on the effects of
accreditation on readmission rates in the same three studies are also inconclusive.
Another study conducted in Australia reported a significant reduction of
Staphylococcus aureus bacteraemia (SAB) rates, which were nearly halved in
accredited hospitals compared to non-accredited hospitals (Mumford et al.,
2015b). The findings of Bogh et al. (2017) suggest that the impact of accredita-
tion varies across conditions: heart failure and breast cancer care improved less
than other areas and improvements in diagnostic processes were smaller than
improvements in other types of processes. Moreover, the studies of Shaw et al.
(2010b, 2014) reported a positive effect of accreditation (and certification) for
three out of four clinical services (see the section on certification for a more
detailed description of both studies).
Two of the nine identified studies focused on the costs of accreditation. In an
Australian mixed-methods study including six hospitals the costs ranged from
0.03 to 0.60% of total expenditures per annum averaged on the accreditation
cycle of four years. The authors extrapolated the costs to national level, which
would accumulate to $A37 million – 0.1% of total expenditures for acute
public hospitals (Mumford et al., 2015a). The other study did not assess costs
directly, but evaluated the value of accreditation from the hospital’s perspective.
The study found that most hospitals increased expenditures in staff training,
consultants’ costs and infrastructure maintenance, and that almost one third of
Author (year) | Period covered | Number of studies included | Country coverage

Flodgren, Gonçalves & Pomey (2016) | up to 2015 | update of Flodgren et al. (2011); no further study met inclusion criteria

Brubakk et al. (2015) | 1980–2010 | 4 (3 SRs, 1 RCT) | RCT from ZA (1)
• the RCT showed inconclusive results (see details in Flodgren et al., 2011)
• findings from the reviews included were mixed and therefore no conclusions could be reached to support effectiveness of hospital accreditation

Mumford et al. (2013) | up to 2011 | effectiveness: 15 studies, AU (2), DE (1), Europe (1), JP (1), SAU (1), US (8), ZA (1); costs and cost-effectiveness: 6 studies, AU (1), US (4), ZM (1)
• studies on effectiveness were inconclusive in terms of showing clear evidence of effects on patient safety and quality of care
• no formal economic evaluation has been carried out to date
• incremental costs ranged from 0.2 to 1.7% of total expenditures per annum

Alkhenizan & Shaw (2011) | 1983–2009 | 26 | AU (1), CA (1), DK (1), EG (1), JP (1), KR (1), PH (1), SG (1), US (16), ZA (1), ZM (1)
• overall there was a positive association between accreditation and processes of care
• associations are potentially overestimated as they stem mostly from uncontrolled studies

Flodgren et al. (2011) | up to 2011 | 2 (cluster-RCT, ITS) | England (1), ZA (1)
• positive effects of hospital accreditation on compliance with accreditation standards were shown in the cluster-RCT; effects on quality indicators were mixed: only one out of eight indicators improved (“nurses’ perception of clinical quality, participation and teamwork”)
• the ITS showed a statistically non-significant effect of accreditation on hospital infections

NOKC (2009) | up to 2009 | update of NOKC (2006); no further study met inclusion criteria

Greenfield & Braithwaite (2007) | 1983–2006 | 66 | countries not reported
• findings suggest association between accreditation and promoting change and professional development
• inconsistent associations between accreditation and professionals’ attitudes to accreditation, organizational and financial impact, quality measures and programme assessment
• evidence for an association between accreditation and consumer views or patient satisfaction is inconclusive

NOKC (2006) | 1966–2006 | 2 (cohort study, before-after study) | AU (1), DE (1)
• results suggest that accreditation might positively influence nurses’ working conditions and the frequency of safety routines
• regarding certification the authors concluded that it might result in cost reduction and an increase of satisfaction among cooperating cardiologists

Notes: ITS = interrupted time series; RCT = randomized controlled trial; SR = systematic review
Country abbreviations: AU = Australia; CA = Canada; DK = Denmark; EG = Egypt; JP = Japan; KR = Korea; PH = Philippines; SAU = Saudi Arabia; SG = Singapore; US = United States of America; ZA = South Africa; ZM = Zambia.
of accreditation schemes, both within and across countries, findings from the
literature have to be interpreted with care and generalizability is limited.
8.4.2 Certification
There is little published research or descriptive evidence for the effectiveness of
certification in healthcare. A review conducted by NOKC (2006) covered both
accreditation and certification but only two of the references retrieved from
the literature review complied with the inclusion criteria, of which one study
was related to accreditation and one study to certification. The latter suggests
that a quality system according to ISO 9001 might result in cost reduction of medical expenses (–6.1%) and total laboratory costs (–35.2%) and an increase of satisfaction among cooperating cardiologists. However, the study was of low quality and conducted in a pre-and-post design without a control group. An update of the review in 2009 could not identify further studies for inclusion (NOKC, 2009) (see Table 8.4).

Table 8.5 Recent large-scale and experimental research on effectiveness and costs of healthcare accreditation

Study | Design | Country | Aim | Hospitals (patients) | Key findings

Pross et al., 2018 | secondary data analysis | DE | to assess the impact of hospital accreditation on 30-day mortality of stroke patients | n = 1100–1300 per year, from 2006 to 2014
• no effects of hospital accreditation on 30-day stroke mortality were shown

Lam et al., 2018 | secondary data analysis | US | to assess the association between accreditation and mortality | n = 4 400 (> 4 mio.)
• hospital accreditation was not associated with lower mortality, and was only slightly associated with reduced readmission rates for the 15 common medical conditions selected in this study

Falstie-Jensen, Bogh & Johnsen, 2018 | nationwide population-based study from 2012 to 2015 | DK | to examine the association between compliance with hospital accreditation and 30-day mortality | n = 25 (> 170 000)
• persistent low compliance with the DDKM (in Danish: Den Danske Kvalitetsmodel) accreditation was associated with higher 30-day mortality and longer length of stay compared to high compliance
• no difference was seen for acute readmission

Bogh et al., 2017 | multilevel, longitudinal, stepped-wedge study | DK | to analyse the effectiveness of hospital accreditation on process indicators | n = 25
• the impact of accreditation varied across conditions: heart failure and breast cancer improved less than other disease areas, and diagnostic processes were less enhanced than other types of processes
• hospital characteristics were not reliable predictors for assessing the effects of accreditation

Mumford et al., 2015a | mixed method study | AU | to evaluate the costs of hospital accreditation | n = 6
• the average costs through the four-year accreditation cycle ranged from 0.03% to 0.60% of total hospital operating costs per year
• extrapolated to national level this would accumulate to $A36.83 mio. (0.1% of acute public hospital expenditure)
• a limitation is the small sample size (n = 6) of hospitals

Mumford et al., 2015b | retrospective cohort study | AU | to analyse the impact of accreditation scores on SAB rates in hospitals | n = 77
• significantly reduced SAB rates (1.34 per 100 000 bed days to 0.77 per 100 000 bed days)
• although the authors support using SAB rates to measure the impact of infection control programmes, there is less evidence whether accreditation scores reflect the implementation status of infection control standards

Saleh et al., 2013 | observational cross-sectional designed survey | LBN | to assess hospitals’ views on accreditation as a worthy or not worthy investment | n = 110
• most hospitals had increased expenditure in training of staff (95.8%), consultants’ costs (80%) and infrastructure maintenance (77.1%)
• nearly two thirds (64.3%) of all responding hospitals considered accreditation a worthy investment
• the most common arguments were that accreditation has positive effects on quality and safety

Shaw et al., 2014 | mixed method multilevel cross-sectional design | CZ, DE, ES, FR, PL, PRT, TUR | to assess the effect of certification and accreditation on quality management in four clinical services | n = 73
• accreditation and certification are positively associated with clinical leadership, systems for patient safety and clinical review, but not with clinical practice
• both systems promote structures and processes which support patient safety and clinical organization but have limited effect on the delivery of evidence-based patient care

Shaw et al., 2010b | cross-sectional study | BE, CZ, ES, FR, IRL, PL, UK | to assess the association between type of external assessment and a composite score of hospital quality | n = 71
• quality and safety structures and procedures were more evident in hospitals with either type of external assessment
• the overall composite score was highest for accredited hospitals, followed by hospitals with ISO certification

Note: SAB = Staphylococcus aureus bacteraemia.
Country abbreviations: AU = Australia; BE = Belgium; CZ = Czechia; DE = Germany; ES = Spain; FR = France; IRL = Ireland; LBN = Lebanon; PL = Poland; PRT = Portugal; TUR = Turkey; UK = United Kingdom; US = United States.
Another review that specifically aimed to assess the effects of ISO 9001 certification and the European Foundation for Quality Management (EFQM)
excellence model on improving hospital performance included a total of seven
studies (Yousefinezhadi et al., 2015). Four of them related to ISO certification,
reporting the results of four quasi-experimental studies from Germany, Israel,
Spain and the Netherlands. Implementation of ISO 9001 was found to increase
the degree of patient satisfaction, patient safety and cost-effectiveness. Moreover,
the hospital admissions process was improved and the percentage of unscheduled
returns to the hospital decreased. However, the review authors conclude that
there is a lack of robust evidence regarding the effectiveness of ISO 9001 (and
EFQM) because most results stem from observational studies.
Two of the original studies identified in our literature search assessed the impact
of accreditation and certification on hospital performance (see Table 8.5). The
study conducted by Shaw et al. (2014), covering 73 hospitals in seven European
countries, showed that ISO certification (and accreditation) is positively associated
with clinical leadership, systems for patient safety and clinical review. Moreover,
ISO certification (and accreditation) was found to promote structures and processes which support patient safety and clinical organization. However, no or
limited effects were found with regard to clinical practices, such as the delivery
of evidence-based patient care. The second study, covering 71 hospitals in seven
countries, also assessed the effect of both accreditation and certification. It suggested that accredited hospitals showed better adherence to quality management
standards than certified hospitals, but that compliance in both was better than
in non-certified hospitals (Shaw et al., 2010b).
In addition, several descriptive single-centre studies discuss motivations, processes
and experiences of certification (Staines, 2000). While these studies are relevant
to inform managers, their contribution to answering the question as to whether
certification is effective is limited. The authors of two reviews that address more
broadly the lessons learned from the evaluations of ISO certification acknowledge
that the majority of studies on ISO certification in healthcare are supported
by descriptive statistics and surveys only, thus not allowing causal inference on
the impact of certification on organizational performance and other outcomes
(Sampaio et al., 2009; Sampaio, Saraiva & Monteiro, 2012).
External institutional strategies: accreditation, certification, supervision 225
8.4.3 Supervision
There is little published research on the effectiveness of supervision in healthcare.
Results of one review (Sutherland & Leatherman, 2006), covering three studies conducted in England and the US, suggest that the prospect of inspection
catalyzes organizational efforts to measure and improve performance. Although
inspections rarely uncover issues that are unknown to managers, they are able to
focus attention and motivate actors to address problems. However, the review
authors concluded that evidence is drawn from a small number of observational
studies and therefore the links between regulation and improvements in quality
are primarily associative rather than causal.
No further studies on the effectiveness and/or cost-effectiveness of supervision were identified. Thus, whereas evidence on accreditation and certification is inconsistent or lacks experimental data, evidence on the effects of supervision is almost non-existent.
Facilitators
Organizational factors:
• staff engagement and communication
• multidisciplinary team-building and collaboration
• change in organizational culture
• enhanced leadership and staff training
• integration and utilization of information
• increased resources dedicated to CQI
System-wide factors:
• additional funding
• public recognition

Barriers
Organizational factors:
• organizational culture of resistance to change
• increased staff workload
• lack of awareness about CQI
• insufficient staff training and support for CQI
• lack of applicable standards for local use
• lack of performance measures
System-wide factors:
• Hawthorne effects and opportunistic behaviours
References
Accreditation Canada (2018). Find an Internationally Accredited Service Provider. Available at:
https://accreditation.ca/intl-en/find-intl-accredited-service-provider/, accessed 28 November
2018.
Alkhenizan A, Shaw C (2011). Impact of accreditation on the quality of healthcare services: a
systematic review of the literature. Annals of Saudi Medicine, 31(4):407–16.
Bogh SB et al. (2017). Predictors of the effectiveness of accreditation on hospital performance: a
nationwide stepped-wedge study. International Journal for Quality in Health Care, 29(4):477–83.
BSI (2016). Publicly Available Specification (PAS) 1616:2016 Healthcare – Provision of clinical
services – Specification. British Standards Institution. Available at: https://shop.bsigroup.com/
ProductDetail/?pid=000000000030324182, accessed 21 March 2019.
Brubakk K et al. (2015). A systematic review of hospital accreditation: the challenges of measuring
complex intervention effects. BMC Health Services Research, 15:280. doi: 10.1186/s12913-015-0933-x.
DNV GL (2018). Accredited hospitals. Available at: https://www.dnvgl.com/assurance/healthcare/
accredited-hospitals.html, accessed 28 November 2018.
Eismann S (2011). “European Supervisory Bodies and Patient Safety”. Survey of 15 respondents
(Belgium, Denmark, England, Estonia, Finland, France, Germany, Ireland, Lithuania,
Netherlands, Northern Ireland, Norway, Scotland, Slovenia, Sweden). CQC.
EPSO (2018). EPSO member countries and partners (including not active). Available at:
http://www.epsonet.eu/members-2.html, accessed 28 November 2018.
European Observatory on Health Systems and Policies (2018). Health Systems in Transitions
Series. Available at: http://www.euro.who.int/en/who-we-are/partners/observatory/health-
systems-in-transition-hit-series/countries, accessed 29 November 2018.
Falstie-Jensen AM, Bogh SB, Johnsen SP (2018). Consecutive cycles of hospital accreditation:
Persistent low compliance associated with higher mortality and longer length of stay.
International Journal for Quality in Health Care, 30(5):382–9. doi: 10.1093/intqhc/mzy037.
Flodgren G, Gonçalves-Bradley DC, Pomey MP (2016). External inspection of compliance with
standards for improved healthcare outcomes. Cochrane Database of Systematic Reviews, 12:
CD008992. doi: 10.1002/14651858.CD008992.pub3.
Flodgren G et al. (2011). Effectiveness of external inspection of compliance with standards in
improving healthcare organisation behaviour, healthcare professional behaviour or patient
outcomes. Cochrane Database of Systematic Reviews, 11:CD008992. doi: 10.1002/14651858.CD008992.pub2.
Fortes MT et al. (2011). Accreditation or accreditations? A comparative study about accreditation
in France, United Kingdom and Cataluña. Revista da Associação Médica Brasileira (English
Edition), 57(2):234–41. doi: 10.1016/S2255-4823(11)70050-9.
Greenfield D, Braithwaite J (2008). Health sector accreditation research: a systematic review.
International Journal for Quality in Health Care, 20(3):172–83.
ISO (2018). ISO Survey of certifications to management system standards. Full results. Available
at: https://isotc.iso.org/livelink/livelink?func=ll&objId=18808772&objAction=browse&view
Type=1, accessed 26 November 2018.
ISQua (2015). Guidelines and Principles for the Development of Health and Social Care standards.
Available at: http://www.isqua.org/reference-materials.htm, accessed 28 November 2018.
ISQua (2018a). ISQua’s International Accreditation Programme. Current Awards. Available at:
https://www.isqua.org/accreditation.html, accessed 28 November 2018.
ISQua (2018b). ISQua Research. Available at: https://www.isqua.org/research.html, accessed 29
November 2018.
JCI (2018). JCI – Accredited Organizations. Available at: https://www.jointcommissioninternational.
org/about-jci/jci-accredited-organizations/, accessed 28 November 2018.
Khangura S et al. (2012). Evidence summaries: the evolution of a rapid review approach. Systematic
Reviews 1(10). doi: 10.1186/2046-4053-1-10.
Lam MB et al. (2018). Association between patient outcomes and accreditation in US hospitals:
observational study. BMJ (Clinical research edition), 363, k4011. doi: 10.1136/bmj.k4011.
Legido-Quigley H et al. (2008). Assuring the quality of health care in the European Union: a case
for action. Copenhagen: WHO Regional Office for Europe.
Lerda D et al. (2014). Report of a European survey on the organization of breast cancer care
services. European Commission. Joint Research Centre. Institute for Health and Consumer
Protection. Luxembourg (JRC Science and Policy Reports).
230 Improving healthcare quality in Europe
Mumford V et al. (2013). Health services accreditation: what is the evidence that the benefits
justify the costs? International Journal for Quality in Health Care, 25(5):606–20. doi: 10.1093/
intqhc/mzt059.
Mumford V et al. (2015a). Counting the costs of accreditation in acute care: an activity-based
costing approach. BMJ Open, 5(9).
Mumford V et al. (2015b). Is accreditation linked to hospital infection rates? A 4-year, data linkage study of Staphylococcus aureus rates and accreditation scores in 77 Australian hospitals. International Journal for Quality in Health Care, 27(6):479–85.
Ng GKB et al. (2013). Factors affecting implementation of accreditation programmes and the impact of the accreditation process on quality improvement in hospitals: a SWOT analysis. Hong Kong Medical Journal, 19(5):434–46. doi: 10.12809/hkmj134063.
NOKC (2006). Effect of certification and accreditation on hospitals. Rapport fra Kunnskapssenteret:
nr 27-2006. Oslo: Norwegian Knowledge Centre for the Health Services. Available at:
https://www.fhi.no/globalassets/dokumenterfiler/rapporter/2009-og-eldre/rapport_0627_
isosertifisering_akkreditering_ver1.pdf, accessed 1 April 2019.
NOKC (2009). Effect of certification and accreditation on hospitals. Rapport fra Kunnskapssenteret:
nr 30-2009. Oslo: Norwegian Knowledge Centre for the Health Services. Available at:
https://www.fhi.no/globalassets/dokumenterfiler/rapporter/2009-og-eldre/rapport_0930_
sertifisering_akkreditering_sykehus.pdf, accessed 1 April 2019.
Pross C et al. (2018). Stroke units, certification, and outcomes in German hospitals: a longitudinal
study of patient-based 30-day mortality for 2006–2014. BMC Health Services Research,
18(1):880. doi: 10.1186/s12913-018-3664-y.
Saleh SS et al. (2013). Accreditation of hospitals in Lebanon: is it a worthy investment? International
Journal for Quality in Health Care, 25(3):284–90.
Sampaio P, Saraiva P, Monteiro A (2012). ISO 9001 Certification pay-off: myth versus reality.
International Journal of Quality and Reliability Management, 29(8):891–914.
Sampaio P et al. (2009). ISO 9001 certification research: questions, answers and approaches.
International Journal of Quality and Reliability Management, 26:36–58.
Shaw CD (2001). External assessment of health care. BMJ, 322(7290):851–4.
Shaw CD et al. (2010a). Sustainable Healthcare Accreditation: messages from Europe in 2009.
International Journal for Quality in Health Care, 22:341–50.
Shaw CD et al. (2010b). Accreditation and ISO certification: Do they explain differences in quality
management in European hospitals? International Journal for Quality in Health Care, 22:445–51.
Shaw CD et al. (2013). Profiling health-care accreditation organizations: an international survey.
International Journal for Quality in Health Care, 25(3):222–31.
Shaw CD et al. (2014). The effect of certification and accreditation on quality management in
4 clinical services in 73 European hospitals. International Journal for Quality in Health Care,
26(S1):100–6.
Staines A (2000). Benefits of an ISO 9001 certification – the case of a Swiss regional hospital.
International Journal of Health Care Quality Assurance Incorporating Leadership in Health
Services, 13:27–33.
Sutherland K, Leatherman S (2006). Regulation and quality improvement. A review of the
evidence. The Health Foundation.
Sweeney J, Heaton C (2000). Interpretations and variations of ISO 9000 in acute health care.
International Journal for Quality in Health Care, 12:203–9.
De Walcque C et al. (2008). Comparative study of hospital accreditation programs in Europe.
KCE (KCE reports, 70C). Available at: https://kce.fgov.be/sites/default/files/atoms/files/
d20081027303.pdf, accessed 29 November 2018.
WHO (2003). Quality and accreditation in health care services: A global review. Geneva:
World Health Organization. Available at: http://www.who.int/hrh/documents/en/quality_
accreditation.pdf, accessed 3 December 2018.
WHO, OECD, World Bank (2018). Delivering quality health services: a global imperative for
universal health coverage. Geneva. (CC BY-NC-SA 3.0 IGO).
Wiysonge CS et al. (2016). Public stewardship of private for-profit healthcare providers in low-
and middle-income countries (Review). Cochrane Database Systematic Reviews, 8(CD009855).
Yousefinezhadi T et al. (2015). The Effect of ISO 9001 and the EFQM Model on Improving
Hospital Performance: A Systematic Review. Iranian Red Crescent Medical Journal, 17(12):1–5.
Chapter 9
Clinical Practice Guidelines
as a quality strategy
Summary
multiple levels of clinical guideline development, with regional and local bodies as
well as several professional organizations contributing to the centrally coordinated
process; finally, fewer countries had no central coordination of the guideline development process at all: professional associations or providers often step in to fill the
void. Countries with “well established” activities and wide experience in guideline
development and implementation include Belgium, England, France, Germany and
the Netherlands; many others have introduced some form of guideline production.
There is no newer systematically collected evidence along these lines, but varying degrees of progress can be expected among countries depending on recent
reform activity.
Target group/Users
• Evidence-based medicine: Clinicians
• Clinical guidelines: Healthcare professionals
• Health Technology Assessment: Decision-makers, Managers

Target population
• Evidence-based medicine: Individual patients
• Clinical guidelines: Patient groups (also applied to individual clinical decision-making)
• Health Technology Assessment: Population, population groups

Context of application
• Evidence-based medicine: Clinical decision-making
• Clinical guidelines: Clinical decision-making (inter alia to address unjustified practice variation)
• Health Technology Assessment: Coverage decisions, investments, regulation

Methods
• Evidence-based medicine: Systematic reviews, meta-analyses, decision analyses
• Clinical guidelines: Systematic reviews, meta-analyses, decision analyses
• Health Technology Assessment: Systematic reviews, meta-analyses, clinical trials, economic evaluations; ethical, sociocultural, organizational, legal analyses

Challenges
• Evidence-based medicine: Lack of specific methodology; requires user training; potentially hampered by reality of service provision
• Clinical guidelines: Lack of specific methodology; requires user training; potentially hampered by reality of service provision
• Health Technology Assessment: Lack of evidence; impact difficult to measure; frequently only considers medical and economic aspects

Source: adapted from Perleth et al., 2013
[Figure: Process of care – Outcomes of care]
• “statements that include recommendations intended to optimize patient care that are
informed by a systematic review of evidence and an assessment of the benefits and
harms of alternative care.”
A clinical standard:
A clinical indicator:
• describes a measurable component of the standard, with explicit criteria for inclusion,
exclusion, timeframe and setting.
A clinical tool:
For clinical guidelines to have an actual impact on processes and ultimately out-
comes of care, they need to be well developed and based on scientific evidence.
Efforts to identify the attributes of high-quality clinical guidelines prompted
extensive debates on which criteria are most important. Desirable attributes of
clinical guidelines were defined by the IOM in 1990 (Box 9.2). The Council
of Europe (2001) endorsed both the use of guidelines themselves and the
importance of developing them based on a sound methodology and reliable
scientific evidence so as to support best practice. With the increasing interest in
the implications of guideline use, methodologies for their development, critical
assessment, dissemination and implementation, as well as their adaptation and
updating, have been developed and several studies on their appropriateness and
usefulness have been carried out (see below).
Reliability and reproducibility: Would another group of experts derive similar guidelines given the same evidence and methodology? Would different caregivers interpret and apply the guideline similarly in identical clinical circumstances?
Clinical applicability: Does the document describe the clinical settings and the population to which the guideline applies?

Regarding guideline development, a number of guidebooks in different formats are available from different actors in different contexts (“guidelines for guidelines”; see, for example, Shekelle et al., 1999; Woolf et al., 2012; and
Schünemann et al., 2014). Since its inception in 2003, the GRADE approach has been increasingly included among guideline development tools (Guyatt et al., 2011; Neumann et al., 2016; Khodambashi & Nytrø, 2017). The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was created by the eponymous working group,1 a collaborative consisting mainly of methodologists and clinicians. It provides a framework for assessing the quality (or “certainty”) of the evidence supporting, inter alia, guideline recommendations and therefore their resulting strength (GRADE Working Group, 2004).
Essentially, GRADE classifies recommendations as strong when a recommended
intervention or management strategy would presumably be chosen by a majority
of patients, clinicians or policy-makers in all care scenarios, and as weak when
different choices could be made (reflecting limited evidence quality, uncertain
benefit-harm ratios, uncertainty regarding treatment effects, questionable cost-effectiveness, or variability in values and preferences (see, for example, Vandvik et al., 2013)). The GRADE evidence-to-decision framework further helps guideline developers in structuring their process and evaluation of available evidence
(Neumann et al., 2016).
1 www.gradeworkinggroup.org

On the user side, several tools to evaluate (“appraise”) the methodological quality of clinical guidelines exist (for example, Lohr, 1994; Vlayen et al., 2005; Siering et al., 2013; Semlitsch et al., 2015). The most commonly used
instrument to assess the quality of a guideline is that developed by the AGREE
(Appraisal of Guidelines for Research and Evaluation) Collaboration, initially
funded through an EU research grant. The instrument comprises 23 criteria
grouped in the following six domains of guideline development addressed by
the AGREE instrument in its second iteration (AGREE II): scope and purpose;
stakeholder involvement; rigour of development; clarity and presentation;
applicability; and editorial independence (Brouwers et al., 2010). To facilitate
the consideration of AGREE II elements already in the guideline development
process, a reporting checklist was created in 2016 (Brouwers et al., 2016). There
have been calls for more content-focused guideline appraisal tools, as existing
options were considered by some to be mainly looking at the documentation
of the guideline development process (Eikermann et al., 2014). At the same
time, there is recognition that the development of good clinical guidelines often
requires trade-offs between methodological rigour and pragmatism (Browman
et al., 2015; Richter Sundberg, Garvare & Nyström, 2017). Several studies have
evaluated the overall quality of guidelines produced in certain contexts, invariably demonstrating that there is considerable variation in how guidelines score
on the various AGREE domains (for example, Knai et al., 2012). However,
there seems to be an overall improvement in quality over time (Armstrong et al.,
2017). Research shows that while guideline appraisals often use arbitrarily set
AGREE cut-off scores to categorize guidelines as being of good or bad quality
(Hoffmann-Eßer et al., 2018b), the scoring of specific criteria, such as rigour of
development and editorial independence, seems to be the main influence on final scores (Hoffmann-Eßer et al., 2018a).
Beyond the methodological quality of the guideline itself, however, the issue of
applicability is also of great importance (see also Box 9.2). Heffner noted that, because guidelines were rarely tested in patient care settings prior to publication (as a drug would be before approval), the quality of clinical guidelines is defined narrowly by an analysis of how closely recommendations are linked to scientific and clinical evidence (Heffner, 1998). This concern remains today, though it
is now more explicitly addressed (see, for example, Steel et al., 2014; Li et al.,
2018), raising the question of whether guidelines should be systematically
pilot-tested in care delivery settings before being finalized. Furthermore, local
contextual considerations often influence how guideline recommendations can
be used. The science of guideline adaptation aims to balance the need for tailored
recommendations with the inefficiency of replicating work already carried out
elsewhere. Here as well, a number of frameworks have been developed to guide
adaptation efforts (Wang, Norris & Bero, 2018).
Finally, considering the speed with which medical knowledge progresses and
the pace of knowledge production at primary research level, it is to be expected
2 www.guideline.gov
3 http://www.g-i-n.net/
May, Montori & Mair, 2009; Gupta, 2011). This issue constitutes a more recent
focus of discussion around guideline development and utilization processes,
with guidelines ideally not only facilitating patient education but also endorsing
engagement and fostering shared decision-making, thus assuring that individual
patient values are balanced against the “desired” outcomes embedded in the trials
that form the basis of the recommendations in the guidelines (see, for example,
van der Weijden et al., 2013). Ideally, guidelines should help in determining
the treatment plan and individual treatment goals before each intervention,
particularly for chronic patients. Different modalities of patient involvement
exist in different contexts: patient group representatives are sometimes included
in the guideline development process and guideline documents are increasingly
produced in different formats for practitioners and patients (see, for example,
G-I-N, 2015; as well as Elwyn et al., 2015; Fearns et al., 2016; Schipper et al.,
2016; Zhang et al., 2017; Cronin et al., 2018).
In summary, clinical guidelines have the potential to influence mainly processes
and ultimately outcomes of care, targeting primarily professionals and dealing
with the effectiveness, safety and increasingly also patient-centredness of care.
To fulfill this potential, they need to be:
• kept up-to-date.
The following sections look at how these aspects are addressed in European
countries, and how the potential contribution of clinical guidelines to quality
of care can be understood and optimized.
• The first category included those with “well established” activities and
wide experience in guideline development and implementation. This
category comprised the leaders in guideline development (Belgium,
England, France, Germany and the Netherlands) and other countries
that had, and have, well established programmes (Denmark, Finland,
Italy, Norway and Sweden).
• The third category involved cases where clinical guidelines had either
been “recently adopted” or were “in the planning stage” at the time
of investigation.
The majority of countries had no legal basis for the development and implementation of clinical guidelines. Only 13 reported having an officially established basis for guidelines, although implementation still mostly took place
on a voluntary basis. Such examples are the French Health Authority (Haute Autorité de Santé, HAS) and the National Disease Management Guidelines
Programme in Germany (Programm für Nationale Versorgungsleitlinien, NVL),
which develop clinical guidelines, disseminate them and evaluate their implementation within their respective healthcare system. In France, while clinical
guidelines are established by national regulations, their use by practitioners is
not mandatory and an initial phase of financial penalties for non-compliance
was soon abandoned. In Germany, the NVL programme is run by the highest
authorities in the self-governance of physicians, the German Medical Association
(Bundesärztekammer), the National Association of Statutory Health Insurance
Physicians (Kassenärztliche Bundesvereinigung), and the Association of the
Scientific Medical Societies in Germany (Arbeitsgemeinschaft der Wissenschaftlichen
Medizinischen Fachgesellschaften, AWMF). NVL guidelines follow a defined
methodology (Bundesärztekammer, 2017) and usually inform the content of
national disease management programmes (DMPs). Physicians who are voluntarily enrolled in these programmes sign an obligation to rely on the DMP
standards and to document their (non)-compliance (see also Stock et al., 2011);
however, the mission statement of the NVL programme clearly highlights that
244 Improving healthcare quality in Europe
At the other end of the spectrum in the study by Legido-Quigley et al. (2012),
practitioners in countries such as Greece and Slovenia had to rely on their own
efforts to obtain evidence usually produced abroad; at the time of investigation,
professional associations had begun to show interest in the field and both countries
have made progress since then (Albreht et al., 2016; Economou & Panteli, 2019).
checklist, which is based on the AGREE I instrument and adapted to the German
context (see Semlitsch et al., 2015).
In Germany, guidelines are collected and made available by the German guideline
repository (see above and Figure 9.2).4 Among the countries surveyed by Legido-
Quigley et al. (2012) a number of more proactive approaches to dissemination
could be observed, including tailored versions for different target groups and
newsletters. In Sweden, for example, updated clinical guidelines were sent to
each registered practitioner and a short version was compiled for the lay public.
Regarding implementation support tools, some countries reported concrete
measures, including checklists and how-to guides accompanying new guidelines,
as well as IT tools (websites, apps, etc., see below).
Most notably, NICE has a team of implementation consultants who work nationally to encourage a supportive environment and locally to share knowledge and support education and training; additionally, it has developed generic implementation tools (for example, an overall “how-to” guide) and specific tools for every guideline (for example, a costing template and a PowerPoint presentation for use within institutions). Interestingly, NICE’s smartphone app, which allowed users to download guidance and use it offline during practice, was retired at the end of 2018, and users are now encouraged to use the revamped NICE website. This decision reflects developments in IT infrastructures, personal mobile connectivity (i.e. data) limits and NICE’s recognition of the importance of ensuring
clinicians’ access to up-to-date recommendations (NICE, 2018).
In the Netherlands the use of clinical guidelines is promoted through electronic
web pages, some developed with interactive learning. A national website contains
a series of implementation tools5 and certain guideline content is integrated in
electronic patient record systems. The latter was reported as being the cornerstone
of guideline implementation in Finland as well: guidelines are integrated with
the Evidence-Based Medicine electronic Decision Support (EBMeDS) system,
allowing clinicians to open them from within the electronic patient record.
Moreover, summaries, patient versions, PowerPoint slide series and online
courses are developed. In Germany indicator-based approaches are used to
monitor and endorse implementation (see below), while additional tools include
IT-applications in hospitals and the use of guideline-based clinical pathways. At
the time of Legido-Quigley et al.’s investigation in 2011, smartphone applica-
tions to further simplify guideline implementation had also started to appear
(for example, García-Lehuz, Munoz Guajarado & Arguis Molina, 2012). In the
intervening years many developers have produced implementation apps (ranging
from content repositories to interactive operationalization tools) and guideline
repositories have their own app-based platforms for guideline-based decision support (we return to this in the section on good implementation practice, below).
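The kind of electronic-patient-record integration described above for EBMeDS, where a clinician can open a guideline from within the record, can be sketched as a simple lookup from coded diagnoses to guideline links. This is an illustrative sketch only: the codes, guideline titles and URLs below are hypothetical assumptions, not real EBMeDS data or APIs.

```python
# Hypothetical sketch: surfacing guideline links from within an electronic
# patient record, loosely in the spirit of the EBMeDS-style integration
# described above. All codes, titles and URLs are illustrative assumptions.

GUIDELINE_INDEX = {
    "E11": ("Type 2 diabetes guideline", "https://guidelines.example.org/t2d"),
    "I10": ("Hypertension guideline", "https://guidelines.example.org/htn"),
}

def guideline_links(problem_list):
    """Return (title, url) pairs for coded problems with a matching guideline."""
    links = []
    for code in problem_list:
        # Match on the ICD-10 category, i.e. the first three characters
        # ("E11.9" -> "E11"); problems without a matching guideline are skipped.
        entry = GUIDELINE_INDEX.get(code[:3])
        if entry:
            links.append(entry)
    return links

# Opening a record with coded diagnoses yields the relevant guideline links
print(guideline_links(["E11.9", "J45.0"]))
```

In a real system the index would be maintained by the guideline repository and the lookup triggered by the record system, but the decision-support principle is the same: guideline content keyed to structured patient data.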
4 www.awmf.org
5 http://www.ha-ring.nl/
with lack of awareness (Cabana et al., 1999) and the reluctance of physicians to
change their approach to the management of disease (Michie & Johnston, 2004).
A public survey on NICE guidelines discovered that awareness of a guideline
did not necessarily imply that respondents understood or knew how to use it
(McFarlane et al., 2012). A related study carried out in the German primary
care context found awareness of clinical guidelines to be relatively low and the
inclination to treat according to guidelines not to be higher – and occasionally
even lower – in those practitioners who were aware of their existence compared
to those who were not (Karbach et al., 2011). Similarly, a study in the French
primary care context concluded that, while a favourable disposition towards
guidelines in general meant a higher likelihood of awareness of specific guidelines, it did not have a significant effect on the actual application of guideline
recommendations in practice (Clerc et al., 2011). Cook et al. (2018) showed
that while clinicians believed practice variation should be reduced, they were
less certain that this can be achieved. In the Swiss context, despite a generally
favourable disposition towards guidelines, barriers to adherence comprised lack of guideline awareness and familiarity, limited applicability of existing guidelines to multimorbid patients, unfavourable guideline factors and lack of time, as well
as inertia towards changing previous practice (Birrenbach et al., 2016). In a
scoping review capturing evidence published up to the end of 2015, Fischer et
al. (2016) found that barriers to guideline implementation can be differentiated
into personal factors, guideline-related factors and external factors, and that
structured implementation can improve guideline adherence.
Regarding drivers towards guideline awareness and utilization, Francke et al.
(2008) showed that the simpler a guideline is to follow, the more likely it is to
be accepted by practitioners. Work by Brusamento et al. (2012) supports the
conclusions already drawn by Grimshaw et al. (2004) that the effect of different implementation strategies on care processes varies, ranging from non-existent to moderate, with no clear advantage of multifaceted over single interventions.
The latter finding was confirmed by a review of reviews in 2014 (Squires et al.,
2014), as well as for specific areas of care (for example, Suman et al., 2016).
Looking at the issue of guideline adherence over time, recent work found that it decreased in about half of cases more than one year after implementation interventions, but the evidence was generally too heterogeneous to draw robust conclusions (Ament et al., 2015). A number of studies have tackled the
concept of guideline “implementability” in the past few years and are discussed
more closely in the next section.
Early work investigating the effects of guidelines on outcomes in primary care
found little evidence of effect, citing methodological limitations of the evidence
body (Worrall, Chaulk & Freake, 1997). Evidence from the Netherlands also
suggests that while clinical guidelines can be effective in improving the process
and structure of care, their effects on patient health outcomes were studied far
less and data are less convincing (Lugtenberg, Burgers & Westert, 2009). This was
substantiated by further work in the area (Grimshaw et al., 2012). The systematic
review by Brusamento et al. (2012) confirmed the lack of conclusive evidence:
while significant effects had been measured, for example regarding the percentage
of patients who achieved strict hypertension control through guideline compliant
treatment, other studies showed no or unclear effects of guideline-concordant
treatment. Newer studies also show mixed results regarding the effect of guidelines
on outcomes, but a clear link with implementation modalities (Roberts et al.,
2016; Cook et al., 2018; Kovacs et al., 2018; Shanbhag et al., 2018).
Regarding cost-effectiveness, the scope of evidence is even more limited. A
comprehensive analysis should include the costs of the development phase,
the dissemination/implementation and the change determined in the health
service by putting the guideline into practice. However, in practice data on the
cost of guideline development are scarce and – given the vast variability of settings and practices – likely not generalizable (Köpp et al., 2012; Jensen et al.,
2016). A systematic review by Vale et al. (2007) pointed out that among 200
studies on guideline implementation strategies (only 11 from Europe), only
27% had some data on cost and only four provided data on development and
implementation. Most of the relevant studies only partially accounted for costs
incurred in the process of guideline production. Having said that, NICE has
developed methods to assess the resource impact of its guidelines; for a subset
of cost-saving guidelines, savings ranged from £690 to £31 500 per 100 000
population. An investigation of one component of guideline use, namely that of
active implementation in comparison to general dissemination practices, found
that while the former requires a substantial upfront investment, results regarding
optimized processes of care and improved patient outcomes may not be sufficient
to render it cost-effective (Mortimer et al., 2013). A related but separate issue
is the use of cost-effectiveness analyses in clinical guidelines; challenges and
opportunities have been identified in the international literature (Drummond,
2016; Garrison, 2016).
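To put a per-100 000-population savings range of the kind reported for NICE’s cost-saving guidelines into perspective, it can be scaled to a whole population. A minimal sketch follows; the population figure is an illustrative assumption, not a value from the text.

```python
# Back-of-the-envelope sketch: scaling savings expressed per 100 000
# population to a national total. The population value is an assumed round
# figure for illustration only.

def scale_savings(saving_per_100k: float, population: int) -> float:
    """Scale a saving (in GBP) expressed per 100 000 population to a total."""
    return saving_per_100k * population / 100_000

population = 56_000_000   # assumed population, roughly that of England
low, high = 690, 31_500   # illustrative savings range per 100 000 population

print(f"£{scale_savings(low, population):,.0f} to "
      f"£{scale_savings(high, population):,.0f}")
# → £386,400 to £17,640,000
```

The two-orders-of-magnitude spread at national scale illustrates why resource-impact assessments are produced per guideline rather than generalized.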
Creation of content: The four interrelated domains of content creation are (i) stakeholder
involvement (including credibility of the developers and disclosure of conflicts of interest); (ii)
evidence synthesis (specifying what evidence is needed and how and when it is synthesized);
(iii) considered judgement (including clinical applicability and values); and (iv) feasibility (local
applicability, resource constraints and novelty). These domains may be considered non-sequentially
and iteratively.
The need for rapid responses in emergency situations (for example, epidemics) has
prompted research into so-called “rapid guidelines”, which approach the balance
between expedience of process, methodological rigour and implementability in
a systematic manner (Florez et al., 2018; Kowalski et al., 2018; Morgan et al.,
2018). Another consideration in this direction is the potential of observational
6 https://www.decide-collaboration.eu/
did not have publicly available policies and of the available policies several did
not clearly report critical steps in obtaining, managing and communicating
disclosure of relationships of interest (Morciano et al., 2016). Recent work
from Germany indicates that while financial conflicts of interest seem to be
adequately disclosed in the most rigorously developed guidelines, active management of existing conflicts of interest is lagging behind (Napierala et al., 2018);
this is also reflected in work from Canada, which discovered frequent relations
between guideline producing institutions and, for example, the pharmaceutical
industry and no clear management strategy (Campsall et al., 2016; Shnier et
al., 2016). This type of issue was also identified in Australia, where roughly one in four guideline authors who had disclosed no ties to pharmaceutical companies in fact had potentially relevant undisclosed ties (Moynihan et al., 2019). To foster trust
and implementation, it is clear that institutions involved in guideline development should invest resources in explicitly collecting all relevant information and
establish clear management criteria; the structure of disclosure formats also has
a role to play here (Lu et al., 2017).
Box 9.4 shows the conflicts of interest management principles defined by the
Guidelines International Network (Schünemann et al., 2015). In Germany
the website Leitlinienwatch.de (“guideline watch”) uses an explicit evaluation
matrix to appraise how new German guidelines address the issue of financial
conflicts of interest. Beyond measures for direct financial conflicts of interest,
the management of indirect conflicts of interest (for example, issues related
to academic advancement, clinical revenue streams, community standing and
engagement in academic activities that foster an attachment to a specific point of
view, cf. Schünemann et al., 2015) is also important in guideline development.
Ensuring that guidelines are developed based on robust consensus processes by
a multidisciplinary panel can contribute to mitigating the effect of such conflicts
(see, for instance, Ioannidis, 2018).
Box 9.4 G-I-N principles for dealing with conflicts of interests in guideline
development
• Principle 1: Guideline developers should make all possible efforts to not include members
with direct financial or relevant indirect conflicts of interest.
• Principle 2: The definition of conflict of interest and its management applies to all
members of a guideline development group, regardless of the discipline or stakeholders
they represent, and this should be determined before a panel is constituted.
• Principle 3: A guideline development group should use standardized forms for disclosure
of interests.
• Principle 4: A guideline development group should disclose interests publicly, including
all direct financial and indirect conflicts of interest, and these should be easily accessible
for users of the guideline.
• Principle 5: All members of a guideline development group should declare and update any
changes in interests at each meeting of the group and at regular intervals (for example,
annually for standing guideline development groups).
• Principle 6: Chairs of guideline development groups should have no direct financial or
relevant indirect conflicts of interest. When direct or indirect conflicts of interest of a
chair are unavoidable, a co-chair with no conflicts of interest who leads the guideline
panel should be appointed.
• Principle 7: Experts with relevant conflicts of interest and specific knowledge or expertise
may be permitted to participate in discussion of individual topics, but there should be
an appropriate balance of opinion among those sought to provide input.
• Principle 8: No member of the guideline development group deciding about the direction
or strength of a recommendation should have a direct financial conflict of interest.
• Principle 9: An oversight committee should be responsible for developing and implementing
rules related to conflicts of interest.
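The eligibility checks implied by Principles 1, 6 and 8 above lend themselves to a simple screening procedure. The sketch below is illustrative only: the record fields are assumptions, not a G-I-N specification, and real conflict-of-interest management also involves judgement, balance of opinion and oversight (Principles 7 and 9).

```python
# Illustrative sketch of the screening logic implied by G-I-N Principles 1, 6
# and 8. Member records are hypothetical dicts with two assumed boolean fields.

def may_join_panel(member: dict) -> bool:
    """Principle 1: avoid members with direct financial or relevant indirect COI."""
    return not (member["direct_financial_coi"] or member["relevant_indirect_coi"])

def may_chair(member: dict) -> bool:
    """Principle 6: chairs should have no direct financial or relevant indirect COI."""
    return may_join_panel(member)

def may_vote_on_recommendation(member: dict) -> bool:
    """Principle 8: no direct financial COI among those deciding a recommendation."""
    return not member["direct_financial_coi"]

# An expert with an indirect conflict: excluded from panel and chair roles by
# the strict reading above, but not barred from voting by Principle 8 alone
expert = {"direct_financial_coi": False, "relevant_indirect_coi": True}
print(may_join_panel(expert), may_chair(expert), may_vote_on_recommendation(expert))
# → False False True
```

The gap the example exposes, a member who passes Principle 8 but fails Principle 1, is exactly why the principles are meant to operate together rather than as isolated rules.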
References
Agoritsas T et al. (2015). Decision aids that really promote shared decision making: the pace
quickens. BMJ, 350:g7624.
Akl EA et al. (2017). Living systematic reviews: 4. Living guideline recommendations. Journal of
Clinical Epidemiology, 91:47–53. doi: 10.1016/j.jclinepi.2017.08.009.
Albreht T et al. (2016). Slovenia: Health system review. Health Systems in Transition, 18(3):1–207.
Althaus A et al. (2016). Implementation of guidelines – obstructive and beneficial factors. Cologne:
Institute for Quality and Efficiency in Health Care.
Ament SM et al. (2015). Sustainability of professionals’ adherence to clinical practice
guidelines in medical care: a systematic review. BMJ Open, 5(12):e008073. doi: 10.1136/
bmjopen-2015-008073.
Armstrong JJ et al. (2017). Improvement evident but still necessary in clinical practice guideline
quality: a systematic review. Journal of Clinical Epidemiology, 81:13–21.
AWMF (2012). Ständige Kommission Leitlinien. AWMF-Regelwerk “Leitlinien”. Available at:
http://www.awmf.org/leitlinien/awmf-regelwerk.html, accessed 14 April 2019.
Birrenbach T et al. (2016). Physicians’ attitudes toward, use of, and perceived barriers to clinical
guidelines: a survey among Swiss physicians. Advances in Medical Education and Practice,
7:673–80. doi: 10.2147/AMEP.S115149.
Blozik E et al. (2012). Simultaneous development of guidelines and quality indicators – how
do guideline groups act? A worldwide survey. International Journal of Health Care Quality
Assurance, 25(8):712–29.
Brouwers M et al. (2010). AGREE II: Advancing guideline development, reporting and evaluation
in healthcare. Canadian Medical Association Journal, 182:E839–842.
Brouwers MC et al. (2015). The Guideline Implementability Decision Excellence Model
(GUIDE-M): a mixed methods approach to create an international resource to advance the
practice guideline field. Implementation Science, 10:36. doi: 10.1186/s13012-015-0225-1.
Brouwers MC et al. (2016). The AGREE Reporting Checklist: a tool to improve reporting of
clinical practice guidelines. BMJ, 352:i1152. doi: 10.1136/bmj.i1152.
Browman GP et al. (2015). When is good, good enough? Methodological pragmatism for sustainable
guideline development. Implementation Science, 10:28. doi: 10.1186/s13012-015-0222-4.
Brusamento S et al. (2012). Assessing the effectiveness of strategies to implement clinical guidelines
for the management of chronic diseases at primary care level in EU Member States: a systematic
review. Health Policy, 107(2–3):168–83.
Bundesärztekammer (2017). Programm für Nationale Versorgungsleitlinien – Methodenreport.
5. Auflage 2017. doi: 10.6101/AZQ/000169.
Cabana MD et al. (1999). Why don’t physicians follow clinical practice guidelines? A framework
for improvement. Journal of the American Medical Association, 282(15):1458–65.
Campsall P et al. (2016). Financial Relationships between Organizations that Produce Clinical
Practice Guidelines and the Biomedical Industry: a Cross-Sectional Study. PLoS Medicine,
13(5):e1002029.
Chan WV et al. (2017). ACC/AHA Special Report: Clinical Practice Guideline Implementation
Strategies: a Summary of Systematic Reviews by the NHLBI Implementation Science Work
Group: a Report of the American College of Cardiology/American Heart Association Task Force
on Clinical Practice Guidelines. Journal of the American College of Cardiology, 69(8):1076–92.
Clerc I et al. (2011). General practitioners and clinical practice guidelines: a reexamination. Medical
Care Research and Review, 68(4):504–18.
Cook DA et al. (2018). Practice variation and practice guidelines: attitudes of generalist and
specialist physicians, nurse practitioners, and physician assistants. PloS One, 13(1):e0191943.
Council of Europe (2001). Recommendation of the Committee of Ministers to Member States on
developing a methodology for drawing up guidelines on best medical practices. Available at:
https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=09000016804f8e51, accessed
14 April 2019.
Cronin RM et al. (2018). Adapting medical guidelines to be patient-centered using a patient-
driven process for individuals with sickle cell disease and their caregivers. BMC Hematology,
18:12. doi: 10.1186/s12878-018-0106-3.
Drummond M (2016). Clinical Guidelines: a NICE Way to Introduce Cost-Effectiveness
Considerations? Value in Health, 19(5):525–30. doi: 10.1016/j.jval.2016.04.020.
Economou C, Panteli D (2019). Assessment report monitoring and documenting systemic and
health effects of health reforms in Greece. WHO Regional Office for Europe.
Eddy DM (2005). Evidence-based medicine: a unified approach. Health Affairs (Millwood),
24(1):9–17.
Eikermann M et al. (2014). Tools for assessing the content of guidelines are needed to enable
their effective use – a systematic comparison. BMC Research Notes, 7:853. doi: 10.1186/1756-
0500-7-853.
Elliott JH et al. (2014). Living systematic reviews: an emerging opportunity to narrow the
evidence-practice gap. PLoS Medicine, 11(2):e1001603. doi: 10.1371/journal.pmed.1001603.
Elwyn G et al. (2015). Trustworthy guidelines – excellent; customized care tools – even better.
BMC Medicine, 13:199. doi: 10.1186/s12916-015-0436-y.
ESF (2011). Implementation of Medical Research in Clinical Practice. European Science
Foundation. Available at: http://archives.esf.org/fileadmin/Public_documents/Publications/
Implem_MedReseach_ClinPractice.pdf, accessed 14 April 2019.
Fearns N et al. (2016). What do patients and the public know about clinical practice guidelines
and what do they want from them? A qualitative study. BMC Health Services Research, 16:74.
doi: 10.1186/s12913-016-1319-4.
Fischer F et al. (2016). Barriers and Strategies in Guideline Implementation – a Scoping Review.
Healthcare (Basel, Switzerland), 4(3):36. doi: 10.3390/healthcare4030036.
Florez ID et al. (2018). Development of rapid guidelines: 2. A qualitative study with WHO
guideline developers. Health Research Policy and Systems, 16(1):62. doi: 10.1186/s12961-
018-0329-6.
Clinical Practice Guidelines as a quality strategy 259
Francke AL et al. (2008). Factors influencing the implementation of clinical guidelines for health
care professionals: a systematic meta-review. BMC Medical Informatics and Decision Making,
8:38.
Gagliardi AR, Brouwers MC (2015). Do guidelines offer implementation advice to target users?
A systematic review of guideline applicability. BMJ Open, 5(2):e007047. doi: 10.1136/
bmjopen-2014-007047.
Gagliardi AR, Alhabib S, members of Guidelines International Network Implementation Working
Group (2015). Trends in guideline implementation: a scoping systematic review. Implementation
Science, 10:54. doi: 10.1186/s13012-015-0247-8.
Gagliardi AR et al. (2015). Developing a checklist for guideline implementation planning: review
and synthesis of guideline development and implementation advice. Implementation Science,
10:19. doi: 10.1186/s13012-015-0205-5.
Garcia-Lehuz JM, Munoz Guajardo I, Arguis Molina S (2012). Mobile application to facilitate
use of clinical practice guidelines in the Spanish National Health Service. Poster presentation,
G-I-N International Conference, Berlin 22–25 August 2012.
Garrison LP (2016). Cost-Effectiveness and Clinical Practice Guidelines: Have We Reached a
Tipping Point? An Overview. Value in Health, 19(5):512–15. doi: 10.1016/j.jval.2016.04.018.
G-I-N (2015). G-I-N Public Toolkit: patient and Public Involvement in Guidelines. Pitlochry:
Guidelines International Network.
GRADE Working Group (2004). Grading quality of evidence and strength of recommendations.
BMJ, 328(7454):1490–4.
Grimshaw JM, Russell IT (1993). Effect of clinical guidelines on medical practice: a systematic
review of rigorous evaluations. Lancet, 342(8883):1317–22.
Grimshaw JM et al. (2004). Effectiveness and efficiency of guideline dissemination and
implementation strategies. Health Technology Assessment, 8(6):iii–iv, 1–72.
Grimshaw JM et al. (2012). Knowledge translation of research findings. BMC Implementation
Science, 7:50.
Gupta M (2011). Improved health or improved decision making? The ethical goals of EBM.
Journal of Evaluation in Clinical Practice, 17(5):957–63.
Guyatt GH et al. (2011). GRADE guidelines: a new series of articles in the Journal of
Clinical Epidemiology. Journal of Clinical Epidemiology, 64(4):380–2. doi: 10.1016/j.
jclinepi.2010.09.011.
Härter M et al. (2017). Shared decision making in 2017: International accomplishments in
policy, research and implementation. Zeitschrift für Evidenz, Fortbildung und Qualität im
Gesundheitswesen, 123–124:1–5.
Heffner JE (1998). Does evidence-based medicine help the development of clinical practice
guidelines? Chest, 113(3 Suppl):172S–8S.
Hewitt-Taylor J (2006). Evidence-based practice, clinical guidelines and care protocols. In: Hewitt-
Taylor J (ed.). Clinical guidelines and care protocols. Chichester: John Wiley & Sons, pp. 1–16.
Hoffmann-Eßer W et al. (2018a). Guideline appraisal with AGREE II: online survey of the potential
influence of AGREE II items on overall assessment of guideline quality and recommendation
for use. BMC Health Services Research, 18(1):143. doi: 10.1186/s12913-018-2954-8.
Hoffmann-Eßer W et al. (2018b). Systematic review of current guideline appraisals performed
with the Appraisal of Guidelines for Research & Evaluation II instrument – a third of AGREE
II users apply a cut-off for guideline quality. Journal of Clinical Epidemiology, 95:120–7. doi:
10.1016/j.jclinepi.2017.12.009.
Ioannidis JPA (2018). Professional Societies Should Abstain From Authorship of Guidelines
and Disease Definition Statements. Circulation: Cardiovascular Quality and Outcomes,
11(10):e004889. doi: 10.1161/CIRCOUTCOMES.118.004889.
IOM (2011). Clinical Practice Guidelines We Can Trust (Consensus Report). Washington DC:
National Academies Press.
JCI (2016). Clinical Practice Guidelines: Closing the gap between theory and practice. A
White Paper by the Joint Commission International. Oak Brook (USA): Joint Commission
International.
Jensen CE et al. (2016). Systematic review of the cost-effectiveness of implementing guidelines
on low back pain management in primary care: is transferability to other countries possible?
BMJ Open, 6(6):e011042. doi: 10.1136/bmjopen-2016-011042.
Karbach UI et al. (2011). Physicians’ knowledge of and compliance with guidelines: an exploratory
study in cardiovascular diseases. Deutsches Arzteblatt International, 108(5):61–9.
Kastner M et al. (2015). Guideline uptake is influenced by six implementability domains for
creating and communicating guidelines: a realist review. Journal of Clinical Epidemiology,
68:498–509. doi: 10.1016/j.jclinepi.2014.12.013.
Khodambashi S, Nytrø Ø (2017). Reviewing clinical guideline development tools: features and
characteristics. BMC Medical Informatics and Decision Making, 17(1):132. doi: 10.1186/
s12911-017-0530-5.
Knai C et al. (2012). Systematic review of the methodological quality of clinical guideline
development for the management of chronic disease in Europe. Health Policy, 107(2–3):157–67.
Köpp J et al. (2012). Financing of Clinical Practice Guidelines (CPG) – what do we really know?
Poster presentation, G-I-N International Conference, Berlin 22–25 August 2012.
Kovacs E et al. (2018). Systematic Review and Meta-analysis of the Effectiveness of Implementation
Strategies for Non-communicable Disease Guidelines in Primary Health Care. Journal of
General Internal Medicine, 33(7):1142–54. doi: 10.1007/s11606-018-4435-5.
Kowalski SC et al. (2018). Development of rapid guidelines: 1. Systematic survey of current practices
and methods. Health Research Policy and Systems, 16(1):61. doi: 10.1186/s12961-018-0327-8.
Kredo T et al. (2016). Guide to clinical practice guidelines: the current state of play. International
Journal for Quality in Health Care, 28(1):122–8.
Legido-Quigley H et al. (2012). Clinical guidelines in the European Union: Mapping the regulatory
basis, development, quality control, implementation and evaluation across Member States.
Health Policy, 107(2–3):146–56.
Li H et al. (2018). A new scale for the evaluation of clinical practice guidelines applicability: de-
velopment and appraisal. Implementation Science, 13(1):61. doi: 10.1186/s13012-018-0746-5.
Liang L et al. (2017). Number and type of guideline implementation tools varies by guideline,
clinical condition, country of origin, and type of developer organization: content analysis of
guidelines. Implementation Science, 12(1):136. doi: 10.1186/s13012-017-0668-7.
Lohr KN (1994). Guidelines for clinical practice: applications for primary care. International
Journal for Quality in Health Care, 6(1):17–25.
Lu Y et al. (2017). Transparency ethics in practice: revisiting financial conflicts of interest
disclosure forms in clinical practice guidelines. PloS One, 12(8):e0182856. doi: 10.1371/
journal.pone.0182856.
Lugtenberg M, Burgers JS, Westert GP (2009). Effects of evidence-based clinical practice guidelines
on quality of care: a systematic review. Quality and Safety in Health Care, 18(5):385–92. doi:
10.1136/qshc.2008.028043.
McFarlane E et al. (2012). DECIDE: survey on awareness of NICE guidelines and their
implementation. Poster presentation, G-I-N International Conference, Berlin 22–25 August
2012.
Martínez García L et al. (2014). The validity of recommendations from clinical guidelines: a
survival analysis. Canadian Medical Association Journal, 186(16):1211–19.
Martínez García L et al. (2015). Efficiency of pragmatic search strategies to update clinical
guidelines recommendations. BMC Medical Research Methodology, 15:57. doi: 10.1186/
s12874-015-0058-2.
May C, Montori VM, Mair FS (2009). We need minimally disruptive medicine. BMJ, 339:b2803.
doi: 10.1136/bmj.b2803.
Michie S, Johnston M (2004). Changing clinical behaviour by making guidelines specific. BMJ,
328(7435):343–5.
Morciano C et al. (2016). Policies on Conflicts of Interest in Health Care Guideline Development:
a Cross-Sectional Analysis. PloS One, 11(11):e0166485. doi: 10.1371/journal.pone.0166485.
Morgan RL et al. (2018). Development of rapid guidelines: 3. GIN-McMaster Guideline
Development Checklist extension for rapid recommendations. Health Research Policy and
Systems, 16(1):63. doi: 10.1186/s12961-018-0330-0.
Mortimer D et al. (2013). Economic evaluation of active implementation versus guideline
dissemination for evidence-based care of acute low-back pain in a general practice setting.
PloS One, 8(10):e75647. doi: 10.1371/journal.pone.0075647.
Moynihan R et al. (2019). Undisclosed financial ties between guideline writers and pharmaceutical
companies: a cross-sectional study across 10 disease categories. BMJ Open, 9:e025864. doi:
10.1136/bmjopen-2018-025864.
Napierala H et al. (2018). Management of financial conflicts of interests in clinical practice
guidelines in Germany: results from the public database GuidelineWatch. BMC Medical
Ethics, 19(1):65. doi: 10.1186/s12910-018-0309-y.
Neumann I et al. (2016). The GRADE evidence-to-decision framework: a report of its testing
and application in 15 international guideline panels. Implementation Science, 11:93. doi:
10.1186/s13012-016-0462-y.
NICE (2014). Developing NICE guidelines: the manual. Process and methods guides. London:
National Institute for Health and Care Excellence.
NICE (2018). NICE to retire Guidance app. Available at: https://www.nice.org.uk/news/article/
nice-to-retire-guidance-app, accessed 13 April 2019.
Nothacker M et al. (2016). Reporting standards for guideline-based performance measures.
Implementation Science, 11:6. doi: 10.1186/s13012-015-0369-z.
OECD (2015). Health Data Governance: Privacy, Monitoring and Research. Paris: OECD. Available
at: http://www.oecd.org/health/health-systems/health-data-governance-9789264244566-en.htm.
Oyinlola JO, Campbell J, Kousoulis AA (2016). Is real world evidence influencing practice?
A systematic review of CPRD research in NICE guidances. BMC Health Services Research,
16:299. doi: 10.1186/s12913-016-1562-8.
Perleth M et al. (2013). Health Technology Assessment: Konzepte, Methoden, Praxis für Wissenschaft
und Entscheidungsfindung. MWV Medizinisch Wissenschaftliche Verlagsgesellschaft.
Qaseem A et al. (2012). Guidelines International Network: Toward International Standards for
Clinical Practice Guidelines. Annals of Internal Medicine, 156:525–31.
Richter Sundberg L, Garvare R, Nyström ME (2017). Reaching beyond the review of research
evidence: a qualitative study of decision making during the development of clinical practice
guidelines for disease prevention in healthcare. BMC Health Services Research, 17(1):344. doi:
10.1186/s12913-017-2277-1.
Richter-Sundberg L et al. (2015). Addressing implementation challenges during guideline
development – a case study of Swedish national guidelines for methods of preventing disease.
BMC Health Services Research, 15:19. doi: 10.1186/s12913-014-0672-4.
Roberts ET et al. (2016). Evaluating Clinical Practice Guidelines Based on Their Association with
Return to Work in Administrative Claims Data. Health Services Research, 51(3):953–80. doi:
10.1111/1475-6773.12360.
Sackett DL et al. (1996). Evidence based medicine: what it is and what it isn’t. BMJ, 312(7023):71–2.
Schipper K et al. (2016). Strategies for disseminating recommendations or guidelines to patients:
a systematic review. Implementation Science, 11(1):82. doi: 10.1186/s13012-016-0447-x.
Schünemann HJ et al. (2014). Guidelines 2.0: systematic development of a comprehensive checklist
for a successful guideline enterprise. Canadian Medical Association Journal, 186(3):E123–42.
doi: 10.1503/cmaj.131237.
262 Improving healthcare quality in Europe
Woolf S et al. (2012). Developing clinical practice guidelines: types of evidence and outcomes;
values and economics, synthesis, grading, and presentation and deriving recommendations.
Implementation Science, 7:61. doi: 10.1186/1748-5908-7-61.
Worrall G, Chaulk P, Freake D (1997). The effects of clinical practice guidelines on patient outcomes
in primary care: a systematic review. Canadian Medical Association Journal, 156(12):1705–12.
Wright A et al. (2010). Best Practices in Clinical Decision Support: the Case of Preventive Care
Reminders. Applied Clinical Informatics, 1(3):331–45.
Zhang Y et al. (2017). Using patient values and preferences to inform the importance of health
outcomes in practice guideline development following the GRADE approach. Health and
Quality of Life Outcomes, 15(1):52. doi: 10.1186/s12955-017-0621-0.
Chapter 10
Audit and Feedback as a Quality Strategy
Summary
[Figure: stages of the audit cycle ("preparing for audit", "audit")]
Table 10.1 Overview of prominent audit and feedback programmes in Europe (excerpt)

Finland — Conmedic (31 health centres which provide services for one fifth of the population)
• Focus of programme: prevention, acute and chronic care; quality dimension: effectiveness; types of providers: primary health centres
• Audited information (data sources): electronic patient records; indicators: process
• Type of feedback: feedback report and web page for potential exchange between health centres

Germany — External Quality Assurance for Inpatient Care (esQS)/Federal Joint Committee
• Focus of programme: 30 acute care areas (2014); quality dimensions: effectiveness, patient safety; types of providers: inpatient care
• Audited information (data sources): specifically documented quality assurance data, administrative data; indicators: 416 process and outcome indicators (33% risk-adjusted) (2014)
• Type of feedback: benchmark report to hospital (with comparison to national average performance)
• Comments: mandatory programme, combined with peer review process in case of potential quality problems
developed on a voluntary basis in the 1970s and 1980s. From 1991 the UK was the first country to require hospital doctors to participate in audit. Within
a few years other health professionals were required to join multiprofessional
clinical audits. In Germany and France audit and feedback initiatives emerged
mostly in the 1990s. Table 10.1 provides an overview of some prominent
audit and feedback programmes in Europe.
In the UK various actors are active in the field of audit and feedback. The National
Clinical Audit Programme is run by the Healthcare Quality Improvement
Partnership (HQIP). National audits are performed for about 30 clinical condi-
tions, including acute (for example, emergency laparotomy) and chronic condi-
tions (for example, diabetes). These audits focus mostly on specialist inpatient and
outpatient service providers, who are assessed with regard to all three dimensions
of quality: effectiveness, patient safety and patient experience. Audits rely on
various data sources, and assess performance in relation to numerous indicators
of structures, processes and outcomes. Benchmark reports are provided to local
trusts and annual reports are published for each of the clinical conditions. In the
area of primary care the most important national audit programme is the Quality
and Outcomes Framework (QOF). However, the main purpose of QOF is to
distribute financial incentives (representing around 15% of GP income), and
indicators were developed externally. GPs are also required to undertake audit
and feedback as part of their revalidation scheme, which was launched in 2012.
Furthermore, medical students are taught audit, and there is some teaching
for GP trainees. Finally, there is a National Quality Improvement and Clinical
Audit Network (NQICAN), which brings together 15 regional clinical audit/
effectiveness networks from across England. NQICAN supports staff working
in quality improvement and clinical audit in different health and social care
organizations, providing practical guidance and support.
In the Netherlands audit and feedback activities historically started in primary
care and were initiated by GPs. More recently, audit and feedback has also expanded to secondary inpatient and outpatient care and has become more embedded in broader
quality assurance initiatives. A Dutch Institute for Clinical Audit (DICA) was
set up in 2009 and medical specialist societies use DICA to measure quality and
communicate about it. DICA runs registers for cancer patients (colorectal, breast,
upper gastrointestinal and lung), collects patient-reported outcome measures, and
provides feedback reports to professionals. Almost all hospitals have established
quality improvement strategies based on feedback reports from DICA, which also
allow them to measure improvements over time. In addition, a comprehensive
clinical and organizational audit is part of primary care practice accreditation.
Furthermore, almost all GPs belong to one of the 600 educational pharmacotherapy groups in the country, each consisting of GPs and pharmacists. These
groups use audits of prescribing data as a starting point for discussions.
In Germany audit and feedback efforts also exist at several levels of the health-
care system. The most important audit and feedback initiative is the mandatory
external quality assurance programme introduced for all hospitals in 2001. It is
the responsibility of the Federal Joint Committee, which includes representatives
of providers (for example, hospitals) and sickness funds. By 2014 the programme
covered 30 specific areas of inpatient care (for example, cholecystectomy, or
community-acquired pneumonia), which were assessed on the basis of more
than 400 process and outcome indicators, including also patient-safety indicators
(AQUA, 2014). Providers have to comply with specific quality documentation
requirements in order to provide data for the audits. Collected data are analysed
and forwarded to professional expert sections, which may initiate a peer review
process if the data suggest potential quality problems. Public disclosure of data
was introduced in 2007. Smaller programmes cover, among other things, disease management programmes (DMPs) and ambulatory dialysis. In addition, professional associations may have their own audit systems, for example, for reproductive medicine, producing annual reports and providing feedback to providers.
In Italy the Emilia-Romagna region requires GPs to join a Primary Care Team.
GPs are mandated to collaborate and share information and to engage in
improving the quality of healthcare services provided to patients. Primary Care
Teams receive quality reports featuring structure, process and outcome indica-
tors computed on the basis of data from the regional healthcare administrative
database, an anonymous comprehensive and longitudinal database linkable at
the patient and provider level. The GPs in each team are asked to identify at
least one critical area of the report and initiate quality improvement activities
in their practice accordingly. The reports are not meant to be “punitive”; rather,
the reports are intended to promote teamwork and coordination, and encourage
clinical discussion. GPs seem to have a positive view of the reports (Maio et al.,
2012; Donatini et al., 2012).
In Finland audit and feedback is used mostly in health centres. One fifth of all health centres participate in yearly quality measurements, organized by Conmedic, a primary care quality consortium, and based on two-week samples of patient treatment. Quality measurement always includes indicators for diabetes
and cardiovascular care but also several other areas of care, which may vary
from year to year based on decisions of health centres. Measured care areas have
included fracture prevention, smoking cessation, interventions for risky alcohol
consumption, dementia and self-care. The purpose of the audit and feedback
is to inform local quality improvement activities. In addition, all intensive care
units collect information on all patients, and the information is reported back
to the professionals. Both audit and feedback systems started in 1994; participation is voluntary, driven by health professionals, and audit data are fed back at group level. Another interesting initiative in Finland is the evidence-based
Country: USA (49%), UK or Ireland (15%), Canada (8%), Australia or New Zealand (7%), other (21%)
Setting: Outpatient (67%), inpatient (26%), other/unclear (7%)
Intervention: Audit and feedback alone (35%), with clinician education (34%), with educational outreach/academic detailing (20%), with clinician reminders or decision support (12%)
Clinical topic: Diabetes/cardiovascular disease management (21%), laboratory testing/radiology (15%), prescribing (22%), other (41%)
Targeted professionals: Physicians (86%), nurses (11%), pharmacists (4%), other (2%)
Audited information
• Assessed indicators: Processes (79%), outcomes (14%), other (for example, costs) (32%)
• Focus of analysis: Individual patient cases (for example, patients who did not receive a particular test) (25%), aggregate of patient cases (for example, proportion not receiving guideline-consistent care) (81%)
• Level of analysis: Performance of individual provider (81%), performance of provider group (64%)
Feedback characteristics
• Format: Written (60%), verbal and written (23%), verbal (9%), unclear (8%)
• Source: Investigators/unclear (80%), supervisor/colleague (9%), employer (11%)
• Frequency: Once only (49%), less than monthly (26%), monthly (14%), weekly (8%)
• Lag time: Days (4%), weeks (16%), months (33%), years (2%), mix (1%), unclear (44%)
• Target: Individuals (51%), groups (18%), both (16%), unclear (14%)
• Comparison: Others’ performance (49%), guideline (11%), own previous performance (4%), other or combination (10%), unclear (26%)
Required change: Increase current behaviour (41%), decrease current behaviour (21%), mix or unclear (39%)
Instructions for change: Goal setting (8%), action planning (29%), both (3%), neither (60%)
Source: based on Ivers et al., 2012; Brehaut et al., 2016; and Colquhoun et al., 2017
providers. Feedback was usually provided in writing, and in almost half of the
studies it was provided only once. In more than half of the studies feedback was
provided to individuals and it mostly showed comparisons with the performance
of peers. In response to the feedback, professionals were required either to increase (41%) or to decrease (21%) a current behaviour, but they usually did not receive detailed instructions on how to change it.
Table 10.3 provides an overview of the main results of the meta-analyses per-
formed as part of the 2012 Cochrane review of audit and feedback trials. The
largest number of studies reported results comparing the compliance of profes-
sionals with desired practice using dichotomous outcomes (for example, the
proportion of professionals compliant with guidelines). These studies found a
small to moderate effect of audit and feedback. The median increase of compli-
ance with desired practice was 4.3% (interquartile range (IQR) 0.5% to 16%).
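Figures of this kind are simple distributional summaries across trials. As a minimal sketch with entirely hypothetical effect sizes (not the review's data), a median with interquartile range can be computed as follows:

```python
# Minimal sketch with hypothetical data: summarizing per-trial improvements
# in compliance (in percentage points) as a median with an interquartile
# range (IQR), the descriptive summary used in such reviews.
import statistics

def median_iqr(effects):
    """Return (median, first quartile, third quartile) of per-trial effects."""
    q1, _, q3 = statistics.quantiles(effects, n=4)  # three quartile cut points
    return statistics.median(effects), q1, q3

# Ten imaginary trial effect sizes (absolute improvement, percentage points):
trials = [0.2, 0.5, 1.0, 2.5, 4.0, 4.6, 8.0, 12.0, 16.0, 30.0]
med, q1, q3 = median_iqr(trials)
print(f"median {med:.1f} pp (IQR {q1:.1f} to {q3:.1f})")
```

Skewed distributions like this are exactly why such reviews report medians rather than means: the single large hypothetical effect (30.0) pulls the mean up to 7.9 while barely moving the median.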
Table 10.3 Main results of audit and feedback studies included in Ivers et
al., 2012
with previous qualitative work, which suggested that feedback with a punitive
tone is less effective than constructive feedback (Hysong, Best & Pugh, 2006).
Also, Feedback Intervention Theory (Kluger & DeNisi, 1996) suggests that
feedback directing attention towards acceptable and familiar tasks (as opposed
to feedback that generates emotional responses or causes deep self-reflection) is
more likely to lead to improvements.
Finally, Table 10.3 presents separately results from studies where audit and
feedback was carried out alone and results for interventions where audit and
feedback was combined with other interventions. Although combined interven-
tions appeared to have a larger median effect size than studies where audit and
feedback was implemented alone, the difference was not statistically significant.
These findings are consistent with other reviews (O’Brien et al., 2007; Forsetlund
et al., 2009; Squires et al., 2014), which found that there is no compelling evi-
dence that multifaceted interventions are more effective than single-component
ones. Therefore, it remains unclear whether it is worth the additional resources
and costs to add other interventions to audit and feedback.
The cost-effectiveness of audit and feedback in comparison with usual care has
not been evaluated in any review to date. In general, cost-effectiveness analyses
are rare in the quality improvement literature (Irwin, Stokes & Marshall, 2015).
However, it is clear that the costs of setting up an audit and feedback programme
will vary depending on how the intervention is designed and delivered. Local
conditions, such as the availability of reliable routinely collected data, have an
important impact on the costs of an intervention. If accurate data are readily
available, audit and feedback may prove to be cost-effective, even when the
effect size is small.
Only a few reviews investigating the effectiveness of audit and feedback compared with other quality improvement strategies are available. The Cochrane
review included 20 direct comparisons between audit and feedback and other
interventions but it remained unclear whether audit and feedback works better
than reminders, educational outreach, opinion leaders, other educational activi-
ties or patient-mediated interventions. One review compared the influence of
11 different quality improvement strategies, including audit and feedback, on
outcomes of diabetes care (Tricco et al., 2012). Findings consistently indicated
across different outcome measures (HbA1c, LDL levels, systolic and diastolic
blood pressure) that complex interventions, such as team changes, case man-
agement and promotion of self-management, are more effective than audit and
feedback in improving outcomes. However, cost-effectiveness was not considered
in this review. The greater effectiveness of complex, system-level interventions
compared to audit and feedback suggests that audit and feedback does not work
well if the desired patient-level outcomes are not exclusively under the control
of the provider receiving the feedback.
In summary, substantial evidence shows that audit and feedback improves care
across a variety of clinical settings and conditions; further trials comparing audit
and feedback with no intervention are not needed. However, given that the
effect size differs widely across different studies, it is important to focus future
research on understanding how audit and feedback systems can be designed and
implemented to maximize the desired effect.
Focus of intervention
• Care areas that are a priority for the organization and for patients and are perceived as important by the recipients of the feedback
• Care areas with high volumes, high risks (for patients or providers), or high costs
• Care areas where there is variation across healthcare providers/organizations in performance and where there is substantial room for improvement
• Care areas where performance on specific measures can be improved by providers because they are capable and responsible for improvements (for example, changing specific prescribing practices rather than changing the overall management of complex conditions)
• Care areas where clear high-quality evidence about best practice is available
Audit component
• Indicators include relevant measures for the recipient (this may include structure, processes and/or outcomes of care, including patient-reported outcomes) that are specific for the individual recipient
• Indicators are based on clear high-quality evidence (for example, guidelines) about what constitutes good performance
• Data are valid and perceived as credible by the report recipients
• Data are based on recent performance
• Data are about the individual/team’s own behaviour(s)
• Audit cycles are repeated at a frequency informed by the number of new patient cases with the condition of interest, such that new audits can capture attempted changes
Feedback component
• Presentation is multimodal, including either text and talking or text and graphical materials
• Delivery comes from a trusted, credible source (for example, supervisor or respected colleague), with open acknowledgement of potential limitations in the data
• Feedback includes a relevant comparator to allow the recipient to immediately identify if they are meeting the desired performance level
• A short, actionable declarative statement should describe the discrepancy between actual and desired performance, followed by detailed information for those interested
Targets, goals and action plan
• The target performance is provided; the target may be based on peer data or on a consensus-approved benchmark
• Goals for target behaviour are specific, measurable, achievable, relevant and time-bound
• A clear action plan is provided when discrepancies are evident
Organizational context
• Audit and feedback is part of a structured programme with a local lead
• Audit and feedback is part of an organizational commitment to a constructive, non-punitive approach to continuous quality improvement
• Recipients have or are provided with the time, skills and/or resources required to analyse and interpret the data available
• Teams are provided with the opportunity to discuss the data and share best practices
Sources: Copeland, 2005; Ivers, 2014a, 2014b; Brehaut, 2016; McNamara, 2016
achieve and that they would feel capable of improving within the measurement
interval. If goal-commitment and/or self-efficacy to achieve high performance in
the indicator are not present, co-interventions may be needed for the feedback
to achieve its desired results (Locke & Latham, 2002). It has been suggested
that the key source of information for audits should be the medical record and
routinely collected data from electronic systems (Akl et al., 2007). However,
medical records are not always available or suitable for extracting the data
needed, and it is necessary to pay attention to the reliability and validity of the
data as well as to the appropriateness of the sample. In particular, the validity of
records can vary depending on the type of information being extracted (Peabody
et al., 2004), especially in outpatient settings. In some cases clinical vignettes
or case reports have been shown to be a more valid source of information about
practice behaviours than records (Peabody et al., 2004; Stange et al., 1998). In
other cases, the use of patient-reported experience or outcome measures might
be a promising approach, so long as the measures are validated and perceived as
actionable (Boyce, Browne & Greenhalgh, 2014).
Concerning the feedback component, feedback is likely to be more effective when it is presented both verbally and in writing rather than through only one modality, and when the source (i.e. the person delivering the feedback) is a respected colleague rather than unknown investigators, employers or purchasers of care.
Source credibility matters a great deal (Ferguson, Wakeling & Bowie, 2014).
Audit and feedback schemes should always include clear targets and an action
plan specifying the steps necessary to achieve the targets (Gardner et al., 2010).
Ideal targets are commonly considered to be specific, measurable, achievable,
relevant and time-bound (Doran, 1981). In addition, feedback should include a
comparison with achievable but challenging benchmarks (for example, compar-
ing performance to the top 10% of peers) (Kiefe et al., 2001).
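A peer comparison of this kind can be sketched in a few lines. The following is an illustrative example with made-up provider scores; the function name and the simple top-10% rule are assumptions for illustration, loosely following the achievable-benchmark idea of Kiefe et al. (2001):

```python
# Illustrative sketch (hypothetical scores): an "achievable benchmark" taken
# as the mean performance of the top 10% of peer providers, against which
# each provider's own score is compared in a feedback report.
def achievable_benchmark(scores, top_fraction=0.10):
    """Mean score of the best-performing top_fraction of providers."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))  # keep at least one provider
    return sum(ranked[:k]) / k

# Made-up indicator scores (e.g. % of patients receiving guideline care):
providers = {"GP1": 55, "GP2": 60, "GP3": 62, "GP4": 68, "GP5": 70,
             "GP6": 74, "GP7": 78, "GP8": 83, "GP9": 90, "GP10": 95}
benchmark = achievable_benchmark(providers.values())
for name, score in providers.items():
    gap = benchmark - score  # positive gap: provider is below the benchmark
    print(f"{name}: {score} (benchmark {benchmark:.0f}, gap {gap:+.0f})")
```

Because the comparator is derived from peers rather than being a fixed guideline threshold, it rises with overall performance, which is what makes it challenging yet demonstrably achievable.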
Furthermore, audit and feedback requires a supportive organizational context.
This includes commitment to a constructive (i.e. non-punitive) approach to
continuous quality improvement and to iterative cycles of measurement at
regular, predictable intervals (Hysong, Best & Pugh, 2006). In addition, many
mediating structural factors may impact on care and on the likelihood of clini-
cal audit to improve care, such as staffing levels, staffing morale, availability of
facilities and levels of knowledge. Finally, the recipients may require skills and/
or resources to properly analyse and interpret the audited data and they need
to have the capacity to act upon it. This is especially true if the feedback does
not provide patient-level information with clear suggestions for clinical action
(meaning resources may be needed to conduct further analyses) or if the feedback
highlights indicators that require organizational changes to address (such that
change-management resources may be needed).
It is rarely possible to design each component of an audit and feedback scheme
in an optimal way. Therefore, it is useful to perceive the individual components
outlined in Table 10.4 as “levers” to be manipulated when working within setting-
specific constraints. For example, if circumstances dictate that the delivery of
feedback cannot be repeated in a reasonable timeframe, extra attention should
be paid to other aspects of the intervention, such as the source of the feedback.
In addition, co-interventions, tailored to overcome identified barriers and boost
facilitators, may help if feedback alone seems unlikely to activate the desired
response (Baker et al., 2010).
References
Akl EA et al. (2007). NorthStar, a support tool for the design and evaluation of quality improvement
interventions in healthcare. Implementation Science, 2:19.
AQUA (2014). Qualitätsreport 2014. Göttingen: Institut für Qualitätsförderung und Forschung
im Gesundheitswesen (AQUA).
Asch S et al. (2006). Who is at greatest risk for receiving poor-quality health care? New England
Journal of Medicine, 354(24):2617–19.
Baker R et al. (2010). Tailored interventions to overcome identified barriers to change: effect
on professional practice and health care outcomes. Cochrane Database of Systematic Reviews,
(3):CD005470.
Benjamin A (2008). Audit: how to do it in practice. BMJ, 336(7655):1241–5.
Brehaut J et al. (2016). Practice Feedback Interventions: 15 Suggestions for Optimizing Effectiveness.
Annals of Internal Medicine, 164(6):435–41.
Boyce MB, Browne JP, Greenhalgh J (2014). The experiences of professionals with using information
from patient-reported outcome measures to improve the quality of healthcare: a systematic
review of qualitative research. BMJ Quality and Safety, 23(6):508–18.
Brown B et al. (2016). A meta-synthesis of findings from qualitative studies of audit and feedback interventions. UK: National Institute for Health Research.
Colquhoun H et al. (2017). Reporting and design elements of audit and feedback interventions:
a secondary review. BMJ Quality and Safety, (1):54–60.
Copeland G (2005). A Practical Handbook for Clinical Audit. London: Clinical Governance
Support Team, Department of Health Publications.
Davis D et al. (2006). Accuracy of physician self-assessment compared with observed measures of
competence: a systematic review. Journal of the American Medical Association, 296(9):1094–102.
Donatini A et al. (2012). Physician Profiling in Primary Care in Emilia-Romagna Region, Italy: a
Tool for Quality Improvement. Population Health Matters (formerly Health Policy Newsletter),
25(1):10.
Doran G (1981). There’s a S.M.A.R.T. way to write management’s goals and objectives. AMA
Management Review, 70:35–6.
Dreischulte T et al. (2016). Safer Prescribing – A Trial of Education, Informatics, and Financial
Incentives. New England Journal of Medicine, 374:1053–64.
Ferguson J, Wakeling J, Bowie P (2014). Factors influencing the effectiveness of multisource
feedback in improving the professional practice of medical doctors: a systematic review. BMC
Medical Education, 14:76.
Forsetlund L et al. (2009). Continuing education meetings and workshops: effects on professional
practice and health care outcomes. Cochrane Database of Systematic Reviews, (2):CD003030.
Foy R et al. (2002). Attributes of clinical recommendations that influence change in practice
following audit and feedback. Journal of Clinical Epidemiology, 55(7):717–22.
Gardner B et al. (2010). Using theory to synthesise evidence from behaviour change interventions:
the example of audit and feedback. Social Science and Medicine, 70(10):1618–25.
Godin G et al. (2008). Healthcare professionals’ intentions and behaviours: a systematic review of
studies based on social cognitive theories. Implementation Science, 3:36.
Grol R et al. (2007). Planning and studying improvement in patient care: the use of theoretical
perspectives. Milbank Quarterly, 85(1):93–138.
Guthrie B et al. (2016). Data feedback and behavioural change intervention to improve primary
care prescribing safety (EFIPPS): multicentre, three arm, cluster randomised controlled trial.
BMJ (Clinical Research Edition), 354:i4079.
Guyatt G et al. (2008). GRADE: an emerging consensus on rating quality of evidence and strength
of recommendations. BMJ (Clinical Research Edition), 336(7650):924–6.
Hysong S, Best R, Pugh J (2006). Audit and feedback and clinical practice guideline adherence:
making feedback actionable. Implementation Science, 1:9.
Irwin R, Stokes T, Marshall T (2015). Practice-level quality improvement interventions in
primary care: a review of systematic reviews. Primary Health Care Research and Development,
16(6):556–77.
Ivers N, Grimshaw J (2016). Reducing research waste with implementation laboratories. Lancet,
388(10044):547–8.
Ivers N et al. (2012). Audit and feedback: effects on professional practice and healthcare outcomes.
Cochrane Database of Systematic Reviews, (6):CD000259.
Ivers N et al. (2014a). Growing literature, stagnant science? Systematic review, meta-regression
and cumulative analysis of audit and feedback interventions in health care. Journal of General
Internal Medicine, 29(11):1534–41.
Ivers N et al. (2014b). No more “business as usual” with audit and feedback interventions: towards
an agenda for a reinvigorated intervention. Implementation Science, 9:14.
Kiefe C et al. (2001). Improving quality improvement using achievable benchmarks for physician
feedback: a randomized controlled trial. Journal of the American Medical Association,
285(22):2871–9.
Kluger A, DeNisi A (1996). The Effects of Feedback Interventions on Performance: a Historical
Review, a Meta-Analysis, and a Preliminary Feedback Intervention Theory. Psychological
Bulletin, 119(2):254–84.
Locke E, Latham G (2002). Building a Practically Useful Theory of Goal Setting and Task
Motivation: a 35-Year Odyssey. American Psychologist, (9):705–17.
McNamara et al. (2016). Confidential physician feedback reports: designing for optimal impact
on performance. Rockville, Maryland: AHRQ.
Maio V et al. (2012). Physician Profiling in Primary Care in Emilia-Romagna Region, Italy: A
Tool for Quality Improvement. Population Health Matters (formerly Health Policy Newsletter),
25(1):Article 10.
O’Brien M et al. (2007). Educational outreach visits: effects on professional practice and health
care outcomes. Cochrane Database of Systematic Reviews, (4):CD000409.
Payne V, Hysong S (2016). Model depicting aspects of audit and feedback that impact physicians’
acceptance of clinical performance feedback. BMC Health Services Research, 16:260.
Peabody J et al. (2004). Measuring the quality of physician practice by using clinical vignettes: a
prospective validation study. Annals of Internal Medicine, 141(10):771–80.
Squires J et al. (2014). Are multifaceted interventions more effective than single-component
interventions in changing health-care professionals’ behaviours? An overview of systematic
reviews. Implementation Science, 9:152.
Stange K et al. (1998). How valid are medical records and patient questionnaires for physician
profiling and health services research? A comparison with direct observation of patient visits.
Medical Care, 36(6):851–67.
Tricco A et al. (2012). Effectiveness of quality improvement strategies on the management of
diabetes: a systematic review and meta-analysis. Lancet, 379(9833):2252–61.
Wennberg J (2014). Forty years of unwanted variation – and still counting. Health Policy,
114(1):1–2.
Chapter 11
Patient safety culture as a quality strategy
Summary
progress along all four areas of action but also ample room for improvement in
many countries, particularly regarding patient empowerment and workforce education. While the Council recommendations had raised awareness of safety at
political and provider levels, concrete action had not been triggered to the same
extent. At the same time, just over half of surveyed EU citizens thought it likely that
patients could be harmed by healthcare in their country. Regarding patient safety
culture specifically, an investigation of the use of patient safety culture surveys in
2008–2009 collected information on the use of patient safety culture instruments
in 32 European countries and recommended fitting tools for future use. There is no
newer overview of country practices in the EU, although an increasing volume of
work, mainly from the Netherlands, focuses on the effects of patient safety culture.
Patient safety
• Kohn, Corrigan & Donaldson, 2000: Patient safety relates to the reduction of risk and
is defined as “freedom from accidental injury due to medical care, or medical errors”.
• Emanuel et al., 2008: Patient safety is a discipline in the healthcare sector that applies
safety science methods towards the goal of achieving a trustworthy system of healthcare
delivery. Patient safety is also an attribute of healthcare systems; it minimizes the
incidence and impact of, and maximizes recovery from, adverse events.
• Slawomirski, Auraaen & Klazinga, 2017: Patient safety is the reduction of risk of
unnecessary harm associated with healthcare to an acceptable minimum; [this minimum
is defined based on] the collective notions of current knowledge, resources available and
the context in which care was delivered and weighed against the risk of non-treatment
or alternative treatment.
Errors and adverse events (from Kohn, Corrigan & Donaldson, 2000; see also Walshe, 2000)
Adverse events related to level of care:
• Primary care: adverse drug events/medication errors; diagnostic error/delayed diagnosis
• Long-term care: adverse drug events; pressure injury; falls
• Hospital care: healthcare-associated infections; venous thromboembolism; adverse drug events; pressure injury; wrong site surgery
General drivers of adverse events (unrelated to level of care):
• Communication and information deficits
• Insufficient skills/knowledge
• Inadequate organizational culture and misaligned incentives
The prevailing notion until that point was that adverse events were attributable
to human failure on the part of clinicians. Seminal work by James Reason in
1990 had already described upstream factors affecting safety outcomes in other
contexts. Reason’s “Swiss cheese” model of accidents occurring in an organiza-
tional setting (like a hospital) demonstrates how upstream errors can lead to
incidents downstream, i.e. at the point of care. The latter are considered “active errors”, as they occur at the point of human interface with a complex system; the former are “latent errors”, which represent failures of system design. Reason’s
safety management model (Fig. 11.1) traces the pathway from distant latent factors such as management decisions (for example, on the number of nurses on a patient ward), through contextual factors on the ward (for example, having no structured handover at shift changes), to active human factors (for example, forgetting a patient’s new medication). Adverse events can be linked to overuse,
underuse and misuse of healthcare services (Chassin & Galvin, 1998) as well as
a lack of care coordination (Ovretveit, 2011).
As Emanuel et al. (2008) point out, the propagation of this understanding in
the IOM report led to the realization that blame culture was pointless as long
as the underlying causes of errors remained unaddressed. Thus, To Err is Human
essentially catalysed the establishment of patient safety as a discipline and
shifted the focus from professional education alone to targeting organizational
and contextual factors as well. It spurred a considerable international response,
demonstrated by the creation of the WHO’s World Alliance
for Patient Safety in 2004 and a number of European initiatives. These included
the Safety Improvement for Patients in Europe (SImPatIE) project, which sys-
tematized nomenclatures and identified appropriate indicators and other safety
improvement tools, and the European Network for Patient Safety (EUNetPaS),
which introduced a collaborative network for a range of stakeholders in EU
Member States. In 2009 the Council of the European Union issued its first
Recommendation on Patient Safety, urging Member States to take action along
several axes. Following a sobering evaluation on the extent of its implementation
in 2014 (see below), the European Parliament adopted its “Resolution on safer
healthcare in Europe: improving patient safety and fighting antimicrobial resist-
ance (2014/2207(INI))”, which reiterated the importance of advancing patient
safety and urged Member States to redouble efforts even in light of financial
constraints. It stressed the importance of training and multidisciplinarity, but
also of adequate reporting systems and a unified, validated set of patient safety
indicators. It also highlighted the necessity of cross-country collaboration. Later
on, the European Union Network for Patient Safety and Quality of Care (PaSQ)
Joint Action aimed to advance these goals through knowledge exchange. It
united representatives of the European medical community and the institutional
partners involved in Patient Safety and Quality of Care in the Member States
of the European Union.
[Fig. 11.1: Reason’s safety management model. Organizational processes (design, build, operate, maintain, communicate), with their 10-12 general failure types, create violation-producing conditions in the task/environment; these foster violations by individuals, and when defences fail an accident results. Learning from incidents and accidents feeds back into the organization.]
Organizational level
• Clinical governance system and safety frameworks
• Monitoring, management and reporting systems for clinical incidents and patient complaints
• Digital safety solutions
• Human resource interventions
• Infection surveillance
• Hygiene, sterilization and antimicrobial stewardship
• Blood (management) protocols
• Operation room integration and display checklists
Clinical level
• Clinical care standards (including patient hydration and nutrition)
• Management programmes for medication, acute delirium and cognitive impairment
• Response to clinical deterioration
• Smart infusion pumps and drug administration systems
• Protocols for: error minimization, sterilization, barrier precautions, catheter insertion, ventilator-associated pneumonia minimization, perioperative medication, patient identification and procedure matching, and the prevention of venous thromboembolism, pressure injury and falls
• Procedural/surgical checklists
More recently, the ambition to learn and improve has shifted from learning
from incidents and adverse events (Safety I) to learning from the comparison of
“work-as-imagined” as described in guidelines and procedures, and “work-as-
done” in daily practices and ever-changing contexts (Safety II). Safety II is based
on complexity theory and the idea that the outcome is determined by interactions
between the various parts of the system rather than by a linear cause-and-effect
chain. In the same situation the outcome might be good or bad, and resilience to
daily changes should be recognized and trained in order to make healthcare safer
(Hollnagel, 2014; Dekker, 2011). This line of thought is new in healthcare, and
instruments for its implementation have still to be developed, alongside the change
in culture needed to think from the inside out during incident analyses.
[Figure: the safety improvement cycle. 1. Measure adverse event rate; 2. Get insight into causes; 3. Identify solutions; 4. Start improvement projects; 5. Evaluate impact.]
Source: adapted from WHO, 2008
Table 11.1 Member State action in the four domains of the 2009 Council
Recommendation, 2014
provider levels, concrete action had not been triggered to the same extent. This
led to the reiteration of the importance of continued attention in the European
Parliament’s Resolution of 2015 (see above).
A concurrent Eurobarometer survey found that just over half of surveyed EU
citizens thought it likely that patients could be harmed by healthcare in their
country – a slight increase since 2009. The share was slightly higher for hospital
care than for ambulatory care and the variation between countries was substantial.
The survey also recorded a significant increase in the proportion of adverse events
that were reported by those who experienced them or by their families – from
28% in 2009 to 46% in 2013. This can be interpreted to mean that pathways to
report adverse events are more accessible to care recipients and may also indicate
a change in culture. At the same time, the most likely outcome of reporting an
adverse event was lack of action (37%), with only one in five respondents receiving
an apology from the doctor or nurse and even fewer (17%) an explanation for
the error from the healthcare facility. These results further underlined the need
for continued action and attention to safety culture and patient empowerment
(European Commission, 2014a).
The 2017 OECD report on the economics of patient safety reviewed available
evidence and surveyed relevant policy and academic experts to identify cost-
effective interventions (Slawomirski, Auraaen & Klazinga, 2017). It found that
approximately 15% of hospital expenditure and activity in OECD countries
was attributable to addressing safety failures, while the overall cost of adverse
events would also need to consider indirect elements, such as productivity loss
for patients and carers. Furthermore, the report illustrates that most of the
financial burden is linked to a limited number of common adverse events,
including healthcare-associated infections (HAI), venous thromboembolism (VTE),
pressure ulcers, medication errors, and wrong or delayed diagnoses. Accordingly,
the most cost-effective safety interventions would target those occurrences first,
and the OECD report summarizes sound evidence to support this notion.
However, bearing in mind the complex and dynamic causes of patient harm
(as described earlier in this chapter), it is not surprising that the importance of
system- and organizational-level interventions was highlighted in the report and
that such approaches were short-listed as “best buys” to cost-effectively address
safety overall, including professional education and training, clinical governance
systems, safety standards, and person and patient engagement strategies. Policy
and academic experts surveyed for the purposes of the report also highlighted
the critical contribution of developing a culture conducive to safety to anchor
individual interventions. In accordance with these results, Box 11.2 summarizes
information on incident reporting systems and root cause analysis, which are
indispensable for framing necessary action at the organizational level. The second
part of the chapter focuses on safety culture as a quality improvement strategy.
An incident is an unexpected and unwanted event during the healthcare process which could have
harmed or did harm the patient. In various countries national, regional or local incident reporting
systems have been introduced (Smits et al., 2009; Wagner et al., 2016). The first reporting systems
in healthcare were introduced in the 2000s, following the examples of other high-risk industries
such as aviation and nuclear power. Analysing incidents as well as near misses can provide valuable
information for detecting patient safety problems and might help professionals to prevent harming
patients in the future and improve quality of care. Incident reporting systems are considered a fairly
inexpensive although incomplete means for monitoring patient safety and, when combined with
systematic interventions, potentially effective in reducing preventable adverse events (Simon et al.,
2005). Other methods of incident tracking include morbidity and mortality conferences and autopsy,
malpractice claims analysis, administrative data analysis, chart review, applications embedded
in electronic medical records, observation of patient care and clinical surveillance including staff
interviews (Thomas & Petersen, 2003). Some of these methods are better suited to detecting
active errors and others to detecting latent ones.
A well-known national reporting system is the National Reporting and Learning System in the UK
(Howell et al., 2015). Established in 2003, it received over a million reports in a period of five years,
mainly from acute care hospitals. In 2010 it became mandatory for National Health Service (NHS)
trusts in England to report all serious patient safety incidents to the central Care Quality Commission.
As a result of considerations about the extent to which the results of national reporting systems
are applicable to hospital units, the very places where changes and improvements have to be
implemented, the government and healthcare providers in the Netherlands have opted for a local
and decentralized unit-based approach. The advantage of a centralized system is the possibility to
discover rare but important problems (Dückers et al., 2009), whereas decentralized reporting systems
might increase the sense of urgency because all reported incidents have actually happened in a
recognizable context. Indeed, national figures on incident types and root causes do not necessarily
reflect the risks of a specific hospital unit or unit type. Team engagement in improvement projects
may suffer if the reporting does not match their practice needs (Wagner et al., 2016). The European
Commission’s Reporting and Learning Subgroup published an overview of reporting systems in
European countries in 2014 (European Commission, 2014b).
Despite the considerable effort that has been put into establishing incident reporting and learning
systems in healthcare in many countries and settings, under-reporting of incidents is estimated
to be considerable (see, for example, Archer et al., 2017). Barach & Small (2000) put it at 50% to
96% annually in the US, a figure that is still used as an orientation point
today. Nevertheless, there is evidence that the willingness to report has increased over the years in
hospitals (Verbeek-van Noord et al., 2018). Common barriers to incident reporting among doctors
stem from negative attitudes, a non-stimulating culture or a perceived inability to fulfil the related
tasks. They include lack of clarity about what constitutes an incident, fear of reprisal, unfavourable
working conditions involving colleagues and supervisors, a code of silence (reporting seen as a sign of
lack of loyalty), loss of reputation, additional work on user-unfriendly platforms, and lack
of feedback or action when incidents are reported (Martowirono et al., 2012). On the other hand,
features of an organization that encourage incident reporting are: flat hierarchy, staff participation
in decision-making, risk management procedures, teamwork, and leadership ability and integrity
(Firth-Cozens, 2004). Research shows that mandatory reporting may result in lower error rates
than voluntary reporting, while the reporting profession (for example, nurses vs. physicians) and
the mode of reporting (paper-based vs. web-based) may also play a role in how effective reporting
systems are. An increase in incident reporting is positively correlated with a more positive safety
culture (Hutchinson et al., 2009). Reporting should be non-punitive, confidential or anonymous,
independent, timely, systems oriented and responsive (see also Leape, 2002).
Root cause analysis (RCA) can give insight into the origination of incidents which have already
happened and have been reported; it is a method to analyse adverse events and to generate
interventions, in order to prevent recurrence. RCA is generally employed to uncover latent errors
underlying an adverse event (see Fig. 11.1) and consists of four major steps. First, a team of managers,
physicians and/or experts from the particular field, together with representatives of the staff involved,
collects relevant data concerning the event. Second, the RCA team organizes and analyses possible
causal factors using a root-cause tree or a sequence diagram with logic tests describing the events
leading up to an occurrence and the conditions surrounding them; there is rarely just one causal
factor, as events are usually the result of a combination of contributors. Third, the team identifies
the underlying reason for each causal factor, so that all problems surrounding the occurrence can be
addressed. Finally, the RCA team generates recommendations
for changes in the care process. Clearly, the effectiveness of RCA depends on the actions taken
based on its outputs. If the analysis reveals an underlying problem, solutions need to be discussed
and implemented, a process which can be as difficult as any requiring that professionals change
their behaviour. Thus, the impact of RCA on patient safety outcomes is indirect and difficult to
measure. Nevertheless, insights from RCA can help to prioritize improvement areas and solutions.
Overall, an easily accessible, comprehensive reporting system combined with awareness of and
training in RCA are prerequisites for learning and safety improvements.
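The root-cause tree used in RCA can be illustrated with a minimal sketch (assuming a hypothetical data model; none of the class or field names come from this chapter): each node is a contributing factor classified as active or latent, and walking the tree downwards from the adverse event collects the latent, system-level causes that recommendations typically target.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    """One node in a root-cause tree: a contributing factor to an incident."""
    description: str
    kind: str  # "active" (point of care) or "latent" (system design)
    causes: list["Factor"] = field(default_factory=list)

def latent_causes(factor: Factor) -> list[str]:
    """Depth-first walk collecting the latent (system-level) factors."""
    found = []
    if factor.kind == "latent":
        found.append(factor.description)
    for cause in factor.causes:
        found.extend(latent_causes(cause))
    return found

# Hypothetical example: a medication error with several contributors.
event = Factor("wrong dose administered", "active", [
    Factor("no structured handover at shift change", "latent"),
    Factor("nurse unfamiliar with new drug", "active", [
        Factor("understaffing decided at management level", "latent"),
    ]),
])

print(latent_causes(event))
# ['no structured handover at shift change',
#  'understaffing decided at management level']
```

The design choice mirrors the text: active errors sit near the point of care, while the walk surfaces the latent factors further up the tree, which is where RCA recommendations should intervene.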
For the proactive, prospective identification of potential process failures, Failure Mode Effects Analysis
(FMEA) was developed for the aviation industry and has also been used in a healthcare context. Its aim
is to look at all possible ways in which a process can fail, analyse the associated risks and recommend
changes to the process in order to prevent adverse events. A few variations exist, like Failure Mode
Effects and Criticality Analysis (FMECA) and Healthcare Failure Mode Effects Analysis (HFMEA).
Despite the importance of a proactive approach, FMEA in its entirety was considered cumbersome
to implement at clinical or organizational level, and the validity of its results has been questioned
(Shebl, Franklin & Barber, 2012; Shebl et al., 2012). However, it was recognized that it may have
potential as a tool for aiding multidisciplinary groups in mapping and understanding a process of care
(Shebl et al., 2012). A newly developed risk identification framework (Simsekler, Ward & Clarkson,
2018), which incorporates FMEA elements, still needs to be tested for usability and applicability.
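The core of classic FMEA can be sketched briefly: for each potential failure mode in a process, a team scores severity, occurrence and detectability (conventionally 1-10 each) and multiplies them into a risk priority number (RPN) used to rank which failure modes to address first. The failure modes and scores below are invented for illustration, and healthcare variants such as HFMEA use different scoring schemes.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (easily detected) .. 10 (virtually undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: higher means higher priority."""
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for a medication process.
modes = [
    FailureMode("wrong patient identified at drug administration", 9, 2, 4),
    FailureMode("infusion pump programmed with wrong rate", 8, 3, 6),
    FailureMode("handwritten prescription misread", 6, 5, 5),
]

# Rank failure modes so the team addresses the riskiest steps first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(m.rpn, m.description)
```

Run on the example data, the ranking puts the misread prescription (RPN 150) ahead of the pump error (144) and the identification error (72), showing how a severe but rare and detectable failure can rank below a moderate but frequent one.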
Comparative work showed similarities between the first two instruments and
concluded that survey length, content, sensitivity to change and the ability to
benchmark should determine instrument choice (Etchegaray & Thomas, 2012).
There is no newer overview of country practices in the EU. Most research on
the implementation and effectiveness of patient safety culture in Europe comes
from the Netherlands (see below).
[Figure: layers of safety culture, from visible behaviours at the surface, through attitudes and opinions, to the underlying organizational mission, leadership strategies, norms, history, legends and heroes.]
example, Sølvtofte, Larsen & Laustsen, 2017). A systematic review showed that
classroom-based team training can improve patient safety culture (Verbeek-van
Noord et al., 2014).
Weaver et al. (2013) identified and evaluated interventions to foster patient
safety culture in acute care settings. Most of the included studies examined team
training or communication initiatives; executive or interdisciplinary walk-rounds and
multicomponent, unit-based interventions were also investigated. In all, 29 studies
reported some improvement in safety culture (or patient outcomes, see above),
but considerable heterogeneity was observed and the strength of evidence was
low. Thus, the review only tentatively concluded that interventions can improve
perceptions of safety culture and potentially reduce patient harm. Evidence on
interventions to enhance safety culture in primary care was largely also incon-
clusive due to limited evidence quality (Verbakel et al., 2016). A Danish study
found that strengthening leadership can act as a significant catalyst for patient
safety culture improvement. To broaden knowledge and strengthen leadership
skills, a multicomponent programme consisting of academic input, exercises,
reflections and discussions, networking and action learning was implemented
among clinical leaders. The proportion of frontline staff with positive attitudes
improved by approximately five percent for five of seven patient safety culture
dimensions over time. Moreover, frontline staff became more positive on almost
all cultural dimensions investigated (Kristensen et al., 2015a).
A survey of healthcare professionals, on the other hand, found them to be
positive about feedback on patient safety culture and its effect on stimulating
improvement, especially when it is understandable and tailored to specific hospital
departments (Zwijnenberg et al., 2016). A different survey demonstrated that
the perception of safety climate differs between professional groups (higher for
clinical leaders compared to frontline clinicians) and suggested that the imple-
mentation of quality management systems can be supportive in fostering shared
values and behaviours. As perceptions have also been shown to differ among
professionals in primary care (Verbakel et al., 2014), tailored approaches seem
reasonable overall. Organizational-level initiatives aimed at building a positive
culture may include training and development, team-building and commu-
nication strategies, inclusive management structures, staff culture surveys and
safety awards (Slawomirski, Auraaen & Klazinga, 2017). An example of such a
multifaceted approach is the TeamSTEPPS system developed by the Agency for
Healthcare Research and Quality in the USA (AHRQ, 2018).
As a training component for more safety-sensitive care, curricula based on
the Crew Resource Management (CRM) concept created for aviation have been
adopted in healthcare as well (see, for example, McConaughey, 2008; Verbeek-
van Noord et al., 2014; Eddy, Jordan & Stephenson, 2016). CRM promotes
and reinforces situational awareness and team learning by emphasizing six key
areas: managing fatigue; creating and managing teams; recognizing adverse
situations (red flags); cross-checking and communication; decision-making;
and performance feedback. Classroom and simulation-based team trainings of
this kind are expected to improve cooperation, communication and handovers
between professionals. However, evidence on their implementation shows that
results might be time-consuming to achieve (Sax et al., 2009). Overall, the
importance of teamwork is gaining recognition along with the impact of team
training on attitudes of healthcare providers and team communication (see, for
example, Frankel et al., 2017).
References
AHRQ (2018). Hospital Survey on Patient Safety Culture. Rockville, Maryland: Agency for
Healthcare Research and Quality.
Archer S et al. (2017). Development of a theoretical framework of factors affecting patient safety
incident reporting: a theoretical review of the literature. BMJ Open, 7(12):e017155.
Barach P, Small SD (2000). Reporting and preventing medical mishaps: lessons from non-medical
near miss reporting systems. BMJ, 320(7237):759–63.
Bates D, Singh H (2018). Two Decades Since To Err is Human: an Assessment of Progress and
Emerging Priorities in Patient Safety. Health Affairs, 37(11):1736–43.
Brennan TA et al. (1991). Incidence of adverse events and negligence in hospitalized patients:
results of the Harvard Medical Practice Study I. New England Journal of Medicine, 324:370–7.
Chassin MR, Galvin RW (1998). The urgent need to improve health care quality. Institute of
Medicine National Roundtable on Health Care Quality. Journal of the American Medical
Association, 280(11):1000–5.
Dekker S (2011). Drift into failure. From hunting broken components to understanding complex
systems. Farnham: Ashgate.
Dückers M et al. (2009). Safety and risk management interventions in hospitals: a systematic
review of the literature. Medical Care Research and Review, 66(6 Suppl):90S–119S.
Eddy K, Jordan Z, Stephenson M (2016). Health professionals’ experience of teamwork education
in acute hospital settings: a systematic review of qualitative literature. JBI Database of Systematic
Reviews and Implementation Reports, 14(4):96–137.
Emanuel L et al. (2008). What Exactly Is Patient Safety? In: Henriksen K et al. (eds.). Advances
in Patient Safety: New Directions and Alternative Approaches (Vol. 1: Assessment). Rockville,
Maryland: Agency for Healthcare Research and Quality.
Etchegaray JM, Thomas EJ (2012). Comparing two safety culture surveys: safety attitudes
questionnaire and hospital survey on patient safety. BMJ Quality and Safety, 21(6):490–8.
European Commission (2014a). Special Eurobarometer 411: “Patient Safety and Quality of Care”.
Wave EB80.2. Brussels: European Commission/TNS Opinion & Social.
European Commission (2014b). Key findings and recommendations on Reporting and learning
systems for patient safety incidents across Europe. Report of the Reporting and learning
subgroup of the European Commission PSQCWG. Brussels: European Commission.
Firth-Cozens J (2004). Organisational trust: the keystone to patient safety. Quality and Safety in
Health Care, 13(1):56–61.
Frankel A et al. (2003). Patient safety leadership WalkRounds. Joint Commission Journal on Quality
and Safety, 29(1):16–26.
Frankel A et al. (2017). A Framework for Safe, Reliable, and Effective Care. White Paper.
Cambridge, Mass.: Institute for Healthcare Improvement and Safe & Reliable Healthcare.
Health Foundation (2011). Does improving safety culture affect outcomes? London: Health
Foundation.
Hollnagel E (2014). Safety-I and Safety-II. The past and future of safety management. Farnham:
Ashgate.
Howell AM et al. (2015). Can Patient Safety Incident Reports Be Used to Compare Hospital
Safety? Results from a Quantitative Analysis of the English National Reporting and Learning
System Data. PloS One, 10(12):e0144107.
Hutchinson A et al. (2009). Trends in healthcare incident reporting and relationship to safety and
quality data in acute hospitals: results from the National Reporting and Learning System.
BMJ Quality & Safety, 18:5–10.
Kohn LT, Corrigan JM, Donaldson MS (eds.) (2000). To Err is Human: Building a Safer Health
System. Washington, DC: National Academies Press.
Kristensen S et al. (2015a). Strengthening leadership as a catalyst for enhanced patient safety
culture: a repeated cross-sectional experimental study. BMJ Open, 6:e010180.
Kristensen S et al. (2015b). Quality management and perceptions of teamwork and safety climate
in European hospitals. International Journal for Quality in Health Care, 27(6):499–506.
Leape LL (2002). Reporting of adverse events. New England Journal of Medicine, 347(20):1633–8.
Leape LL et al. (1991). The nature of adverse events in hospitalized patients. Results of the Harvard
Medical Practice Study II. New England Journal of Medicine, 324:377–84.
McConaughey E (2008). Crew resource management in healthcare: the evolution of teamwork
training and MedTeams. Journal of Perinatal and Neonatal Nursing, 22(2):96–104.
Martowirono K et al. (2012). Possible solutions for barriers in incident reporting by residents.
Journal of Evaluation in Clinical Practice, 18(1):76–81.
Meddings J et al. (2017). Evaluation of the association between Hospital Survey on Patient
Safety Culture (HSOPS) measures and catheter-associated infections: results of two national
collaboratives. BMJ Quality and Safety, 26(3):226–35.
Ovretveit J (2011). Widespread focused improvement: lessons from international health for
spreading specific improvements to health services in high-income countries. International
Journal for Quality in Health Care, 23(3):239–46.
Patankar MS, Sabin EJ (2010). The Safety Culture Perspective. In Salas E and Maurino D (eds.).
Human Factors in Aviation (Second Edition). San Diego: Academic Press.
Reason J (1990). Human error. New York: Cambridge University Press.
Reason J, Hollnagel E, Paries J (2006). Rethinking the “Swiss Cheese” Model of Accidents. EEC
Note No. 13/06. Brussels: European Organization for the Safety of Air Navigation.
Sammer CE et al. (2010). What is patient safety culture? A review of the literature. Journal of
Nursing Scholarship, 42(2):156–65.
Sax HC et al. (2009). Can aviation-based team training elicit sustainable behavioral change?
Archives of Surgery, 144(12):1133–7.
Schwendimann R et al. (2018). The occurrence, types, consequences and preventability of in-
hospital adverse events – a scoping review. BMC Health Services Research, 18(1):521.
Scott T et al. (2003). The quantitative measurement of organizational culture in health care: a
review of the available instruments. Health Services Research, 38(3):923–45.
Sexton JB et al. (2006). The Safety Attitudes Questionnaire: psychometric properties, benchmarking
data, and emerging research. BMC Health Services Research, 6:44.
Shebl N, Franklin B, Barber N (2012). Failure mode and effects analysis outputs: are they valid?
BMC Health Services Research, 12:150.
Shebl N et al. (2012). Failure Mode and Effects Analysis: views of hospital staff in the UK. Journal
of Health Services Research & Policy, 17(1):37–43.
Simon A et al. (2005). Institutional medical incident reporting systems: a review. Edmonton,
Alberta: Heritage Foundation for Medical Research.
Simsekler M, Ward JR, Clarkson PJ (2018). Design for patient safety: a systems-based risk
identification framework. Ergonomics, 61(8):1046–64.
Slawomirski L, Auraaen A, Klazinga N (2017). The economics of patient safety. Paris: OECD
Publishing.
Smits M et al. (2009). The nature and causes of unintended events reported at ten emergency
departments. BMC Emergency Medicine, 9:16.
Smits M et al. (2012). The role of patient safety culture in the causation of unintended events in
hospitals. Journal of Clinical Nursing, 21(23–24):3392–401.
Sølvtofte AS, Larsen P, Laustsen S (2017). Effectiveness of Patient Safety Leadership WalkRounds™
on patient safety culture: a systematic review protocol. JBI Database of Systematic Reviews and
Implementation Reports, 15(5):1306–15.
Thomas EJ, Petersen LA (2003). Measuring errors and adverse events in health care. Journal of
General Internal Medicine, 18(1):61–7.
University of Manchester (2006). Manchester Patient Safety Framework (MaPSaF). Manchester:
University of Manchester.
Verbakel NJ et al. (2014). Exploring patient safety culture in primary care. International Journal
for Quality in Health Care, 26(6):585–91.
Verbakel NJ et al. (2015). Effects of patient safety culture interventions on incident reporting in
general practice: a cluster randomised trial. British Journal of General Practice: the Journal of
the Royal College of General Practitioners, 65(634):e319–29.
Verbakel NJ et al. (2016). Improving Patient Safety Culture in Primary Care: A Systematic Review.
Journal of Patient Safety, 12(3):152–8.
Verbeek-van Noord I et al. (2014). Does classroom-based Crew Resource Management training
improve patient safety culture? A systematic review. Sage Open Medicine, 2:2050312114529561.
Verbeek-van Noord I et al. (2018). A nation-wide transition in patient safety culture: a multilevel
analysis on two cross-sectional surveys. International Journal for Quality in Health Care. doi:
10.1093/intqhc/mzy228 [Epub ahead of print].
Wagner C et al. (2016). Unit-based incident reporting and root cause analysis: variation at three
hospital unit types. BMJ Open, 6(6):e011277.
Walshe K (2000). Adverse events in health care: issues in measurement. Quality in Health Care,
9(1):47–52.
Weaver SJ et al. (2013). Promoting a culture of safety as a patient safety strategy: a systematic
review. Annals of Internal Medicine, 158(5 Pt 2):369–74.
WHO (2008). World Alliance for Patient Safety: Research for Patient Safety – Better Knowledge
for Safer Care. Geneva: World Health Organization.
WHO (2013). Exploring patient participation in reducing healthcare-related safety risks.
Copenhagen: WHO Regional Office for Europe.
Zwijnenberg NC et al. (2016). Healthcare professionals’ views on feedback of a patient safety
culture assessment. BMC Health Services Research, 16:199.
Chapter 12
Clinical pathways
as a quality strategy
Summary
“A care pathway is a complex intervention for the mutual decision making and organisation of care
processes for a well-defined group of patients during a well-defined period. Defining characteristics
of care pathways include: An explicit statement of the goals and key elements of care based on
evidence, best practice, and patients’ expectations and their characteristics; the facilitation of
communication among team members and with patients and families; the coordination of the
care process by coordinating the roles and sequencing the activities of the multidisciplinary care
team, patients and their relatives; the documentation, monitoring, and evaluation of variances
and outcomes, and the identification of the appropriate resources” (EPA, 2018a).
The EPA definition lacks specificity, i.e. it does not allow CPWs to be distin-
guished from similar concepts or strategies. Such a distinction is necessary when
addressing the issue of effectiveness of the strategy.
Independent of the terminology used, the concept of CPWs is defined by the
characteristics and content of the strategy. Based on a synthesis of published defi-
nitions and descriptions, an operational definition of CPWs has been proposed
(Kinsman et al., 2010; Rotter et al., 2010; Rotter et al., 2013).
Therefore, a CPW is a structured multidisciplinary care plan with the following
characteristics:
bedside for all the health professionals involved (Campbell et al., 1998; Kinsman
et al., 2010). (For more information on professionals’ education, see Chapter 5.)
As an example, a clinical guideline recommendation for an outpatient rehabilita-
tion programme will be implemented locally in a clinical pathway in much more
detail, such as when to submit the referral and to whom it should be submitted.
Thus CPWs aim to standardize clinical processes of care within the unique cul-
ture and environment of the healthcare institution. As a result of standardizing
clinical practice according to evidence-based clinical practice guidelines, CPWs
have the potential to reduce treatment errors and improve patient outcomes.
An example of a CPW for the management of elderly inpatients with malnutri-
tion is provided in Fig. 12.1.
Another rationale (for policy-makers and healthcare institutions) for imple-
menting and using CPWs is that they have also been proposed as a strategy to
optimize resource allocation and cost-effectiveness. Within the trend towards
the economization of healthcare, evidenced by the worldwide prevalence of case
mix (CM) systems, clinical pathway interventions have increasingly been promoted
as a way to respond to these far-reaching changes in healthcare reimbursement
methods (Delaney et al., 2003).
Fig. 12.1 A clinical pathway for the management of elderly inpatients with
malnutrition
[Figure: on admission, patients are screened with the MNA-SF. A score below 17 indicates malnutrition (Follow-up 3: nutrition counselling by a dietician, oral supplements, enteral nutrition and education). A score between 17 and 23.5 indicates risk of malnutrition (Follow-up 2: weight measured one day in seven, nutritional intake monitored semiquantitatively three days in seven, energy and protein supplements such as enriched meals and oral supplements, and education). Higher scores indicate low risk of malnutrition (Follow-up 1: weight measured one day in seven). Depending on whether weight is gained, stable or the nutritional situation has worsened, patients continue monitoring, move to a different follow-up level or receive a new proposal.]
Note: MNA-SF = Mini Nutritional Assessment short form
Source: Trombetti et al., 2013
Clinical pathways as a quality strategy 315
(1) To conduct international research into the quality and efficiency of organizing healthcare and
methods for the coordination of primary healthcare and care pathways.
(2) To set up an international network for pooling know-how and the international training
initiatives that go with it.
(3) To foster international cooperation between healthcare researchers, managers and healthcare
providers from European countries and the wider international community.
EPA network activities are organized by country in the form of EPA national sections, but not
all European countries are represented. EPA runs a summer school and a clinical pathway
conference, held on a yearly basis. EPA also edits the International Journal of Care Pathways and
is developing a standardized set of indicators to evaluate CPWs in clinical practice (EPA, 2018a).
The Belgian Dutch Clinical Pathway (BDCP) Network aims to support Belgian and
Dutch hospital organizations in the development, implementation and evaluation of CPWs.
The main activities are: (1) to provide education sessions on CPWs, patient safety,
quality management and evidence-based medicine; (2) to support multidisci-
plinary teamwork; and (3) to foster international research and collaboration.
Since 2003 the network has closely collaborated with the Dutch Institute for
Healthcare Improvement (CBO). By 2018 more than 57 healthcare organiza-
tions were members of the BDCP Network (including acute hospital trusts,
rehabilitation centres and home-care organizations) (BDCP Network, 2018).
Within the Network more than 1000 projects are under development or have
been implemented.
In 2003 the Dutch Ministry of Health initiated a complementary national
quality improvement collaborative called Faster Better. The purpose of the pro-
gramme was to realize a significant improvement in patient safety and patient
flow in 20% of Dutch hospitals within four years. One of the specific aims of
the programme was to shorten the total duration of the diagnostic process and
treatment by between 40% and 90%. CPWs were used to achieve this. During
the first year of the programme the participating hospitals achieved a reduction
of 32% (Consortium-Sneller-Beter-Pijler 3, 2006).
The Dutch government has been pushing responsibility for improving health-
care to healthcare facilities, insurance companies and patients. In 2011 one of
the largest Dutch insurance companies and various healthcare providers jointly
created the Lean Network in Healthcare (LIDZ) knowledge network. The goal
of this network is to make process improvement an integral and daily part of
healthcare by creating and sharing knowledge (LidZ, 2012). The approach of the
network is complementary to CPW and directly refers to the Lean methodology.
The network comprises more than 60 healthcare organizations.
12.3.2 England
CPWs have been promoted in several government health policy reports and it
is likely that the use of CPWs in the NHS is increasing (Darzi, 2008, 2009;
Department of Health, 2007). The growing focus in the NHS, especially during
the current budget constraints, is on evidence-based practice and improving
quality of care. As a result, CPWs have been identified as tools which could play
an important role in reducing costly variations in care in addition to improving
patient safety (Darzi, 2009). Several tools and resources have been developed
to facilitate the use and implementation of CPWs within the NHS. An online
pathway tool aims to provide easy access for NHS staff to clinical evidence and
best practice. The pathway database is hosted at the National Institute for Health
and Care Excellence (NICE). The NICE database offers generic information
about CPWs for all NHS staff, jurisdictions and stakeholders including quality
standards, technology appraisals, clinical and public health guidance and NICE
implementation tools (NICE, 2012). In addition, the Releasing Time To Care®
programme in the NHS is a complementary approach but it has a much broader
scope and directly refers to the Lean Methodology.1 Releasing Time to Care (also
known as the productive ward) provides a systematic approach to delivering safe,
high-quality care to patients within the NHS. It has been widely implemented
in NHS trusts and entities to respond to the needs of the community and to
ensure that standards of healthcare are high (Wilson, 2009).
CPWs also have the potential to respond to broader social trends, such as the demand
for shared decision-making, the continuing development of the “information
society”, advances in treatment, and the changing expectations of patients and
the workforce in the UK. There have been several success stories of CPW imple-
mentation in England thus far, for example the stroke care pathway originally
highlighted by Lord Darzi’s report (Intercollegiate Stroke Working Party, 2011).
Nevertheless, despite the noted benefits of several CPW initiatives and support
among key stakeholders, a recent report by the King’s Fund and Nuffield Trust
highlights several barriers to implementation of CPWs within the NHS, and
makes recommendations for calls to action in order to support and facilitate
CPWs “at scale and pace” (Goodwin et al., 2012). Although this is an important
issue and should guide future efforts, it is not unique to the UK (Greenhalgh et
al., 2004; Evans-Lacko et al., 2010).
More recently, there has been growing emphasis on better integration of patient
and public involvement in the development and implementation of CPWs in the
NHS. Resources such as the Smart Guides to Engagement (Maher, 2013) play an
important role here: they support Clinical Commissioning Groups in employing
pathway-development strategies that involve patients, caregivers and family
members and clearly reflect their values, in order to promote the appropriateness
and efficiency of CPWs (NHS England, 2016).
12.3.3 Germany
Before 2008 the implementation of CPWs had been proposed and endorsed by
many stakeholders in the German healthcare system. Several professional societies
had recommended that CPWs should be used in everyday practice, but their
development was left to single institutions and cross-linking and exchange of ideas
between them was rare and often cumbersome. Many healthcare professionals
1 Lean Management (LM) in healthcare is based upon the principles of reducing waste and wait-times and
improving the quality of care. The Lean Methodology is a complex multicomponent intervention and
refers to standard work in the form of clinical protocols and clinical pathways.
318 Improving healthcare quality in Europe
12.3.4 Bulgaria
In Bulgaria so-called “clinical pathways” are being used in case-based payments.
Since 2001 hospitals have been reimbursed with a single flat rate per pathway.
A set number of diagnoses are grouped and reimbursed according to a “clinical
pathway” (more than 250 in 2017) where the costs of up to two outpatient
medical examinations after hospital discharge are included. As an attempt to
optimize hospital activity, CPWs for outpatient procedures were also introduced
in 2016. There are 42 outpatient procedures (for example, cataract surgery,
chemotherapy) and four different procedures which require a length of stay of up
to 24 hours (for example, intensive treatment of newborns with assisted breathing)
(Dimova et al., 2018).
12.4 Effectiveness
As with any other intervention in healthcare, the question is whether CPWs
achieve what they aim for, whether they ultimately contribute to improving the
outcomes of healthcare, and at what cost this is achieved. Rotter et al. (2012)
addressed the effects of CPWs on professional practice, patient outcomes, length
of stay and hospital costs in the hospital setting in a Cochrane systematic review.
The methodology of the review is summarized in Box 12.3.
The review represents the most comprehensive database in terms of the available
quantitative literature; an update has been submitted to the Cochrane Library
for publication.
Selection criteria
Randomized controlled trials (RCTs) and non-randomized trials (for example, controlled clinical
trials, controlled before and after studies and interrupted time series studies) were included.
Outcome measures
Objectively measured patient outcomes included mortality, hospital readmission, complications,
adverse events, length of stay (LOS) and hospital costs. Professional practice outcomes included
documentation in medical records, patient satisfaction and time to mobilization post-surgery.
Data synthesis
The authors presented the results of the included studies in tabular form and assessed
their effects. Where enough comparable primary studies were available, results were
statistically pooled and depicted.
Aizawa et al. (2002) tested a clinical pathway for transurethral resection of the
prostate (TURP), Choong et al. (2000) assessed a CPW for femoral neck frac-
ture, Delaney et al. (2003) tested a CPW for laparotomy and intestinal resec-
tion, Kiyama et al. (2003) a CPW for gastrectomy, and Marelich et al. (2000) a
clinical pathway for mechanical ventilation. In-hospital complications assessed
were wound infections, bleeding and pneumonia (Aizawa et al., 2002; Choong
et al., 2000; Delaney et al., 2003; Kiyama et al., 2003; Marelich et al., 2000).
The results indicate that in order to avoid one hospital complication it would be
necessary to include 18 patients in a CPW (i.e. number needed to treat = 18).
However, the groups did not differ in in-hospital mortality or hospital readmission
within six months after discharge (the longest follow-up period reported).
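The reported number needed to treat maps directly onto a pooled absolute risk reduction; as a worked illustration (the percentage below is derived from the NNT, not reported separately in the review):

```latex
\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}},
\qquad
\mathrm{NNT} = 18
\;\Longrightarrow\;
\mathrm{ARR} = \tfrac{1}{18} \approx 0.056
\]
```

In other words, the pooled estimates imply roughly 5.6 fewer in-hospital complications per 100 patients managed on a CPW.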
12.4.1 Cost-effectiveness
Hospital cost data were reported as direct hospital costs and as total costs (direct
costs and indirect costs) including administration or other overhead costs.
Due to the low number of high-quality studies evaluating hospital costs, the
review considered all objective cost data available, such as hospital charges (i.e.
DRGs) or country-specific insurance points (Rotter et al., 2010). This highly
variable set of reported cost measures precluded a full economic evaluation;
the analysis therefore concentrated on the direct cost effects of CPWs rather than
their cost-effectiveness. Table 12.2 presents an overview of the costing method
used and which costs/charges were included and excluded in the calculations
(as far as reported).
Most studies reported a reduction in in-hospital costs. The adjusted cost effects
(weighted mean difference in US dollars standardized to the year 2000) ranged
from additional costs of US$261 per case for a protocol-directed weaning from
mechanical ventilation (Kollef et al., 1997) to savings of US$4 919 per case for an
emergency department-based protocol for rapidly ruling out myocardial ischemia
(Gomez et al., 1996). Significant clinical and methodological heterogeneity
prevented a meta-analysis of the reported cost results. In summary, CPWs are
associated with improved patient outcomes and could play an important role in
patient safety, but considerable clinical and methodological heterogeneity prohib-
ited further economic investigation of the reported effect measures and benefits.
It should be noted that the development and implementation of CPWs con-
sumes a considerable amount of resources. This corresponds to the fact that truly
achievable cost savings depend on the number of cases (volume) of the condition
targeted by the pathway. According to a cost analysis from Comried (1996),
inflation-adjusted costs for the development and implementation of the pathway
for the indication “Caesarean section” amounted to more than US$26 000 while
the costs for the development and implementation of a CPW for the indication
“uncomplicated vaginal delivery” were estimated at approximately US$10 000
(Comried, 1996). However, since normally 20% of diagnoses cover 80% of cases
(Schlüchtermann et al., 2005), a considerable percentage of medical services can
be dealt with using a relatively small number of CPWs.
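The volume argument can be sketched numerically. The following illustration amortizes the one-off development and implementation cost of a pathway over annual case volume; the cost figures are the inflation-adjusted estimates from Comried (1996), while the function name and case volumes are hypothetical, chosen only to show why high-volume conditions recoup pathway costs faster:

```python
def amortized_cost_per_case(development_cost: float, annual_cases: int) -> float:
    """Spread a one-off pathway development/implementation cost over annual case volume."""
    return development_cost / annual_cases

# Caesarean-section pathway: ~US$26 000 development cost (Comried, 1996),
# assuming a hypothetical volume of 500 cases per year.
print(amortized_cost_per_case(26_000, 500))   # 52.0 (US$ per case)

# Uncomplicated-vaginal-delivery pathway: ~US$10 000 (Comried, 1996),
# assuming a hypothetical volume of 100 cases per year.
print(amortized_cost_per_case(10_000, 100))   # 100.0 (US$ per case)
```

Under these assumptions the more expensive pathway is nevertheless cheaper per case, which is the point behind targeting the 20% of diagnoses that cover 80% of cases.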
References
Aizawa T et al. (2002). Impact of a clinical pathway in cases of transurethral resection of the
prostate. Japanese Journal of Urology, 93(3):463–8.
Bauer MS et al. (2006). Collaborative care for bipolar disorder: part I (& II) Intervention and
implementation in a randomized effectiveness trial. Psychiatric Services, 57(7):927–6.
BDCP Network (2018). Belgian Dutch Clinical Pathway Network. Available at: https://www.
kuleuven.be/samenwerking/ligb/reseachlines/belgian-dutch-clinical-pathway-network, accessed
27 June 2018.
Bero LA et al. (1998). Closing the gap between research and practice: an overview of systematic
reviews of interventions to promote the implementation of research findings. The Cochrane
Effective Practice and Organization of Care Review Group. BMJ, 317(7156):465–8.
Bookbinder M et al. (2005). Improving end-of-life care: development and pilot-test of a clinical
pathway. Journal of Pain and Symptom Management, 29(6):529–43.
Bosch M et al. (2007). Tailoring quality improvement interventions to identified barriers: a multiple
case analysis. Journal of Evaluation in Clinical Practice, 13(2):161–8.
Brattebo G et al. (2002). Effect of a scoring system and protocol for sedation on duration of
patients’ need for ventilator support in a surgical intensive care unit. BMJ, 324(7350):1386–9.
Brook AD et al. (1999). Effect of a nursing-implemented sedation protocol on the duration of
mechanical ventilation. Critical Care Medicine, 27(12):2609–15.
Campbell H et al. (1998). Integrated care pathways. BMJ, 316(7125):133–7.
Cene CW et al. (2016). A Narrative Review of Patient and Family Engagement: The “Foundation”
of the Medical “Home”. Medical Care, 54(7):697–705.
Chadha Y et al. (2000). Guidelines in gynaecology: evaluation in menorrhagia and in urinary
incontinence. BJOG: An International Journal of Obstetrics and Gynaecology, 107(4):535–43.
Chen SH et al. (2004). The development and establishment of a care map in children with asthma
in Taiwan. Journal of Asthma, 41(8):855–61.
Choong PF et al. (2000). Clinical pathway for fractured neck of femur: a prospective, controlled
study. Medical Journal of Australia, 172(9):423–6.
Cluzeau FA et al. (1999). Development and application of a generic methodology to assess the
quality of clinical guidelines. International Journal of Quality in Health Care, 11(1):21–8.
Cole MG et al. (2002). Systematic detection and multidisciplinary care of delirium in older
medical inpatients: a randomized trial. Canadian Medical Association Journal, 167(7):753–9.
Comried LA (1996). Cost analysis: initiation of HBMC and first CareMap. Nursing Economics,
14(1):34–9.
Consortium Sneller Beter Pijler 3 (2006). Sneller Beter werkt! Resultaten van het eerste jaar.
Den Haag.
Darzi A (2008). High quality care for all: NHS Next Stage Review final report. London: Department
of Health.
Darzi A (2009). The first year of high quality care for all. Health Service Journal, 119(6162):17.
De Allegri M et al. (2011). Which factors are important for the successful development and
implementation of clinical pathways? A qualitative study. BMJ Quality and Safety, 20(3):203–8.
De Bleser L et al. (2006). Defining pathways. Journal of Nursing Management, 14:553–63.
de Vries EN et al. (2010). Effect of a comprehensive surgical safety system on patient outcomes.
New England Journal of Medicine, 363:1928–37.
Delaney CP et al. (2003). Prospective, randomized, controlled trial between a pathway of controlled
rehabilitation with early ambulation and diet and traditional postoperative care after laparotomy
and intestinal resection. Diseases of the Colon and Rectum, 46(7):851–9.
Department of Health (2007). Our NHS Our future: NHS next stage review – interim report.
London: Department of Health.
DGKPM (2008). German Society for Clinical Process Management. Available at: www.dgkpm.
de, accessed 04 May 2012.
Dimova A et al. (2018). Bulgaria: Health system review. Health Systems in Transition: 20(4).
Doherty SR, Jones PD (2006). Use of an ‘evidence-based implementation’ strategy to implement
evidence-based care of asthma into rural district hospital emergency departments. Rural and
Remote Health, 6(1):1.
Dowsey MM et al. (1999). Clinical pathways in hip and knee arthroplasty: a prospective randomised
controlled study. Medical Journal of Australia, 170(2):59–62.
EPA (2018a). EPA Care Pathways. Available at: http://e-p-a.org/care-pathways/, accessed 27
June 2018.
EPA (2018b). Health Quality Ontario. Evidence and Health Quality Ontario. Available at:
http://www.hqontario.ca/Evidence-to-Improve-Care/Evidence-and-Health-Quality-Ontario,
accessed 29 June 2018.
Evans-Lacko S et al. (2010). Facilitators and barriers to implementing clinical care pathways.
BMC Health Services Research, 10:182.
Falconer JA et al. (1993). The critical path method in stroke rehabilitation: lessons from an
experiment in cost containment and outcome improvement. QRB Quality Review Bulletin,
5(1):8–16.
Gomez MA et al. (1996). An emergency department-based protocol for rapidly ruling out
myocardial ischemia reduces hospital time and expense: results of a randomized study
(ROMIO). Journal of the American College of Cardiology, 28(1):25–33.
Goodwin N et al. (2012). A report to the Department of Health and the NHS Future Forum.
Available at: http://www.kingsfund.org.uk/sites/files/kf/integrated-care-patients-populations-
paper-nuffield-trust-kings-fund-january-2012.pdf, accessed 05 December 2018.
Greenhalgh T et al. (2004). Diffusion of innovations in service organizations: systematic review
and recommendations. Milbank Quarterly, 82(4):581–629.
Grimshaw J (1998). Evaluation of four quality assurance initiatives to improve out-patient referrals
from general practice to hospital. Aberdeen: University of Aberdeen.
Grimshaw JM, Thomson MA (1998). What have new efforts to change professional practice
achieved? Cochrane Effective Practice and Organization of Care Group. Journal of the Royal
Society of Medicine, 91(Suppl 35):20–5.
Grimshaw JM et al. (2001). Changing provider behaviour: an overview of systematic reviews of
interventions. Medical Care, 39(8 – suppl 2).
Grimshaw JM et al. (2007). Looking inside the black box: a theory-based process evaluation
alongside a randomised controlled trial of printed educational materials (the Ontario printed
educational message, OPEM) to improve referral and prescribing practices in primary care in
Ontario, Canada. Implementation Science, 2:38.
Huckson S, Davies J (2007). Closing evidence to practice gaps in emergency care: the Australian
experience. Academic Emergency Medicine, 14(11):1058–63.
Intercollegiate Stroke Working Party (2011). National Sentinel Stroke Clinical Audit 2010 Round 7:
Public Report for England, Wales and Northern Ireland. London: Royal College of Physicians.
Johnson KB et al. (2000). Effectiveness of a clinical pathway for inpatient asthma management.
Pediatrics, 106(5):1006–12.
Kampan P (2006). Effects of counseling and implementation of clinical pathway on diabetic patients
hospitalized with hypoglycemia. Journal of the Medical Association of Thailand, 89(5):619–25.
Kim MH et al. (2002). A prospective, randomized, controlled trial of an emergency department-
based atrial fibrillation treatment strategy with low-molecular-weight heparin (Structured
abstract). Annals of Emergency Medicine, 40:187–92.
Kinsman L, James EL (2001). Evidence based practice needs evidence based implementation.
Lippincott’s Case Management, 6(5):208–19.
Kinsman L, James E, Ham J (2004). An interdisciplinary, evidence-based process of clinical
pathway implementation increases pathway usage. Lippincott’s Case Management, 9(4):184–96.
Kinsman L et al. (2010). What is a clinical pathway? Development of a definition to inform the
debate. BMC Medicine, 8:31.
Kiyama T et al. (2003). Clinical significance of a standardized clinical pathway in gastrectomy
patients (Structured abstract). Journal of Nippon Medical School, 70:263–9.
Knai C et al. (2013). International experiences in the use of care pathways. Journal of Care Services
Management, 7(4):128–35.
Kollef MH et al. (1997). A randomized, controlled trial of protocol-directed versus physician-
directed weaning from mechanical ventilation. Critical Care Medicine, 25(4):567–641.
LidZ (2012). Stichting Lean in de zorg. Website: http://www.lidz.nl, accessed 20 May 2019.
Maher L (2013). Developing pathways: using patient and carer experiences. London: NHS
Networks.
Marelich GP et al. (2000). Protocol weaning of mechanical ventilation in medical and surgical
patients by respiratory care practitioners and nurses: effect on weaning time and incidence of
ventilator-associated pneumonia. Chest, 118(2):459–67.
Marrie TJ et al. (2000). A controlled trial of a critical pathway for treatment of community-acquired
pneumonia. CAPITAL Study Investigators. Community-Acquired Pneumonia Intervention
Trial Assessing Levofloxacin. Journal of the American Medical Association, 283:749–55.
NHS England (2016). Involving our patients and public in improving London’s healthcare:
NHS England (London) participation and engagement review 2015/16. Available at:
https://www.england.nhs.uk/london/wp-content/uploads/sites/8/2016/12/improv-londons-
healthcare.pdf, accessed 29 June 2018.
NICE (2012). NICE database clinical pathways. Available at: http://pathways.nice.org.uk/,
accessed 14 April 2019.
Philbin EF et al. (2000). The results of a randomized trial of a quality improvement intervention
in the care of patients with heart failure. The MISCHF Study Investigators. American Journal
of Medicine, 109(6):443–9.
Review Manager (RevMan) [Computer program] (2008). Version 5.0. Copenhagen: The
Nordic Cochrane Centre.
Roberts RR et al. (1997). Costs of an emergency department-based accelerated diagnostic protocol
vs hospitalization in patients with chest pain: a randomized controlled trial. Journal of the
American Medical Association, 278(20):1670–6.
Ronellenfitsch U et al. (2008). Clinical Pathways in surgery: should we introduce them into clinical
routine? A review article. Langenbeck’s Archives of Surgery, 393(4):449–57.
Rotter T et al. (2010). Clinical pathways: effects on professional practice, patient outcomes,
length of stay and hospital costs. Cochrane Database of Systematic Reviews, 17(3):CD006632.
Rotter T et al. (2012). The effects of clinical pathways on professional practice, patient outcomes,
length of stay, and hospital costs. Cochrane systematic review and meta-analysis. Evaluation
and the Health Professions, 35(1): 3–27.
Rotter T et al. (2013). Clinical pathways for primary care: effects on professional practice, patient
outcomes, and costs. Cochrane Database of Systematic Reviews, 8:CD010706.
Schlüchtermann J et al. (2005). Clinical Pathways als Prozesssteuerungsinstrument im Krankenhaus.
In: Oberender P (ed.). Clinical pathways: Facetten eines neuen Versorgungsmodells. Stuttgart:
Kohlhammer Verlag.
Smith et al. (2004). Impact on readmission rates and mortality of a chronic obstructive pulmonary
disease inpatient management guideline. Chronic Respiratory Disease, 1(1):17–28.
Sulch D et al. (2000). Randomized controlled trial of integrated (managed) care pathway for
stroke rehabilitation. Stroke, 31(8):1929–34.
Sulch D et al. (2002). Integrated care pathways and quality of life on a stroke rehabilitation unit.
Stroke, 33(6):1600–4.
Tilden VP, Shepherd P (1987). Increasing the rate of identification of battered women in an
emergency department: use of a nursing protocol. Research in Nursing and Health, 10(4):209–24.
Trombetti A et al. (2013). A critical pathway for the management of elderly inpatients with
malnutrition: effects on serum insulin-like growth factor-I. European Journal of Clinical
Nutrition, 67(11):1175–81.
Uerlich M et al. (2009). Clinical Pathways – Nomenclature and levels of development. Perioperative
Medicine, 1(3):155–63.
Usui K et al. (2004). Electronic clinical pathway for community acquired pneumonia (e-CP CAP).
Nihon Kokyuki Gakkai zasshi (The Journal of the Japanese Respiratory Society), 42(7):620–4.
van der Weijden T et al. (2012). Clinical practice guidelines and patient decision aids. An inevitable
relationship. Journal of Clinical Epidemiology, 65(6):584–9.
Vanhaecht K et al. (2006). Prevalence and use of clinical pathways in 23 countries – an international
survey by the European Pathway Association. Journal of Integrated Care Pathways, 10:28–34.
Wilson G (2009). Implementation of Releasing Time to Care – the productive ward. Journal of
Nursing Management, 17(5):647–54.
Zander K (2002). Integrated Care Pathways: eleven international trends. Journal of Integrated
Care Pathways, 6:101–7.
Chapter 13
Public reporting as a quality strategy
Summary
This definition does not consider all publicly available information on providers
to be public reporting. First and foremost, reporting must allow for the identifi-
cation of individual providers. This excludes initiatives that report performance
as summary indicators at the level of geographic areas, as happens in France. In
addition, our definition excludes open comments by healthcare users in the mass
media because this information is not systematically collected. However, the
definition includes public reporting of patient satisfaction based on systematic
surveys or rating websites. Finally, non-public feedback from insurers to provid-
ers is not considered if this information is not disclosed to the public (Marshall,
Romano & Davies, 2004).
Public reporting as a quality strategy focuses on the reporting of quality-related
information about effectiveness, safety and responsiveness of care, measured in
terms of structure, process or outcome indicators. Public reporting may be used
to address quality in different areas of care, i.e. primary prevention, acute care,
chronic care or palliative care. In this chapter we focus only on hospital care
and on physician practices, which – depending on the healthcare system under
consideration – are predominantly single or group practices. Although several
European countries also provide public reports for nursing homes (Rodrigues
et al., 2014), considering these activities here would go beyond the scope of
the chapter.
Public reporting requires the systematic and reliable measurement of a range
of relevant and meaningful quality indicators (see also Chapter 3). It may be
combined with audit and feedback strategies (see Chapter 10) and external
assessment strategies (see Chapter 8) with the aim of strengthening incentives
for improvement of quality.
The chapter follows the common structure of most chapters in Part 2 of this
book. The next section describes the underlying rationale of why public report-
ing should contribute to healthcare quality, followed by a review of approaches
to public reporting in European countries in order to identify context, relevant
actors, scope and the range of indicators used. While the interest in public
reporting is continuously growing in Europe, evaluations of public reporting
instruments are scant. We therefore derive information on effectiveness and (cost-)
effectiveness and on the implementation requirements by including experiences
from other continents, in particular from the United States. In synthesizing these
experiences, we conclude by summarizing the lessons learned and by deriving
potential conclusions and recommendations for policy-makers.
Both pathways are interlinked through the provider’s self-awareness and the
intention to maintain or increase reputation and, in a competitive context, market
share (Berwick, James & Coye, 2003; Werner & Asch, 2005). It is worth noting
that through the second pathway quality improvement may occur even if patients
make limited use of provider choice, somewhat loosening the link between choice and
exit as a prerequisite for change (Cacace et al., 2011). Schlesinger (2010) argues
that “voice”, i.e. the critical dialogue exerted by the informed and empowered
patient, is complementary, and in some cases also alternative, to “exit”. This is
particularly important for healthcare settings in which voice seems the more
promising strategy in achieving quality gains compared to exit, for example in
primary care, where the continuity of the physician-patient relationship is an
objective in its own right. Admittedly, however, voice is much more powerful if
there is an exit option and a credible threat that consumers will exert their choice.
Most public reporting initiatives focus on quality in the hospital sector, while there
are fewer public reporting initiatives that cover GPs and/or specialists.
In some countries, such as Germany, the Netherlands and the UK, several dif-
ferent initiatives exist for the hospital sector and there are at least two that cover
GPs and specialists. Relatively few initiatives cover both ambulatory care (GPs
and specialists) and hospital care. Interestingly, all reviewed public reporting
initiatives end at the borders of the respective country. To our knowledge, there
is no public reporting system supporting cross-border care in Europe.
Relatively elaborated public reporting initiatives have been implemented in the
United Kingdom (nhs.uk, former NHS Choices), the Netherlands (kiesbeter.nl,
“Make better Choices”), Germany (weisse-liste.de, “White List”), and Denmark
(sundhed.dk, “Health”). These initiatives cover either all or at least a majority of
providers in the respective country and report on large sets of quality indicators
in multiple sectors of the healthcare system, including general and specialist care
in hospitals and physician practices, and optionally also nursing homes as well
as dental care providers.
In some countries public reporting is combined with financial incentives in a
Pay-for-Quality (P4Q) approach (see also Chapter 14), such as the Quality and
Outcomes Framework (QOF) in the UK or the Quality Bonus Scheme (QBS) in
Estonia. In the UK the QOF was introduced in 2004 to reward GP practices
for providing high-quality care. It systematically rewards and reports an array of clinical
and non-clinical performance indicators at the level of GP practices and therefore
goes far beyond the usually reported data on GP practices in other countries.
Many other countries also have public reporting initiatives but these are usually
less systematic and cover a smaller proportion of providers for a variety of reasons.
For example, in some more decentralized healthcare systems, such as Sweden, the
implementation of public reporting initiatives and the detail of publicly released
information vary greatly between regional units. The same applies to Italy, where
measures of the National Evaluation Programme (Programma Nazionale Esiti,
PNE) are publicly reported at hospital level in some regions, for example, in the
Regional Programme for the Evaluation of Healthcare Outcomes (P.Re.Val.E) in
Lazio (PNE, 2019). As with many other policy innovations, regions can serve
as “laboratories for experimentation” for quality reporting with the potential for
national scale-up (Cacace et al., 2011).
So far, few public reporting activities have been identified in the countries that
joined the EU in 2004 or later (for example, Bulgaria, the Czech Republic,
Romania, Slovakia, Slovenia). Only the Baltic countries have recently introduced
some initiatives: in Estonia, the Quality Bonus Scheme (QBS) publishes infor-
mation about the achieved quality points per practice. In Latvia a pilot project
Public reporting as a quality strategy 337
of public reporting on both hospitals’ and GPs’ performance has been initiated
recently which – depending on its success – might be scaled up in the future.
In Lithuania quality indicators are publicly reported for both hospitals and GPs
by the six sickness funds (OECD, 2018). However, as detailed information is
unavailable, the initiative is not included in Table 13.1.
Finally, it needs to be acknowledged that for some countries information is not
available in international publications and that, in contrast to other quality strat-
egies (see Chapters 12 and 8), no organization or association exists that unites
different national organizations responsible for public reporting. Furthermore,
public reporting in European countries is constantly changing, with new initia-
tives being implemented, and others being dropped, renamed and/or incorpo-
rated into new ones. Therefore, the overview of public reporting initiatives does
not claim to be exhaustive, but considers the most important public reporting
strategies identified at the time of writing.
hospital care (Emmert et al., 2016) and 29 physician rating websites have been
identified (Emmert & Meszmer, 2018).
A range of different public – and sometimes private – actors play a role in the
governance of reporting initiatives in hospital care. In Denmark (sundhedskvalitet.
dk), for example, the municipalities and regions, the National Board of Health
and the Ministry of the Interior and Health are involved. The Dutch kiesbeter.
nl is operated by the National Quality Institute, which was founded in 2013
to bundle different existing activities related to quality in healthcare (van den
Hurk, Buil & Bex, 2015). In the German social insurance system, sickness funds
play a major role in the regulation of public reporting through their representa-
tion in the Federal Joint Committee (Gemeinsamer Bundesausschuss, G-BA),
which is the highest decision-making body in healthcare. Furthermore, sickness
funds are obliged to make data from hospital quality reports accessible for users
on the internet (see, for example, AOK Gesundheitsnavigator, “AOK Health
Navigator”). In addition, some hospitals report performance data on the basis
of membership in a (private) quality initiative, such as the German qualitaet-
skliniken.de, which is, however, restricted to rehabilitation care. Reporting in
this case is more self-regulated and also (self-)selective, as non-members do not
contribute to quality reporting.
At the level of physician practices private sponsorship is more frequent than in
the hospital sector but public sponsorship remains the more common form (see
also Table 13.1). An array of private commercial initiatives has sprung up, as for
example the physician rating websites in Germany (jameda.de), the Netherlands
(independer.nl) and Austria (docfinder.at). Because of their private, profit-oriented
sponsorship, users have to accept – more or less health-related – advertising,
as these initiatives usually do not have access to other (public) funding sources.
Public sponsorship exists in Denmark, Estonia, Germany, Norway, and the
UK. In Germany, several sickness funds have set up their own physician rating
websites by drawing on results of the weisse-liste.de, for example, the AOK
Gesundheitsnavigator. The weisse-liste.de itself has been created by a private non-
profit foundation in cooperation with three sickness funds as well as associations
of patients and consumer organizations (Cacace et al., 2011). Weisse-liste.de
allows members and co-insured family members of three large sickness funds to
rate providers, and the entire population has access to the information (Emmert
& Meszmer, 2018).
into indicators of structure, process and outcome (see also Chapters 2 and 3). In
addition, this chapter reports separately on indicators of patient satisfaction and
patient experience to highlight the use of indicators for the evaluation of patient-
centredness. An important challenge for public reporting of outcome indicators
is risk-adjustment, which is needed to make comparisons across providers fair
and meaningful (see also Chapter 3).
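To illustrate, a minimal sketch of indirect standardization, one common approach to risk-adjustment, is shown below. The strata, rates and function name are hypothetical; real reporting systems use far richer clinical risk models.

```python
# Sketch of indirect standardization for an outcome indicator
# (hypothetical data; actual portals use detailed clinical risk models).

# Reference mortality rates per risk stratum, estimated across all providers
reference_rates = {"low_risk": 0.01, "medium_risk": 0.05, "high_risk": 0.15}

def standardized_ratio(cases, deaths):
    """Observed/expected (O/E) ratio for one provider.

    cases:  dict mapping risk stratum -> number of treated patients
    deaths: observed number of deaths at this provider
    """
    expected = sum(n * reference_rates[stratum] for stratum, n in cases.items())
    return deaths / expected

# A provider treating many high-risk patients: 9 observed deaths
# against 11 expected for this case mix
ratio = standardized_ratio(
    {"low_risk": 100, "medium_risk": 50, "high_risk": 50}, deaths=9
)
print(round(ratio, 2))  # 0.82
```

An O/E ratio below 1 indicates fewer adverse outcomes than expected for the provider's case mix, which makes comparisons fairer than crude rates would be.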
Austria kliniksuche.at ✓ ✓
Germany AOK Gesundheitsnavigator (a) ✓ ✓ ✓ ✓ ✓
deutsches-krankenhaus-verzeichnis.de ✓ ✓ ✓
g-ba-qualitaetsberichte.de ✓ ✓ ✓
qualitätskliniken.de ✓ ✓ ✓ ✓ ✓
weisse-liste.de ✓ ✓ ✓ ✓ ✓ ✓
Denmark sundhedskvalitet.dk ✓ ✓ ✓ ✓
esundhed.dk ✓ ✓ ✓
France scopesante.fr ✓ ✓ ✓ ✓ ✓
Italy P.Re.Val.E (b) ✓ ✓ ✓
Netherlands independer.nl ✓ ✓ ✓ ✓ ✓ ✓
kiesbeter.nl ✓ ✓ ✓ ✓ ✓
ziekenhuischeck.nl ✓ ✓ ✓ ✓ ✓
Norway helsenorge.no (c) ✓ ✓ ✓ ✓ ✓
Sweden öppna jämförelser (d) ✓ ✓ ✓ ✓
vantetider.se ✓
UK Hospital Scorecard Scotland (e) ✓ ✓ ✓ ✓
nhs.uk (f) ✓ ✓ ✓ ✓ ✓ ✓
Quality and Outcomes Framework (QOF) (g) ✓ ✓ ✓ ✓ ✓
Note: a: "AOK Health Navigator", one example of a sickness fund-led initiative based on results from weisse-liste.de;
b: for registered users only, website: https://bit.ly/2BtrebL;
c: patient perspective is presented separately from other quality indicators;
d: "open comparisons", website: socialstyrelsen.se/oppnajamforelser;
e: only in Scotland and only for NHS registered users, website: isdscotland.org;
f: former NHS Choices;
g: relaxed in Wales, dropped in Scotland, running in England and Northern Ireland, website: qof.digital.nhs.uk
Source: based on Cacace et al. 2011, updated in 2019
342 Improving healthcare quality in Europe
All public reporting initiatives included in Table 13.2 provide at least some
guidance for users, for example through explanatory notes that open when
hovering over technical terms. Often interactive website tools allow users to
perform one-to-one comparisons of a few hospitals, selected for example via a
postal code search, often combined with a search by body part or indication.
Some public reporting initiatives provide a reference to national or
regional averages to facilitate comparisons across hospitals. Another option is to
set a reference threshold on the basis of scientific standards or clinical guidelines.
For example, nhs.uk has defined, on the basis of clinical guidelines, that at least
95% of patients should be assessed for the risk of venous thromboembolism
(blood clots). Kiesbeter.nl and weisse-liste.de indicate the deviation of indicators
from averages and/or scientific standards using a flag system (green-yellow-red/
green-red).
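The deviation-based flag logic described above can be sketched as follows. The tolerance thresholds and function name are illustrative assumptions, not the actual rules of kiesbeter.nl or weisse-liste.de:

```python
def flag(value, average, tolerance=0.05):
    """Traffic-light flag for an indicator where higher values are better.

    Green if the provider is at or above the reference average (within a
    tolerance band), red if clearly below, yellow in between. The 5%/10%
    bands are invented for illustration.
    """
    if value >= average * (1 - tolerance):
        return "green"
    if value < average * (1 - 2 * tolerance):
        return "red"
    return "yellow"

# Reference threshold in the style of nhs.uk: at least 95% of patients
# should be assessed for the risk of venous thromboembolism
print(flag(0.97, 0.95))  # green
print(flag(0.90, 0.95))  # yellow
print(flag(0.80, 0.95))  # red
```

Whether a two-colour (green-red) or three-colour scheme is used is a presentation choice of each portal; the underlying comparison against an average or guideline-based reference is the same.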
The German qualitaetskliniken.de used to have a somewhat different approach
to presenting information on hospital quality. Here users were able to select
hospitals by setting minimum performance thresholds for different criteria
covering clinical quality, patient safety, patient perspective and/or satisfaction of
referring physicians. However, Qualitaetskliniken.de discontinued this approach.
Nevertheless, we find the idea of making information adaptable to users’ needs
by enabling them to prioritize search criteria quite remarkable.
results. Weisse-liste.de, as well as the English nhs.uk, use both methods of data
collection. While weisse-liste.de combines the results into one database, nhs.
uk reports the survey results separately from users’ website ratings. As a means
to improve reliability of reported information, weisse-liste.de does not publish
scores based on fewer than five ratings per provider, and it reports average scores
across providers as a reference. On the Austrian docfinder.at, offensive comments
are simply deleted in order to avoid a culture of "naming and blaming". Finally,
there are also different ways of presenting patients' open comments to the user.
Many systems attach a calendar date to all ratings and open comments, enabling
users to judge the timeliness, and thus relevance, of the data.
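A minimal sketch of the small-numbers safeguard (the five-rating minimum described for weisse-liste.de); the function name and return convention are our own illustration:

```python
def publishable_score(ratings, minimum=5):
    """Average rating, suppressed when based on too few ratings.

    Mirrors the reliability safeguard described for weisse-liste.de,
    where scores based on fewer than five ratings per provider are
    not published; names and conventions here are illustrative.
    """
    if len(ratings) < minimum:
        return None  # too few ratings: no score is published
    return sum(ratings) / len(ratings)

print(publishable_score([4, 5, 3]))        # None (suppressed)
print(publishable_score([4, 5, 3, 4, 5]))  # 4.2
```

Suppressing small samples trades completeness for reliability: a single disgruntled or enthusiastic rater cannot single-handedly define a provider's published score.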
Major obstacles to the reporting of clinical outcome indicators at the level of
physician practices are the comparably small numbers of cases and the lack of
information in medical records to allow risk-adjustment. QOF reports on a
comparatively large number of outcome indicators, although these are mostly
“intermediate outcomes”. Based on scientific evidence, these measures link specific
processes to effective outcomes, such as rewarding GPs for the proportion of
patients with hypertension whose last blood pressure reading was below 150/90,
where there is evidence that lower blood pressure improves the odds of survival
(Campbell & Lester, 2010). In order to enable fair comparisons, QOF relies
mostly on exception reporting (and not on risk-adjustment), allowing physicians
to exclude data from certain patients (for example, palliative patients), when
calculating average scores (NHS Digital, 2019).
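Exception reporting can be illustrated with a stylized calculation (the figures and function name are invented; the actual QOF business rules, with achievement thresholds and points, are considerably more involved):

```python
def achievement(register, exceptions, achieved):
    """Share of eligible patients meeting a target after exception reporting.

    register:   patients with the condition (e.g. hypertension)
    exceptions: patients excluded via exception reporting
                (e.g. palliative patients)
    achieved:   patients meeting the target (e.g. last blood pressure
                reading below 150/90)
    """
    eligible = register - exceptions
    return achieved / eligible

# 200 hypertensive patients, 20 exception-reported, 153 at target:
# achievement is computed on the 180 eligible patients, not on all 200
print(round(achievement(200, 20, 153), 2))  # 0.85
```

The point of the exclusion is fairness: without it, a practice caring for many palliative patients would appear to underperform on blood pressure control for reasons outside its influence.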
Netherlands independer.nl ✓ ✓ ✓ ✓ ✓
zorgkaartnederland.nl ✓ ✓ ✓ ✓ ✓
Norway helsenorge.no ✓ ✓ ✓ (✓) (e) ✓
Sweden munin.vgregion.se (f) ✓ ✓ ✓ ✓ ✓
UK nhs.uk ✓ ✓ ✓ ✓ ✓ ✓ ✓
cqc.org.uk ✓ ✓ ✓ ✓ ✓ ✓ ✓
Quality and Outcomes Framework (QOF) (g) ✓ ✓ ✓ ✓ ✓
Note: a: negative comments are deleted;
b: "AOK Health Navigator", one example of a sickness fund-led initiative based on results of weisse-liste.de;
c: one example of an array of physician rating sites in Germany, see Emmert & Meszmer (2018) for more information;
d: website: haigekassa.ee;
e: planned;
f: only in the region Västra Götaland: the Quality Follow Up Programme for Primary Care (QFP);
g: relaxed in Wales, dropped in Scotland, running in England and Northern Ireland, website: qof.digital.nhs.uk.
Source: authors' compilation
Fung et al., 2008; Marshall et al., 2000; Hibbard, Stockard & Tusler, 2003;
Werner & Bradlow, 2010). Totten et al. (2012) also found relatively robust
evidence that the likelihood of quality improvement was greater for providers
with low baseline performance.
Other reviews have focused on the unintended effects of public reporting. One
relatively recent review investigated potential negative effects of public reporting
of individual surgeons’ outcomes, including 25 studies (22 from the US and three
from the UK) (Behrendt & Groene, 2016). It found some evidence from the
US that public reporting may lead to patient selection, although similar effects
were not observed in the UK, where hospital care is provided mostly by public
hospitals. However, another (narrative) review of negative effects resulting from
performance measurement in the NHS identified several dysfunctional conse-
quences, including measurement fixation, tunnel vision, gaming or increased
inequality through patient selection (Mannion & Braithwaite, 2012). Also, the
above-mentioned review by the AHRQ (Totten et al., 2012) found evidence
of some unintended effects, such as changed coding and readmission practices.
Concerning the effect of public reporting on patients’ choice of providers, the
so-called selection pathway, there is relatively robust evidence that patients have –
so far – not made much use of publicly reported quality information (Faber et
al., 2009; de Cruppé & Geraedts, 2017). Patient surveys conducted in several
European countries indicate that only 3% to 4% of patients had looked at quality
information before undergoing treatment (Kumpunen, Trigg & Rodrigues, 2014).
One reason might be that patients are not aware of publicly reported quality
information (Hermeling & Geraedts, 2013; Patel et al., 2018).
Even if users are aware of publicly reported quality information, there is little
evidence that they use this information to avoid low performers (Marshall et al.,
2000; Fung et al., 2008; Victoor et al., 2012). Several studies have found that
the sheer quantity of publicly released information on healthcare providers, in
terms of both initiatives and indicators, can be overwhelming and confusing for
users – especially when presented information is inconsistent (Boyce et al., 2010;
Leonardi, McGory & Ko, 2007; Rothberg et al., 2008). As a consequence,
patients may turn to other important sources of reference when choosing a
provider, such as referring physicians or family and friends (Victoor et al., 2012).
In theory, physicians could use publicly reported information to counsel patients
choosing a provider. However, a recent study from Germany found that publicly
reported quality information does not help physicians in counselling their
patients (Geraedts et al., 2018).
There is moderate evidence that public reporting does not lead to increasing
market shares for high-performing providers (Totten et al., 2012), implying
that the selection pathway is not particularly relevant. These findings have been
confirmed by the recent Cochrane Review (Metcalfe et al., 2018), where the
authors concluded that the public disclosure of performance data may make
little or no difference to healthcare utilization by consumers, except for certain
subgroups of the population. In particular, it was shown that data may have a
greater effect on provider choice among advantaged populations (Metcalfe et
al., 2018). These results indicate that populations with lower socioeconomic
status – just like older adults – may be disadvantaged because they are less likely
to search for health information on the internet (Cacace et al., 2011; Kumpunen,
Trigg & Rodrigues, 2014). This is of concern given that these groups generally
tend to be in poorer health and therefore also in greater need of healthcare and
of quality information.
Evidence on the costs and cost-effectiveness of public reporting is lacking. In
fact, to our knowledge, even conceptual approaches for systematically measuring
the costs and benefits of public reporting are so far missing.
selecting a provider. The figure also shows that users in practice rarely meet the
expectations of the theoretical model.
of composite indicators). On the other hand, many users desire more detailed
information in order to better understand what lies behind the data. This is
particularly true if the public reporting information aims to motivate provid-
ers to improve their practice. Therefore, it is useful to present data at different
levels of aggregation and to allow users to expand the data and to see individual
indicators. However, with a greater level of detail, explanatory notes become
even more important because more specific (clinical) indicators are often more
difficult to interpret for patients but they may also be more easily related to their
particular health problem.
Concerning the third point – to enable a positive attitude towards the presented
data – it is important that data are of high quality. In particular, they should be
reliable, sensitive to change, consistent, valid and resistant to manipulation. In
general, reporting strategies benefit from methods that safeguard the timeliness
and completeness of data, for example through mandatory reporting or by using
financial incentives (pay for reporting/pay for transparency). Furthermore, to
generate trust, public reporting needs to provide information on whether and
how outcome indicators are risk-adjusted and how composite indices are derived.
In addition, sponsors must be aware that, depending on the system, consumers
will have more or less trust in different authors of public reports.
German consumers, for example, express confidence in consumer protection
organizations as authors of public reports, whereas scientific societies, government
agencies and other interest groups are trusted less (Geraedts, 2006). Other
stakeholders, such as patient associations, self-help groups, the media, academic
departments and GPs, could serve as information intermediaries who will help to
interpret the information and test applicability to the patient’s individual needs
and preferences (Shaller, Kanouse & Schlesinger, 2014).
Finally, in order to be successful, implementation strategies will always have to
consider that patient/user involvement is essential for public reporting initiatives
that primarily aim to enable informed choice of providers. Ideally, reporting
schemes are regularly re-evaluated and improved based on patient/user feedback
and patterns of information use (Pross et al., 2017). Provider involvement is also
a prerequisite for public reporting to be successful in changing peer behaviour.
To raise acceptance among providers, the achievements reported should be fully
under the control of those being assessed, i.e. the issues reported should be
addressable by providers' actions (Campbell & Lester, 2010). This also relates to the
(necessarily) flawed risk-adjustment of outcome indicators, such as morbidity
and mortality, which may potentially lead to unintended consequences, in par-
ticular for high-risk patients. As risk-adjustment is likely to be imperfect, some
authors suggest abandoning the use of standardized mortality ratios completely
from public reporting initiatives and using clinical audit data instead (Goodacre,
Campbell & Carter, 2015; Lilford & Pronovost, 2010).
include a clear definition of its goals and its target group(s). The strategy should
also indicate the regulation and sponsorship of public reporting, for example
if it is linked to external assessment (see Chapter 8) or financial incentives (see
Chapter 14), and the role of governmental and private organizations should be
defined. One critical question for policy-makers should be whether the expected
benefits outweigh the administrative and financial costs of high-quality reporting
initiatives. Clearly, a difficulty here is that approaches to assessing the
cost-effectiveness of public reporting are so far missing.
When implementing public reporting, it is important to systematically involve
all relevant stakeholders, i.e. patients/patient organizations and providers and
staff at all levels of the healthcare system (2). As described in Chapter 3, different
stakeholders have different information needs. The designers of public reporting
systems need to acknowledge that "the typical user" is difficult to identify or
does not even exist. As diverse as users are, so are their information requirements.
Victoor et al. (2012) showed that the information needs of patients differ across
primary and secondary care and that they vary by type of disease or treatment,
by age group and by educational and socioeconomic background. Individual
(1) Clarify the aims as well as target groups and develop an overarching strategy.
(3) Display information on quality dimensions that are relevant for users.
(4) Design indicators that match the interest and skill levels of users.
(6) Present data at different levels of aggregation and allow users to expand the data to see
individual indicators.
(8) Educate patients and users about quality in healthcare and increase patient and user
awareness of public reporting.
(11) Take a long-term perspective and keep the system under constant review.
patients may consider a range of factors and may have different preferences
as to their trade-offs. In line with the implications (3) to (5) in Box 13.1, we
underline the importance of taking these aspects into account when developing
a public-reporting system.
Continuous efforts are required to improve public reporting and to adapt it to
the users’ needs. These efforts should be made as we can take for granted that
users want more information about the performance of their healthcare providers.
Public reporting is widely accepted as a means to improve transparency and to
involve the patient in decision-making. Although a considerable body of work
has explored the benefits of public reporting, much less is known about the
actual mechanisms behind its effects. One of the puzzles that remains is why
utilization is low, given that patients consistently report being interested in
receiving more information. More and more users are interested in sharing their experience with
healthcare, as the growing quantity of provider ratings shows.
Quality information should be tailored to the information needs of the intended
users. This concerns both the content of public reporting (i.e. the selection of
indicators) and the methods of presentation, which should reduce complex-
ity without losing important information. This can be achieved by displaying
information using composite indicators, which users can expand if they are
interested in seeing the constituent indicators. It is also possible to sort
information in such a way that users are pointed to the most important information
(Kumpunen, Trigg & Rodrigues, 2014), although this is complicated by the fact
that individual users have different preferences. An innovative approach could
be to offer a range of both clinical and non-clinical indicators, and to let the
users develop their own priorities and give them the opportunity of weighting
results accordingly.
The provision of structured decision aids, such as evidence-based information and
other tools that help patients to clarify their preferences, could support patients
and users to make informed choices. Independently of the aspects mentioned
above, education of patients and users about quality in healthcare and an increased
awareness of public reporting are important. In addition, engaging professionals
in supporting and using public reporting is essential to meet their own informa-
tion needs and to support patients in better understanding information.
Furthermore, policy-makers should consider how access to such reporting systems
can be improved. The internet has turned out to be an effective way of presenting
such comparative information on providers. However, the problem remains that
access is not guaranteed, in particular for the most vulnerable or less literate
groups of the population. If policy-makers indeed favour equitable access to quality
information across the population, more research is needed about how different
target audiences can be reached. Should public reporting indeed enable better
informed groups to receive higher-quality care, then everybody must have a fair
chance to belong to that group.
Finally, “trial and error” experiences will be part of the process of developing
public reporting systems, and international exchange may be a useful source for
policy learning. As pointed out in Box 13.1, policy-makers should take a longer-
term perspective and keep a public reporting system under constant review.
Continuous efforts are required to find out what information users want and
how information can be presented in an easily interpretable way.
References
Ashworth M, Gulliford M (2016). Funding for general practice in the next decade: life after QOF.
British Journal of General Practice, 67(654): 4–5.
Behrendt K, Groene O (2016). Mechanisms and effects of public reporting of surgeon outcomes:
a systematic review of the literature. Health Policy, 120(10):1151–61.
Berwick DM, James B, Coye MJ (2003). Connections between quality measurement and
improvement. Medical Care, 41(1 Suppl): I30–I38.
Boyce T et al. (2010). Choosing a high-quality hospital: the role of nudges, scorecard design and
information. London: The King’s Fund.
Cacace M et al. (2011). How health systems make available information on service providers:
Experience in seven countries. RAND Europe and London School of Hygiene & Tropical
Medicine. Available at: http://www.rand.org/pubs/technical_reports/TR887.html, accessed
11 February 2019.
Campanella P et al. (2016). The impact of Public Reporting on clinical outcomes: a systematic
review and meta-analysis. BMC Health Services Research, 16:296.
Campbell S, Lester H (2010). Developing indicators and the concept of QOFability. In: Gillam
SA, Siriwardena N (eds.). The Quality and Outcomes Framework: QOF – transforming
general practice. Abingdon: Ratcliffe Publishing, 16–27.
de Cruppé W, Geraedts M (2017). Hospital choice in Germany from the patient’s perspective: a
cross-sectional study. BMC Health Services Research, 17:720.
Dixon A, Robertson R, Bal R (2010). The experience of implementing choice at point of referral:
a comparison of the Netherlands and England. Health Economics, Policy and Law, 5:295–317.
Donabedian A (1988). The quality of care: How can it be assessed? Journal of the American Medical
Association, 260:1743–8.
Emmert M, Meszmer N (2018). Eine Dekade Arztbewertungsportale in Deutschland: Eine
Zwischenbilanz zum aktuellen Entwicklungsstand. Das Gesundheitswesen, 80(10):851–8.
doi: 10.1055/s-0043-114002.
Emmert M et al. (2016). Internetportale für die Krankenhauswahl in Deutschland: Eine
leistungsbereichsspezifische Betrachtung. Das Gesundheitswesen, 78(11):721–34.
Eurostat (2019). Broadband and connectivity. Available at: http://appsso.eurostat.ec.europa.eu/
nui/show.do?dataset=isoc_bde15b_h&lang=en, accessed 9 February 2019.
Faber M et al. (2009). Public reporting in health care: how do consumers use quality-of-care
information? A systematic review. Medical Care, 47(1):1–8.
Fung CH et al. (2008). Systematic review: the evidence that publishing patient care performance
data improves quality of care. Annals of Internal Medicine, 148(2):111–23.
Patel S et al. (2018). Public Awareness, Usage, and Predictors for the Use of Doctor Rating
Websites: Cross-Sectional Study in England. Journal of Medical Internet Research, 20(7):e243.
doi:10.2196/jmir.9523.
PNE (Programma Nazionale Esiti) (2019). Website. Available at: http://95.110.213.190/
PNEedizione16_p/spe/spe_prog_reg.php?spe, accessed 10 February 2019.
Pross C et al. (2017). Health care public reporting utilization – user clusters, web trails, and usage
barriers on Germany’s public reporting portal Weisse-Liste.de. BMC Medical Informatics and
Decision Making, 17(1):48.
Rodrigues R et al. (2014). The public gets what the public wants: experiences of public reporting
in long-term care in Europe. Health Policy, 116(1):84–94.
Rothberg MB et al. (2008). Choosing the best hospital: the limitations of public quality reporting.
Health Affairs, 27(6):1680–7.
Schlesinger M (2010). Choice cuts: parsing policymakers’ pursuit of patient empowerment from
an individual perspective. Health Economics, Policy and Law, 5:365–87.
Shaller D, Kanouse DE, Schlesinger M (2014). Context-based Strategies for Engaging Consumers
with Public Reports about Health Care Providers. Medical Care Research and Review, 71(5
Suppl):17S–37S.
Shekelle PG (2009). Public performance reporting on quality information. In: Smith PC et al.
(eds). Performance measurement for health system improvement. Cambridge: Cambridge
University Press, 537–51.
Totten A et al. (2012) Closing the quality gap series: public reporting as a quality improvement
strategy (No. 208). Evidence Report. Agency for Healthcare Research and Quality (AHRQ).
Available at: https://effectivehealthcare.ahrq.gov/sites/default/files/pdf/public-reporting-quality-
improvement_research.pdf, accessed 5 December 2018.
Vallance AE et al. (2018). Effect of public reporting of surgeons’ outcomes on patient selection,
“gaming,” and mortality in colorectal cancer surgery in England: population-based cohort
study. BMJ, 361:k1581.
Van den Hurk J, Buil C, Bex P (2015). Tussentijdse evaluatie Kwaliteitsinstituut. De ontwikkeling
van regeldruk van transparantie door de komst van het Kwaliteitsinstituut. Available at:
https://www.siracompanies.com/wp-content/uploads/tussentijdse-evaluatie-kwaliteitsinstituut.
pdf, accessed 8 February 2019.
Victoor A et al. (2012). Determinants of patient choice of healthcare providers: a scoping review.
BMC Health Services Research, 12:272.
Werner RM, Asch DA (2005). The unintended consequences of publicly reporting quality
information. Journal of the American Medical Association, 293(10):1239–44.
Werner RM, Bradlow ET (2010). Public reporting on hospital process improvements is linked
to better patient outcomes. Health Affairs, 29(7):1319–24.
Zorginstituut (2019). Transparantiekalender. Available at: https://www.zorginzicht.nl/bibliotheek/
Paginas/transparantiekalender.aspx?p=1563, accessed 08 February 2019.
Chapter 14
Pay for Quality: using financial
incentives to improve quality of care
Summary
conceptual, practical and ethical reasons (Roland & Dudley, 2015; Wharam
et al., 2009). In fact, there is no universally accepted definition of P4Q, and
the term is often used interchangeably with “pay for performance” (P4P). Yet
the term P4Q is more precise, as it makes clear that payment depends on the
quality of care – and not on other dimensions of health system performance
(see also Chapter 1).
The two characteristic features of P4Q programmes are that (1) performance
of providers is monitored in relation to pre-specified quality indicators and (2)
a monetary transfer is made conditional on the (achievement or improvement
of) measured quality of care. In theory, as discussed in Chapter 3, quality can
be measured by use of structure, process or outcome indicators of quality – and
this is true also for P4Q programmes. In addition, P4Q programmes can, in
theory, aim at assuring or improving quality in different areas of care (preven-
tive, acute, chronic or long-term care), and target different types of professional
(for example, physicians, nurses or social workers) and providers (for example,
primary care practices, hospital departments or hospitals). Furthermore, quality
may be incentivized with the aim of assuring or improving quality in terms of
effectiveness, safety and/or responsiveness. Nevertheless, despite the potentially
very large variation of different characteristics of P4Q programmes, this chapter
shows that most existing programmes target a narrower set of providers
(namely primary care providers and hospitals), and that certain characteristics
are much more common in P4Q programmes in primary care than in P4Q
programmes in hospital care.
P4Q can be implemented together with other quality improvement strategies,
such as audit and feedback (see Chapter 10) and public reporting (see Chapter
13). In fact, by design, a P4Q programme includes elements of audit and
reporting, since performance has to be monitored and performance data have to
be transmitted to the programme administrators.
The chapter follows the standard structure of chapters in Part 2 of this book. The
next section explains why P4Q is expected to contribute to healthcare quality.
The following section provides an overview of a selection of existing national
and regional P4Q programmes in Europe based on a rapid review (see Box 14.1
for a summary of the methods). The next section summarizes the available evi-
dence on the effectiveness and cost-effectiveness of existing P4Q programmes in
Europe and other high-income countries based on a review of reviews, followed
by a discussion of the organizational and institutional requirements for the
implementation of P4Q programmes, before we draw together the conclusions
of the chapter for policy-makers.
Box 14.1 Review methods used to inform the content of this chapter
In order to identify existing P4Q schemes in Europe, we searched the European Observatory on
Health Systems and Policies’ Health Systems in Transition (HiT) reviews of all 28 EU countries.
In addition, we extracted information from the OECD Health Systems Characteristics Survey
database and searched the OECDiLibrary. The list of identified initiatives was complemented by
initiatives identified during a systematic review of reviews (next paragraph). Information on the
characteristics of the identified P4Q initiatives was drawn from HiT reviews, OECD reports, studies
identified during the systematic review of reviews and websites of relevant national institutions.
| Country | Programme (diffusion, participation) | Start | Care area | Quality dimension | Type of indicators | Area of activity | Type of incentive | Size | Provider level |
|---|---|---|---|---|---|---|---|---|---|
| CZ | (NW, V) | – | PC, CC | EFFS | P, S | Disease management and provision of services; IT services | B, AM | – | IND |
| DE | DMP (R, V) | 2001 | PC, CC | EFFS | P, S | Disease management and provision of services | B, AM | – | IND, ORG |
| EE | PHC QBS (NW, M since 2015) | 2006 | PC, CC | EFFS | P, S – change annually | Disease management (esp. diabetes) and provision of (preventive) services, coordination; Appropriate prescription; Paediatric care; Pregnancy and maternity care; Surgical services | B, AM, A+I | ≤5% | IND |
| FR | ENMR (NW, V) | 2009 | PC, CC | EFFS | S | Multiprofessional cooperation; Practice organization; … | B, W (since …) | 5%, 40% | ORG |
patient population, have been the target of programmes in France, Latvia and
the UK. In addition, a final outcome – i.e. reduced hospitalization in patients
with chronic diseases – is included as an indicator in P4Q programmes in Latvia
and Lithuania (Mitenbergs et al., 2012; Murauskiene et al., 2013). Furthermore,
programmes in Portugal, Sweden and the UK reward outcomes of patient sat-
isfaction or patient experience of care. The programme in Poland is the only
known programme rewarding correct and timely diagnosis and timely treatment
of cancer (OECD, 2016). Coordination efforts are rewarded in French, German,
Italian and Swedish P4Q programmes, while practice organization and imple-
mentation of information technology and provision of other computer-based
services are incentivized in at least seven countries, namely the Czech Republic,
France, the Netherlands, Portugal, Spain, Sweden and the UK (Anell, Nylinder &
Glenngård, 2012; OECD, 2016; Srivastava, Mueller & Hewlett, 2016). Finally,
some programmes also reward improved access to care (for example, the scheme
in the Czech Republic) – but this goes beyond the narrow definition of quality
adopted by this book (see Chapter 1).
In all programmes, providers are rewarded with a bonus payment in relation
to the measured quality of care – there are no penalties in any of the countries,
except in certain regions of Sweden. The bonus is usually relatively small (<5%
of total income) and is paid in relation to absolute performance. This means
that the bonus of an individual provider is independent from the performance
of other providers, except in certain regions of Sweden (Lindgren, 2014), where
relative achievement compared to peers is rewarded. Only four programmes (in
Croatia, France, Portugal and the UK) pay a bonus of more than 10%.
In Portugal bonuses are paid to physicians (up to 30% of income) and nurses
(up to 10% of income) working in organizationally mature Family Health Units
(FHU) that have gained greater autonomy from public administration (Biscaia
& Heleno, 2017). Bonuses depend on achievements related to preventive and
monitoring services in vulnerable populations (pregnant women, children,
patients with diabetes or high blood-pressure) and in women of reproductive
age (Almeida Simoes et al., 2017; Srivastava, Mueller & Hewlett, 2016).
Under the Quality and Outcomes Framework (QOF), implemented in the
UK in 2004, practices could originally receive a bonus of up to 25% of income;
in 2013 this share was reduced to 15% (Roland & Guthrie, 2016).
The bonus comprises an up-front payment at the beginning of the year and
achievement payments at the end. Points are awarded for the achievement of
each incentivized indicator, and total payment depends on the monetary value
of a QOF point, practice list size and prevalence data (NHS Digital, 2016).
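The payment logic just described (achievement points, the monetary value of a point, practice list size and disease prevalence) can be sketched as follows. This is an illustrative simplification, not the actual NHS calculation: the multiplicative adjustment factors, the function name and the example values are all assumptions.

```python
# Hedged sketch of a QOF-style achievement payment: the value of achieved
# points is scaled by the practice's relative list size and relative disease
# prevalence. The real NHS Digital rules are more involved.

def qof_achievement_payment(points_achieved: float,
                            point_value: float,
                            practice_list_size: float,
                            national_avg_list_size: float,
                            practice_prevalence: float,
                            national_avg_prevalence: float) -> float:
    """Scale the value of achieved points by relative list size and prevalence."""
    list_size_factor = practice_list_size / national_avg_list_size
    prevalence_factor = practice_prevalence / national_avg_prevalence
    return points_achieved * point_value * list_size_factor * prevalence_factor

# A practice with an average list but above-average disease prevalence
# earns proportionally more per point (hypothetical numbers):
payment = qof_achievement_payment(points_achieved=500, point_value=100.0,
                                  practice_list_size=8000, national_avg_list_size=8000,
                                  practice_prevalence=0.06, national_avg_prevalence=0.05)
print(round(payment, 2))  # 60000.0
```

The point of the sketch is only to show why two practices with identical achievement scores can receive different payments.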
Indicators and the value of QOF points differ between England, Northern
Ireland, Scotland and Wales. Initially, the scheme in England comprised 146 indicators.
Pay for Quality: using financial incentives to improve quality of care 367
Overview of P4Q programmes in hospital care:

| Country | Programme (diffusion, participation) | Start | Care area | Quality dimension | Type of indicators | Area of activity | Type of incentive | Size | Provider level |
|---|---|---|---|---|---|---|---|---|---|
| DK | Journalauditindikatoren (NW, M) | 2009 | AC | EFFS, RESP | O, P, S | Proportion of patients with a case manager; Patient satisfaction | B, PN, AM | <1% | IND (departments in four hospitals) |
| FR | IFAQ (NW, M+V) | 2012 | AC | EFFS | P, S | Disease management (AMI, acute stroke, renal failure); Prevention and management of postpartum haemorrhage; Documentation | B, RR, TOP20P (€15t–500t) | 0.4–0.6% – V; 0.2–0.5% – M | ORG (hospital – 460 in 2014–2015) |
| HR | (NW, M) | 2015 | AC | EFFS | O, P, S | All-cause mortality; % of day-hospital cases; % of treatment by reserve antibiotics in the total number of cases | W, AM (RR) | 10% | ORG (hospital) |
| IT | PAFF (Lazio, M) | 2009 | AC | EFFS | P – 1 | Hip-fracture surgery within 48 hours of admission | PN, AM | Reduced reimbursement | ORG (hospital) |
| LU | Incitants qualité (NW, V) | 1998 | AC | EFFS, SFTY | O, P, S | Change annually | B, AM | ≤2% | ORG (hospital) |
| NO | QBF (NW, V) | 2014 | AC | EFFS, RESP, SFTY | O, P – 33 | Clinical outcomes (five-year survival rates in cancer, 30-day survival for hip fracture, AMI, stroke and all admissions), management of diseases (treatment of hip fractures within 48 hours, cancer treatment initiation within 20 days, waiting time, etc.); Waiting times; Patient satisfaction | W, RR | Redistribution of NOK 500M | ORG (hospitals in four regions) |
| PT | Hospital contract (NW, M) | 2002 | AC | EFFS, SFTY | O, P – 12 | LOS; 30 days ER; Hip-fracture surgery within 48 hours of admission; Waiting times; Day case surgeries; Generics prescription; Use of Surgical Safety Checklist | B, PN, RR | ≤5% | ORG (hospital) |
| SE | (R, M in 10 out of the 21 regions) | 2004 | AC | EFFS, RESP | O, P, S | Compliance with guidelines (AMI, diabetes, hip fracture, renal failure – within 48 hours, stroke); Patient satisfaction | W, AM | 2–4% | – |
| UK (ENG) | Advancing Quality (NW, V) | 2008 | AC | EFFS, RESP | O, P – 52 | Disease management (AKI, AMI, ARLD, CABG, COPD, diabetes, dementia, HKRS, hip-fracture, heart failure, pneumonia, psychosis, sepsis, stroke); Patient-reported outcomes; Patient experience | B, AM | 2–4% | IND (clinical teams), ORG (hospital – 24) |
| UK (ENG) | CQUIN (NW, M) | 2009 | AC | EFFS, RESP, SFTY | O, P – depends on the contract agreement | PSIs and process quality; Patient experience | PN, AM | 0.5–2.5% | ORG (hospital) |
| UK (ENG) | BPT (NW, M+V) | 2010 | AC | EFFS, SFTY | P – 65 | Avoiding unnecessary admissions (day case surgeries); Delivering care in appropriate settings; Promoting provider quality accreditation; Improving quality of care | B, W, AM | <1% (5–43% of tariff) | ORG (hospital) |
| UK (ENG) | Non-payment for never events (NW, M) | 2009 | AC | SFTY | O – 14 | PSIs – reduce 14 never events | PN, AM | No reimbursement | ORG (hospital) |
| UK (ENG) | Non-payment for ER (NW, M) | 2011 | AC | SFTY | O – 1 | PSIs – 30 days ER | PN, AM | No reimbursement | ORG (hospital) |
Abbreviations:
Countries: DK = Denmark; ENG = England; FR = France; HR = Croatia; IT = Italy; LU = Luxembourg; NO = Norway; PT = Portugal; SE = Sweden; UK = United Kingdom
Programmes: BPT = Best Practice Tariffs; CQUIN = Commissioning for Quality and Innovation; ER = emergency readmission; IFAQ = Incitation financière à l’amélioration de la qualité; PAFF = Applicazione del percorso assistenziale nei pazienti ultrasessantacinquenni con fratture di femore; QBF = Kvalitetsbasert finansiering (Quality-based financing)
Diffusion/participation: NW = nationwide; R = regional; M = mandatory; V = voluntary
Quality dimension: EFFS = effectiveness; RESP = responsiveness; SFTY = safety
Care area: AC = acute care; PC = preventive care
Area of activity: AKI = Acute Kidney Injury; AMI = Acute Myocardial Infarction; ARLD = Alcohol-Related Liver Disease; CABG = Coronary Artery Bypass Graft; COPD = Chronic Obstructive Pulmonary Disease; HKRS = Hip and Knee Replacement Surgery; PSIs = patient safety indicators
Type of indicators: O = outcomes; P = processes; S = structures
Incentive structure: A = achievement; AM = absolute measure; B = bonus; I = improvement; PN = penalty; RR = relative ranking; TOP20P = reward of the upper 20% of all performers; W = withhold
Type and number of provider: IND = individual providers; ORG = organizations
Source: authors’ compilation
Most P4Q programmes for hospitals have a stronger focus on outcomes and/or
processes than P4Q programmes in primary care (where the focus is on struc-
tures). Only P4Q programmes for hospitals in Croatia, Denmark, France and
Luxembourg include indicators for structures. Final health outcomes are only
measured in Norway (for example, five-year survival rate for different cancer
types, 30-day survival rates after hospital admission for hip fracture, AMI and
stroke) and in Croatia (all-cause mortality). Patient-reported health outcomes
are measured in Advancing Quality (AQ) in the north-west of England (for
example, quality of life), while patient safety outcomes are measured in the
English “Non-payment for never-events” programme in terms of reduction of
14 never-events including wrong-side surgery, wrong implant/prosthesis, and
retained foreign object post procedure (AQuA, 2017; NHS England Patient Safety
Domain, 2015). Outcomes in terms of patient experience and patient satisfac-
tion (for example, experience or satisfaction with waiting times) are rewarded
by programmes in Denmark, Norway, Sweden and England (within AQ and
CQUIN) (Anell, 2013; AQuA, 2017; Olsen & Brandborg, 2016).
Acute myocardial infarction (AMI), acute stroke, renal failure, hip fracture, and
hip and knee replacement surgery are the main medical conditions targeted by
programmes in France, Italy, Norway, Portugal, Sweden and the UK for process
quality improvement. A few countries target additional conditions, such as
cancer (Norway), diabetes (Sweden, UK), postpartum haemorrhage (France)
and a few more in the UK. Indicators concern timely treatment (for example,
surgical treatment of hip-fracture within 48 hours of admission, initiation of
cancer treatment within 20 days), appropriate disease management (for example,
medication at admission, discharge and during the stay, disease monitoring and
diagnostic activities), and care coordination (for example, referrals to rehabilita-
tion and primary care, plans for disease management, discharge summary sent
within seven days).
Nine of the 13 identified programmes have penalties – either as a withhold of
reimbursement (for example, non-payment schemes in the UK), as an adjustment
of the usual payment depending on performance (for example, CQUIN
in the UK, programmes in Italy, Norway, Portugal and Sweden), or as a pre-
defined fine if the targets are not met (for example, Journalauditindikatoren in
Denmark) (Kristensen, Bech & Lauridsen, 2016). Some of the programmes
have both penalties and bonuses (for example, schemes in Denmark, Portugal
and CQUIN in the UK). The programmes in France and Luxembourg and the AQ
scheme in the UK reward providers with a bonus payment only. The size of bonus
payments or penalties is usually relatively small (<2% of total hospital income)
and the payment is almost always made in relation to absolute performance.
Only in France, Norway and Portugal does the payment depend on relative
performance of providers compared to their peers. In most countries the bonus
or penalty amounts to less than 2% of the total hospital budget. The scheme in
Croatia is the only one where as much as 10% of a hospital’s revenue depends
on a broader measure of performance including activity- and quality-based
indicators (MSPY, 2016).
The earliest programme, the Incitant Qualité (IQ) in Luxembourg, was established
with the aim of improving patient-centredness and raising actors' awareness
of quality of care. In the first four years the programme targeted prevention of
nosocomial infections, implementation of electronic health records, preventive
care and pain management, as well as the technical quality of mammography.
The financial incentive currently amounts to up to 2% of the annual budget.
The reward depends on the number of achieved points on a scale of 0 to 100 and
the corresponding percentage with respect to all the available points (i.e. 0% for
0–10 points, 10% for 10–20 points and so on) (Sante.lu, 2015).
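The banded scale just described can be written down directly. The function below is a minimal sketch under one stated assumption: each 10-point band maps to the corresponding multiple of 10%, with band boundaries assigned to the higher band (the source does not specify how exact boundary scores are treated).

```python
# Minimal sketch of the Luxembourg Incitant Qualité reward scale described
# above: achieved points (0-100) map to a percentage of the maximum available
# incentive in 10-point bands (0% for 0-10 points, 10% for 10-20, and so on).
# Boundary handling is our assumption, not a documented rule.

def iq_reward_share(points: int) -> int:
    """Return the percentage of the maximum incentive for a given point score."""
    if not 0 <= points <= 100:
        raise ValueError("points must be between 0 and 100")
    if points == 100:
        return 100               # top of the scale
    return (points // 10) * 10   # e.g. 0-9 -> 0%, 10-19 -> 10%, ...

print(iq_reward_share(5))    # 0
print(iq_reward_share(15))   # 10
print(iq_reward_share(100))  # 100
```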
The Norwegian Quality-Based Financing (QBF) programme was introduced in
January 2014 as a pilot in Norway's four health regions; it covers all public
secondary care providers as well as private hospitals holding a contract with a
Regional Health Authority (RHA). The rewards are paid to the four
RHAs according to their performance and the performance of hospitals in the
region measured by process, outcome and patient satisfaction indicators. While
most indicators are measured on the hospital level, the five-year survival rates for
cancer are measured at the regional level. The patient satisfaction results come
from the National Patient Satisfaction Survey. The QBF rewards four types of
performance: reporting quality, minimum performance level, best performance
and best relative improvement in performance among RHAs. The rewards are based on
achieved points for the reporting quality and the three indicator types (outcome
indicators – 50 000 points, process indicators – 20 000 points and indicators of
patient satisfaction – 30 000 points). Fulfilment of the reporting requirements
is a prerequisite for generating indicator-based points. QBF
redistributes around 500 million Norwegian kroner to RHAs according to the
weighted performance of the regions and the regions’ hospitals. However, the
RHAs have no fixed requirements regarding how to distribute the QBF rewards
among regional hospitals (Olsen & Brandborg, 2016).
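The redistribution logic described above – a fixed pool split according to weighted performance points – can be sketched as follows. The RHA names and point totals are invented for illustration; the actual QBF weighting and point rules are more detailed.

```python
# Hedged sketch of QBF-style redistribution: each RHA accumulates weighted
# points (outcome, process and patient satisfaction indicators), and the
# fixed pool of roughly NOK 500 million is split in proportion to points.
# All point totals below are hypothetical.

POOL_NOK = 500_000_000

def redistribute(points_by_rha: dict[str, float]) -> dict[str, float]:
    """Split the fixed pool proportionally to each RHA's weighted points."""
    total = sum(points_by_rha.values())
    return {rha: POOL_NOK * pts / total for rha, pts in points_by_rha.items()}

shares = redistribute({"North": 90_000, "Mid": 60_000,
                       "West": 70_000, "South-East": 80_000})
print(round(shares["North"]))  # 150000000
```

Because the pool is fixed, one region's gain is necessarily another's loss, which is the "withhold and redistribute" character noted in the table above.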
The French programme Incitation financière à l’amélioration de la qualité (IFAQ)
was introduced as an experiment in 2012 and became a nationwide programme
in 2016. The aim of the programme is to improve management of myocardial
infarction, acute stroke, renal failure, the prevention and management of post-
partum haemorrhage, documentation and efficient medication prescription.
Only the upper 20% of the providers with the highest performance receive a
bonus between 0.2 and 0.6% of total income. The total remuneration of the
1 The review by Kondo et al. (2015) evaluated effects of both primary and hospital care but the presentation
of the results was split between Table 14.3 and Table 14.5.
With the exception of the reviews by Huang et al. (2013) and Ogundeji, Bland
& Sheldon (2016), all included systematic reviews synthesized the included
studies narratively. Ogundeji, Bland & Sheldon (2016) conducted a
meta-analysis and a meta-regression, while Huang et al. (2013) only performed
a meta-analysis.
Table 14.4 Overview of systematic reviews evaluating P4Q schemes in both primary and hospital care (continued)

| Review | Review focus | Care area | Quality aim of review | No. of included studies | Country of origin | Date range | Study type | Study quality | Results |
|---|---|---|---|---|---|---|---|---|---|
| … | … schemes, with the aim to identify features associated with success in P4Q schemes | … | … | … | … | … | … | … | Odds of showing a positive effect was three times higher in schemes with larger incentives (>5% of usual budget; OR = 3.38); less rigorous evaluation designs were 24 times more likely to have positive estimates of effect than RCTs (OR = 24) |
| Barreto, 2015 | Effect of P4Q on healthcare quality | AC, CC, PC | EFFS | 25 (27) | US, UK, SE, TW, IT | 1991–2011 | C/RCT = 7, OS = 20 | NR | Less frequently reported positive effects of P4Q schemes in RCTs compared to OS, due to methodological limitations of OS and the heterogeneity (with respect to conceptual and contextual aspects) of P4Q schemes |
| Damberg et al., 2014 | Effects of P4Q on quality and resource use, efficiency and costs | AC, CC, PC | EFFS, RESP | 58 (89) | UK, US | 2001–2013 | C/RCT = 6, ITS = 2, CBA = 19, UBA = 7, QE = 11, OS = 13 | L/M | Studies with stronger methodological designs were less likely to identify significant improvements associated with the scheme – any identified effects were relatively small; studies with weaker designs more often reported a significant association between P4Q and higher levels of quality, with large effect sizes |
| Emmert et al., 2012 | Analyse the existing literature regarding economic evaluation of P4Q | CC, PC | C-EFFS | 9 | UK, US, DE | 1992–2010 | RCT = 3, CBA = 3, UBA = 3 | L | Authors concluded that, based on the full economic evaluations, P4Q efficiency could not be demonstrated; several methodological limitations undermine the importance of positive results of the partial economic evaluations; ranges of costs and consequences were typically narrow, and programmes differed considerably in design |
| Van Herck et al., 2010 | Effect of P4Q on healthcare quality | AC, CC, PC | EFFS, RESP, C-EFFS | 51 (128) | AU, DE, ES, IT, UK, US | 1992–2009 | RCT/N-RCT = 9/3, ITS = 20, QE = 37, OS/EM = 51/8 | M | Mixed results depending on the primary objectives of the scheme; the effects varied according to design choices and characteristics of the context; authors found less evidence on the impact on coordination, continuity, patient-centredness and cost-effectiveness |
| Christianson, Leatherman & Sutherland, 2007, 2008 | Effect of P4Q on healthcare quality | AC, CC, PC | EFFS, RESP | 37 | UK, US, TW, ES | 1992–2007 | RCT = 7, CBA = 7, ITS = 2, QE = 16, OS+S = 5 | NA | Mixed findings – few significant impacts reported; authors note that published research on hospital payments was too limited to draw conclusions with confidence; small, if any, effects on preventive care in RCTs; no separation of effects of concurrent QIs |
| Armour et al., 2001 | Effects of FI on physician resource use and the quality of medical care | AC, PC | EFFS | 5 (7) | UK, US | 1994–1998 | RCT = 2, OS = 3 | NA | Mixed results; authors conclude that lack of knowledge of the relationship between the MCO, the physician and the FI complicates prediction of effectiveness |
Abbreviations: NR = not reported; f.i. = from inception
Country codes: AU = Australia; CA = Canada; DE = Germany; ES = Spain; FR = France; IT = Italy; JP = Japan; KR = Republic of Korea (South); NL = The Netherlands; SE = Sweden; TR = Turkey; TW = Taiwan; UK = United Kingdom; US = United States
Review objectives: ACO = accountable care organization; BP = bundled payment; HQID = Premier Hospital Quality Incentive Demonstration project by Centers for Medicare and Medicaid Services; FI = financial incentives; P4Q = pay for quality; PCP = primary care physicians; QOF = Quality and Outcomes Framework
Included studies: CBA = controlled before-after; C-RCT = cluster randomized controlled trial; ITS = interrupted time-series; MA = meta-analysis; MR = meta-regression; N-RCT = non-randomized controlled trial; OS = observational studies; QE = quasi-experimental; RCT = randomized controlled trial; S = survey; IW = interview; UBA = uncontrolled before-after studies
Study quality: L = low; L/M = low to moderate; M = moderate; M/H = moderate to high
Results: MCO = managed care organization; OR = odds ratio; RSS = recording the smoking status; SCA = smoking cessation advice; SMD = standardized mean difference
* 9 (66) – 9 out of the 66 references evaluate effectiveness or cost-effectiveness of P4Q schemes for healthcare providers in high-income countries. Remaining studies do not meet the criteria
Table 14.5 Overview of systematic reviews evaluating P4Q schemes in hospital care

| Review | Review focus | Care area | Quality aim of review | No. of included studies | Country of origin | Date range | Study type | Study quality | Results |
|---|---|---|---|---|---|---|---|---|---|
| Milstein and Schreyoegg, 2016 | Impact of P4Q programmes in the inpatient sector | AC | EFFS | 46 | DK, CA, IT, KR, JP, TR, UK, US | 2006–2015 | QE = 30, OS = 16 | NA | Modest, short-term improvements – possibly attributed to concurrent QIs and increased awareness of data recording |
| Kondo et al., 2015 | Effect of P4Q on healthcare quality (hospital setting) | AC, CC, PC | EFFS, RESP | 7 | IT, TW, UK, US | 2010–2014 | CBA = 2, UBA = 2, OS = 3 | NR | In US: limited effect on both POCs and patient outcomes, only one OS reported positive results on POCs. In TW and IT: generally positive effects on POCs and patient outcomes. In UK (AQ): slowing down improvements, … |
failure and pneumonia (Damberg et al., 2014). For the remaining two studies,
reviews reported detrimental effects on acute emergency department visits related
to asthma, diabetes and heart failure (Kondo et al., 2015).
Effects of P4Q on responsiveness of care are reported in five reviews. On the
basis of six observational studies of patient experience under the QOF, Gillam,
Siriwardena & Steel (2012) found that patients reported no statistically significant
changes in communication, nursing care, coordination or overall satisfaction
between 2003 and 2007. However, the same six studies found that timely access
to chronic care worsened in terms of continuity of care and visits to the usual
physician, though urgent appointments showed a statistically significant
improvement. In general, and especially for older patients, access to care
under the QOF worsened. Christianson, Leatherman & Sutherland (2007)
and van Herck et al. (2010) reported for several international P4Q programmes
that patient satisfaction with care did not change. Two other reviews highlighted
that positive effects on patient experience reported by original studies could not
be clearly attributed to a P4Q programme, either because of structural changes
implemented as part of the programme (for example, implementation of elec-
tronic reminder and prescribing systems) or because other quality improvement
interventions were implemented simultaneously with the P4Q programme
(Damberg et al., 2014; Kondo et al., 2015).
Finally, one cluster-RCT identified by Ivers et al. (2012) evaluated the effects
of financial incentives compared to audit and feedback on test-ordering. The
financial incentives turned out to be less effective than audit and feedback in
reducing test ordering.
between HQID hospitals and the comparison group in mortality rates associated
with AMI, CHF and pneumonia.
Patient safety or utilization outcomes were evaluated by seven studies included
in six reviews with respect to readmissions, length-of-stay (LOS), surgery-related
complications or infections, blood catheter-associated infections and other
hospital-acquired conditions (HACs) in seven programmes – Advancing Quality;
Hawaii Medical Service Association Hospital Pay for Performance (HMSA-P4P);
HQID; HVBP; Non-payment for HACs by the US Centers for Medicare and
Medicaid Services; Geisinger ProvenCareSM integrated delivery system; and
MassHealth P4Q. Positive and statistically significant effects on preventable
conditions or LOS were only identified by two studies in HMSA-P4P and in
Non-payment for HACs, while in four studies positive effects were small and
statistically not significant (Christianson, Leatherman & Sutherland, 2008;
Damberg et al., 2014; Korenstein et al., 2016; Mehrotra et al., 2009; Milstein
& Schreyoegg, 2016).
Responsiveness in terms of patient experience was evaluated by four studies in
five reviews. The reviews by Kondo et al. (2015) and Milstein & Schreyoegg
(2016) did not find evidence for improved patient experience of care after the
introduction of HVBP but rather found a statistically non-significant worsening
of care. Patient satisfaction with inpatient care in HMSA-P4P hospitals improved
by a few percentage points. However, the evaluation did not involve a control
group and the statistical significance was not calculated either (Christianson,
Leatherman & Sutherland, 2008; Damberg et al., 2014; Mehrotra et al., 2009).
14.4.3 Cost-effectiveness
Emmert et al. (2012) is the only review that examined economic evaluations of
P4Q programmes. It identified only three full economic evaluations. Six stud-
ies were partial economic evaluations, which evaluated costs and consequences
separately or assessed only the impact on costs. The reviews by Christianson,
Leatherman & Sutherland (2007), van Herck et al. (2010), Gillam, Siriwardena
& Steel (2012), Hamilton et al. (2013) and Kondo et al. (2015) identified three
other studies with partial economic evaluations.
All full economic evaluations included in the review by Emmert et al. (2012)
reported positive cost-effectiveness. All three studies evaluated the effects of
financial incentives on processes of care in primary or hospital care in the US.
The RCT by Kouides et al. (1998) evaluated effects of additional bonuses on
influenza immunization coverage. The study found additional costs of $4 362
and $1 443 for additional immunizations. Overall, in the intervention group
median improvement of coverage was 10.3% compared to the pre-intervention
period, while in the control group median improvement was only 3.5%. The
RCT by An et al. (2008) evaluated effects of incentives on referrals and
enrolment in a quit smoking programme. The programme resulted in 1 483 total
referrals at a total cost of $95 733 ($64 per referral) in the intervention group
and 441 total referrals at a total cost of $8 937 ($20 per referral) in the control
group. The referrals in the intervention group resulted in 289 additional enrolees
in the quit smoking programme, at a cost of $300 per additional enrolee. The study by Nahra
et al. (2006) evaluated the hospital BCBS-P4P programme, focusing on effects
for AMI and CHF patients, and estimated costs per QALY gained of between
$12 967 and $30 081.
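The per-referral and per-enrolee figures cited for An et al. (2008) can be reproduced from the reported totals; the snippet below is only that arithmetic, with variable names of our own choosing rather than anything from the study itself.

```python
# Reproducing the incremental figures cited from An et al. (2008):
# totals as reported in the review, variable names are ours.

intervention_cost, intervention_referrals = 95_733, 1_483
control_cost, control_referrals = 8_937, 441
additional_enrolees = 289

cost_per_referral_intervention = intervention_cost / intervention_referrals   # ~64.6
cost_per_referral_control = control_cost / control_referrals                  # ~20.3
incremental_cost_per_enrolee = (intervention_cost - control_cost) / additional_enrolees

print(round(incremental_cost_per_enrolee))  # 300, matching the reported $300 per additional enrolee
```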
Most partial economic evaluations also reported positive results (Emmert et al.,
2012; van Herck et al., 2010). Only one cost-effectiveness study conducted by
Salize et al. (2009) evaluated effects using a health outcome, i.e. smoking
abstinence. The RCT compared three arms with different combinations
of interventions – physician training, financial incentive and free medication
prescription – to usual care. In contrast to the two arms containing free
medication prescription, the combination of physician training and financial
incentive turned out not to be cost-effective when comparing the intervention
costs per smoking-abstinent patient to usual treatment. Even the third arm,
which contained training, free medication prescription and financial incentive,
did not dominate the arm containing only training and free medication prescription
(Hamilton et al., 2013; Scott et al., 2011; van Herck et al., 2010).
In general, the economic evaluations included in identified reviews have a number
of weaknesses: included analyses predominantly considered process-of-care
indicators on the effects side and costs from the third-party-payer’s perspective
on the costs side. Costs from the provider’s perspective, such as administrative
costs or costs for participating in other quality improvement initiatives, were
not taken into account, and the costs were rarely described in detail (Emmert et
al., 2012). In addition, designs of the included analyses have several limitations
(for example, lack of separation of the effects generated by public reporting,
small sample sizes, unit-of-analysis errors, etc.), which restrict the reliability of
their conclusions on cost-effectiveness (Emmert et al., 2012; Mehrotra et al.,
2009). Furthermore, a number of evaluated programmes (for example, HQID,
QOF and HVBP) have been found to be ineffective in the long term (Gillam,
Siriwardena & Steel, 2012; Houle et al., 2012; Kondo et al., 2015). Therefore,
cost-effectiveness, if any, could only have been achieved in a programme's
short term, while the additional (administrative and reward) costs per unit of
health gain remained below a pre-specified threshold. For many P4Q programmes,
reviews found no positive effects at all, which means that these programmes
cannot have been cost-effective, since they nonetheless required additional
financial resources.
would be desirable also from a societal perspective since they may prevent costly
disease-related complications in the long term.
Another challenge for outcome-based metrics is that some aspects of high-quality
care may take a long time to materialize, rendering them infeasible as a basis for
measurement and reward. Therefore, although they offer the most direct link to
desired objectives, the outcome-focused approach towards P4Q is likely to have
limited applicability in practice.
When deciding on the size of the financial incentive, the expected incremental
costs of the quality improvement and the share of the provider's total
income affected should be taken into account. If incentives are too small, they are
likely to be ineffective, while very large incentives are unlikely to be cost-effective.
For instance, Ogundeji, Bland & Sheldon (2016) showed in a meta-regression
that the positive effects of a programme tend to be higher in programmes apply-
ing larger incentives (≥ 5% of annual income).
The decisions on the structure of the financial incentive (for example, reward vs.
penalty), the source of the payment (for example, “old” money – withholding part
of the annual payment at the start of a period and redistributing it according to
performance at the end of the period, or “new” money – payment of additional
bonuses), the payment basis (for example, absolute vs. relative measurement,
attainment vs. improvement) and performance targets (for example, single ele-
ments vs. composite score, availability of a threshold) influence the reaction of
providers to the financial incentive. Each of these elements considered individu-
ally has various advantages and disadvantages.
Rewards based on absolute performance measures are easy to administer and they
provide some certainty of payment to providers. However, evidence from many
programmes shows that absolute performance rewards often do not lead to the desired
effects in the long term. The predetermined absolute performance thresholds
hamper continuous incentives for further improvement of quality in healthcare
if the targets are not revised on a regular basis (Langdown & Peckham, 2014).
There are also numerous negative aspects of penalties and relative performance
measurements (Arnold, 2017; Conrad, 2015). They may lead to discrimina-
tion and unfairness and result in low acceptance and negative (unintended)
behavioural reactions of providers or professionals. However, relative measures
can incentivize continuous improvement and penalties usually have a stronger
influence on performance due to the loss aversion of individuals (Emanuel et
al., 2016). Individuals will make more effort to protect their revenues rather
than to earn an uncertain reward. Furthermore, redistribution of “old” money
can be perceived as unfair by providers (Milstein & Schreyoegg, 2016), which
may again result in negative reactions (Eijkenaar, 2013; Kahneman, Knetsch &
Thaler, 1986).
There is no clear evidence that would support the superiority of one incentive
structure over another. However, blended payment systems, combining vari-
ous characteristics, can reduce the unintended consequences. For example, the
combination of “old” and “new” money, as well as of rewards, penalties and
relative performance measures, can exploit the advantages of these elements,
while avoiding some of the disadvantages. Loss aversion of individuals can be
exploited by rewarding P4Q participants with part of a quality-related payment
at the beginning of a period, which will be adjusted for performance at the end
of the period. Another approach can be to fine providers who are not achieving
quality aims, while a bonus is paid if further performance goals are reached.
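A blended design of the kind described above can be sketched as a simple function: part of the quality-related payment is withheld up front and returned according to performance ("old" money, exploiting loss aversion), and a "new"-money bonus is added once a further target is reached. This is a hypothetical illustration, not a scheme from the chapter; the rates, threshold and function name are all assumptions.

```python
# Hedged sketch of a blended quality incentive: a performance-adjusted
# withhold plus a threshold-triggered bonus. All parameter values are
# invented for illustration.

def blended_quality_payment(base_payment: float,
                            score: float,              # performance score in [0, 1]
                            withhold_rate: float = 0.02,
                            bonus_rate: float = 0.01,
                            bonus_threshold: float = 0.9) -> float:
    """Return the quality-related payment for one period."""
    withheld = base_payment * withhold_rate
    returned = withheld * score                        # performance-adjusted withhold
    bonus = base_payment * bonus_rate if score >= bonus_threshold else 0.0
    return returned + bonus

# A provider scoring 0.95 recovers most of the withhold and earns the bonus:
print(round(blended_quality_payment(1_000_000, 0.95)))  # 29000
```

The withhold puts part of the provider's existing revenue at risk, while the bonus preserves an upside for providers already near the target – combining the two behavioural levers discussed in the text.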
In general, the emphasis of P4Q programmes should be on rewarding improvement
of individual provider performance relative to the previous period.
Highly competitive approaches that reward only the top 20%
of providers with the highest performance or the largest improvement should
rather be avoided because of the aforementioned potential negative consequences.
However, whichever choices are made, it is important that they are codified very
clearly, with statements of entitlements, conditions, time horizons and criteria
for receipt of funds.
Payers should assume that all participating providers can achieve the pre-specified targets in a
short period of time and calculate funds accordingly. Furthermore, it is impor-
tant that all relevant aspects of quality are monitored – not only incentivized
aspects – even if they are not included in the P4Q scheme.
Finally, implemented programmes have to be monitored and evaluated on a
regular basis. A number of recommendations for P4Q evaluations emerge from
the available literature (for example, Damberg et al., 2014; Kondo et al., 2015;
Mehrotra et al., 2009; Milstein & Schreyoegg, 2016). Evaluations should usually
be planned before a P4Q programme starts and an appropriate evaluation design
selected, depending on the number of participating providers and the time horizon
of the programme. For programmes with high participation rates (for example,
almost all hospitals), it is appropriate to apply an interrupted time-series design
when assessing programme effectiveness. In doing so, performance and quality
data should be collected for several years before and after the implementation of
the programme. However, because studies without a comparison group systemati-
cally over-estimate the positive effects of P4Q programmes (Ogundeji, Bland &
Sheldon, 2016), evaluation designs should, ideally, contain a comparison group,
adjust for baseline performance of participating and non-participating provid-
ers, and account for secular trends. Furthermore, an evaluation should account
for the implementation of concurrent quality improvement interventions, such
as audit and feedback and public reporting, and also for frequent changes in
programme design.
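The recommended evaluation design can be made concrete with a small simulation. The sketch below is purely illustrative (simulated data, an assumed 2-point programme effect, invented variable names; it reproduces no study cited here): it layers a difference-in-differences comparison on an interrupted time series, so the group-by-period interaction estimates the programme effect net of baseline differences and the shared secular trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated quarterly quality scores for hypothetical participating ("p4q")
# and non-participating hospitals, 16 quarters before and after programme
# start (t = 0). The 2-point effect and all names are assumptions.
t = np.tile(np.arange(-16, 16), 2)        # time index, both groups stacked
g = np.repeat([1, 0], 32)                 # 1 = P4Q participant, 0 = comparison
post = (t >= 0).astype(float)             # 1 = after programme start
score = 70 + 0.1 * t + 2.0 * g * post + rng.normal(0, 0.5, t.size)

# Difference-in-differences regression: score ~ 1 + t + g + post + g*post.
# The g*post coefficient is the programme effect, adjusted for the baseline
# difference between groups and the secular trend shared by both.
X = np.column_stack([np.ones_like(post), t, g, post, g * post])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(round(beta[-1], 2))  # estimated programme effect, close to the true 2.0
```

In practice the same regression would also adjust for provider-level baseline performance and for concurrent interventions such as audit and feedback or public reporting, as the paragraph above notes.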
What to incentivize?
• Performance is ideally defined broadly, provided that the set of measures remains
comprehensible
• Concerns that P4Q encourages risk selection and “teaching to the test” should not be
dismissed.
• P4Q incentives should be aligned with professional norms and values; it is vital that
providers are actively involved in programme design and in the selection of performance
measures
Whom to incentivize?
• On balance, group incentives are preferred over individual incentives, mainly because
performance profiles are then more likely to be reliable
• Individual or small-group incentives, as well as using measures with small sample size,
will become increasingly feasible as methods for constructing composite scores evolve
• Caution is warranted in applying hybrid schemes (for example, using both group
and individual incentives for a team with high interdependence among team members)
• Participation is ideally voluntary provided that broad participation among eligible providers
can be realized
How to incentivize?
• Involving all relevant stakeholders, including providers, patients and payers, right from
the start of the programme development is key to its success
• Monitoring, structured feedback and sophisticated information technology will remain
important in preventing undesired provider behaviour
References
Achat H, McIntyre P, Burgess M (1999). Health care incentives in immunisation. Australian and
New Zealand Journal of Public Health, 23(3):285.
Almeida Simoes J de et al. (2017). Portugal: Health System Review. Health Systems in Transition,
19(2).
An LC et al. (2008). A randomized trial of a pay-for-performance program targeting clinician
referral to a state tobacco quitline. Archives of Internal Medicine, 168(18):1993.
Anell A (2013). Vårdval i specialistvården: Utveckling och utmaningar. Stockholm: Sveriges
kommuner och landsting.
Anell A, Nylinder P, Glenngård AH (2012). Vårdval i primärvården: Jämförelse av uppdrag,
ersättningsprinciper och kostnadsansvar. Stockholm: Sveriges kommuner och landsting.
AQuA (2017). About Us: How does Advancing Quality measure performance? Available at:
http://www.advancingqualitynw.nhs.uk/about-us/, accessed 26 October 2017.
Armour BS et al. (2001). The effect of explicit financial incentives on physician behavior. Archives
of Internal Medicine, 161(10):1261.
Arnold DR (2017). Countervailing incentives in value-based payment. Healthcare (Amsterdam,
Netherlands), 5(3):125.
Barreto JO (2015). [Pay-for-performance in health care services: a review of the best evidence
available]. Cien Saude Colet, 20(5):1497.
Biscaia AR, Heleno LCV (2017). A Reforma dos Cuidados de Saúde Primários em Portugal:
portuguesa, moderna e inovadora. Ciencia & saude coletiva, 22(3):701.
Busse R, Blümel M (2015). Payment systems to improve quality, efficiency and care coordination
for chronically ill patients – a framework and country examples. In: Mas N, Wisbaum W (eds.).
The “Triple Aim” for the future of health care. Madrid: Spanish Savings Banks Foundation
(FUNCAS).
Cashin C (2014). Paying for performance in health care: Implications for health system performance
and accountability. European Observatory on Health Systems and Policies series. Maidenhead,
England: Open University Press, McGraw-Hill Education.
NHS England Patient Safety Domain (2015). Revised Never Events Policy and Framework.
OECD (2016). OECD Health Systems Characteristics Survey: Section 10: Pay-for-performance
and other financial incentives for providers. Available at: https://qdd.oecd.org/subject.
aspx?Subject=hsc.
Ogundeji YK, Bland JM, Sheldon TA (2016). The effectiveness of payment for performance
in health care: a meta-analysis and exploration of variation in outcomes. Health Policy,
120(10):1141–50.
Olsen CB, Brandborg G (2016). Quality Based Financing in Norway: Country Background Note:
Norway. Norwegian Directorate of Health.
Petersen LA et al. (2006). Does Pay-for-Performance Improve the Quality of Health Care? Annals
of Internal Medicine, 145(4):265.
Rashidian A et al. (2015). Pharmaceutical policies: effects of financial incentives for prescribers.
Cochrane Database of Systematic Reviews, (8):CD006731.
Robinson JC (2001). Theory and Practice in the Design of Physician Payment Incentives. Milbank
Quarterly, 79(2):149.
Roland M, Dudley RA (2015). How Financial and Reputational Incentives Can Be Used to
Improve Medical Care. Health Services Research, 50(Suppl 2):2090.
Roland M, Guthrie B (2016). Quality and Outcomes Framework: what have we learnt? BMJ
(Clinical Research edition), 354:i4060.
Rosenthal MB et al. (2004). Paying For Quality: Providers’ Incentives For Quality Improvement.
Health Affairs, 23(2):127.
Roski J et al. (2003). The impact of financial incentives and a patient registry on preventive care
quality: increasing provider adherence to evidence-based smoking cessation practice guidelines.
Preventive Medicine, 36(3):291.
Rynes SL, Gerhart B, Parks L (2005). Personnel psychology: performance evaluation and pay for
performance. Annual Review of Psychology, 56:571.
Sabatino SA et al. (2008). Interventions to increase recommendation and delivery of screening for
breast, cervical, and colorectal cancers by healthcare providers systematic reviews of provider
assessment and feedback and provider incentives. American Journal of Preventive Medicine,
35(1 Suppl):S67–74.
Salize HJ et al. (2009). Cost-effective primary care-based strategies to improve smoking cessation:
more value for money. Archives of Internal Medicine, 169(3):230–5 [discussion 235–6].
Sante.lu (2015). Incitants Qualité. Available at: http://www.sante.public.lu/fr/politique-sante/
systeme/financement/budget-hospitalier/incitants-qualite/index.html, accessed 4 July 2017.
Scott A et al. (2011). The effect of financial incentives on the quality of health care provided by
primary care physicians. Cochrane Database of Systematic Reviews, (9):CD008451.
Sorbero ME et al. (2006). Assessment of Pay-for-Performance Options for Medicare Physician
Services: Final Report. Washington, DC: RAND Corporation.
Srivastava D, Mueller M, Hewlett E (2016). Better Ways to Pay for Health Care. Paris: OECD
Publishing.
Town R et al. (2005). Economic incentives and physicians’ delivery of preventive care: a systematic
review. American Journal of Preventive Medicine, 28(2):234.
van Herck P et al. (2010). Systematic review: effects, design choices, and context of pay-for-
performance in health care. BMC Health Services Research, 10:247.
Walker S et al. (2010). Value for money and the Quality and Outcomes Framework in primary
care in the UK NHS. British Journal of General Practice, 60(574):e213–20.
Wharam JF et al. (2009). High quality care and ethical pay-for-performance: a Society of General
Internal Medicine policy analysis. Journal of General Internal Medicine, 24(7):854.
Part III
Chapter 15
Assuring and improving quality
of care in Europe: conclusions
and recommendations
15.1 Introduction
Part I of this book started with the observation that quality is one of the most
often-quoted principles of health policy – but that the understanding of the term
and what it encompasses varies. Therefore, Part I provided a definition of the
concept of quality (Chapter 1) before developing a comprehensive framework
for understanding and describing the characteristic features of different quality
strategies in Europe (Chapter 2). This was followed by an introduction to the
conceptual and methodological complexities of measuring the quality of care
(Chapter 3) and an analysis of the influence of international and European actors
in governing and guiding the development of quality assurance and improvement
strategies in Europe (Chapter 4).
Part II of this book provided an overview on the implementation of ten selected
quality strategies across European countries and assessed the evidence on their
effectiveness and, where possible, cost-effectiveness, before distilling recommen-
dations that are useful for policy-makers interested in prioritizing, developing
and implementing strategies to assure and improve the quality of care. The term
“strategy” is used here in a relatively narrow sense to describe certain activities
geared towards achieving selected quality assurance or improvement goals by
targeting specific health system actors (for example, health professionals, provider
organizations or patients). Elsewhere, these activities may be described as “qual-
ity interventions”, “quality initiatives”, or “quality improvement tools” (WHO,
2018). Together, these two parts of the book illustrate the high level of interest
and activity in the field of quality assurance and improvement – and at the same
time the lack of consensus about basic definitions and concepts, as well as the
limitations of the evidence about how best to assure and improve quality of care.
This chapter draws together the main findings from Parts I and II in order to
address the main question reflected in the title of this book, namely what we
know about the characteristics, the effectiveness and the implementation of
different quality strategies in Europe, and to make recommendations for policy-
makers interested in comprehensive approaches for improving quality of care in
their countries. The next section summarizes the main lessons from Part I of the
book, clarifying key terms and concepts that enable a systematic assessment of
the characteristics, the effectiveness and the implementation of the ten selected
strategies discussed in the subsequent section. The final section concludes with
policy recommendations on how to bring together the individual strategies into
a coherent approach for assuring and improving the quality of care.
Quality measurement serves two main purposes: (1) quality assurance, i.e. using reliable
quality information for external accountability and verification, and (2) quality
improvement, i.e. using and interpreting
information about quality differences to motivate change in provider behaviour.
Depending on the purpose, quality measurement systems face different chal-
lenges with regard to indicators, data sources and the level of precision required.
More generally, the development of quality measurement systems should always
take into account the purpose of measurement and different stakeholders’ needs.
Depending on the purpose and the concerned stakeholders, it may be useful to
focus on indicators of structures (for example, for governments concerned about
the availability of appropriate facilities, technologies or personnel), processes (for
example, for professionals interested in quality improvement), or outcomes (for
example, for citizens or policy-makers interested in international comparisons).
Also, the appropriate level of aggregation of indicators into summary (composite)
measures depends on the intended users of the information. For example, pro-
fessionals will be interested mostly in detailed process indicators, which enable
the identification of areas for improvement, while policy-makers and patients
may be more interested in composite measures that help identify good (or best)
providers. However, the wide range of methodological choices that determine
the results of composite measures creates uncertainty about their reliability
(see Chapter 3). Therefore, it is useful to present composite measures in a
way that enables the user to disaggregate the information and see the individual
indicators that went into the construction of the composite. Furthermore, meth-
ods should always be presented transparently to allow users to assess the quality
of indicators and data sources (for example, using the criteria listed in Chapter
3), as well as the methods of measurement.
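To illustrate the point about disaggregation, here is a minimal sketch (indicator names, values and weights are invented for illustration) of a composite score presented alongside its component indicators:

```python
# Illustrative composite quality score with drill-down into its components;
# all indicator names, values and weights are hypothetical.
indicators = {
    "aspirin_at_arrival": 0.95,          # process indicator values in [0, 1]
    "door_to_needle_under_30min": 0.70,
    "discharge_instructions": 0.85,
}
weights = {
    "aspirin_at_arrival": 0.5,
    "door_to_needle_under_30min": 0.3,
    "discharge_instructions": 0.2,
}

# Weighted-average composite: one of many possible aggregation choices.
composite = sum(indicators[k] * weights[k] for k in indicators)

# A transparent report shows the composite AND each underlying indicator,
# so users can disaggregate the summary measure.
print(f"composite: {composite:.3f}")
for name, value in indicators.items():
    print(f"  {name}: {value:.2f} (weight {weights[name]})")
```

Publishing the weights and per-indicator values alongside the composite lets users judge how sensitive the summary score is to the methodological choices behind it.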
Existing conceptual frameworks and available approaches for measurement and
assessment, as well as national policies for quality assurance and improvement,
have been strongly influenced by WHO, the EU and other international actors.
The influence of these international actors on quality policies and strategies has
been explored in more detail in Chapter 4. The international influence is evident
through a range of different (legally binding or non-binding) mechanisms in
four main areas:
discussions in Chapter 1 and the rationale behind the first two lenses of the
five-lens framework in Chapter 2). More recently, the EU has also increased its
role in monitoring quality as part of the broader monitoring process of financial
sustainability, which has led to increasing activity in health system performance
assessment, as illustrated (amongst others) by the Expert Group on Health System
Performance Assessment (2016) report on quality.
• The first group consists of strategies that are mostly concerned with
healthcare structures and inputs, mainly by setting standards: the
regulation of health professionals (Chapter 5), of technologies through
Health Technology Assessment (Chapter 6), and of healthcare facilities
(Chapter 7). In addition, this group includes external institutional
strategies, such as accreditation, certification and supervision (Chapter
8). However, these strategies mark the transition towards the second
group of strategies because they set standards also for processes and
they are also concerned – to a considerable degree – with monitoring
compliance with these standards in order to assure improvements.
• The second group consists of strategies that steer and monitor quality
of healthcare processes. This group includes two strategies, which are
focused on setting standards for processes, i.e. clinical guidelines for
professionals (Chapter 9) and clinical pathways for provider institutions
(Chapter 12), and two strategies that focus on monitoring processes and
assuring improvements, i.e. audit and feedback directed primarily at
professionals (Chapter 10), and patient safety strategies (Chapter 11).
• The third group consists of two strategies that are concerned with
leveraging processes and outcomes; i.e. they use information about
quality of processes and outcomes to assure improvements in the
quality of care. This group includes public reporting (Chapter 13)
and pay-for-quality (Chapter 14).
[Fig. 15.1: the quality strategies of Chapters 5–14 mapped onto the framework; recovered labels include Ch 5: Professionals, Ch 11: Patient safety strategies, Ch 14: Pay-for-quality]
Additional strategies are available (see Chapter 1) which could be implemented, such as training and
supervision of the workforce.
The next subsections discuss the main findings of Chapters 5 to 14 separately
for the three groups of strategies.
setting standards for healthcare structures, which is related to the fact that EU
regulations play only a minor role with regard to steering and monitoring the
quality of healthcare processes. Germany, the Netherlands and the UK are often
amongst the countries with relatively strong programmes. There is
much more research available on strategies concerned with healthcare processes
than on strategies concerned with structures, but results are mixed for clinical
guidelines (Chapter 9) and several patient safety strategies (Chapter 11). The
most reliable evidence is available for the effectiveness of audit and feedback
(Chapter 10) and clinical pathways (Chapter 12), although effects were often
relatively small and mostly related to process quality.
As discussed in Chapter 9, clinical guidelines inform clinical practice to facilitate
evidence-based healthcare processes. However, as guidelines need to be adapted
to the national context, they cannot be based exclusively on evidence from the
global scientific literature but have to consider the regulatory context as well
as empirical data, for example, about the availability of equipment and phar-
maceuticals in the specific country and context. Clinical guidelines are being
used in many countries as a quality strategy, albeit usually without a legal basis.
Country practices in Europe are diverse, ranging from well established, broad
and prolific systems to nascent utilization with cross-country borrowing. The
rigour of guideline development, mode of implementation and evaluation of
impact can be improved in many settings to enable their goal of achieving “best
practice” in healthcare.
There is mixed evidence about the effectiveness of guidelines at improving patient
outcomes but a clear link has been established between effects and the modalities
of guideline implementation. In particular, user experience should be taken into
account, which is already attempted to varying degrees by means of stakeholder
involvement in guideline development. There is currently no discussion about
a concerted centralization of the dissemination, let alone the development, of
guidelines at EU level (although umbrella organizations of different professional
associations produce European guidelines for their specialties). Persisting
challenges for guideline implementation include keeping guidelines up to date
and incorporating new evidence; another issue that requires sufficient
consideration is multimorbidity, which will need to be better addressed in
guideline development.
Audit and feedback strategies may support the implementation of clinical
guidelines by monitoring compliance, and they may provide professionals
with information about their performance and the existence of best practices
(see Chapter 10). An audit is a systematic review of professional performance,
based on explicit criteria or standards. Often audits are based on a broad set of
indicators, including mostly process indicators (but sometimes also indicators
of structures and outcomes) that are mostly focused on the effectiveness and/or safety of care.
Evidence on P4Q is less conclusive regarding health outcomes and patient safety.
Patient experience and satisfaction were rarely evaluated and usually did not improve. It is clear that P4Q
programmes are technically and politically difficult to implement. They seem
to be more effective when the focus of a scheme is on areas of quality where
change is needed and if the scheme embraces a more comprehensive approach,
covering many different areas of care. Again, all relevant stakeholders should be
involved in the process of scheme development and schemes should reinforce
professional norms and beliefs. The contents and structure of the scheme have
to be regularly reviewed and updated, and adverse behavioural responses need
to be monitored in order to avoid unintended consequences. More evidence is
needed on the effectiveness of P4Q schemes compared with other quality
improvement initiatives.
In summary, the preceding chapters showed that European countries have implemented several of those strategies, and that although several of them are
effective (primarily regarding process indicators), the size of these effects is gen-
erally modest and data on relative effectiveness and cost-effectiveness are often
inconclusive or unavailable. What is more, while the volume of evidence on some
of the discussed strategies is considerable, the overall quality of evidence is low.
In general, political activities related to the quality strategies discussed in this
book are increasing, albeit with unsurprising variability across countries. At first
sight, this increase in activity might be surprising given the limitations of the
available evidence. However, from a policy-maker’s perspective, implementa-
tion of quality strategies may be warranted even if evidence is limited because
several of the strategies respond to important needs of patients and politicians.
For example, external institutional strategies may assure the population (and the
politicians) that quality is under control. Public reporting responds to the desire
of patients to have information about the quality of care (even if they do not
use it) and to increase transparency and accountability of providers. Similarly,
the need for continuous improvement in professional practice may warrant the
implementation of strategies such as audit and feedback.
Despite the increased political attention, quality strategies are often not coordi-
nated or placed within a coherent policy or overall strategic framework. Thus, from
a policy-maker’s perspective, the goal becomes understanding the potential for
best practice, the possibility for synergies between strategies and the meaningful-
ness of investing in different elements given existing practices and identified areas
where action is needed. Fig. 15.1 in this chapter provides a visual basis for these
considerations. The Handbook for National Quality Policy and Strategy provides
guidance for the development of a national quality policy and strategy (WHO,
2018). It highlights the importance of defining national priorities, developing a
local definition of quality, identifying relevant stakeholders, analysing the situ-
ation to identify care areas in need of improvement, assessing governance and
organizational structure, and selecting quality improvement interventions (or
strategies, according to the terminology of this book). In addition, it highlights
the importance of improving the health information system to enable reliable
measurement of selected quality indicators.
Indeed, the implementation of individual quality strategies is not enough to assure
the provision of high-quality care in a country. Instead, a holistic approach – or
an “overall strategy” – is required, encompassing a number of strategies that are
aligned to achieve optimal outcomes of care. Ideally, the selection and implemen-
tation of different strategies should be focused on those aspects of the healthcare
system that are in greatest need of improvement – also because evidence has
shown that several of the strategies are most effective if focused on care areas or
providers that are currently providing relatively poor care. Furthermore, regular
References
Donabedian A (1966). Evaluating the quality of medical care. Milbank Quarterly, 44(3, Pt.
2):166–203.
Donabedian A (1980). The Definition of Quality and Approaches to Its Assessment. Vol
1. Explorations in Quality Assessment and Monitoring. Ann Arbor, Michigan: Health
Administration Press.
Expert Group on Health Systems Performance Assessment (2016). So What? Strategies across
Europe to assess quality of care. Available at: https://ec.europa.eu/health/sites/health/files/
systems_performance_assessment/docs/sowhat_en.pdf, accessed 14 April 2019.
Juran JM, Godfrey A (1999). Juran’s Quality Handbook. New York: McGraw-Hill.
Slawomirski L, Auraaen A, Klazinga N (2017). The economics of patient safety. Paris: Organisation
for Economic Co-operation and Development.
WHO (2007). Everybody’s business: strengthening health systems to improve health outcomes:
WHO’s framework for action. Geneva: World Health Organization.
WHO (2018). Handbook for national quality policy and strategy – a practical approach for
developing policy and strategy to improve quality of care. Geneva: World Health Organization.
Improving healthcare quality in Europe: Characteristics, effectiveness and implementation of different strategies

Health Policy Series No. 53

Edited by Reinhard Busse, Niek Klazinga, Dimitra Panteli, Wilm Quentin

Quality improvement initiatives take many forms, from the creation of standards for health professionals, health technologies and health facilities, to audit and feedback, and from fostering a patient safety culture to public reporting and paying for quality. For policy-makers who struggle to decide which initiatives to prioritize for investment, understanding the potential of different quality strategies in their unique settings is key.

This volume, developed by the Observatory together with OECD, provides an overall conceptual framework for understanding and applying strategies aimed at improving quality of care. Crucially, it summarizes available evidence on different quality strategies and provides recommendations for their implementation. This book is intended to help policy-makers to understand concepts of quality and to support them to evaluate single strategies and combinations of strategies.

Quality of care is a political priority and an important contributor to population health. This book acknowledges that “quality of care” is a broadly defined concept, and that it is often unclear how quality improvement strategies fit within a health system, and what their particular contribution can be. This volume elucidates the concepts behind multiple elements of quality in healthcare policy (including definitions of quality, its dimensions, related activities and targets), quality measurement and governance, and situates it all in the wider context of health systems research. By so doing, this book is designed to help policy-makers prioritize and align different quality initiatives and to achieve a comprehensive approach to quality improvement.
The editors
Reinhard Busse, Professor, Head of the Department of Health Care Management,
Berlin University of Technology, and European Observatory on Health Systems and Policies
Niek Klazinga, OECD Health Care Quality Indicator Programme, Organisation for Economic
Co-operation and Development, and Professor of Social Medicine, Academic Medical Centre,
University of Amsterdam