Barriers to learning from
incidents and accidents
ESReDA guidelines
ESReDA Project Group Dynamic Learning as the Follow-up from Accident Investigations
Copyright ESReDA
Published 2015 at the ESReDA website: http://www.esreda.org/
Executive summary
This document provides an overview of knowledge concerning barriers to
learning from incidents and accidents. It focuses on learning from accident
investigations, public inquiries and operational experience feedback, in
industrial sectors that are exposed to major accident hazards. The document
discusses learning at organizational, cross-organizational and societal levels
(impact on regulations and standards). From an operational standpoint, the
document aims to help practitioners to identify opportunities for improving
their event learning process. It should be useful in the context of a process
review of your organization's learning system. Finally, it suggests a number
of practices and organizational features that facilitate learning.
There are known symptoms of failure to learn, which you may be able to
recognize within your organization thanks to the diagnostic questions
suggested in chapter 3.
Symptoms of failure to learn often point to an underlying pathogenic
condition (or a combination thereof) afflicting the culture of the
organization. A number of known pathogenic organizational factors are
discussed in chapter 4.
Experience from a number of industries which have a long history of
incident reporting and learning shows that a number of enablers can
overcome obstacles to learning. Chapter 5 provides a list of enablers
that may be applicable in your industry and organization.
The main messages of the document are summarized below:
Learning from unwanted events, incidents and accidents, in particular at
an organizational level, is not as trivial as sometimes thought. Several
steps are required to achieve learning: reporting, analysis, planning
corrective actions, implementing corrective actions, and monitoring
their effectiveness. Obstacles may appear within each step, and learning
is not effective unless every step is completed. The obstacles may be
technical, organizational or cultural.
Learning from incidents, both as a formal company process and as an
informal workgroup activity, is an opportunity for dialogue and
collaborative learning across work groups and organizations. There may
be few other channels for communication on safety issues between
industrial companies, subcontractors, labour representatives, regulators
and inspectors, legislators and interested members of the public, but
these actors need to work together more effectively on common
problems.
The implementation of an effective experience feedback process
provides a strategic window for improving company equipment,
operating procedures and organizational characteristics in an integrated
manner, allowing different perspectives to converge towards better
preparation for the next event.
Table of contents

1 Introduction
1.1 This document's objectives
1.2 Target audience
1.3 Structure of this document
1.4 Authors
1.5 Using this document
2 Introduction to learning from incidents and accidents
2.1 Learning within organizations
2.2 Learning and knowledge
2.3 Learning from catastrophes, incidents and anomalies
2.4 Learning from both success and failure
2.5 Learning from others
2.6 Learning as a process
2.7 Dynamic learning
2.8 Levels of learning
3 Symptoms of failure to learn
3.1 Under-reporting
3.2 Poor quality of the reports
3.3 Analyses stop at direct causes
3.4 Self-centeredness (deficiencies in external learning)
3.5 Ineffective follow-up on recommendations
3.6 No evaluation of effectiveness of actions
3.7 Lack of feedback to operators' mental models of system safety
3.8 Loss of knowledge/expertise (amnesia)
3.9 Bad news are not welcome and whistleblowers are ignored
3.10 Ritualization of experience feedback procedures
4 Pathogens causing learning deficiencies
4.1 Denial
4.2 Complacency
4.3 Resistance to change
4.4 Inappropriate organizational beliefs
4.5 Overconfidence in the investigation team's capabilities
4.6 Anxiety or fear
4.7 Corporate dilemma between learning and fear of liability
4.8 Lack of psychological safety
4.9 Self-censorship
4.10 Cultural lack of experience of criticism
4.11 Drift into failure
4.12 Inadequate communication
4.13 Conflicting messages
4.14 Pursuit of the wrong kind of excellence
5 Enablers of learning
5.1 Importance of appropriate accident models
5.2 Training on speak-up behaviour
5.3 Safety imagination
5.4 Workshops and peer reviews
5.5 Learning agency
5.6 Dissemination by professional organizations
5.7 Standards
5.8 Role of regulators and safety boards
5.9 National inquiries
5.10 Cultural factors
References
1 Introduction

1.1 This document's objectives

The present document provides an overview of knowledge concerning the barriers to learning from incidents and accidents. It focuses on learning from accident investigations, public inquiries and operational experience feedback, in industrial sectors that are exposed to major accident hazards, but many of the principles are more widely applicable¹. While most research on learning focuses on individual cognition, the focus in this document is mainly on learning at an organizational level, while also taking into account a cross-organizational and even societal/cultural level. It concerns both organizational learning (the flow of lessons into new practices and modified procedures) and policy learning (impact of lessons on public policy, law, regulations and standards). The document also suggests a number of good practices or organizational conditions which have been shown, in certain situations, to overcome obstacles to learning.

¹ Concerning applications in healthcare, see [Tucker and Edmondson, 2003, Wrigstad et al., 2014].

The document analyses barriers to learning at various levels in a sociotechnical system (see figure 1 below): within companies with hazardous activities, trade associations and professional bodies, insurers, regulators, and at the government level, as well as multi-level learning, which involves the identification of deficiencies and the implementation of changes that affect multiple system levels.

From an operational standpoint, the document aims to help practitioners to identify opportunities for improving their event learning process. It should be useful in the context of a process review of your organization's learning system. Finally, it suggests a number of practices and organizational features that facilitate learning.

Figure 1: Structural hierarchy of actors in a complex socio-technical system, adapted from [Rasmussen, 1997]

1.2 Target audience

The messages in this document primarily concern investigators and practitioners in industrial sectors with significant hazards, such as the process industries, energy and transport. The document is addressed primarily to:

people who carry out safety investigations;
people who manage operational experience feedback, including company HSE specialists, consultants, and safety inspectors in national safety boards;
experts involved in developing the regulatory and legal framework of safety-critical activities;
safety researchers and experts in academic and expertise organizations;
and more generally, to anyone who is interested in or involved in learning from incidents and accidents to improve safety.
We assume that the reader is familiar with accident investigation and experience feedback systems (learning from events), and do not attempt to replicate existing overview documents and guidance on these areas. The following documents provide useful background material:

Investigating accidents and incidents, UK HSE, ISBN 978-0717628278, freely downloadable from http://www.hse.gov.uk/pubns/priced/hsg245.pdf (a step-by-step guide to investigations)
C. W. Johnson, Failure in Safety-Critical Systems: A Handbook of Accident and Incident Reporting, University of Glasgow Press, Glasgow, Scotland, October 2003, ISBN 0-85261-784-4. Available online at http://www.dcs.gla.ac.uk/~johnson/book/.
Guidelines for safety investigation of accidents (ESReDA, 2009), freely available from http://www.esreda.org/Portals/31/ESReDA_GLSIA_Final_June_2009_For_Download.pdf
Shaping public safety investigations of accidents in Europe (ESReDA, 2005), ISBN: 978-8251503044, 183 pages.

1.3 Structure of this document

Chapter 2 on Learning from incidents and accidents provides an introduction to knowledge on learning, focussing in particular on organizational learning.

Chapter 3 on Symptoms of failure to learn proposes a number of conditions which may be observed in an organization and which suggest that learning opportunities are being missed.

Chapter 4 on Learning pathologies analyzes a number of underlying conditions within an organization which may contribute to failure to learn. The chapter attempts to link these pathologies to some of the symptoms which an interested observer could identify.

Chapter 5 concerns enablers or promoters of learning, and provides a list of mechanisms or organizational practices which have been shown to facilitate learning and to tackle some of the learning pathologies identified in chapter 4.

Figure 2: Structure of this document

Figure 2 above proposes an illustration of the medical metaphors (symptoms, pathologies, diagnosis) used in this document and their relationship with failure to learn. Please note that these metaphors are not intended to be read literally, but rather as an aid to understanding.
1.4 Authors

The ESReDA Project Group Dynamic Learning as the Follow-up from Accident Investigations (PG DLAI) stands in a tradition of consecutive projects exploring several aspects of accident investigation, and of a series of seminars transferring knowledge and opening new perspectives on domains to be explored and studied.

The main objective of the Project Group has been to establish recommendations on how to capture, document, disseminate and implement insights, recommendations and experiences obtained in investigations of high-risk events (accidents and near-misses, concerning both safety and security) to relevant stakeholders:

1. Proposing adaptations of investigation methods to the specific features of each sector, aimed at facilitating more impact;
2. Identifying barriers within companies, public authorities and other involved stakeholders that may hamper implementation of recommended preventive measures;
3. Providing methods for dynamic learning from accidents;
4. Highlighting good practices on how to develop recommendations from accident investigation findings and understanding relevant preconditions for future learning (resilience, learning culture);
5. Providing decision-makers with advice regarding operational experience feedback systems.
Members of the project group are:

Nicolas Dechy, Engineer in organisational and human factors, IRSN, France
Yves Dien, Expert Researcher, Electricité De France, EDF R&D, France
Linda Drupsteen, Researcher, TNO Urban Environment and Safety, The Netherlands
António Felício, Engineer in generation management (retired), EDP, Portugal
Carlos Cunha, Engineer in optimization and flexibility (power generation), EDP, Portugal
Sverre Røed-Larsen, Project manager, SRL HSE Consulting, Norway
Eric Marsden, Programme manager, Foundation for an Industrial Safety Culture (FonCSI), France
John Stoop, Managing director, Kindunos Safety Consultancy Ltd, The Netherlands
Miodrag Stručić, European Commission, Joint Research Centre, Institute for Energy and Transport, The Netherlands
Tuuli Tulonen, Senior researcher, Tukes, Finland
Ana Lisa Vetere Arellano, Scientific officer, European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, Security Technology Assessment Unit, Italy
Johan K. J. van der Vorm, Senior technical consultant, TNO Urban Environment and Safety, The Netherlands
Ludwig Benner, corresponding and Honorary Member of the ESReDA project group.

Contact:

Tuuli Tulonen (Tukes), chairperson of the ESReDA DLAI project group
Email: Tuuli.Tulonen@tukes.fi

Eric Marsden (FonCSI), editor of this ESReDA publication
Email: eric.marsden@foncsi.org

ESReDA, the European Safety, Reliability and Data Association, is a non-profit European association that provides a forum for the exchange of information, data and current research in Safety and Reliability. The safety and reliability of processes and products are topics which are the focus of increasing Europe-wide interest. Safety and reliability engineering is viewed as an important component in the design of a system. However, the discipline and its tools and methods are still evolving, and expertise and knowledge are dispersed throughout Europe. There is a need to pool the resources and knowledge within Europe, and ESReDA provides the means to achieve this. For more information: www.esreda.org.
1.5 Using this document

This ESReDA publication can be used and shared with others on a non-commercial basis, as long as reference is made to ESReDA as its author and publisher and the content is not changed.
This document can be downloaded for free in electronic format from the
ESReDA website, www.esreda.org.
2 Introduction to learning from incidents and accidents
This chapter introduces some background information and research into
learning, organizational learning and dynamic learning from incidents and
accidents.
Learning is a very general concept, with strong links to both performance (in
a changing world, learning is a source of comparative advantage for
individuals and for organizations) and organizational culture (culture can be
thought of as the accumulation of prior learning based on past successes and
failures). In this document, we focus on learning from incidents and
accidents, a specific source of data, understanding and knowledge which
serves a number of purposes:
The understanding gained concerning the causal factors of unwanted events can allow preventive and mitigating measures to be put in place;
The feedback to people's mental models of system safety allows the improvement of their safety behaviour;
The reliability data collected concerning failure typologies and event frequencies is an essential input to the risk analysis process and provides inputs to safety performance indicators.

Learning from incidents and accidents is of special importance in high-hazard organizations, since they cannot allow themselves to learn in a traditional trial-and-error manner, and must avoid the complacency that can arise from learning only from successes. In fact, "what distinguishes reliability-enhancing organizations is not their absolute error or accident rate, but their effective management of innately risky technologies through organisational control of both hazard and probability…" [Rochlin, 1993]. As stated by S. Sagan in his analysis of the safety of the US nuclear weapons programme, "the social costs of accidents make learning very important; the politics of blame, however, make learning very difficult" [Sagan, 1994].
2.1 Learning within organizations
The term learning is generally used to refer to an individual human activity.
A famous quote of T. Kletz states that "Organisations have no memory. Only people have memory and they move on" [Kletz, 1993], embodying a view of learning as the processes of thinking and remembering that take place within an individual's brain. However, a significant body of research over the
last forty years suggests that it is useful also to think of organizations as
having learning potential, in the sense that they have adaptive capacity and
can incorporate knowledge in system artefacts (equipment, design rules,
operating procedures, databases, documents) and organizational structure
in order to improve their performance.
Researchers in disciplines ranging from management theory and
organization studies to psychology have proposed multiple definitions of
"organizational learning" [Jerez-Gómez et al., 2005]:
the process within the organization by which knowledge about action-outcome relationships and the effect of the environment on these
relationships is developed [Duncan and Weiss, 1979];
the process through which organizations encode inferences from history
into routine behaviour [Levitt and March, 1988];
a dynamic process based on knowledge acquisition, information
distribution, information interpretation, and organizational memory,
which implies moving among the different levels of action, going from
the individual to the group level, and then to the organizational level
and back again [Huber, 1991].
This variety in definitions has led to a certain amount of conceptual
fragmentation, but means that the practitioner can draw insights from
research in a variety of scientific fields.
Nonaka and Takeuchi [Nonaka and Takeuchi, 1995] identified four steps within the transfer of knowledge from individuals to groups:

1. socialization: converting collective tacit knowledge² into individual tacit knowledge, by knowledge interchange through communication, observation or practice.
2. externalization: converting individual tacit knowledge into individual explicit knowledge, through the use of metaphors, concepts or models.
3. combination: converting individual explicit knowledge into collective explicit knowledge, for instance by analyzing documents or by thinking.
4. internalization: converting collective explicit knowledge into collective tacit knowledge, through learning or knowledge assimilation processes.

² Tacit knowledge (the opposite of formal or codified knowledge) is a kind of knowledge which is difficult to share with another person only with words (written or oral); it reflects the notion that "we know more than we can tell".
Single and double-loop learning

A well-known distinction in the organizational learning literature is between so-called "single-loop" and "double-loop" learning. Single-loop learning is based on detecting and correcting errors, within a given set of governing variables, leading to incremental change. If an organization exhibits single-loop learning, only the specific situation or processes which were involved in the incident are improved. When an organization exhibits double-loop learning, improvements are not limited to the specific situation; the values, assumptions and policies that led to actions in the first place are also questioned [Argyris and Schön, 1978, Argyris and Schön, 1996]. An important kind of double-loop learning is the learning through which the members of an organization may discover and modify the learning system. This "learning to learn" process (called deutero-learning by Argyris & Schön) enables an organization continuously to improve [Senge et al., 1990].
2.2 Learning and knowledge
Learning from accidents is the acquisition of knowledge and skills from a
thorough study of accidents and their antecedents. The knowledge acquired
may concern the types of unwanted events which may occur, the factors
that can contribute to these unwanted events, the barriers which can
prevent their occurrence, the possible consequences of the unwanted
events, and the protective measures which can limit the consequences of
the events. The knowledge can also concern the factors that allow
organizations to function effectively and to adapt to changes in demand and
in their environment.
At an organizational level, the learning may be embedded within:
organizational beliefs and assumptions: culturally accepted worldviews about the system (what hazards are present, what risks are important, what is normal, what is taken for granted, what should be ignored³);
organizational routines, procedures and regulations (precautionary norms);
organizational structure and relationships within organizations within the sociotechnical system;
the design of equipment and implementation of technologies within the sociotechnical system;
the knowledge of people working within or interacting with the sociotechnical system.

³ D. Vaughan's book on the Challenger space shuttle accident points out that NASA's culture "provided a way of seeing that was simultaneously a way of not seeing" [Vaughan, 1996, p.392].

2.3 Learning from catastrophes, incidents and anomalies
There is learning potential in events of various degrees of severity:

Catastrophes: significant system failures attract attention from managers, regulators and outside stakeholders, and generate significant pressure to investigate, understand and implement changes (though unfortunately, this attention is often relatively short-lived). There can be an implicit assumption that a big accident must have been caused by "a big mistake", accompanied by pressure to identify the person responsible for that "human error". The regulator may require the organization to hold a detailed investigation, or the legal system may put its own inquiry in place.
Large accidents provide resources. They allow the preventive and
protective systems to be analyzed in detail. They may also provide
impetus for change (including to the regulatory system, which tends to
be resistant to evolution). However, they are (luckily!) infrequent in
most high-hazard systems, so we cannot wait for such events to learn
and trigger improvements, but must look for learning elsewhere.
Incidents: within high-hazard organizations, operational experience
feedback systems have been developed to analyze in a systematic
manner anomalies, deviations from procedure, and unwanted events. If
well implemented, these experience feedback systems allow learning
from events which did not escalate to a catastrophic level of impact.
Experience feedback also allows the detection of potentially dangerous
underlying trends.
The number of events of this type is much greater than catastrophic
events, providing more data for learning. However, it may be more
difficult to obtain a level of organizational goodwill that is sufficient to
justify significant changes to the sociotechnical system.
Anomalies and minor perturbations: many high-performance technical
systems include routine online system performance monitoring and
anomaly recording. Although this data is generally collected for quality
control and production optimization, it may also be analyzed as a source
of safety improvements.
The degree of severity and attention raised by the event which triggers
the investigation, analysis and learning will affect the resources available
for analysis and the level of leverage to implement system changes.
These factors will also lead to different biases in people's reactions to
the investigation.
2.4 Learning from both success and failure
Safety investigations are classically launched in reaction to large and visible
system failures (catastrophic accidents). These investigations focus on what
went wrong, with an underlying assumption that safety is achieved by
reducing the number of adverse events. Typical characteristics of these
investigations are a search for underlying failures and malfunctions and an
organized attempt to eliminate their causes and improve safety barriers
[Hollnagel, 2014].
Researchers such as B. Wilpert refer to the "gift of failure" present in serious events and accidents [Hale et al., 1997]. In short, events offer an opportunity to learn about safe and unsafe operations, to generate productive conversations across engaged stakeholders, and to bring about beneficial changes to technology, organization and mental models (understanding). [Llory, 1996] argues that accidents are the "royal road" (referring to Freud's metaphor about dreams being the royal road to access the unconscious) to access the real functioning of organizations (especially hidden phenomena, the "dark side of organizations" referred to by [Vaughan, 1999]). The authors add that these lessons can be capitalized in the form of a knowledge of accidents and transferred as a culture of accidents in order to counterbalance work on safety culture, which the authors argue is excessively focused on best practices [Dien et al., 2012].
An alternative source of learning is to focus on "what goes right", and learn from success through the study of normal operations. This way of thinking sees safety as a result of the ability to succeed despite varying performance demands and environmental variability. Through audits and observation of work, in which organizational experts examine how real work is undertaken at the "sharp end" and how routine deviations are detected and managed by operators, a better understanding of the system features that contribute to resilience, performance and safety is developed. This work may include the identification of "good practice".

Research in this area includes the High Reliability Organizations and the resilience engineering schools of thought on system safety.
These lessons from success and from failure may be of different natures, and
most often are complementary.
There is a wide variety of methods available to investigate and to analyze
accidents. A number of studies have been published which list, compare and
classify accident analysis methods (for instance [Sklet, 2002, Qureshi et al.,
2006, Qureshi, 2008, Kontogiannis et al., 2000, Le Coze, 2008]). The goal of
incident analysis is gaining understanding of the origin of an event in order
to determine options for improvement: i.e., the lessons learned. All events,
such as accidents, disasters or near misses provide valuable information to
learn from. Regardless of the severity of the outcome, similar causes can
lead to different incidents. Therefore, to prevent future incidents, the
factors that have contributed to the incident and the barriers that have
failed to prevent the occurrence need to be identified and addressed.
[Lampel et al., 2009] named the learning in which precedents of events are determined or so-called "lessons learned" are identified "learning about events", instead of "learning from events".
2.5 Learning from others
The "hard lessons" one faces directly are easier for individuals to remember and have been a key factor in motivating people and organizations to take some actions to avoid the recurrence of a similar event.

However, another key driving force for learning has been to learn from others' hard lessons [Llory, 1996, Llory, 1999, Dien and Llory, 2004, Hayes and Hopkins, 2012, Paltrinieri et al., 2012]. This indirect form of learning is called observational or vicarious learning by learning theorists.
The exchange of lessons from accidents has been promoted across several industries for many years. For example, the civil aviation sector shares knowledge using international databases such as ADREP, the nuclear industry (under the umbrella of IAEA, WANO, EU) manages a number of databases, and the process industry in Europe shares knowledge via the Major Accident Reporting System (eMARS) at JRC-Ispra and the Hydrogen Incident and Accident Database (HIAD) at JRC-Petten.
This learning is inter-organizational, between countries, and sometimes
between industrial sectors, especially with disaster cases. This transversal
learning is inherently difficult, in particular because it is necessary to
translate events to one's operating environment and compensate for the
loss of context which is unavoidable when describing what happens in
complex systems [Koornneef, 2000].
2.6 Learning as a process
The aim of learning lessons through analyzing events is to identify
possibilities for improvement. Several stepwise models have been developed
that present learning as a process that starts with anomaly detection and
reporting, continues to event analysis and establishment of
recommendations, concluding with the practical application of the lessons
learned (such as [Drupsteen et al., 2013, Jacobsson, 2011, Kjellén, 2000,
Lindberg et al., 2010]). These models include determining the lessons
learned but also a follow-up on these lessons. For successful learning, the
information that is handled at all steps of the learning process needs to be
sufficiently detailed and of high quality [Jacobsson et al., 2010].
The Chain of Accident Investigation (CHAIN) model [Lindberg et al., 2010]
comprises 5 steps for learning from experience: reporting, selection of
incidents for further investigation, investigation, dissemination of results and
finally the actual prevention of accidents. This process should also be self-reflective and include evaluation activities that lead to improvement of the
process itself. The typical learning cycle, according to [Jacobsson et al.,
2010], includes data collection and reporting, analysis and evaluation,
decisions, implementations and follow-up. This cycle is derived from the
safety, health and environment (SHE) information system of Kjellén [Kjellén,
2000]. [Drupsteen et al., 2013] also describe learning from events as an
organizational process. In this process events such as incidents are analyzed
and used to improve the organization and to prevent future occurrences.
The "learning from events" process is modelled as five sequential stages:

1. reporting a situation
2. analyzing the situation
3. making plans for improvement
4. performing those plans
5. evaluating their effect and the learning process itself.
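The stepwise character of this process can be illustrated with a small sketch (a hypothetical Python example, not part of the original guidelines; the stage names follow [Drupsteen et al., 2013], everything else is invented): learning is only effective when every stage is completed, so tracking which stage an event has reached shows where the learning loop is blocked.

    # Illustrative sketch only: track an event through the five stages of the
    # "learning from events" process and report where learning stalls.
    # Stage names follow [Drupsteen et al., 2013]; the rest is hypothetical.

    STAGES = [
        "reporting a situation",
        "analyzing the situation",
        "making plans for improvement",
        "performing those plans",
        "evaluating their effect and the learning process itself",
    ]

    def first_blocked_stage(completed):
        """Return the first stage not yet completed, or None if all are done."""
        for stage in STAGES:
            if stage not in completed:
                return stage
        return None

    # Example: an incident that was reported, analyzed and planned for, but whose
    # corrective actions were never implemented or evaluated.
    done = {"reporting a situation", "analyzing the situation",
            "making plans for improvement"}
    print(first_blocked_stage(done))   # -> performing those plans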
2.7 Dynamic learning
The world is changing, so safety requires continual adaptation. Learning is
not a one-time act, but is an ongoing process in which the organization (and
society, culture, etc.) continually improves and adapts to new conditions. It
includes unlearning existing ways of work, procedures, processes and
behaviour. The counterpart to dynamic learning is static, one-off learning.
2.8 Levels of learning
A study by [Cedergren and Petersen, 2011] describes a categorization of
accident causes in three hierarchical levels, based on the models of
[Rasmussen, 1997] and [Sklet, 2004]. In accordance with the suggestion by
[Stoop, 1990], the levels are labeled micro-, meso- and macro-levels. The
highest level (the macrolevel) includes factors related to inter-organizational
aspects, regulatory bodies, inspectorates, associations and even
governments [Rasmussen and Svedung, 2000]. The next level (the
mesolevel) includes organizational aspects such as management issues and
other intra-organisational factors, whereas the lowest level (the microlevel)
includes equipment, actor activities, and physical processes [Rasmussen and
Svedung, 2000].
[Hovden et al., 2011] combine two meanings of "multilevel learning":

1. the level where learning is supposed to take place (micro, meso or macro level);
2. how learning takes place within and between these levels.
An example where several lessons were identified at different levels of the sociotechnical system (as classified by [Rasmussen, 1997]) is the Toulouse disaster ([Dechy et al., 2004], a case described in the ESReDA Cube report which is a companion document to this document). A second example involves the US Chemical Safety and Hazard Investigation Board (CSB), which integrated learning concerning investigation methodology from the Columbia Accident Investigation Board (CAIB). The CSB adopted the organizational investigation model developed by the CAIB during its investigation into the disintegration of the Columbia space shuttle in 2003 [CAIB, 2003]. A third example is the Fukushima-Daiichi disaster, which, by its extent, has pushed several industries in multiple countries to review scenarios of technological disasters triggered by natural hazards, and also worst-case scenarios beyond the design basis.
3 Symptoms of failure to learn
In this chapter, we describe a number of symptoms of failure to learn: behaviours or features of an organization or of a society which may suggest the existence of a "learning disease" and which may be observed by people working within the system, for example during a review of the event-analysis process, or by people external to the system, such as accident investigators. The aim is to help a person recognize "we are running into symptom λ…", suggesting a number of "diagnostic questions". This document also suggests a number of possible pathogens (underlying organizational conditions, which are detailed in chapter 4), which may be linked to each symptom.
Note that some barriers to learning (organizational or cultural issues which
contribute to learning deficiencies) may be difficult to classify as a symptom
or as a pathogen; we invite readers not to remain fixated on this distinction,
which we use primarily to provide a first level of structure for this document.
People sometimes assume that learning has occurred once an event has
been analyzed and lessons have been drawn from it. This omits an important
component of learning, that of change (in system design, in organizational
structure, in behaviour). Analysis is not learning. Learning includes both
understanding and action. If the system has not changed and no one
behaves differently, learning has not occurred. If new behaviours are not
accompanied by new understandings, then learning cannot be robust and
sustainable across time and ever-changing circumstances.
The symptoms described in this chapter are generally not the result of explicit decisions to disregard safety concerns, but arise over time as a result of organizational drift⁴. For instance, complexity can appear in a progressive manner over time, without any explicit objective to introduce it.

⁴ When facing pressure towards cost-effectiveness in aggressive, competitive environments, organizations tend to migrate towards the limits of acceptable performance. This phenomenon, called organizational drift, is generated by normal incremental processes of reconciling differential pressures on an organization (efficiency, capacity utilization, safety) against a background of uncertain technology and imperfect knowledge and the absence of a global view of system safety.
3.1 Under-reporting

Voluntary incident reporting systems often suffer from chronic under-reporting or under-logging, in which incidents are simply never reported. This means that opportunities to learn are missed. It can lead to mistaken confidence in the safety of one's system ("admire our low recordable incident rate!"). If the repository of reported events is used for statistical analyses (analyzing trends in safety-related indicators, deciding on priorities for future investments in safety equipment or organizational changes…), the analyses will be affected by epidemiological biases⁵.

⁵ Information bias in epidemiology arises from measurement error or from selection bias in observations, and can lead to incorrect conclusions being drawn from the observations.
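A rough numerical sketch (hypothetical figures, not taken from the guidelines) shows how this bias can arise: if the true number of incidents stays constant while the reporting rate declines, the recorded counts show a reassuring downward trend that reflects reporting behaviour rather than safety performance.

    # Hypothetical illustration: a constant true incident rate combined with a
    # declining reporting rate produces an apparent "improvement" in recorded
    # incident counts. All figures are invented for the sake of the example.

    true_incidents_per_year = 40
    reporting_rate_by_year = {2011: 0.80, 2012: 0.65, 2013: 0.50, 2014: 0.40}

    for year, rate in sorted(reporting_rate_by_year.items()):
        recorded = round(true_incidents_per_year * rate)
        print(f"{year}: {recorded} incidents recorded "
              f"(true number: {true_incidents_per_year})")

    # Output: 32, 26, 20 and 16 recorded incidents, although nothing improved:
    # an example of the information bias described in footnote 5 above.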
Under-reporting can be caused by:

a blame culture (see box below) and the fear of reprisals;
concern that incident reports will be used in litigation or interpreted in a negative way in performance assessments;
perverse incentives which reward people for the absence of incidents. For instance, performance bonuses linked to the rate of occupational safety accidents and safety challenges such as "… days without an accident in the facility" constitute negative incentives for reporting incidents;
a feeling that the event learning process is not useful to shop floor ("sharp end") workers, who may see it as mainly aimed at providing statistics for managers, instead of as a source of learning and safety improvement which provides benefits to all;
insufficient time available for reporting: if people have to report during breaks or after hours, under-reporting is more likely;
uncertainty as to which events should be reported (scope or perimeter of the reporting system);
insufficient feedback to reporters on lessons learned from the incident report, and the absence of visible system changes linked to safety reports (the reporting system is seen as a "black hole", with doubts as to whether time and effort invested in reporting is well spent);
deficiencies in the reporting tool (too complex, inappropriate event typologies…);
a belief that accidents are "normal" in certain lines of work [Pransky et al., 1999];
insufficient promotion by management of the importance of incident
reporting and the safety benefits it generates.
More generally, there is evidence that under-reporting of safety-related
incidents is affected by the organizational safety climate [Probst et al., 2008].
A blame culture

A blame culture over-emphasizes the fault and responsibility of the individual directly involved in the incident (who "made the mistake"), rather than identifying causal factors related to the system, organization or management process that enabled or encouraged the mistake. Organizations should instead aim to establish a "just culture", an atmosphere of trust in which people are encouraged, even rewarded, for providing essential safety-related information (including concerning mistakes made), but in which they are also clear about where the line must be drawn between acceptable and unacceptable behaviour⁶.

⁶ Just culture proponents do not suggest that the notion of blame is entirely negative for safety; indeed, the link between responsibility and accountability motivates individuals and organizations to analyze their activities and their possible consequences. In many organizations, however, the negative features of blame are insufficiently recognized and defended against.
As stated in [Dekker, 2007]:

"Responses to incidents and accidents that are seen as unjust can impede safety investigations, promote fear rather than mindfulness in people who do safety-critical work, make organizations more bureaucratic rather than more careful, and cultivate professional secrecy, evasion, and self-protection. A just culture is critical for the creation of a safety culture. Without reporting of failures and problems, without openness and information sharing, a safety culture cannot flourish."
Concerning incidents of a technical or technological nature, under-reporting can be abated by the implementation of automated reporting systems. For example, the frequency of the undesirable event "signal passed at danger" in the railway sector can be measured using automated systems, as a complement to reports made by train drivers. Automated reports are likely to be more numerous, but provide less contextual information than a report made by a human. They also raise the risk of "false positives", which require additional investigative effort in order to identify them.

Beyond under-reporting in the organization's internal event database(s), the
level of reporting to the competent authority (the regulator or the safety
authority) should be examined. External reporting of events that fit certain
criteria is a useful manner of demonstrating and enhancing transparency in
safety management; it helps regulators to obtain a realistic per-industry
viewpoint on incident characteristics, and it is required by law in some
industries.
Items to help you in your diagnosis of under-reporting:
An incident database which has only a few incidents reported could be
an indication of possible under-reporting, which may require further
investigation as to why this may be the case. Note however that it is
difficult to judge externally what the ideal number of events per time
period is, since this is dependent on the technology used and the
industry within which one is operating.
Ask about the latest occasion on site where someone could have been
injured, or environmental damage could have occurred, and check
whether it was reported in the incident database.
Study a major accident which occurred on the site, identify possible
precursor events, and ask whether they have occurred in the past year.
For more information: Chapter 5 of [Johnson, 2003].
3.2 Poor quality of the reports

Some reports provide little help in identifying safety improvements. The data collected may be incomplete (facts missing, unclear sequence of events, superficial description of the context of the event). The data may also be biased, since a person reporting an incident will have a natural tendency to include some subjective information on the event, and may attempt to lead the reader to an interpretation of events which puts the reporter's actions (or those of his colleagues) in a more favourable light.

Poor report quality can be caused by:

a feeling that the event learning process is not useful to shop floor ("sharp end") workers, who may see it as mainly aimed at providing statistics for managers, instead of as a source of learning and safety improvement;
poor (incomplete, biased, superficial) data collection in the aftermath of an accident;
lack of access to important data collection tools, such as a digital camera, during fact-finding;
lack of management follow-up to implement lessons learned;
focus of performance indicators and of investment on reliability rather than on safety;
non-involvement of key actors (witnesses or victims, labour representatives, workers with strong knowledge of the technical functioning of the system) in the fact-finding stage;
checklist mentality: in some systems there is an obligation to file a report for every detected event. If these events are not followed up on and do not lead to visible improvements, people can over time fall into a "do the minimum" response to such obligations;
strategic behaviour where information is seen as a source of power, and thus there is a tendency to keep some essential information to oneself.
Some items to help you diagnose poor report quality:

Ask several people involved in an event to critically review the incident report for that event, and see whether they identify oversights, missing information, or biases.
Analyze the quality of incident reports using your own judgment and experience.
Organize an inter-site comparison of experience feedback reports in which workers from two sites of the same company undertake a collaborative critical review of their reports.
Is the experience feedback database thought of as a "cemetery for reports" (reports accumulate there to die, without receiving any attention)?
3.3 Analyses stop at direct causes

In some learning systems, the analyses of the causal factors contributing to events tend to be superficial, and are limited to the identification of the direct causes, such as the technical failure of a piece of equipment, or the behaviour of an operator who skipped a step in a procedure. The underlying contributing factors − often called "root causes" − which allowed the direct cause(s) to exist, and which are generally organizational (for instance, insufficient budget for maintenance leading to corroded equipment; high production pressure and supervisory tolerance of temporary shortcuts) and related to the safety management system, are not identified.

To use terminology from the organizational learning literature, we can say that recommendations are limited to single-loop learning (immediate fixes), and do not include double-loop (underlying values) or deutero-learning ("learning-to-learn" capability).
The "bad apple" safety model

A safety model is a set of beliefs and assumptions about the sources of risk in a system and the features and activities which allow it to operate safely. A common safety model is based on the belief that mature systems, in which designers have had the time to learn from early mistakes, would work fine if it were not for a few careless individuals who do not pay attention to procedures. The work of safety managers in these systems is to identify these "bad apples" and retrain or reprimand them [Dekker, 2006].

However, experience shows that in large, complex sociotechnical systems, variability in human performance is inevitable, and it contributes as much to safety (through skilful recovery actions) as it does to incidents and accidents. What some analysts still call "human errors" are more a symptom of an underlying problem (often related to design or system management) than a cause of accidents.
Considering the notion of multilevel-learning, it is important to note that the
contributing factors to some incidents are not limited to the firm directly
responsible for the hazardous activity, but may also involve contractors,
insurers, the activity of the regulator, the legal system, and the legislative
framework within which the firm operates. In such cases, the
recommendations resulting from the analysis should address these other
organizations, and not only the firm directly responsible.
Superficial analyses can be caused by:

insufficient time available for in-depth analysis;
insufficient training of the people involved in event analysis (identification of causal factors, understanding of the systemic causes of failure in complex systems, human factors training to help identify organizational contributions to accidents);
use of accident investigation methods that build on linear accident models, rather than on multi-linear/systemic models⁷ (linear models provide less structure to help identify causal factors);
managerial bias towards technical fixes rather than organizational changes (managers may wish to downplay their responsibility in incidents, so downplay organizational contributions to the event).

⁷ Examples of systemic accident models are STEP, Tripod-Delta and FRAM.
WYLFIWYF & WYFIWYF

Safety researcher E. Hollnagel guards against the results of biased accident investigations with the acronym WYLFIWYF ("What You Look For Is What You Find") [Lundberg et al., 2009]. This reflects the notion that accident investigation is not a fully objective exercise, and investigators' background, training and preconceptions on factors which lead to accidents will inevitably influence their findings. This bias inevitably influences the corrective actions implemented, because WYFIWYF ("What You Find Is What You Fix").
Items to help you diagnose overly superficial analyses:

Experience feedback reports allocate responsibility (and recommendations for improvement) to lower-power individuals such as operators (for instance: "improve training of the operator") rather than managers, who are responsible for organizational issues;
Examine the balance in recommendations between reactive fixes ("send the operator to training", "add extra personal protective equipment") and deeper, more long-term modifications ("change the organization", "change the system's design", "implement inherent safety principles").

3.4 Self-centeredness (deficiencies in external learning)
Insufficient sharing between sites, firms and industry sectors: There are many institutional and cultural obstacles to the sharing of information on events and generic lessons between sites of the same firm, firms in the same industry sector, and − even more − between industry sectors. In several major accidents and disasters, failure to learn from others' incidents and accidents was a cause, among others, of the severe events.

The nuclear industry is an international affair that is subject to several levels of exchange under the umbrella of organizations including IAEA, the OECD NEA and WANO. A first example of this external learning deficiency is the Three Mile Island accident in 1979, which had precursors in Beznau in Switzerland in 1974 and in Davis-Besse in 1977. A more recent example is the Fukushima-Daiichi disaster in 2011, where TEPCO (the operator of the nuclear power plant) and the Japanese nuclear regulator did not implement a safety mechanism that could have prevented escalation of the accident, and which is widely implemented in US and European plants⁸.

⁸ When nuclear fuel rods are insufficiently cooled, they can react with water steam to produce hydrogen. In the Fukushima-Daiichi accident, hydrogen gas built up inside reactor buildings and had to be vented to external buildings; the resulting explosive mixture detonated and severely damaged several buildings, including a containment building. Most nuclear plants in the US and Europe have for many years been equipped with units which are able to recombine hydrogen and oxygen to produce water, before the explosive limit for hydrogen is reached.

Several factors contribute to this difficulty in learning from others:

the feeling that "that couldn't happen to us; we operate differently (better!)";
fears related to reputation or prestige (for oneself, one's colleagues, one's company); the idea that "you don't wash your dirty laundry in public";
the inherently contextual nature of much learning.

Items to help you diagnose self-centeredness:

Are people able to point to a recent incident on another site which led them to make changes to an operating procedure, the design of some equipment, or some organizational issue?
Do you often hear comments such as "that couldn't happen to us", without an accompanying explanation?
Does your site have a systematic approach to internalising lessons learned from incidents/accidents/near misses reported on other sites, or significant events reported in the news around the world?

3.5 Ineffective follow-up on recommendations
Certain recommendations or corrective actions are not implemented, or are
implemented very slowly.
Investigations in Swedish railways
[Cedergren, 2013] analyzed the implementation of recommendations from
accident investigations in the Swedish railway sector. The author found that
almost one in five recommendations made by the accident investigation
board did not lead to any corrective action at all. The main reasons for
absence of change identified by the people interviewed were:
1. actions not falling within the receiver's mandate (the target felt that they could not implement the recommendation, because it was outside of their scope of actions), due to limited knowledge of respective organizations' roles and mandates;
2. recommendations that were somewhat imprecise or lacking in guidance
in the specific areas which required change.
Ineffective follow-up can be caused by:
insufficient budget or time to implement corrective actions (production
is prioritized over safety, management is complacent concerning safety
issues);
lack of ownership of recommendations, i.e. no one feels responsible for the recommendations - there is a lack of "buy-in". This may be due to investigations being monopolized by people who are external to operations, for instance the investigation and selection of recommendations and corrective actions being run by HSE experts without input from operators or from local management;
resistance to change;
inadequate monitoring within the safety management system (missing
indicators, insufficient management supervision, significant turnover in
management positions leading to lack of historical knowledge by the
person holding a supervisory role);
poorly controlled interface with the management of change process,
which should ensure that the recommendations do not introduce new
risks or produce other unanticipated side-effects.
Concerning the timescale for implementation, practitioners should recognize
that it generally takes years for investigations of major accidents to result in
changes at the system level (typically involving the regulatory and legislative
processes).
Some items to help you diagnose ineffective follow-up:
Look through investigation reports to see whether recommendations
have been followed up on, and check whether they have led to real
change.
Analyze the strategic influence of the safety and investigation staff (their
position in the organizational chart): if they have a concern which may
require action from top management, do they have the power to be
heard by the necessary people? Is there past evidence of concerns
raised by investigators having led to reactions from top management?
3.6 No evaluation of effectiveness of actions
In order to make sure that the learning potential of incidents is consolidated,
organizations should ensure that the effectiveness of corrective actions is
evaluated. Did the implementation of recommendations really fix (or
contribute to fixing) the underlying problem that triggered the initial event?
Inadequate evaluation can be caused by:

compliance attitude (checklist mentality, in which people go through the motions that are required of them, without thinking about the real meaning of their work);
political pressure: if the organization does not have an open, evaluation-based culture, a negative evaluation of an action's effectiveness may be seen as an implicit criticism of the person (likely a manager) who made the decision to approve the action;
system change can make it difficult to measure effectiveness (to isolate the effect of the recommendation from the effect of other changes);
overconfidence in the competence of the safety professionals ("no need to reassess our previous excellent decisions");
lack of a systematic monitoring and review system that evaluates effectiveness of lessons learned.
Items to help you diagnose inadequate evaluation:

When was the last review of the learning from incidents process undertaken?
What changes were made as a result of the review?
Can you identify the organizational role (function, entity) which is in charge of the evaluation of effectiveness? Do such evaluation loops exist for each phase in the system lifecycle (design, operations, maintenance, etc.)?

3.7 Lack of feedback to operators' mental models of system safety
Excellent reporting and root-cause analyses are not sufficient for learning to take place. The safety of complex systems is assured by people who control the proper functioning of the system, detect anomalies and attempt to correct them (operators, maintenance personnel, managers, regulators, etc.). These experts have built over time a mental model of the system's operation, of the types of failures which might arise, their warning signs and the possible corrective actions. If they are not presented with new information which challenges their mental models, such as feedback from the reporting/learning system, then the learning loop will not be completed.
Questioning attitude
9
If the organizational culture does not value mindfulness or chronic unease,
the people s atu al te de
a e to assu e that the futu e ill e
similar to the past, and there will be organizational rigidity of beliefs as to
hat is isk a d hat is o al. Ps hologist J. ‘easo stated If ete al
igila e is the p i e of li e t , the h o i u ease is the p i e of safet .
Individuals demonstrate a questioning attitude by challenging assumptions,
investigating anomalies, and considering potential adverse consequences of
planned actions. This attitude is shaped by an understanding that accidents
often result from a series of decisions and actions that reflect flaws in the
shared assumptions, values, and beliefs of the organization. All employees
are watchful for conditions or activities that can have an undesirable effect
on safety.
Collective mindfulness
This lack of feedback can be caused by:
Collective mindfulness [Weick et al., 1999] is a i h a a e ess of
dis i i ato details . A od of esea h o Highly Reliable Organizations
(HROs) [Lekka, 2011] suggests that they are preoccupied with failure,
treating any deviation from the standard as something which is wrong with
the system. They are reluctant to simplify, resisting oversimplification and
trying to see more. They have a detailed understanding of rising threats and
of causes that interfere with such understanding. Their mindfulness allows
them to see the significance in weak signals and take action.
The Piper Alpha disaster
Senior management at the firm running the Piper Alpha production platform
in the North Sea, which suffered a massive explosion in 1988, leading to the
death of 167 workers, were found to be too easily satisfied and to rely on
the absence of feedback. They failed to ensure that training was sufficient,
adopted a superficial response and did not become personally involved
[Cullen, 1990, p. 238]. The allo ed a ultu e that did ot dis ou age
shortcuts, (thus) multiple jobs could be performed on a single permit [PatéCornell, 1993].
This lack of feedback can be caused by:

operational staff are too busy to reflect on the fundamentals which produce safety in the system;
the organizational culture allows people to be overconfident;
mistrust of the analysis team;
reluctance to accept change in one's beliefs.
Questions to help you diagnose lack of feedback:

Ask whether in the last year people have encountered a surprise with respect to safety, something that was unexpected.
Are shortcuts tolerated amongst colleagues?
When discussing large accidents that occurred in another organization,
do people display a "couldn't happen to us" attitude?
Does your senior management tend to focus attention on avoiding small
(and frequent) safety problems that may disrupt production, with
seemingly little or no attention given to the possibility of severe (but
rare) accidents?
9 This text is based on the INPO (Institute of Nuclear Power Operations) definition.
3.8 Loss of knowledge/expertise (amnesia)

People forget things. Organizations forget things. The lessons learned from incidents and accidents are slowly lost with the passing of time.
Loss of knowledge can be caused by:
poor management of subcontracting/outsourcing (knowledge is
transferred to people outside the organization that is responsible for the
hazardous operations and interfaces between organizations are badly
managed);
loss of information in case of change of ownership of a plant: design
information and records may not be transferred to the new owner, who
may lose information on why a facility was designed in a certain way,
which modifications have been made, the rationale for the inspection
and maintenance policies;
poor implementation of the learning repository: if reports are not easily accessible to people working within the organization, the lessons they contain will not feed into operations and design. The repository (which will generally be computerized) should be accessible to all staff categories, should allow easy-to-use searching (including full-text searches, with synonyms), and should allow the creation of categories of events and feed into statistical tools (a brief illustrative sketch of such a search capability is given after this list);
aging of the workforce (a significant issue in many industrial sectors in
Europe) and insufficient knowledge transfer from more experienced
workers to incoming workers;
insufficient use of knowledge management tools;
insufficient or inadequate training;
lack of adaptation (including unlearning), which is necessary to cope with a changing environment/context.

Note that any deviation which is not properly processed through the reporting system will eventually be forgotten.

10 Despite the famous quote of T. Kletz that "Organizations have no memory. Only people have memory, and they move on" [Kletz, 1993], organizational memory incorporating knowledge from incidents or accidents is present in system artefacts such as operating procedures and design rules. Systems should be designed such that newcomers don't have to undertake "safety archaeology" to try to reconstruct hypotheses on the motivation of prior generations in deciding upon various technical and organizational features of the system.
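As promised above, the search capability described for the learning repository can be illustrated with a deliberately minimal sketch. The following Python fragment is not part of the ESReDA guidelines: the record structure, the synonym table and the function names are hypothetical illustrations only, and a real repository would rely on a proper document database or search engine with access control, event categories and statistics.

# Illustrative sketch only: a tiny in-memory lessons-learned repository with
# synonym-aware full-text search. All names (IncidentReport, SYNONYMS, search)
# are hypothetical and not taken from the guidelines.
from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    ident: str
    category: str          # e.g. "loss of containment", "near miss"
    text: str
    lessons: list = field(default_factory=list)

# Small synonym table so that a search for "leak" also matches reports
# that speak of a "release" or a "loss of containment".
SYNONYMS = {
    "leak": {"leak", "release", "loss of containment"},
    "fire": {"fire", "ignition", "flame"},
}

def expand(term: str) -> set:
    """Return the search term together with any registered synonyms."""
    return SYNONYMS.get(term.lower(), {term.lower()})

def search(repository: list, term: str) -> list:
    """Synonym-aware full-text search over report text and lessons learned."""
    wanted = expand(term)
    hits = []
    for report in repository:
        haystack = (report.text + " " + " ".join(report.lessons)).lower()
        if any(word in haystack for word in wanted):
            hits.append(report)
    return hits

repo = [
    IncidentReport("2014-07", "loss of containment",
                   "Flange release during start-up", ["Review torque procedure"]),
    IncidentReport("2015-02", "near miss",
                   "Crane load swung close to live piping", ["Review lifting plan"]),
]
print([r.ident for r in search(repo, "leak")])   # expected: ['2014-07']

The point of the sketch is simply that synonym expansion and full-text matching, which make lessons findable by people who do not know the exact wording used in the original report, are cheap to provide and directly support the accessibility requirement described above.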
Space shuttle Columbia disaster
In February 2003, the Columbia space shuttle disintegrated upon re-entry
into the atmosphere, leading to the death of all seven crew members. The
shuttle's heat shield had been damaged during take-off by fragments of
foam insulation which broke off from the external fuel tank. Loss of parts of
the thermal foam insulation had been noticed on previous flights of the
shuttle, but had not led to any adverse effects and over time became
considered a normal phenomenon. The investigation into the disaster
concluded that NASA had, over time, accepted deviations from design
criteria as normal when they happened on several flights and did not lead to
mission-compromising consequences. This phenomenon of "normalization of deviance" had been identified by the sociologist D. Vaughan as an organizational cause of the Challenger disaster, 17 years earlier.
Questions to help you diagnose this symptom:
Are people within the organization (frontline staff, managers at different
levels, safety staff, design engineers) familiar with the accidents and
high-potential incidents that have affected their site or company or
sector within the last 15 years?
Do people understand the reasons for various elements of procedures
that were introduced in the past as a result of learning from accidents?
If the length of maintenance shutdowns/stoppages or the level of
spending on maintenance has decreased in the last 15 years, is there a
record of a risk analysis undertaken to justify this change?
If the plant is operating above the design flow rates or pressures, is
there a record of a risk analysis undertaken to justify these changes?
Does the organization have group-level (centralized) standards for safe design, maintenance and inspection? What "local adjustments" are made with respect to these standards?

3.9 Bad news is not welcome and whistleblowers are ignored
A number of major accidents have been preceded by warnings raised by people familiar with the system, who attempted, unsuccessfully, to alert people with an ability to change the system or the nature of the threat that they perceived. The message of these whistleblowers is often not heard by the organization, because of a culture in which bad news is not welcome and contrarian voices are frowned upon.
Whistleblowers and Cassandras
In general usage, whistleblowing means making a disclosure in the public
interest. In the safety literature, the term has a narrower meaning of
reporting things that may constitute a threat to safety, such as the presence
of a risk which has not been properly managed.
The Herald of Free Enterprise disaster
In 1987, a car ferry named the Herald of Free Enterprise capsized in the
Belgian port of Zeebrugge, leading to the death of 193 passengers and crew.
The ship had left port with its front door open. The investigation found that
employees had aired their concerns on five previous occasions about the
ship sailing with its doors open. A member of staff had even suggested
fitting lights to the bridge to indicate whether the doors were closed. The
inquiry concluded: "If this sensible suggestion […] had received the serious consideration it deserved this disaster might well have been prevented".
Paddington Junction railway accident
Before the tragic collision near Paddington station in London in 1999, several signals passed at danger (a red signal passed without stopping) had occurred in that area, leading A. Forster, a manager from one of the train operating companies, to ask the company managing the track infrastructure to take action. However, although the infrastructure operator replied to her, and several working groups were put in place, there was much discussion but little action.
Bad news at Texas City
In March 2005, a massive explosion at a BP-owned oil refinery in Texas killed
15 people and injured nearly 200. Under-investment in maintenance on the
site, resulting from cost-cutting campaigns driven by top management, was
identified as having contributed to the accident. Analyzing the culture on the
site prior to the accident, A. Hopkins identified a reluctance to communicate
bad news to senior management [Hopkins, 2008], which together with inappropriate use of safety indicators contributed to senior management's distorted view that safety levels were high at the site.
The radical nature of whistleblowing (alerting the media or the regulator)
should not mask the fact that most whistleblowers attempt to raise their
concerns within their organization through different channels, often more
than once and to several different people, before using the blunt instrument
that is media attention. Organizations can usefully implement confidential
12
reporting systems , with a well-defined treatment path for the reports, to
ensure that messages are heard internally.
11 From the Cullen inquiry report Volume 1, pp 117 to 119, available at http://www.railwaysarchive.co.uk/documents/HSE_Lad_Cullen001.pdf.

12 It is important to distinguish between confidential reporting and anonymous reporting. Many successful voluntary reporting systems contain provisions covering the way the person making the report is to be contacted if necessary to obtain a better understanding of the events.
Questions to help you diagnose this symptom:

Do people feel comfortable bringing bad news to management? Is information simplified or watered down before it is passed to managers? Does management expressly ask for bad news?

Is there a "shoot the messenger" mentality with respect to dissenting views?

Does the organizational culture favour 100% consensus on important decisions? In a healthy high-hazard — and complex — organization, the absence of dissenting views is a suspicious sign that dissent is in general discouraged.

Is a confidential reporting system available to staff, allowing them to communicate serious concerns directly to higher management? Are they aware of its existence? Is it used? What follow-up is given to concerns raised using this channel?

Is critical, safety-related news that circumvents official channels welcomed?

Do messages get altered, with the tone softened, as they move up the management chain? Is there a "bad news filter" in the reporting process?

Are there any cases of outside whistleblowing (pressure on safety concerns raised through the media, via the regulators)? How were they handled?

13 More usual is a "not a team player" attitude with respect to people who raise concerns.

14 Note that organizations can be seen as being defined by what they ignore, by the collective simplifying assumptions that members of the organization make in order to be able to work collectively. These assumptions are called the worldview or mindset of members of the organization, or their safety model for assumptions concerning risk and safety in the system; they are a component of the organization's culture. This collective mindset becomes pathological if it leads to the immediate rejection of unwanted contrarian views, without reflection on their validity.

For more information:

British Standards (BSI) has published a Whistleblowing Arrangements Code of Practice under the classification PAS1998/2008, available for free from http://shop.bsigroup.com/forms/PASs/PAS-1998/.

UK: the Public Concern at Work charity provides advice to businesses on ensuring that concerns are raised at an early stage.
3.10 Ritualization of experience feedback procedures or of accident investigation
Ritualization, or a compliance attitude, is a feeling within the organization
that safety is ensured when everyone ticks the correct boxes in their
checklists and follows all procedures to the letter, without thought as to the
meaning of the procedures. It is related to "safety theatre", the empty rituals and ceremonies that are played out after an accident in order to show that "things are being done", and to the "procedure alibi", the tendency to implement additional procedures after an event as a way for safety managers to demonstrate that they have reacted to the accident [Størseth and Tinmannsvik, 2012]. This kind of organizational climate is not conducive
to learning.
Safety management becoming divorced from safety in the field
The 1999 Report of the Longford Royal Commission into the explosion at Esso's Longford gas plant in Australia found that although Esso had a world-class safety management system, the system "had taken on a life of its own, divorced from operations in the field and directing attention away from what was actually happening in the practical functioning of the plants at Longford" [Dawson and Brooks, 1999, p. 200].
Safety management at the BP Texas City refinery

Before the March 2005 Texas City accident, some audits identified that Texas City was a plant at high risk of the "check the box" mentality. This included going through the motions of checking boxes and inattention to the risk after the check-off. "Critical events, breaches, failures or breakdowns of a critical control measure are generally not attended to."
Items to help you diagnose ritualization of safety procedures:
When asking people why they reacted in a certain way, their responses always focus on "the procedure" as the motivation for their action.
Safety audits are undertaken using checklists, with little thought to
reasons for possible deviations from the checklist.
People are unable to explain the reasons for various elements of
procedures or operating guidelines.
4 Pathogens causing learning deficiencies
This chapter lists a number of pathogenic organizational factors which
hinder the effectiveness of the event-learning process. These underlying
characteristics, or pathogens in the medical metaphor used in this
document, are generally more difficult to detect or diagnose at an
operational level than the symptoms described in the previous chapter, and
may be responsible, to various degrees and possibly in combination with
other problems, for a number of symptoms. Their relationship with the
symptoms described in the previous chapter is illustrated in figure 3. Note
that these pathogens or underlying organizational conditions should not be
thought of as causes of potential accidents, but rather as conditions which
allow accidents to develop.
Pathogenic organizational factors
Pathogenic organizational factors [Reason, 1995, Dien, 2006, Rousseau and
Largier, 2008, Llory and Dien, 2010] are an aggregation of convergent signals
(markers, signs and symptoms) that allow the characterization of a negative
influence on the system safety. Reason was the first to use a medical
metaphor in analyzing organizational contributions to accidents, stating that
"For determining whether an organization is in good health, it is far simpler to know the causes of the sickness." It is far more accessible to define a set of pathogenic organizational factors than to exhaustively list the organizational factors required and sufficient to ensure a good level of safety within the organization. They include weaknesses in organizational safety culture,
failures in day-to-day management of safety, weakness of control
mechanisms, difficulty in adapting to feedback, difficulty in handling
experience feedback and failure to re-examine design hypotheses.
4.1 Denial

Denial is the idea that "it couldn't happen to us". At an individual level, denial is related to cognitive dissonance, a phenomenon which can lead people intellectually to refuse to accept the level of risk to which they are exposed. At an organizational and institutional level, group-think phenomena or commitment biases can lead to denial (rationalization of decisions).
Failure indicates that our existing models of the world are inadequate, requiring a search for new models that better represent reality [Cyert and March, 1963]. This challenge to the status quo is expensive, which can encourage people not to look too closely into warnings that something is not exactly as one would like it to be.

Figure 3: Symptoms and pathogens related to failure to learn

15 Cognitive dissonance is a concept from social psychology designating the discomfort which arises from holding conflicting beliefs [Festinger, 1957]. People aim to make their beliefs consistent with one another, so can reject new information which is inconsistent with their current beliefs.

16 Groupthink is a phenomenon observed by social psychologists which occurs when people work together as a group to reach decisions, in which the desire for harmony or conformity in the group results in an irrational decision. Group members try to minimize conflict and reach a consensus decision without critically evaluating alternative viewpoints, by actively suppressing dissenting viewpoints, and by isolating themselves from outside influences.
Fukushima-Daiichi Nuclear Accident
The independent Investigation Commission into the 2011 nuclear accident in
Japan stated in its report:
"The construction of the Fukushima Daiichi Plant that began in 1967 was based on the seismological knowledge at that time. As research continued over the years, researchers repeatedly pointed out the high possibility of tsunami levels reaching beyond the assumptions made at the time of construction, as well as the possibility of core damage in the case of such a tsunami. TEPCO overlooked these warnings, and the small margins of safety that existed were far from adequate for such an emergency situation."
Questions to help you identify a pattern of denial:

Do you hear remarks such as "We have always done things this way; it's only when people are careless that accidents happen"?

Are workers in the facility familiar with the major accident hazard scenarios that are described in the plant's safety case document? Do they know what the consequences of such accidents would be?

4.2 Complacency

Complacency occurs when there is a widely held belief that all hazards are controlled, resulting in reduced attention to risk. The organization (or key members within the organization) views itself as being uniquely better (safer) than others, and, as a result, feels no need to conform to industry standards or good practices, and sees no need to aim for further improvement. Complacency is the opposite of vigilance, or the sense of vulnerability, or chronic unease.

The importance of avoiding complacency with respect to major accident hazards was emphasized by the High Reliability Organizations theorists in the 2000s [Weick and Sutcliffe, 2001].

Complacency can be caused by:

Overconfidence in the safety system and its performance (possibly due to a lack of accidents in the last few years, and a feeling that past success guarantees future success).

Reliance on occupational safety statistics as the sole safety performance indicator (no monitoring of process safety indicators), with incentives and rewards based on this narrow (and possibly misleading) safety indicator.

The organization's inattention to critical safety data.

Superficial investigation of incidents, with a focus on the actions of individuals rather than on systemic contributing factors.

Questions to help you identify a pattern of complacency:

1. Do you detect a feeling of invulnerability at any level of the organization?

2. Do supervisors perform frequent checks to confirm that workers (including contractors) are obeying safety rules and procedures? If deviations are detected, are the rules and procedures reviewed to ensure that they are still relevant and appropriate?

3. Do people discount information that identifies a need to improve? Do they prefer to receive information which confirms the organization's superior performance, or also look for warning signs of a negative trend?

4. Do people express an interest in learning from other organizations or industries, and are lessons from related industry accidents routinely discussed within the organization?

5. Are those who raise concerns viewed negatively?

6. Are people who take safety risks tacitly rewarded when their risk-taking is successful?

7. Is the response to safety concerns focused on explaining away the concern rather than understanding it?

17 Chronic unease is a belief that despite all efforts deployed, errors will inevitably occur and that even minor problems can rapidly become system-threatening. As stated by [Reason, 2002], "chronic unease along with continuous vigilance and adjustment are still the main weapons in the error management armoury".
4.3 Resistance to change

Change is uncomfortable for most people, bringing uncertainty and lowering the degree of control we have over situations; we have a natural tendency to resist it. At an individual level, resistance to change may be caused by mistrust, lack of information, lack of ability or lack of sufficient incentives. Note however that "resistance to change" is a complaint often made by managers concerning resistance of shop-floor workers to a proposed reorganization, which when analyzed in detail may be due to workers having identified that the proposed change will lead to degraded working conditions or lower safety.

At an organizational level, resistance to change means that trying new ways of doing things is not encouraged. It is well known that organizations have a very low level of intrinsic capacity for change, and often require exogenous pressure (from the regulator, from changes to legislation) to evolve. It may also be caused by a "competency trap": since repetition and practice build competence, a team may have developed high performance in their standard approach to a problem, which is an obstacle to trying out other, potentially superior approaches.

Questions to help you identify a pattern of resistance to change:

1. Do you hear comments such as "We have done things this way for years, why should we do it any differently now?"?

4.4 Inappropriate organizational beliefs about safety and safety management

In mature industries dealing with hazards, accidents too often act as a trigger which shows us that our worldview is incorrect, that some fundamental (but sometimes unstated) assumptions we made concerning the safety of the system were wrong. Some examples of these inappropriate beliefs or urban myths concerning safety:

The structuralist view of the Bird pyramid (the attractive but mistaken view that "chipping away at the minor incidents forming the base of the pyramid will necessarily prevent large accidents" [Hale, 2002]). This interpretation of the accident statistics compiled by Heinrich then by Frank Bird is attractive, since it suggests an intervention strategy that is fairly easy to implement: focus people's attention on avoiding minor incidents (slips, trips and falls) and their increased awareness of minor safety problems will prevent the occurrence of major events. While there is some evidence that this is true concerning certain categories of occupational accidents, it is likely to be false concerning process safety and major accident hazards.

18 A. Hale refers to "beliefs which seem so plausible that they command immediate acceptance" [Hale, 2002].
H. Heinrich was a pioneering occupational safety researcher, whose
publication Industrial Accident Prevention: A Scientific Approach in 1931 was
based on the analysis of large amounts of accident data collected by his
employer, an insurance company. This work, which continued for more than
thirty years, identified causal factors of industrial accidents including "unsafe acts of people" and "unsafe mechanical or physical conditions". The work
was pursued and disseminated in the 1970s by F. Bird. The most famous
result is the incident pyramid, which posits a relatively constant frequency
ratio between minor incidents, injuries and accidents, and is often
misinterpreted to mean "frequency reduction will trigger a severity reduction".
This work was pioneering in encouraging managers to think about and invest
in prevention of occupational accidents. It was also ground-breaking work on
causality and the different ways of interrupting an accident sequence.
However, some of the ideas are no longer appropriate for safety
management in large, complex socio-technical systems. For instance,
Heinrich stated that "predominant causes of no-injury accidents are, in average cases, identical with the predominant causes of major injuries, and incidentally of minor injuries as well."
This is incorrect and can lead to inappropriate allocation of safety resources.
Accident causality is often more complicated than Heinrich's quote suggests,
as indicated by the following extract from the BP report into the Deepwater
Horizon accident: "The team did not identify a single action or inaction that caused this incident. Rather, a complex and interlinked series of mechanical failures, human judgments, engineering design, operational implementation and team interfaces came together to allow the initiation and escalation of the accident."
A related myth is the "improving occupational safety improves process safety and avoids major accidents" worldview. In fact, the underlying causal factors of major process accidents are mostly unrelated to those responsible for occupational safety incidents. Accidents such as the BP Texas City explosion in 2005, in a refinery which had very good occupational safety results, demonstrate that occupational safety and process safety can be quite uncorrelated in practice.
The "we haven't had an accident for a long time, so we are now safe as an organization" myth (believing that past non-events predict future non-events).
The "fewer undesirable events means higher safety" hypothesis. Though
this may seem to be quite an intuitive notion, there is some evidence in
civil aviation that airlines with the lowest rate of minor incidents are also
those with the largest likelihood of experiencing major accidents. It is as
if people working in highly controlled systems, where deviations are very
rare, lose the ability to compensate for abnormal situations and their
understanding of how a system works (including outside of its nominal
conditions) disappears over time, so when a (very rare) deviation occurs,
they no longer know how to react. This notion is called immunization in the resilience community.
Attitudes of "learned helplessness", such as "we won't be able to change anyway".
Zero-risk doctrines: if targets are set in terms of the number of undesirable events, there will be some tendency for these targets magically to be reached, but most likely at the expense of knowledge on real performance. For example, goals such as a target number of days without an accident are known to lead to under-reporting of safety-related events.
19 This is similar to a central theme in the film La Haine (1995), in which a person falling from a skyscraper thinks on his way down "so far, so good" as he passes each floor. But the way in which one lands is more important than the way one falls.

20 After [Hollnagel et al., 2006], we define a resilient system as one which is able effectively to adjust its functioning prior to, during, or following changes and disturbances, so that it can continue to perform as required after a disruption or a major mishap, and in the presence of continuous stresses.

21 In psychology, learned helplessness [Seligman, 1975] is a condition of a person or an animal in which it has learned to behave helplessly, even when the opportunity is restored for it to help itself by avoiding a negative circumstance to which it has been subjected. Coping mechanisms are limited to stoicism (symptoms of depression).
Confusion between reliability and safety: reliability is the ability of a
system or component to perform its required functions under stated
conditions for a specified period, whereas safety (in its most common
definition) is the absence of harm. Although in many cases both
properties will be correlated, they occasionally pull in opposite
directions. One example, given by [Hopkins, 2007], is an electricity
company, in which safe operation during maintenance on the network
may require cutting power to the network, compromising reliability.
Improving safety sometimes requires a paradigm shift. Unfortunately, paradigm shifts are very expensive to individuals (since they require changes to mental models and beliefs) and take a long time to lead to change. Safety practitioners (and more generally, workers) have often invested many years in their profession, and suggesting that some of their fundamental beliefs, acquired over so many years, may be wrong, is threatening to them. Once a change in people's beliefs and assumptions has occurred, they must then re-examine the design basis of their system, the assumptions made concerning its operation, and the resulting effects on maintenance procedures, operating procedures, etc.

22 Consider for example a move from a "rotten apple" safety model to a more systemic approach, or the integration of Safety-II ideas [Hollnagel, 2014] in one's safety thinking.

4.5 Overconfidence in the investigation team's capabilities
The investigation and analysis teams may lack certain skills necessary for quality investigations, or have inadequate knowledge of the system's functioning and elements responsible for its safety, leading to substandard investigations, poor credibility of the corrective actions decided and little learning.

This overconfidence may result from:

lack of adequate proficiency/training;

insufficient resources;

lack of readiness to investigate. In addition to the training of the future investigator and contributor to investigations, the structure and protocols for investigation should be designed and tested [ESReDA, 2009, Kingston et al., 2005].
4.6 Anxiety or fear

Accidents and incidents often arouse powerful emotions, particularly where they have resulted in death or serious injury. On the positive side, this means that everyone's attention may be focused on improving prevention (awareness). On the negative side however, the same emotions can also cause organizations and individuals to become highly defensive, leading to a rejection of potentially change-inducing messages. This is natural and understandable but needs to be addressed positively if a culture of openness and confidence is to be engendered to support a mature approach to learning.

Another area for fear or anxiety is the effect of reporting a negative event on the company's or a colleague's reputation.
4.7 Corporate dilemma between learning and fear of litigation/liability

In a legal context where investigators work to allocate blame and lawsuits for corporate manslaughter follow major accidents, certain companies may be advised by their legal counsel not to implement an incident learning system, or to downplay its performance (encouraging a "don't get caught" attitude to deviations from prescribed operations). Indeed, the incident reporting database (which may be seized by the police after an accident) may contain information concerning precursor events, which demonstrate that managers knew of the possible danger in their system, but had not yet taken corrective action. Organizations may wish to avoid the accumulation of what can be seen as incriminating knowledge. However, suppressing the safety lessons which can be derived from this information can create an organizational learning disability [Hopkins, 2006].
23 The legal world tends to hold the view that systems are inherently safe and that humans are the main threat to that safety.
This tendency towards the criminalization of human error [Dekker, 2011] has many negative consequences for learning.
Organizations may also fear an increase in their legal liability after an
accident if they admit to the need to change some element of their design or
operations as a result of an event investigation. Certain pressures may be
indirect, via their insurer for example.
4.8 Lack of psychological safety
Psychological safety
A shared belief within a workgroup that people are able to speak up without
being ridiculed or sanctioned. When psychological safety is present, team
members think less about the potential negative consequences of expressing
a new or different idea than they would otherwise. As a result, they speak
up more when they feel psychologically safe and are motivated to improve
their team or company. There are no topics which team members feel are
taboo (an unspoken understanding that certain issues are not to be
discussed and resolved) [Edmondson, 1999].
In the absence of psychological safety, people will hesitate to speak up when they have questions or concerns related to safety. This can lead to under-reporting of incidents, to poor quality of investigation reports (since people do not feel that it is safe to mention possible anomalies which may have contributed to the event), and to poor underlying factor analysis (it is easier to point the finger at faulty equipment than at a poor decision made by the unit's manager).
When psychological safety is low, it may be improved by:

incentives for reporting incidents and making suggestions;

training for managers in encouraging feedback from their colleagues;

a more participatory management style (empowering employees to participate in organizational decision-making, encouraging workers to voice their concerns).

4.9 Self-censorship

In many workplace situations, people do not dare to raise their concerns: they choose silence over voice, withholding ideas and concerns about procedures or processes which could have been communicated verbally to someone within the organization with the authority to act. They have developed self-protective implicit voice theories, socially acquired taken-for-granted beliefs about the conditions in which speaking up at work is accepted, which they have internalized from their interactions with authority over many years [Detert and Edmondson, 2011].

Self-censorship can be caused by a variety of factors:

concerns for one's reputation within the work group, or for one's career development;

fear of damaging a relationship or of embarrassing a peer;

peer pressure;

feeling that one needs solid data, evidence or solutions to raise concerns;

hierarchical conformity (conformity with rules such as "don't embarrass the boss" and "don't bypass the boss");

investigation stop-rules for causes or identified corrective measures which are outside the scope of the investigation team's mandate, its ability to communicate, or its ability to influence.

This effect is related to the concept of psychological safety described in section 4.8.
4.10 Cultural lack of experience of criticism

In some national cultures, there are strong obstacles to producing and addressing criticism or suggestions for improvement (which can be seen as implicit criticism of the people who designed or manage the system).

Engineer/management approaches to risk communication

The famous physicist Richard Feynman was a member of the Rogers commission into the Challenger space shuttle disaster. In his book "What do you care what other people think?", Feynman describes the investigation of the pre-launch teleconference between NASA and a subcontracting company, concerning the effect of cold weather on O-rings in the booster rockets. A group of engineers were worried that the cold would prevent proper operation of the O-rings (and it did indeed lead to the explosion of the launcher). A senior manager participating in the teleconference asked one of the engineering managers to put on his "management hat" instead of his "engineering hat", and the dissenting manager then changed his position on delaying the launch.

Feynman describes how he asked each of the engineers and a manager to write down an estimate of the probability that a flight would fail due to loss of an engine. The engineers each produced a number, ranging between 1 in 200 and 1 in 300. Mr Lovingood, the manager, wrote some lines about past experience, quality control, and engineering judgment. Feynman recounts:

"Well," I said, "I've got four answers, and one of them weaseled." I turn to Mr Lovingood, "I think you weaseled."
"I don't think I weaseled."
"You didn't tell me what your confidence was, sir; you told me how you determined it. What I want to know is: after you determined it, what was it?"
He says, "100 percent" — the engineers' jaws drop, my jaw drops; I look at him, everybody looks at him — "uh, uh, minus epsilon!"
So I say, "Well, yes, that's fine. Now the only problem is, WHAT IS EPSILON?"
He says, "10 to the minus 5."

This illustrates both self-censorship in communication on risk and misperception of the magnitude of risk (the "Feynman gap").
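To make the size of this gap concrete, a rough and purely illustrative calculation (not taken from the commission's report, and simply taking the estimates quoted above at face value) compares the two assessments of the same risk:

\[
p_{\text{engineers}} \approx \tfrac{1}{250} = 4 \times 10^{-3},
\qquad
p_{\text{manager}} \approx 10^{-5},
\qquad
\frac{p_{\text{engineers}}}{p_{\text{manager}}} \approx 400 .
\]

The engineers and their manager thus differed by a factor of several hundred in their perception of the same risk; it is this order-of-magnitude divergence, rather than any single number, that the reference to a "Feynman gap" highlights.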
Fukushima-Daiichi: a disaster "Made in Japan"

The foreword to the report by the National Diet of Japan Fukushima Nuclear Independent Investigation Commission (NAIIC) states:

"What must be admitted — very painfully — is that this was a disaster 'Made in Japan'. Its fundamental causes are to be found in the ingrained conventions of Japanese culture: our reflexive obedience; our reluctance to question authority; our devotion to 'sticking with the program'; our groupism; and our insularity."
4.11 Drift into failure
Performance pressures and individual adaptation push systems in the direction of failure [Rasmussen and Svedung, 2000], thereby gradually reducing their safety margins and taking on more risk. Operators know their systems well. However, when reactive quick fixes are implemented more frequently, i.e. when staff are more frequently required to work outside the normal operating envelope, alarm bells should ring. This effect will generally be difficult for operators within a system to observe (since it occurs gradually and is related to people's aim to continually improve performance), but it can hopefully be detected by external auditors.
24 The report, which has been translated into English, is available at http://warp.da.ndl.go.jp/info:ndljp/pid/3856371/naiic.go.jp/en/.
This migration, and the associated erosion of safety margins, tends to be a slow process, with multiple steps which occur over an extended period. Because each step is usually small, the steps often go unnoticed, a new norm is repeatedly established ("normalizing deviance"), and no significant problems are noticed until it's too late.
Normalization of deviance occurs when it becomes generally acceptable to
deviate from safety systems, procedures, and processes. The organization
fails to implement or consistently apply its management system across the
operation (regional or functional disparities exist). Safety rules and defenses
are routinely circumvented in order to get the job done.
The period during which deviations accumulate and are normalized is also
called the incubation period by B. Turner [Turner and Pidgeon, 1997].
Excessive production pressure occurs when there is an imbalance between
production and safety as leadership overly values production, such that the
emphasis is placed upon meeting the work demands, schedule or budget,
rather than working safely. Organizational goals and performance measures
are heavily weighted towards commercial and production outcomes over
protection and safety. Business strategy, plans, resourcing and processes fail
adequately to address safety considerations.
Figure 4: Migration and erosion of safety margins, after [Rasmussen, 1997]
From [Rasmussen, 1997]:
Companies today live in a very aggressive and competitive environment
which will focus the incentives of decision makers on short-term financial and
survival criteria rather than long-term criteria concerning welfare, safety and
the environment. Studies of several accidents revealed that they were the
effects of a systematic migration of organizational behavior toward accident
under the influence of pressure toward cost-effectiveness in an aggressive,
competitive environment.
A 2007 statement by C. Merritt, chair of the US Chemical Safety and Hazard Investigation Board, on the Texas City accident, indicates: "The combination of cost-cutting, production pressures and failure to invest caused a progressive deterioration of safety at the refinery."
Normalization of deviance at NASA
Behaviour that led to the Challenger and Columbia accidents, where people within NASA became so accustomed to a deviant behaviour that they didn't consider it as deviant, despite the fact that they far exceeded their own rules for elementary safety [Vaughan, 1996].
The Piper Alpha platform
The Piper Alpha platform in the North Sea was operating at three times the pressure it had been initially designed for. Accident reports state that the level of activity had been gradually increased without appropriate checking that the system retained an appropriate margin of safety.
Page 32 of 45
Barriers to learning from incidents and accidents
Buncefield oil terminal
Since the late 1960s, throughput of product had increased four-fold,
resulting in a system that put supervisors under considerable pressure,
developing their own systems to overcome this, e.g. working overtime (12 hr
shifts). Additionally, during the filling process, although there were three oil
tank high level alarms ("user high", "high" and "high high") in place, each of the
eight supervisors employed did not have the same way of using these alarms
(COMAH, 2011).
The phenomenon can be caused by:

insufficient oversight by the regulator, or regulators who do not have sufficient authority to enforce change in certain areas;

pressure to push more for production rather than safety. This occurred for example in the leadup to the Challenger disaster, and was also identified as a factor contributing to the Buncefield accident (increased throughput led to increased pressure on operators [COMAH, 2011]);

pressure of cost reduction overriding safety concerns (for example BP Texas City, Grangemouth) – the "chequebook mentality" identified by internal BP audits, in which safety spending is based on budgeted amounts rather than on risk analysis;

confusion between reliability and safety, including reliance on past success as a substitute for sound engineering practices [CAIB, 2003];

a "limit ourselves to compliance" mentality, in which safety innovations which have not yet been mandated by the regulator are not put in place (due to cost or to overtrust of the effectiveness of regulators);

organizational barriers which prevent effective communication of critical safety information, stifle professional differences of opinion and suppress minority viewpoints [CAIB, 2003];

evolution of an informal chain of command and decision-making processes that operates outside the organization's rules [CAIB, 2003];

a tendency to weigh operational ease/comfort/performance more than the restrictions which are often required for safe operation.

Questions to help you identify drift into failure:

Are workarounds and shortcuts regularly used by workers to meet deadlines?

Are there some systems that operate in a significantly different manner to that originally designed (higher flow rate, pressures or temperatures than the design capacity, lower staffing levels, higher maintenance intervals)? If so, have formal risk assessments been undertaken to assess the safety impact of the change?

Do managers become less strict in requiring work to be undertaken according to procedures and safety guidelines when work falls behind schedule?

Does the organization appear to provide insufficient financial, human and technical resources for certain tasks or activities?

Are project deadlines often set based on overly optimistic assumptions? Are they frequently revised later on in the project?

Are operational deviations risk assessed? Are they linked with the management of change system?

Is it clear who is responsible for authorizing deviations from standard practice and established procedures? Is there a formal procedure for authorizing such deviations?

Is there a significant backlog of scheduled maintenance activities?

Are rewards and incentives heavily weighted towards production outcomes, with little weight given to safety and quality related indicators?
BP Texas City

In the startup operation, the level of liquid used by operators in the distillation column was higher than that recommended by the procedure. This was due to operational concerns (underfill leading to production problems) whose safety implications had not been checked.

4.12 Inadequate communication

Organizational learning requires communication between the people who are witnesses to the learning event, the people who analyze it and establish recommendations, and the people who can implement changes and internalize the new information. Communication is often impaired by the organizational structure of a company: organization charts, policies, regulations, budgeting, security systems. Efficient organizations with enforced lines of communication and clearly demarcated responsibilities mean that a manager in one department may not talk to a manager in another department.

Inadequate communication may be caused by:

the communication medium: problems with the tools (for instance the computer system or the newsletter) used to store and share information;

cultural issues, such as the retention of information for reasons of power;

poor filtering of information to decide which information can be useful to which categories of participants, leading to information overload or to excessive scarcity;

the increasing specialization within certain worker trades;

the effects of subcontracting.

Information dissemination, and thus learning, will be facilitated by the existence of shared spaces such as cafeterias and coffee areas where informal discussions can help overcome failings in the formal communication channels.

4.13 Conflicting messages

The sociologist E. Goffman has analyzed organizational behaviour using a dramaturgical metaphor, in which individuals' identity plays out through a role that they are acting. In his dramaturgical model, social interaction is analyzed in terms of how people live their lives like actors performing on a stage. Goffman distinguishes the "front-stage", where the actor formally performs and adheres to conventions that have meaning to the audience, from the "back-stage", where performers are present but without an audience. When there is a disconnect between management's front-stage slogans concerning safety (such as "Safety first") and the reality of decisions or actions on the back-stage ("let's wait until the next planned shutdown to do this maintenance"), management messages lose their credibility.

[Langåker, 2007] has analyzed the importance of compatibility between the front-stage and back-stage messages (management's ability to "walk the talk", commitment to the meaning of safety messages, as opposed to appearances) for the effectiveness of organizational learning.
4.14 Pursuit of the wrong kind of excellence
Safety is a complex issue, and difficult to characterize in the form of
indicators. Some organizations focus on occupational safety indicators (such
as lost time at work), and do not measure or follow process safety
indicators, which estimate the level of technological risk. A further
dimension of safety which is not necessarily correlated with the previous
two dimensions is that of product safety. Accidents such as the explosion at
the BP Texas City refinery in 2005, where the occupational safety record was
very good but where budget restrictions had led to under-investment in the
maintenance of equipment and where the number of incidents such as
losses of containment was high, demonstrate that following an incomplete set of safety performance indicators can lead to a mistaken belief that the level of safety on one's facility is high.
Some organizations attempt to improve safety by providing incentives for poorly chosen safety targets, such as "zero spills" or "no accidents": such objectives cannot be achieved in the long term, and will tend to discourage reporting of incidents (no-one wants to be the worker who was responsible for his colleagues not obtaining the bonus for a million hours worked without a recordable injury…). Incentives covering leading indicators such as the number of corrective actions implemented will generally be more effective in leading to safety improvements [Hopkins, 2009].
5 Enablers of learning
Enablers of learning are characteristics or procedures within an organization
that facilitate the recognition of the need for change, the identification of
relevant new knowledge and the implementation of changes. The next
paragraphs describe a selection of identified learning enablers. These diverse
practices should ideally be promoted within an organization to help it evolve towards a stronger learning culture.
5.1 Importance of appropriate accident models
Safety research over the last 30 years shows that chain-of-events analysis is only the beginning of identifying the organizational and systemic factors underlying unwanted events. Each human working within the system
(operators, managers, maintenance staff) has a mental model of the system
that they control, concerning the interconnections between its elements,
the types of hazards present, the warning signs of degraded operation, and
the appropriate actions in each system state. They build this safety model
over time based on their observation of system operation, on discussion
with colleagues, training provided and information from the experience
feedback system. If there are significant divergences between this safety
model and real operation of the system (for example in some unusual,
transient phase), their control actions will be inappropriate, and may lead to
an accident. Incident investigations and experience feedback provide
important information to managers, regulators, and others about the
system. If the audience for this communication is not capable of handling
the feedback (too busy, overconfident, suffering from fixed ways of thinking,
paralyzed by fear of being wrong), then their safety models and their
knowledge will not be updated [Carroll and Fahlbruch, 2011].
In discussing an investigative process with outsiders, the rationale and
results frequently are explained through metaphors. Over decades, domino
theory, the iceberg principle and Swiss cheese models have been popular
representations. However, the powerful communication capabilities of these
theories, principles and models can be mistaken for descriptive, analytic and
even explanatory authority, which is not always present when these simple
metaphors are applied to complex socio-technical systems.
5.2 Training on speak-up behaviour
As described in section 4.8, psychological safety is a shared belief within a
workgroup that people are able to speak up without being ridiculed or
sanctioned. Higher levels of psychological safety increase information
sharing, trust and the acceptance of lessons learned.
Psychological safety can be increased by training managers on coaching
behaviour and training team members on speak-up behaviour or
assertiveness. These elements are typically included in Crew Resource
Management (CRM) training, as applied for some time already in civil
aviation and other sectors.
5.3 Safety imagination
Risk analysis, and discussions on safety within organizations, concerns a
number of anticipated and well-known hazards, often with a focus on those
that are unique to the industry within which the organization operates. This
focus, together with group-think effects, may lead organizations to overlook
less obvious, emerging hazards. The concept of safety imagination proposed by [Pidgeon and O'Leary, 2000] covers the ability to think outside the box and develop richer accident scenarios than those normally considered in quantitative risk assessments.

[Pidgeon and O'Leary, 2000] propose a number of guidelines for fostering safety imagination:
attempt to fear the worst;
use good meeting management techniques to elicit varied viewpoints;
play the "what if" game with potential hazards;
allow no worst case situation to go unmentioned;
suspend assumptions about how the safety task was completed in the
past;
when approaching the edge of a safety issue, a tolerance of ambiguity will be required, as newly emerging safety issues will never be clear;
force yourself to visualize "near-miss" situations developing into accidents.
Such exercises can help to reduce complacency on safety issues, can trigger
discussion on new approaches to managing risks and emerging hazards, and
are conducive to learning.
5.4 Workshops and peer reviews
Whilst the historical focus in organizational learning research has tended to
be on operational experience feedback, there is increasing recognition that
organizational learning is much wider and draws on many different mechanisms, such as workshops, secondments, peer group exchange forums, peer reviews and assist missions. These provide the opportunity to
learn from routine operational experience as well as from events.
Some examples of practices that facilitate learning:
Workshops and conferences, which allow interaction between
academia and industry, between different industrial sectors, are an
important way of being exposed to new ideas, new questions and
alternative manners of handling problems. Examples are the
conferences organized in the nuclear power area by organizations such
as IAEA, the OECD NEA, the WANO group of nuclear operators, etc.
Peer reviews, as used in the nuclear industry. A team of 10–20 people
from several plants in different countries visits a host plant for a period
of 2–3 weeks to assess performance in several organizational areas. This
practice gives a learning opportunity both for the host plant and for the
people taking part in the review. The effect of learning is enhanced by
revisiting the host plant some 18–36 months later after the peer review.
Leadership and safety walkthroughs: in general, unit and site
management walk around their facility/site, looking for hazards, and
making an effort to point out safe conditions along with areas for
improvement. Often based on a checklist of questions to address.
5.5 A learning agency
The existence of a learning agency (see box below) is posited by C. Argyris
and F. Koornneef [Koornneef, 2000] as being an enabler of learning. The
learning agency should comprise some members who understand
operations well, and others able to take a more global view. It helps to re-contextualize the information provided by front-line workers and add any
additional necessary information. The learning agency can play the role of
intermediary between individuals and the organization, analyzing
operational surprises, disseminating the lessons to other parts of the firm,
and ensuring the lessons are captured in the corporate memory.
Learning agency
The learning agency consists of people who learn on behalf of the
organization and ensures that the learning experience becomes embedded
in the organization. This learning agency has a crucial role in recapturing and
preserving the contextual information lost in the notification process. It
It should not be seen as a way of handing off the responsibility for learning to someone else.
25 A secondment is the temporary detachment of a person from their regular organizational
position to an assignment in another department or organization.
26 Assist Missions on Knowledge Management are a mechanism used by the IAEA (nuclear
energy sector) in which a small team of specialists visit a counterparty organization and transfer
good practice and suggestions for improvement.
5.6 Dissemination by professional organizations
Most industrial sectors have national or international professional bodies,
which play an important role in promoting discussion and vicarious learning
across organizational boundaries (working groups, conferences, safety
bulletins, etc.). They play a positive role in encouraging learning by
professionals who work within the industry. These dissemination activities
help organizations to avoid accidents by learning from other organizations' failures and crises.
The dissemination generally takes written form (magazines, safety bulletins…) or is presented orally at conferences and safety meetings.
This work requires good editorial capability to maximize reader engagement:
first-hand stories are useful for capturing attention, but should be focussed;
a clear and concise description of events (possibly including photos) and
lessons learned should be made available; possible links with similar
incidents should be made, including statistical or trend information if it is
available.
Examples of successful dissemination activities include:

the Safety Bulletin of the American Society of Chemical Engineers;

HindSight magazine, published by Eurocontrol for the air traffic management community, which always includes a segment on a near miss, presented from different viewpoints of actors working in different roles, together with reactions and analysis from experts and people in operations. The focus on a rich variety of points of view helps the reader to reflect on the safety dimension of various interactions in their daily work;

the Major Accident Hazards Bureau, which disseminates publications such as Lessons Learned Bulletins and Seveso Inspection Series to the Seveso Competent Authorities of the 28 Member States, including Norway and Iceland;

the Flight Safety Foundation, which organizes well attended annual conferences on safety in civil aviation, organizes working groups on various safety topics, and publishes a magazine called AeroSafety World;

the World Association of Nuclear Operators (WANO), which organizes the exchange of operating experience between nuclear facility operators worldwide, as well as technical support and peer reviews;

the European Clearinghouse on Nuclear Power Plant Operating Experience, for dissemination between nuclear safety authorities in EU countries, using newsletters, publication of reports, and management of databases.

27 HindSight is available for free online at https://www.eurocontrol.int/content/hindsight.

5.7 Standards

Sector-specific technical standards are a good way of accumulating knowledge from past failures and from good practice into common design principles. Standards are typically improved over time with input from industry workgroups and from specialists from the insurance industry [Brunsson and Jacobsson, 2000]. They evolve more slowly than industrial practice, but more quickly than legislation.

Examples:

American Petroleum Institute standards are regularly updated to include knowledge from accidents and near misses.

The US National Fire Protection Association standards on sprinklers and other preventive equipment (often mandated by private insurers) provide guidance on best practice in designing new facilities and upgrading existing ones. The organization is mostly funded by the insurance industry, and insurers are able to provide strong incentives for the implementation of their guidelines by setting differentiated premiums according to which standards are put in place.
28
https://minerva.jrc.ec.europa.eu/en/content/f30d9006-41d0-46d1-bf43e033d2f5a9cd/publications
29
API: http://www.api.org/.
30
NFPA: http://www.nfpa.com/.
Page 38 of 45
Barriers to learning from incidents and accidents
Volu ta p og a
es su h as ‘espo si le Ca e™ [ICCA,
] in the
chemicals industry encourage companies to share information on
incidents and accidents.
5.8 Role of regulators and safety boards
Regulators and safety boards have a responsibility to encourage learning
across organizations in the regulated industry. In the nuclear power sector,
for example, organizations such as the IAEA play an important role in
disseminating lessons learned, establishing principles and standards, and
assisting individual organizations to self-assess and improve (despite clear
demonstrations of the inadequacy of this learning that arose from the
Fukushima-Daiichi accident). Commercial aviation is another example of an
industry that has become expert at collecting information from both near-miss incidents and major events and then transferring knowledge across the
entire industry.
Examples:
EASA, the European Aviation Safety Agency, publishes safety
recommendations and airworthiness directives on the basis of the
analyses of incidents undertaken by safety authorities of member states.
Implementation of these recommendations is mandatory for airlines, air
traffic management organizations and aircraft manufacturers alike.
The pedagogical animations and safety videos created by the US
Chemical Safety Board (see http://www.csb.gov/) are a powerful mechanism for improving
awareness of various types of risks, and are widely used in safety
training worldwide.
5.9 National inquiries

Large public inquiries into major accidents tend to play a very positive role in leading to changes in the legal framework and in public attitudes with respect to certain industries. Indeed, without the public pressure generated by large accidents, it is difficult to generate sufficient political momentum and public goodwill to allow such complex changes. However, the financial and emotional cost of these inquiries should not be underestimated.

Some noteworthy examples of public inquiries which examined multi-level factors contributing to an accident and led to changes at the system level:

The Columbia Accident Investigation Board produced an exceptionally thorough analysis of the organizational factors that led to the loss of the Columbia space shuttle in 2003 (report available at http://www.nasa.gov/columbia/home/CAIB_Vol1.html).

The investigation into the Prestige oil tanker spill off the coast of Galicia in 2002 led to changes in regulations on liability for shippers in case of accidents.

The Cullen inquiry into the 1988 Piper Alpha accident in the North Sea led to better internal design of offshore platforms, more stringent inspection standards, and the obligation for firms operating offshore platforms to prepare a safety case, an evidence-based demonstration that the major accident hazards are well managed.

The Cullen inquiry into the 1999 Paddington Junction (Ladbroke Grove) railway accident in the UK led to major changes in the regulation of railway safety and the implementation of a train protection system nationwide.

5.10 Cultural factors

The main enablers at a cultural level for successful learning can be summarized as follows [Størseth and Tinmannsvik, 2012]:

Cooperation is a non-competitive joint activity of two or more parties whose outcome is mutually beneficial. Cooperation is most common between parties in the same organization, but can also exist between firms, between a company and an authority, between a firm and an industry group, or between a firm and a stakeholder group, for instance.
Motivation is the willingness of personnel, management, authorities,
etc. to confront the problem head-on in an honest quest for change and
learning. Extrinsic motivators, based on physical and monetary rewards,
can be effective to some extent, but intrinsic motivators are much more
difficult to define because of the different personalities and psychological
needs of individuals. Motivation is a continuous process and should be
clearly addressed in company policy. Only a motivated individual can
learn effectively from many different sources, even without special
external help.
Trust refers both to trust within a given company or organization and to
trust between parties (company and authority, company and sector
organizations, employees and company, etc.). Trust is in most cases based
on one's own or others' experience, gained over time through
communication and activities with the other party. Trust also depends on
the interests of the parties, but even between competing parties it can be
based on mutual, often unwritten, agreement. Additional factors such as
openness, transparency and congruency increase trust. A trustful approach
to learning is essential, as it avoids wasting time on unnecessary searches
for proof.
Existence of a shared language and concepts. Understanding each other
is usually taken for granted, but it is often a significant, unrecognized
issue. The problem can take several forms, but a typical one arises when
the receiver of information does not provide adequate feedback on his or
her interpretation to the sender. This is in many cases the reason for
misunderstanding, and if information is understood in the wrong way it
can later result in tangible problems. According to typical taxonomies of
causes (e.g. the IAEA/NEA Incident Reporting System), the main root
cause of this kind of deviation (occurrence, event…) is lack of a
questioning attitude. Deeper investigation may reveal that differences in
culture, dialect or terminology, or other external obstacles, lie behind the
ineffectiveness of the communication. There are at least two parties in the
communication process, and effort is needed from at least one side to
enable mutual understanding, i.e. learning.
Individual curiosity and vigilance depend on the person's interest. This
enabler can be of crucial importance in switching someone's awareness
into an active listener/learner state. Whether the interest in learning is
professional or not, both sides (in person-to-person learning) should
apply sufficient effort to make the lesson interesting and turn this
switch on.
Ability to embrace new ideas and change at both the individual and
organizational levels. In other words, it is necessary to be open-minded
in order to accept possible improvements. Conversely, closed-minded
individuals or organizations continuously miss opportunities to learn,
creating fertile ground for disaster.
Presence of a supporting culture (learning culture, just culture per
[Reason, 1997]). Much has been discussed about this enabler, and Schein's
model of organizational culture [Schein, 1990] is widely used to explain
its basis. The three distinct levels of organizational culture are artefacts
and behaviours, espoused values, and underlying assumptions.
Understanding all three levels is a good starting point for anyone who
wants to improve learning abilities, as well as the other characteristics
needed to enable positive change in an organization.
Furthermore, organizations must also be willing to unlearn outdated or
ineffective procedures if they wish to learn better safety management
strategies. Unlearning is usually not defined as a process or activity within
the organization, but the increasing demand for acquiring new knowledge
means that it has to be considered and appropriately managed. Unlearning
simply provides the space for new knowledge and can thus be treated as an
additional enabler of learning.
6 References
Argyris, C. and Schön, D. A. (1978). Organizational learning: a theory of
action perspective. Addison-Wesley, Reading, MA, USA. ISBN: 978-0201001747, 356 pages.
Argyris, C. and Schön, D. A. (1996). Organizational learning II: Theory,
method, and practice. Addison-Wesley Publishing Company, New York,
NY, USA.
Brusson, N. and Jacobsson, B. (2000). A world of standards. Oxford
University Press. ISBN: 978-0199256952, 198 pages.
CAIB (2003). Report of the Columbia accident investigation board. Technical
report, NASA. Available at
http://www.nasa.gov/columbia/caib/html/start.html.
Cullen, L. W. D. (1990). The public inquiry into the Piper Alpha disaster.
Technical report, H. M. Stationery Office, London.
Cyert, R. M. and March, J. G. (1963). A behavioural theory of the firm.
Blackwell, Cambridge, MA, USA. ISBN: 978-0631174516, 268 pages.
Dawson, D. M. and Brooks, B. J. (1999). The Esso Longford gas plant
accident: report of the Longford Royal Commission. Technical report,
Longford Royal Commission.
Dechy, N., Bourdeaux, T., Ayrault, N., Kordek, M.-A., and Coze, J.-C. L. (2004).
First lessons of the Toulouse ammonium nitrate disaster, 21st September
2001, AZF plant, France. Journal of Hazardous Materials, 111(1–3):131–
138. A Selection of Papers from the JRC/ESReDA Seminar on Safety
Investigation Accidents, Petten, The Netherlands, 12-13 May, 2003. DOI:
10.1016/j.jhazmat.2004.02.039.
Dekker, S. W. (2006). The Field Guide to Understanding Human Error.
Ashgate. ISBN: 978-0754648260, 236 pages.
Carroll, J. S. and Fahlbruch, B. (2011). The gift of failure: New approaches to
analyzing and learning from events and near-misses. Honoring the
contributions of Bernhard Wilpert. Safety Science, 49(1):1–4. DOI:
10.1016/j.ssci.2010.03.005.
Dekker, S. W. (2007). Just Culture: Balancing Safety and Accountability.
Ashgate. ISBN: 978-0754672678, 166 pages.
Cedergren, A. (2013). Implementing recommendations from accident
investigations: A case study of inter-organisational challenges. Accident
Analysis & Prevention, 53(0):133–141. DOI: 10.1016/j.aap.2013.01.010.
Dekker, S. W. (2011). The criminalization of human error in aviation and
healthcare: A review. Safety Science, 49(2):121–127. DOI:
10.1016/j.ssci.2010.09.010.
Cedergren, A. and Petersen, K. (2011). Prerequisites for learning from
accident investigations – a cross-country comparison of national accident
investigation boards. Safety Science, 49(8–9):1238–1245. Available at
http://lup.lub.lu.se/record/2072590/file/4392752.pdf, DOI:
10.1016/j.ssci.2011.04.005.
Detert, J. R. and Edmondson, A. C. (2011). Implicit voice theories: Taken-for-granted rules of self-censorship at work. Academy of Management
Journal, 54(3):461–488. DOI: 10.5465/AMJ.2011.61967925.
COMAH (2011). Buncefield: Why did it happen? The underlying causes of the
explosion and fire at the Buncefield oil storage depot, Hemel Hempstead,
Hertfordshire, on 11 December 2005. Technical report, COMAH, UK.
Available at http://www.hse.gov.uk/comah/buncefield/buncefieldreport.pdf.
Dien, Y. (2006). Chapter Les facteurs organisationnels des accidents
industriels (in French) in Risques industriels — Complexité, incertitude et
décision: une approche interdisciplinaire (Magne, L. and Vasseur, D., Ed.),
pages 133–174. Lavoisier.
Dien, Y., Dechy, N., and Guillaume, È. (2012). Accident investigation: From
searching direct causes to finding in-depth causes – problem of analysis
or/and of analyst? Safety Science, 50(6):1398–1407. DOI:
10.1016/j.ssci.2011.12.010.
Dien, Y. and Llory, M. (2004). Effects of the Columbia space shuttle accident
on high-risk industries or can we learn lessons from other industries? In
Proceedings of the Hazards XVIII conference, IChemE Symposium series
no. 150.
Drupsteen, L., Groeneweg, J., and Zwetsloot, G. I. J. M. (2013). Critical steps
in learning from incidents: Using learning potential in the process from
reporting an incident to accident prevention. International Journal of
Occupational Safety and Ergonomics, 19(1):63–77. Available at
http://archiwum.ciop.pl/58225.
Duncan, R. and Weiss, A. (1979). Chapter Organizational Learning:
Implications for Organizational Design in Research in Organizational
Behavior (Staw, B., Ed.), pages 75–123. Jai Press, Greenwich, CT.
Edmondson, A. C. (1999). Psychological safety and learning behavior in work
teams. Administrative Science Quarterly, 44(2):350–383. DOI:
10.2307/2666999.
ESReDA (2005). Shaping public safety investigations of accidents in Europe.
An ESReDA working group report (Ed. J. Stoop, S. Roed-Larsen, E.
Funnemark). ISBN: 978-8251503044.
ESReDA (2009). Guidelines for safety investigation of accidents. Technical
report, ESReDA. Available at
http://www.esreda.org/Portals/31/ESReDA_GLSIA_Final_June_2009_For
_Download.pdf
Faure, M. and Escresa, L. (2011). Chapter Social stigma in Production of Legal
Rules (Parisi, F., Ed.). Edward Elgar.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford University
Press. ISBN: 978-0804701310, 291 pages.
Hale, A. R. (2002). Conditions of occurrence of major and minor accidents —
urban myths, deviations and accident scenarios. Tijdschrift voor
toegepaste Arbowetenschap, 15:34–41. Available at
http://www.arbeidshygiene.nl/~uploads/text/file/200203_Hale_full%20paper%20trf.pdf.
Hale, A. R., Freitag, M., and Wilpert, B., Ed. (1997). After the Event — From
Accident to Organizational Learning. Pergamon. ISBN: 978-0080430744,
250 pages.
Hayes, J. and Hopkins, A. (2012). Deepwater horizon — lessons for the
pipeline industry. Journal of Pipeline Engineering, 11(3):145–153.
Hollnagel, E. (2014). Safety-I and Safety-II: The Past and Future of Safety
Management. Ashgate. ISBN: 978-1472423085, 200 pages.
Hollnagel, E., Woods, D. D., and Leveson, N. (2006). Resilience Engineering:
Concepts and Precepts. Ashgate Publishing, Aldershot, UK. ISBN: 978-0754646419, 410 pages.
Hopkins, A. (2006). A corporate dilemma: To be a learning organisation or to
minimise liability. Technical report, Australian National University,
Canberra, Australia. National Center for OSH regulation Working Paper
43. Available at
https://digitalcollections.anu.edu.au/bitstream/1885/43147/2/wp43corporatedilemma.pdf.
Hopkins, A. (2007). The problem of defining high reliability organisations.
Technical report, National Research Centre for Occupational Health and
Safety Regulation, Australia. Working Paper 51. Available at
http://regnet.anu.edu.au/sites/default/files/WorkingPaper_51.pdf.
Hopkins, A. (2008). Failure to learn: the BP Texas City Refinery Disaster. CCH
Australia. ISBN: 978-1921322440, 200 pages.
Hopkins, A. (2009). Thinking about process safety indicators. Safety Science,
47:460–465.
Hovden, J., Størseth, F., and Tinmannsvik, R. K. (2011). Multilevel learning
from accidents – case studies in transport. Safety Science, 49(1):98–105.
DOI: 10.1016/j.ssci.2010.02.023.
Huber, G. P. (1991). Organizational learning: The contributing processes and
the literatures. Organization Science, 2(1):88–115. Special Issue:
Organizational Learning: Papers in Honor of (and by) James G. March.
DOI: 10.1287/orsc.2.1.88.
ICCA (2006). Responsible care global charter. Technical report, International
Council of Chemical Associations. Available at http://www.icca-chem.org/ICCADocs/09_RCGC_EN_Feb2006.pdf.
Jacobsson, A. (2011). Methodology for Assessing Learning from Incidents – a
Process Industry Perspective. PhD thesis, Lund University. Available at
http://lup.lub.lu.se/record/1939961/file/1939964.pdf.
Jacobsson, A., Sales, J., and Mushtaq, F. (2010). Underlying causes and level
of learning from accidents reported to the MARS database. Journal of
Loss Prevention in the Process Industries, 23(1):39–45. DOI:
10.1016/j.jlp.2009.05.002.
Jerez-Gómez, P., Céspedes-Lorente, J., and Valle-Cabrera, R. (2005).
Organizational learning capability: a proposal of measurement. Journal of
Business Research, 58(6):715–725. Special Section: The Nonprofit
Marketing Landscape. DOI: 10.1016/j.jbusres.2003.11.002.
Johnson, C. W. (2003). Failure in Safety-Critical Systems: A Handbook of
Accident and Incident Reporting. University of Glasgow Press, Glasgow,
Scotland. ISBN: 0-85261-784-4. Available at
http://www.dcs.gla.ac.uk/~johnson/book/.
Kingston, J., Frei, R., Koornneef, F., and Schallier, P. (2005). Defining
operational readiness to investigate. Technical report, NRI/RoSPA. DORI
white paper. Available at http://www.nri.eu.com/WP1.pdf.
Kjellén, U. (2000). Prevention of Accidents Through Experience Feedback.
Taylor & Francis, London. ISBN: 978-0748409259, 424 pages.
Kletz, T. A. (1993). Lessons from Disaster: How Organizations Have No
Memory and Accidents Recur. Gulf Professional Publishing. ISBN: 978-0884151548, 192 pages.
Kontogiannis, T., Leopoulos, V., and Marmaras, N. (2000). A comparison of
accident analysis techniques for safety-critical man–machine systems.
International Journal of Industrial Ergonomics, 25(4):327–347. DOI:
10.1016/S0169-8141(99)00022-0.
Lampel, J., Shamsie, J., and Shapira, Z. (2009). Experiencing the improbable:
Rare events and organizational learning. Organization Science,
20(5):835–845. DOI: 10.1287/orsc.1090.0479.
Langåker, L. (2007). An inquiry into the front roads and back alleys of
organisational learning. In Proceedings of the Organization Learning,
Knowledge and Capabilities Conference, London, Ontario, Canada.
Available at
http://www2.warwick.ac.uk/fac/soc/wbs/conf/olkc/archive/olkc2/paper
s/langaker_and_nylehn.pdf.
Le Coze, J.-C. (2008). Disasters and organisations: From lessons learnt to
theorising. Safety Science, 46(1):132–149. DOI:
10.1016/j.ssci.2006.12.001.
Lekka, C. (2011). High reliability organisations — a review of the literature.
Technical report, UK Health and Safety Executive. Available at
http://www.hse.gov.uk/research/rrpdf/rr899.pdf.
Levitt, B. and March, J. G. (1988). Organizational learning. Annual review of
sociology, 14:319–340. DOI: 10.1146/annurev.so.14.080188.001535.
Lindberg, A.-K., Hansson, S. O., and Rollenhagen, C. (2010). Learning from
accidents — what more do we need to know? Safety Science, 48(6):714–
721. DOI: 10.1016/j.ssci.2010.02.004.
Llory, M. (1996). Accidents industriels, le coût du silence. Opérateurs privés
de parole et cadres introuvables. L'Harmattan, Paris. ISBN:
978-2738442260, 364 pages.
Llory, M. (1999). L'accident de la centrale nucléaire de Three Mile Island.
L'Harmattan. ISBN: 978-2738477088, 368 pages.
Llory, M. and Dien, Y. (2010). Systèmes complexes à risques — analyse
organisationnelle de la sécurité. Techniques et sciences de l'ingénieur.
Référence AG1577.
Koornneef, F. (2000). Organised Learning from Small-scale Incidents. PhD
thesis, Technische Universiteit Delft, Delft. Available at
http://repository.tudelft.nl/view/ir/uuid:fa37d3d9-d364-4c4c-9258-91935eae7246/.
Lundberg, J., Rollenhagen, C., and Hollnagel, E. (2009). What-you-look-for-is-what-you-find: The consequences of underlying accident models in eight
accident investigation manuals. Safety Science, 47(10):1297–1311. DOI:
10.1016/j.ssci.2009.01.004.
Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company: How
Japanese Companies Create the Dynamics of Innovation. Oxford
University Press. ISBN: 978-0195092691, 298 pages.
Paltrinieri, N., Dechy, N., Salzano, E., Wardman, M., and Cozzani, V. (2012).
Lessons learned from Toulouse and Buncefield disasters: from risk
analysis failures to the identification of atypical scenarios through a
better knowledge management. Risk Analysis. DOI: 10.1111/j.1539-6924.2011.01749.x.
Pate-Cornell, M. E. (1993). Learning from the Piper Alpha Accident: A
Postmortem Analysis of Technical and Organizational Factors. Risk
Analysis, 13(2):213-232. DOI: 10.1111/j.1539-6924.1993.tb01071.x.
Pidgeon, N. F. and O'Leary, M. (1994). Chapter Organizational safety culture:
implications for aviation practice in Aviation Psychology in Practice
(Johnston, N. A., McDonald, N., and Fuller, R., Ed.), pages 21–43.
Avebury Technical Press, Aldershot.
Pidgeon, N. F. and O'Leary, M. (2000). Man-made disasters: why technology
and organizations (sometimes) fail. Safety Science, 34:15–30. DOI:
10.1016/S0925-7535(00)00004-7.
Pransky, G., Snyder, T., Dembe, A., and Himmelstein, J. (1999). Underreporting of work-related disorders in the workplace: a case study and
review of the literature. Ergonomics, 42(1):171–182. DOI:
10.1080/001401399185874.
Qureshi, Z. H. (2008). A review of accident modelling approaches for
complex critical sociotechnical systems. Technical report DSTO-TR-2094,
Australian Defense Science and Technology Organization. Available at
http://www.dtic.mil/get-tr-doc/pdf?AD=ADA482543.
Rasmussen, J. (1997). Risk management in a dynamic society: a modelling
problem. Safety Science, 27(2):183–213. DOI: 10.1016/S0925-7535(97)00052-0.
Rasmussen, J. and Svedung, I. (2000). Proactive risk management in a
dynamic society. Technical report, Swedish Rescue Services Agency,
Karlstad, Sweden. Available at
https://www.msb.se/ribdata/filer/pdf/16252.pdf.
Reason, J. (1995). Understanding adverse events: human factors. Quality in
Health Care, 4(2):80–89. Available at
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1055294/, DOI:
10.1136/qshc.4.2.80.
Reason, J. (1997). Managing the risks of organizational accidents. Ashgate.
ISBN: 978-1840141054, 252 pages.
Reason, J. (2002). Combating omission errors through task analysis and good
reminders. Quality and Safety in Health Care, 11:40–44. DOI:
10.1136/qhc.11.1.40.
Rochlin, G. I. (1993). Defining high reliability organizations in practice: a
taxonomic prologue. In K. H. Roberts (Ed.), New challenges to
understanding organizations (pp. 11–32). New York: Macmillan.
Probst, T. M., Brubaker, T. L., and Barsotti, A. (2008). Organizational injury
rate underreporting: the moderating effect of organizational safety
climate. Journal of Applied Psychology, 93(5):1147–1154. DOI:
10.1037/0021-9010.93.5.1147.
Rousseau, J.-M. and Largier, A. (2008). Industries à risques: conduire un
diagnostic organisationnel par la recherche de facteurs pathogènes.
Techniques de l'Ingénieur. AG 1576.
Qureshi, S., Briggs, R. O., and Hlupic, V. (2006). Value creation from
intellectual capital: Convergence of knowledge management and
collaboration in the intellectual bandwidth model. Group Decision and
Negotiation, 15(3):197–220. DOI: 10.1007/s10726-006-9018-x.
Sagan, S. D. (1994). Toward a political theory of organizational reliability.
Journal of Contingencies and Crisis Management, 2(4):228–240. DOI:
10.1111/j.1468-5973.1994.tb00048.x.
Schein, E. H. (1990). Organizational culture. American Psychologist, 45:109–
119. DOI: 10.1037/0003-066X.45.2.109.
Seligman, M. E. P. (1975). Helplessness: On Depression, Development, and
Death. W. H. Freeman, San Francisco. ISBN: 0-7167-2328-X.
Senge, P., Roberts, C., and Smith, B. J. (1990). The Fifth Discipline Fieldbook:
Strategies and Tools for Building a Learning Organization. Currency, New
York, USA.
Sklet, S. (2002). Methods for accident investigation. Technical report,
Norwegian University of Science and Technology. Available at
http://frigg.ivt.ntnu.no/ross/reports/accident.pdf.
Sklet, S. (2004). Comparison of some selected methods for accident
investigation. Journal of Hazardous Materials, 111(1–3):29–37. DOI:
10.1016/j.jhazmat.2004.02.005.
Weick, K. E. and Sutcliffe, K. M. (2001). Managing the unexpected: assuring
high performance in an age of uncertainty. Jossey-Bass. ISBN: 978-0787956271, 224 pages.
Weick, K. E., Sutcliffe, K. M., and Obstfeld, D. (1999). Chapter Organizing
for high reliability: Processes of collective mindfulness in Research in
Organizational Behaviour, volume 21, pages 81–123. Elsevier.
Wrigstad, J., Bergström, J., and Gustafson, P. (2014). Mind the gap between
recommendation and implementation — principles and lessons in the
aftermath of incident investigations: a semi-quantitative and qualitative
study of factors leading to the successful implementation of
recommendations. British Medical Journal Open, 4(5). DOI:
10.1136/bmjopen-2014-005326.
Stoop, J. (1990). Scenarios in the design process. Applied Ergonomics,
21(4):304-310.
Størseth, F. and Tinmannsvik, R. K. (2012). The critical re-action: Learning
from accidents. Safety Science, 50(10):1977–1982. Papers selected from
5th Working on Safety International Conference (WOS 2010). DOI:
10.1016/j.ssci.2011.11.003.
Tucker, A. L. and Edmondson, A. C. (2003). Why hospitals don't learn from
failures: Organizational and psychological dynamics that inhibit system
change. California Management Review, 45(2):55–72. DOI:
10.1225/CMR248.
Turner, B. A. and Pidgeon, N. F. (1997). Man-made disasters. Butterworth-Heinemann. ISBN: 978-0750620871, 200 pages.
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture
and deviance at NASA. University of Chicago Press, Chicago. ISBN: 978-0-226-85175-4.
Vaughan, D. (1999). The dark side of organizations: Mistake, misconduct,
and disaster. Annual Review of Sociology, 25:271–305. DOI:
10.1146/annurev.soc.25.1.271.