

Just Culture Accident Model – JCAM

Captain Shem Malmquist, FRAeS
Conference Paper · June 2017

Inquiries for this work should be sent to shem.malmquist@gmail.com

Abstract

This paper proposes that the concepts developed for Just Culture may provide an avenue to
broaden the scope of accident investigation and move away from the "blame" outcome of most
reports, through the use of a simple Just Culture algorithm to mitigate cognitive bias on the part
of the investigator. Absent a formal strategy, cognitive bias has a high probability of occurring
and becoming integrated into the investigator's subconscious during the early stages of an
accident investigation. Just Culture is becoming widely accepted; as such, an investigative model
built on the concept should be easier to implement and may encounter less political push-back
than some of the more complex approaches proposed in recent years, yet it can still provide a
path to causality and human factors that is more comprehensive than that offered by the
traditional models still in use by most organizations.

Introduction

The early stages of an accident investigation are extremely stressful for investigators.
Considerable time pressure results in long hours and little sleep, often combined with harsh
environmental conditions, and the emotional toll can be significant when there are casualties,
severe injuries and the like. This combination of circumstances challenges the cognitive abilities
of even the most experienced investigators, and increases the probability of error. It is under
these extreme circumstances that investigators will naturally start forming their own initial
impressions and opinions as to what occurred in the accident. Absent a practiced formal
structure, it is extremely improbable for a human to avoid this, as pattern matching (Klein, 2008)
and finding causality are innate human traits (Cohen, Rundell, Spellman & Cashon, 1999).

It is not surprising, then, that accident investigation has been primarily focused on
assigning responsibility (who caused it). There are limits to what this can accomplish, however.
In recent years it has become apparent that the majority of accidents occur while all personnel
involved are trying to get the "job done" while actively trying to avoid an accident. This clearly
makes assigning blame problematic at best. A better approach was sought.

Recognizing that individuals involved in accidents are usually trying to operate in as safe
a manner as possible while completing their job, the primary focus for accident investigators has
shifted from finding out who to blame to instead focusing on how to prevent a recurrence of the
event. There may still be a place for holding an individual responsible, but what do we do if that
action works against our goal of preventing a future accident? While the legal profession has
continued its focus on finding the responsible (hence liable) party, the safety industry has now
mostly shifted to preventing an accident in the first place.

The typical outcome of an accident investigation is a report which lists probable causes,
then, reacting to these “causes”, the industry tries to find mitigations to prevent each of them
from recurring. However, a review of accident reports utilizing a more robust accident model
reveals that many of the reports completely missed the actual factors that led to the accident.

Recognizing that accident investigation was not necessarily preventing more accidents
from occurring, in recent years the industry has also implemented reporting and data collection
in order to see trends and identify "precursor" events. The concept is relatively simple, in that it
is generally agreed that looking at accidents alone does not provide sufficient data to identify
dangerous trends. Once a trend is identified, it can then be "mitigated" to prevent a future
accident. The data collected from safety reporting are limited, however, in that they are typically
generated only following a safety event, and events generally occur in a narrow and very unique
combination of circumstances (Hollnagel, 2004).

Whether investigating a safety event or a full-blown accident, the initial impressions of the
investigative team will "anchor" the investigators to those impressions. This can taint
the entire investigation, leading to missing key components even when more comprehensive
analysis is later applied.

What is needed is a broadly accepted format to frame our accident investigations such
that investigators consider all of the factors, yet one that is still relatively easy to understand and
easy to use. That format should be capable of mitigating cognitive biases before they set in, and
that requires that it be implemented in the very early stages of the investigation. Fortunately, the
concepts developed for Just Culture may provide an avenue to a solution for this problem.

Just Culture

It has been said that "to err is human"; however, an error can easily become a crime in most
cultures. A person who forgets to stop at the store to pick up milk on the way home may meet an
unhappy spouse, but that omission is not a crime. A pilot who forgets to take an action that
results in an accident could be charged with a crime in much of the world, and a surgeon who
does the same, resulting in a patient death, may similarly be charged. In addition, civil liability
may be sought by injured parties. As accidents went from being considered random/
uncontrollable events to most often being attributed to human error, the risk of criminal (and
civil) proceedings as a consequence of those errors increased. If a pilot (or other operator) is in a
position to stop the event, then, the logic follows, they are responsible if they do not do so
(Dekker, 2007).

Society tends to want to find a guilty party who can be held accountable after any
tragedy. It is in our nature, and fits in with the human perception of a causal world (Cohen et al,
1999). As a consequence we put a lot of responsibility on those in positions that can effectuate a
safe outcome. Unfortunately, this also can (and often does) lead to a culture where mistakes and
errors are kept hidden as people try to avoid blame (Reason, 1997). This problem, while not
isolated to front-line employees, is particularly acute when it comes to those who were at the
"sharp end" of any endeavor.

Problems that are buried can hide major flaws in processes, procedures or design that go
unreported due to fear of some sort of penalty, whether that is a verbal reprimand, damage to
professional reputation, civil penalties or criminal charges. James Reason, in looking to find ways to
improve safety reporting, stated that "What is needed is a Just Culture, an atmosphere of trust in
which people are encouraged for providing essential safety-related information—but in which
they are also clear about where the line must be drawn between acceptable and unacceptable
behaviour." (Reason, 1997, p. 195). Subsequently, Eurocontrol published a formal definition of
what they consider a Just Culture (Trögeler, n.d.):

Just Culture has been defined as a culture where front line operators or others are
not punished for actions, omissions or decisions taken by them that are
commensurate with their experience and training, but where gross negligence,
willful violations and destructive acts are not tolerated. This is important in
aviation, because we know we can learn a lot from the so-called ‘honest mistakes’
(Eurocontrol, 2008, p. 11).

A key element of Just Culture is that the outcome must be separated from the act. The
judgment of the behavior has to be made within the bounds of what the person knew at the time.
It is always easy, after the fact, to see that a particular choice would lead to a bad outcome once
that outcome has occurred. This phenomenon is called "hindsight bias" (Tversky and Kahneman,
1973). Sidney Dekker (2007) elaborates on the problem of hindsight bias, stating it will:

• oversimplify causality ("this led to that") because we can start from the outcome and reason
backwards to presumed or plausible "causes";
• overestimate the likelihood of the outcome (and people's ability to foresee it), because we
already have the outcome in our hands;
• overrate the role of rule or procedure "violations." While there is always a gap between written
guidance and actual practice (and this almost never leads to trouble), that gap takes on causal
significance once we have a bad outcome to look at and reason back from;
• misjudge the prominence or relevance of data presented to people at the time;
• match the outcome with the actions that went before it. If the outcome was bad, then the
actions leading up to it must have been bad too—missed opportunities, bad assessment, wrong
decisions and misperceptions (Dekker, 2007, pp. 66-67).

Clearly we will not be able to accurately judge an action as long as we have tied the
action to the outcome; however, untying the two is very difficult to do. To aid in this process,
it is beneficial to constrain one's thinking, forcing a deliberate cognitive process as opposed to
one based on intuition (Kahneman, 2011). As outlined by Dekker (2007), Reason (1997) and
others, a key step in that process is that the pertinent human action be categorized. Marx (2001)
divided the actions of the human operator into four categories:

• Human Error: should have done other than what they did.
• Negligence: failure to exercise expected care; should have been aware of a substantial and
unjustifiable risk.
• Recklessness: conscious disregard of a substantial and unjustifiable risk.
• Intentional Rule Violation: knowingly violates a rule or procedure.

As the last two could be construed to come from the same (flawed) mental model, it is
also possible to reduce these to three categories:

• Error: an unintentional act that was a result of the variation of normal human behavior or
known cognitive weaknesses.
• At-Risk Behavior: this term is less "loaded" than "negligent." The definition here is that the
person knowingly did not follow a procedure, but the motivation was a positive one, such as a
"work-around" for a known problem; or the person may simply have been lazy, but they are not
intentionally taking any risk.
• Reckless: this would capture both the conscious disregard of a substantial and unjustifiable
risk and the intentional rule violations.

These categories are then utilized to determine the corrective action. Those aspects that
are errors are considered to be "system problems", in that the person was trying to do everything
right. The fact that an error occurred indicates that the policy, procedure, training or equipment
was not designed properly, allowing normal human fallibility to result in an unplanned and
undesired event. Human variability is no surprise—it should be planned for.

Those events categorized as at-risk are considered a combination of system and individual
issues. If the policies, procedures, or technology were properly designed, the individual would
not have tried to "work around" the problem; e.g., if a pilot decided to skip a checklist item
because there was not enough time to complete it while a controller was rushing him or her,
that points to a problem in both the checklist (too long) and the air traffic control procedures.
However, there is also an individual component, in that the pilot still knowingly chose to
disregard a procedure. They could have refused the clearance, even if it resulted in a long delay
and passengers missing a connecting flight. The decision they made was the wrong one, for the
"right reason." Hence, if there is a negative outcome, the responsibility should be shared.

In the reckless category (grouping reckless and intentional rule violations into one), there
is nothing wrong with the system. The procedures, rules, policies and design are fine, but we
have an individual intentionally violating them for no external reason. These issues must be
dealt with at the individual level.
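
To make the three-way split and its corrective-action logic concrete, the following minimal
sketch encodes the categories and where the corrective action is directed. The names and
structure are illustrative assumptions for this example only, not part of any published JCAM
tooling.

```python
from enum import Enum, auto


class JustCultureCategory(Enum):
    """The three behavior categories described above (illustrative encoding)."""
    ERROR = auto()     # unintentional act; normal human variability
    AT_RISK = auto()   # knowing deviation, but for a "positive" motivation
    RECKLESS = auto()  # conscious disregard of a substantial, unjustifiable risk


def corrective_action_focus(category: JustCultureCategory) -> str:
    """Map a category to where the text says corrective action should be directed."""
    if category is JustCultureCategory.ERROR:
        # Errors are "system problems": redesign policy, procedure, training or equipment.
        return "system"
    if category is JustCultureCategory.AT_RISK:
        # At-risk acts are a combination: system design plus the individual's choice.
        return "system and individual (shared)"
    # Reckless acts: the system is presumed sound; address the individual.
    return "individual"


if __name__ == "__main__":
    for category in JustCultureCategory:
        print(f"{category.name}: corrective action focuses on the "
              f"{corrective_action_focus(category)}")
```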

Making errors is part of learning, and a learning culture is not only a safer culture, but
also a more efficient one (Hollnagel and Woods, 2006). Just Culture was conceptualized in
order to develop consistency for front line employees, and encourage a safety culture where
employees could feel protected if they brought forward their own mistakes. Without a strong
reporting culture, it is impossible to have a strong safety culture, as the reporting is, literally, the
feedback to the organization. As a consequence, Just Culture has become widely viewed as a
core value to instill a strong safety culture (Flight Safety Foundation, 2004).

Just Culture as an Investigative Tool

Investigators are human, and the ability to separate out our own biases from fact is
extremely challenging (if not impossible) absent a structured method to do so. Time constraints
during the investigation further limit our ability to objectively view events while setting aside
the lens of our own previous experience and bias. Additionally, there may be significant political
pressure to "find someone to blame," pushing the finding towards human error. It can be a lot
less expensive to terminate one employee than to change an entire system found to be flawed, in
terms of both capital and political cost to those in power. Political, financial, cultural and
societal pressures combine with the legal system to encourage the simple approach. It is much
easier to "sell" a finding that is the result of a simple sequential model, and many organizations
would prefer not to look beyond the front-line operator. These models are well known, and
integrated into many policy manuals.

While the front-line investigator has a desire to learn what actually happened, an accident
investigation, like any other endeavor, will, due to a variety of factors, create its own
momentum, carving out a path like a river carving a canyon. Once that path is carved out, it is
very difficult to change. It is important to ensure that the investigation does not carve out a
channel at the outset, and that it progresses with as few biases as possible.

Independently, Just Culture has now been widely accepted as a requirement for a good
safety culture (Federal Aviation Administration, 2006). The use of Just Culture to improve
safety reporting is well documented, and it is being integrated into Safety Management Systems
(Dekker, 2007). Utilizing the Just Culture model as a basis for investigating accidents may then
be a much smaller and simpler incremental step than the wholesale implementation of a
complex systemic accident investigation model.

The concept is that the accident investigators would utilize the Just Culture algorithm at
the outset of the investigation. Each aspect would be viewed to determine whether it falls into
the error, at-risk, or reckless category. The Just Culture algorithm would then be a tool to create
a starting point in the investigation that would ensure that investigators do not fall into the trap
of categorizing the event as a simple human error, and to guard against hindsight bias.

This Just Culture Accident Model (JCAM) can serve as a stand-alone approach for simple
investigations, or be implemented as a tool to be utilized in conjunction with an existing
accident investigation framework. While Just Culture is not necessarily intended to replace other
models, it can provide a framework to prevent some cognitive traps.

What is wrong with existing models?

In his book, Managing the Risks of Organizational Accidents, Reason (1997) discussed
three approaches to safety management, each based on a different model. The first is the
"person model", which views people as free agents who can choose between safe and unsafe
behavior. Based on Reason's (1990) early work, it is the most widely adopted model for
understanding accidents, particularly if its subset, the Human Factors Analysis and Classification
System (HFACS), is included. The model also conveniently allows for a worldview in which
someone is responsible for an accident, and that "someone" is typically the operator at the sharp
end, i.e., "it was a pilot error".

The second is the engineering model, which would argue that the front-line operators
(such as pilots, mechanics, air traffic controllers, etc.) are influenced by the workplace. Human
errors, in this model, are a result of system designers not accounting for the cognitive or physical
strengths and weaknesses of the human controller.

The third is the "organizational model", which considers human error a consequence, or
symptom, of an organizational problem, rather than the cause. It is a quality improvement
approach (Reason, 1997). It is this latter approach that is beginning to gain favor among safety
professionals.

HFACS

Despite the fact that there are several accident models, actual accident investigation has
followed causal models that have been relatively unchanged for many years. Regardless of what
name may be used or what agency is involved, most accident investigation today utilizes a
modified sequential model, typically arranged as a series of events. Depending on the
organization, information on human error found during the process may be analyzed utilizing the
Human Factors Analysis and Classification System (HFACS), developed by Shappell and
Wiegmann (1999).

HFACS was developed to create a method for classifying human error utilizing the Swiss
Cheese Model (Shappell and Wiegmann, 1999). Utilizing HFACS, investigators could work
through the analysis of an event and, assuming it was deemed to be a human error, find out what
sort of error it was. Although classification trees are used (see Figure 2), the basic assumption is
that the accident is a sequential process.

Figure 2 (Shappell & Wiegmann, 1999)

The model breaks down into a number of sub-categories, as can be seen in Figure 3.
Utilizing such a list (the example is not complete), the investigator can either manually, or
through the use of software, assess how to classify the human failure.

Figure 3 (Shappell & Wiegmann, 1999)

There are two notable shortcomings with this process. The first is that the investigator is
generally making the decision
as to the classification based on their own personal judgment. This judgment would be subject to
the full range of human biases or just flaws in their reasoning based on lack of training or
experience (Kahneman, 2011). The second is that HFACS is part of an event chain model.
Leveson (2011) writes:

While dominoes, event chains and holes in Swiss cheese are very
compelling because they are easy to grasp, they oversimplify causality and
thus the approaches used to prevent accidents (p. 91).

More recently, Shappell and Wiegmann (2006) have proposed a supplement to HFACS,
the Human Factors Intervention Matrix (HFIX), to elicit intervention strategies based on
HFACS. Shappell and Wiegmann (2006) recognize the problems associated with cognitive bias
and momentum, and propose HFIX as a way out of this box. While HFIX may provide an
avenue to do that, it would still have to overcome the momentum and bias that has already set
in, and it does not appear to be well suited to use "on the fly." Absent such an approach, however,
the investigation may stop too short. For example, was an omitted checklist item the result of a
procedure that required the operator to utilize prospective memory? Perhaps it was an error due
to another cognitive problem that could be addressed? HFACS, used alone, stops at the list
above, and from that point it is left open as to what remedies might be used. As the organization
has now found the "cause", there is generally little incentive to take further action or to take an
introspective view of any organizational shortcomings.

As outlined by Tversky and Kahneman (1971), people tend to make large errors in
seemingly intuitive probability estimates, which can lead to large errors in assumptions made
regarding the probability of certain risks. This highlights another weakness of an event chain
model: it can lead to artificially low probability estimates by treating coinciding events as
independent failures, which then have such a low probability of occurring in conjunction that
they are disregarded. Leveson (2011) describes some of the problems with this approach in a
discussion of probabilistic risk assessment. Quoting Reason (1990), she writes, "Reason, in his
popular Swiss Cheese Model of accident causation based on defense in depth, does the same,
arguing that in general 'the chances of such a trajectory of opportunity finding loopholes in all
of the defenses at any one time is very small indeed'" (Leveson, 2011, p. 34).
Leveson (2011) continues:

Most accidents in well-designed systems involve two or more low-probability
events occurring in the worst possible combination. When people attempt to
predict the system risk, they explicitly or implicitly multiply events with low
probability – assuming independence – and come out with impossibly small
numbers, when in fact, the events are dependent. This dependence may be related
to common systemic factors that do not appear in the event chain (Leveson, 2011,
p. 34).
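
A small numerical illustration of this point, with invented probabilities that are not drawn from
any accident data: two barriers that each fail once per thousand demands look vanishingly
unlikely to fail together if treated as independent, but a shared systemic factor can make the
joint failure hundreds of times more likely.

```python
# Illustrative only: made-up numbers showing why the independence
# assumption in event-chain risk estimates can be badly optimistic.

p_a = 1e-3            # probability barrier A fails on a given demand
p_b = 1e-3            # probability barrier B fails on a given demand

# Naive estimate: treat the two failures as independent events.
p_joint_independent = p_a * p_b          # 1e-6

# Suppose a common systemic factor (e.g., the same flawed procedure)
# means that when A fails, B fails half the time.
p_b_given_a = 0.5
p_joint_dependent = p_a * p_b_given_a    # 5e-4

print(f"Assuming independence: {p_joint_independent:.1e}")
print(f"With a common factor:  {p_joint_dependent:.1e}")
print(f"Underestimated by a factor of {p_joint_dependent / p_joint_independent:.0f}")
```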

The initial valuations of events formed in the investigator’s mind have a great influence
on subsequent assessments. While there are a great many heuristic biases that can impact the
perceptions of an investigator (a complete discussion is outside the scope of this paper), some
that are fairly obvious include:

• Confirmation bias (Nickerson, 1998), where a person subconsciously filters out information
that disagrees with their preferred mental model, and "allows through" information that
conforms to it;
• Availability heuristic (Kahneman, 2011), where events appear more likely based on their
subjective emotional salience for the individual;
• Anchoring bias (Kahneman, 2011), where an initially provided reference provides an "anchor"
that is then subconsciously used to make valuations.

These biases create a pre-judgment of causality that is extremely difficult to dislodge. While
human factors is becoming part of the process in many parts of the world, even a human factors
expert can be subject to these types of biases, and even if they are not, they can have a difficult
time overcoming the momentum set by those working in the other investigative groups in a
major investigation.

Despite these issues, a sequential model utilizing the HFACS (or similar) framework is
the model most commonly utilized for accident investigation worldwide.

FRAM and STAMP

In response to research indicating that the event chain type of approach to accident
investigation may miss critical aspects of an accident (Leveson, 2011, pp. 389-390), two very
robust systemic models have been developed: the Functional Resonance Accident Model
(FRAM), proposed by Hollnagel (2004), and the System-Theoretic Accident Model and
Processes (STAMP), proposed by Leveson (2011). These are both still considered experimental,
although preliminary work shows that they are quite viable in identifying significantly more
problems than previous methods. The FRAM and STAMP methods were applied by Hollnagel,
Pruchnicki, Woltjer and Etcher (2008) and Nelson (2008), respectively, to the Comair 5191
accident. Each yielded significant findings that were absent from the official report. This is not
to state that every investigation should utilize methods such as FRAM or STAMP. First, it
should be pointed out that in both of these "re-analysis" studies, the team looking at the accident
was looking at it "fresh." It is possible that the original team, already channeled into their first
impressions and the momentum of the investigation, would have been less able to reach new
findings even utilizing these models' structured methodologies. In addition, there are many
situations where a much simpler causal model could suffice, as Hollnagel and Speziali (2008)
point out: "…when faced with the need to investigate an accident it is important that the method
chosen is appropriate for the system and the situation, i.e., that it is capable of providing an
explanation" (p. 38).

While robust, properly implementing either FRAM or STAMP is no small undertaking.
The models are complex and require considerably more work than the traditional sequential
analysis. More significantly, implementing these models would require an organizational change
to "business as usual" from the outset, and the techniques can be daunting even for the most
experienced investigator.

Further, HFACS, FRAM and STAMP all require a large amount of factual evidence. As such,
they are really not designed to be utilized during the early stages of an investigation. It is
unlikely that field investigators would be categorizing the factual information gathered in real
time, as the field investigation is just not conducive to that type of approach. Unfortunately, it is
during these early stages that the investigator is most likely to succumb to the cognitive biases
described previously, which insidiously wind their way into the subconscious thought process,
acting as "filters" as new information comes in. Once in place, these are extremely difficult to
dislodge.

JCAM is not intended to supplant these or other models, but rather to serve as a tool to
channel the investigator into a full consideration of the human factors aspects and offset
cognitive bias at the outset of the investigation. As such, it can be utilized as an adjunct to these
models or independently in the examination of each system or human failure that led to the
accident. More importantly, JCAM can channel the investigation to look more deeply at
multiple factors without adding more work in terms of categorizing items during the field phase.
The last thing the “go-team” wants is more work! Instead, the investigator is just applying
JCAM to each aspect as it is discovered, rather than completely re-thinking the entire process.
The reader is encouraged to explore FRAM and STAMP as both offer the possibility of a more
comprehensive understanding of an accident than is possible with a traditional sequential model.
Regardless of which method is utilized, JCAM can serve as a tool to ensure each process is fully
explored by mitigating the early stages of bias.

Reason (2008) argued that there is no single "correct" accident model; rather, what is
important is a model's practical utility, with the following being facets the model must satisfy:

• Does it match the knowledge, understanding and expectations of its users?
• Does it make sense and does it assist in 'sense-making'?
• Is it easily communicable? Can it be shared?
• Does it provide insights into the more covert latent conditions that contribute to accidents?
• Do these insights lead to a better interpretation of reactive outcome data and proactive
process measures?
• Does the application of the model lead to more effective measures for strengthening the
system's defenses and improving its resilience? (Reason, 2008, p. 95).

JCAM meets all of these objectives. Furthermore, as we can (and should) apply the model at
each successive supervisory level, the use of JCAM also serves to ensure that supervisors are
given the same error, at-risk, or reckless consideration as the front-line employee – this can
help alleviate political push-back.

Using JCAM

The JCAM model can be proceduralized as follows:

1. Determine if personnel were involved in an error, at-risk or reckless act.
2. If it was an error or at-risk, conduct an analysis of the system.
3. If it was reckless, determine what system problems broke down, if any, to allow for
the reckless behavior.
4. Apply other accident models as required.
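
As a rough sketch of how these four steps might be expressed procedurally (the function and
category names below are illustrative assumptions, not an established JCAM implementation):

```python
from enum import Enum, auto


class Category(Enum):
    ERROR = auto()
    AT_RISK = auto()
    RECKLESS = auto()


def jcam_steps(category: Category) -> list[str]:
    """Return the analysis steps the text prescribes for one categorized act."""
    steps = []
    if category in (Category.ERROR, Category.AT_RISK):
        # Steps 1-2: errors and at-risk acts always call for a system analysis.
        steps.append("analyze the system (policy, procedure, training, equipment)")
    if category is Category.AT_RISK:
        # At-risk acts also retain an individual component (shared responsibility).
        steps.append("examine the individual's decision to deviate")
    if category is Category.RECKLESS:
        # Step 3: ask what system weaknesses allowed the reckless behavior.
        steps.append("determine what system problems, if any, allowed the behavior")
        steps.append("address the behavior at the individual level")
    # Step 4: JCAM does not replace other models.
    steps.append("apply other accident models as required")
    return steps


print(jcam_steps(Category.AT_RISK))
```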

These concepts should be taught to all investigators so that, from the very moment they get
the call to join the accident investigation, the investigator looks at each aspect through the JCAM
lens. Doing so can help mitigate the trap of cognitive biases described earlier.

If the sharp-end operator falls into the error or at-risk categories, that would indicate we
must look for a system problem. This problem could lead us to another person, or to a flawed
procedure. If we reach another person who made a flawed decision (e.g., someone who designed
a checklist that contained a procedural problem), that person then becomes our new "sharp end,"
and we should again apply the Just Culture algorithm to that person to determine if we have
more system issues. What was in the framework that led that person to make that choice or
decision? This process can be continued, as each aspect has a "sharp" and a "blunt" end, with
each blunt end being another's sharp end (Hollnagel, 2004).
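
This "each blunt end is another's sharp end" progression amounts to a simple loop: classify the
current actor, and if the classification points back into the system, move to whoever shaped that
part of the system and classify again. A minimal sketch follows, with an invented classify
callback standing in for the investigator's Just Culture judgment; the names and data are
hypothetical.

```python
from typing import Callable, Optional


def walk_sharp_ends(
    first_actor: str,
    classify: Callable[[str], str],
    upstream_of: Callable[[str], Optional[str]],
    max_depth: int = 10,
) -> list[tuple[str, str]]:
    """Repeatedly apply the Just Culture categorization, moving from each
    sharp end to the blunt end (the next upstream decision-maker) until no
    further system contributor is identified."""
    findings = []
    actor: Optional[str] = first_actor
    for _ in range(max_depth):           # guard against an endless chain
        if actor is None:
            break
        category = classify(actor)       # "error", "at-risk" or "reckless"
        findings.append((actor, category))
        # Errors and at-risk acts point to a system problem, which may in
        # turn lead to another person whose decision shaped that system.
        actor = upstream_of(actor) if category in ("error", "at-risk") else None
    return findings


# Toy example: a pilot's error traces back to a checklist designer's at-risk choice.
chain = {"pilot": "checklist designer", "checklist designer": None}
labels = {"pilot": "error", "checklist designer": "at-risk"}
print(walk_sharp_ends("pilot", labels.get, chain.get))
```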

Ideally, as the investigation continues, a human factors expert will be utilized to analyze
the accident to find what sort of cognitive issues were present and determine which category to
utilize. For example, did the system design require a person to use prospective memory, or was a
warning not salient? Was the person startled, inhibiting their thought process? What about
feedback? Did the person have any way of actually being aware of the problem? If there was
feedback, was it salient?

While not complex, the approach of Just Culture may be a dramatic paradigm shift for
many people. It may, therefore, be instructive to explore the foundation of each category to
better understand the approach.

The Error Classification

How could it have been an “error”? This is a response encountered by the author in
discussing the JCAM approach with other investigators. Perhaps, though, the question should be
posed in a different way: "Would a person get in a car or an airplane knowing in advance that
there would be an accident?" The question is silly on its face. No sane person would take part in
any activity if they knew for certain the outcome would be bad. The fact is that they do not have
that knowledge. A pilot does not know that they will make a critical error in judgment resulting
in an unrecoverable situation any more than a driver knows that a tire is at a critical point so it
will blow when at speed on the road. If the person had the feedback that there was a problem
they would likely take action prior to the event. Of course, they have to trust that the feedback is
accurate, but assuming that it is, imagine the driver of the car with the faulty tire. Would that
person still drive the car at 120 km/h knowing that the tire would fail at that speed? What if
there was a mechanism to let the operator know? That feedback would change the outcome.

It needs to be understood that an error is actually a manifestation of normal human
behavior. Given the information and conditions present at the time, would a significant
percentage of people take the same actions? Was the person acting in a manner that, absent a set
of unique circumstances, would normally not have resulted in an accident? An error should not
lead to an accident absent problems in the system design. It also follows that an error that leads
to no negative outcomes would not be investigated, implying that the controls or barriers in the
system worked. Humans behave within a predictable set of tolerances and responses, and,
knowing this, if controls and barriers are not in place to prevent an accident while a person is
acting within the framework of normal cognition and responses, then the system must be
re-designed to provide that protection.

Feedback is often referred to as a problem of perception (Shappell and Wiegmann, 1999).
The analysis should explore what information the operator had and whether that information was
accurate. Does the feedback mechanism consider human cognitive issues and salience, and allow
for sufficient time to prevent a negative outcome? Does it reduce the heuristic response, or
channel such a response appropriately? The Just Culture algorithm can assist with this.

At-Risk

An at-risk event requires analysis along two paths. First, at the individual level, the factors
that led the person to disregard the policy or procedure must be explored in the same manner as
a "reckless" act might be investigated. Second, coupled to this, is the system level. An at-risk
event implies a coupling between the individual and other system features, and that might
exclude accident methods that assume only causal processes. If it is both a system and an
individual problem, these two mechanisms are by definition "tightly coupled," meaning that
there are likely complex systemic issues involved. Which caused the negative event? Was there
an actual final lapse that led to the accident? It is possible to have an event where two normal
variances, each operating within expected parameters, unexpectedly overlapped, leading to the
accident (Hollnagel, 2004). That might change how we view the human action.

Reckless

By definition, a reckless event was a direct consequence of an operator's conscious
disregard of a substantial and unjustifiable risk. However, individuals do not operate in a
vacuum, so the factors that led up to the behavior should be investigated. Were there cues
missed? What other factors might be involved?

As previously stated, but worth reiterating, it is important to recognize that JCAM does
not stop at the sharp-end operator. Again, every blunt end is another person's sharp end
(Hollnagel, 2004). Perhaps the reckless action was not on the part of the operator, but actually
their supervisor, or a result of the culture itself. If an individual committed a reckless act, then
did the supervisor commit an error when they missed cues that might have been precursors to the
behavior? Were there policies and procedures in place to identify and adequately address such
individuals? Was it an at-risk action on the part of management, where the supervisor did not
choose to follow procedures even though the indications of a problem individual were present?

As can be seen, JCAM's path can move across the system. As an example, consider just
the first listed “probable cause” in the National Transportation Safety Board (NTSB) report of
the 1994 accident of USAir 1016 in Charlotte, NC (NTSB, 1994). This was a windshear
accident in which the flight crew encountered severe windshear on approach to landing, and hit
terrain during an attempted missed approach. In reviewing the first of the NTSB's cited probable
causes, what occurs if JCAM is applied?

1. The flight crew’s decision to continue the approach into severe convective activity that
was conducive to a microburst.

Would this be considered an error, at-risk or reckless? It would be hard to argue that the crew
knew there was actually severe weather and then intentionally flew into it. Who would do that?
Were these “risk taking” people? What information did the crew actually have at the time?
Would other crews likely make the same error today, given the same information?

From the report, the flight crew knew there was convective weather in the area. Is that
significant? Well, it might appear that way, but the truth is that air carrier pilots fly in the
vicinity of convective weather on a regular basis. In fact, if they did not do so, air traffic would
come to a standstill a good part of the year. Were they deviating from established procedures?
If so, that might push it to “at-risk”. It could be argued that to “keep the system working”,
virtually all flight crews are operating in an “at-risk” margin during most of their Springtime
flights. If that is the case, is that due to inadequate procedures or is that due to flight crews
“pushing the envelope”? It can easily be established that air carrier crews routinely, and without
negative outcome, fly closer to convective weather than the regulatory guidelines recommend
(Rhoda & Pawlak, 1999). If most flight crews would do the same, can we really argue that it is
not an error? What are the system problems that led the crew to make that mistake? Have they
been addressed? Based on the research of Rhoda and Pawlak (1999), it would appear that they
have not been.2 Perhaps our methods of assessing severe weather in the approach environment
are inadequate?

2 A data analysis by the author found that after the data is normalized for the number of departures and the
amount of severe weather present, the rate of severe weather penetrations remains unchanged despite the
implementation of various improvements in technology and training.

If we establish that it was an "error", then the probable cause listing the crew's decision
would have to change, and the focus would move to what factors led to that decision. This
would result in a deeper examination of their mental models along with the actual information
they were receiving.

Utilizing JCAM, we find that "reckless" is eliminated, and we are left with either an
"error" or "at-risk". Examining the evidence, we can see that the pilot appeared not to be aware
of the severity of the weather. Did the pilot have the ability to ascertain the severity before it
was too late? What factors went into the pilot's decision making? From the report we have the
following sources:
1. Visual, out the window: The pilot was receiving information from visual observation
that they could see the runway through "a thin veil of rain" (NTSB, 1994, p. 5).
2. ATC: However, the report spends a great deal of time discussing the fact that the crew
was given inadequate information from ATC to assess the severity of the weather.
3. Pilot reports: These are extremely salient, and pilots tend to give great weight to a
pilot report from an aircraft just a few miles (a minute or two) in front of them. It has
been, and remains, a key factor in the decision making process of pilots. In Charlotte,
as with the tragic accident of Delta 191 at DFW, the aircraft just ahead of the accident
flight reported relatively benign conditions.
4. Weather radar: Unfortunately, most pilots set the radar tilt below where it would
need to be to detect an impending microburst,3 so the weather radar would just serve
to confirm the visual observation of "a thin veil of rain" (it is possible some
confirmation bias would also be a factor in this scenario).

Utilizing these tools, it appears that the crew's actions should fall into the "error" category,
as it is most probable that the crew did not know that they were continuing into severe weather.
That would push the entire investigation towards examining how to better provide accurate
information to flight crews so they could make informed decisions.

Conclusion

Accident investigation has moved from simple sequential causal models to a deeper
consideration of all aspects, with a strong emphasis on human factors. Models such as HFACS,
and the more comprehensive FRAM and CAST, are able to capture more significant aspects than
was previously possible. Unfortunately, these models have a limitation in that they are more
work-intensive and not easy to implement before a significant quantity of factual information has
been collected, and by the time the investigation has reached the stage where those models can
be used, many investigators may already be biased in their assessment. Once a bias has "set in,"
it is extremely difficult to remove, and it acts at an unconscious level in influencing human
judgment.

Just Culture is gaining widespread support in high-risk industries, from the medical field
and nuclear industry to aviation. This momentum can be captured to pull the industry towards a
more comprehensive systemic model, with JCAM becoming a starting point. The combination of
the existing industry "buy-in" with the protections that are also afforded to those in successive
supervisory roles should alleviate much of the political push-back to implementing a new model.

JCAM will channel investigators to consider human factors in a more careful manner,
preventing gaps in the investigation and mitigating investigator cognitive biases. JCAM focuses
on the human element, placing renewed emphasis on exploring it at all levels of failure.
Implementing the JCAM strategy at the outset can mitigate the cognitive traps investigators may
fall prey to in the early stages of an accident investigation, and can continue to mitigate them
throughout the process. Hence, JCAM can be a valuable addition to the safety arsenal.

3 Most flight crew manuals recommend a tilt setting of 5-7 degrees "nose-up" in the approach environment. This
leads to a radar scan of just 8,000 to 10,000 feet above the ground when within 10 miles of the airport on a normal
glideslope. Unfortunately, the main body of water in an impending microburst is generally in the range of
15,000-20,000 feet (Wolfson, 1988).
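
As a rough arithmetic check on the figures in footnote 3 (the geometry is a simplification
assumed for this illustration: beam-centre height evaluated about 10 nautical miles ahead of an
aircraft on a 3-degree glideslope, ignoring beam width and earth curvature):

```python
import math

NM_TO_FT = 6076.0          # feet per nautical mile
range_nm = 10.0            # distance ahead of the aircraft, roughly over the airport
glideslope_deg = 3.0       # typical approach glideslope (assumed)

# Aircraft height above the field at ~10 NM on a 3-degree glideslope.
aircraft_height_ft = range_nm * NM_TO_FT * math.tan(math.radians(glideslope_deg))

for tilt_deg in (5.0, 7.0):
    # Rise of the beam centreline over the same distance at the recommended tilt.
    beam_rise_ft = range_nm * NM_TO_FT * math.tan(math.radians(tilt_deg))
    scan_height_ft = aircraft_height_ft + beam_rise_ft
    print(f"tilt {tilt_deg:.0f} deg -> beam centre ~{scan_height_ft:,.0f} ft above the field")

# Prints roughly 8,500 and 10,600 ft, consistent with the footnote's 8,000-10,000 ft,
# and well below the 15,000-20,000 ft where the main body of water in an
# impending microburst is found (Wolfson, 1988).
```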

References

Cohen, L. B., Rundell, L. J., Spellman, B. A., & Cashon, C. H. (1999). Infants' perception of
causal chains. Psychological Science, 10(5), 412-418.

Dekker, S. (2007). Just Culture. Farnham, England: Ashgate.

Eurocontrol. (2008). Just Culture Guidance Material for Interfacing with the Judicial System.
Brussels: EATM Infocentre.

Federal Aviation Administration. (2006). Introduction to Safety Management Systems for Air
Operators. AC No. 120-92. Retrieved from http://www.airweb.faa.gov/Regulatory_
and_Guidance_Library/rgAdvisoryCircular.nsf/0/6485143d5ec81aae8625719b0055c9e5/
$FILE/AC%20120-92.pdf

Flight Safety Foundation. (2004). A Roadmap to a Just Culture: Enhancing the Safety
Environment. Retrieved from http://flightsafety.org/files/just_culture.pdf

Hollnagel, E. (2004). Barriers and Accident Prevention. Surrey, England: Ashgate.

Hollnagel, E., & Woods, D. D. (2006). Epilogue: Resilience engineering precepts. In Resilience
Engineering: Concepts and Precepts. Surrey: Ashgate.

Hollnagel, E., Pruchnicki, S., Woltjer, R., & Etcher, S. (2008). Analysis of Comair flight 5191
with the Functional Resonance Accident Model. Retrieved from http://www.crc.mines-
paristech.fr/csi/files/Hollnagel-et-al--FRAM-analysis-flight-5191.pdf

Hollnagel, E., & Speziali, J. (2008). Study on Developments in Accident Investigation Methods:
A Survey of the "State of the Art". In STATENS KÄRNKRAFTINSPEKTION. Retrieved
from http://hal.archives-ouvertes.fr/docs/00/56/94/24/PDF/SKI-Report2008_50.pdf

Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

Klein, G. (2008). Naturalistic decision making. Human Factors: The Journal of the Human
Factors and Ergonomics Society, 50(3), 456-460.

Leveson, N. (2011). Engineering a Safer World. Cambridge: MIT Press.

Marx, D. (2001). Patient safety and the "Just Culture": A Primer for health care executives,
Report for Columbia University under a grant provided by the National Heart, Lung and
Blood Institute. Retrieved from http://www.unmc.edu/rural/patient-
safety/tools/Marx%20Patient%20Safety%20and%20Just%20Culture.pdf.

National Transportation Safety Board. (1994). Flight into Terrain During Missed Approach,
NTSB/AAR-95/03. Washington, D.C.: NTSB.


Nelson, P. (2008). A STAMP analysis of the LEX Comair 5191 accident. Retrieved from
http://sunnyday.mit.edu/safer-world/nelson-thesis.pdf.

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review
of General Psychology, 2(2), 175.

Reason, J. (1990). Human Error. Cambridge, England: Cambridge University Press.

Reason, J. (1997). Managing the Risks of Organizational Accidents. Hampshire, England:
Ashgate.

Reason, J. (2008). The Human Contribution. Surrey, England: Ashgate.

Rhoda, D. A., & Pawlak, M. L. (1999). An assessment of thunderstorm penetrations and
deviations by commercial aircraft in the terminal area. Massachusetts Institute of
Technology, Lincoln Laboratory, Project Report NASA/A-2, 3.

Shappell, S., & Wiegmann, D. (1999). Human Error, Safety, and System Development.
HESSD'99 Pre-Proceedings. Retrieved from
http://www.hf.faa.gov/docs/508/docs/HFACS1999Ca.pdf.

Trögeler, M. (n.d.) Criminalisation of air accidents and the creation of a Just Culture. Retrieved
from http://www.eala.aero/library/Mildred%20Trgeler%20EALA%20prize.pdf

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and
probability. Cognitive Psychology, 5, 207-232.

Wolfson, M. M. (1988). Characteristics of microbursts in the continental United States.
Cambridge: MIT. Retrieved from http://www.ll.mit.edu/publications/journal/pdf/
vol01_no1/1.1.4.microbursts.pdf
