Noncoercive human intelligence gathering
Article (Published Version)
Dando, Coral J and Ormerod, Thomas C (2019) Noncoercive human intelligence gathering.
Journal of Experimental Psychology: General. ISSN 0096-3445
This version is available from Sussex Research Online: http://sro.sussex.ac.uk/id/eprint/88953/
This document is made available in accordance with publisher policies and may differ from the
published version or from the version of record. If you wish to cite this item you are advised to
consult the publisher’s version. Please see the URL above for details on accessing the published
version.
Copyright and reuse:
Sussex Research Online is a digital repository of the research output of the University.
Copyright and all moral rights to the version of the paper presented here belong to the individual
author(s) and/or other copyright owners. To the extent reasonable and practicable, the material
made available in SRO has been checked for eligibility before being made available.
Copies of full text items generally can be reproduced, displayed or performed and given to third
parties in any format or medium for personal research or study, educational, or not-for-profit
purposes without prior permission or charge, provided that the authors, title and full bibliographic
details are credited, a hyperlink and/or URL is given for the original metadata page and the
content is not changed in any way.
http://sro.sussex.ac.uk
Journal of Experimental Psychology:
General
Noncoercive Human Intelligence Gathering
Coral J. Dando and Thomas C. Ormerod
Online First Publication, December 23, 2019. http://dx.doi.org/10.1037/xge0000724
CITATION
Dando, C. J., & Ormerod, T. C. (2019, December 23). Noncoercive Human Intelligence Gathering.
Journal of Experimental Psychology: General. Advance online publication.
http://dx.doi.org/10.1037/xge0000724
Journal of Experimental Psychology: General
© 2019 American Psychological Association
ISSN: 0096-3445
2019, Vol. 1, No. 999, 000
http://dx.doi.org/10.1037/xge0000724
This document is copyrighted by the American Psychological Association or one of its allied publishers.
Content may be shared at no cost, but any requests to reuse this content in part or whole must go through the American Psychological Association.
Noncoercive Human Intelligence Gathering
Coral J. Dando, University of Westminster
Thomas C. Ormerod, University of Sussex
Despite widespread recognition that coercive methods for intelligence gathering are unethical and
counterproductive, there is an absence of empirical evidence for effective alternatives. We compared 2
noncoercive methods—the Modified Cognitive Interview (MCI) and Controlled Cognitive Engagement
(CCE)—adapted for intelligence gathering by adding a moral frame to encourage interviewees to
consciously consider sharing intelligence. Participants from the general population experienced an
unexpected live event where equipment was damaged, and an argument ensued. Prior to interview,
participants were incentivized to withhold information about a target individual implicated in the event.
CCE yielded target information more frequently than MCI (67% vs. 36%). Similarly, framed interviews
yielded target information more often than unframed interviews (65% vs. 39%). The effects of interview
and framing appear to be additive rather than interactive. Our results indicate that combining
noncoercive interview methods with moral framing can enhance intelligence gain.
Keywords: human intelligence, interview, noncoercive, framing, information-gathering
Coral J. Dando, School of Psychology, University of Westminster; Thomas C. Ormerod, School of Psychology, University of Sussex.
This work was funded by the U.S. Government's High-Value Detainee Interrogation Group (HIG) Contract DJF-16-1200-V-0000737. Statements of fact, opinion and analysis in the article are those of the authors and do not reflect the official policy or position of the HIG or the U.S. Government. We thank the team of researchers and interviewers who helped make this research possible, and the actors who brought this event to life.
Correspondence concerning this article should be addressed to Coral J. Dando, School of Psychology, University of Westminster, 115 New Cavendish Street, London W1W 6UW, United Kingdom. E-mail: c.dando@westminster.ac.uk

I have spoken with people at the highest levels of intelligence and I asked them the question, does it work, does torture work, and the answer was yes, absolutely.
—Donald Trump, January 26, 2017, ABC News

Intelligence interviewers involved in national security operations seek to gain useful information in situations of conflicting interest where interviewees (e.g., detainees) typically wish to withhold information (e.g., Human Intelligence Collector Operations, 2006). Calls for the use of torture create a need for psychological research to test intelligence-gathering methods (Meissner, Surmon-Böhr, Oleszkiewicz, & Alison, 2017; Vrij et al., 2017). National security restrictions and the ethics of psychological research mean that the outcomes of torture cannot be evaluated empirically. However, interview transcripts and post hoc accounts from interrogators and detainees (e.g., Kassin, 2017; Porter, Rose, & Dilley, 2016; Vanderhallen & Vervaeke, 2014) indicate that coercive and aggressive techniques (e.g., Inbau, 2013) result in false confessions and incorrect or misleading information (Meissner, Redlich, Bhatt, & Brandon, 2012) and can entrench attitudes, culminating in an unwillingness to yield any information (Goodman-Delahunty, Martschuk, & Dhami, 2014; Intelligence Science Board, 2009).

If, as many believe, coercive methods are contraindicated for information gain, what of noncoercive methods? Despite recent concerns regarding coercive methods and a realization that they may not be effective (Senate Select Committee on Intelligence, 2014), there is little psychological research aimed at testing intelligence-gathering alternatives (Meissner et al., 2017; Vrij et al., 2017). Indeed, we know of no empirically evaluated noncoercive interview protocols designed specifically for human intelligence gathering.1

1 It has been suggested that the Scharff technique may offer advantages for intelligence gathering (May & Granhag, 2016). However, to date there is no consistent interview protocol associated with the technique, it has yet to be evaluated in the field, and in its current form it resembles a set of general best-practice guidelines rather than an interview method per se.

Evans and colleagues (2014) compared the effectiveness of interrogation techniques taken from the US Army field manual (2-22-3; 2006). In particular, they compared direct questioning, in which answers to open-ended questions are followed up by probes for detail, with emotional framing of direct questioning. Emotional framing was of two kinds: positive, using "fear-down" and "pride-and-ego up" techniques in which, for example, the seriousness of the offense was minimized (fear-down) and the participant was complimented on their honesty (pride-and-ego up); and negative, using "fear-up," "pride-and-ego down," and "futility" techniques in which the seriousness of the offense and consequences for the participant (fear-up), the impact of being seen to be dishonest (pride-and-ego down), and the ease with which the offense could be detected (futility) were emphasized. Emotional framing was more effective in gaining admissions of complicity and other event information than direct questioning alone, with positive framing out-performing negative framing.
This study is valuable in showing the potential for adapting interrogation methods to gain more information. However, it differs from the study reported below in two key respects: participants were complicit in the offense event (they cheated along with a confederate) and were thus withholding information about themselves, and the protocols used were designed for detecting deception via maximization and minimization approaches found in coercive interviewing methods (e.g., Inbau, 2013), which may account for the high participant attrition rate (73 of the 196 originally recruited participants did not complete the study).
Here we introduce two methods for yielding intelligence
from interviewees who have been incentivized to withhold
information about events they have seen and heard rather than
been involved in. Criminal and terrorist cells typically operate
with individuals in the cell not knowing each other or the
identity of their leaders, and often they will not be directly
involved in front-line criminal or terrorist activity. Yet, they
will be in possession of information of interest to the authorities, which they will not wish to yield (e.g., Harms, 2017;
Harris-Hogan, 2013). The methods build upon interview protocols, selected because they have been subjected to empirical
field as well as laboratory trials, which were originally designed
to maximize information yield in the context of detecting deception: Controlled Cognitive Engagement (Ormerod & Dando,
2015) and Modified Cognitive Interviewing (Colwell et al.,
2009; De Rosa et al., 2018; Morgan, Rabinowitz, Leidy, &
Coric, 2014; Morgan, Rabinowitz, Palin, & Kennedy, 2015).
We investigate the utility of the methods, both in their original
form and with a protocol created specifically to encourage
information yield from interviewees incentivized to withhold
elements of an event.
The Cognitive Interview (CI; Geiselman & Fisher, 2014) provides a theoretically grounded, empirically supported technique for
retrieving information from cooperative individuals (witnesses,
victims, and persons of interest). The CI is well documented,
comprising a series of mnemonics and strategies aimed at optimizing the social and communication aspects of an interview (see
Geiselman & Fisher, 2014). Although not designed for uncooperative interviewees, a number of CI techniques have been used to
detect deception by amplifying the differences between deceivers
and truth tellers (see Morgan et al., 2014 for a protocol).
Morgan et al. (2014) used a Modified Cognitive Interviewing
method, which they refer to as the MCI, comprising five mnemonic prompts: a) tell me everything in as much detail as you
can, starting at the beginning, b) a visual account, c) an auditory
account, d) an emotional account, and e) a temporal account,
following which interviewees are invited to add any missed
information and alter any errors. A typical finding in deception
studies, and one that builds upon the reality monitoring framework of Johnson and Raye (1981), is that verbal accounts of
truth-tellers contain richer descriptions of the memorial details
of events than those of deceivers (e.g., De Rosa et al., 2018;
Memon, Fraser, Colwell, Odinot, & Mastroberardino, 2010).
The MCI mnemonics aim to capitalize upon this difference,
thereby amplifying differences in verbal behavior of liars and
truth-tellers. Empirical evaluations of MCI confirm this effect,
resulting in improved deception detection (Colwell, Hiscock-Anisman, Memon, Rachel, & Colwell, 2007; Morgan et al.,
2014). Truth-tellers appear to benefit from the mnemonics
while the same mnemonics can induce cues to deceit in deceivers such as increased errors, shorter responses, and information
leakage (Colwell et al., 2007, 2009; Morgan et al., 2014, 2015;
Vrij et al., 2008). In laboratory studies with professionals and
lay interviewers, the MCI resulted in the detection of between
82 and 92% of deceivers (Morgan et al., 2014, 2015).
Controlled Cognitive Engagement (CCE; Ormerod & Dando,
2015) is an interview method that focusses upon coherence, consistency, and behavioral reactions to interviewer challenges. The
CCE protocol invokes three cyclical stages of a) baselining, or
building rapport and establishing a neutral behavioral baseline, b)
information-gathering questions that commit the interviewee to an
account of the truth, and c) veracity testing, in which the interviewer asks questions that pose tests of expected knowledge,
setting up an implicit challenge for deceptive interviewees that
triggers behavior change. CCE operates via carefully managed
question types and employs tactical questioning techniques
(Dando & Bull, 2011; Dando, Bull, Ormerod, & Sandham, 2015;
Parkhouse & Ormerod, 2018; Vrij et al., 2008). In an in vivo,
double-blind, randomized controlled field trial conducted in international airports, CCE produced the highest levels of deception
detection found to date (up to 74% accuracy against a 1:1000
deceiver: truth-teller base rate). Encouragingly for the current
context, CCE significantly increased the amount of information
gained from both truth-tellers and deceivers compared with current
security interview practices.
Neither MCI nor CCE was developed for intelligence gathering, and so intentionality on the part of the interviewer is not explicit in either method (Perloff, 2010); rather, both seek to detect deception by discreetly depleting the cognitive resources of deceivers compared with truth-tellers (Ormerod & Dando, 2015;
Vrij, Fisher, Mann, & Leal, 2006; Vrij, et al., 2008). Hence,
interviewees may unknowingly leak information when answering
questions. As the interview progresses, if interviewees become
aware that they have inadvertently revealed information, this may
lead them to yield additional information to save face or to recover
consistency of their account. Similarly, deceivers may become less
coherent and more inconsistent as the interview progresses, resulting in information leakage or a decision to yield information to
maintain consistency.
The mnemonic prompts of MCI capture differences between
truth-tellers’ and deceivers’ perceptual experiences of events (the
sounds, sights, emotions, etc.; Morgan et al., 2014, 2015). In
contrast, the tactical questioning of CCE aims to capture differences between truth-tellers and deceivers in their reactions to
challenges to reveal information they ought to know if their
account is true, thus focusing on individuals’ conceptual understandings of events. Research has emphasized a distinction between conceptual and perceptual memory processes. Implicit perceptual memory concerns physical or sensory event units whereas
explicit conceptual memory focuses on meaning and semantic
features of an event (e.g., Gong et al., 2016; Roediger, 1990;
Roediger & McDermott, 1995; Srinivas & Roediger, 1990; Vakil,
Wasserman, & Tibon, 2018). Accordingly, it is reasonable to
expect that the target information yielded may differ both qualitatively and quantitatively across MCI and CCE interview conditions.
NONCOERCIVE HUMAN INTELLIGENCE GATHERING
This document is copyrighted by the American Psychological Association or one of its allied publishers.
Content may be shared at no cost, but any requests to reuse this content in part or whole must go through the American Psychological Association.
Adapting MCI and CCE Techniques for
Intelligence Gathering
We then considered how to move from covert intentionality on the part of the interviewer to overt intelligence gathering with MCI and CCE, being explicit about the need for information
rather than relying on depleted cognitive resources. We included a
persuasive message within each technique to encourage the interviewee to consciously yield information despite being incentivized
to withhold. To maximize the impact of the persuasive message,
we encouraged effortful processing with reference to dual-process
models of persuasion (e.g., Petty & Cacioppo, 1986; Chaiken,
Liberman, & Eagly, 1989), which postulate that motivating people
to process persuasive messages carefully can improve persuasion
outcomes (e.g., Meyers-Levy & Maheswaran, 2004; Martin, Hewstone, & Martin, 2007). Drawing on one model, the Heuristic-Systematic Model of persuasion (HSM; Chaiken et al., 1989), we
motivated participants to actively scrutinize the persuasive message using positive framing, which originates from prospect theory
(Tversky & Kahneman, 1981). Faced with two choices— one
posing little risk, the other posing more—preference for one option
over another can be influenced by the manner in which alternatives
are framed.
This effortful type of processing is in contrast to heuristic
processing, which is comparatively effortless and characterized by
the application of simple decision rules such as the credibility of
the source, or here the financial incentivization to withhold target
information, for example. In intelligence gathering contexts, both
types of cognitive processing, heuristic and systematic, may
have value. Indeed, simultaneous processing of persuasive messages is believed to be commonplace (e.g., Petty, Cacioppo, Strathman, & Priester, 2005). However, a consistent finding is that
systematic processing typically improves persuasion outcomes,
and when systematic processing is appreciable, heuristic cues have
less persuasive impact (Martin et al., 2007; Neuwirth, Frederick, &
Mayo, 2002; Teng, Khong, & Goh, 2015; Wegener, Petty, Smoak,
& Fabrigar, 2004). We also made the interviewer’s intentions overt
using positive moral framing, which has been effective in improving motivation to change eating and exercise behavior and consumer choice (Moon, Bergey, Bove, & Robinson, 2016; O’Keefe
& Jensen, 2006). Moral framing has also been found to reduce the
gap between political opponents by emphasizing similarities over
differences (Feinberg & Willer, 2013, 2015).
Current Research
Below we report an empirical evaluation of MCI and CCE
techniques for human intelligence-gathering, comparing them to
adapted versions of each, which we refer to as Framed-MCI and
Framed-CCE. In the adapted versions, each information-gathering
request is preceded by a positive responsibility frame that explicitly highlights personal responsibilities and alternative outcomes.
The intention was to encourage interviewees to process the persuasion message systematically and to provide a more complete
account of what they saw rather than to choose the alternative,
incentivized option of withholding event information.
In the study, participants witnessed an unexpected live event where conditions of social power, in-group favoritism, and out-group bias were manipulated to encourage affinity for one of the scenario actors (a confederate playing the role of a student whom we refer to as the “student”) over another (a confederate playing the
role of a researcher). Initial situational judgments of blame were
measured, following which participants were incentivized to withhold all information about the student and her involvement in the
event. Interviewers naïve to the event interviewed participants to
elicit detailed event information using one of the four interview
techniques. To understand the efficacy of the techniques for intelligence interviewing, we considered preinterview blame judgments, target information yield, post interview blame judgments,
and interviewee’s perceptions of their verbal behavior to gain
insight into the locus of effects. We are not concerned with the
detection of deception nor with eyewitness accounts per se, rather
whether reluctant participants yielded any target information and if
so, how much target information they yielded.
We formulated the following hypotheses. First, because the
unmodified MCI and CCE techniques were devised to raise cognitive load, we hypothesized that both would result in some target
information yield despite participants being incentivized to withhold. Second, we hypothesized that participants in the MCI condition would yield less target intelligence because it focusses less
than CCE on the concrete elements of an event. Third, the FramedMCI and Framed-CCE techniques were devised both to raise
cognitive load and to persuade participants to consciously yield,
and so we hypothesized that in combination our modifications
would increase the amount of target information yielded compared
with the unmodified versions. Fourth, information yield in the
MCI and CCE unframed conditions would likely be a result of unconscious leakage, and so, post interview, participants would be
less aware that they had revealed target information than participants in the Framed-MCI and Framed-CCE conditions.
Method
Participants
A total of 157 adult participants from the general population
took part as interviewees (60 males & 95 females). Participants
were recruited from Sussex (South of England), Wolverhampton
(West Midlands of England), and London through word of mouth,
advertisements placed on social media, and flyers distributed in
local cafes. An a priori G*Power analysis indicated that this sample size of 157 was sufficient to detect a medium effect at the 95% confidence level, with 0.80 power (Cohen, 1988). The US Federal Bureau of Investigation Institutional Review Board and University of Westminster and
University of Sussex research ethics committees approved the
study. The mean age of participants was 28.36 years (SD = 5.29),
ranging from 18 to 45 years. Forty-three (27%) were randomly
assigned to the Modified Cognitive Interview (MCI), 37 (24%) to
Controlled Cognitive Engagement (CCE), 37 (24%) to the Framed
Modified Cognitive Interview (F-MCI) and 40 (26%) to the
Framed Controlled Cognitive Engagement (F-CCE). Eight interviewers took part (CT; KV; CJ; TC; LU; SH; FE; SO) with a mean
age of 33.49 years (SD = 19.12), ranging from 23 to 56 years. The
aforementioned interviewers conducted 24, 19, 18, 18, 21, 22, 18,
and 17 interviews respectively across each of the four conditions,
completing all interviews in one condition before undergoing
training for the next condition then completing all interviews in
4
DANDO AND ORMEROD
that condition and so on. Interviewers and participants were naïve
to the experimental hypotheses and research design.
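The sample-size calculation was run in G*Power; a minimal Python sketch of a comparable a priori calculation is shown below. The test family (four-group between-subjects F test), the effect-size metric (Cohen's f = 0.25 for a medium effect), and α = .05 are assumptions, because the article reports only a medium effect and .80 power, so the output need not equal 157.

```python
# Sketch of an a priori power calculation comparable to the G*Power analysis
# described above. The effect-size metric (Cohen's f = 0.25), the alpha level,
# and the four-group ANOVA test family are assumptions, not values from the article.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25,  # medium effect (Cohen's f)
                               alpha=0.05,
                               power=0.80,
                               k_groups=4)        # 2 (interview) x 2 (framing) cells
print(f"Total N required: {n_total:.0f}")
```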
Design
Participants were assigned to either CCE or MCI interview groups.
Each group was further subdivided into Framed or Unframed groups,
giving four interview conditions. Key dependent measures included
the number of participants in each condition who yielded target
information (i.e., the information they were incentivized to withhold),
the number of information items revealed overall, and the number of
target information items revealed.
Procedure
In the experiment, participants witnessed a staged accident during
a classroom session in which a laptop computer fell from a table as it
was being moved by two confederates, one posing as a researcher and
the other posing as a student taking part as a study participant. The
event was designed to set up conditions under which participants
would, in a subsequent interview, want to withhold information about
the student confederate. Participants saw the researcher decide to
move the table in order to reach a power socket for the laptop charger.
The researcher then asked one of the participants (in all instances, the
student confederate) to help move the table. The researcher then
moved the table before the participant could get a full grip of their side
of the table, causing the laptop to crash to the ground and smash the
screen. The researcher then verbally abused the student confederate
and blamed them for causing the accident, suggesting that the student
confederate would be made to pay for damage to the laptop. During
this verbal interaction the researcher ensured that the participants
could see the extensive damage to the laptop. Thus, the event was
configured in such a way that participants would doubt the accusations of the researcher but at the same time be open to a concern
(subsequently reinforced by another confederate prior to interview)
that the student confederate might be falsely accused of culpability if
their involvement was drawn to the attention of those investigating the
incident.
Prior to running the experiment, a pilot study was conducted to
check that the event created the required conditions for participants
to choose to withhold information. Two pilot events were undertaken with five participants in each. After witnessing the event,
participants completed a written free recall of the event, with the
instruction simply to recall everything they saw during the event.
All participants reported the basic details of the event correctly, in
particular all reporting that both the student and the researcher
were involved in attempting to move the table. Nine out of 10 of
the pilot participants reported without prompting that, although the
student was involved, the researcher was to blame and had unfairly
accused the student of culpability.
In the experiment, participants experienced the event in groups
of between six and eight people. Embedded in each group was a
confederate playing the role of a participant (from here on referred
to as the student), and a confederate playing the role of researcher
(from here on referred to as the researcher). The researcher greeted
participants and ran the session during which the event occurred.
Participants were primed to be sympathetic to the student in three
ways: a) placing the student within the participant group. The
student arrived and interacted with participants and completed the
same tasks, creating conditions for in-group favoritism (Turner,
Brown, & Tajfel, 1979); b) the scenario, that is, the manner in
which the event unfolded, and verbal exchanges that took place
between the researcher and the student during the event, which
demonstrated unfairness and hostility on the part of the researcher;
and c) the researcher taking charge of the session, which created a
perceived imbalance of social influence (Brewer, 1979) in which
the researcher was clearly more powerful than the student. The
researcher and student repeated the live event on 13 occasions,
following a script. Participants’ initial judgments of blame for the
accident in the event were measured immediately post event, prior
to being interviewed.
Participants were initially paid $30 each to participate. After the
stimulus event, and following completion of a post-event questionnaire, participants were instructed by a further confederate
(from here on referred to as the assistant) to withhold all information about the presence or involvement of the student in order to
avoid the student being wrongly held responsible for the damage to
the laptop. The assistant further encouraged participants to withhold information by explaining that an additional payment of $60
would be forthcoming from the assistant dependent upon the
participant’s interview performance. Details of what good interview performance constituted were kept deliberately vague so that
participants were not unduly influenced in how they would act
during the interview but so that they would likely infer a positive
link between the instruction to withhold information and receipt of
the additional payment. In fact, all participants were paid the entire
amount ($90) irrespective of interview performance. The research
procedure involved seven phases for all participants:
1. Upon arrival, all participants were brought by the assistant into a reception area where they were introduced to
the other participants, and each was provided with an
information sheet. Once participants had read the information sheet, they were offered an opportunity to ask
questions prior to signing the first general consent form.
To mask the true nature of the research, we adopted a
two-part approach to gaining consent. The first explained
that the research comprised two sessions and that in the
first session participants would complete a series of
paper-based questionnaires designed to collect information about individual cognitive style and mood. Further,
the questionnaires would be completed individually but
in the same room as several other participants. It was then
explained that a second session would take place after the
first (following a short delay), when participants would
be individually interviewed to understand more about the
way in which they make decisions. Finally, they were
asked to complete a questionnaire to give their opinions
about the interview and the research, in general.
2. The assistant then took the participants as a group to a classroom, where she introduced them to the researcher and left them with the instruction to complete two questionnaires individually. While participants were completing the first questionnaire, an unexpected event took place involving the
student and the researcher. An altercation ensued during
which a laptop computer was apparently seriously damaged. The researcher verbally blamed the student and
threatened to make the student responsible for paying for
the damage. The entire session, from entering the classroom to leaving, lasted approximately 25 min.
3. The assistant returned and then took participants from the
classroom to a reception area where individual attitudes
concerning who was to blame for the damage to the
laptop were collected from each participant using a hardcopy mock Health and Safety report proforma.
4. Upon completion of the report, participants were informed by the assistant that they would be interviewed
individually about the event. Prior to interview, participants were given additional instructions by the assistant
that, to avoid any risk of the student being wrongly
accused of being responsible for damaging the laptop,
during the interview they should withhold all information
about the presence and involvement of the student in the
incident. These instructions were provided individually
in hard copy and verbally by the assistant in the interview
suite. To continue, participants were asked to sign a second consent form because, to withhold event information, participants would need to formulate a lie script to ensure that their “story” made sense during the interview. Participants were given 30 min alone in the interview suite and were encouraged by the assistant to use this time to formulate a convincing account of the event that excluded all mention of the student. Participants were able to ask questions prior to signing the second consent form,
and it was made clear that they could withdraw at this
point should they wish.
5. Participants were interviewed using the appropriate technique according to condition. All interviews were digitally audio and video recorded.
6. Having completed the interview, participants completed
a final self-report measure of interview performance and
perceptions of interviewer behaviors and techniques.
Interviewers followed a protocol for each condition. They underwent one full day of classroom training (given by Coral J. Dando, a
qualified and experienced interviewer) for each of the four interview
protocols, which included a detailed explanation of the relevant interview protocol and role-play practice. Interviewers also took part in
an additional half-day practice session prior to conducting interviews
for each condition, which was audio recorded. All received detailed
verbal and written feedback on their performance. Interviewers were
naïve to the design and experimental hypotheses, but they were
provided with the following instructions: “The researcher’s computer
was seriously damaged during the data collection session. Your job is
to interview the people in the room and find out exactly how the
damage happened.”
Materials
Interviews
Irrespective of condition, all interviews comprised the same
number of discrete phases (5 in total) in the same order: a) explain
and build rapport, b) free account, c) probed questioning, d)
challenge, and e) closure. The interview protocols differed as a
function of condition in the free account and the probed questioning only. The remaining phases were identical across conditions,
as follows:
Explain and build rapport. Here, the interviewer explained
to the participant that they were aware that a laptop had been
seriously damaged and would be asking a series of questions about
what had occurred. The interviewer then provided a general overview of the interview process and explained four ground rules
(Report everything; Do not guess; Say if you do not understand;
Say if you do not know the answer to a question). Participants were
then offered the chance to ask questions.
Once the participant acknowledged that he or she was clear
about the interview process and understood the ground rules, the
interviewer then engaged the interviewee in more informal conversation to build rapport. The importance of rapport is acknowledged, albeit that what constitutes forensic rapport and how to
build it is poorly operationalized (e.g., Walsh & Bull, 2012). One
promising technique is interviewer self-disclosure, and so here
rapport was initiated by the interviewer by offering some information about their personal situation, delivered in a manner to
initiate a response from the interviewee (see Evans et al., 2014).
Rapport building continued for a minimum of 6 min, during which
the interviewer led the participant to understand the reciprocal
nature of the interview, using silences to encourage the participant
to speak/respond following self-disclosure statements made by the
interviewer.
Challenge. This was the last of the information gathering
phases of all interviews where, irrespective of information yield
and/or interview performance, interviewees were verbally challenged concerning the completeness of the account given thus far,
and “pushed” for more information:
I think I have a fair understanding of what has happened, but I am not
sure that you have told me everything you know. I have interviewed
others who were in the room at the same time and they have provided
me with more information than you have. It is important that you tell
me as much as you can because otherwise I cannot fully understand
what has happened. Take a few minutes and have another think about
what happened. Tell me everything.
The interviewer sat silently waiting for the interviewee to respond for up to 10 s (counting silently). If the interviewee responded, then the interviewer sat and listened. No extra questions
were asked. If the interviewee did not respond after 10 s, then the
interviewer moved seamlessly to the next phase.
Closure. This final phase marked the end of the interview.
Here the interviewer explained that the interview had now finished
and thanked the participant for taking part and for explaining how
the laptop was damaged. The participant was offered the opportunity to ask any questions. The recording device was turned off.
Modified Cognitive Interview (MCI)
Free account. This phase is the initial information gathering
phase of the interview. Participants were first asked the following
two blame questions (verbatim): a) who was to blame for the
damage to the laptop and b) was anyone else involved. Once these
questions had been answered, the participant was asked to provide
a detailed account of what had happened, verbatim as follows:
What I would like you to do now is to tell me exactly what happened from the time you entered the room until the time you left, in as much detail as possible. Remember the four ground rules I described earlier: report everything; do not guess; say if you do not understand; say if you do not know the answer to a question.
The free account was uninterrupted by the interviewer. Once the
interviewee had finished speaking, the interviewer silently waited
a further 5 s (counting silently to 5) before thanking the interviewee and moving into the questioning phase of the interview.
During this account, the interviewer displayed attentive listening
behaviors.
Probed questioning. This phase of the MCI interviews comprised four information-gathering segments: a) visual, b) auditory, c) emotional, and d) extra information, replicating the structure of the MCI protocol employed by Morgan et al. (2014). The prompts used for each segment were as follows:

1. Visual. “Describe to me absolutely everything that you saw from the time you entered the classroom until the time you left. Provide as much detail as possible because I was not there.”

2. Auditory. “Describe absolutely everything that you heard from the time you entered the classroom until the time you left. Provide as much detail as possible because I was not there.”

3. Emotional. “Explain what the experience in the classroom was like for you: how did it feel?”

4. Mistakes. “Have you left anything out or made any mistakes in what you have told me about what happened in the classroom? Please take the time to think hard about what happened and tell me everything. It is important.”
Framed Modified Cognitive Interview (Framed-MCI)
Free account. The second phase of interviews in this condition commenced with a positively framed moral rationale as to
why the participant ought to fully explain what had occurred and
who had been involved (adapted from Cesario, Higgins, & Scholer, 2008). The following prompts were delivered slowly, with a 5-s pause between each prompt:
I am going to ask you to tell me in more detail what happened in the
classroom earlier. Unlike people who choose not to tell me everything, telling me about everyone that was involved and everything that
happened when the laptop was damaged will make you feel that you
are doing something to ensure that innocent people do not end up
getting blamed for the damage.
I have found that even those people who did not initially want to tell
me everything understand that the best way forward is when everyone
exercises their responsibility to provide full details.
Those who have told me everything have done just that: exercised their responsibility, and that is the right and proper thing to do; they have done the right thing.

Telling me everything is what you ought to do, too. That way I can fully understand what happened. Would you agree?

So, if we are in agreement that providing full details is what you ought to do, and should do to ensure a fair investigation, please have a good think about what happened and answer all my questions in as much detail as you can. Thank you.

From then on, this phase mirrored the free account protocol described above.
Probed questioning. Each segment mirrored that described in
the MCI probed questioning but was preceded by a short reinforcement of the framed persuasion message that had been delivered at the start of the free account, as described above.
Controlled Cognitive Engagement (CCE)
Free account. As in the MCI interviews, this is the first of the
information gathering phases, but here it begins with an opportunity for the interviewer to watch and listen to the interviewee when
he or she is providing information about an event or experience
unrelated to the witnessed event (this is referred to as baselining—
see Ormerod & Dando, 2015). Interviewers were able to choose
from a bank of baselining questions according to context and/or
participant. Each baseline question was an open-ended invitation to provide a detailed overview of an experience or event (e.g., Tell me in
as much detail as you can about other research you have taken part
in; Tell me all about a recent holiday; Tell me in as much detail as
you can all about your job, and what it entails). The interviewer
listened and watched the interviewee’s baseline behavior using
silences and supportive interviewer behavior (nodding and smiling) to encourage an extended baseline narrative, asking follow-up
questions where appropriate to ensure that participants spoke for a
minimum of 3 min (in addition to the 6-min rapport-building phase
described above). Participants were then asked the two blame
questions (verbatim), following which participants were asked to
provide a detailed account of what had happened, verbatim as
described above.
Probed questioning. This phase of the CCE interviews comprised four information-gathering segments concerning people, actions, verbal, and mistakes.

1. People involved. “Describe to me absolutely everything about everyone involved in the damage to the laptop. I know this will be really hard for you, but can you try and provide as much detail as possible because I was not there, and so I don’t know what happened.”

2. Movements. “Can you talk me through everyone’s movements in as much detail as possible. Again, I know this is difficult for you, but can you try and provide as much detail as possible because I was not there, and so I don’t know what happened.”

3. Verbal. “Describe everything that was said by everyone in the room. It doesn’t matter if you can’t remember everything that was said, but try hard to explain who said what, and who spoke to whom. Thank you.”

4. Mistakes. “Brilliant, you have been so helpful. Just before we finish, I wonder, have you left anything out or
made any mistakes in what you have told me about what
happened in the classroom? Please take the time to think
hard about what happened and tell me everything. It is
really important that I understand because then I can find
out what happened to the laptop.”
Framed Controlled Cognitive Engagement
(Framed-CCE)
Free account. The second phase of interviews in this condition commenced with the interviewer providing the same positively framed moral rationale as to why the participant ought to fully explain what had occurred as in the Framed-MCI (above). From then on, the protocol mirrored that of the CCE condition.

Probed questioning. Each segment mirrored that described in the CCE phase 3, but here each was preceded by a short reinforcement of the framed persuasion message (as in the Framed-MCI) that had been delivered at the start of the free account.
Blame Questions

At the commencement and end of the free account phase of all interviews, participants were asked two blame questions: a) who was to blame for the damage to the laptop and b) was anyone else involved. All participants complied with the experimenter instructions by replying “the researcher” and “no,” respectively.

Questionnaires

Postincident judgment questionnaire. Participants completed a questionnaire comprising two questions asking a) who was to blame (forced choice), and b) how confident are you when deciding who was to blame (a 5-point Likert-style confidence scale).

Post interview perceptions questionnaire. Immediately following each interview, participants completed an additional questionnaire comprising a series of 15 questions: 2 dichotomous and 12 Likert-type scale questions pertaining to how much information about the confederate and her involvement in the damage to the laptop they had revealed, the interview procedure itself, and interviewer style. Finally, participants answered an open-ended invitation to explain what had encouraged them to mention the student, if they had done so. On completion, each participant returned their questionnaire to the experimenter, who then asked them verbally whether, prior to interview, they had seen the accident as a genuine event or whether they had considered it might have been staged. All participants confirmed that they had not considered the event to be staged.
Results
Manipulation and Paradigm Efficiency Analysis
Postincident judgments. The event was constructed to ensure
participants saw the researcher as to blame for the accident while
creating a concern that the student might be blamed erroneously by
others. To see whether the paradigm had been effective in bringing
about stronger judgment of blame for the researcher over the
student, participants’ blame judgments were first considered.
Overall, 73% (115) of respondents reported the researcher to be
entirely to blame for the damage to the laptop, 24% (37) reported
believing that both the researcher and the student were jointly to
blame, while 3% (5) reported that the student was entirely to
blame. See Table 1 for blame as a function of interview condition.
Table 1
Perceptions of Blame for the Damage as a Function of Interview Condition (Number and Percentages)

Condition     Researcher   Researcher & Student (Joint)   Student
CCE           28 (76%)     6 (16%)                        3 (8%)
MCI           29 (67%)     13 (30%)                       1 (3%)
Framed-CCE    31 (78%)     9 (22%)                        0
Framed-MCI    27 (73%)     9 (24%)                        1 (3%)

Note. Framed-CCE = Framed Controlled Cognitive Engagement; Framed-MCI = Framed Modified Cognitive Interview.

Interview and blame. There were nonsignificant associations between interview (CCE, MCI) and blame, χ2(2) = 1.546, p = .462, and framing (framed, unframed) and blame, χ2(2) = 1.779, p = .411. Overall, mean confidence in blame judgments was 4.69 (SD = 1.10). There were no significant main effects or interactions for confidence across conditions (M_CCE = 4.21, SD = .98; M_CCE-Framed = 4.56, SD = 1.02; M_MCI-Framed = 4.19, SD = 1.10; M_MCI = 4.78, SD = .97), all Fs < 3.287, all ps > .078.
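For readers wishing to reproduce this kind of association test, the sketch below runs chi-square tests of independence on the blame-by-condition counts from Table 1, collapsing over framing for the interview comparison and over interview method for the framing comparison. These counts yield values close to those reported in the text, though the authors' exact handling of the sparse "Student" column is not specified.

```python
# Chi-square tests of association between (a) interview method and blame and
# (b) framing and blame, using the counts in Table 1. Illustrative reconstruction:
# results may differ slightly from those reported depending on analysis details.
import numpy as np
from scipy.stats import chi2_contingency

#                    Researcher  Joint   Student
cce      = np.array([28 + 31,    6 + 9,  3 + 0])   # CCE + Framed-CCE
mci      = np.array([29 + 27,    13 + 9, 1 + 1])   # MCI + Framed-MCI
framed   = np.array([31 + 27,    9 + 9,  0 + 1])   # Framed-CCE + Framed-MCI
unframed = np.array([28 + 29,    6 + 13, 3 + 1])   # CCE + MCI

for label, table in [("interview", np.vstack([cce, mci])),
                     ("framing", np.vstack([framed, unframed]))]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{label}: chi2({dof}) = {chi2:.3f}, p = {p:.3f}")
```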
Interviewer Performance
A dip sample of 25% of each interviewer's interviews across each of the conditions (a total of 40 interviews) was scored by two independent researchers for adherence to the protocol. Using four measures, each ranging from 1 to 5 (1 = did not adhere; 5 = completely adhered), performance was scored for a) inclusion of every phase in the correct order, b) phase instructions correctly verbalized, c) framing script correctly administered/excluded according to condition, and d) rapport building using self-disclosure. As expected, because protocols were applied verbatim, both researchers independently agreed that interviewers had all scored 5 on measures a, b, and c above (M = 5). Some mean rating differences for rapport (measure d) across interviewers did emerge, largely because this was a nonverbatim aspect of the interview protocols, so interviewer behaviors were less consistent. However, Cohen's kappa revealed a good level of agreement between raters for rapport, κ = .81, p = .008.
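As an illustration of how such inter-rater agreement can be computed, the sketch below calculates Cohen's kappa on two raters' rapport ratings; the rating vectors are placeholders, not the study data.

```python
# Cohen's kappa between two raters' 1-5 rapport-adherence ratings.
# The rating vectors below are placeholders for illustration only; the study's
# item-by-item ratings are not reported in the article.
from sklearn.metrics import cohen_kappa_score

rater_1 = [5, 4, 5, 3, 4, 5, 5, 4, 3, 5]
rater_2 = [5, 4, 4, 3, 4, 5, 5, 4, 4, 5]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f}")
# A significance test for kappa (as reported in the article) can be obtained,
# for example, from statsmodels.stats.inter_rater.cohens_kappa on the confusion table.
```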
Information Revealed
Verbatim transcripts of the interviews were coded for the overall
total amount of event information provided, which was then classified as target or nontarget information. For the purposes of this
research, target information is defined as any information concerning or indicating the presence and/or involvement of any person other than the researcher in the damage to the computer (here, the student confederate), including the confederate’s
speech, her actions, any objects she may have touched, and so
forth. For example, where a participant says “The researcher asked
the student to help move the table,” this would be coded as
providing one piece of target information, that is the presence
of/involvement of “the student” in the incident.
Unique event information items yielded in the information gathering phases were coded only once (i.e., repetitions were ignored).
For example, the first time the participant mentioned “the student”
during the free account, it was coded as a target detail on this first
occasion, only. If a participant repeated “the student” this information was not recoded. However, if the participant stated, “the
student picked up the table” in a later phase the “picked up” and
“table” utterances were coded as new target information items. All
other unique event information utterances were coded as nontarget
information and were only coded once.
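The coding rule described above (each unique item counted once, repetitions ignored, with separate tallies for target and nontarget items) can be illustrated with a small sketch; the item labels and target set are hypothetical examples, not the study's actual coding categories.

```python
# Illustrative tally of unique information items across interview phases.
# Repetitions of an item already coded are ignored; only first mentions count.
# The item strings and the target set are hypothetical examples.
TARGET_ITEMS = {"student present", "student picked up", "student touched table"}

def tally(phases: list[list[str]]) -> tuple[int, int]:
    """Return (target_count, nontarget_count) over all phases, first mentions only."""
    seen: set[str] = set()
    target = nontarget = 0
    for phase in phases:
        for item in phase:
            if item in seen:
                continue            # repetition: already coded once
            seen.add(item)
            if item in TARGET_ITEMS:
                target += 1
            else:
                nontarget += 1
    return target, nontarget

# Example: free account, probed questioning, challenge
print(tally([["researcher moved table", "laptop fell"],
             ["laptop fell", "student present"],
             ["student picked up"]]))   # -> (2, 2)
```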
A dip sample of 25% of transcripts from each of the four
interview conditions (9 CCE, 10 MCI, 10 CCE Framed, 9 MCI
Framed) were coded independently by two experienced but independent researchers who were naïve to the experimental design
and research hypotheses. Coders underwent a 4-hr training session
provided by Coral J. Dando during which the bespoke coding
approach was fully explained. Coders also worked through several
examples alongside Coral J. Dando until they fully understood the
nature of the coding process for the purposes of this research. Any
misunderstandings and/or discrepancies were fully discussed during this training session until agreement was reached. Cohen's kappa revealed very good levels of agreement between coders for both the overall amount of event information and the amount of target information, κ = .91, p = .004, and κ = .88, p = .003, respectively.
Overall Event Information
Transcripts were first analyzed for overall number of event
information items verbalized (combination of nontarget event information ⫹ target event information) across all of the information
gathering phases (free account, probed questioning, challenge).
Participants in the MCI condition verbalized more event information than those in the CCE condition (see Table 2). A two-way between-subjects ANOVA with interviewer as a random effect revealed a significant main effect of interview, F(1, 153) = 13.432, p < .001, ηp2 = .81. The interviewer random effect was nonsignificant, as were all other main effects and interactions, all Fs < 8.42, all ps > .083.

Table 2
Total Number of Event Information Items Verbalized (Non-Target + Target Event Info)

Condition   Framed mean (SD: [95% CIs])   Unframed mean (SD: [95% CIs])   Overall mean (SD: [95% CIs])
MCI         13.1 (4.1: [12.4, 13.8])      13.3 (3.0: [12.6, 13.6])        13.2 (3.6: [12.5, 13.9])
CCE         12.3 (3.1: [10.5, 11.8])      10.2 (2.4: [10.0, 11.5])        11.3 (3.0: [10.6, 12.0])
Overall     12.7 (3.6: [12.0, 13.4])      11.8 (3.1: [11.1, 12.5])

Note. CCE = Controlled Cognitive Engagement; MCI = Modified Cognitive Interview.
Target Information Yielded
The number of participants who yielded target information in each condition is displayed in Table 3.

Table 3
Number (%) of Participants Who Yielded Information in Each Condition

Condition   Framed No. (%)   Unframed No. (%)   Overall No. (%)
MCI         22/37 (59)       8/43 (19)          28/80 (36)
CCE         30/40 (75)       20/37 (54)         50/77 (65)
Overall     52/77 (68)       28/80 (35)

Note. CCE = Controlled Cognitive Engagement; MCI = Modified Cognitive Interview.

A logistic regression using Interview (CCE vs. MCI), Framing (Framed vs. Unframed), and the interaction between these factors as predictors yielded a significant model, χ2(3, N = 157) = 30.28, p < .001, with Interview (Wald = 13.01, p < .001) and Framing (Wald = 10.23, p = .001) as significant predictors in the model. The interaction between Interview and Framing (Wald = 1.68, p = .19) did not reach significance.
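A sketch of how such a logistic regression can be fitted is shown below. The participant-level data are reconstructed from the cell counts in Table 3, so the coefficients should approximate, but need not exactly match, the reported Wald statistics.

```python
# Logistic regression of target-information yield on Interview, Framing, and
# their interaction. Participant-level rows are expanded from the Table 3 cell
# counts; this is an illustrative reconstruction, not the authors' dataset.
import pandas as pd
import statsmodels.formula.api as smf

cells = [  # (interview, framing, n_yield, n_total) from Table 3
    ("MCI", "Framed",   22, 37),
    ("CCE", "Framed",   30, 40),
    ("MCI", "Unframed",  8, 43),
    ("CCE", "Unframed", 20, 37),
]

rows = []
for interview, framing, n_yield, n_total in cells:
    rows += [{"interview": interview, "framing": framing, "yielded": 1}] * n_yield
    rows += [{"interview": interview, "framing": framing, "yielded": 0}] * (n_total - n_yield)
df = pd.DataFrame(rows)

model = smf.logit("yielded ~ C(interview) * C(framing)", data=df).fit()
print(model.summary())                        # coefficient table with Wald z statistics
print(model.llr, model.llr_pvalue)            # overall model likelihood-ratio chi-square
```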
The total number of target information items revealed during the course of the interview (summing all target-relevant information across all of the information gathering phases) was then analyzed using a two-way between-subjects ANOVA, again with interviewer as a random effect. Participants in the CCE condition (M = 2.95, SD = 2.40, 95% CI [2.36, 3.53]) revealed more target information than those in the MCI condition (M = 1.37, SD = 1.70, 95% CI [.90, 1.85]), F(1, 7.58) = 9.037, p = .018, ηp2 = .54. Participants in the Framed condition (M = 3.04, SD = 2.28, 95% CI [2.44, 3.63]) revealed more target information than those in the nonframed condition (M = 1.40, SD = 1.80, 95% CI [.91, 1.89]), F(1, 7.363) = 21.627, p = .002, ηp2 = .77. The Interviewer random effect was nonsignificant, as were the Interview × Framing and Interview × Interviewer × Framing interactions, all ps > .118, all Fs < 3.120.
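The article reports two-way ANOVAs with interviewer as a random effect; one way to approximate such an analysis in Python is a linear mixed-effects model with a random intercept for interviewer, sketched below on placeholder data. Because the trial-level data are not available, the sketch illustrates the model specification rather than reproducing the reported values.

```python
# Approximation of a two-way between-subjects ANOVA with interviewer as a random
# effect, specified as a mixed-effects model with a random intercept for interviewer.
# The data frame below is synthetic placeholder data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 157
df = pd.DataFrame({
    "interview": rng.choice(["CCE", "MCI"], n),
    "framing": rng.choice(["Framed", "Unframed"], n),
    "interviewer": rng.choice(list("ABCDEFGH"), n),   # eight interviewers
    "target_items": rng.poisson(2, n),                # count of target items per transcript
})

model = smf.mixedlm("target_items ~ C(interview) * C(framing)",
                    data=df, groups=df["interviewer"]).fit()
print(model.summary())
```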
Target Information Yielded in Each Phase
Separate analyses of the number of target information items in each phase were then conducted (see Table 4).2 In the free account, participants in Framed conditions yielded more target information than those in Unframed conditions, F(1, 78) = 26.359, p < .001, ηp2 = .25. The main effect of Interview and the Interview × Framing interaction were nonsignificant, all Fs < 2.573, all ps > .113.

In the probed questioning phase, participants in CCE conditions yielded more target information than those in MCI conditions, F(1, 78) = 10.137, p = .002, ηp2 = .16. The main effect of Framing and the Interview × Framing interaction were nonsignificant, all Fs < .712, all ps > .401.

In the Challenge phase, participants in MCI conditions yielded more target information than those in CCE conditions, F(1, 78) = 12.042, p = .001, ηp2 = .13, and participants in nonframed conditions revealed significantly more target information than those in Framed conditions, F(1, 78) = 14.443, p < .001, ηp2 = .16. A significant Interview × Framing interaction also emerged, F(1, 78) = 9.410, p = .003, ηp2 = .11, with participants in the Unframed MCI condition yielding more target information than those in all other conditions.
Post Interview Perceptions
The post interview perceptions questionnaire concerned compliance with researcher instructions (yes or no), a repetition of the
blame question asked immediately post event, and 11 Likert-scale-type questions (ranging from 1 = I strongly agree to 5 = I strongly disagree) concerning each participant’s perception of the interview
process.
Question 1 asked participants whether they thought they had
complied with instructions to withhold all information about the
student and the student’s involvement in the incident. Table 5
shows, for each condition, the number of respondents who stated
they had yielded target information compared with the number
who actually did reveal target information. Overall, 79% (124) of
respondents reported that they had complied with instructions
while 22% (34) reported that they had not. For the CCE conditions,
the number of participants reporting compliance was 61/77 (73%).
The corresponding number for the MCI conditions was 62/80
(78%). For the framed conditions, the number of participants
reporting compliance was 57/77 (74%). The corresponding number for the unframed conditions was 66/80 (83%). A logistic
regression using Interview (CCE vs. MCI), Framing (Framed vs.
Unframed), and the interaction between these factors as predictors
did not yield a significant model, χ2(3, N = 157) = 4.03, p = .259.
2 The aim of this analysis was not to examine whether participants from each condition had yielded or not, which is done by the preceding analysis, but to examine, for those who did yield, the point at which they did so. Accordingly, participants who did not yield were excluded from the phase analyses.
Question 2 repeated the postincident blame question. Overall,
75% (117) of respondents reported the researcher to be entirely to
blame for the damage to the laptop, 22% (35) reported believing
that both the researcher and the student were jointly to blame,
while 3% (5) reported that the student was entirely to blame. A McNemar's test revealed no change in blame judgments post interview, p = .607.
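A McNemar test on paired pre/post blame classifications can be computed as in the sketch below; the 2 × 2 cell counts are placeholders, because the article reports only that blame judgments did not change and the resulting p value.

```python
# McNemar test for change in blame judgments from post-event to post-interview.
# The 2x2 cell counts (researcher blamed vs. not, before and after the interview)
# are hypothetical placeholders; only the resulting p value is reported in the article.
from statsmodels.stats.contingency_tables import mcnemar

table = [[110, 5],   # rows: post-event judgment (researcher, not researcher)
         [7, 35]]    # cols: post-interview judgment (researcher, not researcher)

result = mcnemar(table, exact=True)
print(f"statistic = {result.statistic}, p = {result.pvalue:.3f}")
```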
Question 3 of the post interview questionnaire asked participants to rate, on a Likert-type scale, how much information they
had/had not revealed to the interviewer about the presence or
involvement of a student in the incident. The scale ranged from 1
(I provided no information at all) to 5 (I explained fully). We refer
to this as the perceived revelation scale. The overall mean perceived revelation score was 2.00 (SD = .99); as a function of interview condition, the mean perceived revelation scores were MCI = 2.08 (SD = .89), CCE = 2.00 (SD = .97), Framed-MCI = 2.22 (SD = 1.23), and Framed-CCE = 1.72 (SD = .82). The mean scores did not differ significantly across conditions, F = 1.1711, p = .167. The relationship between compliance behavior (the actual revelation of target information) and perceived compliance (revelation scale) was nonsignificant, r(157) = .059, p = .465.
Perceptions and Experience of the Interview Process
Perceptions of interview. Two-way between-subjects ANOVAs on the remaining 9 scale questions revealed significant main effects of Framing (Framed; No Frame) for two: a) I found the interview cognitively demanding (I had to think very hard about what I said), F(1, 149) = 23.196, p < .001, ηp2 = .14, and b) I found answering the interviewer's questions difficult, F(1, 149) = 9.944, p = .002, ηp2 = .06. Participants in the Framed condition strongly agreed that the interview was cognitively demanding (Framed = 1.82, 95% CI [1.49, 2.15]) whereas participants in the Unframed condition neither agreed nor disagreed (Unframed = 2.95, 95% CI [2.63, 3.27]). Participants in the Framed condition neither agreed
Table 3
Number (%) of Participants Who Yielded Information in
Each Condition
Condition
Framed
No. (%)
Unframed
No. (%)
Overall
No. (%)
MCI
CCE
Overall
22/37 (59)
30/40 (75)
52/77 (68)
8/43 (19)
20/37 (54)
28/80 (35)
28/80 (36)
50/77 (65)
Note. CCE ⫽ Controlled Cognitive Engagement; MCI ⫽ Modified Cognitive Interview.
nor disagreed the questions were difficult, (Framed ⫽ 2.76, 95%
CI [2.51, 3.02]) whereas participants in the Unframed condition
disagreed (Unframed ⫽ 3.30, 95% CI [3.06, 3.54]).
A significant main effect of Interview (CCE; MCI) also emerged for question (c) I found answering the interviewer's questions difficult, F(1, 149) = 4.833, p = .002, ηp² = .10. Participants in the MCI condition agreed that the questions were difficult to answer (MCI = 2.84, 95% CI [2.59, 3.08]) whereas participants in the CCE condition disagreed that the questions were difficult to answer (CCE = 4.03, 95% CI [3.28, 4.98]). All other main effects of Interview condition and Framing for the remaining questions were nonsignificant, all Fs < 4.833, all ps > .013. Likewise, all Framing × Interview condition interactions were nonsignificant, Fs < 1.558, all ps > .185.
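A 2 (Framing) × 2 (Interview) between-subjects ANOVA of this kind, with partial eta squared computed from the sums of squares, could be sketched as follows (assumed analysis, not the authors' code; the ratings are placeholders).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 160  # placeholder sample size
df = pd.DataFrame({
    "framing": rng.choice(["Framed", "Unframed"], size=n),
    "interview": rng.choice(["CCE", "MCI"], size=n),
    "rating": rng.integers(1, 6, size=n).astype(float),  # 1-5 agreement rating (placeholder)
})

# 2 x 2 between-subjects ANOVA with Type II sums of squares
model = smf.ols("rating ~ C(framing) * C(interview)", data=df).fit()
table = anova_lm(model, typ=2)

# Partial eta squared for each effect: SS_effect / (SS_effect + SS_residual)
# (the value computed for the Residual row itself can be ignored)
table["partial_eta_sq"] = table["sum_sq"] / (table["sum_sq"] + table.loc["Residual", "sum_sq"])
print(table)
```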
Reasons for revealing information. The final open-ended question was analyzed using qualitative content analysis (QCA; Schreier, 2012). A coding frame was developed to cover all the meanings featured in the written responses, from which a number of unique coding-frame dimensions (primary codes) emerged.
Overall, despite 80 (51%) of participants having yielded some target information, just 51 participants (32%) responded to this question. Three categories emerged (some participants contributing to more than one category), as follows:

1. Cognitive effort. Twenty-two participants (39%) referred to finding the interview task too difficult to withhold information. Four were in the Framed-MCI condition, four were in the MCI condition, and the remaining 14 were in the Framed-CCE condition.

2. Fairness and justice. Nineteen participants (37%) reported they had deliberately yielded information because it was the right thing to do. Twelve participants were in the Framed-CCE condition, and seven were in the Framed-MCI condition.

3. Interviewer affect. Fifteen participants (22%) stated they had yielded information because they liked the interviewer and felt the interviewer was fair and positive. Of the participants whose responses fitted this category, nine were in the Framed-CCE condition, and the remainder were in the Unframed-MCI condition.
Table 4
Number of Target Information Items Yielded (Mean, SD, and 95% CIs) as a Function of Phase (n = 83)

Phase                     Framed M (SD) [95% CI]      Unframed M (SD) [95% CI]    Overall M (SD) [95% CI]
Free account MCI          1.55 (1.01) [1.07, 2.26]    .63 (.74) [−.17, 1.42]      1.09 (1.08) [.62, 1.55]
Free account CCE          2.13 (1.46) [1.72, 2.54]    .30 (.57) [−.20, .80]       1.22 (1.45) [.89, 1.54]
Overall mean              1.88 (1.34) [1.52, 2.16]    .46 (.63) [−.08, .93]
Probed questioning MCI    1.18 (.91) [.68, 1.68]      1.25 (.46) [.42, 2.08]      1.20 (.81) [.73, 1.70]
Probed questioning CCE    2.17 (1.55) [1.73, 2.60]    2.60 (.94) [2.07, 3.13]     2.34 (1.35) [1.04, 2.72]
Overall mean              1.67 (1.40) [1.34, 2.00]    1.93 (1.03) [1.43, 2.42]
Challenge MCI             .27 (.55) [−.02, .56]       1.38 (.92) [.89, 1.86]      .82 (.82) [.17, .56]
Challenge CCE             .33 (.71) [.08, .58]        .40 (.68) [.10, .71]        .37 (.69) [.54, 1.11]
Overall mean              .31 (.64) [.11, .50]        .89 (.86) [.60, 1.73]

Note. CCE = Controlled Cognitive Engagement; MCI = Modified Cognitive Interview. Values are mean (SD) [95% CI].

Discussion

The merits of two noncoercive interview techniques designed to increase cognitive load were investigated with respect to yielding information about a target, and we also examined whether framing a persuasive message might enhance information yield. The presence of
significant main effects of interview method and of framing
throughout our data, and the absence of interactions between these
factors, suggest that the effects of interview method and framing
are independent but additive. We hypothesized that both CCE and
MCI would result in some target information yield despite participants being incentivized to withhold, but that, because of its emphasis on perceptual memory, participants in MCI conditions would yield less target information. Our results support these hypotheses. Both techniques yielded some target information, but approaching three times as many participants yielded during unframed CCE as during unframed MCI interviews (54% vs. 19%), and they yielded more
than twice the amount of target information.
The two methods are identical in structure (having the same
general instructions, phases, and number of information-gathering
segments in each phase) and intended impact (to gain information).
The focus of MCI questions is on the perceptual experiences of
each participant (visual, auditory, and emotional), and MCI questions
are less directive and more global in nature in that they encourage
interviewees to remember and then describe the overall event. The
focus of the CCE questions is on assessing the validity of reported
event occurrences, and the questions are more probing and directive about specific event elements (people, actions, objects, etc.)
and so may make it more difficult for interviewees to verbally
maneuver their way around the target information (Dando & Bull,
2011; Dando et al., 2015).
Perceptual richness is a feature of episodic memory (Conway,
2009; Rubin, Schrauf, & Greenberg, 2003). Although some perceptual elements can become integrated within core memory for an
event, most (e.g., sights, sounds, and smells) are typically peripheral to the event’s main themes (Winocur & Moscovitch, 2011;
Winocur, Moscovitch, & Bontempi, 2010). Hence, one explanation for the greater yield of target intelligence with CCE might be
the focus of the technique on questioning that probes the fundamental constructs of the event, compared with the focus of MCI on
questioning that probes the surrounding context of the event. As a
consequence, the degrees of freedom available for CCE interviewees to evade the revelation of target information are reduced, since
they must address questions that are event-focused. MCI interviewees can use contextual information (feelings, sights, etc.) to
mask the absence of concrete target information in their answers.
In essence, MCI allows interviewees to prevaricate while appearing compliant in providing nontarget information.
In support of this explanation, counts of the number of unique
event information items (sum of target and nontarget) revealed that
participants in the MCI condition provided over 35% more than
those in the CCE. However, of that information, just 11% was
target information, whereas 40% of the information revealed by
participants in the CCE condition was target information. CCE
participants were asked to recall and verbalize concrete information concerning the building blocks (actors, actions, objects) of the
event that are central to the target information they were incentivized to withhold. MCI participants were able to recall and verbalize peripheral event information, which may have been easier to
disassociate from the target information (e.g., St-Laurent, Moscovitch, & McAndrews, 2016; Winocur & Moscovitch, 2011).
Table 5
Number of Participants in Each Condition Who Stated They Had Complied but Yielded (Did Not Comply) and Percentages

Condition   Framed: stated complied   Framed: complied but yielded (%)   Unframed: stated complied   Unframed: complied but yielded (%)   Overall: stated complied/yielded (%)
MCI         25                        16 (64)                            37                          7 (19)                               62/23 (37)
CCE         32                        23 (72)                            29                          14 (48)                              61/37 (61)
Overall     57                        39 (68)                            66                          21 (32)

Note. CCE = Controlled Cognitive Engagement; MCI = Modified Cognitive Interview.

We also hypothesized that the Framed-MCI and Framed-CCE techniques would increase the amount of target information yielded compared with unframed versions. Our results also support
this hypothesis. Positive moral framing encourages systematic
processing of the persuasion message, which emphasizes duties,
responsibilities, and obligations (Petty & Cacioppo, 1986). In
doing so, framing reduces the heuristic salience of the financial
incentive and situational affinity for the target confederate. Positive framing has been shown to increase the power of persuasive
messages in other domains (e.g., Feinberg & Willer, 2013, 2015;
Pelletier & Sharp, 2008) but has not been empirically evaluated in
intelligence interviewing. Here, framing increased the odds of yielding target information by a factor approaching four relative to unframed interviews, and framed participants revealed over twice the amount of target information. Postinterview feedback indicates the locus of the framing effect: fairness and justice was one of the primary reasons participants gave for yielding target information, and all who gave this reason were in framed conditions. Cognitive effort and interviewer affect were also important, but here participants were evenly spread across framed and unframed and across CCE and MCI conditions.
Despite CCE being the more effective method with respect to persuasion to yield, participants in CCE conditions reported being more comfortable during the interviews and finding the questions less difficult than did participants in MCI conditions. This finding is unexpected, but it further supports arguments that interviewer behavior and questioning approaches are important for facilitating cooperation and increasing information gain (e.g., Abbe & Brandon, 2013; Brandon, Wells, & Seale, 2018). Here, the CCE interview protocol was judged more conversational and less formal than MCI. In addition to the rapport-building phase common to both methods, CCE also included a baseline phase that allows the interviewer to understand how interviewees behave in context when not being directly questioned to provide target/event information. This additional CCE element prior to the intelligence-gathering phases, combined with the more conversational style, may have been important for shaping interviewee behavior in terms of encouraging more extensive narratives and increasing the elicitation of accurate event information. Even in domains such as sex offender interrogations (e.g., Kebbell, Alison, Hurren, & Mazerolle, 2010), where empathic and sympathetic interviewing often goes against the interviewer’s natural instincts because of the nature of the suspected crime (Dando & Oxburgh, 2016; Vrij et al., 2017), information-gathering, less formal, integrative styles have been found to be effective.
Coercive interrogation, often referred to as enhanced interrogation, has received considerable attention in the last decade for
being, among other things, ineffective (e.g., Dimitriu, 2013;
Costanzo & Gerrity, 2009; Vrij et al., 2017). The empirical scientific literature on intelligence interviewing is less advanced than
the detecting deception literature and so offers few concrete alternatives for practitioners. Rather, the emphasis has been on understanding which techniques to avoid and suggesting how interviewers might behave for improving cooperation and increasing
information gain (e.g., Alison, Alison, Noone, Elntib, & Christiansen, 2013; Kelly, Miller, & Redlich, 2016; Meissner, Kelly, &
Woestehoff, 2015; Walsh & Bull, 2012). Our findings and the
associated protocols indicate a number of promising tools for
noncoercive interviewing and yet again challenge assumptions
concerning the necessity of so-called “enhanced” coercive and
torturous interrogation methods (also see Vrij et al., 2017).
In the current study, both MCI and CCE yielded information
despite the vast majority of participants across all groups reporting
(immediately post event but prior to interview) that they believed
the researcher was to blame. It remains possible that the degree of incentivization to withhold varied across participants, since the reason for receiving the financial reward implied, but did not make explicit, that the desired interview performance was to withhold target information. This manipulation was designed to mimic real-world uncertainties about the impacts on interviewees of their interviewing behaviors. Nonetheless, overall the results are consistent with an intention among participants to withhold that was overcome by effective intelligence interviewing. We cannot be certain of the extent to which our findings will play out outside of a controlled experiment, and individuals with stronger incentives to withhold (e.g., through ideological commitment or threat of reprisal) may be more resistant to yielding information than participants in the current study. Also, the nature of the intelligence to be gathered may affect the effectiveness of interview methods: this study focused upon a known event, but there are intelligence-gathering contexts where, to paraphrase Donald Rumsfeld, a former United States Secretary of Defense, there are “unknown unknowns” and where the risks of confabulation in information yield may be greater. However, we believe
our findings provide a mandate to practitioners to utilize noncoercive methods such as those tested here in order to gain first-hand
experience of their practical effectiveness.
This laboratory-based research is not without limitations. Although we checked via a pilot study that participants interpreted
the event as intended, future studies might benefit from the addition of a control group who are not incentivized to withhold,
allowing a comparison to be made of the relative effectiveness of
noncoercive interviewing methods for reluctant versus willing
interviewees. Moreover, while our participants were adults of
various ages, genders, ethnicities, and educational and career backgrounds sampled from the general population, we did not control
systematically for these factors, and our interviewers were all
white British. Future research should also consider the impact of
culture, particularly where interviewer and interviewee are from
different cultures. Nonetheless, the research adds to the accumulating evidence for the effectiveness of rapport-based information-gathering approaches as an alternative to aggressive interrogations.
President Trump’s views about torture—his justification of its use as effective and fair given the barbaric nature of the activities of his adversaries—ignore the primary purpose of interrogations, which is to gain intelligence rather than to punish. Yet accusatorial, aggressive interviews are known to have significant negative effects both for the amount of information elicited from an interviewee and for the utility of that information for intelligence purposes. This article adds to the ongoing debate and to the empirical literature on the efficacy of evidence-based, ethical interviewing techniques, and shows that noncoercive techniques offer a potential alternative.
Context
The backdrop to this research is the attempts of Barack Obama
during his presidency to close the prison facility at Guantánamo due to concerns over its ethical status, attempts that stalled in the face of resistance from Congress and the Pentagon over claims of operational effectiveness (Bruck, 2016). Recent promotion by the current POTUS of coercive approaches such as torture gives a new
urgency to find evidence for effective ethical alternatives. In 2009,
President Obama established the High-Value Detainee Interrogation Group (HIG) to conduct research, training and intelligence
operations. The research reported here was funded by the HIG as
part of an effort to find ethical alternatives for differentiating innocent individuals from those who pose a threat or hold valuable intelligence. The research builds on the authors’ previous work to develop Controlled Cognitive Engagement (CCE), a short face-to-face interview protocol for detecting deception as a marker of risk in aviation security screening. Our experiences gained while evaluating CCE alongside security professionals, the U.S. Transportation Security Administration, and the U.K. Department for Transport led us to consider that CCE may be relevant for human intelligence gathering. Similarly, the Modified Cognitive Interview (MCI) technique had also been evaluated in the field, and given the similar pattern of results, we considered that it, too, may be a promising technique.
With reference to cognitive and social psychological research, we
modified both techniques by including framing of persuasive messages in situations of conflicting interest. Our results are promising
and indicate there are viable alternatives that emphasize the primary purpose of interrogations, which is to gain intelligence rather
than to punish.
References
Abbe, A., & Brandon, S. E. (2013). The role of rapport in investigative
interviewing: A review. Journal of Investigative Psychology and Offender Profiling, 10, 237–249. http://dx.doi.org/10.1002/jip.1386
Alison, L. J., Alison, E., Noone, G., Elntib, S., & Christiansen, P. (2013).
Why tough tactics fail and rapport gets results: Observing rapport-based
interpersonal techniques (ORBIT) to generate useful information from
terrorists. Psychology, Public Policy, and Law, 19, 411– 431. http://dx
.doi.org/10.1037/a0034564
Brandon, S. E., Wells, S., & Seale, C. (2018). Science-based interviewing:
Information elicitation. Journal of Investigative Psychology and Offender Profiling, 15, 133–148. http://dx.doi.org/10.1002/jip.1496
Brewer, M. B. (1979). In-group bias in the minimal intergroup situation: A
cognitive-motivational analysis. Psychological Bulletin, 86, 61–79.
Bruck, C. (2016, July). Why Obama has failed to close Guantánamo. The
New Yorker. Retrieved from https://www.newyorker.com/magazine/
2016/08/01/why-obama-has-failed-to-close-guantanamo
Cesario, J., Higgins, E. T., & Scholer, A. A. (2008). Regulatory fit and
persuasion: Basic principles and remaining questions. Social and Personality Psychology Compass, 2, 444 – 463.
Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic
information processing within and beyond the persuasion context. In
J. S. Uleman & J. A. Bargh (Eds.), Unintended thought (pp. 212–252).
New York, NY: Guilford Press.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences.
Abingdon, England: Routledge.
Colwell, K., Hiscock-Anisman, C., Memon, A., Colwell, L. H., Taylor, L.,
& Woods, D. (2009). Training in assessment criteria indicative of
deception to improve credibility judgments. Journal of Forensic Psychology Practice, 9, 199–207. http://dx.doi.org/10.1080/15228930902810078
Colwell, K., Hiscock-Anisman, C., Memon, A., Rachel, A., & Colwell, L.
(2007). Vividness and spontaneity of statement detail characteristics as
predictors of witness credibility. American Journal of Forensic Psychology, 25, 5–14.
Conway, M. A. (2009). Episodic memories. Neuropsychologia, 47, 2305–
2313. http://dx.doi.org/10.1016/j.neuropsychologia.2009.02.003
Costanzo, M. A., & Gerrity, E. (2009). The effects and effectiveness of
using torture as an interrogation device: Using research to inform the
policy debate. Social Issues and Policy Review, 3, 179 –210. http://dx
.doi.org/10.1111/j.1751-2409.2009.01014.x
Dando, C. J., & Bull, R. (2011). Maximising opportunities to detect verbal
deception: Training police officers to interview tactically. Journal of
Investigative Psychology and Offender Profiling, 8, 189 –202. http://dx
.doi.org/10.1002/jip.145
Dando, C. J., Bull, R., Ormerod, T. C., & Sandham, A. L. (2015). Helping
to sort the liars from the truth-tellers: The gradual revelation of information during investigative interviews. Legal and Criminological Psychology, 20, 114 –128. http://dx.doi.org/10.1111/lcrp.12016
Dando, C. J., & Oxburgh, G. E. (2016). Empathy in the field: Towards a
taxonomy of empathic communication in information gathering interviews with suspected sex offenders. The European Journal of Psychology Applied to Legal Context, 8, 27–33. http://dx.doi.org/10.1016/j.ejpal
.2015.10.001
De Rosa, J., Hiscock-Anisman, C., Blythe, A., Bogaard, G., Hally, A., &
Colwell, K. (2018). A comparison of different investigative interviewing
techniques in generating differential recall enhancement and detecting
deception. Journal of Investigative Psychology and Offender Profiling,
16, 44 –58. http://dx.doi.org/10.1002/jip.1519
Dimitriu, I. G. (2013). Interrogation, coercion and torture: Dutch debates
and experiences after 9/11. Intelligence and National Security, 28,
547–565.
Evans, J. R., Houston, K. A., Meissner, C. A., Ross, A. B., LaBianca, J. R.,
Woestehoff, S. A., & Kleinman, S. M. (2014). An empirical evaluation
of intelligence-gathering interrogation techniques from the United States
Army field manual. Applied Cognitive Psychology, 28, 867– 875. http://
dx.doi.org/10.1002/acp.3065
Feinberg, M., & Willer, R. (2013). The moral roots of environmental
attitudes. Psychological Science, 24, 56 – 62. http://dx.doi.org/10.1177/
0956797612449177
Feinberg, M., & Willer, R. (2015). From gulf to bridge: When do moral
arguments facilitate political influence? Personality and Social Psychology Bulletin, 41, 1665–1681. http://dx.doi.org/10.1177/0146167215607842
Geiselman, R. E., & Fisher, R. P. (2014). Interviewing witnesses and
victims. In M. St. Yves (Ed.), Investigative Interviewing: Handbook of
Best Practices (pp. 29 – 40). Toronto, ON: Thomson Reuters Publishers.
Gong, L., Wang, J., Yang, X., Feng, L., Li, X., Gu, C., . . . Cheng, H.
(2016). Dissociation between conceptual and perceptual implicit memory: Evidence from patients with frontal and occipital lobe lesions.
Frontiers in Human Neuroscience, 9, 722. http://dx.doi.org/10.3389/
fnhum.2015.00722
Goodman-Delahunty, J., Martschuk, N., & Dhami, M. K. (2014). Interviewing high value detainees: Securing cooperation and disclosures.
Applied Cognitive Psychology, 28, 883– 897. http://dx.doi.org/10.1002/
acp.3087
Harms, J. (2017). The war on terror and accomplices: An exploratory study
of individuals who provide material support to terrorists. Security Journal, 30, 417– 436. http://dx.doi.org/10.1057/sj.2014.48
Harris-Hogan, S. (2013). Anatomy of a terrorist cell: A study of the
network uncovered in Sydney in 2005. Behavioral Sciences of Terrorism
and Political Aggression, 5, 137–154. http://dx.doi.org/10.1080/
19434472.2012.727096
Human Intelligence Collector Operation. (2006). Field manual. Washington, DC: U. S. Army.
Inbau, F. E. (2013). Essentials of the Reid technique. Burlington, MA:
Jones & Bartlett Publishers.
Intelligence Science Board. (2009). Intelligence interviewing: Teaching
papers and case studies. USA: Office of the Director of National
Intelligence.
Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological
Review, 88, 67– 85.
Kassin, S. M. (2017). False confessions: How can psychology so basic be
so counterintuitive? American Psychologist, 72, 951–964. http://dx.doi
.org/10.1037/amp0000195
Kebbell, M., Alison, L., Hurren, E., & Mazerolle, P. (2010). How do sex
offenders think the police should interview to elicit confessions from sex
offenders? Psychology, Crime & Law, 16, 567–584. http://dx.doi.org/10
.1080/10683160902971055
Kelly, C. E., Miller, J. C., & Redlich, A. D. (2016). The dynamic nature of
interrogation. Law and Human Behavior, 40, 295–309. http://dx.doi.org/
10.1037/lhb0000172
Martin, R., Hewstone, M., & Martin, P. Y. (2007). Systematic and heuristic
processing of majority and minority-endorsed messages: The effects of
varying outcome relevance and levels of orientation on attitude and
message processing. Personality and Social Psychology Bulletin, 33,
43–56. http://dx.doi.org/10.1177/0146167206294251
May, L., & Granhag, P. A. (2016). Techniques for eliciting human intelligence: Examining possible order effects of the Scharff tactics. Psychiatry, Psychology and Law, 23, 275–287. http://dx.doi.org/10.1080/
13218719.2015.1054410
Meissner, C. A., Kelly, C. E., & Woestehoff, S. A. (2015). Improving the
effectiveness of suspect interrogations. Annual Review of Law and
Social Science, 11, 211–233. http://dx.doi.org/10.1146/annurev-lawsocsci-120814-121657
Meissner, C. A., Redlich, A., Bhatt, S., & Brandon, S. (2012). Interview
and interrogation methods and their effects on investigative outcomes.
Campbell Systematic Reviews, 8, 1–52.
Meissner, C. A., Surmon-Böhr, F., Oleszkiewicz, S., & Alison, L. J.
(2017). Developing an evidence-based perspective on interrogation: A
review of the U.S. government’s high-value detainee interrogation group
research program. Psychology, Public Policy, and Law, 23, 438 – 457.
http://dx.doi.org/10.1037/law0000136
Memon, A., Fraser, J., Colwell, K., Odinot, G., & Mastroberardino, S.
(2010). Distinguishing truthful from invented accounts using realitymonitoring criteria. Legal and Criminological Psychology, 15, 177–194.
http://dx.doi.org/10.1348/135532508X401382
Meyers-Levy, J., & Maheswaran, D. (2004). Exploring message framing
outcomes when systematic, heuristic, or both types of processing occur.
Journal of Consumer Psychology, 14, 159 –167.
Moon, S., Bergey, P. K., Bove, L. L., & Robinson, S. (2016). Message
framing and individual traits in adopting innovative, sustainable products (ISPs): Evidence from biofuel adoption. Journal of Business Research, 69, 3553–3560. http://dx.doi.org/10.1016/j.jbusres.2016.01.029
Morgan, C. A., III, Rabinowitz, Y., Leidy, R., & Coric, V. (2014). Efficacy
of combining interview techniques in detecting deception related to
bio-threat issues. Behavioral Sciences & the Law, 32, 269 –285. http://
dx.doi.org/10.1002/bsl.2098
Morgan, C. A., III, Rabinowitz, Y., Palin, B., & Kennedy, K. (2015). Who
should you trust? Discriminating genuine from deceptive eyewitness
accounts. Open Criminology Journal, 8, 49 –59. http://dx.doi.org/10
.2174/1874917801508010049
Neuwirth, K., Frederick, E., & Mayo, C. (2002). Person-effects and
heuristic-systematic processing. Communication Research, 29, 320 –
359. http://dx.doi.org/10.1177/0093650202029003005
O’Keefe, D. J., & Jensen, J. D. (2006). The advantages of compliance or
the disadvantages of noncompliance? A meta-analytic review of the
relative persuasive effectiveness of gain-framed and loss-framed messages. Annals of the International Communication Association, 30,
1– 43. http://dx.doi.org/10.1080/23808985.2006.11679054
Ormerod, T. C., & Dando, C. J. (2015). Finding a needle in a haystack:
Toward a psychologically informed method for aviation security screening. Journal of Experimental Psychology: General, 144, 76 – 84. http://
dx.doi.org/10.1037/xge0000030
Parkhouse, T., & Ormerod, T. C. (2018). Unanticipated questions can yield
unanticipated outcomes in investigative interviews. PLoS ONE, 13(12),
e0208751. http://dx.doi.org/10.1371/journal.pone.0208751
Pelletier, L. G., & Sharp, E. (2008). Persuasive communication and proenvironmental behaviours: How message tailoring and message framing
can improve the integration of behaviours through self-determined motivation. Canadian Psychology/Psychologie canadienne, 49, 210 –217.
http://dx.doi.org/10.1037/a0012755
Perloff, R. M. (2010). The dynamics of persuasion: Communication and
attitudes in the 21st Century (4th ed.). New York, NY: Routledge.
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of
persuasion. Advances in Experimental Social Psychology, 19, 123–205.
http://dx.doi.org/10.1016/S0065-2601(08)60214-2
Petty, R. E., Cacioppo, J. T., Strathman, A. J., & Priester, J. R. (2005). To
think or not to think: Exploring two routes to persuasion. In T. Brock &
M. Green (Eds.), Persuasion: Psychological insights and perspectives
(2nd ed., pp. 81–116). Thousand Oaks, CA: Sage.
Porter, S., Rose, K., & Dilley, T. (2016). Enhanced interrogations: The
expanding roles of psychology in police investigations in Canada. Canadian Psychology/Psychologie canadienne, 57, 35– 43. http://dx.doi
.org/10.1037/cap0000042
Roediger, H. L., III. (1990). Implicit memory: Retention without remembering. American Psychologist, 45, 1043–1056. http://dx.doi.org/10
.1037/0003-066X.45.9.1043
Roediger, H. L., & McDermott, K. B. (1995). Creating false memories:
Remembering words not presented in lists. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 21, 803– 814. http://dx
.doi.org/10.1037/0278-7393.21.4.803
Rubin, D. C., Schrauf, R. W., & Greenberg, D. L. (2003). Belief and
recollection of autobiographical memories. Memory & Cognition, 31,
887–901. http://dx.doi.org/10.3758/BF03196443
Schreier, M. (2012). Qualitative content analysis in practice. London, UK:
Sage Publications.
Senate Select Committee on Intelligence. (2014, December). Committee
study of the Central Intelligence Agency’s detention and interrogation
program. Washington, DC: United States Senate. Retrieved from https://
www.intelligence.senate.gov/sites/default/files/publications/CRPT113srpt288.pdf
Srinivas, K., & Roediger, H. L., III. (1990). Classifying implicit memory
tests: Category association and anagram solution. Journal of Memory
and Language, 29, 389 – 412. http://dx.doi.org/10.1016/0749-596X
(90)90063-6
St-Laurent, M., Moscovitch, M., & McAndrews, M. P. (2016). The retrieval of perceptual memory details depends on right hippocampal
integrity and activation. Cortex, 84, 15–33. http://dx.doi.org/10.1016/j
.cortex.2016.08.010
Teng, S., Khong, K. W., & Goh, W. W. (2015). Persuasive communication:
A study of major attitude-behavior theories in a social media context.
Journal of Internet Commerce, 14, 42– 64. http://dx.doi.org/10.1080/
15332861.2015.1006515
Turner, J. C., Brown, R. J., & Tajfel, H. (1979). Social comparison and
group interest in ingroup favouritism. European Journal of Social Psychology, 9, 187–204. http://dx.doi.org/10.1002/ejsp.2420090207
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the
psychology of choice. Science, 211, 453– 458. http://dx.doi.org/10.1126/
science.7455683
United States Department of the Army. (2006). Human intelligence collector operations. United States Department of the Army Field Manual
(2–22.3). Washington, DC: United States Army Publishing Directorate.
Vakil, E., Wasserman, A., & Tibon, R. (2018). Development of perceptual
and conceptual memory in explicit and implicit memory systems. Journal of Applied Developmental Psychology, 57, 16 –23. http://dx.doi.org/
10.1016/j.appdev.2018.04.003
Vanderhallen, M., & Vervaeke, G. (2014). Between investigator and suspect: The role of the working alliance in investigative interviewing. In R.
Bull (Ed.), Investigative Interviewing (pp. 63–90). New York, NY:
Springer. http://dx.doi.org/10.1007/978-1-4614-9642-7_4
Vrij, A., Fisher, R., Mann, S., & Leal, S. (2006). Detecting deception by
manipulating cognitive load. Trends in Cognitive Sciences, 10, 141–142.
http://dx.doi.org/10.1016/j.tics.2006.02.003
Vrij, A., Mann, S. A., Fisher, R. P., Leal, S., Milne, R., & Bull, R. (2008).
Increasing cognitive load to facilitate lie detection: The benefit of
recalling an event in reverse order. Law and Human Behavior, 32,
253–265. http://dx.doi.org/10.1007/s10979-007-9103-y
Vrij, A., Meissner, C. A., Fisher, R. P., Kassin, S. M., Morgan, C. A., III,
& Kleinman, S. M. (2017). Psychological perspectives on interrogation.
Perspectives on Psychological Science, 12, 927–955. http://dx.doi.org/
10.1177/1745691617706515
Walsh, D., & Bull, R. (2012). Examining rapport in investigative interviews with suspects: Does its building and maintenance work? Journal
of Police and Criminal Psychology, 27, 73– 84. http://dx.doi.org/10
.1007/s11896-011-9087-x
Wegener, D. T., Petty, R. E., Smoak, N. D., & Fabrigar, L. R. (2004).
Multiple routes to resisting attitude change. Resistance and Persuasion,
13–38.
Winocur, G., & Moscovitch, M. (2011). Memory transformation and
systems consolidation. Journal of the International Neuropsychological
Society, 17, 766 –780. http://dx.doi.org/10.1017/S1355617711000683
Winocur, G., Moscovitch, M., & Bontempi, B. (2010). Memory formation
and long-term retention in humans and animals: Convergence towards a
transformation account of hippocampal-neocortical interactions. Neuropsychologia, 48, 2339 –2356. http://dx.doi.org/10.1016/j.neuropsychologia.2010.04.016
Received May 8, 2019
Revision received September 11, 2019
Accepted October 23, 2019