
Chapter 3

An Abductive Theory of Scientific Method

This chapter is concerned with scientific method in the behavioural sciences. Its
principal goal is to outline a broad theory of scientific method by making use of
selected developments in contemporary research methodology. The time now seems
right to intensify efforts to assemble knowledge of research methods into larger
units of understanding. Currently, behavioural scientists use a plethora of specific
research methods and a number of different investigative strategies when study-
ing their domains of interest. Among this diversity, the well-known inductive and
hypothetico-deductive accounts of scientific method have brought some order to our
investigative practices. The former method speaks to the discovery of empirical gen-
eralizations, whereas the latter method is used to test hypotheses and theories in
terms of their predictive success.
However, although inductive and hypothetico-deductive methods are commonly
regarded as the two main theories of scientific method (Laudan, 1981) and, in fact, are
sometimes regarded as the principal claimants for the title of the definitive scientific
method, they are better thought of as restrictive accounts of method that can be used
to meet specific research goals (Nickles, 1987), not broad accounts of method that
pursue a range of research goals. In fashioning empirical generalizations, the induc-
tive method undoubtedly addresses an important part of scientific inquiry. However,
it is a part only. Of equal importance is the process of theory construction. Here,
however, the hypothetico-deductive method, with its focus on theory testing, speaks
only to one, although important, part of the theory construction process (Simon,
1977).
The theory of method outlined in this chapter is a broader account of scientific
method than either the inductive or hypothetico-deductive theories of method. This
more comprehensive theory of method endeavors to describe systematically how one
can first discover empirical facts and then construct theories to explain those facts.
Although scientific inquiry is often portrayed in hypothetico-deductive fashion as
an undertaking in which theories are first constructed and facts are then gathered in
order to test those theories, this should not be thought of as its natural order. In fact,
scientific research frequently proceeds the other way around. The theory of method
described here adopts this alternative, facts-before-theory sequence, claiming that
it is a search for the understanding of empirical phenomena that gives explanatory
theory construction its point. With this theory of method, phenomena exist to be
explained rather than serve as the objects of prediction in theory testing.

3.1 Two Theories of Method

Before presenting the proposed theory of scientific method, the well-known inductive
and hypothetico-deductive accounts of scientific method are briefly considered. This
serves to define their proper limits as methods of science and, at the same time,
provide useful contrasts to the more comprehensive theory of method.

3.1.1 Inductive Method

In popular accounts of inductive method (e.g., Chalmers, 1999), the scientist is typ-
ically portrayed as reasoning inductively by enumeration from secure observation
statements about singular events to laws or theories in accordance with some gov-
erning principle of inductive reasoning. Sound inductive reasoning is held to create
and justify theories simultaneously, so that there is no need for subsequent empirical
testing. Some have criticized this view of method for placing excessive trust in the
powers of observation and inductive generalization, and for believing that enumer-
ative induction is all there is to scientific inference. In modern behavioural science,
the radical behaviourism of B. F. Skinner is a prominent example of a research tradi-
tion that uses an inductive conception of scientific method (Sidman, 1960; Skinner,
1984). Within this behaviourist tradition, the purpose of research is to detect empir-
ical phenomena of learning that are subsequently systematized by nonexplanatory
theories.
Although the inductive method has received considerable criticism, especially
from those who seek to promote a hypothetico-deductive conception of scientific
inquiry, it nevertheless stresses, in a broad-brush way, the scientific importance of
fashioning empirical generalizations. Shortly, it will be shown that the alternative
theory of scientific method to be presented uses the inductive method in the form of
enumerative induction, or induction by generalization, in order to detect empirical
phenomena.

3.1.2 Hypothetico-Deductive Method

For more than 150 years, hypothetico-deductivism has been the method of choice
in the natural sciences (Laudan, 1981), and it assumed hegemonic status in 20th
century psychology (Cattell, 1966). Psychology’s textbook presentations of scientific
method are often cast in hypothetico-deductive form, and the heavy emphasis
psychological researchers have placed on testing hypotheses by using traditional
statistical significance test procedures basically conforms to a hypothetico-deductive
structure.
The hypothetico-deductive method is standardly portrayed in minimal terms: The
researcher is required to take a hypothesis or a theory and test it indirectly by deriv-
ing from it one or more observational predictions. These predictions are amenable
to direct empirical test. If the predictions are borne out by the data, then that result
is taken as a confirming instance of the theory in question. If the predictions fail to
square with the data, then that fact counts as a disconfirming instance of the theory.
Although tacitly held by many scientists, and endorsed in different ways by promi-
nent philosophers of science (e.g., Hempel, 1966; Popper, 1959), the hypothetico-
deductive account of method has been strongly criticized by both philosophers and
psychologists (e.g., Cattell, 1966; Glymour, 1980; Rorer, 1991; Rozeboom, 1999).
The central criticism of the hypothetico-deductive method is that it is confirma-
tionally lax. This laxity arises from the fact that any positive confirming instance of a
hypothesis obtained by the hypothetico-deductive method can confirm any hypoth-
esis that is conjoined with the test hypothesis, however plausible, or implausible,
that conjunct might be. This criticism has prompted some methodologists (e.g., Gly-
mour, 1980; Rozeboom, 1999) to declare that the hypothetico-deductive method
is hopeless and should therefore be abandoned. Although this is a fair assessment
of the confirmational worth of the orthodox account of the hypothetico-deductive
method, it should be noted that the method can be recast in a more sophisticated
form and put to useful effect in hypothesis testing research (Giere, 1983). Although
the hypothetico-deductive method does not figure as a method of theory appraisal in
the comprehensive theory of scientific method presented here, it can play a legitimate
role in hypothesis and theory testing. It should thus be seen as complementary to the
broader theory of method, not a rival to it. I comment briefly on this matter toward
the end of the chapter.
The theory of method introduced in the next section is a broader theory than both
the inductive and hypothetico-deductive theories. However, it should be acknowl-
edged at the outset that it has its own omissions. Most obviously, the method begins
by focusing on data analysis and thereby ignores the important matters of research
design, measurement, and data collection. This is a limit to its comprehensiveness
that it shares with the two theories of method just canvassed.

3.2 Overview of the Broad Theory

According to the broad theory of method, scientific inquiry proceeds as follows.


Guided by evolving research problems that comprise packages of empirical, con-
ceptual, and methodological constraints, sets of data are analyzed in order to detect
robust empirical regularities, or phenomena. Once detected, these phenomena are
explained by abductively inferring the existence of underlying causal mechanisms.1
Here, abductive inference involves reasoning from phenomena, understood as pre-
sumed effects, to their theoretical explanation in terms of underlying causal mecha-
nisms. Upon positive judgments of the initial plausibility of these explanatory the-
ories, attempts are made to elaborate on the nature of the causal mechanisms in
question. This is done by constructing plausible models of those mechanisms by
analogy with relevant ideas in domains that are well understood. When the theo-
ries are well developed, they are assessed against their rivals with respect to their
explanatory goodness. This assessment involves making judgments of the best of
competing explanations.
An important feature of the broad theory of scientific method is its ability to
serve as a framework within which a variety of more specific research methods can
be located, conjoined, and used. Operating in this way, these otherwise separate
specific research methods can be viewed as submethods of the parent method. In
turn, the submethods provide the parent theory with the operational bite that helps it
make scientific inquiry possible. Comprehensive methods are often constituted by a
number of submethods and strategies that are ordered according to an overarching
structure (Ross, 1981). In characterizing the broad theory, I indicate how a number
of specific research methods are deployed within its compass. Table 3.1 contains a
variety of research methods and strategies that can be placed within the structure of
the comprehensive theory of scientific method. A number of these are discussed in
the exposition of the method that follows, but most of them are not required for its
characterization.2 The submethods selected here serve to throw some light on the
nature of scientific inquiry. The broad theory also has some clear implications
for the way research is carried out within its purview. However, partly because of its
incomplete nature, the theory is not accompanied by a set of instructions for its ready
implementation. Such an accompaniment awaits a fuller account of the method and
would have to be modified as a function of the nature of the submethods chosen to
operate within it. Because of the prominence of abductive reasoning in this broad
theory of method, I henceforth refer to it as the abductive theory of method (ATOM).

1 The term causal mechanism is ambiguous. In the broad theory of method being proposed, the
generation of theories involves explanatory inference to claims about the existence of causal entities.
It is not until the development of these theories is undertaken that the mechanisms responsible for the
production of their effects are identified and spelled out. Also, in this chapter it is assumed that the
productivity of causal mechanisms is distinct from the regularities that they explain (Bogen, 2005;
but cf. Woodward, 2003). Of course, this does not preclude the methodological use of generalizations
that describe natural regularities in order to help identify the causal mechanisms that produce them.
2 Note, however, that the strategy of analogical modelling is essential for theory development in the
abductive theory of method and that the theory of explanatory coherence does heavy-duty work
in the abductive theory of method because it is the best developed method of inference to the best
explanation currently available.
Table 3.1 Submethods and strategies of the abductive theory of method

Phenomena detection
  Strategies: control for confounds; calibration of instruments; data analytic
  strategies; constructive replication
  Methods: initial data analysis; exploratory data analysis (e.g., stem-and-leaf,
  box plot); computer-intensive resampling methods (e.g., bootstrap, jackknife,
  cross-validation); meta-analysis

Theory construction
  Theory generation (abductive methods): exploratory factor analysis; grounded
  theory method; heuristics (e.g., principle of the common cause)
  Theory development (strategies): analogical modelling
  Theory appraisal (inference to the best explanation): theory of explanatory
  coherence

Note For the most part, particular methods and strategies subsumed by the abductive theory are
appropriate either for phenomena detection or for theory construction, but not for both. Exceptions
include exploratory factor analysis and grounded theory method, both of which have data analytic
components that can contribute to phenomena detection.

The exposition of the method begins with an account of phenomena detection and
then considers the process of constructing explanatory theories. Toward the end of
the chapter, two pairs of important methodological ideas that feature prominently
in ATOM are examined. The chapter concludes with a discussion of the nature and
limits of the method.

3.3 Phenomena Detection

Scientists and philosophers often speak as though science is principally concerned
with establishing direct relationships between observation and theory. There is empir-
ical evidence that psychologists speak, and sometimes think, in this way (Clark &
Paivio, 1989), whereas philosophers of science of different persuasions often say
that scientific theories are evaluated with respect to statements about relevant data
(Bogen & Woodward, 1988). Despite what they say, scientists frequently behave in
accord with the view that theories relate directly to claims about phenomena, such
as empirical generalizations, not data, while in turn, claims about phenomena relate
directly to claims about data. That is, talk of a direct relationship between data and
theory is at variance with empirical research practice, which often works with a
threefold distinction between data, phenomena, and theory.
As just noted, ATOM assigns major importance to the task of detecting empirical
phenomena, and it views the completion of this task as a requirement for subsequent
theory construction. This section of the chapter discusses the process of phenomena
detection in psychological research. First, the distinction between data and phenom-
ena is drawn. Then, a multistage model of data analysis is outlined. This model serves
to indicate one way in which a variety of statistical methods available to psychologists
can be combined in phenomena detection.

3.3.1 The Nature of Phenomena

Bogen and Woodward (1988; Woodward, 1989, 2000) have argued in detail that it is
claims about phenomena, not data, that theories typically seek to predict and explain
and that, in turn, it is the proper role of data to provide the observational evidence
for phenomena, not for theories. Phenomena are relatively stable, recurrent, general
features of the world that, as researchers, we seek to explain. The more striking of
them are often called effects, and they are sometimes named after their principal dis-
coverer. The so-called phenomenal laws of physics are paradigmatic cases of claims
about phenomena. By contrast, the so-called fundamental laws of physics explain the
phenomenal laws about the relevant phenomena. For example, the electron theory of
Lorentz is a fundamental law that explains Airy’s phenomenological law of Faraday’s
electro-optical effect (Cartwright, 1983). Examples of the innumerable phenomena
claims in psychology include the matching law (the law of effect), the Flynn effect
of intergenerational gains in IQ, and recency effects in human memory.
Although phenomena commonly take the form of empirical regularities, they
comprise a varied ontological bag that includes objects, states, processes, events,
and other features that are hard to classify. Because of this variety, it is generally
more appropriate to characterize phenomena in terms of their role in relation to
explanation and prediction (Bogen & Woodward, 1988). For example, the relevant
empirical generalizations in cognitive psychology might be the objects of explana-
tions in evolutionary psychology that appeal to mechanisms of adaptation, and those
mechanisms might in turn serve as phenomena to be explained by appealing to the
mechanisms of natural selection in evolutionary biology.
Phenomena are frequently taken as the proper objects of scientific explanation
because they are stable and general. Among other things, systematic explanations
require one to show that the events to be explained result from the causal factors
appealed to in the explanation. They also serve to unify the events to be explained.
Because of their ephemeral nature, data will not admit of systematic explanations.
In order to understand the process of phenomena detection, phenomena must be
distinguished from data. Unlike phenomena, data are idiosyncratic to particular inves-
tigative contexts. Because data result from the interaction of a large number of causal
factors, they are not as stable and general as phenomena, which are produced by a
relatively small number of causal factors. Data are ephemeral and pliable, whereas
phenomena are robust and stubborn. Phenomena have a stability and repeatability
that is demonstrated through the use of different procedures that often engage dif-
ferent kinds of data. Data are recordings or reports that are perceptually accessible;
they are observable and open to public inspection. Despite the popular view to the
contrary, phenomena are not, in general, observable; they are abstractions wrought
from the relevant data, frequently as a result of a reductive process of data analysis.
As Cartwright (1983) remarked in her discussion of phenomenal and theoretical laws
in physics, “the distinction between theoretical and phenomenological has nothing
to do with what is observable and what is unobservable. Instead the terms separate
laws which are fundamental and explanatory from those that merely describe” (p. 2).
Examples of data, which serve as evidence for the aforementioned psychological
effects, are rates of operant responding (evidence for the matching law), consistent
intergenerational IQ score gains (evidence for the Flynn effect), and error rates in
psychological experiments (evidence for recency effects in short-term memory).
The methodological importance of data lies in the fact that they serve as evidence
for the phenomena under investigation. In detecting phenomena, one extracts a signal
(the phenomenon) from a sea of noise (the data). Some phenomena are rare, and
many are difficult to detect; as Woodward (1989) noted, detecting phenomena can
be like looking for a needle in a haystack. It is for this reason that, when extracting
phenomena from the data, one often engages in data exploration and reduction by
using graphical and statistical methods.

3.3.2 A Model of Data Analysis

In order to establish that data are reliable evidence for the existence of phenom-
ena, scientists use a variety of methodological strategies. These strategies include
controlling for confounding factors (both experimentally and statistically), empiri-
cally investigating equipment (including the calibration of instruments), engaging in
data analytic strategies of both statistical and nonstatistical kinds, and constructively
replicating study results. As can be seen in Table 3.1, these procedures are used in
the detection of phenomena, but they are not used in the construction of explanatory
theory (cf. Franklin, 1990; Woodward, 1989). The later discussion of the importance
of reliability in the process of phenomena detection helps indicate why this is so.
Given the importance of the detailed examination of data in the process of phenom-
ena detection, it is natural that the statistical analysis of data figures prominently in
that exercise. A statistically oriented, multistage account of data analysis is therefore
outlined in order to further characterize the phenomena detection phase of ATOM.
The model proceeds through the four stages of initial data analysis, exploratory
data analysis, close replication, and constructive replication. However, it should be
noted that, although the behavioural sciences make heavy use of statistical methods
in data analysis, qualitative data analytic methods can also be used in the detection
of phenomena (Strauss, 1987).
Initial Data Analysis. The initial examination of data (Chatfield, 1985) refers to
the first informal scrutiny and description of data that is undertaken before exploratory
data analysis proper begins. It involves screening the data for its quality. Initial data
analysis variously involves checking for the accuracy of data entries, identifying and
dealing with missing and outlying data, and examining the data for their fit to the
assumptions of the data analytic methods to be used. Data screening thus enables
one to assess the suitability of the data for the type of analysis intended.
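To make the screening step concrete, here is a minimal Python sketch, assuming a pandas DataFrame of questionnaire scores. The hypothetical anxiety column, its 0-50 valid range, and the |z| > 3 outlier rule are illustrative assumptions, not prescriptions of the method.

```python
import pandas as pd
import numpy as np

def screen_data(df: pd.DataFrame, valid_ranges: dict) -> pd.DataFrame:
    """Illustrative initial data analysis: flag missing entries, impossible values, and outliers."""
    report = {}
    for col, (lo, hi) in valid_ranges.items():
        out_of_range = ~df[col].between(lo, hi) & df[col].notna()
        z = (df[col] - df[col].mean()) / df[col].std(ddof=1)
        report[col] = {
            "n_missing": int(df[col].isna().sum()),      # missing data points
            "n_out_of_range": int(out_of_range.sum()),   # likely data-entry errors
            "n_outliers": int((z.abs() > 3).sum()),      # rough |z| > 3 outlier flag
        }
    return pd.DataFrame(report).T

# Fabricated scores on a hypothetical 0-50 anxiety scale
df = pd.DataFrame({"anxiety": [12, 14, 55, np.nan, 13, 11, 48, 12, 10, 13]})
print(screen_data(df, {"anxiety": (0, 50)}))
```

A report of this kind is only a starting point; deciding how to handle missing and aberrant values remains a substantive judgment for the researcher.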
This important, and time-consuming, preparatory phase of data analysis has failed
to receive the amount of explicit attention that it deserves in behavioural science edu-
cation. However, the American Psychological Association’s Task Force on Statistical
Inference (Wilkinson & The Task Force on Statistical Inference, 1999) recommended
changes to current practices in data analysis that are broadly in keeping with the goals
of initial data analysis. Fidell and Tabachnick (2003) provided a useful overview of
the importance of the work required to identify and correct problems in data.
It should be clear, even from these brief remarks, that the initial examination of
data is a requirement of successful data analysis in science, for data that lack integrity
can easily result in the misuse of data analytic methods and the drawing of erroneous
conclusions.
Exploratory Data Analysis. Exploratory data analysis uses multiple forms of
description and display and involves descriptive, and frequently quantitative, detec-
tive work designed to reveal the structure or patterns in the data under scrutiny
(Behrens & Yu, 2003; Tukey, 1977).3 The exploratory data analyst is encouraged
to undertake an unfettered investigation of the data and perform multiple analyses
using a variety of intuitively appealing and easily used techniques.
The compendium of methods for the exploration of data is designed to facilitate
both the discovery and the communication of information about data. These methods
are concerned with the effective organization of data, the construction of graphical
displays, and the examination of distributional assumptions and functional depen-
dencies. The stem-and-leaf display and the box-and-whisker plot are two well-known
exploratory methods.
Two attractive features of exploratory methods are their robustness to changes in
underlying distributions and their resistance to outliers in data sets. Exploratory meth-
ods with these two features are particularly suited to data analysis in the behavioural

3 Behrens and Yu suggested that the inferential foundations of exploratory data analysis are to
be found in the notion of abduction. By contrast, ATOM regards exploratory data analysis as a
descriptive pattern detection process that is a precursor to the inductive generalizations involved in
phenomena detection. Abductive inference is reserved for the construction of causal explanatory
theories that are introduced to explain empirical phenomena. Behrens and Yu’s suggestion conflates
description and explanation in this regard.
sciences, where researchers are frequently confronted with ad hoc data sets on man-
ifest variables that have been acquired in convenient ways.
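The two displays just mentioned can be illustrated with a minimal Python sketch on fabricated scores; the helper functions below are hypothetical and simply stand in for the exploratory displays named above (a stem-and-leaf display and the five-number summary that underlies a box-and-whisker plot).

```python
import numpy as np

def stem_and_leaf(values, stem_unit=10):
    """Print a simple text stem-and-leaf display for a batch of scores."""
    stems = {}
    for v in sorted(values):
        stem, leaf = divmod(int(v), stem_unit)
        stems.setdefault(stem, []).append(leaf)
    for stem in sorted(stems):
        print(f"{stem:>3} | {' '.join(str(leaf) for leaf in stems[stem])}")

def five_number_summary(values):
    """Minimum, lower hinge, median, upper hinge, maximum: the basis of a box plot."""
    q = np.percentile(values, [0, 25, 50, 75, 100])
    return dict(zip(["min", "q1", "median", "q3", "max"], q))

scores = [23, 27, 31, 31, 34, 35, 36, 38, 41, 42, 45, 52, 67]  # fabricated scores
stem_and_leaf(scores)
print(five_number_summary(scores))
```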
Close Replication. Successfully conducted exploratory analyses will suggest
potentially interesting data patterns. However, it will normally be necessary to check
on the stability of the emergent data patterns through use of confirmatory data anal-
ysis procedures. Computer-intensive resampling methods such as the bootstrap, the
jackknife, and cross-validation (Efron & Tibshirani, 1993) constitute an important
set of confirmatory procedures that are well suited to the demands of modern data
analysis. Such methods free us, as researchers, from the assumptions of orthodox
statistical theory, and permit us to gauge the reliability of chosen statistics by making
thousands, even millions, of calculations on many data points. Statistical resampling
methods like these are used to establish the consistency, or reliability, of sample
results. In doing this, they provide us with the kind of validating strategy that is
needed to achieve close replications.4
Now that psychology has finally begun to embrace exploratory data analysis,
one can hope for a corresponding increase in the companionate use of statistical
resampling methods in order to ascertain the validity of the data patterns initially
suggested by the use of exploratory methods.
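As one concrete instance of such resampling, the following sketch implements a percentile bootstrap for the sample median; the fabricated sample, the helper name bootstrap_ci, and the 10,000-resample setting are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(data, statistic, n_resamples=10_000, alpha=0.05):
    """Percentile bootstrap: resample with replacement and collect the statistic's distribution."""
    data = np.asarray(data)
    stats = np.array([
        statistic(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_resamples)
    ])
    lower, upper = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

sample = rng.normal(loc=100, scale=15, size=40)   # fabricated IQ-like scores
print(bootstrap_ci(sample, np.median))            # stability of the sample median
```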
Constructive Replication. In establishing the existence of phenomena, it is nec-
essary that science undertakes both close and constructive replications. The statistical
resampling methods just mentioned are concerned with the consistency of sample
results that help researchers achieve close, or internal, replications. By contrast, con-
structive replications are undertaken to demonstrate the extent to which results hold
across different methods, treatments, and occasions. In other words, constructive
replication is a triangulation strategy designed to ascertain the generalizability of
the results identified by successful close replication (Lindsay & Ehrenberg, 1993).
Constructive replication, in which researchers vary the salient conditions, is a time-
honored strategy for justifying claims about phenomena.
In recognition of the need to use statistical methods that are in keeping with the
practice of describing predictable phenomena, researchers should seek the generaliz-
ability of relationships rather than their statistical significance (Ehrenberg & Bound,
1993)—hence, the need to use observational and experimental studies with multiple
sets of data, observed under quite different sets of conditions. The recommended task
here is not to figure out which model best fits a single set of data but to ascertain whether
the model holds across different data sets. Seeking reproducible results through con-
structive replications, then, requires data analytic strategies that are designed to detect
significant sameness rather than significant difference.
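The following sketch, on fabricated data, illustrates what looking for significant sameness can amount to in practice: the same least-squares slope is estimated in several data sets generated under different conditions, and interest centres on whether the estimates agree rather than on the fit within any single set. The three simulated studies and their parameters are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope(x, y):
    """Least-squares slope of y on x for one data set."""
    return np.polyfit(x, y, deg=1)[0]

# Three fabricated data sets standing in for studies run under different conditions
datasets = []
for n, noise in [(50, 5.0), (80, 8.0), (30, 6.0)]:
    x = rng.uniform(0, 10, size=n)
    y = 2.0 * x + rng.normal(scale=noise, size=n)
    datasets.append((x, y))

slopes = [slope(x, y) for x, y in datasets]
print("slopes across data sets:", np.round(slopes, 2))
# A roughly constant slope across data sets is the kind of "significant sameness"
# that constructive replication looks for, rather than a significant fit in one set.
```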
The four-stage model of data analysis just outlined assists in the detection of phe-
nomena by attending in turn to data quality, pattern suggestion, pattern confirmation,
and generalization. In effect, this process is one of enumerative induction in which

4 Statistical resampling methods can be used in a hypothetico-deductive manner within ATOM in
order to test descriptive hypotheses that are suggested by exploratory data analytic work. How-
ever, this use of the hypothetico-deductive method should be distinguished from its use to evaluate
explanatory hypotheses and theories. The latter takes place outside the methodological space pro-
vided by ATOM.
one learns empirically, on a case-by-case basis, the conditions of applicability of
the empirical generalizations that represent the phenomena. Thus, as noted earlier,
the importance of inductive reasoning shown by the traditional inductive method is
shared by ATOM’s account of phenomena detection.
It bears repeating that this model of data analysis is clearly not the only way in
which phenomena detection can be achieved. In addition to the several strategies of
phenomena detection mentioned earlier, meta-analysis is a prominent example of a
distinctive use of statistical methods by behavioural scientists to aid in the detection of
phenomena. As is well-known, meta-analysis is widely used to conduct quantitative
literature reviews. It is an approach to data analysis that involves the quantitative
analysis of the data analyses of primary empirical studies. By calculating effect sizes
across primary studies in a common domain, meta-analysis helps researchers detect
general positive effects (Schmidt, 1992). By using statistical methods to ascertain the
existence of robust empirical regularities, meta-analysis can be usefully viewed as
the statistical analogue of direct experimental replication. It is in this role that meta-
analysis arguably performs its most important work in science. Contrary to the claims
made by some of its critics in psychology (e.g., Sohn, 1996), meta-analysis can be
regarded as a legitimate and important means of detecting empirical phenomena in
the behavioural sciences (Gage, 1996).
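A rough sketch of the statistical core of such a synthesis is given below: a fixed-effect, inverse-variance-weighted pooling of fabricated per-study effect sizes. The numbers and the fixed-effect model are illustrative assumptions; real meta-analyses involve further decisions (e.g., random-effects models and heterogeneity checks).

```python
import numpy as np

# Fabricated per-study effect sizes (Cohen's d) and their variances from a common domain
effects = np.array([0.42, 0.30, 0.55, 0.25, 0.38])
variances = np.array([0.020, 0.015, 0.030, 0.010, 0.025])

# Fixed-effect meta-analysis: inverse-variance weighted mean effect
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(f"pooled effect d = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```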

3.4 Theory Construction

Detecting empirical phenomena is a major goal of scientific research, and their suc-
cessful detection constitutes an important type of scientific discovery in its own right.
However, once detected, phenomena serve the important function of prompting the
search for their own understanding. This understanding is commonly met in science
by constructing relevant explanatory theories.
For inductivists, inductively grounded conclusions about phenomena are of
paramount importance. However, although inductivists often subsequently construct
theories, their theories do not provide explanations of phenomena that appeal to
causal mechanisms. Instead, their theories function as tools or instruments concerned
with the description, economical ordering, and prediction of empirical relationships.
For hypothetico-deductivists, theories are said to be generated amethodologically
through free use of the imagination (Hempel, 1966; Popper, 1959). Theories obtained
in this manner are often regarded as explanatory in nature, but their worth is princi-
pally judged in terms of their predictive success, rather than their ability to explain
empirical phenomena.
ATOM, by contrast, maintains that theory construction is neither inductive nor
amethodological. For it, theory construction comprises three methodological phases:
theory generation, theory development, and theory appraisal. These phases do not
occur in a strictly temporal order, for although theory generation precedes theory
development, theory appraisal begins with theory generation, continues with theory
development, and extends to the comparative appraisal of well-developed theories.
Further, ATOM’s characterization of theory construction is abductive through and
through: Theory generation, theory development, and theory appraisal are all por-
trayed as abductive, or explanatory, undertakings, although the form of abduction
is different in each case. The account of theory construction that follows articulates
the abductive character of each of the three phases.

3.4.1 Theory Generation

Abductive Inference. This section begins with a general characterization of the
type of abductive reasoning that is often involved in theory generation. It is followed
by a discussion of the method of exploratory factor analysis that is presented as
a prominent example in psychology of an abductive method of theory generation.
The discussion of exploratory factor analysis, therefore, serves as an optional and
restricted account of theory generation for ATOM. The characterizations of abduction
and factor analysis are adapted from Haig (2005).
The basic idea of abductive inference can be usefully traced back to the Amer-
ican philosopher and scientist Charles Sanders Peirce (1931–1958). More recent
developments in the fields of philosophy of science and artificial intelligence (e.g.,
Josephson & Josephson, 1994; Magnani, 2001, 2009; Thagard 1988, 1992) have built
on Peirce’s ideas to significantly advance researchers’ understanding of abductive
reasoning.
Abduction is a form of reasoning involved in both the generation and evaluation
of explanatory hypotheses and theories. For Peirce (1931–1958), “abduction consists
in studying the facts and devising a theory to explain them” (Vol. 5, 1934, p. 90). It
is “[t]he first starting of an hypothesis and the entertaining of it, whether as a simple
interrogation or with any degree of confidence” (Vol. 6, 1934, p. 358).
Traditionally, abduction was thought to take its place at the inception of scien-
tific hypotheses, where it often involves making an inference from puzzling facts
to hypotheses that might well explain them. However, there are a number of differ-
ent ways in which explanatory hypotheses can be abductively obtained. In focusing
on the generation of hypotheses, Thagard (1988) helpfully distinguished between
existential and analogical abduction. As he put it, “Existential abduction postulates
the existence of previously unknown objects, such as new planets, …[whereas] ana-
logical abduction uses past cases of hypothesis formation to generate hypotheses
similar to existing ones” (p. 54). Existential abduction is the type of abduction cen-
trally involved in the factor analytic generation of explanatory hypotheses. Later, it
is shown that the theory development phase of ATOM adopts a modelling strategy
that involves analogical abduction, and its approach to comparative theory appraisal
uses a further form of abduction known as inference to the best explanation.
Existential abduction can be characterized in the following general schema:
The surprising empirical phenomenon, P, is detected.
But if hypothesis H were approximately true, and the relevant auxiliary knowledge, A, was
invoked, then P would follow as a matter of course.
Hence, there are grounds for judging H to be initially plausible and worthy of further pursuit.

This schematic characterization of existential abduction, as it occurs within the
theory generation phase of ATOM, is coarse grained and far from sufficient. It should,
therefore, be understood in the light of the following supplementary remarks.
First, as indicated in the discussion of phenomena detection, the facts to be
explained in science are not normally particular events, but empirical generaliza-
tions or phenomena, and, strictly speaking, they are not typically observed.
Second, confirmation theory in the philosophy of science, and the nature of the
hypothetico-deductive method in particular, make it clear that the facts, or phenom-
ena, are derived not just from the proposed theory but from that theory in conjunction
with accepted auxiliary claims taken from relevant background knowledge.
Third, the antecedent of the conditional assertion in the second premise of the
argument schema should not be taken to imply that abductive inferences produce
truths as a matter of course. Although science aims to provide true, or approximately
true, theories of the world, the supposition that the proposed theory be true is not
a requirement for the derivation of the relevant facts. All that is required is that the
theory be plausible enough to be provisionally accepted. It is important to distinguish
between truth, understood as a guiding ideal for science (a goal that we, as scientists,
strive for but never fully reach), and the justification of theories, which is based on
epistemic criteria such as predictive success, simplicity, and explanatory breadth. As
proxies for truth, justificatory criteria such as these are indicative of truth, but they
are not constitutive of truth.
Fourth, it should be noted that the conclusion of the argument schema does not
assert that the hypothesis itself is true, only that there are grounds for thinking that
the proposed hypothesis might be true. This is a weaker claim that allows one to
think of a sound abductive argument as delivering a judgment that the hypothesis is
initially plausible and worthy of further pursuit. Assessments of initial plausibility
constitute a form of justification that involves reasoning from warranted premises
to an acceptance of the knowledge claims in question. This form of justification is
discussed later in the section on ATOM and Scientific Methodology.
Fifth, the schema depicting abductive inference focuses on its logical form only. It
is, therefore, of limited value in understanding the theory construction process unless
it is combined with a set of regulative constraints that enable us to view existential
abduction as an inference, not just to any conceivable explanation, but to a plausible
explanation. The description of research problems presented later indicates that the
constraints that regulate the abductive generation of scientific theories comprise a
host of heuristics, rules, and principles that govern what counts as good explanations.
Exploratory Factor Analysis. Unfortunately, there is a dearth of codified abduc-
tive methods available for ready use in the behavioural sciences. A notable exception
is the method of exploratory factor analysis. Exploratory factor analysis is designed
to facilitate the postulation of latent variables that are thought to underlie patterns
of correlations in new domains of manifest variables. It does this by using multiple
regression and partial correlation theory to model sets of manifest or observed vari-
ables in terms of linear functions of other sets of latent, or unobserved, variables.
Although the nature and purpose of exploratory factor analysis is a matter of some
debate, it can plausibly be understood as an abductive method of theory genera-
tion (Haig, 2005; Rozeboom, 1972; Stephenson, 1961).5 This characterization of the
inferential nature of exploratory factor analysis is seldom given in expositions of the
method; however, it is an interpretation that coheres well with its general acceptance
as a latent variable method.
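The statistical core of the method can be sketched with scikit-learn's FactorAnalysis on fabricated test scores generated from a single latent ability. This is a minimal illustration under stated assumptions, not the author's procedure: the data, the loadings, and the one-factor choice are invented, and the substantive interpretation of the recovered factor (the existential abduction proper) remains the researcher's task.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Fabricated test battery: six manifest scores driven by one latent ability plus noise
n_people = 500
latent = rng.normal(size=n_people)
loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65, 0.7])
scores = latent[:, None] * loadings + rng.normal(scale=0.5, size=(n_people, 6))

# One-factor exploratory model: manifest variables as linear functions of a latent variable
fa = FactorAnalysis(n_components=1).fit(scores)
print(np.round(fa.components_, 2))   # estimated loadings on the postulated latent factor
```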
On this interpretation, exploratory factor analysis facilitates the achievement of
useful existential abductions, although for this to happen, the method must be used
in an exemplary manner (Fabrigar, Wegener, MacCallum, & Strahan, 1999; Preacher
& MacCallum, 2003) with circumspect interpretation of the factors. As noted earlier,
existential abductions enable us, as researchers, to hypothesize the existence of enti-
ties previously unknown to us. The innumerable examples of existential abduction
in science include the initial postulation of hidden entities such as atoms, genes,
tectonic plates, and personality traits. In cases like these, the primary thrust of the
initial abductive inferences is to claims about the existence of theoretical entities6
in order to explain empirical facts or phenomena. Similarly, the hypotheses given
to us through the use of exploratory factor analysis postulate the existence of latent
variables such as Spearman’s g and extraversion. It remains for further research to
elaborate on the first rudimentary conception of these variables.
The factor analytic use of existential abduction to infer the existence of, say, the
theoretical entity g can be coarsely reconstructed in accordance with the aforemen-
tioned schema for abductive inference along the following lines:
The surprising empirical phenomenon known as the positive manifold7 is identified.
If g exists, and it is validly and reliably measured by a Wechsler intelligence scale (and/or
some other objective test), then the positive manifold would follow as a matter of course.
Hence, there are grounds for judging the hypothesis of g to be initially plausible and worthy
of further pursuit.

This example serves to illustrate the point that the method of exploratory factor
analysis proper should be taken to include the factor analyst’s substantive interpre-
tation of the statistical factors. It is important to realize that the factor analyst has

5 Some take exploratory factor analysis to be a data analytic method, only. My principal reason for
assigning a theory generation role to exploratory factor analysis is based on the belief that factors
are best regarded as latent common causes and that inference to such causes is abductive in nature
(Haig, 2005).
6 The term entity is used as a catch-all ontological term that covers a miscellany of properties that
includes states, processes, and events. Although existential abductions in exploratory factor analysis
are to properties expressed as the values of variables, not all existential abductions need take this
form.
7 The positive manifold is a term that is sometimes used to refer to the striking, and well-established,
fact that almost all different tests of ability correlate positively with one another to a significant
degree. Despite its historical link to Spearman’s theory of general intelligence, the positive manifold
can be taken as evidence for the existence of two or more factors.
to resort to his or her own abductive powers when reasoning from correlational
data patterns to underlying common causes. Note that the schema for abductive
inference, and its application to the generation of Spearman’s hypothesis of g, are
concerned with the form of the arguments involved, not with the actual generation
of the explanatory hypotheses. In each case, the explanatory hypothesis is given in
the second premise of the argument. An account of the genesis of the explanatory
hypothesis must, therefore, be furnished by some other means. It is plausible to
suggest that reasoning to explanatory hypotheses trades on human beings’ evolved
cognitive ability to abductively generate such hypotheses. Peirce (1931–1958) him-
self maintained that the human ability to engage readily in abductive reasoning was
founded on a guessing instinct that has its origins in evolution. More suggestively,
Carruthers (2002) maintained that our ability, as humans, to engage in explanatory
inference is almost certainly largely innate, and he speculated that it may be an
adaptation selected for because of its crucial role in the fitness-enhancing activities
of our ancestors such as hunting and tracking. Whatever its origin, an informative
methodological characterization of the abductive nature of factor analytic inference
must appeal to the scientist’s own psychological resources as well as those of logic.
Exploratory factor analysis, then, can usefully function as a submethod of ATOM
by being located in that theory’s context of theory generation. Although it exemplifies
well the character of existential abduction, exploratory factor analysis is clearly
not an all-purpose method for abductively generating explanatory hypotheses and
theories. With its focus on common factors, it can properly serve as a generator of
elementary theories only in those multivariate domains where there are common
causal structures.
Understood in the context of theory generation, methods of existential abduction
like exploratory factor analysis should not be expected to achieve highly developed
and well-validated scientific theories. At best, they deliver rudimentary theories that
have initial plausibility. It is important to realize that these abductive methods enable
us to justify the initial plausibility of the theories they spawn. The very process of
the abductive generation of theories has a bearing on the first determinations of their
worth, in that we appeal to the soundness of the abductive arguments used in the
introduction of theories in order to evaluate their early epistemic promise (Whitt,
1992).
Relatedly, the nascent theories bequeathed to us by methods like exploratory factor
analysis postulate the existence of hidden causal mechanisms, but they do not pro-
vide an informative characterization of their nature. Such theories have the status
of dispositional theories in that they provide us with oblique characterizations of
the properties we attribute to things by way of their presumed effects under spec-
ified conditions (Mumford, 1998). A move beyond the rudimentary nature of their
dispositional characterization requires subsequent elaboration. It is to a strategy for
developing such theories that I now turn.
3.4.2 Theory Development

Models in Science. The standard inductive and hypothetico-deductive views of scientific
method give little attention to the process of theory development. The use
of traditional inductive method leads to theories that are organized summaries of
their constituent empirical generalizations, and the orthodox hypothetico-deductive
method assumes that hypotheses and theories emerge fully formed, ready for imme-
diate testing.
In contrast to these two theories of scientific method, ATOM is concerned with the
development of explanatory theories. As just noted, the theories it generates through
existential abduction are dispositional in nature, and explicit provision has to be made
for their development before they are systematically evaluated against rival theories
with respect to their explanatory goodness. As noted earlier, ATOM recommends that
this be done by building analogical models of the causal mechanisms in question.
There is a long-held view (e.g., Duhem, 1914/1954), still popular in some quar-
ters, that analogical models are dispensable aids to formulating and understanding
scientific theories. This negative view of the cognitive value of analogical models in
science contrasts with the positive view that they are an essential part of the develop-
ment of theories (Campbell, 1920; Harré, 1976; Hesse, 1966). Contemporary studies
of scientific practice, including philosophy of science, frequently accord analogical
models a genuine, indispensable, cognitive role in science (e.g., Abrantes, 1999;
Giere, 1988; Harré, 1988).
Science uses different types of models for different purposes. For example, iconic
models8 are constructed to provide a good resemblance to the object or property
being modeled, mathematical models offer an abstract symbolic representation of
the domain of interest, and analogue models express relevant relations of analogy
between the model and the reality being represented. Harré (1970) contains a useful
taxonomy of this variety. Although it is acknowledged that there is a need to use
a variety of different modelling strategies in science, ATOM adopts the strategy of
using analogical models to help develop explanatory theories. Because analogical
modelling is a strategy that increases the content of explanatory theories, its reasoning
takes the form of analogical abduction.
Analogical Modelling. The idea that analogical models are important in the devel-
opment of scientific theories can be traced back to N. R. Campbell (1920). Although
analogies are not always used in scientific explanation, their role in theory devel-
opment within ATOM is of central importance. The need for analogical modelling
within ATOM stems from two features of its characterization of theory generation.
First, as with exploratory factor analysis, the abductive generation of theories takes
the form of existential abduction, through which the existence of theoretical enti-

8 More precisely, iconic models are constructed as representations of reality, real or imagined. In
ATOM they stand in for the hypothesized causal mechanisms. Although representations, iconic
models are themselves things, structures, or processes that correspond in some way with things,
structures, or processes that are the objects of modelling. They are, therefore, the sorts of things
sentences can be about (Harré, 1976).
ties is postulated. Therefore, an appropriate research strategy is required to learn
about the nature of these hidden entities. For this, the strategy of analogical mod-
elling is used to do the required elaborative work. Second, recall that the postulation
of theoretical entities through existential abduction confers an assessment of initial
plausibility on those postulations. However, for claims about those latent entities to
have the status of genuine knowledge, further evaluative work has to be done. The
construction of appropriate analogical models serves to assess the plausibility of our
expanded understanding, as well as to expand our understanding of those entities.
For ATOM, theory development expands our knowledge of the nature of our
theories’ causal mechanisms. This is achieved by using the pragmatic strategy of
conceiving of these unknown mechanisms in terms of what is already familiar and
well understood. Well known examples of models that have resulted from this strat-
egy are the molecular model of gases, based on an analogy with billiard balls in a
container; the model of natural selection, based on an analogy with artificial selection;
and the computational model of the mind, based on an analogy with the computer.
To understand the nature of analogical modelling, it is helpful to distinguish
between a model, the source of the model, and the subject of the model (Harré, 1976;
Hesse, 1966). From the known nature and behaviour of the source, one builds an
analogical model of the unknown subject or causal mechanism. To take the biolog-
ical example just mentioned, Darwin fashioned his model of the subject of natural
selection by reasoning by analogy from the source of the known nature and behaviour
of the process of artificial selection. In this way, analogical models play an important
creative role in theory development. However, this role requires the source, from
which the model is drawn, to be different from the subject that is modeled. For
example, the modern computer is a well-known source for the modelling of human
cognition, though our cognitive apparatus is not generally thought to be a real com-
puter. Models in which the source and the subject are different are sometimes called
paramorphs. Models in which the source and the subject are the same are sometimes
called homoeomorphs (Harré, 1976). The paramorph can be an iconic, or pictorial,
representation of real or imagined things. It is iconic paramorphs that feature centrally
in the creative process of theory development through analogical modelling.
In evaluating the aptness of an analogical model, the analogy between its source
and subject must be assessed, and for this one needs to consider the structure of
analogies. The structure of analogies in models comprises a positive analogy in
which the source and subject are alike, a negative analogy in which the source and
subject are unlike, and a neutral analogy where we have no reliable knowledge about
matched attributes in the source and subject of the model. The negative analogy is
irrelevant for purposes of analogical modelling. Because we are essentially ignorant
of the nature of the hypothetical mechanism of the subject apart from our knowledge
of the source of the model, we are unable to specify any negative analogy between
the model and the mechanism being modeled. Thus, in considering the plausibility of
an analogical model, one considers the balance of the positive and neutral analogies
(Harré, 1976). This is where the relevance of the source for the model is spelled out.
As is shown in the next section, ATOM subscribes to a view of comparative theory
appraisal that takes good analogies as a criterion of explanatory worth.
Analogical reasoning is important in science and clearly lies at the inferential heart
of analogical modelling. However, as noted above, because the theories fashioned
by ATOM are explanatory theories, the analogical models involved in theory devel-
opment will involve explanatory analogical reasoning, that is, analogical abduction.
The reasoning involved in analogical abduction can be simply stated in the form of
a general argument schema as follows:
Hypothesis H* about property Q was correct in situation S1.
Situation S1 is like the situation S2 in relevant respects.
Therefore, an analogue of H* might be appropriate in situation S2.

Darwin’s theory or model of natural selection, and the other aforementioned ana-
logical models, can plausibly be construed to be based on analogical abduction. The
general argument for analogical abduction just given can be rewritten in simplified
form for Darwin’s case as follows:
The hypothesis of evolution by artificial selection was correct in cases of selective domestic
breeding.
Cases of selective domestic breeding are like cases of the natural evolution of species with
respect to the selection process.
Therefore, by analogy with the hypothesis of artificial selection, the hypothesis of natural
selection might be appropriate in situations where variants are not deliberately selected for.

The methodology of modelling through analogical abduction is quite well devel-
oped and provides a general, though useful, source of guidance for behavioural sci-
entists. Instructively for psychology, Harré (Harré & Secord, 1972) followed his own
account of analogical modelling to construct a rule model of microsocial interaction
in social psychology. Here, Goffman’s (1969) dramaturgical perspective provides
the source model for understanding the underlying causal mechanisms involved in
the production of ceremonial, argumentative, and other forms of social interaction.
Thus far, it has been suggested that, for ATOM, the epistemic worth of hypotheses
and theories generated by existential abduction is evaluated in terms of their initial
plausibility and that these assessments are subsequently augmented by judgments of
the appropriateness of the analogies that function as source models for their devel-
opment. However, with ATOM, well-developed theories are appraised further with
respect to a number of additional criteria that are used when judgments about the
best of competing explanatory theories are made.

3.4.3 Theory Appraisal

Contemporary scientific methodology boasts a number of general approaches to
the evaluation of scientific theories. Prominent among these are the hypothetico-
deductive method, which evaluates theories in terms of predictive success; Bayesian
accounts of confirmation, which assign probabilities to hypotheses via Bayes’s theo-
rem; and inference to the best explanation, which accepts a theory when it is judged
to provide a better explanation of the evidence than its rivals do. Of these three
approaches, the hypothetico-deductive method is by far the most widely used in psy-
chology (Cattell, 1966; Rorer, 1991; Rozeboom, 1999). Despite some urgings (e.g.,
Edwards, Lindman, & Savage, 1963; Lee & Wagenmakers, 2005; Rorer, 1991),
psychologists have been reluctant to use Bayesian statistical methods to test their
research hypotheses, preferring instead to perpetuate the orthodoxy of classical sta-
tistical significance testing within a hypothetico-deductive framework. Despite the
fact that inference to the best explanation is frequently used in science, and exten-
sively discussed in the philosophy of science, it is virtually unheard of, let alone
used, to appraise theories in psychology.
True to its name, ATOM adopts an abductive perspective on theory evaluation
by using a method of inference to the best explanation. It is shown shortly that, in
contrast to the hypothetico-deductive method, ATOM adopts an approach to inference
to the best explanation that measures empirical adequacy in terms of explanatory
breadth, not predictive success, and, in contrast with Bayesianism, it takes theory
evaluation to be an exercise that focuses directly on explanation, not a statistical
undertaking in which one assigns probabilities to theories. The basic justification for
using inference to the best explanation when evaluating explanatory theories is that
it is the only method researchers have that explicitly assesses such theories in terms
of the scientific goal of explanatory worth.
In considering theory evaluation in ATOM, the idea of inference to the best expla-
nation is introduced. Then, a well-developed method of inference to the best expla-
nation is presented and discussed. Thereafter, inference to the best explanation is
defended as an important perspective on theory evaluation.
Inference to the Best Explanation. In accordance with its name, inference to the
best explanation is founded on the belief that much of what we know about the world
is based on considerations of explanatory worth. Being concerned with explanatory
reasoning, inference to the best explanation is a form of abduction. As mentioned
earlier, it involves accepting a theory when it is judged to provide a better explanation
of the evidence than its rivals do. In science, inference to the best explanation is often
used to adjudicate between well-developed, competing theories (Thagard, 1988).
A number of writers have elucidated the notion of inference to the best explanation
(e.g., Day & Kincaid, 1994; Lipton, 2004; Thagard, 1988). The most prominent
account is due to Lipton, who suggested that inference to the best explanation is
not an inference to the “likeliest” explanation, but to the “loveliest” explanation,
where the loveliest explanation comprises the various explanatory virtues such as
theoretical elegance, simplicity, and coherence; it is the explanatory virtues that
provide the guide to inference about causes in science. However, the most developed
formulation of inference to the best explanation as a method of theory evaluation
was provided by Thagard (1992). Thagard’s formulation of inference to the best
explanation identifies, and systematically uses, a number of evaluative criteria in a
way that has been shown to produce reliable judgments of best explanation in science.
For this reason it is adopted as the method of choice for theory evaluation in ATOM.
The Theory of Explanatory Coherence. Thagard’s (1992) account of inference
to the best explanation is known as the theory of explanatory coherence (TEC).
According to TEC, inference to the best explanation is centrally concerned with
establishing relations of explanatory coherence. To infer that a theory is the best
explanation is to judge it as more explanatorily coherent than its rivals. TEC is not
a general theory of coherence that subsumes different forms of coherence such as
logical and probabilistic coherence. Rather, it is a theory of explanatory coherence in
which the propositions hold together because of their explanatory relations. Relations
of explanatory coherence are established through the operation of seven principles.
These principles are symmetry, explanation, analogy, data priority, contradiction,
competition, and acceptability. The determination of the explanatory coherence of a
theory is made in terms of three criteria: consilience, simplicity, and analogy (Tha-
gard, 1988). I next consider the criteria, and then the principles. The criterion of
consilience, or explanatory breadth, is the most important criterion for choosing the
best explanation. It captures the idea that a theory is more explanatorily coherent
than its rivals if it explains a greater range of facts. For example, Darwin’s theory of
evolution explained a wide variety of facts that could not be explained by the accepted
creationist explanation of the time. Consilience can be static or dynamic. Static con-
silience judges all the different types of facts available. Dynamic consilience obtains
when a theory comes to explain more classes of fact than it did at the time of its
inception. A successful new prediction that is also an explanation can often be taken
as a sign of dynamic consilience.
The notion of simplicity that Thagard (1988) deemed the most appropriate for
theory choice is a pragmatic notion that is closely related to explanation; it is captured
by the idea that preference should be given to theories that make fewer special or ad
hoc assumptions. Thagard regarded simplicity as the most important constraint on
consilience; one should not sacrifice simplicity through ad hoc adjustments to a theory
in order to enhance its consilience. Darwin believed that the auxiliary hypotheses
he invoked to explain facts, such as the gaps in the fossil record, offered a simpler
explanation than the alternative creationist account.
Finally, analogy is an important criterion of inference to the best explanation
because it can improve the explanation offered by a theory. Thus, as noted in the
earlier discussion of analogical modelling, the explanatory value of Darwin’s theory
of natural selection was enhanced by its analogical connection to the already under-
stood process of artificial selection. Explanations are judged more coherent if they
are supported by analogy to theories that scientists already find credible.
Within TEC, each of the three criteria of explanatory breadth, simplicity, and
analogy is embedded in one or more of the seven principles. Thagard (1992, 2000)
formulated these principles in both formal and informal terms. They are stated here
informally in his words as follows (Thagard, 2000):
1. Symmetry. Explanatory coherence is a symmetric relation, unlike, say, condi-
tional probability. That is, two propositions p and q cohere with each other
equally.
2. Explanation. (a) A hypothesis coheres with what it explains, which can either be
evidence or another hypothesis. (b) Hypotheses that together explain some other
proposition cohere with each other. (c) The more hypotheses it takes to explain
something, the lower the degree of coherence.
3. Analogy. Similar hypotheses that explain similar pieces of evidence cohere.
4. Data Priority. Propositions that describe the results of observations have a degree
of acceptability on their own.
5. Contradiction. Contradictory propositions are incoherent with each other.
6. Competition. If p and q both explain a proposition, and if p and q are not explana-
torily connected, then p and q are incoherent with each other (p and q are explana-
torily connected if one explains the other or if together they explain something).
7. Acceptance. The acceptability of a proposition in a system of propositions
depends on its coherence with them. (p. 43)
Limitations of space preclude a discussion of these principles; however, the fol-
lowing points should be noted. The principle of explanation is the most important
principle in determining explanatory coherence because it establishes most of the
coherence relations. The principle of analogy is the same as the criterion of analogy,
where the analogy must be explanatory in nature. With the principle of data pri-
ority, the reliability of claims about observations and generalizations, or empirical
phenomena, will often be sufficient grounds for their acceptance. The principle of
competition allows noncontradictory theories to compete with each other.9 Finally,
with the principle of acceptance, the overall coherence of a theory is obtained by
considering the pairwise coherence relations through use of Principles 1–6.
The principles of TEC combine in a computer program, ECHO (Explanatory
Coherence by Harmany10 Optimization), to provide judgments of the explanatory
coherence of competing theories. This computer program is connectionist in nature
and uses parallel constraint satisfaction to accept and reject theories based on their
explanatory coherence.
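To give a concrete, if highly simplified, sense of how such a program can work, the following is an illustrative sketch in Python of an ECHO-style parallel constraint satisfaction network. It is a sketch only, not Thagard's program: the propositions, weights, and parameter values are hypothetical; excitatory links implement the explanation principle, inhibitory links implement contradiction and competition, a small external input implements data priority, and acceptance is read off the settled activations.

# Illustrative ECHO-style parallel constraint satisfaction (not Thagard's program).
# Hypothetical input: H1 and H2 are rival hypotheses; E1 and E2 are evidence.
import itertools

units = ["H1", "H2", "E1", "E2"]
explains = {"H1": ["E1", "E2"],      # H1 explains both pieces of evidence (greater breadth)
            "H2": ["E1"]}            # H2 explains only one
competitors = [("H1", "H2")]         # rivals that are not explanatorily connected
evidence = ["E1", "E2"]
EXCIT, INHIB, DATA, DECAY = 0.04, -0.06, 0.05, 0.05   # hypothetical parameter values

# Symmetric weights (Principle 1); excitatory links for explanation (Principle 2),
# inhibitory links for contradiction and competition (Principles 5 and 6). Joint
# explanations would divide the excitatory weight among the cooperating hypotheses.
w = {pair: 0.0 for pair in itertools.permutations(units, 2)}
for h, facts in explains.items():
    for e in facts:
        w[(h, e)] = w[(e, h)] = EXCIT
for p, q in competitors:
    w[(p, q)] = w[(q, p)] = INHIB

a = {u: 0.01 for u in units}         # small starting activations
for _ in range(200):                 # let the network settle
    updated = {}
    for u in units:
        net = sum(w[(v, u)] * a[v] for v in units if v != u)
        if u in evidence:            # data priority (Principle 4): evidence units
            net += DATA              # receive a small independent excitatory input
        delta = net * (1.0 - a[u]) if net > 0 else net * (a[u] + 1.0)
        updated[u] = max(-1.0, min(1.0, a[u] * (1 - DECAY) + delta))
    a = updated

for u in units:                      # acceptance (Principle 7): positive activation
    print(u, round(a[u], 3), "accepted" if a[u] > 0 else "rejected")

On this toy input, the hypothesis with the greater explanatory breadth (H1) should settle at a higher activation than its rival, illustrating how consilience drives the judgment of best explanation.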
The theory of explanatory coherence has a number of virtues that make it an
attractive theory of inference to the best explanation: It satisfies the demand for
justification by appeal to explanatory considerations rather than predictive success;
it takes theory evaluation to be a comparative matter; it can be readily implemented
by, and indeed is instantiated in, the computer program, ECHO, while still leaving
an important place for judgment by the researcher; and it effectively accounts for
a number of important episodes of theory assessment in the history of science. In
short, TEC and ECHO combine in a successful method of explanatory coherence
that enables researchers to make judgments of the best of competing explanatory
theories. Thagard (1992) is the definitive source for a detailed explication of the
theory of explanatory coherence.
Psychology is replete with competing theories that might usefully be evaluated
with respect to their explanatory coherence. Durrant and Haig (2001) hinted at how
two competing theories of language evolution might be judged in terms of their

9 Inthe principles of symmetry and competition, p and q are to be understood as propositions


(hypotheses or evidence statements) within a theory (system of propositions).
10 The spelling of Harmany is deliberate, being a tribute to Harman (1965), who coined the term

inference to the best explanation and introduced the corresponding idea to modern philosophy
3.4 Theory Construction 55

explanatory coherence. However, examples of the full use of TEC to appraise the
best of competing explanatory theories in the behavioural sciences have yet to be
provided.

3.5 Research Problems

A number of authors (e.g., Haig, 1987; Laudan, 1977; Nickles, 1981) have stressed
the value of viewing scientific inquiry as a problem-solving endeavor. It will be
recalled that the overview of ATOM indicated the method’s commitment to the
notion of a research problem. This acknowledgment of the importance of research
problems for inquiry contrasts with the orthodox inductive and hypothetico-deductive
accounts of method, neither of which speaks of problem solving as an essential
part of its characterization. In an effort to depict scientific inquiry as a problem-
solving endeavor, ATOM uses a constraint-inclusion view of research problems
(Haig, 1987; Nickles, 1981). The idea of problems as constraints has been taken from
the problem-solving literature in cognitive psychology (Simon, 1977) and groomed
for a methodological role. Briefly, the constraint-inclusion theory depicts a research
problem as comprising all the constraints on the solution to that problem, along
with the demand that the solution be found. With the constraint-inclusion theory, the
constraints do not lie outside the problem but are constitutive of the problem itself;
they actually serve to characterize the problem and give it structure. The explicit
demand that the solution be found is prompted by a consideration of the aims of the
research, the pursuit of which is intended to fill the outstanding gaps in the problem’s
structure.
Note that all relevant constraints are included in a problem’s formulation. This is
because each constraint contributes to a characterization of the problem by helping to
rule out some solutions as inadmissible. However, at any one time, only a manageable
subset of the problem’s constraints will be relevant to the specific research task at
hand. Also, by including all the constraints in the problem’s articulation, the problem
enables the researcher to direct inquiry effectively by pointing the way to its own
solution. In a very real sense, stating the problem is half the solution!
The constraint-inclusion account of problems stresses the fact that in good scien-
tific research, problems typically evolve from an ill-structured state and eventually
attain a degree of well-formedness such that their solution becomes possible. From
the constraint-inclusion perspective, a problem will be ill-structured to the extent that
it lacks the constraints required for its solution. Because the most important research
problems will be decidedly ill-structured, we can say of scientific inquiry that its
basic purpose is to better structure our research problems by building in the various
required constraints as our research proceeds. It should be emphasized that the prob-
lems dimension of ATOM is not a temporal phase to be dealt with by the researcher
before moving on to other phases such as observing and hypothesizing. Instead,
the researcher deals with scientific problems all the time; problems are generated,
selected for consideration, developed, and modified in the course of inquiry.
Across the various research phases of ATOM there will be numerous problems
of varying degrees of specificity to articulate and solve. For example, the success-
ful detection of an empirical phenomenon produces an important new constraint on
the subsequent explanatory efforts devised to understand that constraint; until the
relevant phenomenon, or phenomena, are detected, one will not really know what
the explanatory problem is. Of course, constraints abound in theory construction.
For example, constraints that regulate the abductive generation of new theories will
include methodological guides (e.g., give preference to theories that are simpler, and
that have greater explanatory breadth), aim-oriented guides (e.g., theories must be
of an explanatory kind that appeals to latent causal mechanisms), and metaphysical
principles (e.g., social psychological theories must acknowledge humankind’s essen-
tial rule-governed nature). The importance of research problems, viewed as sets of
constraints, is that they function as the “range riders” of inquiry that provide ATOM
with the operative force to guide inquiry. The constraints themselves comprise rel-
evant substantive knowledge as well as heuristics, rules, and principles. Thus, the
constraint-inclusion account of problems serves as a vehicle for bringing relevant
background knowledge to bear on the various research tasks subsumed by ATOM.
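As a purely illustrative reading of the constraint-inclusion idea, the following Python sketch represents a research problem as the set of constraints it currently includes, together with the demand that a solution be found; candidate solutions are admissible only if they satisfy every constraint. The class, constraint, and candidate names are hypothetical.

# Illustrative reading of the constraint-inclusion view of research problems.
# The problem, constraints, and candidate theories below are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Candidate = Dict[str, object]

@dataclass
class ResearchProblem:
    demand: str                                      # the demand that a solution be found
    constraints: List[Callable[[Candidate], bool]] = field(default_factory=list)

    def add_constraint(self, constraint: Callable[[Candidate], bool]) -> None:
        # Building in a further constraint gives the problem more structure.
        self.constraints.append(constraint)

    def admissible(self, candidate: Candidate) -> bool:
        # Each constraint helps rule out some candidate solutions as inadmissible.
        return all(constraint(candidate) for constraint in self.constraints)

problem = ResearchProblem(demand="explain the detected correlational phenomenon")
problem.add_constraint(lambda t: t["explains_phenomenon"])                 # methodological guide
problem.add_constraint(lambda t: t["kind"] == "latent causal mechanism")   # aim-oriented guide
problem.add_constraint(lambda t: t["ad_hoc_assumptions"] <= 1)             # prefer simpler theories

candidates = [
    {"name": "T1", "kind": "latent causal mechanism",
     "explains_phenomenon": True, "ad_hoc_assumptions": 0},
    {"name": "T2", "kind": "empirical generalization",
     "explains_phenomenon": True, "ad_hoc_assumptions": 0},
]
print([c["name"] for c in candidates if problem.admissible(c)])            # only T1 remains

On this toy input only the candidate that satisfies every constraint remains admissible, which is the computational analogue of a well-structured problem pointing the way to its own solution.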

3.6 ATOM and Scientific Methodology

Before concluding the chapter, I want to identify and briefly discuss two important
methodological ideas that are part of the deep structure of ATOM. These ideas are
presented in two contrasts: (a) generative and consequentialist methodology and (b)
reliabilist and coherentist justification.

3.6.1 Generative and Consequentialist Methodology

Modern scientific methodology promotes two different research strategies that can
lead to justified knowledge claims. These are known as consequentialist and genera-
tive strategies (Nickles, 1987). Consequentialist strategies justify knowledge claims
by focusing on their consequences. By contrast, generative strategies justify knowl-
edge claims in terms of the processes that produce them. Although consequentialist
strategies are used and promoted more widely in contemporary science, both types
of strategy are required in an adequate conception of research methodology. Two
important features of ATOM are that it is underwritten by a methodology that pro-
motes both generative and consequentialist research strategies in the detection of
phenomena, and generative research strategies in the construction of explanatory
theories.
Consequentialist reasoning receives a heavy emphasis in behavioural science
research through use of the hypothetico-deductive method, along with the null hypothesis
significance testing and structural equation modelling employed within it. Consequentialist methods
reason from the knowledge claims in question to their testable consequences. As
such, they confer a retrospective justification on the theories they seek to confirm.
In contrast to consequentialist methods, generative methods reason from war-
ranted premises to an acceptance of the knowledge claims in question. Exploratory
factor analysis is a good example of a method of generative justification. It affords
researchers generative justifications by helping them reason forward from established
correlational data patterns to the rudimentary explanatory theories that the method
generates. As noted earlier, it is judgments of initial plausibility that constitute the
generative justifications afforded by exploratory factor analysis. Generative justifi-
cations are forward looking because they are concerned with heuristic appraisals of
the prospective worth of theories.
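The following sketch illustrates this forward, generative pattern of reasoning with simulated data and scikit-learn's FactorAnalysis; the variable values and loadings are hypothetical, and the factor recovered is, in ATOM's terms, only a rudimentary, existentially abduced common cause whose initial plausibility the researcher must still judge.

# Illustrative sketch: from a correlational data pattern to a rudimentary
# common-cause hypothesis via exploratory factor analysis (simulated data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)                          # the to-be-postulated common cause
true_loadings = np.array([0.8, 0.7, 0.6, 0.5])       # hypothetical loadings on four tests
tests = latent[:, None] * true_loadings + rng.normal(scale=0.6, size=(n, 4))

# The established data pattern: the four test scores intercorrelate positively.
print(np.round(np.corrcoef(tests, rowvar=False), 2))

# Codified existential abduction: postulate a single latent factor that would,
# if it existed, explain the pattern of correlations among the tests.
fa = FactorAnalysis(n_components=1, random_state=0).fit(tests)
print(np.round(fa.components_, 2))                   # estimated loadings on that factor

The point of the sketch is the direction of the reasoning: from an established correlational pattern forward to a postulated common cause, rather than from a conjectured theory to its tested consequences.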

3.6.2 Reliabilist and Coherentist Justification

In addition to embracing both generative and consequentialist methodologies, ATOM
uses two different theories of justification, although it does so in a complementary
way. These approaches to justification are known as reliabilism and coherentism.
Reliabilism asserts that a belief is justified to the extent that it is acquired by reli-
able processes or methods (e.g., Goldman, 1986). For example, under appropriate
conditions, beliefs produced by perception, verbal reports of mental processes, and
even sound argumentation can all be justified by the reliable processes of their pro-
duction. ATOM makes heavy use of reliability judgments because they furnish the
appropriate type of justification for claims about empirical phenomena.
For example, as noted earlier, statistical resampling methods such as the bootstrap,
and the strategy of constructive replication, are different sorts of consistency tests
through which researchers seek to establish claims that data provide reliable evidence
for the existence of phenomena.11
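A minimal sketch of such a consistency test, using simulated data and an arbitrary criterion, is given below: a provisional correlational pattern is recomputed over bootstrap resamples of the cases, and it is treated as a candidate phenomenon only if it recurs across those resamples.

# Illustrative consistency test: bootstrap resampling of a provisional data pattern.
# The simulated data and the 95% criterion are hypothetical choices.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)                     # a modest positive relation

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

observed = corr(x, y)
boot = np.empty(2000)
for i in range(boot.size):                           # resample cases with replacement
    idx = rng.integers(0, n, size=n)
    boot[i] = corr(x[idx], y[idx])

lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"observed r = {observed:.2f}; bootstrap 95% interval = [{lower:.2f}, {upper:.2f}]")
# Treat the positive correlation as a candidate phenomenon only if it recurs
# across the resamples, i.e., the whole interval stays above zero.
print("pattern recurs across resamples:", bool(lower > 0))

Constructive replication plays the same consistency-testing role with new samples rather than with resamples of the original data.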
By contrast with reliabilism, coherentism maintains that a belief is justified in
virtue of its coherence with other accepted beliefs. One prominent version of coher-
entism, explanationism, asserts that coherence is determined by explanatory rela-
tions and that all justification aims at maximizing the explanatory coherence of
belief systems (Lycan, 1988). However, the claim that all justification is concerned
with explanatory coherence is too extreme, as the existence of reliabilist justification
makes clear.
It should be emphasized that, although reliabilism and explanationism are differ-
ent and are often presented as rivals, they do not have to be seen as competing theories
of justification. ATOM adopts a broadly coherentist perspective on justification that
accommodates both reliabilism and explanationism and allows for their coexistence,
complementarity, and interaction. It encourages researchers first to seek and accept
knowledge claims about empirical phenomena based solely on reliabilist grounds,
and then to proceed to construct theories that will explain coherently those claims
about phenomena. Thus, when using TEC, one is concerned with delivering judg-
ments of explanatory coherence, but TEC’s principle of data priority presupposes
that the relevant empirical generalizations have been justified on reliabilist grounds.
Further, the acceptability of the claims about phenomena will be enhanced when
they coherently enter into the explanatory relations that contain them. Alternatively,
the explanatory coherence, specifically the explanatory breadth, of a theory will be
reduced as a consequence of rejecting a claim about a relevant phenomenon that was
initially accepted on insufficient reliabilist grounds.

11 The use of reliability as a mode of justification, or validation, differs from the normal psychometric
practice in which reliability and validity are presented as contrasts. However, the use of consistency
tests to validate knowledge claims on reliabilist grounds is widespread in science.

3.7 Discussion and Conclusion

This concluding section of the chapter briefly comments on the nature and limits of
ATOM and its implications for research practice. In doing so, it also makes some
remarks about the nature of science.

3.7.1 Phenomena Detection and Theory Construction Again

Recognition of the fundamental importance of the distinction between empirical phe-
nomena and explanatory theory suggests the need to differentiate between empirical
progress and theoretical progress in science. The successful detection of a phe-
nomenon is a major achievement in its own right, and it is a significant indicator of
empirical progress in science. (The importance of phenomena detection in science is
underscored by the fact that more Nobel prizes are awarded for the discovery of phe-
nomena than for the construction of explanatory theories.) From the perspective of
ATOM, theoretical progress is to be understood in terms of the goodness of explana-
tory theories as determined by TEC. Arguably, behavioural science methodology has
placed a heavier professional emphasis on the description of empirical regularities
than on the construction of explanatory theories. However, ATOM takes phenomena
detection and theory construction to be of equal worth.
The characterization of phenomena given earlier in the chapter helps correct
two widely held misunderstandings of science. First, it makes clear that taking the
distinction between observation and theory to be of fundamental methodological
importance prevents one from being able to conceptualize properly the process of
phenomena detection. This holds whether one subscribes to a hard-and-fast
observation–theory distinction or accepts a relative observation–theory
distinction and the ambiguous idea of theory ladenness that goes with it. To correctly
understand the process of phenomena detection, one needs to replace the observa-
tion–theory distinction with the threefold distinction between data, phenomena, and
theory.
This suggested replacement also serves to combat the tendency to overemphasize
the importance of observation as a source of evidence in science. For it is phenom-
ena, not data, that typically serve as evidence for theories. Moreover, although data
serve as evidence for phenomena, their perceptual qualities in this role are of sec-
ondary importance. Methodologically speaking, what matters in science is not the
phenomenal or experiential qualities of perception but whether or not perception is
a reliable process (Woodward, 1989). It is for this reason that reliable nonhuman
measurement techniques are just as important as human perceptual techniques in
detecting phenomena.
Generally speaking, the implications of ATOM’s account of phenomena detec-
tion for research practice in the behavioural sciences are consistent with a number
of recent proposals for improving researchers’ data analytic practices. In particular,
the model of data analysis outlined in this chapter reinforces the importance now
accorded exploratory data analysis in psychology (Behrens & Yu, 2003). In addi-
tion, it highlights the need to recognize that computer-intensive resampling methods
are a valuable source of pattern confirmation—a point oddly ignored by the Amer-
ican Psychological Association’s Task Force on Statistical Inference (Wilkinson &
The Task Force on Statistical Inference, 1999). Of interest, at a general level, the
acknowledgment of phenomena detection as a distinctive research undertaking in
its own right enables behavioural scientists to endorse the inductivism of radical
behaviourist methodology but eschew its instrumentalist prescriptions for theoriz-
ing and postulate latent causal mechanisms instead. This constructive part of radical
behaviourism is an account of phenomena detection that can be found in the biologi-
cal sciences (Sidman, 1960). As such, it deserves a wider adoption in the behavioural
sciences than is currently the case.
ATOM’s account of theory construction is at variance with the way many
behavioural scientists understand theory construction in science. Most behavioural
scientists probably use, or at least endorse, a view of theory construction that is
strongly shaped by the guess-and-test strategy of the hypothetico-deductive method.
In contrast with this prevailing conception of scientific method, ATOM asserts that
(a) theory generation can be a logical, or rational, affair, where the logic takes the
form of abductive reasoning; (b) theory development is an important part of theory
construction—an undertaking that is stifled by an insistence on immediate testing;
and (c) empirical adequacy, understood as predictive success, is not by itself an ade-
quate measure of theory goodness, there being a need to use additional virtues to do
with explanatory worth.
ATOM’s three phases of theory construction have varying degrees of application
in the behavioural sciences. Codified methods that generate theories through existen-
tial abduction are rare. The use of exploratory factor analysis to postulate common
causes is a striking exception, although the explicit use of this method as an abductive
generator of elementary plausible theory is rarely acknowledged. Grounded theory
method (e.g., Strauss, 1987), which is increasingly used in behavioural research,
can be regarded as an abductive method that helps generate theories that explain the
qualitative data patterns from which they are derived. However, it does not confine
itself to existential abduction, and it imposes weaker constraints on the abductive
reasoning permitted by the researcher than does exploratory factor analysis. The
earlier suggestion that as human beings, we have an evolved cognitive ability to
abductively generate hypotheses leads to the plausible suggestion that scientists fre-
quently reason to explanatory hypotheses without using codified methods to do so.
Two prominent examples in the behavioural sciences are Chomsky’s (1972) pub-
licly acknowledged abductive inference to his innateness hypothesis about universal
grammar, and Howard Gardner’s (Walters & Gardner, 1986) self-described use of
“subjective factor analysis” to postulate his multiple intelligences. Also, it is likely
that behavioural scientists use some of the many heuristics for creative hypothesis
generation listed by McGuire (1997) in order to facilitate their abductive reasoning
to hypotheses.
The strategy of analogical modelling is sometimes used in the behavioural sci-
ences to develop theories. This is not surprising, given that many of the proposed
causal mechanisms in these sciences are theoretical entities whose natures can only
be got at indirectly using such a modelling strategy. However, there is little evi-
dence that the behavioural sciences explicitly incorporate such a strategy into their
methodology and their science education practices. Given the importance of such a
strategy for the expansion of explanatory theories, methodologists in the behavioural
sciences need to promote analogical modelling as vigorously as they have promoted
structural equation modelling. Structural equation modelling provides knowledge
of causal networks. As such, it does not so much encourage the development of
detailed knowledge of the nature of the latent variables as it specifies the range and
order of causal relations into which such variables enter. By contrast, analogical
modelling seeks to provide more detailed knowledge of the causal mechanisms by
enumerating their components and activities. These different forms of knowledge
are complementary.
Inference to the best explanation is an important approach to theory appraisal
that has not been explicitly tried in the behavioural sciences. Instead, hypothetico-
deductive testing for the predictive success of hypotheses and theories holds sway.
TEC, which is the only codified method of inference to the best explanation, can be
widely used in those domains where there are two or more reasonably well-developed
theories that provide candidate explanations of relevant phenomena. By acknowledg-
ing the centrality of explanation in science, one can use TEC to appraise theories
with respect to their explanatory goodness. It is to be hoped that behavioural science
education will soon add TEC to its concern with cutting-edge research methods.

3.7.2 The Scope of ATOM

Although ATOM is a broad theory of scientific method, it should not be under-
stood as a fully comprehensive account. ATOM is a singular account of method that
is appropriate for the detection of empirical phenomena and the subsequent con-
struction of postulational theories, where those theories purportedly refer to hidden
causal mechanisms, and where their causes are initially given a rudimentary, dis-
positional characterization. However, in dealing with explanatory theories in which
the causal mechanisms referred to are more directly accessible than theoretical enti-
ties, researchers do not have to use a strategy of analogical modelling in order to
provide a more informative characterization of their theories. The use of functional
brain imaging techniques, such as functional magnetic resonance imaging, in order
to map neuronal activity in the brain is a case in point. Further, although the eval-
uation of theories in terms of explanatory criteria deserves a heavy weighting in
science, inference to the best explanation will not always be an appropriate, or a
sufficient, resource for evaluating theories. For example, although predictive suc-
cess has probably been overemphasized in both scientific methodology and practice
(Brush, 1995), it nevertheless remains an important criterion of a theory’s worth. It
may, therefore, be sought in a modified hypothetico-deductive strategy that corrects
for the confirmational inadequacies of its simple form.
Like all theories of scientific method, ATOM is normative in the sense that it
advises researchers what to do in a limited number of research contexts. However,
it is important to stress that the normative force of ATOM is conditional in nature.
More precisely, its recommendations are subjunctive conditionals that take the form
“If you want to reach goal X, then use strategy Y.” The justification for pursuing goal
X rests with the researcher; it is not to be found in ATOM. Laudan (1996) argued
in detail for the conditional nature of methodological recommendations, and Proctor
and Capaldi (2001) recently commended his view of methodology to psychologists.

3.8 Conclusion

ATOM aspires to be a coherent theory that brings together a number of different
research methods and strategies that are normally considered separately. The account
of phenomena detection offered is a systematic reconstruction of a practice that is
common in science but that is seldom presented as a whole in methodological writ-
ings. The abductive depiction of theory construction endeavors to make coordinated
sense of the way in which science sometimes comes to obtain knowledge about the
causal mechanisms that figure centrally in the understanding of the phenomena that
they produce. With rare exceptions, the abductive generation of elementary plausible
theory, the strategy of analogical modelling, and the method of inference to the best
explanation are all yet to receive explicit consideration in psychology and the other
behavioural sciences—but see Rozeboom (1999), Harré and Secord (1972), and
Eflin and Kite (1996), respectively. ATOM serves to combine these methodological
resources in a broad theory of scientific method.
The question of whether ATOM is a genuinely coherent theory of method remains
to be answered. Although it is a fairly comprehensive account of method, and
although it seems to capture a natural order of scientific inquiry, further development
is required before its cohesiveness can be properly judged. My hope is that, upon
fuller explication, ATOM might be shown in a reflexive way to be an explanatorily
coherent theory of scientific method.
Since the time that this chapter was first written (Haig, 2005), a considerably
expanded book-length treatment of ATOM has been developed (Haig, 2014).

References

Abrantes, P. (1999). Analogical reasoning and modeling in the sciences. Foundations of Science, 4,
237–270.
Behrens, J. T., & Yu, C.-H. (2003). Exploratory data analysis. In J. A. Schinka & W. F. Velicer
(Eds.), Handbook of psychology (Vol. 2, pp. 33–64). New York, NY: Wiley.
Bogen, J. (2005). Regularities and causality: Generalizations and causal explanations. Studies in
the History and Philosophy of Biological and Biomedical Sciences, 36, 397–420.
Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97, 303–352.
Brush, S. G. (1995). Dynamics of theory change: The role of predictions. Philosophy of Science
Association, 1994(2), 133–145.
Campbell, N. R. (1920). Physics: The elements. Cambridge, England: Cambridge University Press.
Carruthers, P. (2002). The roots of scientific reasoning: Infancy, modularity, and the art of tracking. In
P. Carruthers, S. Stich, & M. Siegal (Eds.), The cognitive basis of science (pp. 73–95). Cambridge,
England: Cambridge University Press.
Cartwright, N. (1983). How the laws of physics lie. Oxford, England: Oxford University Press.
Cattell, R. B. (1966). Psychological theory and scientific method. In R. B. Cattell (Ed.), Handbook
of multivariate experimental psychology (pp. 1–18). Chicago, IL: Rand McNally.
Chalmers, A. F. (1999). What is this thing called science? (3rd ed.). St. Lucia, Australia: University
of Queensland Press.
Chatfield, C. (1985). The initial examination of data. Journal of the Royal Statistical Society, Series
A, 148, 214–254.
Chomsky, N. (1972). Language and mind. New York, NY: Harcourt, Brace, Jovanovich.
Clark, J. M., & Paivio, A. (1989). Observational and theoretical terms in psychology: A cognitive
perspective on scientific language. American Psychologist, 44, 500–512.
Day, T., & Kincaid, H. (1994). Putting inference to the best explanation in its place. Synthese, 98,
271–295.
Duhem, P. (1954). The aim and structure of physical theory (2nd ed., P. P. Wiener, Trans.). Princeton,
NJ: Princeton University Press. (Original work published 1914).
Durrant, R., & Haig, B. D. (2001). How to pursue the adaptationist program in psychology. Philo-
sophical Psychology, 14, 357–380.
Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological
research. Psychological Review, 70, 193–242.
Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. New York, NY: Chapman &
Hall.
Ehrenberg, A. S. C., & Bound, J. A. (1993). Predictability and prediction. Journal of the Royal
Statistical Society, Series A, 156, 167–206.
Eflin, J. T., & Kite, M. E. (1996). Teaching scientific reasoning through attribution. Teaching of
Psychology, 23, 87–91.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of
exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.
Fidell, L. S., & Tabachnick, B. G. (2003). Preparatory data analysis. In J. A. Schinka & W. F. Velicer
(Eds.), Handbook of psychology (Vol. 2, pp. 115–121). New York, NY: Wiley.
Franklin, A. (1990). Experiment, right or wrong. Cambridge, England: Cambridge University Press.
Gage, N. L. (1996). Confronting counsels of despair for the behavioral sciences. Educational
Researcher, 25, 5–15, 22.
Giere, R. N. (1983). Testing theoretical hypotheses. In J. Earman (Ed.), Testing scientific theories
(pp. 269–298). Minneapolis, MN: University of Minnesota Press.
Giere, R. N. (1988). Explaining science: A cognitive approach. Chicago, IL: University of Chicago
Press.
Glymour, C. (1980). Theory and evidence. Princeton, NJ: Princeton University Press.
Goffman, E. (1969). The presentation of self in everyday life. London, England: Penguin.
Goldman, A. I. (1986). Epistemology and cognition. Cambridge, MA: Harvard University Press.
Haig, B. D. (1987). Scientific problems and the conduct of research. Educational Philosophy and
Theory, 19, 22–32.
Haig, B. D. (2005). Exploratory factor analysis, theory generation, and scientific method. Multi-
variate Behavioral Research, 40, 303–329.
Haig, B. D. (2014). Investigating the psychological world: Scientific method in the behavioral
sciences. Cambridge, MA: MIT Press.
Harman, G. (1965). The inference to the best explanation. Philosophical Review, 74, 88–95.
Harré, R. (1970). The principles of scientific thinking. Chicago, IL: University of Chicago
Press.
Harré, R. (1976). The constructive role of models. In L. Collins (Ed.), The use of models in the
social sciences (pp. 16–43). London, England: Tavistock.
Harré, R. (1988). Where models and analogies really count. International Studies in the Philosophy
of Science, 2, 119–133.
Harré, R., & Secord, P. F. (1972). The explanation of social behavior. Oxford, England: Blackwell.
Hempel, C. (1966). Philosophy of natural science. Englewood Cliffs, NJ: Prentice Hall.
Hesse, M. B. (1966). Models and analogies in science. Notre Dame, IN: University of Notre Dame
Press.
Josephson, J. R., & Josephson, S. G. (1994). Abductive inference: Computation, philosophy, tech-
nology. New York, NY: Cambridge University Press.
Laudan, L. (1977). Progress and its problems. Berkeley, CA: University of California Press.
Laudan, L. (1981). Science and hypothesis. Dordrecht, The Netherlands: Reidel.
Laudan, L. (1996). Beyond positivism and relativism: Theory, method, and evidence. Berkeley:
University of California Press.
Lee, M. D., & Wagenmakers, E. J. (2005). Bayesian statistical inference in psychology: Comment
on Trafimow (2003). Psychological Review, 112, 662–668.
Lindsay, R. M., & Ehrenberg, A. S. C. (1993). The design of replicated studies. American Statisti-
cian, 47, 217–228.
Lipton, P. (2004). Inference to the best explanation (2nd ed.). London, England: Routledge.
Lycan, W. G. (1988). Judgement and justification. Cambridge, England: Cambridge University
Press.
Magnani, L. (2001). Abduction, reason, and science: Processes of discovery and explanation. New
York, NY: Kluwer/Plenum Press.
Magnani, L. (2009). Abductive cognition: The epistemological and eco-cognitive dimensions of
hypothetical reasoning. Berlin, Germany: Springer.
McGuire, W. J. (1997). Creative hypothesis generating in psychology: Some useful heuristics.
Annual Review of Psychology, 48, 1–30.
Mumford, S. (1998). Dispositions. Oxford, England: Oxford University Press.
Nickles, T. (1981). What is a problem that we might solve it? Synthese, 47, 85–118.
Nickles, T. (1987). Twixt method and madness. In N. J. Nersessian (Ed.), The process of science
(pp. 41–67). Dordrecht, The Netherlands: Martinus Nijhoff.
Peirce, C. S. (1931–1958). In C. Hartshorne, P. Weiss, & A. Burks (Eds.) Collected papers (Vols.
1–8). Cambridge, MA: Harvard University Press.
Popper, K. R. (1959). The logic of scientific discovery. London, England: Hutchinson.
Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift’s electric factor analysis machine.
Understanding Statistics, 2, 13–43.
Proctor, R. W., & Capaldi, E. J. (2001). Empirical evaluation and justification of methodologies in
psychological science. Psychological Bulletin, 127, 759–772.
Rorer, L. G. (1991). Some myths of science in psychology. In D. Cicchetti & W. M. Grove (Eds.),
Thinking clearly about psychology: Vol. 1. Matters of public interest (pp. 61–87). Minneapolis,
MN: University of Minnesota Press.
Ross, S. D. (1981). Learning and discovery. New York, NY: Gordon & Breach.
Rozeboom, W. W. (1972). Scientific inference: The myth and the reality. In R. S. Brown & D. J.
Brenner (Eds.), Science, psychology, and communication: Essays honoring William Stephenson
(pp. 95–118). New York, NY: Teachers College Press.
Rozeboom, W. W. (1999). Good science is abductive, not hypothetico-deductive. In L. L. Harlow,
S. A. Mulaik, & J. H. Steiger (Eds.), What if there were no significance tests? (pp. 335–391).
Hillsdale, NJ: Erlbaum.
Schmidt, F. L. (1992). What do data really mean? Research findings, meta-analysis, and cumulative
knowledge in psychology. American Psychologist, 47, 1173–1181.
Sidman, M. (1960). Tactics of scientific research. New York, NY: Basic Books.
Simon, H. A. (1977). Models of discovery. Dordrecht, the Netherlands: Reidel.
Skinner, B. F. (1984). Methods and theories in the experimental analysis of behavior. Behavioral
and Brain Sciences, 7, 511–546.
Sohn, D. (1996). Meta-analysis and science. Theory and Psychology, 6, 229–246.
Stephenson, W. W. (1961). Scientific creed—1961. Psychological Record, 11, 1–25.
Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge
University Press.
Thagard, P. (1988). Computational philosophy of science. Cambridge, MA: MIT Press.
Thagard, P. (1992). Conceptual revolutions. Princeton, NJ: Princeton University Press.
Thagard, P. (2000). Coherence in thought and action. Cambridge, MA: MIT Press.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison Wesley.
Walters, J. M., & Gardner, H. (1986). The theory of multiple intelligences: Some issues and answers.
In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence
in the everyday world (pp. 163–182). Cambridge, England: Cambridge University Press.
Whitt, L. A. (1992). Indices of theory promise. Philosophy of Science, 59, 393–472.
Wilkinson, L., & The Task Force on Statistical Inference. (1999). Statistical methods in psychology
journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Woodward, J. (1989). Data and phenomena. Synthese, 79, 393–472.
Woodward, J. (2000). Data, phenomena, and reliability. Philosophy of Science, 67(Suppl.), 163–179.
Woodward, J. (2003). Making things happen. Oxford, England: Oxford University Press.
