This chapter is concerned with scientific method in the behavioural sciences. Its
principal goal is to outline a broad theory of scientific method by making use of
selected developments in contemporary research methodology. The time now seems
right to intensify efforts to assemble knowledge of research methods into larger
units of understanding. Currently, behavioural scientists use a plethora of specific
research methods and a number of different investigative strategies when study-
ing their domains of interest. Among this diversity, the well-known inductive and
hypothetico-deductive accounts of scientific method have brought some order to our
investigative practices. The former method speaks to the discovery of empirical gen-
eralizations, whereas the latter method is used to test hypotheses and theories in
terms of their predictive success.
However, although inductive and hypothetico-deductive methods are commonly
regarded as the two main theories of scientific method (Laudan, 1981), and are in
fact sometimes regarded as the principal claimants for the title of the definitive
scientific method, they are better thought of as restrictive accounts of method that
can be used to meet specific research goals (Nickles, 1987), not as broad accounts
of method that pursue a range of research goals. In fashioning empirical
generalizations, the inductive method undoubtedly addresses an important part of
scientific inquiry. However,
it is a part only. Of equal importance is the process of theory construction. Here,
however, the hypothetico-deductive method, with its focus on theory testing, speaks
only to one, although important, part of the theory construction process (Simon,
1977).
The theory of method outlined in this chapter is a broader account of scientific
method than either the inductive or hypothetico-deductive theories of method. This
more comprehensive theory of method endeavors to describe systematically how one
can first discover empirical facts and then construct theories to explain those facts.
Although scientific inquiry is often portrayed in hypothetico-deductive fashion as
an undertaking in which theories are first constructed and facts are then gathered in
order to test those theories, this should not be thought of as its natural order. In fact,
scientific research frequently proceeds the other way around.
Before presenting the proposed theory of scientific method, the well-known inductive
and hypothetico-deductive accounts of scientific method are briefly considered. This
serves to define their proper limits as methods of science and, at the same time,
provide useful contrasts to the more comprehensive theory of method.
3.1 Two Theories of Method
In popular accounts of inductive method (e.g., Chalmers, 1999), the scientist is typ-
ically portrayed as reasoning inductively by enumeration from secure observation
statements about singular events to laws or theories in accordance with some gov-
erning principle of inductive reasoning. Sound inductive reasoning is held to create
and justify theories simultaneously, so that there is no need for subsequent empirical
testing. Some have criticized this view of method for placing excessive trust in the
powers of observation and inductive generalization, and for believing that enumer-
ative induction is all there is to scientific inference. In modern behavioural science,
the radical behaviourism of B. F. Skinner is a prominent example of a research tradi-
tion that uses an inductive conception of scientific method (Sidman, 1960; Skinner,
1984). Within this behaviourist tradition, the purpose of research is to detect empir-
ical phenomena of learning that are subsequently systematized by nonexplanatory
theories.
Although the inductive method has received considerable criticism, especially
from those who seek to promote a hypothetico-deductive conception of scientific
inquiry, it nevertheless stresses, in a broad-brush way, the scientific importance of
fashioning empirical generalizations. Shortly, it will be shown that the alternative
theory of scientific method to be presented uses the inductive method in the form of
enumerative induction, or induction by generalization, in order to detect empirical
phenomena.
For more than 150 years, hypothetico-deductivism has been the method of choice
in the natural sciences (Laudan, 1981), and it assumed hegemonic status in
20th-century psychology.
1 The term causal mechanism is ambiguous. In the broad theory of method being proposed, the
generation of theories involves explanatory inference to claims about the existence of causal entities.
It is not until the development of these theories is undertaken that the mechanisms responsible for the
production of their effects are identified and spelled out. Also, in this chapter it is assumed that the
productivity of causal mechanisms is distinct from the regularities that they explain (Bogen, 2005;
but cf. Woodward, 2003). Of course, this does not preclude the methodological use of generalizations
that describe natural regularities in order to help identify the causal mechanisms that produce them.
2 Note, however, that the strategy of analogical modeling is essential for theory development in the
abductive theory of method and that the theory of explanatory coherence does heavy-duty work
in the abductive theory of method because it is the best developed method of inference to the best
explanation currently available.
3.2 Overview of the Broad Theory
The exposition of the method begins with an account of phenomena detection and
then considers the process of constructing explanatory theories. Toward the end of
the chapter, two pairs of important methodological ideas that feature prominently
in ATOM are examined. The chapter concludes with a discussion of the nature and
limits of the method.
3.3 Phenomena Detection
Bogen and Woodward (1988; Woodward, 1989, 2000) have argued in detail that it is
claims about phenomena, not data, that theories typically seek to predict and explain
and that, in turn, it is the proper role of data to provide the observational evidence
for phenomena, not for theories. Phenomena are relatively stable, recurrent, general
features of the world that, as researchers, we seek to explain. The more striking of
them are often called effects, and they are sometimes named after their principal dis-
coverer. The so-called phenomenal laws of physics are paradigmatic cases of claims
about phenomena. By contrast, the so-called fundamental laws of physics explain the
phenomenal laws about the relevant phenomena. For example, the electron theory of
Lorentz is a fundamental law that explains Airy’s phenomenological law of Faraday’s
magneto-optical effect (Cartwright, 1983). Examples of the innumerable phenomena
claims in psychology include the matching law (the law of effect), the Flynn effect
of intergenerational gains in IQ, and recency effects in human memory.
Although phenomena commonly take the form of empirical regularities, they
comprise a varied ontological bag that includes objects, states, processes, events,
and other features that are hard to classify. Because of this variety, it is generally
more appropriate to characterize phenomena in terms of their role in relation to
explanation and prediction (Bogen & Woodward, 1988). For example, the relevant
empirical generalizations in cognitive psychology might be the objects of explana-
tions in evolutionary psychology that appeal to mechanisms of adaptation, and those
mechanisms might in turn serve as phenomena to be explained by appealing to the
mechanisms of natural selection in evolutionary biology.
Phenomena are frequently taken as the proper objects of scientific explanation
because they are stable and general. Among other things, systematic explanations
require one to show that the events to be explained result from the causal factors
appealed to in the explanation. They also serve to unify the events to be explained.
Because of their ephemeral nature, data will not admit of systematic explanations.
In order to understand the process of phenomena detection, phenomena must be
distinguished from data. Unlike phenomena, data are idiosyncratic to particular inves-
tigative contexts. Because data result from the interaction of a large number of causal
factors, they are not as stable and general as phenomena, which are produced by a
relatively small number of causal factors. Data are ephemeral and pliable, whereas
phenomena are robust and stubborn. Phenomena have a stability and repeatability
that is demonstrated through the use of different procedures that often engage dif-
ferent kinds of data. Data are recordings or reports that are perceptually accessible;
they are observable and open to public inspection. Despite the popular view to the
contrary, phenomena are not, in general, observable; they are abstractions wrought
from the relevant data, frequently as a result of a reductive process of data analysis.
As Cartwright (1983) remarked in her discussion of phenomenal and theoretical laws
in physics, “the distinction between theoretical and phenomenological has nothing
to do with what is observable and what is unobservable. Instead the terms separate
laws which are fundamental and explanatory from those that merely describe” (p. 2).
Examples of data, which serve as evidence for the aforementioned psychological
effects, are rates of operant responding (evidence for the matching law), consistent
intergenerational IQ score gains (evidence for the Flynn effect), and error rates in
psychological experiments (evidence for recency effects in short-term memory).
The methodological importance of data lies in the fact that they serve as evidence
for the phenomena under investigation. In detecting phenomena, one extracts a signal
(the phenomenon) from a sea of noise (the data). Some phenomena are rare, and
many are difficult to detect; as Woodward (1989) noted, detecting phenomena can
be like looking for a needle in a haystack. It is for this reason that, when extracting
phenomena from the data, one often engages in data exploration and reduction by
using graphical and statistical methods.
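This extraction process can be given a minimal illustration. The following sketch, which assumes NumPy and uses entirely hypothetical reaction-time data, reduces noisy observations to a stable trend, a candidate claim about a phenomenon:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 noisy reaction-time measurements (ms) shaped
# by many interacting causal factors; the underlying phenomenon is a
# stable practice effect that slopes downward across trials.
trials = np.arange(200)
phenomenon = 600 - 0.5 * trials              # the stable signal
data = phenomenon + rng.normal(0, 40, 200)   # idiosyncratic noise

# Data reduction: a simple least-squares fit extracts the general
# trend (a candidate phenomenon claim) from the noisy observations.
slope, intercept = np.polyfit(trials, data, 1)
print(f"estimated practice effect: {slope:.2f} ms per trial")
```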
In order to establish that data are reliable evidence for the existence of phenom-
ena, scientists use a variety of methodological strategies. These strategies include
controlling for confounding factors (both experimentally and statistically), empiri-
cally investigating equipment (including the calibration of instruments), engaging in
data analytic strategies of both statistical and nonstatistical kinds, and constructively
replicating study results. As can be seen in Table 3.1, these procedures are used in
the detection of phenomena, but they are not used in the construction of explanatory
theory (cf. Franklin, 1990; Woodward, 1989). The later discussion of the importance
of reliability in the process of phenomena detection helps indicate why this is so.
Given the importance of the detailed examination of data in the process of phenom-
ena detection, it is natural that the statistical analysis of data figures prominently in
that exercise. A statistically oriented, multistage account of data analysis is therefore
outlined next.
3 Behrens and Yu suggested that the inferential foundations of exploratory data analysis are to
be found in the notion of abduction. By contrast, ATOM regards exploratory data analysis as a
descriptive pattern detection process that is a precursor to the inductive generalizations involved in
phenomena detection. Abductive inference is reserved for the construction of causal explanatory
theories that are introduced to explain empirical phenomena. Behrens and Yu’s suggestion conflates
description and explanation in this regard.
Exploratory data analysis is of particular importance in the behavioural sciences,
where researchers are frequently confronted with ad hoc data sets on manifest
variables that have been acquired in convenient ways.
Close Replication. Successfully conducted exploratory analyses will suggest
potentially interesting data patterns. However, it will normally be necessary to check
on the stability of the emergent data patterns through use of confirmatory data anal-
ysis procedures. Computer-intensive resampling methods such as the bootstrap, the
jackknife, and cross-validation (Efron & Tibshirani, 1993) constitute an important
set of confirmatory procedures that are well suited to the demands of modern data
analysis. Such methods free us, as researchers, from the assumptions of orthodox
statistical theory, and permit us to gauge the reliability of chosen statistics by making
thousands, even millions, of calculations on many data points. Statistical resampling
methods like these are used to establish the consistency, or reliability, of sample
results. In doing this, they provide us with the kind of validating strategy that is
needed to achieve close replications.4
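The logic of these resampling checks can be conveyed in a brief sketch, with hypothetical data and the correlation coefficient as the chosen statistic; NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired observations, e.g., scores on two ability tests.
x = rng.normal(100, 15, 80)
y = 0.6 * x + rng.normal(0, 12, 80)

# Bootstrap: resample cases with replacement many times and recompute
# the statistic, gauging its stability without orthodox distributional
# assumptions.
n = len(x)
boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)            # resample row indices
    boot[i] = np.corrcoef(x[idx], y[idx])[0, 1]

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"observed r = {np.corrcoef(x, y)[0, 1]:.2f}, "
      f"95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```

A narrow bootstrap interval signals a reliable, closely replicable sample result; a wide one signals a pattern that may be an artifact of the particular sample.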
Now that psychology has finally begun to embrace exploratory data analysis,
one can hope for a corresponding increase in the companionate use of statistical
resampling methods in order to ascertain the validity of the data patterns initially
suggested by the use of exploratory methods.
Constructive Replication. In establishing the existence of phenomena, it is nec-
essary that science undertakes both close and constructive replications. The statistical
resampling methods just mentioned are concerned with the consistency of sample
results that help researchers achieve close, or internal, replications. By contrast, con-
structive replications are undertaken to demonstrate the extent to which results hold
across different methods, treatments, and occasions. In other words, constructive
replication is a triangulation strategy designed to ascertain the generalizability of
the results identified by successful close replication (Lindsay & Ehrenberg, 1993).
Constructive replication, in which researchers vary the salient conditions, is a time-
honored strategy for justifying claims about phenomena.
In recognition of the need to use statistical methods that are in keeping with the
practice of describing predictable phenomena, researchers should seek the generaliz-
ability of relationships rather than their statistical significance (Ehrenberg & Bound,
1993)—hence, the need to use observational and experimental studies with multiple
sets of data, observed under quite different sets of conditions. The recommended task
here is not to figure out which model best fits a single set of data but to ascertain whether
the model holds across different data sets. Seeking reproducible results through con-
structive replications, then, requires data analytic strategies that are designed to detect
significant sameness rather than significant difference.
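What the search for significant sameness might look like in practice is sketched below, with three wholly hypothetical data sets standing in for studies conducted under different conditions; NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical replication sets: the same relationship observed under
# different conditions (different labs, samples, procedures).
def make_study(n, noise):
    x = rng.uniform(0, 10, n)
    return x, 2.0 * x + 5 + rng.normal(0, noise, n)

studies = [make_study(60, 3), make_study(120, 5), make_study(45, 4)]

# Constructive replication asks whether the fitted slope is much the
# same across data sets, not whether any one fit is "significant".
slopes = [np.polyfit(x, y, 1)[0] for x, y in studies]
print("slopes per study:", np.round(slopes, 2))
print("range of slopes:", round(max(slopes) - min(slopes), 2))
```

If the fitted slopes agree closely across the studies, the relationship generalizes; large disagreement signals that the candidate phenomenon claim does not travel beyond its original conditions.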
The four-stage model of data analysis just outlined assists in the detection of phenomena
by attending in turn to data quality, pattern suggestion, pattern confirmation,
and generalization. In effect, this process is one of enumerative induction, in which
one generalizes from patterns established in particular data sets to claims about the
underlying empirical phenomena.
3.4 Theory Construction
Detecting empirical phenomena is a major goal of scientific research, and their suc-
cessful detection constitutes an important type of scientific discovery in its own right.
However, once detected, phenomena serve the important function of prompting the
search for their own understanding. This understanding is commonly met in science
by constructing relevant explanatory theories.
For inductivists, inductively grounded conclusions about phenomena are of
paramount importance. However, although inductivists often subsequently construct
theories, their theories do not provide explanations of phenomena that appeal to
causal mechanisms. Instead, their theories function as tools or instruments concerned
with the description, economical ordering, and prediction of empirical relationships.
For hypothetico-deductivists, theories are said to be generated amethodologically
through free use of the imagination (Hempel, 1966; Popper, 1959). Theories obtained
in this manner are often regarded as explanatory in nature, but their worth is princi-
pally judged in terms of their predictive success, rather than their ability to explain
empirical phenomena.
ATOM, by contrast, maintains that theory construction is neither inductive nor
amethodological. For it, theory construction comprises three methodological phases:
theory generation, theory development, and theory appraisal. These phases do not
occur in a strictly temporal order, for although theory generation precedes theory
development, theory appraisal begins with theory generation, continues with theory
development, and extends to the comparative appraisal of well-developed theories.
Further, ATOM’s characterization of theory construction is abductive through and
through: Theory generation, theory development, and theory appraisal are all por-
trayed as abductive, or explanatory, undertakings, although the form of abduction
is different in each case. The account of theory construction that follows articulates
the abductive character of each of the three phases.
This example serves to illustrate the point that the method of exploratory factor
analysis proper should be taken to include the factor analyst’s substantive interpre-
tation of the statistical factors. It is important to realize that the factor analyst has
5 Some take exploratory factor analysis to be a data analytic method only. My principal reason for
assigning a theory generation role to exploratory factor analysis is based on the belief that factors
are best regarded as latent common causes and that inference to such causes is abductive in nature
(Haig, 2005).
6 The term entity is used as a catch-all ontological term that covers a miscellany of properties that
includes states, processes, and events. Although existential abductions in exploratory factor analysis
are to properties expressed as the values of variables, not all existential abductions need take this
form.
7 The positive manifold is a term that is sometimes used to refer to the striking, and well-established,
fact that almost all different tests of ability correlate positively with one another to a significant
degree. Despite its historical link to Spearman’s theory of general intelligence, the positive manifold
can be taken as evidence for the existence of two or more factors.
to resort to his or her own abductive powers when reasoning from correlational
data patterns to underlying common causes. Note that the schema for abductive
inference, and its application to the generation of Spearman’s hypothesis of g, are
concerned with the form of the arguments involved, not with the actual generation
of the explanatory hypotheses. In each case, the explanatory hypothesis is given in
the second premise of the argument. An account of the genesis of the explanatory
hypothesis must, therefore, be furnished by some other means. It is plausible to
suggest that reasoning to explanatory hypotheses trades on human beings’ evolved
cognitive ability to abductively generate such hypotheses. Peirce (1931–1958) him-
self maintained that the human ability to engage readily in abductive reasoning was
founded on a guessing instinct that has its origins in evolution. More suggestively,
Carruthers (2002) maintained that our ability, as humans, to engage in explanatory
inference is almost certainly largely innate, and he speculated that it may be an
adaptation selected for because of its crucial role in the fitness-enhancing activities
of our ancestors such as hunting and tracking. Whatever its origin, an informative
methodological characterization of the abductive nature of factor analytic inference
must appeal to the scientist’s own psychological resources as well as those of logic.
Exploratory factor analysis, then, can usefully function as a submethod of ATOM
by being located in that theory’s context of theory generation. Although it exemplifies
well the character of existential abduction, exploratory factor analysis is clearly
not an all-purpose method for abductively generating explanatory hypotheses and
theories. With its focus on common factors, it can properly serve as a generator of
elementary theories only in those multivariate domains where there are common
causal structures.
Understood in the context of theory generation, methods of existential abduction
like exploratory factor analysis should not be expected to achieve highly developed
and well-validated scientific theories. At best, they deliver rudimentary theories that
have initial plausibility. It is important to realize that these abductive methods enable
us to justify the initial plausibility of the theories they spawn. The very process of
the abductive generation of theories has a bearing on the first determinations of their
worth, in that we appeal to the soundness of the abductive arguments used in the
introduction of theories in order to evaluate their early epistemic promise (Whitt,
1992).
Relatedly, the nascent theories bequeathed us by methods like exploratory factor
analysis postulate the existence of hidden causal mechanisms, but they do not pro-
vide an informative characterization of their nature. Such theories have the status
of dispositional theories in that they provide us with oblique characterizations of
the properties we attribute to things by way of their presumed effects under spec-
ified conditions (Mumford, 1998). A move beyond the rudimentary nature of their
dispositional characterization requires subsequent elaboration. It is to a strategy for
developing such theories that I now turn.
8 More precisely, iconic models are constructed as representations of reality, real or imagined. In
ATOM they stand in for the hypothesized causal mechanisms. Although representations, iconic
models are themselves things, structures, or processes that correspond in some way with things,
structures, or processes that are the objects of modelling. They are, therefore, the sorts of things
sentences can be about (Harré, 1976).
Analogical reasoning is important in science and clearly lies at the inferential heart
of analogical modelling. However, as noted above, because the theories fashioned
by ATOM are explanatory theories, the analogical models involved in theory devel-
opment will involve explanatory analogical reasoning, that is, analogical abduction.
The reasoning involved in analogical abduction can be simply stated in the form of
a general argument schema as follows:
Hypothesis H* about property Q was correct in situation S1.
Situation S1 is like the situation S2 in relevant respects.
Therefore, an analogue of H* might be appropriate in situation S2.
Darwin’s theory or model of natural selection, and the other aforementioned ana-
logical models, can plausibly be construed to be based on analogical abduction. The
general argument for analogical abduction just given can be rewritten in simplified
form for Darwin’s case as follows:
The hypothesis of evolution by artificial selection was correct in cases of selective domestic
breeding.
Cases of selective domestic breeding are like cases of the natural evolution of species with
respect to the selection process.
Therefore, by analogy with the hypothesis of artificial selection, the hypothesis of natural
selection might be appropriate in situations where variants are not deliberately selected for.
Theories may be appraised in terms of their predictive success, their probability in
the light of the evidence, or their explanatory power; on the last of these approaches,
a theory is accepted when it is judged to provide a better explanation of the evidence
than its rivals do. Of these three
approaches, the hypothetico-deductive method is by far the most widely used in psy-
chology (Cattell, 1966; Rorer, 1991; Rozeboom, 1999). Despite some urgings (e.g.,
Edwards, Lindman, & Savage, 1963; Lee & Wagenmakers, 2005; Rorer, 1991),
psychologists have been reluctant to use Bayesian statistical methods to test their
research hypotheses, preferring instead to perpetuate the orthodoxy of classical sta-
tistical significance testing within a hypothetico-deductive framework. Despite the
fact that inference to the best explanation is frequently used in science, and exten-
sively discussed in the philosophy of science, it is virtually unheard of, let alone
used, to appraise theories in psychology.
True to its name, ATOM adopts an abductive perspective on theory evaluation
by using a method of inference to the best explanation. It is shown shortly that, in
contrast to the hypothetico-deductive method, ATOM adopts an approach to inference
to the best explanation that measures empirical adequacy in terms of explanatory
breadth, not predictive success, and, in contrast with Bayesianism, it takes theory
evaluation to be an exercise that focuses directly on explanation, not a statistical
undertaking in which one assigns probabilities to theories. The basic justification for
using inference to the best explanation when evaluating explanatory theories is that
it is the only method researchers have that explicitly assesses such theories in terms
of the scientific goal of explanatory worth.
In considering theory evaluation in ATOM, the idea of inference to the best expla-
nation is introduced. Then, a well-developed method of inference to the best expla-
nation is presented and discussed. Thereafter, inference to the best explanation is
defended as an important perspective on theory evaluation.
Inference to the Best Explanation. In accordance with its name, inference to the
best explanation is founded on the belief that much of what we know about the world
is based on considerations of explanatory worth. Being concerned with explanatory
reasoning, inference to the best explanation is a form of abduction. As mentioned
earlier, it involves accepting a theory when it is judged to provide a better explanation
of the evidence than its rivals do. In science, inference to the best explanation is often
used to adjudicate between well-developed, competing theories (Thagard, 1988).
A number of writers have elucidated the notion of inference to the best explanation
(e.g., Day & Kincaid, 1994; Lipton, 2004; Thagard, 1988). The most prominent
account is due to Lipton, who suggested that inference to the best explanation is
not an inference to the “likeliest” explanation, but to the “loveliest” explanation,
where the loveliest explanation comprises the various explanatory virtues such as
theoretical elegance, simplicity, and coherence; it is the explanatory virtues that
provide the guide to inference about causes in science. However, the most developed
formulation of inference to the best explanation as a method of theory evaluation
was provided by Thagard (1992). Thagard’s formulation of inference to the best
explanation identifies, and systematically uses, a number of evaluative criteria in a
way that has been shown to produce reliable judgments of best explanation in science.
For this reason it is adopted as the method of choice for theory evaluation in ATOM.
The Theory of Explanatory Coherence. Thagard’s (1992) account of inference
to the best explanation is known as the theory of explanatory coherence (TEC).
The theory maintains that a hypothesis is to be accepted when it coheres better with
the evidence and with other accepted propositions than its rivals do, and it specifies
the relevant coherence relations in a set of principles (Thagard, 1992):
1. Symmetry. Explanatory coherence is a symmetric relation, unlike, say, conditional
probability.
2. Explanation. (a) A hypothesis coheres with what it explains, which can be either
evidence or another hypothesis. (b) Hypotheses that together explain some other
proposition cohere with each other. (c) The more hypotheses it takes to explain
something, the lower the degree of coherence.
3. Analogy. Similar hypotheses that explain similar pieces of evidence cohere.
4. Data Priority. Propositions that describe the results of observations have a degree
of acceptability on their own.
5. Contradiction. Contradictory propositions are incoherent with each other.
6. Competition. If p and q both explain a proposition, and if p and q are not explana-
torily connected, then p and q are incoherent with each other (p and q are explana-
torily connected if one explains the other or if together they explain something).
7. Acceptance. The acceptability of a proposition in a system of propositions
depends on its coherence with them. (p. 43)
Limitations of space preclude a discussion of these principles; however, the fol-
lowing points should be noted. The principle of explanation is the most important
principle in determining explanatory coherence because it establishes most of the
coherence relations. The principle of analogy is the same as the criterion of analogy,
where the analogy must be explanatory in nature. With the principle of data pri-
ority, the reliability of claims about observations and generalizations, or empirical
phenomena, will often be sufficient grounds for their acceptance. The principle of
competition allows noncontradictory theories to compete with each other.9 Finally,
with the principle of acceptance, the overall coherence of a theory is obtained by
considering the pairwise coherence relations through use of Principles 1–6.
The principles of TEC combine in a computer program, ECHO (Explanatory
Coherence by Harmany10 Optimization), to provide judgments of the explanatory
coherence of competing theories. This computer program is connectionist in nature
and uses parallel constraint satisfaction to accept and reject theories based on their
explanatory coherence.
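The flavour of this parallel constraint satisfaction can be conveyed by a toy sketch. It is not the ECHO program itself: the units, weights, and update rule below are simplified stand-ins chosen for illustration. Propositions are network units, coherence relations are excitatory links, incoherence relations are inhibitory links, and evidence units receive the independent excitation that data priority demands; NumPy is assumed:

```python
import numpy as np

# Toy constraint-satisfaction network; a simplified stand-in, not ECHO.
units = ["e1", "e2", "h1", "h2"]   # two evidence units, two rival hypotheses
W = np.zeros((4, 4))

def link(a, b, w):
    """Create a symmetric excitatory (w > 0) or inhibitory (w < 0) link."""
    i, j = units.index(a), units.index(b)
    W[i, j] = W[j, i] = w

link("h1", "e1", 0.1)    # h1 explains e1 (explanation: excitatory)
link("h1", "e2", 0.1)    # h1 also explains e2 (greater explanatory breadth)
link("h2", "e1", 0.1)    # h2 explains e1 only
link("h1", "h2", -0.2)   # competing hypotheses: inhibitory

a = np.zeros(4)
for _ in range(200):                        # update until activations settle
    net = W @ a
    net[:2] += 0.05                         # data priority: evidence units get
                                            # some acceptability of their own
    a = np.clip(0.95 * a + net, -1.0, 1.0)  # decayed activation plus net input

for unit, act in zip(units, a):
    print(f"{unit}: {act:+.2f}")
```

Here the hypothesis with the greater explanatory breadth settles at a high activation and is accepted, while its rival is driven to a low activation and rejected, mirroring the comparative character of theory evaluation in TEC.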
The theory of explanatory coherence has a number of virtues that make it an
attractive theory of inference to the best explanation: It satisfies the demand for
justification by appeal to explanatory considerations rather than predictive success;
it takes theory evaluation to be a comparative matter; it can be readily implemented
by, and indeed is instantiated in, the computer program, ECHO, while still leaving
an important place for judgment by the researcher; and it effectively accounts for
a number of important episodes of theory assessment in the history of science. In
short, TEC and ECHO combine in a successful method of explanatory coherence
that enables researchers to make judgments of the best of competing explanatory
theories. Thagard (1992) is the definitive source for a detailed explication of the
theory of explanatory coherence.
Psychology is replete with competing theories that might usefully be evaluated
with respect to their explanatory coherence. Durrant and Haig (2001) hinted at how
two competing theories of language evolution might be judged in terms of their
explanatory coherence. However, examples of the full use of TEC to appraise the
best of competing explanatory theories in the behavioural sciences have yet to be
provided.
3.5 Research Problems
A number of authors (e.g., Haig, 1987; Laudan, 1977; Nickles, 1981) have stressed
the value of viewing scientific inquiry as a problem-solving endeavor. It will be
recalled that the overview of ATOM indicated the method’s commitment to the
notion of a research problem. This acknowledgment of the importance of research
problems for inquiry contrasts with the orthodox inductive and hypothetico-deductive
accounts of method, neither of which speaks of problem solving as an essential
part of its characterization. In an effort to depict scientific inquiry as a problem-
solving endeavor, ATOM uses a constraint-inclusion view of research problems
(Haig, 1987; Nickles, 1981). The idea of problems as constraints has been taken from
the problem-solving literature in cognitive psychology (Simon, 1977) and groomed
for a methodological role. Briefly, the constraint-inclusion theory depicts a research
problem as comprising all the constraints on the solution to that problem, along
with the demand that the solution be found. With the constraint-inclusion theory, the
constraints do not lie outside the problem but are constitutive of the problem itself;
they actually serve to characterize the problem and give it structure. The explicit
demand that the solution be found is prompted by a consideration of the aims of the
research, the pursuit of which is intended to fill the outstanding gaps in the problem’s
structure.
Note that all relevant constraints are included in a problem’s formulation. This is
because each constraint contributes to a characterization of the problem by helping to
rule out some solutions as inadmissible. However, at any one time, only a manageable
subset of the problem’s constraints will be relevant to the specific research task at
hand. Also, by including all the constraints in the problem’s articulation, the problem
enables the researcher to direct inquiry effectively by pointing the way to its own
solution. In a very real sense, stating the problem is half the solution!
The constraint-inclusion account of problems stresses the fact that in good scien-
tific research, problems typically evolve from an ill-structured state and eventually
attain a degree of well-formedness such that their solution becomes possible. From
the constraint-inclusion perspective, a problem will be ill-structured to the extent that
it lacks the constraints required for its solution. Because the most important research
problems will be decidedly ill-structured, we can say of scientific inquiry that its
basic purpose is to better structure our research problems by building in the various
required constraints as our research proceeds. It should be emphasized that the prob-
lems dimension of ATOM is not a temporal phase to be dealt with by the researcher
before moving on to other phases such as observing and hypothesizing. Instead,
the researcher deals with scientific problems all the time; problems are generated,
selected for consideration, developed, and modified in the course of inquiry.
Across the various research phases of ATOM there will be numerous problems
of varying degrees of specificity to articulate and solve. For example, the success-
ful detection of an empirical phenomenon produces an important new constraint on
the subsequent explanatory efforts devised to understand that phenomenon; until the
relevant phenomenon, or phenomena, are detected, one will not really know what
the explanatory problem is. Of course, constraints abound in theory construction.
For example, constraints that regulate the abductive generation of new theories will
include methodological guides (e.g., give preference to theories that are simpler, and
that have greater explanatory breadth), aim-oriented guides (e.g., theories must be
of an explanatory kind that appeals to latent causal mechanisms), and metaphysical
principles (e.g., social psychological theories must acknowledge humankind’s essen-
tial rule-governed nature). The importance of research problems, viewed as sets of
constraints, is that they function as the “range riders” of inquiry that provide ATOM
with the operative force to guide inquiry. The constraints themselves comprise rel-
evant substantive knowledge as well as heuristics, rules, and principles. Thus, the
constraint inclusion account of problems serves as a vehicle for bringing relevant
background knowledge to bear on the various research tasks subsumed by ATOM.
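A schematic sketch of this constraint-inclusion idea, using hypothetical constraints of the three kinds just listed, shows how a problem's constraints rule out inadmissible solutions and thereby point toward its own solution:

```python
# Schematic sketch of the constraint-inclusion view: a research problem
# is represented by its constraints, each of which rules out candidate
# solutions; the better structured the problem (the more constraints it
# includes), the more directly it points at its own solution.
# The candidate theories and constraints below are wholly hypothetical.
candidate_theories = [
    {"name": "T1", "explanatory": True,  "latent_causes": True,  "breadth": 3},
    {"name": "T2", "explanatory": True,  "latent_causes": False, "breadth": 4},
    {"name": "T3", "explanatory": False, "latent_causes": True,  "breadth": 5},
]

problem_constraints = [
    lambda t: t["explanatory"],       # aim-oriented guide
    lambda t: t["latent_causes"],     # metaphysical principle
    lambda t: t["breadth"] >= 2,      # methodological guide
]

# A candidate solution is admissible only if it violates no constraint.
admissible = [t["name"] for t in candidate_theories
              if all(c(t) for c in problem_constraints)]
print("admissible solutions:", admissible)   # ['T1']
```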
3.6 ATOM and Scientific Methodology
Before concluding the chapter, I want to identify and briefly discuss two important
methodological ideas that are part of the deep structure of ATOM. These ideas are
presented in two contrasts: (a) generative and consequentialist methodology and (b)
reliabilist and coherentist justification.
Modern scientific methodology promotes two different research strategies that can
lead to justified knowledge claims. These are known as consequentialist and genera-
tive strategies (Nickles, 1987). Consequentialist strategies justify knowledge claims
by focusing on their consequences. By contrast, generative strategies justify knowl-
edge claims in terms of the processes that produce them. Although consequentialist
strategies are used and promoted more widely in contemporary science, both types
of strategy are required in an adequate conception of research methodology. Two
important features of ATOM are that its methodology promotes both generative and
consequentialist research strategies in the detection of phenomena, and that it
promotes generative research strategies in the construction of explanatory theories.
Consequentialist reasoning receives a heavy emphasis in behavioural science
research through use of the hypothetico-deductive method, and of the null hypothesis
significance testing and structural equation modelling conducted within it.
11 The use of reliability as a mode of justification, or validation, differs from the normal psychometric
practice in which reliability and validity are presented as contrasts. However, the use of consistency
tests to validate knowledge claims on reliabilist grounds is widespread in science.
3.7 The Nature and Limits of ATOM
This concluding section of the chapter briefly comments on the nature and limits of
ATOM and its implications for research practice. In doing so, it also makes some
remarks about the nature of science.
The earlier suggestion that, as human beings, we have an evolved cognitive ability
abductively generate hypotheses leads to the plausible suggestion that scientists fre-
quently reason to explanatory hypotheses without using codified methods to do so.
Two prominent examples in the behavioural sciences are Chomsky’s (1972) pub-
licly acknowledged abductive inference to his innateness hypothesis about universal
grammar, and Howard Gardner’s (Walters & Gardner, 1986) self-described use of
“subjective factor analysis” to postulate his multiple intelligences. Also, it is likely
that behavioural scientists use some of the many heuristics for creative hypothesis
generation listed by McGuire (1997) in order to facilitate their abductive reasoning
to hypotheses.
The strategy of analogical modelling is sometimes used in the behavioural sci-
ences to develop theories. This is not surprising, given that many of the proposed
causal mechanisms in these sciences are theoretical entities whose natures can only
be got at indirectly using such a modelling strategy. However, there is little evi-
dence that the behavioural sciences explicitly incorporate such a strategy into their
methodology and their science education practices. Given the importance of such a
strategy for the expansion of explanatory theories, methodologists in the behavioural
sciences need to promote analogical modelling as vigorously as they have promoted
structural equation modelling. Structural equation modelling provides knowledge
of causal networks. As such, it does not so much encourage the development of
detailed knowledge of the nature of the latent variables as it specifies the range and
order of causal relations into which such variables enter. By contrast, analogical
modelling seeks to provide more detailed knowledge of the causal mechanisms by
enumerating their components and activities. These different forms of knowledge
are complementary.
Inference to the best explanation is an important approach to theory appraisal
that has not been explicitly tried in the behavioural sciences. Instead, hypothetico-
deductive testing for the predictive success of hypotheses and theories holds sway.
TEC, which is the only codified method of inference to the best explanation, can be
widely used in those domains where there are two or more reasonably well-developed
theories that provide candidate explanations of relevant phenomena. By acknowledg-
ing the centrality of explanation in science, one can use TEC to appraise theories
with respect to their explanatory goodness. It is to be hoped that behavioural science
education will soon add TEC to its concern with cutting-edge research methods.
3.8 Conclusion
Since the time that this chapter was first written (Haig, 2005), a considerably
expanded book-length treatment of ATOM has been developed (Haig, 2014).
References
Abrantes, P. (1999). Analogical reasoning and modeling in the sciences. Foundations of Science, 4,
237–270.
Behrens, J. T., & Yu, C.-H. (2003). Exploratory data analysis. In J. A. Schinka & W. F. Velicer
(Eds.), Handbook of psychology (Vol. 2, pp. 33–64). New York, NY: Wiley.
Bogen, J. (2005). Regularities and causality: Generalizations and causal explanations. Studies in
the History and Philosophy of Biological and Biomedical Sciences, 36, 397–420.
Bogen, J., & Woodward, J. (1988). Saving the phenomena. Philosophical Review, 97, 303–352.
Brush, S. G. (1995). Dynamics of theory change: The role of predictions. Philosophy of Science
Association, 1994(2), 133–145.
Campbell, N. R. (1920). Physics: The elements. Cambridge, England: Cambridge University Press.
Carruthers, P. (2002). The roots of scientific reasoning: Infancy, modularity, and the art of tracking. In
P. Carruthers, S. Stich, & M. Siegal (Eds.), The cognitive basis of science (pp. 73–95). Cambridge,
England: Cambridge University Press.
Cartwright, N. (1983). How the laws of physics lie. Oxford, England: Oxford University Press.
Cattell, R. B. (1966). Psychological theory and scientific method. In R. B. Cattell (Ed.), Handbook
of multivariate experimental psychology (pp. 1–18). Chicago, IL: Rand McNally.
Chalmers, A. F. (1999). What is this thing called science? (3rd ed.). St. Lucia, Australia: University
of Queensland Press.
Chatfield, C. (1985). The initial examination of data. Journal of the Royal Statistical Society, Series
A, 148, 214–254.
Chomsky, N. (1972). Language and mind. New York, NY: Harcourt, Brace, Jovanovich.
Clark, J. M., & Paivio, A. (1989). Observational and theoretical terms in psychology: A cognitive
perspective on scientific language. American Psychologist, 44, 500–512.
Day, T., & Kincaid, H. (1994). Putting inference to the best explanation in its place. Synthese, 98,
271–295.
Duhem, P. (1954). The aim and structure of physical theory (2nd ed., P. P. Wiener, Trans.). Princeton,
NJ: Princeton University Press. (Original work published 1914).
Durrant, R., & Haig, B. D. (2001). How to pursue the adaptationist program in psychology. Philo-
sophical Psychology, 14, 357–380.
Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological
research. Psychological Review, 70, 193–242.
Efron, B., & Tibshirani, R. (1993). An introduction to the bootstrap. New York, NY: Chapman &
Hall.
Ehrenberg, A. S. C., & Bound, J. A. (1993). Predictability and prediction. Journal of the Royal
Statistical Society: Series A, 156, 167–206.
Eflin, J. T., & Kite, M. E. (1996). Teaching scientific reasoning through attribution. Teaching of
Psychology, 23, 87–91.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of
exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.
Fidell, L. S., & Tabachnick, B. G. (2003). Preparatory data analysis. In J. A. Schinka & W. F. Velicer
(Eds.), Handbook of psychology (Vol. 2, pp. 115–121). New York, NY: Wiley.
Franklin, A. (1990). Experiment, right or wrong. Cambridge, England: Cambridge University Press.
Gage, N. L. (1996). Confronting counsels of despair for the behavioral sciences. Educational
Researcher, 25, 5–15, 22.
Giere, R. N. (1983). Testing theoretical hypotheses. In J. Earman (Ed.), Testing scientific theories
(pp. 269–298). Minneapolis, MN: University of Minnesota Press.
Giere, R. N. (1988). Explaining science: A cognitive approach. Chicago, IL: University of Chicago
Press.
Glymour, C. (1980). Theory and evidence. Princeton, NJ: Princeton University Press.
Goffman, E. (1969). The presentation of self in everyday life. London, England: Penguin.
Goldman, A. I. (1986). Epistemology and cognition. Cambridge, MA: Harvard University Press.
Haig, B. D. (1987). Scientific problems and the conduct of research. Educational Philosophy and
Theory, 19, 22–32.
Haig, B. D. (2005). Exploratory factor analysis, theory generation, and scientific method. Multi-
variate Behavioral Research, 40, 303–329.
Haig, B. D. (2014). Investigating the psychological world: Scientific method in the behavioral
sciences. Cambridge, MA: MIT Press.
Harman, G. (1965). The inference to the best explanation. Philosophical Review, 74, 88–95.
Harré, R. (1970). The principles of scientific thinking. Chicago, IL: University of Chicago Press.
Harré, R. (1976). The constructive role of models. In L. Collins (Ed.), The use of models in the
social sciences (pp. 16–43). London, England: Tavistock.
Harré, R. (1988). Where models and analogies really count. International Studies in the Philosophy
of Science, 2, 119–133.
Harré, R., & Secord, P. F. (1972). The explanation of social behavior. Oxford, England: Blackwell.
Hempel, C. (1966). Philosophy of natural science. Englewood Cliffs, NJ: Prentice Hall.
Hesse, M. B. (1966). Models and analogies in science. Notre Dame, IN: University of Notre Dame
Press.
Josephson, J. R., & Josephson, S. G. (1994). Abductive inference: Computation, philosophy, tech-
nology. New York, NY: Cambridge University Press.
Laudan, L. (1977). Progress and its problems. Berkeley, CA: University of California Press.
Laudan, L. (1981). Science and hypothesis. Dordrecht, The Netherlands: Reidel.
Laudan, L. (1996). Beyond positivism and relativism: Theory, method, and evidence. Berkeley:
University of California Press.
Lee, M. D., & Wagenmakers, E. J. (2005). Bayesian statistical inference in psychology: Comment
on Trafimow (2003). Psychological Review, 112, 662–668.
Lindsay, R. M., & Ehrenberg, A. S. C. (1993). The design of replicated studies. American Statisti-
cian, 47, 217–228.
Lipton, P. (2004). Inference to the best explanation (2nd ed.). London, England: Routledge.
Lycan, W. G. (1988). Judgement and justification. Cambridge, England: Cambridge University
Press.
Magnani, L. (2001). Abduction, reason, and science: Processes of discovery and explanation. New
York, NY: Kluwer/Plenum Press.
Magnani, L. (2009). Abductive cognition: The epistemological and eco-cognitive dimensions of
hypothetical reasoning. Berlin, Germany: Springer.
McGuire, W. J. (1997). Creative hypothesis generating in psychology: Some useful heuristics.
Annual Review of Psychology, 48, 1–30.
Mumford, S. (1998). Dispositions. Oxford, England: Oxford University Press.
Nickles, T. (1981). What is a problem that we might solve it? Synthese, 47, 85–118.
Nickles, T. (1987). Twixt method and madness. In N. J. Nersessian (Ed.), The process of science
(pp. 41–67). Dordrecht, The Netherlands: Martinus Nijhoff.
Peirce, C. S. (1931–1958). Collected papers (C. Hartshorne, P. Weiss, & A. Burks, Eds., Vols.
1–8). Cambridge, MA: Harvard University Press.
Popper, K. R. (1959). The logic of scientific discovery. London, England: Hutchinson.
Preacher, K. J., & MacCallum, R. C. (2003). Repairing Tom Swift’s electric factor analysis machine.
Understanding Statistics, 2, 13–43.
Proctor, R. W., & Capaldi, E. J. (2001). Empirical evaluation and justification of methodologies in
psychological science. Psychological Bulletin, 127, 759–772.
Rorer, L. G. (1991). Some myths of science in psychology. In D. Cicchetti & W. M. Grove (Eds.),
Thinking clearly about psychology: Vol. 1. Matters of public interest (pp. 61–87). Minneapolis,
MN: University of Minnesota Press.
Ross, S. D. (1981). Learning and discovery. New York, NY: Gordon & Breach.
Rozeboom, W. W. (1972). Scientific inference: The myth and the reality. In R. S. Brown & D. J.
Brenner (Eds.), Science, psychology, and communication: Essays honoring William Stephenson
(pp. 95–118). New York, NY: Teachers College Press.
Rozeboom, W. W. (1999). Good science is abductive, not hypothetico-deductive. In L. L. Harlow,
S. A. Mulaik, & J. H. Steiger (Eds.), What if there were no significance tests? (pp. 335–391).
Hillsdale, NJ: Erlbaum.
Schmidt, F. L. (1992). What do data really mean? Research findings, meta-analysis, and cumulative
knowledge in psychology. American Psychologist, 47, 1173–1181.
Sidman, M. (1960). Tactics of scientific research. New York, NY: Basic Books.
Simon, H. A. (1977). Models of discovery. Dordrecht, The Netherlands: Reidel.
Skinner, B. F. (1984). Methods and theories in the experimental analysis of behavior. Behavioral
and Brain Sciences, 7, 511–546.
Sohn, D. (1996). Meta-analysis and science. Theory and Psychology, 6, 229–246.
Stephenson, W. W. (1961). Scientific creed—1961. Psychological Record, 11, 1–25.
Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge
University Press.
Thagard, P. (1988). Computational philosophy of science. Cambridge, MA: MIT Press.
Thagard, P. (1992). Conceptual revolutions. Princeton, NJ: Princeton University Press.
Thagard, P. (2000). Coherence in thought and action. Cambridge, MA: MIT Press.
Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison Wesley.
Walters, J. M., & Gardner, H. (1986). The theory of multiple intelligences: Some issues and answers.
In R. J. Sternberg & R. K. Wagner (Eds.), Practical intelligence: Nature and origins of competence
in the everyday world (pp. 163–182). Cambridge, England: Cambridge University Press.
Whitt, L. A. (1992). Indices of theory promise. Philosophy of Science, 59, 612–634.
Wilkinson, L., & The Task Force on Statistical Inference. (1999). Statistical methods in psychology
journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Woodward, J. (1989). Data and phenomena. Synthese, 79, 393–472.
Woodward, J. (2000). Data, phenomena, and reliability. Philosophy of Science, 67(Suppl.), 163–179.
Woodward, J. (2003). Making things happen. Oxford, England: Oxford University Press.