Viswanath Venkatesh
University of Arkansas
Susan A. Brown
University of Arizona
Hillol Bala
Indiana University
This is a pre-publication version and was subject to copyediting and proofing prior to publication.
BRIDGING THE QUALITATIVE-QUANTITATIVE DIVIDE: GUIDELINES FOR CONDUCTING MIXED
METHODS RESEARCH IN INFORMATION SYSTEMS
ABSTRACT
Mixed methods research is an approach that combines quantitative and qualitative research methods in the
same research inquiry. Such work can help develop rich insights into various phenomena of interest that
cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits
and repeated calls for such work, there is a dearth of mixed methods research in information systems (IS).
Building on the literature on recent methodological advances in mixed methods research, we develop a set
of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important
aspects of conducting mixed methods research: (a) appropriateness of a mixed methods approach; (b)
development of meta-inferences—i.e., substantive theory—from mixed methods research; and (c)
assessment of the quality of meta-inferences—i.e., validation of mixed methods research. The applicability
of these guidelines is illustrated using two published IS papers that used mixed methods.
INTRODUCTION
Diversity in research methods is considered a major strength of information systems (IS) research
(Lee 1999; Robey 1996; Sidorova et al. 2008). IS researchers have employed a plethora of different
research methods that can, at one level, be broadly categorized into two: quantitative and qualitative (Lee
and Hubona 2009; Myers and Avison 2002). One of the recurring issues in social and behavioral sciences
research is the relative value of different research approaches, especially with intense debates on different
epistemologies (e.g., positivist vs. interpretive) and methodologies (e.g., qualitative vs. quantitative). While
there have been increasing calls for going beyond the rhetoric of the differences among epistemologies and
methodologies to develop a disciplined methodological pluralism (Landry and Banville 1992; Weber 2004),
there is limited research that has employed methodological pluralism in the IS literature (Kaplan and
Duchon 1988; Mingers 2001, 2003). Particularly, while the current state of methodological diversity in IS
research is encouraging, there is a dearth of research in IS that employs a mixed methods1 approach—i.e.,
use of both qualitative and quantitative methods in a single research inquiry—that builds on a common
scientific basis essential to advance and sustain the tradition of methodological diversity in IS research and
to create a cumulative body of knowledge (Lee and Hubona 2009; Mingers 2001, 2003; Weber 2004).
Mixed methods research has been termed the third methodological movement (paradigm), with
quantitative and qualitative methods representing the first and second movements (paradigms) respectively
(Ridenour and Newman 2008; Teddlie and Tashakkori 2003, 2009). While proponents of mixed methods
research have suggested areas in which a mixed methods approach is potentially superior to a single
method design, there has been intense debate regarding whether or not it is even appropriate to combine
multiple methods that are often based on radically different paradigmatic assumptions (Denzin and Lincoln
1994; Guba 1987). There are several challenges associated with methodological pluralism stemming from the notion of the incompatibility thesis.2
1There is a conceptual distinction between multimethod and mixed methods research that is discussed later in the section titled
“Mixed Methods Research.”
Nonetheless, several researchers have challenged this thesis and suggested that it is, in fact, feasible to conduct research that cuts
across multiple methodologies and paradigms (Mingers 1997, 2001; Ridenour and Newman 2008; Teddlie
and Tashakkori 2003, 2009). Several researchers have reviewed prior calls for methodological combination
and suggested that a peaceful coexistence of multiple methodologies is possible (Datta 1994; House 1994;
Ridenour and Newman 2008; Rossi 1994). Others have called for a combination of research methods, arguing that such a combination offers richer insights into a phenomenon (Denzin 1978; Jick 1979; Mingers 1997, 2001; Reichardt and Rallis 1994).
Despite such calls for methodological pluralism and the benefits of combining multiple methods,
there has not been much research in IS that has employed a mixed methods approach. Our review of the
IS literature suggests that less than 5% of the empirical studies published between 2001 and 2007 in the
six major IS journals identified in the Senior Scholars’ Basket of Journals (AIS 2007)3 have employed mixed
methods. Considering the strength of mixed methods research with respect to understanding and
explaining complex organizational and social phenomena, there is clearly a need for IS researchers to
conduct and publish research that employs mixed methods (Cao et al. 2006; Mingers 2001). However, we
observe that while guidelines for conducting and evaluating different types of research—e.g., quantitative,
positivist case study, interpretive case study, design science, and action research—have been widely
available in the IS literature (e.g., Dubé and Paré 2003; Hevner et al. 2004; Klein and Myers 1999; Lee
1989; Mingers 2001; Myers and Klein 2011; Straub et al. 2004), guidelines for conducting and evaluating
mixed methods research in IS are lacking. Further, mixed methods research has received much attention in
the social and behavioral sciences recently (see Tashakkori and Creswell 2008 for a review), and we believe that IS research can benefit greatly from these recent methodological advances.
2 The incompatibility thesis suggests that compatibility “between quantitative and qualitative methods is impossible due to the
incompatibility of the paradigms underlying the methods…researchers who combine the two methods are doomed to failure due
to the differences in underlying systems” (Teddlie and Tashakkori 2003, p. 7).
3 European Journal of Information Systems, Information Systems Journal, Information Systems Research, Journal of the Association for Information Systems, Journal of Management Information Systems, and MIS Quarterly.
Our view is consistent with researchers who suggest that a peaceful co-existence of multiple
paradigms is feasible in a research inquiry. In fact, we suggest that if a mixed methods approach helps a
researcher find theoretically plausible answer(s) to his or her research question(s) and if the researcher is
able to overcome the cognitive and practical barriers associated with conducting mixed methods research,
he or she should undertake such research without much consideration of paradigmatic or cultural constraints. We believe a mixed methods approach can help IS researchers develop rich insights into various phenomena and novel theoretical perspectives. However, the decision to
conduct mixed methods research should hinge on the research question, purpose, and context. In keeping
with this view, we offer a set of guidelines for conducting and evaluating mixed methods research in IS. Our
primary goal is to initiate and facilitate discourse on mixed methods research in IS, and encourage and
assist IS researchers to conduct rigorous mixed methods research to advance the field.
While we provide a set of general guidelines for conducting mixed methods research, we elaborate on
three important areas in our guidelines: (a) appropriateness of a mixed methods approach; (b) development
of meta-inferences—i.e., substantive theory4—from mixed methods research; and (c) assessment of the quality of meta-inferences—i.e., validation of mixed methods research. We elaborate on these three areas because while much progress has been made in understanding the design issues
related to mixed methods research, there has been limited discussion and understanding of when to
conduct mixed methods research—i.e., appropriateness of mixed methods research—how to discover and
develop integrative findings from mixed methods research—i.e., meta-inferences—and how to assess the
quality of meta-inferences—i.e., validation. We illustrate the applicability of our guidelines using two
exemplars of mixed methods research from the IS literature. We also discuss implications of our guidelines
with respect to assessing the rigor and quality of mixed methods approaches employed by IS researchers.
4A substantive theory represents concepts and their interrelation into a set of theoretical statements for a given substantive area
or issue (Glaser and Strauss 1965).
The paper is organized as follows. First, we discuss mixed methods research. Then, we review the
recent body of IS research employing a mixed methods approach. Next, we present guidelines for mixed
methods research. Finally, we discuss two published papers using a mixed methods approach in light of our guidelines.
MIXED METHODS RESEARCH
Mixed methods research, at its core, involves a research design that uses multiple methods—i.e.,
more than one research method or more than one worldview (i.e., quantitative or qualitative research approaches)—in a research inquiry (Tashakkori and Teddlie 2003a, 2003b). Tashakkori and Teddlie
identified two major types of multiple methods research: (1) mixed methods research, which is the focus of
the current paper; and (2) multimethod research (Mingers 2001, 2003). While the terms mixed methods and
multimethod have been used interchangeably in social and behavioral sciences including IS, there are
significant conceptual differences between the two. In multimethod research, researchers employ two or
more research methods, but may (or may not) restrict the research to a single worldview (Mingers and
Brocklesby 1997; Teddlie and Tashakkori 2003, 2009). For instance, a researcher may use participant
observation and oral history to study a new IS implementation in an organization. Another researcher may
use ethnography and case study to understand the same phenomenon. In both cases, the researchers are
restricted to a single worldview—i.e., qualitative—but employ multiple methods of data collection and
analysis. In fact, Mingers and Brocklesby (1997) classified methodology combination—i.e., combining two or more whole methodologies—and methodology partitioning—i.e., partitioning methodologies and combining parts (e.g., two different methodologies within qualitative paradigms)—as two distinct types of multiple methods research. They suggested that multimethodology
research can be conducted using either a single paradigm or multiple paradigms. In contrast, mixed methods research by definition is more in line with methodology combination, which essentially requires the combination of both qualitative and quantitative worldviews.
Multimethod research is not limited to a qualitative worldview. In fact, in the quantitative paradigm,
Campbell and Fiske (1959) developed the concept of multitrait-multimethod matrix (MTMM) to assess the
construct validity of a set of measures. They suggested the use of multiple methods to collect and analyze
data to ensure a high degree of reliability and validity in quantitative analysis—e.g., survey and direct
observations. While this approach of using multiple methods is in line with the spirit of multimethod
research, another approach of multimethod research within a quantitative worldview would be the use of
two different quantitative methods—e.g., an experiment and a field study—to develop a holistic
understanding of a phenomenon of interest. For example, Sun and Zhang (2006) conducted a multimethod
study using two different quantitative methods—a field study and an experiment—to understand the causal
relationships between perceived enjoyment and perceived ease of use in the context of IS adoption.
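To make the MTMM idea concrete, the following sketch (in Python) builds the trait-by-method correlation matrix that underlies Campbell and Fiske's (1959) convergent and discriminant validity checks. All data, variable names, and parameters here are hypothetical, invented purely for illustration:

    import numpy as np
    import pandas as pd

    # Hypothetical data: two traits (perceived usefulness, perceived ease of
    # use), each measured by two methods (survey, observation); n = 100.
    rng = np.random.default_rng(42)
    n = 100
    usefulness = rng.normal(size=n)                # latent trait 1
    ease = 0.4 * usefulness + rng.normal(size=n)   # latent trait 2, related

    measures = pd.DataFrame({
        "usefulness_survey": usefulness + 0.3 * rng.normal(size=n),
        "usefulness_obs":    usefulness + 0.5 * rng.normal(size=n),
        "ease_survey":       ease + 0.3 * rng.normal(size=n),
        "ease_obs":          ease + 0.5 * rng.normal(size=n),
    })

    # The MTMM matrix is the correlation matrix over all trait-method
    # combinations. Convergent validity: the same trait measured by different
    # methods should correlate highly (usefulness_survey vs. usefulness_obs).
    # Discriminant validity: different traits should correlate more weakly.
    print(measures.corr().round(2))

In this layout, high same-trait/different-method correlations alongside lower different-trait correlations are the pattern of evidence for construct validity that the MTMM approach looks for.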
Mixed methods research, in contrast, uses quantitative and qualitative research methods either concurrently—i.e., independent of each other—or sequentially—e.g., findings from one approach inform the other—in different phases of a research inquiry to understand a phenomenon of interest. For instance, Ang and Slaughter (2001) conducted
a sequential mixed methods study—a quantitative study followed by a qualitative study—to understand
differences in work attitudes, behaviors, and performance across two groups of information technology (IT)
professionals—contract vs. permanent. Therefore, all mixed methods research studies are, by definition,
multimethod research, but not all multimethod studies are mixed methods research.
Proponents of mixed methods research appreciate the value of both quantitative and qualitative
worldviews to develop a deep understanding of a phenomenon of interest. For example, a researcher may
use interviews (a qualitative data collection approach) and surveys (a quantitative data collection approach)
to collect data about a new IS implementation. Another researcher might employ an ethnography (a
qualitative method) and a field experiment (a quantitative method) to understand the same phenomenon.
Creswell and Clark (2007) suggested four major types of mixed methods designs: (1) triangulation—i.e.,
merge qualitative and quantitative data to understand a research problem; (2) embedded—i.e., use either
qualitative or quantitative data to answer a research question within a largely quantitative or qualitative
study; (3) explanatory—i.e., use qualitative data to help explain or elaborate quantitative results; and (4)
exploratory—i.e., collect quantitative data to test and explain a relationship found in qualitative data. Others
proposed different typologies of mixed methods research with respect to the temporal sequence of data
collection and analyses (see Morse 2003; Teddlie and Tashakkori 2009 for a review of these typologies).
Regardless of the type of research design employed, the key characteristic of mixed methods research is
the sequential or concurrent combination of quantitative and qualitative methods (e.g., data collection, analysis, and interpretation of findings).
With the rapid advancement of a new and complex array of ITs, organizations constantly face new
challenges related to their understanding of IT capabilities, practices, usage, and impacts. Further, the
diffusion of the Internet, the proliferation of numerous non-work related systems and social media, and the
availability of myriad IT-enabled devices have now made IT an integral part of individuals’ lives. As a result
of this rapidly changing environment, IS researchers often encounter situations in which existing theories
and findings do not sufficiently explain or offer significant insights into a phenomenon of interest. Mixed
methods design strategies provide a powerful mechanism for IS researchers to deal with such situations.
We discuss three major strengths of mixed methods research to depict the value of conducting
such research in the IS literature. We provide specific examples where a mixed methods approach is more
advantageous than a single method approach to make substantial theoretical contributions. First, mixed
methods research has the ability to address confirmatory and exploratory research questions
simultaneously (Teddlie and Tashakkori 2003, 2009). While both qualitative and quantitative methods can
arguably be used to address similar research questions, qualitative methods have been typically used more
in IS and other social sciences for exploratory research in order to develop a deep understanding of a
phenomenon and/or to inductively generate new theoretical insights (Punch 1998; Walsham 2006). In
contrast, quantitative methods have typically been used more in IS for confirmatory studies, such as theory
testing. Mixed methods research, by combining both qualitative and quantitative methods, has the ability to
address both exploratory and confirmatory questions within the same research inquiry.
For instance, when e-commerce was an emerging phenomenon and researchers began studying
it, they employed exploratory qualitative studies to unearth factors related to individuals’ perceptions of e-
commerce. In one of the earlier studies on e-commerce, Keeney (1999) conducted interviews to
understand individuals’ perceptions of pros and cons of e-commerce. An exploratory approach was
necessary at that time because extant theoretical models did not provide adequate insights on e-
commerce. Subsequently, there was a series of confirmatory quantitative studies to test theoretical models
of e-commerce adoption and use (e.g., Gefen et al. 2003; Koufaris 2002). While these were primarily single
method studies, Pavlou and Fygenson (2006) undertook mixed methods research to study e-commerce
adoption and use. They first conducted an exploratory belief elicitation study to unearth the factors that
individuals consider when making a decision about e-commerce adoption. They used a qualitative method
(i.e., open-ended questions) for this belief elicitation study. Given that e-commerce was still an emerging
phenomenon in the mid-2000s, with concerns related to privacy, security and website capabilities, and
existing theories were still lacking in terms of offering a comprehensive set of factors that individuals might
consider when making the adoption decision, an exploratory qualitative study offered a rich mechanism to
discover these factors. Pavlou and Fygenson (2006) subsequently included these factors in a research
model of e-commerce adoption and tested the model using a confirmatory quantitative study.
Second, mixed methods research has the ability to provide stronger inferences than a single
method or worldview (Teddlie and Tashakkori 2003, 2009). While we understand that IS research that
employs rigorous qualitative or quantitative methods offers rich insights on various IS phenomena, we
suggest that mixed methods research, by combining inferences from both qualitative and quantitative
studies, can offset the disadvantages that certain methods have by themselves (Greene and Caracelli
1997). Mixed methods research can leverage the complementary strengths and non-overlapping
weaknesses of qualitative and quantitative methods and offer greater insights on a phenomenon that each
of these methods individually cannot offer (Johnson and Turner 2003). For example, interviews, a
qualitative data collection approach, can provide depth in a research inquiry by allowing researchers to gain
deep insights from rich narratives, and surveys, a quantitative data collection approach, can bring breadth
to a study by helping researchers gather data about different aspects of a phenomenon from many
participants. Together, these two data collection approaches can help IS researchers make better and stronger inferences—i.e., meta-inferences. Meta-inferences are integrative inferences that are drawn from the qualitative and quantitative strands of mixed methods research, and are considered essential to developing a holistic understanding of a phenomenon. Consider, for example, IS implementations in organizations. Prior IS implementation research from both qualitative (e.g., Boudreau and Robey 2005) and
quantitative (e.g., Venkatesh et al. 2003) approaches has offered insights on how employees react to a
new IS. However, we believe that much qualitative research on IS implementations did not offer insights on
the breadth of issues and reactions from a vast majority of stakeholders due to the practical limitations
related to the number of stakeholders who could be interviewed and topics that could be covered during the
interviews. Similarly, quantitative studies failed to offer deep insights on the context of an IS implementation
and failed to capture the depth of reactions from stakeholders. In this case, mixed methods research can
potentially offer a holistic understanding of IS implementations—e.g., a substantive theory of IS
implementation with a balance of breadth and depth—by facilitating high quality meta-inferences.
Finally, mixed methods research provides an opportunity for a greater assortment of divergent
and/or complementary views (Teddlie and Tashakkori 2003, 2009). While conducting mixed methods
research, a researcher may find different (e.g., contradictory and complementary) conclusions from the
quantitative and qualitative strands. Such divergent findings are valuable in that they lead to a re-
examination of the conceptual framework and the assumptions underlying each of the two strands of mixed
methods research. These findings not only enrich our understanding of a phenomenon but also help us develop new theoretical insights—i.e., substantive theory—and open new avenues for future inquiries. Complementary findings are equally
valuable in the quest for generating substantive theories because these findings offer a holistic view of a phenomenon of interest.
For example, Venkatesh et al. (2003) theorized and found, using a quantitative approach, that
performance expectancy and effort expectancy are two major determinants of IS adoption and use.
Lapointe and Rivard (2005) conducted a qualitative study of three clinical IS implementations and
developed a continuum of employees’ reactions to new IS from adoption to aggressive resistance. They
found that different facets of perceived threats (e.g., work and economic, loss of status, loss of power,
reorganization of work) play a critical role in determining employees’ position on the continuum of adoption
and aggressive resistance. While there is no major contradiction of findings between Venkatesh et al.
(2003) and Lapointe and Rivard (2005), there is a divergent and/or complementary view that suggests that
IS adoption is not necessarily a discrete decision and individuals consider a wide variety of positive and
negative factors while making adoption vis-à-vis resistance decisions. Such divergent and/or
complementary views provide an opportunity to discover, develop, extend, and test a substantive theory of
IS adoption in this case by unearthing a comprehensive set of factors or components and their
interrelations, and can be accommodated in a single research inquiry using a mixed methods approach.
While a mixed methods approach is clearly a valuable methodological choice for IS researchers
because of its strengths discussed in the previous section, we note that such an approach is not a panacea
and does not always lead to the discovery, development or extension of a substantive theory. Employment
of a mixed methods approach in a research inquiry should serve certain purposes. We summarize seven
purposes for mixed methods research that we adapted from prior research (Creswell 2003; Greene et al.
1989; Tashakkori and Teddlie 2008). These purposes include complementarity, completeness, developmental, expansion, corroboration/confirmation, compensation, and diversity (see Table 1).
Tashakkori and Teddlie (2008, p. 103) noted that the reasons for using mixed methods are not always
“explicitly delineated and/or recognized” by researchers who conduct mixed methods research. The
explication of the purposes for conducting mixed methods research is thus an onus on the researchers conducting it.
Understanding the purposes for which mixing qualitative and quantitative methods is deemed
appropriate in a research inquiry is important for three reasons. First, we argue that unlike qualitative and
quantitative approaches, a mixed methods approach is typically not a natural methodological choice in
social and behavioral sciences. Researchers have to overcome considerable paradigmatic, cultural,
cognitive, and physical challenges to be able to conduct mixed methods research (Mingers 2001).
Therefore, we suggest that a mixed methods research approach should serve one or more purposes
beyond the core purpose of a research methodology—i.e., help researchers conduct scientific research
inquiries. Hence, researchers thinking about employing a mixed methods approach should be aware of
different purposes for utilizing a mixed methods approach in their research. Table 1 offers a comprehensive
set of purposes for mixed methods research, summarizing the reasons for employing such an approach in a research inquiry. Second, a clear statement of the purpose(s) for employing a mixed methods approach may help the reader better understand the goals and outcomes of a mixed
methods research paper. For example, if the purpose for conducting mixed methods research is for
completeness, the reader can expect that a mixed methods study will provide a more holistic view of the
phenomenon of interest than its qualitative and quantitative components will alone. Finally, an
unambiguous understanding of mixed methods research purposes will help researchers make informed
decisions about the design and analysis aspects of a mixed methods inquiry. If, for instance, the purpose
for conducting mixed methods research is developmental, a sequential mixed methods approach is an appropriate design choice because findings from one strand inform the subsequent strand.
In order to understand the current status of mixed methods research in IS, we reviewed the papers
published in the six journals in the Senior Scholars’ Basket of Journals (AIS 2007) over a seven-year period
(2001-’07). Mingers (2001, 2003) reviewed the same set of journals except Journal of Management
Information Systems and Journal of the Association for Information Systems for the period between 1993 and 2000
and found a great paucity of multimethod research in IS—i.e., only about 13 percent of empirical papers
employed multiple methods. Our review is different from Mingers’ (2001, 2003) and other prior reviews
(e.g., Orlikowski and Baroudi 1991; Walsham 1995) in two ways. First, we followed the guidelines of
Tashakkori and Teddlie (1998) and Teddlie and Tashakkori (2003) to identify mixed methods research
papers. The criteria we used were: (1) the study must be empirical; (2) both quantitative (e.g., surveys) and
qualitative (e.g., interviews) methods of data collection must be employed; and (3) both quantitative and
qualitative data must be analyzed and presented. We noticed that some studies collected only qualitative
data, but analyzed the data quantitatively (e.g., Bala and Venkatesh 2007; Sherif et al. 2006; Slaughter et
al. 2006). We did not include these studies because they do not truly represent mixed methods research.
Mingers’ reviews were more inclusive than ours in that he included empirical studies that employed more
than one research method regardless of whether the method was qualitative or quantitative—e.g., papers
with two quantitative (or qualitative) methods would qualify in Mingers’ review as multimethod papers,
whereas they would not qualify in our review as mixed methods papers. Second, we focused on
appropriateness, meta-inferences and validation aspects of mixed methods research, as our key interest
was to understand the current state of mixed methods research from these three perspectives.
We searched the journals in two complementary ways. First, we searched these journals using
EBSCO Academic Search Premier, a leading database for academic articles, for the following eight
keywords: mixed, multi, mixed methods, multimethod, qualitative, quantitative, positivist, and interpretive. In
addition, we identified the papers that cited the key papers on mixed methods research in IS, i.e., Kaplan
and Duchon (1988) and Mingers (2001, 2003). We examined the research method section of these papers
to ensure that they employed both quantitative and qualitative data collection and analysis. Second, we
downloaded all articles published in these journals between 2001 and 2007, and read the research method
and results sections to determine if the authors employed a mixed methods design. In both cases, we
coded the articles on the following dimensions: purpose for employing a mixed methods approach,
methods used and paradigmatic assumptions (e.g., positivist, interpretive, and critical research) made, and
discussion of meta-inferences and validation. These two processes were accomplished by a research
assistant and one of the authors, and their inter-rater reliability (IRR) was .93. Minor discrepancies were
discussed and resolved and a 100 percent agreement was achieved. We found a total of 31 papers that
employed a true mixed methods design. This represents approximately 3 percent of the total papers
published in these six journals during this timeframe. Table 2 presents the list of the mixed methods papers identified in our review.
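As an illustrative sketch of the agreement statistics behind such coding exercises (the codings below are hypothetical, and Cohen's kappa is a chance-corrected alternative that the paper itself does not report):

    from collections import Counter

    # Hypothetical purpose codings of ten papers by two raters.
    rater1 = ["developmental", "completeness", "completeness", "expansion",
              "developmental", "diversity", "completeness", "developmental",
              "compensation", "expansion"]
    rater2 = ["developmental", "completeness", "expansion", "expansion",
              "developmental", "diversity", "completeness", "developmental",
              "compensation", "expansion"]

    # Simple percent agreement, the most common inter-rater reliability index.
    agreement = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

    def cohens_kappa(r1, r2):
        # Agreement corrected for the agreement expected by chance alone.
        n = len(r1)
        p_obs = sum(a == b for a, b in zip(r1, r2)) / n
        c1, c2 = Counter(r1), Counter(r2)
        p_chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2
        return (p_obs - p_chance) / (1 - p_chance)

    print(f"percent agreement = {agreement:.2f}")
    print(f"Cohen's kappa     = {cohens_kappa(rater1, rater2):.2f}")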
Table 2 shows that developmental and completeness are the most dominant purposes for
conducting mixed methods research in IS (32% and 26% respectively). Diversity (3%) and compensation
(3%) are the least used purposes for mixed methods research. It is important to note that the reasons for
conducting mixed methods research discussed in these papers would fit more than one purpose in many
cases. We coded these purposes based on our interpretation of these reasons. Table 2 also shows that
surveys and interviews are the most widely used data collection methods for quantitative and qualitative
studies respectively. Brannen (2008) noted that a mixed methods researcher does not always have to treat
both qualitative and quantitative studies equally. In other words, it is possible that, in some cases, the
quantitative study is the dominant component and, in some other cases, the qualitative study dominates.
We found that a quantitative study was dominant in a majority of the mixed methods papers (55%). We
found that 65% of the papers provided an explicit discussion of meta-inferences—i.e., integrative findings
from both quantitative and qualitative studies. Finally, validation of mixed methods research was not explicitly discussed in most of these papers.
While this review provides useful information about mixed methods research in IS, we also
searched for significant research programs in which IS researchers employed a mixed methods approach
for collecting and analyzing data, but crafted separate papers for the qualitative and quantitative studies
respectively. We conducted this search in two phases. In the first phase, we searched Web of Science for
all qualitative papers published between 2001 and 2007 in one of the six journals.5 We then examined whether the authors had also published quantitative papers from the same research program. While we found several research programs that offered deep insights on different phenomena of interest (see Table 3 for
examples), we noticed that none of these programs could be considered a true mixed methods research
program because the researchers did not offer meta-inferences of their findings. In other words, there was
no visible effort to integrate the findings from qualitative and quantitative studies—i.e., to provide meta-
inferences. Without such integration, it is difficult to classify a research program as truly being mixed
5 We did not include unpublished work, such as working papers or doctoral dissertations.
methods research (Teddlie and Tashakkori 2003, 2009). In the second phase, our goal was to extend this
search to include all authors who published in these six journals. However, given that we could not find a
single research program that had a true mixed methods approach in the first phase, we did not conduct the
second phase of the search. This is consistent with Mingers (2001), who also could not find a significant number of such research programs in IS.
It is important to note that, in many cases, it was difficult to determine whether qualitative and
quantitative papers were parts of one research program due to the lack of matching descriptions of
research contexts in these papers. While the outcome of this search process was not entirely fruitful, it
confirmed our contention that there is a dearth of mixed methods research in IS. Despite its outcome, we
discuss this search process because it depicts a situation of “interim struggles” in the research process
(Runkel and Runkel 1984, p. 130). As we noted, while we found several research programs that had
characteristics of mixed methods research, we were unable to confirm (or disconfirm) whether or not these
programs were indeed examples of true mixed methods research programs due to the absence of an explicit link across the papers. In many cases, researchers published multiple articles without providing much detail to link these articles, thus making it difficult for the
reader to integrate findings from the qualitative and quantitative studies. It may not be possible or desirable
to publish all papers from such a research program as mixed methods papers because of different research
questions and interests. In addition, researchers typically prefer to have multiple publications from a
research program. We argue that publishing single method papers from a mixed methods research
program can lead to at least two potential drawbacks: contribution shrinkage and communal disutility.
If IS researchers continue to publish single method papers from mixed methods programs, they are
likely to miss the opportunity to discover, develop, or extend a substantive theory in richer ways than
possible with single method papers. A mixed methods approach, particularly the associated meta-
inferences, offers mechanisms for discovering substantive theory by allowing researchers to not only
unearth components related to a phenomenon, but also unveil interrelations among these components and
boundary conditions surrounding these interrelations. We suggest that papers from a mixed methods
research program that only report findings from single method research thus miss opportunities to
contribute substantially to the literature—hence, contribution shrinkage. Further, the entire community of
researchers who are interested in this phenomenon fails to learn intricacies of the phenomenon because a
holistic account is not provided, leading to communal disutility. Thus, publishing single method papers from
mixed methods research programs is disadvantageous to a researcher and the academic community.
Validation is a cornerstone of scientific research because it helps establish quality and rigor (Cook and Campbell 1979; Shadish et al. 2002). There is a rich and long tradition of
applying validation principles in both quantitative and qualitative studies. However, while there is a general
consensus among researchers with respect to the validation principles and processes in quantitative
studies, researchers do not have any such agreement when it comes to applying validation principles in
qualitative studies. Nevertheless, there have been attempts in recent years to develop a cumulative body of
knowledge of validation principles and processes for qualitative research (Lincoln and Guba 1985; Mertens
2005). In this section, we first briefly discuss validation in quantitative and qualitative research
independently. This is particularly important in our discussion of mixed methods research because we
suggest that the quantitative and qualitative strands in a mixed methods design are subject to the traditional
validation principles from each of these strands respectively. We then discuss the notion of validation in
mixed methods research. Building on the suggestions of scholars who advanced our knowledge of
research methodologies (Cook and Campbell 1979; Lincoln and Guba 2000; Maxwell 1992; Nunnally and
Bernstein 1994; Patton 2002), we categorize the most widely used validation concepts from quantitative
and qualitative research, summarize them in Table 4 and discuss them in this section.
----- Insert Table 4 about here -----
Straub and his colleagues have provided detailed reviews and guidelines on validation in
quantitative research (Boudreau et al. 2004; Gefen et al. 2000; Straub 1989; Straub et al. 2004). Typically,
in quantitative research, two primary validation issues are addressed—i.e., reliability and validity of
measures. These two validation approaches are applicable to both formative and summative validity as
described by Lee and Hubona (2009). Reliability is related to the quality of measurement (Straub et al.
2004). A measure is considered reliable if it yields the same result over and over again. Types of reliability
and guidelines for assessing reliability are discussed elsewhere (Straub et al. 2004). Without reliable
measures, a quantitative study is considered invalid (Straub et al. 2004). Therefore, reliability is a necessary, though not sufficient, condition for the validity of a quantitative study.
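The text does not prescribe a particular reliability statistic, but as one concrete illustration, Cronbach's alpha—a widely used internal-consistency index—can be computed directly from an item-score matrix (the responses below are hypothetical):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        # Cronbach's alpha for an (n_respondents x k_items) score matrix.
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Hypothetical 7-point Likert responses: 5 respondents x 3 items.
    scores = np.array([[5, 6, 5],
                       [3, 3, 4],
                       [6, 7, 6],
                       [2, 3, 2],
                       [4, 4, 5]])
    print(f"alpha = {cronbach_alpha(scores):.2f}")  # 0.96: consistent items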
Validity refers to the legitimacy of the findings—i.e., how accurately the findings represent the truth
in the objective world. As shown in Table 4, there are three broad types of validity in quantitative research
(Cook and Campbell 1979; Shadish et al. 2002): (a) measurement validity, e.g., content validity, construct
validity; (b) design validity, i.e., internal and external validity; and (c) inferential validity, i.e., statistical
conclusion validity. Measurement validity estimates how well an instrument measures what it purports to
measure in terms of its match with the entire definition of the construct. Design validity encompasses
internal and external validity. Internal validity is the extent of approximate truth about inferences regarding
cause-effect or causal relationships in a scientific inquiry (Shadish et al. 2002). External validity is the
extent to which the results of a research study can be generalized to other settings and groups. Finally,
inferential or statistical conclusion validity is related to the findings of quantitative studies. It refers to the
appropriate use of statistics to infer whether the presumed independent and dependent variables covary.
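As a minimal illustration of this covariation logic (simulated data; the variable names are ours, and a Pearson correlation is only one of many defensible tests):

    import numpy as np
    from scipy import stats

    # Do a presumed independent variable (perceived usefulness) and a
    # dependent variable (use intention) covary? Statistical conclusion
    # validity rests on choosing an appropriate test and enough power.
    rng = np.random.default_rng(7)
    usefulness = rng.normal(size=80)
    intention = 0.5 * usefulness + rng.normal(size=80)

    r, p = stats.pearsonr(usefulness, intention)
    print(f"r = {r:.2f}, p = {p:.4f}")  # small p: reject 'no covariation'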
Quantitative research in IS has recognized the importance of reliability and validity. Norms and
thresholds have been established over the years and have become generally accepted in the IS literature
about how to report reliability and validity. Reviewers and editors are very particular about these norms and
thresholds and it is unlikely that a quantitative paper that fails to follow the norms and meet the thresholds
will be published in IS journals. Recent reviews on validation in IS research have confirmed the steady
progress toward rigorous validation in quantitative IS research (Boudreau et al. 2004; Straub et al. 2004).
As noted earlier, unlike quantitative research that has generally accepted and largely undisputed
guidelines for validation (Cook and Campbell 1979; Nunnally and Bernstein 1994), qualitative research
does not have guidelines or evaluation criteria for validation that are generally accepted and/or widely used
(Kirk and Miller 1986; Lee and Hubona 2009). The issue of validation in qualitative research is rather
ambiguous and contentious (Maxwell 1992; Ridenour and Newman 2008). Some researchers, primarily
from the positivist paradigm, have suggested that the same set of criteria used in quantitative studies can
be applied to qualitative studies, while other researchers, primarily interpretivist or constructivist, have
suggested a different set of evaluation criteria. Some researchers have even suggested that the notion of
validation, such as reliability and validity, should not even be considered a criterion for evaluating
qualitative research (Guba and Lincoln 2005; Maxwell 1992; Stenbacka 2001). Others have suggested that
while validation is important for qualitative research, it should be called something other than reliability and
validity to distinguish it from what is done in quantitative research (Lincoln and Guba 1985; Patton 2002).
Regardless of the different views of validation in qualitative research, there is some agreement that
validation (or similar concepts) is essential in qualitative research to reduce misunderstanding of qualitative
research and to develop a common scientific body of knowledge (Maxwell 1992). In the IS literature, Lee
and Hubona (2009) recently highlighted the importance of establishing validity in qualitative research.
In qualitative research, consistency and dependability of data and analysis are two terms that are
conceptually similar to reliability in quantitative research (Lincoln and Guba 1985). Lincoln and Guba (1985)
suggested a process called inquiry audit to measure consistency and dependability of qualitative data.
They argued that because reliability is a necessary condition for validity, demonstrating validity in
qualitative research is sufficient to establish reliability. Validity, in the context of a qualitative study, is
defined as the extent to which data are plausible, credible and trustworthy, and thus can be defended when
challenged. Maxwell (1992) suggested three types of validity in qualitative research: (1) descriptive validity:
the accuracy of what is reported—e.g., events, objects, behaviors, and settings—by the researchers; (2)
interpretive validity: the accuracy of interpreting what is going on in the minds of the participants and the
degree to which the participants’ views, thoughts, feelings, intentions, and experiences are accurately
understood by the researchers; and (3) theoretical validity: the extent to which the theoretical explanation developed from the study fits the data and is credible.
While Maxwell’s (1992) suggestions about validity are broad, others have suggested more specific
forms of validity for qualitative research. For example, Lincoln and Guba (2000) suggested three criteria for
judging the soundness of qualitative research and explicitly offered these as an alternative to more
traditional quantitatively oriented criteria. These are: (1) credibility (as opposed to internal validity of
quantitative research); (2) transferability (as opposed to external validity of quantitative research); and (3)
confirmability (as opposed to statistical conclusion validity in quantitative research). Consistent with the
classification of quantitative validity types presented in Table 4, we organized different types of validity for
qualitative research into three broad categories: (a) design validity, e.g., descriptive validity, credibility, and
transferability; (b) analytical validity, e.g., theoretical validity, dependability, consistency, and plausibility;
and (c) inferential validity, e.g., interpretive validity and confirmability. This classification is consistent with
Guba and Lincoln (2005) and Ridenour and Newman (2008) who discussed two types of validation issues
in qualitative research—i.e., rigor in the application of methods (design validity) and rigor in the
interpretation of data (analytical and inferential validities). Design validity refers to how well a qualitative
study was designed and executed so that the findings are credible and transferable. Analytical validity
refers to how well qualitative data were collected and analyzed so that the findings are dependable,
consistent, and plausible. Finally, inferential validity refers to the quality of interpretation—i.e., how accurately the conclusions drawn reflect the data and the participants' views.
Given that there are no generally accepted guidelines, expectations and/or norms to discuss
validity in qualitative research, many IS researchers take an implicit approach of discussing validity in their
work. Researchers who prefer an implicit approach typically do not offer a formal discussion of validation.
Instead, they ensure rigor in their application of methods and interpretation of data by providing rich
descriptions of their engagement, high quality data collection efforts, and rigorous data analyses and
reporting (Guba and Lincoln 2005; Ridenour and Newman 2008). While this approach is not inconsistent
with the approach taken by qualitative researchers more broadly (see Maxwell 1992), our view is that it is
helpful if qualitative researchers provide an explicit discussion of validity. This view is consistent with Klein
and Myers (1999) who provide a set of principles for conducting and evaluating interpretive research in IS
and Lee and Hubona (2009) who advocate for a more explicit and rigorous treatment of validity in both
quantitative and qualitative research in IS in order to develop and maintain a common scientific basis.
While there has been much progress with respect to the design of mixed methods research, limited
guidance is available in the literature for validation in mixed methods research. Creswell and Clark (2007,
p. 145) noted, “…the very act of combining qualitative and quantitative approaches raises additional
potential validity issues.” Some of these issues are: (a) how should validity be conceptualized in mixed
methods research; (b) how and when to report and discuss validity for qualitative and quantitative strands
of mixed methods research; (c) whether researchers should follow the traditional validity guidelines and
expectations; and (d) how to minimize potential threats to the validity related to data collection and analysis
issues in mixed methods research (Creswell and Clark 2007). Overall, validation is a major issue in mixed
methods research. Teddlie and Tashakkori (2003) argued that with so many different types of validity in
quantitative and qualitative research (see Table 4), the term “validity” has lost the intended connotation.
Teddlie and Tashakkori (2003, 2009) proposed the term inference quality to refer to validity in the context of
mixed methods research. In contrast, Creswell and Clark (2007) argued that because the term validity is
extensively used in quantitative and much qualitative research, it may be used in mixed methods research
and thus, new terminology is not essential. We believe that a mixed methods nomenclature for validation
can be useful in order to differentiate mixed methods validation from quantitative and qualitative validation.
Therefore, consistent with Teddlie and Tashakkori (2003, 2009), we use the term inference quality to refer
to validity and the term data quality to refer to reliability in mixed methods research.
An inference is defined as "a researcher's construction of the relations among people, events, and variables as well as his or her construction of respondents' perceptions,
behavior, and feelings and how these relate to each other in coherent and systematic manner” (Tashakkori
and Teddlie 2003b, p. 692). Inference quality in mixed methods research refers to the accuracy of
inductively and deductively derived conclusions in a study or research inquiry. Inference quality is an
umbrella term that includes various types of validities. In contrast, data quality is associated with the quality
of measures and/or observations—i.e., reliability (Teddlie and Tashakkori 2003). While inference quality is
pertinent to interpretations and conclusions from mixed methods research, data quality refers to the degree
to which collected data (results of measurement or observation) meet the standards of quality to be
considered valid (e.g., trustworthiness) and reliable (e.g., dependable). Teddlie and Tashakkori (2003,
2009) suggested that inference quality consists of design quality, i.e., whether a mixed methods study
adheres to commonly accepted best practices, and interpretive rigor, i.e., standards for the evaluation of
accuracy or authenticity of the conclusion. Our guidelines for validation in mixed methods research are
based on the notion of inference quality and its dimensions—i.e., design quality and interpretive rigor—as conceptualized by Teddlie and Tashakkori (2003, 2009).
As we noted at the outset, there have been several important papers published in the leading IS
journals that provide guidelines for conducting and evaluating research in areas that are not common in the
IS literature. For example: (1) Lee (1989) for case studies in IS; (2) Klein and Myers (1999) for interpretive
research in IS; (3) Mingers (2001) for multimethod research in IS; (4) Dubé and Paré (2003) for positivist
case studies in IS; (5) Lee and Baskerville (2003) for generalizability in IS research; and (6) Hevner et al.
(2004) for design science research in IS. These guidelines not only help authors craft and strengthen their
manuscripts but also help reviewers and editors to evaluate and make informed decisions about a paper.
Consequently, IS researchers are able to better design, conduct, and report research inquiries and offer
rich, theoretical insights on their phenomena of interest. In this section, we provide guidelines for mixed
methods research, with a particular focus on three areas: i.e., appropriateness of mixed methods research,
meta-inferences, and validation. While we offer a set of broad guidelines on other important aspects of
mixed methods research—e.g., research design, data collection, and analysis—we focus on these three
aspects because they have received the least attention in the extant literature on mixed methods research
(Teddlie and Tashakkori 2003, 2009). Our guidelines will help IS researchers conduct mixed methods research in a rigorous manner.
While we argue that mixed methods research can potentially offer insights into IS phenomena that
a single method may not be able to offer, we do not suggest nor do we expect that every research inquiry in
IS should employ a mixed methods approach. In fact, we note that mixed methods research is not a
substitute for rigorously conducted single method studies in IS. Instead, it is an additional approach for
gaining further insights on phenomena that are of interest to IS researchers. In this section, we offer a set
of guidelines for IS researchers to consider in making decisions regarding whether to employ a mixed
methods approach in their research. These guidelines will also help editors, reviewers, and readers of IS
research to assess and appreciate the overall appropriateness and quality of mixed methods research.
Before undertaking mixed methods research, IS researchers need to carefully consider the
appropriateness of employing a mixed methods approach in their research. While there are considerable
disagreements regarding the utility, design strategies and inference quality in mixed methods research,
there is a remarkable consistency of views with respect to how and why researchers should employ a
mixed methods approach in their research (Creswell and Clark 2007; Ridenour and Newman 2008; Teddlie
and Tashakkori 2003, 2009). The general agreement is that the selection of a mixed methods approach
should be driven by the research questions, objectives, and context (Creswell and Clark 2007; Mingers
2001; Ridenour and Newman 2008; Teddlie and Tashakkori 2003, 2009). Earlier, we discussed a set of
purposes of mixed methods research that we suggest will help IS researchers assess the suitability of a
mixed methods approach and make strategic research design decisions. Understanding these purposes
(shown in Table 1) will facilitate sound decision making with respect to the appropriateness and value of a mixed methods approach in a given research inquiry.
We suggest that while IS researchers think about their research questions, objectives and
contexts, they also need to carefully think about the three broad strengths of mixed methods research that
we discussed earlier. Our view is that IS researchers should employ a mixed methods approach only when
they intend to provide a holistic understanding of a phenomenon for which extant research is fragmented,
inconclusive, and equivocal. In particular, we suggest that it is the context of a phenomenon that should
drive the selection of methodology (Johns 2006; Rousseau and Fried 2001). Given the nature of IT artifacts
and associated phenomena, we suggest that IS researchers are in an ideal position to explore the role of
context in their research. A mixed methods approach will be a powerful mechanism to interject context into
a research inquiry. For example, while there has been much research on the impacts of IS use on
employees’ performance, there is no conclusive evidence of either a positive or a negative impact. Mixed
methods research can offer a holistic view of the circumstances under which IS use can have a positive (or negative) impact on employees' performance.
If, however, the objective of a research inquiry is to test a model that was developed from a well-
established theoretical perspective and the context of the research is not significantly different from the
context in which the theoretical perspective was developed, we suggest that there is no need to conduct
mixed methods research. For example, if an IS researcher develops a research model based on the
technology acceptance model (TAM; Davis et al. 1989) and plans to survey employees of an organization
in the U.S., there is probably no need for a mixed methods approach. However, if this study is going to be
conducted in a rural village in India, a mixed methods approach may unearth factors that are not typically
common in a developed country in the west. In that context, leveraging qualitative research, in addition to
quantitative, can likely help improve understanding of relationships in TAM that work differently (see Johns
2006), or the breakdown of TAM (see Alvesson and Kärreman 2007), and result in the emergence of
insights possible only from induction (Lee and Baskerville 2003, Locke 2007). For example, Venkatesh et
al. (2010) employed a mixed methods approach to study the influence of an IS on employees’ job
characteristics and outcomes in a developing country using the widely used job characteristics model
(JCM; Hackman and Oldham 1980). They found that while the new IS had a positive influence on job
characteristics, employees reported significantly lower job satisfaction and job performance following the
implementation of the IS. While JCM was not able to explain these puzzling findings, their qualitative study
revealed a set of contextual factors that explained these findings and offered insights on important boundary conditions of the theory.
We also urge IS researchers, editors and reviewers to consider the broad purposes of mixed
methods research described in Table 1, and evaluate how the overall research questions, objectives and
context of a mixed methods study fit with one or more of these purposes. If there is no clear fit (e.g., a
mixed methods approach does not serve the purpose of providing plausible answers to a research
question), it is likely that mixed methods research is not appropriate. For instance, if the goal of a research
inquiry is to understand the role of personality characteristics in IS adoption decisions, a mixed methods
approach may not be useful because, due to the rich theory base related to personality characteristics and
IS adoption, there is limited opportunity to meet the purposes of conducting mixed methods research.
Notwithstanding the ongoing debates surrounding the incompatibility, or incommensurability, thesis, we suggest that IS researchers have at least three options with respect to mixed
methods research paradigms: (a) alternative paradigm stance—i.e., use of new, emergent paradigms to guide mixed methods research; (b) a-paradigmatic stance—i.e., the context and demands of the inquiry, instead of paradigms, should be the guiding principle in a research inquiry; and
(c) substantive theory stance—i.e., traditional or emergent paradigms may be embedded in or intertwined
with substantive theories (Greene 2007, 2008; Teddlie and Tashakkori 2003). However, while we
acknowledge these stances as valid and powerful paradigmatic positions for mixed methods research, we
suggest that a substantive theory stance is a more appropriate paradigmatic stance for IS research due to
the dynamic nature of the field and the need for developing novel theoretical perspectives. If IS researchers
prefer to embrace an alternative paradigm as the epistemological foundation of mixed methods research,
there are at least three mixed methods research paradigms they can choose from: (a) pragmatism; (b) transformative-emancipatory; and (c) critical realism.
Pragmatism considers practical consequences and real effects to be vital components of meaning
and truth. While a quantitative approach is primarily based on deduction and a qualitative approach is
based on induction, a pragmatic approach is based on abductive reasoning that moves back and forth
between induction and deduction. This iterative approach supports the use of both qualitative and
quantitative methods in the same research study and thus rejection of the incompatibility thesis (Howe
1988; Maxcy 2003). Pragmatists believe in the “dictatorship of the research questions.” They place the
greatest importance on the research questions and select a method and paradigm that fit with the research
questions. Pragmatism rejects a forced choice between existing paradigms with regard to logic, ontology,
and epistemology. In sum, pragmatism presents a practical and applied research philosophy. Some mixed
methodologists suggest that pragmatism is the best paradigm for justifying the use of mixed methods
research (Datta 1994; Howe 1988; Teddlie and Tashakkori 2003). A detailed discussion of pragmatism is
beyond the scope of this paper and provided elsewhere (e.g., Maxcy 2003).
The second alternative is the transformative-emancipatory paradigm (Mertens 2003, 2005). The basic thesis of this paradigm is that the creation of a more just and democratic
society should be the ultimate goal for conducting research. It places central importance on the
experiences of individuals who suffer from discrimination or oppression. It focuses on the interaction
between the researcher and the participants, and suggests that this interaction requires understanding and
trust. For example, researchers engaging in the transformative-emancipatory paradigm believe that they
should be aware of power differentials in the context of their research, and should promote social equity
and justice through their research. This paradigm supports mixed methods research because multiple methods are often needed to address the complex, value-laden issues it emphasizes.
Finally, critical realism is a widely used paradigm that is particularly suitable for mixed methods
research. It offers a robust framework for the use of a variety of methods in order to gain better
understanding of the meaning and significance of a phenomenon of interest (Archer et al. 1998; Bhaskar
1978; Danermark et al. 2002; Houston 2001; Mingers 2004a; Patomaki and Wight 2000; Sayer 2000).
Critical realism does not recognize the existence of some absolute truth or reality to which an object or
account can be compared (Maxwell 1992). Critical realism is an ideal paradigm for mixed methods
research because it accepts the existence of different types of objects of knowledge—namely, physical,
social, and conceptual—that have different ontological and epistemological characteristics and meaning.
Therefore, it allows researchers to employ a combination of different research methods in a research inquiry to develop
multi-faceted insights on different objects of research that have different characteristics and meaning.
We suggest that the paradigm should not be an obstacle to conducting mixed methods research in
IS. Our view is that in order to find plausible and theoretically sound answers to a research question and to
develop substantive theory for various phenomena related to IS, IS researchers should be able to mix and
match their paradigmatic views and still conduct rigorous mixed methods research. The three paradigmatic
choices that we describe here will help IS researchers justify their paradigmatic (e.g., epistemological and
ontological) positions. While we do not suggest superiority of any particular paradigm of mixed methods
research, we note that critical realism has gained much attention in the IS literature recently (Mingers
2004a, 2004b, 2004c). Drawing on Mingers, we suggest that critical realism is a particularly suitable
paradigmatic choice for mixed methods IS research because of the dynamic nature and contextual richness
of the IS discipline (e.g., different types of objects of knowledge—physical, social, and conceptual) that can
be adequately examined and theorized using a variety of methods in the same research study.
As noted earlier, mixed methods scholars have suggested several design strategies. Two of the
most widely used mixed methods research designs are: (a) concurrent; and (b) sequential (Creswell et al.
2003). In a concurrent design, quantitative and qualitative data are collected and analyzed in parallel and
then merged together for a complete understanding of a phenomenon or to compare individual results. In
contrast, in a sequential mixed methods design, quantitative and qualitative data collection and analyses are implemented in distinct phases, and the findings from each phase are integrated subsequently. While both design options
have advantages and disadvantages, we suggest that IS scholars should develop a design strategy in
keeping with their research questions and objectives. If the broad goal of an IS research inquiry is to develop a complete and holistic picture of a phenomenon by merging complementary quantitative and qualitative findings, a concurrent mixed methods design approach should be employed. In contrast, if researchers expect that
findings from a qualitative (or a quantitative) study will theoretically and/or empirically inform a later
quantitative (or a qualitative) study, a sequential approach should be taken. For example, in the context of
job characteristics, a concurrent mixed methods design is perhaps appropriate because researchers will
not be able to capture the immediate impacts of an IS on employees’ jobs in a sequential design. Also, if
the goal of a research inquiry is to study changes in employees’ perceptions during an IS implementation
(e.g., Boudreau and Robey 2005; Compeau and Higgins 1995; Morris and Venkatesh 2010), a concurrent
approach would help to capture changes over time, both quantitatively and qualitatively. A concurrent
approach is preferred due to the nature of the changes being studied and the potential impact of time on
the changes. A sequential approach could make it difficult to discern, for example, whether the changes
that are identified are associated with the timing of the change or with the method of data collection.
If, however, the objective of a research effort is to understand employees’ reactions toward a new
type of IS and the researcher expects to develop a set of new factors, he or she can take a sequential
approach in which a core set of factors related to employees’ reactions is developed from interviews and
then a theory leveraging these factors is developed. The researcher could then conduct a quantitative study
among a larger sample of employees to garner further empirical support for the new theory. Unlike the
concurrent approach, the sequential approach requires IS researchers to think carefully about whether a
qualitative or a quantitative study should be conducted first. Our suggestion is that if IS researchers plan to
conduct a study for which a strong theoretical foundation already exists, but the context of the research is
novel or previous findings were fragmented and/or inconclusive, they may consider conducting a
quantitative study first followed by a qualitative study to offer additional insights based on the context-
specific findings or reasons for fragmented and/or inconclusive results in previous studies. In contrast, if
there is no strong theoretical foundation for a research inquiry, we suggest that IS researchers conduct a
qualitative study first to inductively develop a theoretical perspective (e.g., constructs and relationships)
followed by a quantitative study to validate this theory. Regardless of the approach taken, the goal of a
sequential research design is to leverage the findings from the first study to inform the second study and then to integrate the findings from both studies.
Data analysis in mixed methods research should be done rigorously following the standards that
are generally acceptable in quantitative and qualitative research. In our review of mixed methods research
in IS, we found that there is typically a dominant study in mixed methods papers (see Table 2). The
dominant study is usually characterized by rigorous data collection and analysis, while the non-dominant
study is often presented in a manner that appears less rigorous with respect to data collection and/or
analysis. For instance, in Pavlou and Fygenson (2006), the quantitative study was the dominant component
of mixed methods research. The authors did not provide many details about their data collection and
analysis for the non-dominant qualitative study. In general, if the objective of a mixed methods research
study is to generate a set of factors from a qualitative study and then test these factors in a quantitative
study, we observed a tendency to conduct the data analysis in the qualitative study without the rigor that would typically be expected in a standalone qualitative study.
Similarly, we noticed that if a qualitative study is the main thrust of a mixed methods research
study, the quantitative analysis is presented with less detail than would typically be expected in a
quantitative study. While dominance of one particular study in mixed methods research is sometimes desirable due to the nature of the research inquiry, neither of the situations that we just discussed is appropriate for
or desirable in mixed methods research. We urge IS researchers to develop a strategy for mixed methods
data analysis in which both quantitative and qualitative data are analyzed rigorously so that useful and
credible inferences can be made from these individual analyses. More importantly, the quality of inferences
from qualitative and quantitative studies contributes greatly to the process of developing high quality meta-
inferences, which we discuss in greater detail in the next point. Given that the actual process of analyzing qualitative and quantitative data in IS depends on the research questions, model, and contexts, a detailed discussion of specific data analysis techniques is beyond the scope of this paper.
Development of Meta-inferences
We define meta-inferences as theoretical statements, narratives, or a story inferred from an
integration of findings from quantitative and qualitative strands of mixed methods research. Our review of IS
research employing a mixed methods approach revealed that, in many cases, researchers did not offer
meta-inferences (see Table 2). They kept the findings from the qualitative and quantitative studies separate
and did not offer a holistic explanation of the phenomenon of interest by combining findings from both
qualitative and quantitative studies. We suggest that drawing meta-inferences is a critical and essential
aspect of mixed methods research, and that IS researchers, editors, and reviewers need to be aware of the
importance of meta-inferences in mixed methods research. In fact, if researchers fail to provide and explain
meta-inferences, the very objective of conducting a mixed methods research study is not achieved.
Development of high quality meta-inferences largely depends on the quality of the data analysis in the
qualitative and quantitative studies of mixed methods research. While we do not intend to provide specific
guidelines regarding the length and structure of how meta-inferences should be written in a paper, we
suggest that the length and structure will depend on the context and insights gained from each strand—i.e.,
quantitative or qualitative study—of mixed methods research. For instance, Ang and Slaughter (2001)
updated their research model based on the findings from a mixed methods study and proposed a
substantive theory of IS professionals’ job characteristics and job outcomes integrating the findings from
quantitative and qualitative studies. In contrast, Ramiller and Swanson (2003) provided brief theoretical statements integrating the findings from their qualitative and quantitative analyses.
Given that meta-inferences are essentially theoretical statements about a phenomenon, the process of developing them is conceptually similar to the process of theory development from observation—in this case, the observations
are the findings from the qualitative and quantitative analyses. The core process of developing meta-
inferences is essentially an inductive one (e.g., moving from specific observations to broader
generalizations and theories). However, this process can be a part of a research inquiry that is either
inductive or deductive. Locke (2007) provided a detailed discussion of inductive theory building and called
for journal editors to make changes in editorial policies to encourage articles that develop theories
inductively. We suggest that Locke’s (2007) guidelines for developing theories inductively are pertinent to
the process of developing meta-inferences. In particular, he suggested that researchers should first
develop a substantial body of observations (or data) to be able to formulate valid concepts that are
fundamental building blocks of a theory. According to Locke (2007), researchers then need to look for
evidence of causality and identify causal mechanisms. Given that researchers conducting mixed methods
research analyze both qualitative and quantitative data, they are in a position to develop a substantial and
authoritative body of observations that can be used to formulate a unified body of valid concepts and causal relationships.
Once researchers have valid inferences from qualitative and quantitative studies separately, we
suggest they develop a meta-inference analysis path—the route they will take to develop meta-inferences.
The analysis path could be one of the following, depending on the mixed methods design strategy: developing inferences from the quantitative strand first and interpreting the qualitative inferences in their light; developing inferences from the qualitative strand first and interpreting the quantitative inferences in their light; or developing inferences from both strands in parallel and then integrating them.
These paths suggest that meta-inferences can be developed irrespective of mixed methods design
strategies. The purpose of this path is to help researchers manage potential information overload. Once the
path is set, IS researchers can then take one of the following two approaches as they develop meta-
inferences: bracketing and bridging (Lewis and Grimes 1999). Bracketing is the process of incorporating a
diverse and/or opposing views of the phenomenon of interest. The goal of bracketing is to ensure that
researchers capture contradictions and/or oppositions from qualitative and quantitative findings and attempt
to theorize the nature and source of these contradictions and/or oppositions. This process is well suited for
concurrent mixed methods research, particularly when the quantitative and qualitative findings do not
agree. The concept of bracketing is consistent with the notion of exploration and exploitation of breakdowns
in which empirical findings cannot easily be explained by available theories (Alvesson and Kärreman 2007).
The process of breakdown can help researchers develop new understanding from mysteries/surprises in
findings. We suggest that when researchers encounter a breakdown in either qualitative or quantitative
strands in mixed methods research, they take this opportunity to solve the mystery in the findings by
developing meta-inferences. Bridging is the process of developing a consensus between qualitative and
quantitative findings. Bridging helps a researcher understand transitions and other boundary conditions
related to his or her research model and context. While bridging can be a valuable process for generating
meta-inferences from a concurrent design, we suggest that it is particularly suitable for sequential mixed methods designs in which the second study builds on the first to develop a consensus understanding of the phenomenon of interest. We suggest that IS researchers will be able to develop a theoretically plausible
integrative understanding from qualitative and quantitative studies through a process of induction that
incorporates different theory development processes, such as bracketing and bridging. This understanding will allow researchers to go beyond the findings from each study and develop an in-depth theoretical understanding that a single method could not provide.
Table 2 shows that a majority of mixed methods papers did not provide an explicit discussion of
validation related to the mixed methods design and findings. Further, while these papers discussed
validation of quantitative measures and results, a majority of them did not offer such a discussion for the
qualitative part of the study. This review naturally suggests that there is a need in the IS literature to
develop guidelines regarding validation of mixed methods research. These guidelines will help editors,
reviewers and readers to assess the quality and extent of rigor of mixed methods research. IS researchers
will be able to follow these guidelines when conducting and reporting mixed methods research. Building on
recent guidelines for mixed methods research (Creswell and Clark 2007; Onwuegbuzie and Johnson 2006;
Tashakkori and Teddlie 2008; Teddlie and Tashakkori 2003, 2009), we offer the following broad guidelines
for validation in mixed methods research in IS. Table 5 provides a summary of these guidelines.
IS researchers should discuss the validity of their design, analysis, and findings within the context
of both quantitative and qualitative research. In other words, researchers should discuss validation in
quantitative research and qualitative research independently before discussing validation for the mixed
methods meta-inferences. As suggested by Lee and Hubona (2009), IS researchers should attempt to
validate the formation (i.e., formative validity) and the testing (i.e., summative validity) of their theoretical
propositions in both quantitative and qualitative studies that are conducted as part of the mixed methods
design. Lee and Hubona (2009) offered detailed guidelines on how researchers can establish formative and
summative validity for both quantitative and qualitative research, and the interested reader is referred to
their extensive discussion. Given that quantitative research has a long tradition of assessing and reporting validity, the established types of quantitative validation—i.e., design validity, measurement validity, and inferential validity (see Table 4)—should not be avoided in mixed methods research.
As noted earlier, unlike quantitative methods, qualitative methods do not offer generally accepted
validation guidelines. Our view is that while a majority of IS qualitative research takes an implicit approach
to validation by providing rich and immersive discussions of research contexts, data collection processes
and data analysis approaches, there is still a need for considering how these discussions address the three
major groups of qualitative validation presented in Table 4—i.e., design validity, analytical validity, and
inferential validity. While the choice of a specific validation type within each category remains a decision of
the researcher, we believe that an explicit, albeit short, discussion of validation in qualitative research will
help not only develop a healthy tradition of qualitative research in IS, but also create a bridge between the qualitative and quantitative research traditions in the field.
We suggest that after discussing validation in both qualitative and quantitative strands, IS
researchers need to explicitly discuss validation for the mixed methods part of their research. In particular,
they need to provide a rigorous assessment of validation of the meta-inferences derived from mixed
methods research. We discuss this further below. We urge that, while evaluating theoretical contributions of
mixed methods research, editors, reviewers and readers of IS research need to assess the quality and rigor
of the validation aspects of all three components of mixed methods research—i.e., qualitative, quantitative,
and mixed methods meta-inferences. In the next section, we offer an integrated framework for assessing the quality of such meta-inferences.
When it comes to validation, we suggest that IS researchers use mixed methods research
nomenclature that has been proposed recently in order to avoid conceptual confusion related to validation
in a mixed methods approach, and qualitative and quantitative research (Teddlie and Tashakkori 2003,
2009). We suggest that when IS researchers discuss validation in quantitative and qualitative research,
they should use the well-accepted nomenclature within quantitative or qualitative research paradigms in IS.
However, when discussing validation in mixed methods research, the nomenclature developed by Teddlie
and Tashakkori (2003, 2009) can help differentiate mixed methods validation from quantitative or qualitative
validation. If the use of mixed methods research nomenclature becomes a norm in the IS literature, it will
help editors, reviewers and readers better understand the discussion of mixed methods research validation.
Validation in mixed methods research is essentially assessing the quality of findings and/or
inference from all of the data (both quantitative and qualitative) in the research inquiry (Teddlie and
Tashakkori 2003, 2009). In other words, inference quality has to be assessed on the overall findings from
mixed methods research—i.e., meta-inferences. We suggest that, while IS researchers need to establish
validity of qualitative and quantitative strands of mixed method research, they also need to provide an
explicit discussion and assessment of how they have integrated findings—i.e., meta-inferences—from both
qualitative and quantitative studies and the quality of this integration—i.e., inference quality. This discussion
will help editors, reviewers, and readers understand whether meta-inferences are consistent with the findings from both the qualitative and quantitative strands.
Consistent with Creswell and Clark (2007), we suggest that IS researchers discuss validation from
the standpoint of the overall mixed methods design chosen for a research inquiry. Creswell and Clark
proposed that the discussion of validation should be different for concurrent designs as opposed to
sequential designs because researchers may employ different approaches to develop meta-inferences in
these designs. For example, in a concurrent design, researchers tend to merge qualitative and quantitative
data by transforming one type of data to make qualitative and quantitative data comparable (Creswell and
Clark 2007). While some researchers may choose not to transform data as such, the process of merging in a concurrent design must be executed carefully to achieve adequate inference quality. In the case of sequential design, we suggest that IS researchers
discuss validation in keeping with whether they conducted the qualitative study first or the quantitative study
first. Meta-inferences and associated discussions of inference quality will be different in both designs
because the process of developing meta-inferences was essentially different. Research goals and intended
contributions are also different in these two design approaches. We urge editors, reviewers, and readers to
assess the quality of meta-inferences from the standpoint of the overall mixed methods design.
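To make this merging step concrete, the following sketch shows one common transformation in concurrent designs: converting ("quantitizing") coded interview themes into binary indicators so that they can be placed alongside survey scores for the same participants. This is a minimal illustration in Python; the participants, themes, and variable names are entirely hypothetical and do not come from any study discussed here.

```python
# A minimal sketch of "quantitizing" qualitative codes so they can be
# merged with survey data in a concurrent design. All participants,
# themes, and variable names below are hypothetical.
import pandas as pd

# Interview transcripts coded into themes, one row per participant.
qual = pd.DataFrame({
    "participant": [1, 2, 3],
    "themes": [["usefulness", "time_pressure"], ["usability"], ["usefulness"]],
})

# Survey responses for the same participants (7-point scales).
quant = pd.DataFrame({
    "participant": [1, 2, 3],
    "perceived_usefulness": [6, 3, 5],
    "continuance_intention": [5, 2, 6],
})

# Transform the coded themes into one binary indicator column per theme.
indicators = (
    qual.explode("themes")
        .assign(present=1)
        .pivot_table(index="participant", columns="themes",
                     values="present", fill_value=0)
        .reset_index()
)

# Merge the transformed qualitative data with the quantitative data so
# the two strands can be compared participant by participant.
merged = quant.merge(indicators, on="participant")
print(merged)
```

The point of the transformation is simply to bring both strands into a common, comparable form; whether and how to transform remains a design decision, as noted above.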
Finally, we suggest that IS researchers discuss the potential threats to validity that may arise
during data collection and analysis. This discussion should be provided for both qualitative and quantitative
strands of mixed methods research. IS researchers should also discuss what actions they took to
overcome or minimize these threats. The types of threats may vary among different types of mixed
methods research designs. Regardless, it is important to discuss them in order to enhance the overall credibility of the meta-inferences.
Building on the recent literature on mixed methods research, we present an integrative framework
for assessing inference quality in mixed methods research in IS (see Table 6). The integrative framework
provides definitions and examples of a set of quality criteria that mixed methods research needs to have to
facilitate accurate and meaningful inferences. In addition to the framework, we present a diagram showing
the process of conducting mixed methods research and assessing inference quality (see Figure 1). We
expect that the framework and the process diagram presented in this essay will help IS researchers
conduct high quality mixed methods research and apply appropriate validation principles.
The integrative framework has three key characteristics. First, it offers a rigorous set of criteria for
assessing the inference quality of mixed methods research. We suggest that conducting high quality
quantitative and qualitative studies in mixed methods research does not necessarily guarantee high
inference quality of mixed methods research. Therefore, IS researchers need to focus on how they
leverage inferences from quantitative and qualitative studies to generate meta-inferences. Second, the
integrative framework focuses primarily on the integration aspects of mixed methods research that are
often overlooked in much mixed methods research in IS. As noted earlier, the fundamental goal of mixed
methods research is to integrate inferences from quantitative and qualitative studies. This integration can
be done through the processes of comparing, contrasting, infusing, linking, and blending (Bryman 2007). Our framework
offers practical guidelines for the integration of qualitative and quantitative findings. Finally, the integrative
framework does not go beyond what we currently know about validation quality in quantitative and
qualitative research. The framework essentially suggests that while quantitative and qualitative studies
have their own validation principles (that should be applied during a mixed method research study), the
focus should be on the quality of integrative inferences or meta-inferences that provide holistic insights into the phenomenon of interest.
The integrative framework presented in Table 6 incorporates two aspects of inference quality:
design quality and explanation quality. Our view of design quality is consistent with Teddlie and Tashakkori
(2003, 2009) in that we suggest IS researchers need to rigorously develop a design strategy for mixed
methods research. We go beyond their guidelines by suggesting that for both quantitative and qualitative
strands, IS researchers need to think about design and analytic adequacies. In particular, we suggest that
IS researchers need to ensure that both qualitative and quantitative studies are designed and executed
rigorously following the norms and expectations in the IS literature. Our view of explanation quality is
different from Teddlie and Tashakkori’s (2003, 2009) interpretive rigor in that we suggest IS researchers
should follow the generally accepted validation principles for quantitative and qualitative studies. In
addition, IS researchers need to develop a rigorous strategy for the integration of findings and inferences
from quantitative and qualitative studies so that they can offer accurate and useful meta-inferences with a high level of explanation quality.
The key elements of our framework are the three validation criteria for meta-inferences from mixed methods research: integrative efficacy (i.e., inferences are effectively integrated into a theoretically consistent meta-inference); integrative correspondence (i.e., meta-inferences satisfy the initial purpose of conducting a mixed methods research study); and inference transferability (i.e., meta-inferences are generalizable to other contexts and settings). Integrative efficacy does not necessarily mean that findings
from qualitative and quantitative studies will have to produce a single understanding of the phenomenon of
interest (Tashakkori and Teddlie 2008; Teddlie and Tashakkori 2009). Instead, it refers to the quality of
comparison, contrast, infusion, linkage, and blending of findings from both strands of mixed methods
research (Bryman 2007). Integrative correspondence is important to ensure that researchers employ a
mixed methods research approach in keeping with an overarching research objective. In other words, if
quantitative and qualitative studies are conducted to achieve different research objectives, it will be difficult
to justify that the mixed methods approach has a high degree of integrative correspondence, even if the
studies were conducted within the same research project. Finally, we suggest that IS researchers discuss
boundary conditions (e.g., contexts) of meta-inferences from mixed methods research to delineate the settings to which these inferences are transferable.
Figure 1 provides an overview of the process for conducting mixed methods research and
assessing inference quality. This process is consistent with our fundamental position on mixed methods
research that IS researchers need to ensure high quality design and inferences for both
qualitative and quantitative strands of mixed methods research. Once these criteria are met, IS researchers
can move to the integrative inference and/or meta-inference stage and assess whether their meta-
inferences meet the criteria mentioned in the integrative framework, i.e., integrative efficacy, integrative
correspondence, and inference transferability. According to Figure 1, if these inference quality criteria are
met, IS researchers can feel confident about the overall inference quality of mixed methods research and
be able to report findings from mixed methods research. Thus, the process diagram will help IS researchers
combine the general guidelines for conducting mixed methods research (Table 5) and the integrative
framework for assessing inference quality in mixed methods research (Table 6) by highlighting specific
decision points in which researchers have to compare the quality and rigor of their work to the guidelines.
To illustrate the applicability of these guidelines, we now discuss two mixed methods papers from the IS literature. The first paper, by Bhattacherjee and Premkumar (2004), studied changes in users' beliefs
about usefulness and attitudes toward IS use. The second paper, by Piccoli and Ives (2003), examined the
role of behavioral control on trust decline in virtual teams. Both papers were published in MIS Quarterly. It
is important to note that the purpose of this discussion is not to critique the application of the mixed
methods approach in these papers. Instead, our goal is to demonstrate how our guidelines can be used to
understand and apply the process of conducting and validating mixed methods research in IS. Further, we
expect that this discussion will help to demonstrate the value of meta-inferences and inference quality.
Bhattacherjee and Premkumar (2004), henceforth B&P2004, studied one of the most enduring
questions in IS research: why do individuals use an IS? While much prior research employing the TAM (see
Venkatesh et al. 2003 for a review) and other user acceptance models has provided rich insights to
answer this question, B&P2004 offered an alternative conjecture. They hypothesized that a change in
users’ beliefs and attitudes toward an IS over time could explain why and how users form intentions to
continue using an IS. They postulated a two-stage model of cognition change in which a pre-usage belief
(i.e., perceived usefulness of an IS) and attitude toward an IS influence the usage stage belief and attitude
respectively. Pre-usage perceived usefulness was also theorized to influence usage stage disconfirmation
and satisfaction. Finally, the usage-stage perceived usefulness and attitude were expected to influence
users’ continuance intention. B&P2004 conducted two longitudinal studies to test their research model.
They used a survey methodology to collect quantitative data and open-ended interview questions to collect
qualitative data. The model was tested using a quantitative approach in which constructs were measured
using pre-validated items and a statistical technique (i.e., partial least squares; PLS) was used to analyze
the quantitative data. Qualitative data were content analyzed to create general themes representing the salient beliefs underlying users' IT usage.
In this section, we discuss the selection and application of mixed methods by B&P2004 using the
general guidelines of mixed methods research that we presented earlier (see Table 5). First, while
B&P2004 did not offer an explicit discussion of the appropriateness of a mixed methods approach, they did
mention that their sole purpose for conducting qualitative analysis was to triangulate and validate their
quantitative results. We suggest that a clear depiction of the purpose of employing a mixed methods
approach is critical to demonstrate the appropriateness of conducting mixed methods research. Given that
B&P2004 were interested in providing a novel theoretical perspective in the context of IS adoption and use
and they were conducting two different longitudinal studies, we believe that they were expecting
unanticipated results from the quantitative analysis. Therefore, they were likely interested in using the
qualitative analysis to validate the findings from the quantitative analysis. Overall, we believe that they
satisfied two purposes of conducting mixed methods research from Table 1—i.e., corroboration/
confirmation and expansion. While corroboration/confirmation was the explicit purpose mentioned by
B&P2004 because they wanted to use the qualitative study to garner additional credibility of their
quantitative findings, expansion was an implicit purpose for them because they wanted to gain additional
insights into the nature and causes of hypothesized relationships in the research model.
Second, B&P2004 adopted a concurrent mixed methods design strategy in which qualitative and
quantitative data were collected simultaneously. Given that this was a longitudinal study to understand
change in users’ beliefs about usefulness and attitudes toward IS use over time, it was critical that both
qualitative and quantitative data were collected at the same time so that change could be measured and
interpreted accurately using both types of data. Further, given that the purpose of the qualitative analysis
was to validate the findings from the quantitative analysis, if quantitative and qualitative data were collected
sequentially, it would have been difficult to achieve this objective because users' perceptions of the IS might have changed by the time the qualitative data were collected. B&P2004 developed a convincing strategy
to analyze both quantitative and qualitative data. Quantitative data were analyzed using well-established
statistical tools. Quantitative validation was assessed rigorously. Qualitative data were analyzed using a
content analysis approach performed by three independent judges who were not aware of the objective of
this study. Overall, the strategy for analyzing mixed methods data was well-executed in this paper.
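As an illustration of how agreement among multiple independent judges can be quantified, the sketch below computes Fleiss' kappa for three hypothetical judges assigning open-ended responses to coding categories. This is only one possible agreement statistic; we do not suggest it is the one B&P2004 used, and the ratings shown are invented for illustration.

```python
# A minimal sketch of assessing agreement among three independent judges
# who each assigned open-ended responses to one coding category.
# The ratings and category labels are hypothetical.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = responses, columns = judges; cell values are assigned categories
# (0 = usefulness, 1 = usability, 2 = compatibility).
ratings = np.array([
    [0, 0, 0],
    [1, 1, 2],
    [0, 0, 1],
    [2, 2, 2],
    [1, 1, 1],
])

# Convert judge-level codes into response-by-category counts, then
# compute Fleiss' kappa across the three judges.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```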
Finally, B&P2004 integrated the findings from quantitative and qualitative analysis and offered
insightful meta-inferences. Consistent with our guidelines, B&P2004 compared and merged findings from
both qualitative and quantitative studies to develop meta-inferences. For example, they noted (p. 247):
“These responses validated our choice of usefulness as the most salient belief driving IT usage behaviors
and the core belief of interest to this study. However, other beliefs, such as usability (e.g., “The program
takes too long to load”), lack of time (e.g., “It is helpful, but I worry that I will not have the time to use it”), and
compatibility (e.g., “The software is extremely negative because I don’t want to be taught by a computer”),
also influenced subjects’ CBT usage intentions, albeit to a lesser extent, and may have contributed to some
of the unexplained variance in our PLS models. Subject responses corroborated the central role of
disconfirmation in influencing later-stage usefulness perceptions and intentions...”
While we suggest that B&P2004 could have elaborated on the meta-inferences, particularly in light
of the three research questions they mentioned in the introduction, we acknowledge their rigorous data
analysis approach for the quantitative and qualitative strands of the mixed methods approach and
discussion of meta-inferences to integrate the findings from both strands. This discussion clearly suggests
that our general guidelines for conducting mixed methods research can be useful to understand how IS
researchers make and execute important decisions related to the appropriateness of mixed methods
research, selection of mixed methods research design, data analysis strategies, and presentation of meta-inferences.
When we assess the paper in light of the integrative framework for mixed methods inference
quality, we see that the paper had a high inference quality. The paper has substantial design quality (see
Table 6) because the authors selected appropriate and rigorous design and analytic approaches for both
quantitative and qualitative studies. For example, the authors reported reliability and validity of measures in
the quantitative analysis. While not the only way to assess reliability, they discussed inter-rater reliability
related to their qualitative data analysis. While the authors did not explicate it in the paper, the use of
independent judges and a theoretically developed classification scheme for coding purposes helped ensure
theoretical validity and plausibility of the qualitative findings. With respect to explanation quality, we observe
that the quality of quantitative and qualitative inferences was high. However, there was no explicit
discussion regarding the validity of qualitative inferences, such as credibility, confirmability and
transferability. While we suggest that an explicit discussion of validity is helpful, we believe that the
discussion of the data collection and analysis in B&P2004 provides adequate evidence of credibility, confirmability, and transferability.
When we examine the quality of integrative and/or meta-inferences, we clearly see an effort to
ensure a high degree of integrative efficacy and correspondence. In other words, the authors were able to
integrate their findings from the quantitative and qualitative analyses into a theoretically consistent meta-
inference. The meta-inference was also consistent with the proposed research model and relationships.
Although the authors did not explicitly discuss the transferability of their meta-inferences, they acknowledged it
as a limitation of the study. Based on the validation guidelines, we suggest that this paper has high
inference quality. Overall, while we believe that the B&P2004 paper could have offered a richer theoretical discussion of its meta-inferences, it serves as a strong exemplar of mixed methods research in IS.
The Piccoli and Ives (2003), henceforth P&I2003, paper is different from the B&P2004 paper in that
the purpose of the mixed methods approach in this paper is completeness or expansion as opposed to
corroboration/confirmation (see Table 1). P&I2003 conducted a longitudinal study of virtual teams to
understand the impact of behavioral control on trust. They found that behavioral control had a negative
influence on trust. In particular, they found that a high degree of behavioral control led to declining trust in
virtual teams. They employed a concurrent mixed methods approach in which trust (i.e., dependent
variable) was measured using a quantitative approach and various aspects of behavioral control (i.e., independent variables) were assessed using a qualitative approach.
As with B&P2004, we found that the appropriateness of mixed methods research was not clearly described in the P&I2003 paper. While the authors mentioned that the use of a mixed methods approach
would minimize the threat of mono-method variance, our guidelines suggest that the appropriateness of
mixed methods research should primarily be driven by the research questions, objectives and contexts.
This aspect of our guidelines was not clearly followed in this paper. However, the other aspects of our
general guidelines, such as selection of the mixed methods research design and data analysis approach
and presentation of meta-inferences, were clearly incorporated. The authors provided a rigorous discussion
of how they developed and executed the research design. While they discussed the generally accepted
quantitative validation principles, they took an implicit approach in terms of addressing the issues related to
qualitative validity. They provided rich and immersive descriptions of their data collection and analysis.
In terms of the data analysis, the quantitative data were analyzed using appropriate statistical
techniques (e.g., t-tests and ANCOVA), and the qualitative data were analyzed using a coding and data
reduction approach. Given that the dependent variable was measured using a quantitative approach and
independent variables were assessed using a qualitative approach, we suggest that the authors did not
have to offer a separate discussion of meta-inferences because the results section provides a substantial
discussion of meta-inferences. By triangulating quantitative and qualitative results, the authors offered rich
insights on the process by which behavioral control has a negative influence on trust in virtual teams. An excerpt from their discussion illustrates these meta-inferences:
“In summary, behavior control mechanisms do appear to increase team members’ vigilance and the
salience of reneging and incongruence incidents the team experiences during the project. In so doing, they
increase the likelihood that these incidents will be detected and lead to trust decline. Conversely, in teams
that experience no incidents, or that only experience some early incidents, behavior control has no
detectable effect on trust.”
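To make the quantitative side of such an analysis concrete, the sketch below shows the general form of a t-test and an ANCOVA of the kind mentioned above, comparing later-stage trust across behavioral control conditions while adjusting for initial trust. The team data and variable names are hypothetical and are not P&I2003's data.

```python
# A minimal sketch of a t-test and an ANCOVA comparing trust across
# behavioral control conditions. All data and variable names are
# hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

teams = pd.DataFrame({
    "trust_t2": [4.1, 3.2, 2.8, 4.5, 3.0, 4.3, 2.5, 3.9],   # later-stage trust
    "trust_t1": [4.0, 3.8, 3.5, 4.4, 3.6, 4.2, 3.4, 4.1],   # initial trust
    "control":  ["high", "high", "high", "low",
                 "high", "low", "high", "low"],              # control condition
})

# t-test: does later-stage trust differ between control conditions?
high = teams.loc[teams["control"] == "high", "trust_t2"]
low = teams.loc[teams["control"] == "low", "trust_t2"]
print(stats.ttest_ind(high, low))

# ANCOVA: the same comparison, adjusting for initial trust as a covariate.
model = smf.ols("trust_t2 ~ C(control) + trust_t1", data=teams).fit()
print(model.summary())
```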
Consistent with our integrative framework of inference quality, the authors did provide a discussion
of design quality. In particular, they discussed the data collection procedure and analysis approach for both
qualitative and quantitative strands of a mixed methods approach. The authors discussed the design
adequacy of the quantitative data (e.g., reliability, validity). However, they did not provide any explicit
discussion of design adequacy for the qualitative data (e.g., credibility). They did provide rich descriptions
of their data collection and analysis strategies. In fact, their data analysis discussion indicated a great deal
of rigor and legitimacy. Similarly, P&I2003 did not provide an explicit discussion of the explanation quality of
qualitative data (e.g., confirmability, transferability). While we note that P&I2003 provided a rich description
of their context, data collection process and data analysis approach indicating that there is certainly a high
degree of credibility, confirmability and transferability of their findings, we suggest that an explicit discussion
of how different aspects of their data collection and analysis process addressed these types of validity would have strengthened the paper.
Nonetheless, we found that the quality of the meta-inferences was high
because the authors were able to effectively integrate the findings from qualitative and quantitative data to
demonstrate a high quality of integrative efficacy. With respect to integrative correspondence, it is clear that
the authors were able to achieve the objective of mixed methods research that they articulated at the outset
of the paper. By measuring dependent and independent variables separately, the authors were able to
minimize the threat of mono-method variance. However, as we noted earlier, the objective of employing a
mixed methods approach (i.e., to minimize the threat of mono-method variance) was not clearly aligned
with the overall research objective and context of this paper (i.e., to understand the impact of behavioral
control on trust), thus limiting our ability to assess the value of mixed methods research in this context.
Nevertheless, we suggest that P&I2003 is an exemplar of a well-conducted mixed methods research study in IS.
DISCUSSION
Our primary goal in this paper is to facilitate discourse on mixed methods research in IS, with a
particular focus on encouraging and assisting IS researchers to conduct high quality, rigorous mixed
methods research to advance the IS discipline. We are sensitive to the issue that a paper such as this can
be misinterpreted in at least two ways. First, it could be viewed that mixed methods research is now an
imperative for publication in journals, such as MIS Quarterly. Second, these guidelines could be seen as
legislative. In this section, in addition to reiterating that neither of these viewpoints represents our intention
or perspective, we discuss contributions and implications of this work. While a mixed methods approach
clearly has certain advantages over a mono-method approach, it is not a silver bullet for the problems
associated with any single method. There are also a few limitations with the mixed methods guidelines
proposed here that must be acknowledged. One important limitation is that the typical amount of time and
effort involved in collecting, analyzing, and validating both quantitative and qualitative data is significantly greater than for work that employs only one method. Overall, while our guidelines have the potential to offer a
way to integrate the strengths of two data collection methods, it may not always be feasible or desirable to
do so. We urge IS researchers to carefully think about their research objectives, theoretical foundations,
and context before conducting mixed methods research. This paper serves as a call for further work to
examine the integration of quantitative and qualitative data collection methods within a single study.
Our key contributions are three-fold. Our first contribution is the delineation of an overview of mixed
methods research based on recent advances in this area. We reviewed six leading IS journals identified in
the Senior Scholars’ Basket of Journals (AIS 2007) to understand the state of mixed methods research in
IS. Our review suggests that there is a dearth of mixed methods research in IS and there are no standards
or guidelines for conducting and evaluating such research in IS. We also provided a set of general
guidelines for conducting mixed methods research in IS. We focused on three important areas in our guidelines: (a) appropriateness of a mixed methods approach; (b) development of meta-inferences—i.e., substantive theory—from mixed methods research; and (c) assessment of the quality of meta-inferences—i.e., validation of mixed methods research. We provided in-depth discussions of these three areas because
there has been limited discussion and understanding of when to conduct mixed methods research—i.e.,
appropriateness—how to discover and develop integrative findings from mixed methods research—i.e.,
meta-inferences—and how to assess the quality of meta-inferences—i.e., validation. This paper should
initiate scholarly discourse regarding these three areas to encourage IS researchers to engage in high quality mixed methods research.
Our second contribution is the emphasis on meta-inferences, which are essential components of mixed methods research. If researchers fail to develop meta-inferences from
mixed methods research, it is difficult to develop substantive theory or make theoretical contributions. If
researchers do not intend to develop meta-inferences and instead plan to publish mixed methods research
in multiple publications as single method articles, the very purpose of conducting mixed methods research
will not be achieved. The shortage of true mixed methods research programs seems to indicate that IS
researchers indeed publish single method articles from mixed methods research programs. While
researchers may do so to avoid paradigmatic, cultural, cognitive and physical challenges associated with
conducting mixed methods research and developing meta-inferences, we argued that such a practice will limit the theoretical contributions of mixed methods research.
Our third contribution is the development of an integrative framework for performing and assessing validation—i.e., quality—of mixed methods research in IS. While much progress has been made on mixed
methods research design and data analysis in other social sciences disciplines, there has not been much
discussion of validation (Teddlie and Tashakkori 2003). We developed these guidelines from the recent
work on mixed methods research and discussed them in the context of IS research. We expect that these
guidelines will be useful in conducting and evaluating mixed methods research in IS. Lee and Hubona
(2009) recently provided a valuable discussion of the importance of validation in quantitative and qualitative
IS research. This work augments their suggestions by offering and illustrating validation guidelines for mixed methods research in IS.
We believe that IS phenomena are socially constructed and not fully deterministic. Therefore, a
purely quantitative research approach may not always provide rich insights into IS phenomena. Similarly, a
purely qualitative approach may not provide findings that are robust and generalizable to other settings
because of the difficulty of collecting qualitative data from many different sources. Consequently, a mixed
methods approach provides an opportunity for IS researchers to be engaged in rich theory development
processes, such as bracketing, breakdown and bridging. We suggest that mixed methods research is an
appropriate methodological approach for IS research because of the opportunity to develop novel
theoretical perspectives. We call for going beyond the debates on the incompatibility of methodology and
paradigmatic incommensurability, and suggest that IS researchers take a more pragmatic approach. We
also call for conducting more mixed methods research in IS as it offers substantial benefits over and above
mono-method research by answering research questions that a single method cannot answer, providing
better (stronger) inferences and presenting a greater diversity of views. That being said, we suggest that IS
researchers do not need to conduct qualitative and quantitative studies to publish a single paper unless
there is clearly a need for doing so. Our view is that, on most occasions, the process of crafting a
manuscript is likely to be business as usual. A combination of both methods in one inquiry or paper is
another arrow in a researcher’s quiver for occasions when it is appropriate. Further, there may now be
occasions when well-designed and well-executed mixed methods studies can result in a third paper that integrates the two studies and presents the meta-inferences.
IS is a relatively new applied social science, with roots in multiple disciplines, such as quantitative
sciences (e.g., mathematics, statistics), computer science and engineering, organizational behavior and
social psychology. IS researchers have backgrounds in these disciplines, thus setting up an ideal situation
for conducting mixed methods research. A researcher who has a strong background in quantitative
sciences can collaborate with a qualitative researcher to investigate a phenomenon that is of interest to
both researchers. Thus, IS researchers will be able to complement each other and offer unique and holistic insights into IS phenomena.
Finally, with respect to evaluating the quality of meta-inferences, we suggest that the criteria we
discussed in this paper are no different from what is used to evaluate findings from qualitative and
quantitative studies. The key, in our opinion, is to develop insightful meta-inferences that, as we observed
in our review of prior research, are missing in many articles that employed a mixed methods approach.
Insights that help extend theory and practice will be important, as always. In order to encourage and
evaluate work resulting from mixed methods inquiries, journal editors should find a pool of reviewers who
can provide coverage of various methodological aspects. It is also important to instruct such reviewers to
focus only on their areas of expertise and suspend their biases about other methods. Ideally, one or both
reviewers can provide their expertise on the phenomenon and/or theory bases being used. Ultimately, more
so than for any other type of paper, the editor's role becomes important because biases of reviewers favoring one particular method may lead them to bury or dismiss the value of mixed methods research. We call for editors to buffer the authors from such biases and take risks when the value of the meta-inferences and the theoretical or practical insights outweighs minor methodological issues. Just as we recommended to the
editors to watch out for reviewers' biases, we encourage reviewers to suspend their biases about methods beyond their own expertise.
CONCLUSIONS
We set out to review the current state of mixed methods research in IS and provide guidelines to
conduct mixed methods research in IS, with a particular focus on three important aspects of conducting
mixed methods research: (a) appropriateness of a mixed methods approach in IS; (b) development of
meta-inferences or substantive theory from mixed methods research; and (c) assessment of the quality of
meta-inferences of mixed methods research. Considering the value of mixed methods research in
developing novel theoretical perspectives and advancing the field, we urge IS researchers to go beyond the
rhetorical debate related to the use of multiple methods and paradigmatic incommensurability and consider
undertaking mixed methods research if they feel that such an approach will help them find theoretically plausible answers to their research questions. We present a set of guidelines for conducting and assessing
mixed methods research in IS. We hope that this paper will be a launching pad for more mixed methods
research in IS and the guidelines presented here will help IS researchers conduct and evaluate high quality mixed methods research.
REFERENCES
AIS, Senior Scholars’ Basket of Journals, Association for Information Systems (AIS), 2007,
http://home.aisnet.org/displaycommon.cfm?an=1&subarticlenbr=346 (Accessed: April 20, 2009).
Alvesson, M., and Kärreman, D. “Constructing Mystery: Empirical Matters in Theory Development,”
Academy of Management Review (32:4), 2007, pp. 1265-1281.
Ang, S., and Slaughter, S.A. “Work Outcomes and Job Design for Contract versus Permanent Information
Systems Professionals on Software Development Teams,” MIS Quarterly (25:3), 2001, pp. 321-350.
Archer, M., Bhaskar, R., Collier, A., Lawson, T., and Norrie, A. Critical Realism: Essential Readings,
Routledge, London, 1998.
Bala, H., and Venkatesh, V. “Assimilation of Interorganizational Business Process Standards,” Information
Systems Research (18:3), 2007, pp. 340-362.
Beaudry, A., and Pinsonneault, A. “The Other Side of Acceptance: Studying the Direct and Indirect Effects
of Emotions on IT Use,” MIS Quarterly (34:4), 2010, pp. 689-710.
Beaudry, A., and Pinsonneault, A. “Understanding User Responses to Information Technology: A Coping
Model of User Adaptation,” MIS Quarterly (29:3), 2005, pp. 493-525.
Becerra-Fernandez, I., and Sabherwal, R. “Organization Knowledge Management: A Contingency
Perspective,” Journal of Management Information Systems (18:1), 2001, pp. 23-55.
Bhaskar, R. A Realist Theory of Science, Harvester, Hemel Hempstead, UK, 1978.
Bhattacherjee, A., and Premkumar, G. “Understanding Changes in Belief and Attitude Toward Information
Technology Usage: A Theoretical Model and Longitudinal Test,” MIS Quarterly (28:2), 2004, pp. 229-
254.
Blechar, J., Constantiou, I.D., and Damsgaard, J. “Exploring the Influence of Reference Situations and
Reference Pricing on Mobile Service User Behavior,” European Journal of Information Systems (15:3),
2006, pp. 285-291.
Boudreau, M.-C., Ariyachandra, T., Gefen, D., and Straub, D. “Validating IS Positivist Instrumentation:
1997-2001,” in The Handbook of Information Systems Research, M. E. Whitman and A.B. Woszczynski
(eds.), Idea Group Publishing, Hershey, Pennsylvania, 2004, pp. 15-26.
Boudreau, M.-C., and Robey, D. “Enacting Integrated Information Technology: A Human Agency
Perspective,” Organization Science (16:1), 2005, pp. 3-18.
Bowen, P. L., Heales, J., and Vongphakdi, M.T. “Reliability Factors in Business Software: Volatility,
Requirements and End-Users,” Information Systems Journal (12:3), 2002, pp. 185-213.
Brannen, J. “The Practice of a Mixed Method Research Strategy: Personal, Professional and Project Considerations,” in Advances in Mixed Methods Research, M.M. Bergman (ed.), Sage, London, 2008,
pp. 53-65.
Brown, S.A., and Venkatesh, V. “Model of Adoption of Technology in the Household: A Baseline Model
Test and Extension Incorporating Household Life Cycle,” MIS Quarterly (29:3), 2005, pp. 399-426.
Bryman, A. “Barriers to Integrating Quantitative and Qualitative Research,” Journal of Mixed Methods
Research (1:1), 2007, pp. 8-22.
Campbell, D.T., and Fiske, D.W. “Convergent and Discriminant Validation by the Multitrait-Multimethod
Matrix,” Psychological Bulletin (56:1), 1959, pp. 81-105.
Cao, J., Crews, J.M., Lin, M., Deokar, A.V., Burgoon, J.K., and Nunamaker Jr., J.F. “Interactions between
System Evaluation and Theory Testing: A Demonstration of the Power of a Multifaceted Approach to
Information Systems Research,” Journal of Management Information Systems (22:4), 2006, pp. 207-
235.
Chang, H.H. “Technical and Management Perceptions of Enterprise Information System Importance,
Implementation and Benefits,” Information Systems Journal (16:3), 2006, pp. 263-292.
Choi, H., Lee, M., Im, K.S., and Kim, J. “Contribution to Quality of Life: A New Outcome Variable for Mobile
Data Service,” Journal of the Association for Information Systems (8:12), 2007, pp. 598-618.
Compeau, D.R., and Higgins, C.A. “Computer Self-efficacy: Development of a Measure and Initial Test,”
MIS Quarterly (19:2), 1995, pp. 189-211.
Cook, T.D., and Campbell, D.T. Quasi-Experimentation: Design and Analysis Issues for Field Settings,
Houghton Mifflin Company, Boston, 1979.
Creswell, J.W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 2nd edition,
Sage, Thousand Oaks, CA, 2003.
Creswell, J.W., and Clark, V.L.P. Designing and Conducting Mixed Methods Research, Sage, Thousand
Oaks, CA, 2007.
Creswell, J.W., Tashakkori, A., Jensen, K.D., and Shapley, K.L. “Teaching Mixed Methods Research:
Practices, Dilemmas, and Challenges,” in Handbook of Mixed Methods in Social and Behavioral
Research, A. Tashakkori and C. Teddlie (eds.), Sage, Thousand Oaks, CA, 2003, pp. 91-110.
Danermark, B., Ekstrom, M., Jakobsen, L., and Karlsson, J. Explaining Society: Critical Realism in the
Social Sciences, Routledge, London, 2002.
Datta, L. “Paradigm Wars: A Basis for Peaceful Coexistence and Beyond,” in The Qualitative-Quantitative
Debate: New Perspectives, C.S. Reichardt and S.F. Rallis (eds.), Jossey-Bass Publishers, San
Francisco, 1994, pp. 53-70.
Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. “User Acceptance of Computer Technology: A Comparison
of Two Theoretical Models,” Management Science (35:8), 1989, pp. 982-1003.
Detlor, B. “Internet-Based Information Systems Use in Organizations: An Information Studies Perspective,”
Information Systems Journal (13:2), 2003, pp. 113-132.
Dennis, A.R., and Garfield, M.J. “The Adoption and Use of GSS in Project Teams: Toward More
Participative Processes and Outcomes,” MIS Quarterly (27:2), 2003, pp. 289-323.
Denzin, N.K. The Research Act, McGraw-Hill, New York, 1978.
Denzin, N.K., and Lincoln, Y.S. Handbook of Qualitative Research, Sage, Thousand Oaks, CA, 1994.
Dubé, L., and Paré, G. “Rigor in Information Systems Positivist Case Research: Current Practices, Trends,
and Recommendations,” MIS Quarterly (27:4), 2003, pp. 597-636.
Espinosa, J.A., Slaughter, S.A., Kraut, R.E., and Herbsleb, J.D. “Team Knowledge and Coordination in
Geographically Distributed Software Development,” Journal of Management Information Systems (24:1),
2007a, pp.135-169.
Espinosa, J.A., Slaughter, S.A., Kraut, R.E., and Herbsleb, J.D. “Familiarity, Complexity and Team
Performance in Geographically Distributed Software Development,” Organization Science (18:4), 2007b,
pp. 613-630.
Gallivan, M.J., and Keil, M. “The User-Developer Communication Process: A Critical Case Study,”
Information Systems Journal (13:1), 2003, pp. 37-68.
Gefen, D., Karahanna, E., and Straub, D.W. “Trust and TAM in Online Shopping: An Integrated Model,”
MIS Quarterly (27:1), 2003, pp. 51-90.
Gefen, D., Straub, D.W., and Boudreau, M.-C. “Structural Equation Modeling and Regression: Guidelines
for Research Practice,” Communications of AIS, (7:7), 2000, pp. 1-78.
Geissler, G., Zinkhan, G., and Watson, R.T. “Web Home Page Complexity and Communication
Effectiveness,” Journal of the Association for Information Systems (2:2), 2001, pp. 1-46.
Glaser, B.G., and Strauss, A.L. “Discovery of Substantive Theory: A Basic Strategy Underlying Qualitative
Research,” American Behavioral Scientist (8:6), 1965, pp. 5-12.
Greene, J.C. “Is Mixed Methods Social Inquiry a Distinctive Methodology?” Journal of Mixed Methods
Research (2:1), 2008, pp. 7-21.
Greene, J. C. Mixed Methods in Social Inquiry, Jossey-Bass, San Francisco, 2007.
Greene, J. C., and Caracelli, V. J. “Defining and Describing the Paradigm Issue in Mixed-Method
Evaluation,” In Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating
Diverse Paradigms: New Directions for Evaluation, J. C. Greene and V. J. Caracelli (eds.), Jossey-Bass,
San Francisco, 1997, pp. 5-17.
Greene, J.C., Caracelli, V.J., and Graham, W.F. “Toward a Conceptual Framework for Mixed-method
Evaluation Designs,” Educational Evaluation and Policy Analysis (11:3), 1989, pp. 255-274.
Grimsley, M., and Meehan, A. “E-Government Information Systems: Evaluation-led Design for Public Value
and Client Trust,” European Journal of Information Systems (16:2), 2007, pp. 134-148.
Guba, E.G. “What Have We Learned about Naturalistic Evaluation?” American Journal of Evaluation (8:1),
1987, pp. 23-43.
Guba, E.G., and Lincoln, Y.S. “Paradigmatic Controversies, Contradictions, and Emerging Confluences,” In
The Sage Handbook of Qualitative Research, 3rd edition, N.K. Denzin and Y.S. Lincoln (eds.), Sage,
Thousand Oaks, CA, 2005, pp. 191-215.
Hackman, J.R., and Oldham, G.R. Work Redesign, Addison-Wesley, Reading, MA, 1980.
Hackney, R.A., Jones, S., and Losch, A. “Towards an e-Government Efficiency Agenda: The Impact of
Information and Communication Behaviour on e-Reverse Auctions in Public Sector Procurement,”
European Journal of Information Systems (16:2), 2007, pp. 178-191.
Hatzakis, T., Lycett, M., Macredie, R.D., and Martin, V.A. “Towards the Development of a Social Capital
Approach to Evaluating Change Management Interventions,” European Journal of Information Systems
(14:1), 2005, pp. 60-74.
Hatzakis, T., Lycett, M., and Serrano, A. “A Programme Management Approach for Ensuring Curriculum
Coherence in IS (higher) Education,” European Journal of Information Systems (16:5), 2007, pp. 643-
657.
Hevner, A.R., March, S.T., Park, J., and Ram, S., “Design Science in Information Systems Research,” MIS
Quarterly (28:1), 2004, pp. 75-105.
Ho, V. T., Ang, S., and Straub, D.W. “When Subordinates Become IT Contractors: Persistent Managerial
Expectations in IT Outsourcing,” Information Systems Research (14:1), 2003, pp. 66-86.
House, E.R. “Integrating the Quantitative and Qualitative,” in The Qualitative-Quantitative Debate: New
Perspectives, C.S. Reichardt and S.F. Rallis (eds.), Jossey-Bass, San Francisco, 1994, pp. 13-22.
Houston, S. “Beyond Social Constructionism: Critical Realism and Social Work,” British Journal of Social
Work (31:6), 2001, pp. 845-861.
Howe, K.R. “Against the Quantitative-Qualitative Incompatibility Thesis or Dogmas Die Hard,” Educational
Researcher (17:1), 1988, pp. 10-16.
Jick, T.D. “Mixing Qualitative and Quantitative Methods: Triangulation in Action,” Administrative Science
Quarterly (24:4), 1979, pp. 602-611.
Johns, G. “The Essential Impact of Context on Organizational Behavior,” Academy of Management Review
(31:2), 2006, pp. 386-408.
Johnson, R.B., and Turner, L. “Data Collection Strategies in Mixed Methods Research,” in Handbook of
Mixed Methods in Social and Behavioral Research, A. Tashakkori and C. Teddlie (eds.), Sage,
Thousand Oaks, CA, 2003, pp. 297-320.
Kaplan, B., and Duchon, D. “Combining Qualitative and Quantitative Methods in Information Systems
Research: A Case Study,” MIS Quarterly (12:4), 1988, pp. 571-586.
Kayworth, T. R., and Leidner, D.E., “Leadership Effectiveness in Global Virtual Teams,” Journal of
Management Information Systems (18:3), 2002, pp. 7-40.
Keeney, R.L. “The Value of Internet Commerce to the Customer,” Management Science (45:4), 1999, pp.
533-542.
Keil, M., Im, G.P., and Mahring, M. “Reporting Bad News on Software Projects: The Effects of Culturally
Constituted Views of Face-saving,” Information Systems Journal (17:1), 2007, pp. 59-87.
Keil, M., and Tiwana, A. “Relative Importance of Evaluation Criteria for Enterprise Systems: A Conjoint
Study,” Information Systems Journal (16:3), 2006, pp. 237-262.
Keil, M., Tiwana, A., and Bush, A., “Reconciling User and Project Manager Perceptions of IT Project Risk:
A Delphi Study,” Information Systems Journal (12:2), 2002, pp. 103-119.
Kirk, J., and Miller, M.L. Reliability and Validity in Qualitative Research, Sage, Thousand Oaks, CA, 1986.
Klein, H.K., and Myers, M.D. “A Set of Principles for Conducting and Evaluating Interpretive Field Studies in
Information Systems,” MIS Quarterly (23:1), 1999, pp. 67-93.
Koh, C., Ang, S., and Straub, D.W. “IT Outsourcing Success: A Psychological Contract Perspective,”
Information Systems Research (15:4), 2004, pp. 356-373.
Koufaris, M. “Applying the Technology Acceptance Model and Flow Theory to Online Consumer Behavior,”
Information Systems Research (13:2), 2002, pp. 205-223.
Landry, M., and Banville, C. “A Disciplined Methodological Pluralism for MIS Research,” Accounting,
Management and Information Technologies (2:2), 1992, pp. 77-97.
Lapointe, L., and Rivard, S. “A Multilevel Model of Resistance to Information Technology Implementation,”
MIS Quarterly (29:3), 2005, pp. 461-491.
Lee, A.S. “Rigor and Relevance in MIS Research: Beyond the Approach of Positivism Alone,” MIS
Quarterly (23:1), 1999, pp. 29-34.
Lee, A.S. “A Scientific Methodology for MIS Case Studies,” MIS Quarterly (13:1), 1989, pp. 33-50.
Lee, A.S., and Baskerville, R. “Generalizing Generalizability in Information Systems Research,” Information
Systems Research (14:3), 2003, pp. 221-243.
Lee, A.S., and Hubona, G.S. “A Scientific Basis for Rigor in Information Systems Research,” MIS Quarterly
(33:2), 2009, pp. 237-262.
Lewis, M.W., and Grimes, A.J. “Metatriangulation: Building Theory from Multiple Paradigms,” Academy of
Management Review (24:4), 1999, pp. 672-690.
Lincoln, Y.S., and Guba, E.G. “Paradigmatic Controversies, Contradictions, and Emerging Confluences,” in
Handbook of Qualitative Research, N.K. Denzin and Y.S. Lincoln (eds.), Sage, Thousand Oaks, CA,
2000, pp. 163-188.
Lincoln, Y.S., and Guba, E.G. Naturalistic Inquiry, Sage, Thousand Oaks, CA, 1985.
Locke, E.A. “The Case for Inductive Theory Building,” Journal of Management (33:6), 2007, pp. 867-890.
Maxcy, S.J. “Pragmatic Threads in Mixed Methods Research in the Social Sciences: The Search for
Multiple Methods of Inquiry and the End of the Philosophy of Formalism,” in Handbook of Mixed
Methods in Social and Behavioral Research, A. Tashakkori and C. Teddlie (eds.), Sage, Thousand
Oaks, CA, 2003, pp. 51-89.
Maxwell, J.A. “Understanding and Validity in Qualitative Research,” Harvard Educational Review (62:3),
1992, pp. 279-300.
Mertens, D.M. Research and Evaluation in Education and Psychology: Integrating Diversity with
Quantitative, Qualitative, and Mixed Methods, Sage, Thousand Oaks, CA, 2005.
Mertens, D.M. “Mixed Methods and the Politics of Human Research: The Transformative-emancipatory
Perspective,” in Handbook of Mixed Methods in Social and Behavioral Research, A. Tashakkori and C.
Teddlie (eds.), Sage, Thousand Oaks, CA, 2003, pp. 135-164.
Mingers, J. “Critical Realism and Information Systems: Brief Responses to Monod and Klein,” Information
and Organization (14:2), 2004a, pp. 145-153.
Mingers, J. “Re-establishing the Real: Critical Realism and Information Systems Research,” in Social
Theory and Philosophy for Information Systems, J. Mingers and L. Willcocks (eds.), Wiley, London,
2004b, pp. 372-406.
Mingers, J. “Real-izing Information Systems: Critical Realism as an Underpinning Philosophy for
Information Systems,” Information and Organization (14:2), 2004c, pp. 87-103.
Mingers, J. “The Paucity of Multi-method Research: A Review of the Information Systems Literature,”
Information Systems Journal (13:3), 2003, pp. 233-249.
Mingers, J. “Combining IS Research Methods: Towards a Pluralist Methodology,” Information Systems
Research (12:3), 2001, pp. 240-259.
Mingers, J. “Multi-Paradigm Multimethodology,” in Multimethodology: Theory and Practice of Combining
Management Science Methodologies, J. Mingers and A. Gill (eds.), Wiley, Chichester, 1997, pp. 1-20.
Mingers, J., and Brocklesby, J. “Multimethodology: Towards a Framework for Mixing Methodologies,"
Omega (25:5), 1997, pp. 489-509.
Morris, M.G., and Venkatesh, V. “Job Characteristics and Job Satisfaction: Understanding the Role of
Enterprise Resource Planning System Implementation,” MIS Quarterly (34:1), 2010, pp. 143-161.
Morse, J.M. “Principles of Mixed Methods and Multimethod Research Design,” in Handbook of Mixed
Methods in Social and Behavioral Research, A. Tashakkori and C. Teddlie (eds.), Sage, Thousand
Oaks, CA, 2003, pp. 189-208.
Myers, M.D., and Avison, D. Qualitative Research in Information Systems, Sage, London, 2002.
Myers, M.D., and Klein, H.K. “A Set of Principles for Conducting and Evaluating Critical Research in
Information Systems,” MIS Quarterly (35:1), 2011, pp. 17-36.
Newman, M., and Westrup, C. “Making ERPs Work: Accountants and the Introduction of ERP Systems,”
European Journal of Information Systems (14:3), 2005, pp. 258-272.
Nunnally, J.C., and Bernstein, I.H. Psychometric Theory, McGraw-Hill, New York, 1994.
Onwuegbuzie, A. J., and Johnson, R. B. “The Validity Issue in Mixed Methods Research,” Research in the
Schools (13:1), 2006, pp. 48-63.
Oosterhout, M. van, Waarts, E., and Hillegersberg, J. “Change Factors Requiring Agility and Implications
for IT,” European Journal of Information Systems (15:2), 2006, pp. 132-145.
Orlikowski, W. J., and Baroudi, J.J. “Studying Information Technology in Organizations: Research
Approaches and Assumptions,” Information Systems Research (2:1), 1991, pp. 1-28.
Patomaki, H., and Wight, C. “After Postpositivism? The Promises of Critical Realism,” International Studies
Quarterly (44:2), 2000, pp. 213-237.
Patton, M.Q. Qualitative Evaluation and Research Methods, Sage, Thousand Oaks, CA, 2002.
Pavlou, P.A., and Dimoka, A. “The Nature and Role of Feedback Text Comments in Online Marketplaces:
Implications for Trust Building, Price Premiums, and Seller Differentiation,” Information Systems
Research (17:4), 2006, pp. 391-412.
Pavlou, P.A., and Fygenson, M. “Understanding and Predicting Electronic Commerce Adoption: An
Extension of the Theory of Planned Behavior,” MIS Quarterly (30:1), 2006, pp. 115-143.
Piccoli, G., and Ives, B. “Trust and the Unintended Effects of Behavior Control in Virtual Teams,” MIS
Quarterly (27:3), 2003, pp. 365-395.
Punch, K.F. Introduction to Social Research: Quantitative and Qualitative Approaches, Sage, Thousand
Oaks, CA, 1998.
Ramiller, N.C., and Swanson, E.B. “Organizing Visions for Information Technology and the Information
Systems Executive Response,” Journal of Management Information Systems (20:1), 2003, pp. 13-50.
Reichardt, C.S., and Rallis, S.F. “Qualitative and Quantitative Inquiries are not Incompatible: A Call for a
New Partnership,” in The Qualitative-Quantitative Debate: New Perspectives, C.S. Reichardt and S.F.
Rallis (eds.), Jossey-Bass, San Francisco, 1994, pp. 85-92.
Ridenour, C.S., and Newman, I. Mixed Methods Research: Exploring the Interactive Continuum, Southern
Illinois University Press, Carbondale, IL, 2008.
Robey, D. “Diversity in Information Systems Research: Threat, Promise, and Responsibility,” Information
Systems Research (7:4), 1996, pp. 400-408.
Rossi, P.H. “The War between the Quals and the Quants: Is a Lasting Peace Possible?” In The Qualitative-
quantitative Debate: New Perspectives, C.S. Reichardt and S.F. Rallis (eds.), Jossey-Bass, San
Francisco, 1994, pp. 23-36.
Rousseau, D.M., and Fried, Y. “Location, Location, Location: Contextualizing Organizational Research,”
Journal of Organizational Behavior (22:1), 2001, pp. 1-13.
Runkel, P.J., and Runkel, M. A Guide to Usage for Writers and Students in the Social Sciences, Rowman &
Allanheld, Totowa, NJ, 1984.
Santhanam, R., Seligman, L., and Kang, D., “Post-implementation Knowledge Transfers to Users and
Information Technology Professionals,” Journal of Management Information Systems (24:1), 2007, pp.
174-203.
Sayer, A. Realism and Social Science, Sage, London, 2000.
Shadish, W.R., Cook, T.D., and Campbell, D.T. Experimental and Quasi-experimental Designs for
Generalized Causal Inference, Houghton-Mifflin, Boston, 2002.
Sherif, K., Zmud, R.W., and Browne, G. “Managing Peer-to-peer Conflicts in Disruptive Information
Technology Innovations: The Case of Software Reuse,” MIS Quarterly (30:2), 2006, pp. 339-356.
Shim, J.P., Shin, Y.B., and Nottingham, L. “Retailer Web Site Influence on Customer Shopping: Exploratory
Study on Key Factors of Customer Satisfaction,” Journal of the Association for Information Systems
(3:1), 2002, pp. 53-76.
Shin, B. “An Exploratory Investigation of System Success Factors in Data Warehousing,” Journal of the
Association for Information Systems (4:1), 2003, pp. 141-170.
Sidorova, A., Evangelopoulos, N., Valacich, J.S., and Ramakrishnan, T. “Uncovering the Intellectual Core
of the Information Systems Discipline,” MIS Quarterly (32:3), 2008, pp. 467-482.
Slaughter, S., Levine, L., Ramesh, B., Baskerville, R., and Pries-Heje, J. “Aligning Software Processes and
Strategy,” MIS Quarterly (30:4), 2006, pp. 891-918.
Smith, H.J., Keil, M., and Depledge, G. “Keeping Mum as the Project Goes Under: Towards an
Explanatory Model,” Journal of Management Information Systems (18:2), 2001, pp. 189-228.
Snow, A., and Keil, M. “A Framework for Assessing the Reliability of Software Project Status Reports,”
Engineering Management Journal (14: 2), 2002, pp. 20-26.
Soffer, P., and Hadar, I. “Applying Ontology-Based Rules to Conceptual Modeling: A Reflection on
Modeling Decision Making,” European Journal of Information Systems (16:5), 2007, pp. 599-611.
Stenbacka, C. “Qualitative Research Requires Quality Concepts of its Own,” Management Decision (39:7),
2001, pp. 551-555.
Straub, D. “Validating Instruments in MIS Research,” MIS Quarterly (13:2), 1989, pp. 146-169.
Straub, D., Boudreau, M.-C., and Gefen, D. “Validation Guidelines for IS Positivist Research,”
Communications of the AIS (13:24), 2004, pp. 380-427.
Sun, H., and Zhang, P. “Causal Relationships between Perceived Enjoyment and Perceived Ease of Use:
An Alternative Approach,” Journal of the Association for Information Systems (7:9), 2006, pp. 618-645.
Tashakkori, A., and Creswell, J.W. “Mixed Methodology across Disciplines,” Journal of Mixed Methods
Research (2:1), 2008, pp. 3-6.
Tashakkori, A., and Teddlie, C. “Quality of Inferences in Mixed Methods Research: Calling for an
Integrative Framework,” in Advances in Mixed Methods Research: Theories and Applications, M.
Bergman (ed.), Sage, London, UK, 2008, pp. 101-119.
Tashakkori, A., and Teddlie, C. “Issues and Dilemmas in Teaching Research Methods Courses in Social
and Behavioral Sciences: A US Perspective,” International Journal of Social Research Methodology
(6:1), 2003a, pp. 61-77.
Tashakkori, A., and Teddlie, C. “The Past and the Future of Mixed Methods Research: From
‘Methodological Triangulation’ to ‘Mixed Methods Designs’,” in Handbook of Mixed Methods in Social
and Behavioral Research, A. Tashakkori and C. Teddlie (eds.), Sage, Thousand Oaks, CA, 2003b, pp.
671-701.
Tashakkori, A., and Teddlie, C. Mixed Methodology: Combining Qualitative and Quantitative Approaches,
Sage, Thousand Oaks, CA, 1998.
Teddlie, C., and Tashakkori, A. “Major Issues and Controversies in the Use of Mixed Methods in the Social
and Behavioral Sciences,” in Handbook of Mixed Methods in Social and Behavioral Research, A.
Tashakkori and C. Teddlie (eds.), Sage, Thousand Oaks, CA, 2003, pp. 3-50.
Teddlie, C., and Tashakkori, A. Foundations of Mixed Methods Research, Sage, Thousand Oaks, CA,
2009.
Tiwana, A., and Bush, A.A. “A Comparison of Transaction Cost, Agency, and Knowledge-based Predictors
of IT Outsourcing Decisions: A U.S.-Japan Cross-cultural Field Study,” Journal of Management
Information Systems (24:1), 2007, pp. 259-300.
Venkatesh, V., Bala, H., and Sykes, T.A. “Impacts of Information and Communication Technology
Implementations on Employees' Jobs in India: A Multi-method Longitudinal Field Study,” Production and
Operations Management (19:5), 2010, pp. 591-613.
Venkatesh, V., and Brown, S.A. “A Longitudinal Investigation of Personal Computers in Homes: Adoption
Determinants and Emerging Challenges,” MIS Quarterly (25:1), 2001, pp. 71-102.
Venkatesh, V., Morris, M.G., Davis, G.B., and Davis, F.D. “User Acceptance of Information Technology:
Toward a Unified View,” MIS Quarterly (27:3), 2003, pp. 425-478.
Wakefield, R.L., Leidner, D.E., and Garrison, G. “A Model of Conflict, Leadership, and Performance in
Virtual Teams,” Information Systems Research (19:4), 2008, pp. 434-455.
Walsham, G. “Doing Interpretive Research,” European Journal of Information Systems (15:3), 2006, pp.
320-330.
Walsham, G. “The Emergence of Interpretivism in IS Research,” Information Systems Research (6:4),
1995, pp. 376-394.
Weber, R. “The Rhetoric of Positivism versus Interpretivism: A Personal View,” MIS Quarterly (28:1), 2004,
pp. iii-xii.
Table 1: Purposes of Mixed Methods Research*

Complementarity. Mixed methods are used in order to gain complementary views about the same phenomenon or relationships. Prior IS research examples**: Soffer and Hadar (2007). Illustration: a qualitative study was used to gain additional insights into the findings from a quantitative study.

Completeness. Mixed methods designs are used to make sure a complete picture of the phenomenon is obtained. Prior IS research examples: Piccoli and Ives (2003); Hackney et al. (2007). Illustration: the qualitative data and results provided rich explanations of the findings from the quantitative data and analysis.

Developmental. Questions for one strand emerge from the inferences of a previous one (sequential mixed methods), or one strand provides hypotheses to be tested in the next one. Prior IS research examples: Becerra-Fernandez and Sabherwal (2001); Ho et al. (2003); Grimsley and Meehan (2007). Illustration: a qualitative study was used to develop constructs and hypotheses, and a quantitative study was conducted to test the hypotheses.

Expansion. Mixed methods are used in order to explain or expand upon the understanding obtained in a previous strand of a study. Prior IS research examples: Ang and Slaughter (2001); Koh et al. (2004); Keil et al. (2007). Illustration: the findings from one study (e.g., quantitative) were expanded or elaborated by examining the findings from a different study (e.g., qualitative).

Corroboration/Confirmation. Mixed methods are used in order to assess the credibility of inferences obtained from one approach (strand). Prior IS research example: Bhattacherjee and Premkumar (2004). Illustration: a qualitative study was conducted to confirm the findings from a quantitative study.

Compensation. Mixed methods enable the researcher to compensate for the weaknesses of one approach by using the other. Prior IS research example: Dennis and Garfield (2003). Illustration: the qualitative analysis compensated for the small sample size in the quantitative study.

Diversity. Mixed methods are used with the hope of obtaining divergent views of the same phenomenon. Prior IS research example: Chang (2006). Illustration: qualitative and quantitative studies were conducted to compare perceptions of a phenomenon of interest by two different types of participants.

* Adapted from Creswell (2003), Greene et al. (1989), and Tashakkori and Teddlie (2003a, 2008).
** Many of these examples can be placed in multiple purpose categories. For example, while Bhattacherjee and Premkumar’s (2004) paper is placed in the corroboration/confirmation category, it can also be placed in the expansion category because the authors noted that, in addition to confirming the findings of the quantitative study, the purpose of the qualitative analysis was to “possibly gain additional insights into the nature and causes of the hypothesized associations” (p. 246).
Table 2: Review of Mixed Methods Research Articles in IS (2001 to 2007)*
Each entry lists the purpose of mixed methods research, the methods employed and their paradigms (quantitative; qualitative), the dominant method, whether meta-inferences were drawn, and the discussion of validation (quantitative; qualitative; meta-inferences).

1. Ang, S., and Slaughter, S.A. “Work Outcomes and Job Design for Contract versus Permanent Information Systems Professionals on Software Development Teams,” MIS Quarterly (25:3), 2001, pp. 321-350. Purpose: Expansion. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: none. Meta-inferences: yes; a substantive theory developed. Validation: quantitative: reliability and validity were reported; qualitative: inter-rater reliability discussed and rich description provided; meta-inferences: none.

2. Becerra-Fernandez, I., and Sabherwal, R. “Organizational Knowledge Management: A Contingency Perspective,” Journal of Management Information Systems (18:1), 2001, pp. 23-55. Purpose: Developmental. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: reliability and validity were reported; qualitative: validation was discussed; meta-inferences: none.

3. Geissler, G., Zinkhan, G., and Watson, R.T. “Web Home Page Complexity and Communication Effectiveness,” Journal of the Association for Information Systems (2:2), 2001, pp. 1-46. Purpose: Developmental. Quantitative: experiments (positivist). Qualitative: focus groups and interviews (interpretive). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability and validity were reported; qualitative: none; meta-inferences: none.

4. Bowen, P.L., Heales, J., and Vongphakdi, M.T. “Reliability Factors in Business Software: Volatility, Requirements and End-Users,” Information Systems Journal (12:3), 2002, pp. 185-213. Purpose: Corroboration/confirmation. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: none. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: reliability was reported, validity was not; qualitative: none; meta-inferences: none.

5. Shim, J.P., Shin, Y.B., and Nottingham, L. “Retailer Web Site Influence on Customer Shopping: Exploratory Study on Key Factors of Customer Satisfaction,” Journal of the Association for Information Systems (3:1), 2002, pp. 53-76. Purpose: Developmental. Quantitative: objective data collected based on website characteristics (positivist). Qualitative: interviews (positivist). Dominant method: none. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: reliability was reported, validity was not; qualitative: inter-rater agreement was discussed; meta-inferences: none.

6. Dennis, A.R., and Garfield, M.J. “The Adoption and Use of GSS in Project Teams: Toward More Participative Processes and Outcomes,” MIS Quarterly (27:2), 2003, pp. 289-323. Purpose: Compensation. Quantitative: survey (positivist). Qualitative: observations, interviews, and transcripts (interpretive). Dominant method: qualitative. Meta-inferences: yes; integrated findings discussed. Validation: quantitative: none; qualitative: none; meta-inferences: none.

7. Detlor, B. “Internet-Based Information Systems Use in Organizations: An Information Studies Perspective,” Information Systems Journal (13:2), 2003, pp. 113-132. Purpose: Completeness. Quantitative: data collected from web tracking (positivist). Qualitative: interviews (positivist). Dominant method: none. Meta-inferences: no. Validation: quantitative: none; qualitative: none; meta-inferences: none.

8. Gallivan, M.J., and Keil, M. “The User-Developer Communication Process: A Critical Case Study,” Information Systems Journal (13:1), 2003, pp. 37-68. Purpose: Developmental. Quantitative: survey (positivist). Qualitative: interviews, observations, and documents (interpretive). Dominant method: none. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: none; qualitative: none; meta-inferences: none.

9. Ho, V.T., Ang, S., and Straub, D. “When Subordinates Become IT Contractors: Persistent Managerial Expectations in IT Outsourcing,” Information Systems Research (14:1), 2003, pp. 66-86. Purpose: Developmental. Quantitative: surveys (positivist). Qualitative: focus groups (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability and validity were reported; qualitative: none; meta-inferences: none.

10. Piccoli, G., and Ives, B. “Trust and the Unintended Effects of Behavior Control in Virtual Teams,” MIS Quarterly (27:3), 2003, pp. 365-395. Purpose: Completeness (also has elements of expansion). Quantitative: experiment (positivist). Qualitative: case studies (positivist). Dominant method: quantitative. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: reliability and validity were reported; qualitative: none; meta-inferences: none.

11. Ramiller, N.C., and Swanson, E.B. “Organizing Visions for Information Technology and the Information Systems Executive Response,” Journal of Management Information Systems (20:1), 2003, pp. 13-50. Purpose: Completeness (also has elements of complementarity and expansion). Quantitative: survey (positivist). Qualitative: interviews (interpretive). Dominant method: quantitative. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: none; qualitative: rich descriptions of the context, data collection, and analysis; meta-inferences: none.

12. Shin, B. “An Exploratory Investigation of System Success Factors in Data Warehousing,” Journal of the Association for Information Systems (4:1), 2003, pp. 141-170. Purpose: Developmental. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: yes; synthesis of findings provided. Validation: quantitative: reliability and validity were reported; qualitative: none; meta-inferences: none.

13. Bhattacherjee, A., and Premkumar, G. “Understanding Changes in Belief and Attitude Toward Information Technology Usage: A Theoretical Model and Longitudinal Test,” MIS Quarterly (28:2), 2004, pp. 229-254. Purpose: Corroboration/confirmation (also has elements of expansion). Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: reliability and validity were reported; qualitative: validation was discussed; meta-inferences: none.

14. Koh, C., Ang, S., and Straub, D.W. “IT Outsourcing Success: A Psychological Contract Perspective,” Information Systems Research (15:4), 2004, pp. 356-373. Purpose: Expansion. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability and validity were reported; qualitative: rich description provided; meta-inferences: none.

15. Hatzakis, T., Lycett, M., Macredie, R.D., and Martin, V.A. “Towards the Development of a Social Capital Approach to Evaluating Change Management Interventions,” European Journal of Information Systems (14:1), 2005, pp. 60-74. Purpose: Completeness (also has elements of complementarity). Quantitative: survey (positivist). Qualitative: interviews (interpretive). Dominant method: none. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: none; qualitative: none; meta-inferences: none.

16. Newman, M., and Westrup, C. “Making ERPs Work: Accountants and the Introduction of ERP Systems,” European Journal of Information Systems (14:3), 2005, pp. 258-272. Purpose: Completeness. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: qualitative. Meta-inferences: yes; brief theoretical statements provided. Validation: quantitative: none; qualitative: none; meta-inferences: none.

17. Blechar, J., Constantiou, I.D., and Damsgaard, J. “Exploring the Influence of Reference Situations and Reference Pricing on Mobile Service User Behavior,” European Journal of Information Systems (15:3), 2006, pp. 285-291. Purpose: Developmental. Quantitative: survey (positivist). Qualitative: open-ended questions, focus groups, and interviews (positivist). Dominant method: none. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: none; qualitative: none; meta-inferences: none.

18. Cao, J., Crews, J.M., Lin, M., Deokar, A.V., Burgoon, J.K., and Nunamaker Jr., J.F. “Interactions between System Evaluation and Theory Testing: A Demonstration of the Power of a Multifaceted Approach to Information Systems Research,” Journal of Management Information Systems (22:4), 2006, pp. 207-235. Purpose: Complementarity. Quantitative: survey (positivist). Qualitative: open-ended questions (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: none; qualitative: none; meta-inferences: none.

19. Chang, H.H. “Technical and Management Perceptions of Enterprise Information System Importance, Implementation and Benefits,” Information Systems Journal (16:3), 2006, pp. 263-292. Purpose: Diversity. Quantitative: survey (positivist). Qualitative: case study (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability was reported, validity was not; qualitative: none; meta-inferences: none.

20. Keil, M., and Tiwana, A. “Relative Importance of Evaluation Criteria for Enterprise Systems: A Conjoint Study,” Information Systems Journal (16:3), 2006, pp. 237-262. Purpose: Complementarity. Quantitative: conjoint survey (positivist). Qualitative: past literature and interviews (interpretive). Dominant method: quantitative. Meta-inferences: yes; synthesis of findings discussed. Validation: quantitative: validity was reported, reliability was not; qualitative: none; meta-inferences: none.

21. Oosterhout, M. van, Waarts, E., and Hillegersberg, J. “Change Factors Requiring Agility and Implications for IT,” European Journal of Information Systems (15:2), 2006, pp. 132-145. Purpose: Complementarity. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: none. Meta-inferences: yes; synthesis of findings discussed. Validation: quantitative: none; qualitative: none; meta-inferences: none.

22. Pavlou, P.A., and Dimoka, A. “The Nature and Role of Feedback Text Comments in Online Marketplaces: Implications for Trust Building, Price Premiums, and Seller Differentiation,” Information Systems Research (17:4), 2006, pp. 391-412. Purpose: Completeness. Quantitative: data coded from feedback text comments from online buyers and sellers. Qualitative: feedback text comments from online buyers and sellers. Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability and validity were reported; qualitative: reliability and validity were discussed; meta-inferences: none.

23. Pavlou, P.A., and Fygenson, M. “Understanding and Predicting Electronic Commerce Adoption: An Extension of the Theory of Planned Behavior,” MIS Quarterly (30:1), 2006, pp. 115-143. Purpose: Developmental (also has elements of corroboration/confirmation). Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability and validity were reported; qualitative: none; meta-inferences: none.

24. Choi, H., Lee, M., Im, K.S., and Kim, J. “Contribution to Quality of Life: A New Outcome Variable for Mobile Data Service,” Journal of the Association for Information Systems (8:12), 2007, pp. 598-618. Purpose: Developmental. Quantitative: survey (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: reliability and validity were reported; qualitative: reliability and validity were discussed; meta-inferences: none.

25. Grimsley, M., and Meehan, A. “E-Government Information Systems: Evaluation-Led Design for Public Value and Client Trust,” European Journal of Information Systems (16:2), 2007, pp. 134-148. Purpose: Developmental. Quantitative: survey (positivist). Qualitative: case study (interpretive). Dominant method: none. Meta-inferences: yes; theoretical framework developed. Validation: quantitative: none; qualitative: none; meta-inferences: none.

26. Hackney, R.A., Jones, S., and Losch, A. “Towards an e-Government Efficiency Agenda: The Impact of Information and Communication Behaviour on e-Reverse Auctions in Public Sector Procurement,” European Journal of Information Systems (16:2), 2007, pp. 178-191. Purpose: Completeness. Quantitative: survey (positivist). Qualitative: focus groups (positivist). Dominant method: none. Meta-inferences: yes; synthesis of findings provided. Validation: quantitative: none; qualitative: none; meta-inferences: none.

27. Hatzakis, T., Lycett, M., and Serrano, A. “A Programme Management Approach for Ensuring Curriculum Coherence in IS (higher) Education,” European Journal of Information Systems (16:5), 2007, pp. 643-657. Purpose: Complementarity. Quantitative: survey (positivist). Qualitative: action research (interpretive). Dominant method: qualitative. Meta-inferences: yes; theoretical framework proposed. Validation: quantitative: none; qualitative: validation was discussed; meta-inferences: none.

28. Keil, M., Im, G.P., and Mahring, M. “Reporting Bad News on Software Projects: The Effects of Culturally Constituted Views of Face-Saving,” Information Systems Journal (17:1), 2007, pp. 59-87. Purpose: Expansion. Quantitative: survey (positivist). Qualitative: interviews (interpretive). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: validity was reported, reliability was not; qualitative: none; meta-inferences: none.

29. Santhanam, R., Seligman, L., and Kang, D. “Post-implementation Knowledge Transfers to Users and Information Technology Professionals,” Journal of Management Information Systems (24:1), 2007, pp. 174-203. Purpose: Completeness. Quantitative: data collected from system logs (positivist). Qualitative: data collected from observations (positivist). Dominant method: quantitative. Meta-inferences: no. Validation: quantitative: none; qualitative: inter-coder agreement was discussed; meta-inferences: none.

30. Soffer, P., and Hadar, I. “Applying Ontology-Based Rules to Conceptual Modeling: A Reflection on Modeling Decision Making,” European Journal of Information Systems (16:5), 2007, pp. 599-611. Purpose: Complementarity. Quantitative: survey (positivist). Qualitative: observations and interviews (positivist). Dominant method: none. Meta-inferences: yes; theoretical statements provided. Validation: quantitative: none; qualitative: none; meta-inferences: none.

31. Tiwana, A., and Bush, A.A. “A Comparison of Transaction Cost, Agency, and Knowledge-Based Predictors of IT Outsourcing Decisions: A U.S.-Japan Cross-Cultural Field Study,” Journal of Management Information Systems (24:1), 2007, pp. 259-300. Purpose: Complementarity. Quantitative: conjoint experiment (positivist). Qualitative: interviews (positivist). Dominant method: quantitative. Meta-inferences: yes; synthesis of findings discussed. Validation: quantitative: validity was discussed; qualitative: none; meta-inferences: none.

* In many cases, authors did not provide an explicit explanation or indication of the purpose of employing a mixed methods approach, paradigmatic assumptions, and validation. Hence, the coding reported in this table represents our interpretation of where each of these papers fits based on our understanding of these papers.
Table 3: Examples of Mixed Methods Research Programs in IS (2001-2007)
Each entry lists the studies in the research program, a description, and the discussion of validation (quantitative; qualitative; meta-inferences*).

Beaudry and Pinsonneault (2005); Beaudry and Pinsonneault (2010). Description: Beaudry and Pinsonneault (2005) developed and tested a model of the user adaptation process using a qualitative study. Beaudry and Pinsonneault (2010) developed a model of the role of affect in IT use based on the theoretical foundation of Beaudry and Pinsonneault (2005) and tested it using a quantitative study. Validation: quantitative: validation was discussed in the quantitative study (i.e., Beaudry and Pinsonneault 2010); qualitative: validation was discussed in the qualitative study (i.e., Beaudry and Pinsonneault 2005); meta-inferences: no discussion of meta-inferences and their validation was provided.

Espinosa et al. (2007a); Espinosa et al. (2007b). Description: Espinosa et al. (2007a) studied coordination needs in geographically distributed software development teams using a qualitative approach. Espinosa et al. (2007b) studied how familiarity and coordination complexity interact with each other to influence the performance of geographically distributed software development teams. Validation: quantitative: validation was discussed in the quantitative study (i.e., Espinosa et al. 2007b); qualitative: inter-rater reliability was discussed in the qualitative study (i.e., Espinosa et al. 2007a); meta-inferences: no discussion of meta-inferences and their validation was provided.

Kayworth and Leidner (2002); Wakefield et al. (2008). Description: Kayworth and Leidner (2002) studied the role of effective leadership in global virtual teams using a qualitative study. Wakefield et al. (2008) developed and tested a model of conflict and leadership in global virtual teams using a quantitative study. Validation: quantitative: validation was discussed in the quantitative study (i.e., Wakefield et al. 2008); qualitative: validation was discussed in the qualitative study (i.e., Kayworth and Leidner 2002); meta-inferences: no discussion of meta-inferences and their validation was provided.

Smith et al. (2001); Keil et al. (2002); Snow and Keil (2002); Keil et al. (2007). Description: In this research program, Keil and his colleagues conducted both qualitative and quantitative studies to examine communication processes in IT projects, particularly in projects that had major problems. Validation: quantitative: validation was discussed in the quantitative studies (e.g., Smith et al. 2001); qualitative: validation was discussed in the qualitative studies (e.g., Keil et al. 2002); meta-inferences: no discussion of meta-inferences and their validation was provided.

Venkatesh and Brown (2001); Brown and Venkatesh (2005). Description: Venkatesh and Brown (2001) presented a model of home PC adoption based on a qualitative study. Brown and Venkatesh’s (2005) paper was from the same broad program of research and tested a model of home PC adoption using a quantitative approach. Validation: quantitative: validation was discussed in the quantitative study (i.e., Brown and Venkatesh 2005); qualitative: no discussion of validation was provided; meta-inferences: no discussion of meta-inferences and their validation was provided.

* Given that these studies were published as separate journal articles, we believe that the authors did not have an opportunity to offer meta-inferences that cut across these studies.
Table 4: Examples of Validity in Quantitative and Qualitative Research*

Quantitative Methods

Design Validity
▪ Internal validity: The validity of the inference about whether the observed covariation between independent and dependent variables reflects a causal relationship (e.g., the ability to rule out alternative explanations).
▪ External validity: The validity of the inference about whether the cause-effect relationship holds over variation in persons, settings, treatment variables, and measurement variables.

Measurement Validity
▪ Reliability: The term reliability means repeatability or consistency. A measure is considered to be reliable if it produces the same result over and over again. There are various types of reliability, such as inter-rater or inter-observer reliability, test-retest reliability, parallel-forms reliability, and internal consistency reliability.
▪ Construct validity: The degree to which inferences can legitimately be made from the operationalizations in a study to the theoretical constructs on which those operationalizations are based. There are many different types of construct validity, such as face, content, criterion-related, predictive, concurrent, convergent, discriminant, and factorial.

Inferential Validity
▪ Statistical conclusion validity: The validity of inferences about the correlation (covariation) between independent and dependent variables.

Qualitative Methods

Design Validity
▪ Descriptive validity: The accuracy of what is reported (e.g., events, objects, behaviors, settings) by researchers.
▪ Credibility: Involves establishing that the results of qualitative research are credible or believable from the perspective of the participants in the research, to convincingly rule out alternative explanations.
▪ Transferability: The degree to which the results of qualitative research can be generalized or transferred to other contexts or settings.

Analytical Validity
▪ Theoretical validity: The extent to which the theoretical explanation developed fits the data and, therefore, is credible and defensible.
▪ Dependability: Emphasizes the need for the researcher to describe the changes that occur in the setting and how these changes affected the way the researcher approached the study.
▪ Consistency: Emphasizes the process of verifying the steps of qualitative research through examination of such items as raw data, data reduction products, and process notes.
▪ Plausibility: Concerned with determining whether the findings of the study, in the form of description, explanation, or theory, fit the data from which they are derived (Sandelowski 1986).

Inferential Validity
▪ Interpretive validity: The accuracy of interpreting what is going on in the minds of the participants and the degree to which the participants’ views, thoughts, feelings, intentions, and experiences are accurately understood by the researcher.
▪ Confirmability: The degree to which the results could be confirmed or corroborated by others.

* This list is not exhaustive. There are many types of validity suggested for qualitative and quantitative methods. This table provides examples of some widely used validity types that were identified and defined by Cook and Campbell (1979), Shadish et al. (2002), and Teddlie and Tashakkori (2003).
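Two of the measurement reliability indicators named above can be made concrete with their standard psychometric formulations; these are conventional definitions (see, e.g., Nunnally and Bernstein 1994) rather than constructs specific to mixed methods research. Internal consistency reliability is commonly summarized by Cronbach's alpha, and inter-rater reliability between two coders of categorical data by Cohen's kappa:

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right), \qquad \kappa = \frac{p_o - p_e}{1 - p_e}$$

where $k$ is the number of items, $\sigma^{2}_{Y_i}$ is the variance of item $i$, $\sigma^{2}_{X}$ is the variance of the total scale score, $p_o$ is the observed proportion of agreement between coders, and $p_e$ is the proportion of agreement expected by chance. On both indices, values approaching 1 indicate higher reliability.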
Table 5: Summary of Mixed Methods Research Guidelines

General Guidelines

(1) Decide on the appropriateness of a mixed methods approach.
Author considerations: Carefully think about the research questions, objectives, and contexts to decide on the appropriateness of a mixed methods approach for the research. Explication of the broad and specific research objective is important to establish the appropriateness and utility of mixed methods research.
Editor/reviewer evaluations: Understand the core objective of a research inquiry to assess whether mixed methods research is appropriate for the inquiry. For example, if the theoretical/causal mechanisms/processes are not clear in a quantitative paper, after carefully considering the practicality, ask the authors to collect qualitative data (e.g., interviews, focus groups) to unearth these mechanisms and processes.

(2) Develop a strategy for mixed methods research design.
Author considerations: Carefully select a mixed methods design strategy that is appropriate for the research questions, objectives, and contexts (see Table 6 for the definition of design suitability and adequacy).
Editor/reviewer evaluations: Evaluate the appropriateness of a mixed methods research design from two perspectives: the research objective and the theoretical contributions. For example, if the objective of a research inquiry is to identify and test theoretical constructs and mechanisms in a new context, a qualitative study followed by a quantitative study (i.e., a sequential design) is appropriate.

(3) Develop a strategy for analyzing mixed methods data.
Author considerations: Develop a strategy for rigorously analyzing mixed methods data. A cursory analysis of qualitative data followed by a rigorous analysis of quantitative data, or vice versa, is not desirable.
Editor/reviewer evaluations: While recognizing the practical challenges of collecting, analyzing, and reporting both qualitative and quantitative data in a single research inquiry, apply the same standards for rigor as would typically be applied in evaluating the analysis quality of other quantitative and qualitative studies.

(4) Draw meta-inferences from mixed methods results.
Author considerations: Integrate inferences from the qualitative and quantitative studies in order to draw meta-inferences.
Editor/reviewer evaluations: Ensure that authors draw meta-inferences from mixed methods research. Evaluation of meta-inferences should be done from the perspective of the research objective and theoretical contributions to make sure the authors draw and report appropriate meta-inferences.

Validation

(1) Discuss validation within quantitative and qualitative research.
Author considerations: Discuss validation for both the quantitative and qualitative studies.
Editor/reviewer evaluations: Ensure that authors follow and report validity types that are typically expected in a quantitative study. For the qualitative study, ensure that the authors provide either an explicit or an implicit (e.g., rich and detailed description of the data collection and analyses) discussion of validation.

(2) Use mixed methods research nomenclature when discussing validation.
Author considerations: When discussing mixed methods validation, use mixed methods research nomenclature.
Editor/reviewer evaluations: Ensure that the authors use consistent nomenclature for reporting mixed methods research validation.

(3) Discuss validation of mixed methods findings and/or meta-inference(s).
Author considerations: Mixed methods research validation should be assessed on the overall findings from mixed methods research, not on the individual studies.
Editor/reviewer evaluations: Assess the quality of integration of the qualitative and quantitative results. The quality should be assessed in light of the theoretical contributions.

(4) Discuss validation from a research design point of view.
Author considerations: Discuss validation from the standpoint of the overall mixed methods design chosen for a study or research inquiry.
Editor/reviewer evaluations: Assess the quality of meta-inferences from the standpoint of the overall mixed methods design chosen by IS researchers (e.g., concurrent or sequential).

(5) Discuss potential threats and remedies.
Author considerations: Discuss the potential threats to validity that may arise during data collection and analysis.
Editor/reviewer evaluations: Evaluate the discussion of potential threats using the same standard that is typically used in rigorously conducted qualitative and quantitative studies.
Table 6: Integrative Framework for Mixed Methods Inference Quality*

Design quality: the degree to which a researcher has selected the most appropriate procedures for answering the research questions (Teddlie and Tashakkori 2009).
▪ Design suitability/appropriateness: The degree to which the methods selected and the research design employed are appropriate for answering the research question. For example, researchers need to select appropriate quantitative (e.g., survey) and qualitative (e.g., interview) methodologies and decide whether they will conduct parallel or sequential mixed methods research.
▪ Design adequacy: Quantitative: the degree to which the design components for the quantitative part (e.g., sampling, measures, data collection procedures) are implemented with acceptable quality and rigor; indicators of inference quality include reliability and internal validity (Shadish et al. 2002; Teddlie and Tashakkori 2009). Qualitative: the degree to which the qualitative design components are implemented with acceptable quality and rigor; indicators of inference quality include credibility and dependability (Teddlie and Tashakkori 2009).
▪ Analytic adequacy: Quantitative: the degree to which the quantitative data analysis procedures/strategies are appropriate and adequate to provide plausible answers to the research questions; an indicator of inference quality is statistical conclusion validity (Shadish et al. 2002). Qualitative: the degree to which the qualitative data analysis procedures/strategies are appropriate and adequate to provide plausible answers to the research questions; indicators of quality include theoretical validity and plausibility.

Explanation quality: the degree to which credible interpretations have been made on the basis of obtained results (Lincoln and Guba 2000; Tashakkori and Teddlie 2003b).
▪ Quantitative inferences: The degree to which interpretations from the quantitative analysis closely follow the relevant findings, are consistent with theory and the state of knowledge in the field, and are generalizable. Indicators of quality include internal validity, statistical conclusion validity, and external validity.
▪ Qualitative inferences: The degree to which interpretations from the qualitative analysis closely follow the relevant findings, are consistent with theory and the state of knowledge in the field, and are transferable. Indicators of quality include credibility, confirmability, and transferability.
▪ Integrative inference/meta-inference: Integrative efficacy: the degree to which inferences made in each strand of a mixed methods research inquiry are effectively integrated into a theoretically consistent meta-inference. Inference transferability: the degree to which meta-inferences from mixed methods research are generalizable or transferable to other contexts or settings. Integrative correspondence: the degree to which meta-inferences from mixed methods research satisfy the initial purpose (see Table 1) for using a mixed methods approach.

* Adapted from Tashakkori and Teddlie (2008) and Teddlie and Tashakkori (2003, 2009). While Teddlie and Tashakkori used the term “interpretive rigor” for the second aspect of inference quality, we refer to it as explanation quality in this table to avoid potential confusion with “interpretive” research, a major paradigm of qualitative research in the IS literature.
Figure 1: The Process of Mixed Methods Research and Inference Quality*
[Flowchart: the mixed methods research design produces a quantitative inference and a qualitative inference; each is subjected to a quality check (yes/no, looping back if unsatisfactory) before the two are combined into an integrative inference/meta-inference, which passes a final quality check before the results are reported.]
* Adapted from Teddlie and Tashakkori (2009).