Design-Based Implementation Research
WILLIAM R. PENUEL
University of Colorado Boulder
ANNA-RUTH ALLEN
University of Colorado Boulder
NORA SABELLI
SRI International
National Society for the Study of Education, Volume 112, Issue 2, pp. 136-156
Copyright © 2013 by Teachers College, Columbia University
This National Society for the Study of Education Yearbook presents an over-
view of an emerging model at the intersection of policy, research, and
practice called design-based implementation research (DBIR). DBIR applies
design-based perspectives and methods to address and study problems
of implementation. As the chapters in this volume illustrate, DBIR chal-
lenges education researchers to break down barriers between sub-disci-
plines of educational research that isolate those who design and study
innovations within classrooms from those who study the diffusion of in-
novations. It also aims to reconfigure the roles of researchers and prac-
titioners in bringing about systemic change in ways that make it more
likely that practitioners can adapt innovations productively to meet the
needs of diverse students and that durable research–practice partner-
ships can sustain innovations that make a difference.
ANTECEDENTS OF DBIR
EVALUATION RESEARCH
Three different models of evaluation research align closely with the aims
of DBIR. One is utilization-focused evaluation, which directs evaluation
researchers to consider intended uses of research by intended users in
the design and conduct of evaluation (Patton, 1997, 2000). Like DBIR,
utilization-focused evaluation emphasizes uses of research findings for
programs, and it highlights the ways that stakeholder involvement in the
process of evaluation can support the design and development of both
programs and organizations (Patton, 2000). A second strand of evalua-
tion research includes participatory models of evaluation (e.g., Cousins
& Earl, 1992; Fetterman, 2001), in which stakeholders are involved in
all aspects of evaluation research, from formulating questions to inter-
preting results. This model presumes that the knowledge people use to
guide decision making within organizations is socially constructed and
that participation in all aspects of the process of evaluation helps prac-
titioners develop shared understandings of program goals, effects, and
conditions for success (Cousins & Earl, 1992). Finally, theory-driven eval-
uation (Donaldson, 2007) emphasizes the importance of using a combi-
nation of social science and stakeholder theories in designing evaluation
studies and of using evaluation to support program planning. A prem-
ise behind this model is that to be effective, interventions need to be
grounded in theory and evidence from relevant social science disciplines
(e.g., learning sciences, psychology, public health). Supovitz (2013, this
Yearbook) explores the relation between DBIR and different models of
evaluation, pointing out both intersections and conflicts between DBIR
and traditional evaluation of educational innovations.
DESIGN-BASED RESEARCH
Within the learning sciences, design-based research offers a model for the
design and testing of innovations within the crucible of classrooms and
other contexts for learning (Cobb, Confrey, diSessa, Lehrer, & Schau-
ble, 2003; O’Neill, 2012). As a model that emphasizes iterative cycles of
design and testing, design-based research is particularly well-suited to
making evidence-based improvements to innovations, in which evidence
from both implementation and outcomes informs changes that design
teams make to innovations for learning (see, e.g., Fishman, Marx, Best,
& Tal, 2003). The potential utility of design research to support imple-
mentation also derives from its commitment to developing both theory
that guides design decisions and practical tools that can be used to sup-
port local innovation and solve practical problems (Cobb et al., 2003). As
in community-based participatory research, the collaborative nature of
much design research positions practitioners as codesigners of solutions
to problems, which can facilitate the development of usable tools that
educators are willing to adopt (Penuel, Roschelle, & Shechtman, 2007).
IMPLEMENTATION RESEARCH
A third antecedent of DBIR is implementation research, which examines how educational innovations are taken up and enacted in practice (e.g., Penuel & Means, 2004; Penuel & Yarnall, 2005). Implementation research is of-
ten conducted within larger outcome studies, with the aim of analyzing
how and how much variations in implementation matter for innovation
effectiveness (e.g., Furtak et al., 2008; Lee, Penfield, & Maerten-Rivera,
2009; O’Donnell & Lynch, 2008). Notably, the study of implementation
among policy researchers in education and sociologists of education has
provided an important context for theory development in these fields, from
new institutionalism (Meyer & Rowan, 2006) to the diffusion of innova-
tions (e.g., Frank, Zhao, & Borman, 2004). Implementation research stud-
ies conducted as part of policy and evaluation studies have yielded impor-
tant practical insights over the years, notably about the inevitability of local
adaptation and the need to support local actors’ sense-making in shaping
implementation of innovations (e.g., Berman & McLaughlin, 1975; Dat-
now, Hubbard, & Mehan, 1998; Means, Padilla, DeBarger, & Bakia, 2009).
SOCIAL DESIGN EXPERIMENTS
The aim of a social design experiment is to develop new tools and prac-
tices that produce new learning arrangements, especially for students
from nondominant communities (Gutiérrez & Vossoughi, 2010). As in
other forms of design research, researchers work in close partnerships with
practitioners to develop these arrangements. In addition, consistent with
DBIR, a focus is on transforming learning arrangements across different
settings and levels. A key aim is to develop so-called third spaces, in which
hybrid cultural practices enable students to bridge everyday and academic
literacies (Gutiérrez, Baquedano-Lopez, & Tejada, 2000). Doing so often
requires engaging community members in partnerships to ensure that the
voices, tools, and practices of nondominant communities become integral
to new learning arrangements. These partnerships, moreover, may incor-
porate cultural practices and beliefs of members of the community as a
means to ensure that the design process becomes a third space (Bang, Me-
din, Washinawatok, & Chapman, 2010). A number of scholars are develop-
ing approaches to designing learning environments that engage commu-
nity members and that attempt to help students relate everyday cultural
practices to disciplinary ways of thinking and reasoning (Bang & Medin,
2008; DeBarger, Choppin, Beauvineau, & Moorthy, 2013, this Yearbook;
Kirshner & Polman, 2013, this Yearbook; Tzou & Bell, 2010).
PRINCIPLES OF DBIR
The principles that we outline next are heuristics for guiding the orga-
nization of DBIR (Penuel, Fishman, Cheng, & Sabelli, 2011). We encouraged
the authors of the chapters in this Yearbook to use, adapt, and
extend them to illustrate what DBIR is now and might be in the future.
To provide readers with a sense of where we began, however, it is useful
to describe the principles we think make a particular research project,
program of research and development, or infrastructure for collabora-
tion an example of DBIR. The four key principles are: (1) a focus on persistent problems of practice from multiple stakeholders' perspectives; (2) a commitment to iterative, collaborative design; (3) a concern with developing theory and knowledge related to both classroom learning and implementation through systematic inquiry; and (4) a concern with developing capacity for sustaining change in systems (Penuel et al., 2011).
These principles point to five needs that DBIR must address to support educational improvement: (1) the need to identify theories and methods that are ap-
propriate for research that focuses simultaneously on classroom teaching
and learning and on the policies and systems that support those improve-
ments at scale; (2) the need to develop cross-setting, cross-sector perspec-
tives on improving teaching and learning; (3) the need to develop meth-
ods for working in partnership with practitioners to negotiate the focus of
their work and to organize design to include a wide range of stakeholders
in schools and communities; (4) the need to develop standards of evidence
appropriate to the questions we pose in research; and (5) the need for
policies and infrastructures that help sustain partnerships and grow our
capacity for continuous improvement and sustainable change.
A key construct emerging within both the learning sciences and policy re-
search in education is the importance of considering learning as unfold-
ing over time across multiple settings (Banks et al., 2007). Learning is
not bounded by the classroom, and it does not take place only in formal
settings. Improving student learning outcomes depends on taking into
account learners’ history of engagement with particular content areas,
the forms their engagement with that content takes in different settings
of their lives, and the organization of learners’ access to opportunities to
engage with specific domains. This broadened perspective on where and
how learning takes place is an important anchor for DBIR as a model for
developing partnerships to improve the design and implementation of
educational interventions.
The chapters in this section both outline this life-long, life-wide per-
spective on learning and present examples of research–practice part-
nerships in which cross-setting interventions are being developed and
their implementation is being studied and promoted through partner-
ship activity. In the first chapter in this section, Milbrey McLaughlin and
Rebecca London (2013, this Yearbook) describe the work of the Youth
Data Archive (YDA) at Stanford University’s John W. Gardner Center.
The YDA makes it possible for people from different sectors to orga-
nize knowledge from diverse sources in order to gain new insight into
persistent problems and to identify new problems that can be addressed
through coordinated cross-sector efforts. In their chapter, Ben Kirsh-
ner and Joe Polman (2013, this Yearbook) take up the question of what
happens when innovations themselves move across different settings. In
contrast to approaches that emphasize adherence or fidelity to program
components, Kirshner and Polman discuss the need to authorize local
adaptation and for designers to provide “signature tools” that enable
productive—as opposed to maladaptive—adaptations of innovations.
In his chapter, Supovitz (2013, this Yearbook) examines how conceptions of evidence change in a DBIR approach and what challenges this presents to established views of validity and evidence in program evaluation. The chapter considers the utility of identifying what evidence is needed for an intervention on the basis of the intervention's location along a hypothetical "intervention development curve" as a framework for judging the adequacy of evidence in DBIR.
Just as the DBIR approach seeks to foster the creation of scalable and
sustainable educational innovation through new combinations of re-
search and practice partnerships, DBIR itself requires a different way to
conceive of, fund, and support the ongoing conduct of research and de-
velopment. This is an infrastructural challenge. The current policy and
funding environment frames the research enterprise in particular ways,
and DBIR challenges that framing. For instance, policy makers will need
to broaden conceptions of evidence beyond focusing on “what works”
to encourage researchers to develop evidence related to what works for
whom, when, and under what conditions.
The final section of this Yearbook comprises chapters that present
examples of new organizations and organizational structures for pur-
suing the work of DBIR and that critically examine existing infrastruc-
tures for supporting research in terms of what is needed for DBIR to
be fully realized. The Strategic Education Research Partnership (SERP)
was chartered by the National Academy of Sciences specifically to de-
velop new working relationships between research and practice and to
address pressing educational problems, and in many ways it exemplifies
the DBIR principles. Suzanne Donovan, Catherine Snow, and Phil Daro
(2013, this Yearbook) describe the history and work of SERP, with a fo-
cus on how that organization identifies important problems of practice
and brokers relationships between practice and research to address those
problems. Jimmy Scherrer, Nancy Israel, and Lauren Resnick (2013, this
Yearbook) describe the Institute for Learning (IFL), an organization with
a long history of focus on school reform. They examine work on both
teacher effectiveness and student learning from the perspective of insti-
tutions, employing a metaphor of “nesting” to describe organizational
structure. IFL has worked to design supports for “boundary crossers”
who communicate new information across contexts, and the chapter
explores how IFL has evolved over time and, with each iteration, has
moved toward DBIR principles. Jon Dolle, Louis Gomez, Jennifer Rus-
sell, and Tony Bryk (2013, this Yearbook) describe a radically different
approach to cross-boundary collaboration and improvement focused on
networked improvement communities. They describe how the Carnegie Foundation for the Advancement of Teaching organizes and supports such communities to address persistent problems of educational practice.
CONCLUSION
Acknowledgments
The workshop and development of materials in this volume were funded by the
National Science Foundation under award no. 1054086. The authors would,
in particular, like to thank Janice Earle of the NSF for helping to shape the
ideas that led to the workshop, and Denise Sauerteig of SRI International for
invaluable support of all aspects of this work. The views represented in this chap-
ter and in this volume are those of the authors and not of the funding agencies or
the authors’ respective institutions.
References
Aleven, V. A., & Koedinger, K. R. (2002). An effective metacognitive strategy: Learning by
doing and explaining with a computer-based Cognitive Tutor. Cognitive Science, 26(2),
147–179.
Bang, M., & Medin, D. (2008, February). Community-based design of science learning
environments: Engaging with and implementing relational epistemologies. Paper presented at
the American Association for the Advancement of Science Meeting, Boston, MA.
Bang, M., Medin, D., Washinawatok, K., & Chapman, S. (2010). Innovations in culturally
based science education through partnerships and community. In M. S. Khine & M.
I. Saleh (Eds.), New science of learning: Cognition, computers, and collaboration in education
(pp. 569–592). New York, NY: Springer.
Banks, J. A., Au, K. H., Ball, A. F., Bell, P., Gordon, E. W., Gutierrez, K. D., . . . Zhou, M.
(2007). Learning in and out of school in diverse environments: Life-long, life-wide, life-deep.
Seattle, WA: The LIFE Center (The Learning in Informal and Formal Environments
Center), University of Washington, Stanford University, and SRI International and
Center for Multicultural Education, University of Washington.
Berman, P., & McLaughlin, M. W. (1975). Federal programs supporting educational change: Vol.
4. The findings in review. Santa Monica, CA: RAND.
Berwick, D. M. (2008). The science of improvement. Journal of the American Medical
Association, 299(10), 1182–1184.
Borko, H., & Klingner, J. K. (2013). Supporting teachers in schools to improve their
instructional practice. National Society for the Study of Education Yearbook, 112(2), 274–297.
Bryk, A. S., Gomez, L. M., & Grunow, A. (2011). Getting ideas into action: Building
networked improvement communities in education. In M. Hallinan (Ed.), Frontiers in
sociology of education (pp. 127–162). Dordrecht, The Netherlands: Springer.
Cobb, P. A., Confrey, J., diSessa, A. A., Lehrer, R., & Schauble, L. (2003). Design
experiments in educational research. Educational Researcher, 32(1), 9–13.
Cobb, P., Jackson, K., Smith, T., Sorum, M., & Henrick, E. (2013). Design research with
educational systems: Investigating and supporting improvements in the quality of
mathematics teaching and learning at scale. National Society for the Study of Education
Yearbook, 112(2), 320–349.
Coburn, C. E., & Stein, M. K. (Eds.). (2010). Research and practice in education: Building
alliances, bridging the divide. Lanham, MD: Rowman and Littlefield.
Coburn, C. E., & Talbert, J. E. (2006). Conceptions of evidence use in school districts:
Mapping the terrain. American Journal of Education, 112, 469–495.
Coburn, C. E., Honig, M. I., & Stein, M. K. (2009). What’s the evidence on districts’ use of
evidence? In J. D. Bransford, D. J. Stipek, N. J. Vye, L. M. Gomez, & D. Lam (Eds.),
The role of research in educational improvement (pp. 67–87). Cambridge, MA: Harvard
Education Press.
Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and
methodological issues. Journal of the Learning Sciences, 13(1), 15–42.
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational
Evaluation and Policy Analysis, 14(4), 397–414.
Datnow, A., Hubbard, L., & Mehan, H. (1998). Educational reform implementation: A co-
constructed process (Technical report). Santa Cruz: University of California, Center for
Research on Education, Diversity, and Excellence.
Davidson, M. R., Fields, M. K., & Yang, J. (2009). A randomized trial study of a preschool
literacy curriculum: The importance of implementation. Journal of Research on
Educational Effectiveness, 2, 177–208.
DeBarger, A. H., Choppin, J., Beauvineau, Y., & Moorthy, S. (2013). Designing for productive
adaptations of curriculum interventions. National Society for the Study of Education
Yearbook, 112(2), 298–319.
Dolle, J. R., Gomez, L. M., Russell, J. L., & Bryk, A. S. (2013). More than a network:
Building professional communities for educational improvement. National Society for the
Study of Education Yearbook, 112(2), 443–463.
Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications.
Mahwah, NJ: Erlbaum.
Donovan, M. S., Snow, C., & Daro, P. (2013). The SERP approach to problem-solving
research, development, and implementation. National Society for the Study of Education
Yearbook, 112(2), 400–425.
Donovan, M. S., Wigdor, A. K., & Snow, C. E. (2003). Strategic education research partnership.
Washington, DC: National Research Council.
Edelson, D. C. (2002). Design research: What we learn when we engage in design. Journal
of the Learning Sciences, 11(1), 105–121.
Fetterman, D. M. (2001). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
Fishman, B. J., Marx, R. W., Best, S., & Tal, R. (2003). Linking teacher and student
learning to improve professional development in systemic reform. Teaching and Teacher
Education, 19(6), 643–658.
Fixsen, D. L., Naoom, S. F., Blase, K. A., & Friedman, R. M. (2005). Implementation research:
A synthesis of the literature. Tampa: Louis de la Parte Florida Mental Health Institute,
National Implementation Research Network, University of South Florida.
Flyvbjerg, B. (2006). Five misunderstandings about case study research. Qualitative Inquiry,
12(2), 219–245.
Frank, K. A., Zhao, Y., & Borman, K. (2004). Social capital and the diffusion of innovations
within organizations: Application to the implementation of computer technology in
schools. Sociology of Education, 77(2), 148–171.
Furtak, E. M., Ruiz-Primo, M. A., Shemwell, J. T., Ayala, C. C., Brandon, P. R., Shavelson,
R. J., & Yin, Y. (2008). On the fidelity of implementing embedded formative assessment
and its relation to student learning. Applied Measurement in Education, 21, 360–389.
Gutiérrez, K. D., Baquedano-Lopez, P., & Tejada, C. (2000). Rethinking diversity:
Hybridity and hybrid language practices in the third space. Mind, Culture, and Activity,
6(4), 286–303.
Gutiérrez, K. D., Rymes, B., & Larson, J. (1995). Script, counterscript, and underlife in
the classroom: James Brown versus Brown v. Board of Education. Harvard Educational
Review, 65(3), 445–471.
Gutiérrez, K. D., & Vossoughi, S. (2010). Lifting off the ground to return anew: Mediated
praxis, transformative learning, and social design experiments. Journal of Teacher
Education, 61(1–2), 100–117.
Honig, M. I. (2013). Beyond the policy memo: Designing to strengthen the practice of
district central office leadership for instructional improvement at scale. National Society
for the Study of Education Yearbook, 112(2), 256–273.
Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009).
Research on evaluation use: A review of the empirical literature from 1986 to 2005.
American Journal of Evaluation, 30(3), 377–410.
Penuel, W. R., & Means, B. (2004). Implementation variation and fidelity in an inquiry
science program: An analysis of GLOBE data reporting patterns. Journal of Research in
Science Teaching, 41(3), 294–315.
Penuel, W. R., Roschelle, J., & Shechtman, N. (2007). The WHIRL co-design process:
Participant experiences. Research and Practice in Technology Enhanced Learning, 2(1),
51–74.
Penuel, W. R., Tatar, D., & Roschelle, J. (2004). The role of research on contexts of
teaching practice in informing the design of handheld learning technologies. Journal of
Educational Computing Research, 30(4), 331–348.
Penuel, W. R., & Yarnall, L. (2005). Designing handheld software to support classroom
assessment: An analysis of conditions for teacher adoption. Journal of Technology,
Learning, and Assessment, 3(5). Available from http://www.jtla.org.
Resnick, L. B., & Spillane, J. P. (2006). From individual learning to organizational
designs for learning. In L. Verschaffel, F. Dochy, M. Boekaerts, & S. Vosniadou (Eds.),
Instructional psychology: Past, present and future trends. Sixteen essays in honor of Erik De Corte
(pp. 259–276). Oxford, England: Pergamon.
Roschelle, J., Knudsen, J., & Hegedus, S. J. (2010). From new technological infrastructures
to curricular activity systems: Advanced designs for teaching and learning. In M. J.
Jacobson & P. Reimann (Eds.), Designs for learning environments of the future: International
perspectives from the learning sciences (pp. 233–262). New York, NY: Springer.
Russell, J. L., Jackson, K., Krumm, A. E., & Frank, K. A. (2013). Theory and research
methodologies for design-based implementation research: Examples from four cases.
National Society for the Study of Education Yearbook, 112(2), 157–191.
Sabelli, N., & Dede, C. (2013). Empowering DBIR: The need for infrastructure. National
Society for the Study of Education Yearbook, 112(2), 464–480.
Scherrer, J., Israel, N., & Resnick, L. B. (2013). Beyond classrooms: Scaling and sustaining
instructional innovations. National Society for the Study of Education Yearbook, 112(2),
426–442.
Spillane, J. P., & Coldren, A. F. (2010). Diagnosis and design for school improvement: Using a
distributed perspective to lead and manage change. New York, NY: Teachers College Press.
Stein, M. K., & Coburn, C. E. (2008). Architectures for learning: A comparative analysis of
two urban school districts. American Journal of Education, 114(4), 583–626.
Stewart, D. W., & Shamdasani, P. N. (2006). Applied social research methods series. Newbury
Park, CA: Sage.
Strand, K., Marullo, S., Cutforth, N., Stoecker, R., & Donohue, P. (2003). Community-based
research and higher education. San Francisco, CA: Jossey-Bass.
Supovitz, J. (2013). Situated research design and methodological choices in formative
program evaluation. National Society for the Study of Education Yearbook, 112(2), 372–399.
Tyack, D., & Cuban, L. (1995). Tinkering toward utopia: A century of public school reform.
Cambridge, MA: Harvard University Press.
Tzou, C. T., & Bell, P. (2010). Micros and Me: Leveraging home and community practices
in formal science instruction. In K. Gomez, L. Lyons, & J. Radinsky (Eds.), Proceedings
of the 9th International Conference of the Learning Sciences (pp. 1135–1143). Chicago, IL:
International Society of the Learning Sciences.
Wallerstein, N., & Duran, B. (2010). Community-based participatory research contributions
to intervention research: The intersection of science and practice to improve health
equity. American Journal of Public Health, 100(S1), S40–S46.
Weinberg, A. S. (2003). Negotiating community-based research: A case study in the “Life’s
Work” project. Michigan Journal of Community Service Learning, 9(3), 26–35.
Werner, A. (2004). A guide to implementation research. Washington, DC: The Urban Institute Press.
Whyte, W. F. (1991). Participatory action research. Newbury Park, CA: Sage.