Evaluation and Program Planning 28 (2005) 167–172

www.elsevier.com/locate/evalprogplan

The use of logic models by community-based initiatives


Sue A. Kaplan*, Katherine E. Garrett
Center for Health and Public Service Research, New York University, 295 Lafayette Street, 2nd floor, New York, NY 10012-9604, USA
*Corresponding author. Tel.: +1 212 998 7554; fax: +1 212 995 4166. E-mail address: sue.kaplan@nyu.edu (S.A. Kaplan).
Received 1 February 2004; received in revised form 1 June 2004; accepted 1 September 2004

Abstract

Many grant programs now require community-based initiatives to develop logic models as part of the application process or to facilitate
program monitoring and evaluation. This paper examines three such programs to understand the benefits and challenges of using logic
models to help build consensus and foster collaboration within a community coalition, strengthen program design, and facilitate internal and
external communication. The paper concludes with recommendations for how to make the logic model development process more useful for
community-based initiatives.
© 2005 Elsevier Ltd. All rights reserved.

1. Introduction

For several decades, "logic models" have been used as tools for program planning, management, and evaluation (Bickman, 1987; Chen, 1990; Chen & Rossi, 1983; Wholey, 1987). A logic model is a graphic display or 'map' of the relationship between a program's resources, activities, and intended results, which also identifies the program's underlying theory and assumptions (McLaughlin & Jordan, 1999; Renger & Titcomb, 2002). In recent years, many funders have begun to require that community-based initiatives develop logic models as part of their grant applications and for ongoing monitoring and reporting.[1] At the same time, program evaluators are increasingly using logic models to identify and measure expected results (see, for example, Hebert & Anderson, 1998; Kagan, 1998; Milligan, Coulton, York, & Register, 1998; Torvatn, 1999).

[1] The US Department of Health and Human Services has required logic models from all Community Access Program grantees and applicants and for the Centers for Disease Control REACH program. The W.K. Kellogg Foundation has used logic models in many of its initiatives, and has developed a resource book on logic model development for practitioners and program developers: W.K. Kellogg Foundation (2001). Logic model development guide: Using logic models to bring together planning, evaluation, & action. Battle Creek, MI: W.K. Kellogg Foundation.
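
To make this structure concrete, the sketch below records one hypothetical program component using the categories discussed in this paper: resources, activities, outputs, outcomes, and underlying assumptions. It is our illustration only, not a representation of any of the initiatives or of the Kellogg guide's format; all names and entries are invented, and the simple "missing links" check merely echoes the kind of review described later for the CAP sites.

```python
# Illustrative sketch only: a minimal, hypothetical way to record one
# component of a logic model. Field names follow the categories used in
# this paper; the sample entries are invented and describe no real program.
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogicModelComponent:
    """One programmatic component of a community initiative's logic model."""
    name: str
    resources: List[str] = field(default_factory=list)   # inputs available to the program
    activities: List[str] = field(default_factory=list)  # what the program will do
    outputs: List[str] = field(default_factory=list)     # direct products of the activities
    outcomes: List[str] = field(default_factory=list)    # intended changes in participants
    assumptions: List[str] = field(default_factory=list) # what must hold for the logic to work

    def unexamined_links(self) -> List[str]:
        """Flag the gaps a reviewer might look for: planned activities with no
        stated assumptions, or outcomes listed without intermediate outputs."""
        gaps = []
        if self.activities and not self.assumptions:
            gaps.append(f"{self.name}: no assumptions stated for the planned activities")
        if self.outcomes and not self.outputs:
            gaps.append(f"{self.name}: outcomes listed without intermediate outputs")
        return gaps


# A hypothetical component, recorded the way a coalition working group might.
outreach = LogicModelComponent(
    name="Community outreach (hypothetical)",
    resources=["part-time coordinator", "meeting space at partner sites"],
    activities=["hold monthly workshops", "distribute referral information"],
    outputs=["12 workshops per year", "500 referral packets distributed"],
    outcomes=["participants seek preventive care earlier"],
    assumptions=["partner sites have staff time to host the workshops"],
)

print(outreach.unexamined_links())  # [] here: assumptions and outputs are both stated
```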

In light of the ubiquity of logic models in program development and evaluation, it seems important and timely to understand how they are used, and what benefits and challenges they present, at a community level. Although logic models are often used in program evaluation, ideally, the logic model approach offers practitioners a planning and management tool—to help clarify goals, achieve consensus, identify gaps in logic or in knowledge, and track progress (Millar, Simeone, & Carnevale, 2001). To what extent, and under what circumstances, do community-based initiatives find logic models to be a helpful tool in program design and implementation?

Over the past 3 years, the Center for Health and Public Service Research (CHPSR) of the Robert F. Wagner Graduate School of Public Service at New York University has worked extensively with community coalitions, providing technical assistance on logic model development and using logic models as a tool for program monitoring and evaluation. In this paper, we discuss the lessons we have learned about the usefulness of logic models to community coalitions participating in three initiatives: the US Department of Health and Human Services (DHHS) Community Access Program, which provides support to coalitions seeking to bridge service gaps for the un- and under-insured; the New York City Department of Health and Mental Hygiene's (DOHMH) Childhood Asthma Initiative, an effort to promote community partnerships in high-risk neighborhoods and to strengthen the capacity of the community institutions with which asthmatic children come into contact; and Bronx Health REACH, a comprehensive community effort funded by the Centers for Disease Control and Prevention (CDC) to eliminate racial and ethnic disparities in health outcomes, focusing on diabetes and related heart disease.

All three programs used logic models to shape their program monitoring and evaluation, and to help the grantees design, implement, and manage their programs. All three relied heavily on the W.K. Kellogg Foundation approach to logic model development (W.K. Kellogg Foundation, 2001). The process by which each program developed and used logic models is summarized briefly below.

Community Access Program. The Community Access Program (CAP), an open-ended, multi-site program, developed its comprehensive logic model from the ground up by asking the first two groups of funded sites (23 in federal fiscal year 2000, followed by 53 in 2001) to develop site-specific logic models. CHPSR worked with the grantees, providing a brief training in the logic model tool at their first meeting. Later, a dedicated CHPSR reviewer also provided individualized feedback to the year-two grantees (45 of the 53 communities as of the time of that review) about the comprehensiveness and quality of their logic models, for example, identifying missing links in the causal chain and key unexamined assumptions. Based upon the site logic models and other program documentation, as well as conversations with federal program staff, CHPSR (together with colleagues at the Rutgers Center for State Health Policy) developed a logic model for the entire CAP initiative. The resulting program-wide logic model formed the basis of the monitoring tool that was developed by the research team and used by DHHS. We later interviewed the year-two CAP sites about their experiences creating their logic models to assess the utility of the process for future grantees.[2] In subsequent years of the program, DHHS has continued to use logic models to monitor site progress, and has required CAP applicants to submit logic models as part of their grant applications.

[2] In this 45-minute telephone interview, we asked the project directors detailed questions about the process by which their coalition's logic model was developed (e.g., who was involved, how was participation determined, how many and what kinds of meetings were devoted to this task, what was hard about the process and what worked well); their perception of the usefulness of the logic model development process for program planning, management and evaluation (e.g., was any aspect of the project changed as a result of the process, is the logic model currently being used in any way, has the coalition referred back to it since it was completed); and their suggestions for improvement and future use by other CAP grantees.

New York City DOHMH Childhood Asthma Initiative. The City Department of Health and Mental Hygiene required its three evaluation communities to develop site-specific logic models. To assist them in this process, and to help shape the program evaluation, CHPSR first met separately with the leadership teams in those sites to provide an introduction to the concept of logic models and a definition of terms. We then led the participants in a structured discussion, beginning with an articulation of the intended long-term impact of the intervention, and then identifying the resources, activities, outputs, and outcomes that were necessary to achieve these results. Based upon this discussion, the research team then created a draft model, which we shared with each site at a second meeting. In this second round, the participants refined the model to ensure that it reflected their program, and began to identify the strengths and weaknesses of the underlying program theory and assumptions. Over the same time period, CHPSR worked through a similar process with the City Department of Health and Mental Hygiene. The resulting logic models were used to shape the program evaluation, to understand the extent to which there was consensus about the model within each community, and to ascertain the degree to which the community models were similar to or diverged from the DOHMH program-wide model.

Bronx Health REACH. From the beginning of the REACH program, the CDC emphasized the use of logic models as a tool for planning and as a way to identify and measure interim outcomes. With the encouragement of the CDC program staff, each of the Bronx Health REACH coalition's working groups, with the assistance of CHPSR, developed a logic model for its programmatic component: the Nutrition and Fitness Program, the Faith-Based Outreach Initiative, the Community Health Advocacy Initiative, the Public Education Campaign, and the Legal and Regulatory Initiative. The steering committee of the coalition simultaneously created a logic model for the entire initiative. Together, these logic models were used to plan activities, set goals, and develop monitoring tools. The coalition leadership and staff also used the logic models to develop an annual work plan for the coalition, to shape the contracts with the community partners, and to report on program changes and progress to the CDC.

2. Findings

2.1. Building consensus and fostering collaboration

Many logic model proponents believe that the process of developing a logic model forces participants to articulate and clarify the project's goals and assign responsibility for tasks and outcomes, thereby helping to foster collaboration and build consensus (Goodman, 1998; McLaughlin & Jordan, 1999; Millar et al., 2001; Patton, 1986; Weiss, 1995). In our experience, these benefits tend to accrue to coalitions that are already fairly strong and collaborative. As part of our process evaluations for the NYC Childhood Asthma Initiative and Bronx Health REACH, we assessed the strengths of the participating coalitions by examining the members' sense of shared vision, their degree of participation in decision-making, and the partnership's lifespan and growth (see, e.g., Lasker & Weiss, 2003). The strong coalitions with which we worked tended to view the logic model development process as an opportunity to build consensus.
For example, in the NYC Childhood Asthma Initiative, the strongest coalition included representatives from many of the community partners in their logic model development process. The lead agency subsequently presented the program logic model at a community meeting to elicit feedback and create consensus, and later independently used a collaborative logic model development process to design their program plan for the following year. Similarly, Bronx Health REACH's strong coalition encouraged a high degree of participation among their members in the logic model development process, and used the process to develop mutually agreed upon annual goals for the partners' subcontracts. By contrast, in the weakest coalition participating in the Childhood Asthma Initiative, only members of the lead agency attended the two logic model development sessions, and we found it difficult to get them engaged in the process beyond listing their program activities.

Using logic model development to foster collaboration can be challenging for organizations that are stretched thin in terms of their resources, or spread wide in terms of the location of their members. In the CAP initiative, one of the explicit goals for the logic model development process was to strengthen the collaboration among the different public and private sector organizations participating in each site's project. Yet most of the sites we interviewed did not place high priority on involving coalition members in the initial development of the draft. Generally, staff developed a draft model and then shared it with a wider circle of coalition members. Several sites explained that their coalition members were geographically dispersed and difficult to convene. Many partners were already stretched thin in implementing the project, and the staff was reluctant to burden them with this additional task. However, those sites that engaged in a collaborative logic model development process (six of the 45) uniformly characterized it as positive. Several noted that the collaboration created a shared understanding of how and why the Community Access Program was expected to work—and what outcomes it was expected to achieve—given its resources and planned activities. Those that engaged in such a collaborative effort tended to submit more complete models, perhaps because of a wider range of input and scrutiny, and perhaps because they gave the process greater priority.

Collaboration in developing logic models can also be challenging for coalitions that comprise a diverse group of organizations and individuals, even if the coalition is strong and geographically compact. For example, the logic model development process in Bronx Health REACH was very collaborative in nature, including an extraordinarily diverse group of coalition members, ranging from highly educated professionals to grass-roots participants with much lower levels of formal education. There was also a range among the participants in terms of their role in the initiative, from program planners and evaluators to program implementers and managers. Under these circumstances, the level of interest in, and patience for, logic model development varied. Through trial and error, the coalition discovered several strategies to ensure that the process was both useful and collaborative. First, logic model development worked best in small, interactive group settings. At full coalition meetings, it was impossible to keep everyone engaged, and difficult to pitch the discussion at a level that was comfortable and productive for all attendees. Second, the symbols and the language often used in logic models were unfamiliar and daunting to many. For example, many logic models make a distinction between outputs ("the direct products of program activities and may include types, levels and targets of services to be delivered by the program") and outcomes ("the specific changes in program participants' behavior, knowledge, skills, status and level of functioning") (W.K. Kellogg Foundation, 2001, p. 2). This distinction was not intuitive for many, regardless of educational level, and was difficult for most of the participants to grasp. (This distinction also proved to be elusive and troublesome for the majority of CAP communities.) Ultimately, in working with the Bronx coalition, we abandoned this terminology, as well as any complex graphics, and decided simply to link activities to a range of results, which, in turn, led to other results.

2.2. Strengthening program design by assessing underlying assumptions

Assumptions play an essential role in the logic model: to design a project that has a good chance of success, project planners need to articulate what they expect to be true, so they and their colleagues can highlight any gaps in the logic of the program and assess whether these assumptions will, in fact, turn out to be valid (Renger & Titcomb, 2002; Weiss, 1995). Those community initiatives that identified the underlying assumptions for at least part of their programs found this to be the most valuable part of the logic model development exercise. In several cases, it was through the articulation of the underlying assumptions that the sites were able to identify gaps in their program, sharpen their thinking, or build a credible case in support of the program concept.

Several of the communities with which we worked changed staffing plans after examining their program assumptions. For example, the Bronx Health REACH coalition redirected resources in order to hire a part-time coordinator for their Faith-Based Outreach Initiative after identifying and then questioning their assumption about the capacity of small, local churches to carry out the program. Similarly, one of the CAP communities, after examining their assumptions about the relationship between quality management and clinical care coordination, decided to change their staffing plans, creating two senior-level positions instead of the one originally planned.
was not there” and recognized that these “partners were not the need to explore best practices become evident. In the
yet ready to share information.” By reviewing their program case of the NYC Childhood Asthma Initiative, a review of
assumptions, the Bronx Health REACH coalition recog- the literature on community health worker models was
nized the disjuncture between their long-term goal of undertaken well after the program had been designed and
community-wide change and mobilization, and small size of the contractual arrangements set.
their programs. This process led the coalition to refocus its Several communities expressed concern about the
efforts on replication of their programs in other institutions potential risks of critically examining program assumptions.
and expansion of their partnerships to support community- Individuals and organizations may resent being asked to
organizing efforts. The coalition also realized that their logic question long-held beliefs, or to provide evidence to support
model assumed that coalition partners and other participat- their work. Moreover, the questioning of program assump-
ing institutions would change their policies and practices tions may lead to the need to reallocate resources and
related to diabetes prevention and detection. By articulating responsibilities that have already been allotted. Such a
this program assumption, the coalition leadership recog- process potentially raises issues not only within the
nized that such changes would be unlikely to occur absent community coalition, but also with funders. One site that
specific supporting resources, activities and goals. modified their approach after examining their assumptions
Although the communities that articulated underlying noted that “it is dangerous to do one of these [logic models]
program assumptions universally found this to be the most after the grant has been funded. By doing this, you may spot
useful part of the logic development process, in our gaps in your original application and then you worry about
experience, very few sites complete this task. Only six of telling the program office that you can’t do what you said
the 45 CAP logic models that we reviewed submitted fully you would.”
realized sets of assumptions. For example, often the most In some instances, community partnerships can feel that
unreliable assumptions are those that state that people will, it is not their role to examine or question the program
without coaching, change their work or care-seeking habits assumptions. For example, in the NYC Childhood Asthma
because of the existence of a new type of technology. Over Initiative, the City Department of Health and Mental
70% of the year 2001 CAP sites (33/45) planned some sort Hygiene, through its contracting process with the commu-
of patient information system or computer-based referral nities, was quite prescriptive about program activities. As a
system as part of their CAP project. Of these, less than half result, all of the sites were clear about their activities and
(15) included in their logic models any explicit assumptions about the long-term results that were expected from the
that providers or patients will use the new system. This program. But they were less clear about how the former
raised concerns that the sites had not planned fully for the would lead to the latter. In all three communities with which
implementation of new technology or anticipated common we worked, the coalitions seemed to take the causal
obstacles. connections as a given. Perhaps because they did not design
Given how useful this process can be, why is it that the intervention, they did not examine the theory by which
so few communities identify a full set of program their activities were to lead to desired outcomes. Since the
assumptions? activities and the outputs were prescribed, the sites did not
Often when funders require the development of a see the utility of engaging in a discussion of the assumptions
program logic model, the emphasis is on laying out the that underlay the intervention. Although some of the
activities and expected outcomes. Although articulating the communities recognized that questions about program
underlying rationale for a program is critical to its success, it assumptions were relevant to an understanding and assess-
is frequently a second-generation or post hoc activity—one ment of the City-wide Asthma Initiative, they saw their own
that is never quite completed. In addition, the discussion of role as limited to program implementation, rather than
program assumptions seems to be the place where there is design and reflection.
the biggest disconnect between planners/evaluators and
program managers/implementers. Often managers are will- 2.3. Facilitating communication
ing and able to layout activities and expected outputs, but
the time-consuming process of articulating and assessing Many of the community coalitions that developed logic
the strength of assumptions through a literature review or models identified an unexpected side benefit: by having a
discussion with experts can feel like a distraction (Renger & logic model that succinctly laid out program activities and
Titcomb, 2002). As one program manager said, “(we) are so expected results, the coalition was able to communicate
busy implementing, implementing, implementing.” In the more effectively with both internal and external constitu-
few instances where sites looked to the literature or best encies (see McLaughlin & Jordan, 1999). For example, one
practices to test the strength of their program assumptions, CAP site commented that by articulating the assumptions of
the exercise did not prove useful. In the case of Bronx their program, they were able to identify what they “needed
Health REACH, few resources seemed relevant to the from each of the partners.” Having a logic model in hand
program’s ambitious goals. Only once the coalition faced then gave them “credibility in asking for it.” Another
the nitty-gritty aspects of program implementation did coalition used the logic model as a clear summary of
the project that “allows everyone to see their role.” Several participants in applying the scientific method—the articula-
used their logic models to develop and communicate work tion of a clear hypothesis or objective to be tested—to their
plans. One CAP community reported that they also used their project development, implementation, and monitoring.
logic model for staff orientations. Training in this new way of thinking takes time. In our
Across all three initiatives, community coalitions used experience in working with the communities across these
their logic models to explain their programs to funders and three programs, the development of a strong logic model is
prospective supporters. For example, one of the CAP not a quick and easy process. We have seen a wide variation
grantees found their logic model to be a useful tool in their in completeness, coherence, sophistication of thought, and
application for state and federal waivers. They noted that the reflection in the logic models that we have reviewed. For
logic model was particularly helpful with funders and example, in the CAP initiative, where only a few hours of
government agencies that are outcome-oriented, since it centralized technical assistance was provided with minimal
makes clear that “you expect to achieve interim outcomes follow-up, about 40% of the sites (18/45) submitted logic
and milestones.” This same community used their logic models that reflected an understanding of what a logic
model to identify separate programmatic components that model should be and a thoughtful effort to display their
might be of interest to specific foundations. Another plans for their project in this new way. The communities
community noted that by showing their assumptions, they with which we worked as part of the NYC Childhood
were able to make clear to their constituencies and to Asthma Initiative and Bronx Health REACH also struggled,
government officials that they did not have to reduce quality particularly with the task of developing a full set of
in order to reduce costs. underlying program assumptions. As with the use of any
Because the CAP initiative was so large and open-ended, complex tool, the effective use of logic models by
the logic models provided a way to develop a typology of community coalitions requires training, time and resources.
program approaches that categorized the many diverse Hands-on technical assistance can allow the value of the
interventions into overarching and succinct descriptions logic model to become more apparent to grantees, and result
with articulated outcomes. This allowed the DHHS to in stronger projects that are more likely to achieve their
explain the program to external audiences, including goals. Ideally, those providing technical assistance should
Congress, and to link communities that were undertaking remain involved with the project and available to revisit and
similar efforts. help revise the models that are developed.
In several instances, the development of logic models To overcome the resistance of those who may feel that
also served to clarify expectations and identify differences the logic model exercise is a distraction from the true work
between the funder’s priorities or perceptions of the of program implementation, or who are intimidated by its
program and those of the community. For example, in the jargon, those providing support and technical assistance
NYC Childhood Asthma Initiative, although the community need to be flexible enough to allow the community to adapt
logic models showed a clear understanding of what was the tool to meet its needs. In addition, the language and the
expected of them, in a few instances, the logic models models themselves must be kept simple enough to convey
served to highlight differences in emphasis. For example, the program’s underlying rationale, not “shrouded” by
the City Department of Health and Mental Hygiene sought “overlaying all the elements of evaluation” (Renger &
to “change.standard operating procedures of community Titcomb, 2002, p. 495). Renger and Titcomb suggest, for
institutions.” In the site logic models, however, the changes example, that a program’s underlying rationale can be most
in community institutions focused more on enhancing staff simply discerned by repeatedly asking the question “why,”
knowledge and providing services, and less on changing thereby allowing program planners and evaluators to
policies or procedures. Similarly, in the Bronx Health identify the causal factors that are being targeted.
REACH initiative, the logic model for the Faith-Based Because developing and questioning the underlying
Outreach Initiative served to clarify to the CDC that the assumptions of the program model can be threatening to
primary purpose of the program was not individual behavior participants, it is important that the process be done in a
change among church members (for example, weight loss), sensitive and collaborative way, so that it strengthens the
but community mobilization around racial and ethnic program without dampening enthusiasm or diminishing gut-
disparities in health, and access to healthy food and health level commitment. Community coalitions also need to
services. understand the potential for using logic models directly to
further program implementation, through, for example,
work plan development and communication with potential
3. Lessons learned supporters.
Funders need to be aware that program structures can
Part of the difficulty of logic model development may also inadvertently militate against grantees’ taking the logic
be its greatest strength: that it forces planners and managers model development process seriously and being willing to
to think of their projects in a conceptually different way. In assess and question underlying assumptions. In addition to
its essence, use of the logic model guides program providing adequate time and resources to support the process,
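
As a rough, hypothetical illustration of this "why" questioning (our sketch, not a procedure specified by Renger and Titcomb or used in these initiatives), the snippet below strings one invented activity and successive answers to "why?" into a causal chain, then lists the assumption implied by each link. All program content is made up for illustration.

```python
# Illustrative sketch only: a toy rendering of repeatedly asking "why?"
# about a planned activity, with the implied assumptions made explicit.
# All activities and reasons below are hypothetical.
from typing import List


def why_chain(activity: str, reasons: List[str]) -> List[str]:
    """Build an ordered causal chain from an activity to its ultimate
    rationale; each reason answers "why?" about the step before it."""
    return [activity] + reasons


def implied_assumptions(chain: List[str]) -> List[str]:
    """Every adjacent pair in the chain implies an assumption that the
    earlier step will in fact produce the later one."""
    return [
        f'Assumes that "{earlier}" will lead to "{later}"'
        for earlier, later in zip(chain, chain[1:])
    ]


# Hypothetical facilitation exercise for one planned activity.
chain = why_chain(
    "hold monthly nutrition workshops at partner sites",
    [
        "residents learn where to find affordable fresh produce",
        "families change what they buy and cook",
        "diet-related risk factors decline",
        "disparities in health outcomes narrow",
    ],
)

for assumption in implied_assumptions(chain):
    print(assumption)
```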

Because developing and questioning the underlying assumptions of the program model can be threatening to participants, it is important that the process be done in a sensitive and collaborative way, so that it strengthens the program without dampening enthusiasm or diminishing gut-level commitment. Community coalitions also need to understand the potential for using logic models directly to further program implementation, through, for example, work plan development and communication with potential supporters.

Funders need to be aware that program structures can also inadvertently militate against grantees' taking the logic model development process seriously and being willing to assess and question underlying assumptions. In addition to providing adequate time and resources to support the process, funders should emphasize the importance of articulating assumptions, integrate the use of logic models into periodic program reviews, and allow their adaptation to local needs. Most importantly, since an in-depth examination of assumptions may well lead to program modifications, funders need to be open to such changes and have a system in place for reviewing modifications in program design.

Acknowledgements

We wish to thank two anonymous reviewers and our colleagues, John Billings and Carolyn Berry, for their thoughtful comments, as well as the Community Access Program coalitions, the NYC Childhood Asthma Initiative partnerships, and the Bronx Health REACH coalition, with which we have been privileged to work.

References

Bickman, L. (1987). The functions of program theory. In L. Bickman (Ed.), Using program theory in evaluation. New directions for program evaluation (Vol. 33). San Francisco: Jossey-Bass.
Chen, H. (1990). Theory-driven evaluations: A comprehensive perspective. Newbury Park, CA: Sage.
Chen, H., & Rossi, P. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7, 283–302.
Goodman, R. M. (1998). Principles and tools for evaluating community-based prevention and health promotion programs. Journal of Public Health Management and Practice, 4(2), 37–47.
Hebert, S., & Anderson, A. (1998). Applying the theory of change approach to two national, multisite comprehensive community initiatives. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), New approaches to evaluating community initiatives (Vol. 2, pp. 123–148). Washington, DC: The Aspen Institute.
Kagan, S. (1998). Using a theory of change approach in a national evaluation of family support programs: Practitioner reflections. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), New approaches to evaluating community initiatives (Vol. 2, pp. 113–121). Washington, DC: The Aspen Institute.
W.K. Kellogg Foundation (2001). Logic model development guide: Using logic models to bring together planning, evaluation, & action. Battle Creek, MI: W.K. Kellogg Foundation.
Lasker, R. D., & Weiss, E. S. (2003). Broadening participation in community problem solving: A multidisciplinary model to support collaborative practice and research. Journal of Urban Health, 80, 14–60.
McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: A tool for telling your program's performance story. Evaluation and Program Planning, 22, 65–72.
Millar, A., Simeone, R. S., & Carnevale, J. T. (2001). Logic models: A systems tool for performance management. Evaluation and Program Planning, 24, 73–81.
Milligan, S., Coulton, C., York, P., & Register, R. (1998). Implementing a theory of change evaluation in the Cleveland community-building initiative: A case study. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), New approaches to evaluating community initiatives (Vol. 2, pp. 45–85). Washington, DC: The Aspen Institute.
Patton, M. Q. (1986). Utilization-focused evaluation (2nd ed.). Beverly Hills, CA: Sage Publications.
Renger, R., & Titcomb, A. (2002). A three-step approach to teaching logic models. American Journal of Evaluation, 23(4), 493–503.
Torvatn, H. (1999). Using program theory models in evaluation of industrial modernization programs: Three case studies. Evaluation and Program Planning, 22, 73–82.
Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and context (pp. 65–92). Washington, DC: The Aspen Institute.
Wholey, J. S. (1987). Evaluability assessment: Developing program theory. In L. Bickman (Ed.), Using program theory in evaluation. New directions for program evaluation (Vol. 33). San Francisco: Jossey-Bass.
