CHAPTER 10
Approaches to Evaluation
Organizations can use two modalities in conducting evaluation: formative or
summative. According to Monette and his colleagues (2014), the choice between
these two approaches depends on two basic goals of conducting evaluation research.
First, the goal of formative evaluation is to provide valuable inputs to guide planning,
development and implementation of specific initiatives, policies, or programs. It
primarily focuses on ensuring that programs are well integrated, purposeful, and
strategic in relation to the overall goals of the organization. Formative evaluation can
also be used to assist in policy and management decisions that result in incremental
changes designed to improve existing policies and programs (Wholey 1996). For
example, this type of evaluation is often helpful for pilot projects and new programs,
but can also be used to monitor the progress of ongoing programs.
On the other hand, the goal of summative evaluation is to understand and
measure the impact of programs at the end of an operating cycle. It verifies that programs have achieved both effectiveness and efficiency (Hamilton and Chervany 1981; Monette et al. 2014). As such, findings are used to help decide whether a program should be adopted, canceled, continued, or modified for improvement. This type of evaluation is typically done for large-scale projects that
use various human and material resources to provide a systematic, unbiased, and
holistic picture of the extent and applicability of programs to other contexts or
populations.
Models of Evaluation
Beyond formative and summative approaches, there are other models widely used by OD practitioners in evaluation (Alzahmi, Rothwell, and Kim 2013): Kirkpatrick’s levels of evaluation, Holton’s evaluation model, the balanced scorecard, and the appreciative inquiry approach. Beyond these, logic models can also be used as a tool for evaluation.
Among the models listed above, the most frequently cited in the literature is Kirkpatrick’s levels of evaluation. Usually applied in the context of training, the model comprises four increasing levels of evaluation that Donald L. Kirkpatrick (1996) referred to as reactions (level one), learning (level two), behavior (level three), and results (level four)
(Kirkpatrick and Kirkpatrick 2006). Reactions refer to participants’ feelings
of satisfaction toward the program or intervention. Learning looks at whether
important information is remembered and understood by the participants. Thirdly,
the behavioral level validates actual learning and application of knowledge by the
participants in their job. Lastly, results (“bottom-line” or organizational impact)
refer to improvements in goals or desired business outcomes. This is linked to the fifth level, return on investment (ROI), which measures the cost of the program vis-à-vis organizational targets (Phillips 1996). The systematic, relatively simple, and
outcome-focused approach to evaluation makes this model popular (Bates 2004).
Despite some criticisms of Kirkpatrick’s model (e.g., lack of empirical validation,
reliance on anecdotal testimony), it remains a useful tool for program facilitators and
OD practitioners.
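To make the levels concrete, the sketch below shows one way evaluation data at each Kirkpatrick level might be tallied, together with Phillips’s fifth-level ROI computation (net program benefits divided by program costs, times 100). This is an illustrative sketch only; all participant scores and peso-or-dollar figures are hypothetical and not drawn from this chapter.

```python
# Illustrative sketch: tallying evaluation data for Kirkpatrick's four levels,
# plus Phillips's fifth-level ROI. All figures below are hypothetical.
from statistics import mean

reactions = [4, 5, 3, 4, 5]                 # Level 1: satisfaction ratings (1-5 scale)
pre_scores = [55, 60, 48, 70]               # Level 2: pre-test knowledge scores (%)
post_scores = [78, 82, 66, 88]              # Level 2: post-test scores, same participants
applied_on_job = [True, True, False, True]  # Level 3: skill observed on the job
program_benefits = 120_000.0                # Level 4: monetized business results (hypothetical)
program_costs = 80_000.0                    # total program cost (hypothetical)

print(f"Level 1 mean reaction: {mean(reactions):.2f} / 5")
print(f"Level 2 mean learning gain: {mean(post_scores) - mean(pre_scores):.1f} points")
print(f"Level 3 application rate: {sum(applied_on_job) / len(applied_on_job):.0%}")
# Phillips (1996): ROI (%) = (net benefits / costs) * 100
roi = (program_benefits - program_costs) / program_costs * 100
print(f"Level 5 ROI: {roi:.0f}%")
```

Note that each successive level requires progressively harder data to obtain: reactions can be surveyed on the spot, while results and ROI demand monetizing outcomes, which is part of why many evaluations stop at the lower levels.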
The Holton Evaluation model (Holton 1996) attempted to improve Kirkpatrick’s
model by accounting for intervening factors like readiness, motivation, design,
and reinforcement on the job. The inclusion of these variables aims to explain the
discrepancy between “positive” outcomes and “absence” of real on-the-job behaviors
(Donovan, Hannigan, and Crowe 2001). On the other hand, Kaufman and Keller (1996) and Kaufman, Keller, and Watkins (1996) explicitly expanded the scope and
usefulness of Kirkpatrick’s original model to include the individual (micro-level),
organization (macro-level), and external clients as well as society at large (mega-
level).
Both Kirkpatrick’s and Holton’s models emerged from training interventions. In contrast, the balanced scorecard is a strategic planning and performance management system. It is used extensively in various industries to align business
activities to the vision and strategy of the organization, improve internal and external
communications, and monitor organization performance against strategic goals. It
originated from Robert Kaplan and David Norton as a performance measurement
framework that added strategic non-financial performance measures alongside
traditional financial metrics to maximize “intangible” assets such as employee skills
or customer satisfaction with the goal of enhancing value creation and organizational
performance (Kaplan and Norton 1992; Kaplan 2010). Key elements of the balanced
scorecard are strategy maps, strategic objectives, initiatives, and criteria measures.
In addition, the balanced scorecard system provides multi-level communication
channels for discussions in evaluating the processes of an organization (Cooper
n.d.). Perhaps the reason this approach has been so popular in evaluation is that it appears to be strategic and focuses on the bottom line (McLean 2005), which makes it helpful in reviewing interventions and how they affect the financial aspects of the organization. However, the balanced scorecard has also been criticized because:
1) the system focuses only on the most easily measured outcomes, 2) it is perceived
as an overly simplistic strategic model, and 3) the goals in the scorecard can become
obsolete quickly unless there is consistent effort to keep them up to date (McLean
2005). As such, most organizations will use the balanced scorecard in evaluation but will not rely completely on this framework.
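As a rough illustration of how the scorecard’s elements fit together, the sketch below models Kaplan and Norton’s four perspectives (financial, customer, internal process, and learning and growth) as measures checked against targets. The specific measures and numbers are hypothetical placeholders, not prescriptions from the balanced scorecard literature.

```python
# Minimal sketch of a balanced scorecard: Kaplan and Norton's four
# perspectives, each holding measures compared against targets.
# All measures and numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    target: float
    actual: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # A measure is on track when actual meets or beats its target.
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

scorecard = {
    "Financial": [Measure("Revenue growth (%)", 10, 12)],
    "Customer": [Measure("Customer satisfaction (1-5)", 4.0, 3.7)],
    "Internal process": [Measure("Order cycle time (days)", 5, 6, higher_is_better=False)],
    "Learning and growth": [Measure("Training hours per employee", 20, 24)],
}

for perspective, measures in scorecard.items():
    for m in measures:
        status = "on track" if m.on_track() else "needs attention"
        print(f"{perspective}: {m.name} = {m.actual} vs target {m.target} ({status})")
```

Even this toy structure illustrates one of McLean’s criticisms: only what is easily quantified makes it onto the scorecard, so the measures must be revisited regularly to stay relevant.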
A logic model describes the logical linkages among a program’s resources, activities, outputs, and outcomes (McLaughlin and Jordan 2010). The logic model for a program identifies the desired outcomes, while evaluation revolves around the extent to which these outcomes are met.
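One way to picture this relationship is sketched below: a logic model laid out as a chain from inputs to outcomes, with the outcomes column supplying the evaluation’s reference points. The program elements listed are hypothetical examples, not taken from this chapter.

```python
# Illustrative sketch of a program logic model: a chain from inputs through
# activities and outputs to outcomes. All entries are hypothetical.
logic_model = {
    "inputs": ["facilitators", "training budget", "venue"],
    "activities": ["leadership workshops", "coaching sessions"],
    "outputs": ["40 managers trained", "8 workshops delivered"],
    "outcomes": ["improved team feedback scores", "lower staff turnover"],
}

# Evaluation questions follow directly from the desired outcomes.
for outcome in logic_model["outcomes"]:
    print(f"To what extent was '{outcome}' achieved?")
```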
Evaluation Methods
Evaluation requires the systematic collection of information about an intervention’s activities, characteristics, and program outcomes; what is imperative is that all of this information gathering is anchored to the goals of the intervention. From here, one can make sound judgments about the merits of a program, its effectiveness, and future programming (Thomas, Corso, and Pietz 2013). Evaluations
can use surveys, interviews, group discussions, pre- and post-tests, or observations.
Evaluations can also elicit both quantitative and qualitative data. For instance,
training programs may have survey evaluations at the end to see the value of the
program to its participants in terms of program management, facilitators, and venue.
Conducting a focus group discussion after a team-building exercise may help gather
in-depth information on the effect of the intervention on team dynamics. Another
common technique is to collect survey or narrative information (“success stories”)
as a part of program follow-up, to find out if longer-term outcomes like behavior
change were realized among participants. A challenge in selecting tools and methods for program evaluation is weighing the trade-offs between using pre-existing measurement tools and creating ones that are unique to a specific program.
Other decisions about tools for program evaluation efforts include what types of
software to use for data analysis and the use of online data collectors and mobile
device survey designs (Extension.org 2015).
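As a simple quantitative example of the pre- and post-test method mentioned above, the sketch below compares paired scores for the same participants. The scores are hypothetical, and a real evaluation would tie each test item back to the intervention’s stated goals.

```python
# Hedged sketch: a paired pre/post comparison for a training evaluation.
# Scores are hypothetical; each participant appears once in each list.
from statistics import mean, stdev

pre = [52, 61, 47, 70, 58]    # pre-test scores per participant
post = [75, 80, 63, 85, 72]   # post-test scores, same order of participants

gains = [after - before for before, after in zip(pre, post)]
print(f"Mean gain: {mean(gains):.1f} points (SD {stdev(gains):.1f})")
print(f"Participants who improved: {sum(g > 0 for g in gains)} of {len(gains)}")
```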
Sustainability in OD
The next logical step after evaluation is sustainability or institutionalization
of changes. This involves ensuring that the changes persist in the organization
(Cummings and Worley 2005). Thus, sustainability requires continued resolve to ensure that interventions keep working and consistently produce the desired results for the betterment of the institution.
Rogers and Hudson (2011) suggest that one of the central issues concerning sustainability is the need for changes in thinking and practices at every level: individual, group, and organization. Although OD interventions may be spearheaded by a few people at the start, sustainability requires the participation of
the entire organization, so that each member propels and manifests the needed
changes. By allowing all members of the organization to join in sustaining the
change momentum, the sustained change becomes the new normal. Sustainable
change endures and becomes strategic when it is not dependent on just one person
but exists within the culture, embedded in the mindset, activities, and systems of the organization.
Drivers
A study on OD in NGOs and POs found that a major factor in increasing
sustainability of OD interventions is creating a mindset and culture of readiness for
change (De Dios and Reyes 2005). This requires creating a culture of learning and
continuous improvement to allow organizations to prosper. Such a culture necessitates the creation of spaces within the organization where individual members, leaders, and managers build their capacity to learn for continuous development. Senge (1990)
defines sustainability as an organization’s capacity to continually expand its ability to
create its future. As such, OD processes move towards bringing out a more reflexive
organization capable of adapting and managing changes and complexities.
In a similar study, Cawrse and Walsh (2005) developed a framework based on
communities of practice in the Department of Education (DepEd) across three
regions in Mindanao (Regions XI, XII, and ARMM). They found that a key strategic resource for building capacity and improving organizational performance is Stewardship, which entails sustaining momentum through natural shifts in practice, members, and technology. In Stewardship, conferences and concurrent sessions were held to share information, knowledge, achievements, best practices, and insights, allowing the inculcated changes to continue within DepEd. From this
point, each member considered how to sustain the momentum of change, how the practice could be extended beyond organizational boundaries, and how to use the information shared. This promoted a sense of ownership and confidence
in enacting the change. This offered DepEd the promise of sustaining improved
organizational performance, and for Mindanao a promise of greater social and
economic development.
Among cooperatives in the Philippines, Salvosa (2007) found that the operational framework to assess governance performance in this type of organization is anchored on the universal principles of cooperatives and elements of good governance. Management and other members of the organization used these principles and elements as benchmarks to monitor, evaluate, and decide whether the cooperative is moving closer to optimal performance. She recommends anchoring key aspects of governance to a systems approach and to allowing everyone in the organization to partake in and become accountable for the change. Meaningful and sustainable changes, however, take time and require not only internal reforms but also reinforcement of the value of these reforms in the wider environment outside the organization.
A survey on evaluating and sustaining OD interventions conducted by the
authors among 24 OD practitioners from various corporate and non-profit industries
found that recognizing human capital as valuable in the organization is central to
evaluation processes. In addition, training on how to evaluate was deemed important
by respondents. The respondents also noted the importance of several other factors.
Challenges
Despite the drivers of evaluation and sustainability, there are challenges that
beset these two OD facets. In various multi-country studies, Hofstede et al. (2010)
found that having a long-term orientation helps people prepare for the future.
Unfortunately, the Philippines scored low on this dimension, which suggests that
Filipinos prefer short-term solutions and fail to sustain these solutions for the long
term. This makes OD processes difficult because of the resistance to change, and
makes sustainability of change doubly problematic because of the Filipino’s short-
term perspective. This was echoed in a local leadership study in which Filipino leaders ranked lower than their global counterparts in terms of strategic judgment, persistence, planning, and analytical thinking (Lanuza and Wells 2005). It was further reflected
in a model for leading organization transformation in the Philippines created by the
Ateneo Center for Organization Research and Development in 2012 (Hechanova
and Franco 2012). The short-term orientation of Philippine organizations makes it
difficult to create long-term and systematic transformation that is vision-driven.
A study by de Dios and Reyes on non-government organizations (NGOs) and
people’s organizations (POs) in 2005 analyzed factors influencing intervention
efforts. Findings showed that factors detrimental to sustainability include an increasing number of development agents, decreasing funding assistance, the perceived diminishing impact of interventions, a fragmented civil society, and the diversity of development ideologies (De Dios and Reyes 2005).
Moreover, the results of the chapter authors’ survey highlight the challenges related to these two functions. Of those surveyed, 64 percent claimed not to have done any evaluation of their OD interventions. The most common rationale offered by the respondents was the lack of perceived value in conducting an evaluation. This mindset resulted in the absence of systems, and of dedicated people in charge, for assessing the usefulness of interventions. Respondents were also unfamiliar with how evaluation should be done and lacked the necessary competencies to do so. Some mentioned that OD was not appreciated by organization leaders. Respondents reported that support from top management was important in promoting evaluation; without such support, evaluation cannot be accomplished.
In regard to sustainability efforts in the context of organization development,
71 percent of the respondents claimed that their organizations do not focus on the
sustainability of OD intervention efforts. Reasons mentioned include a lack of interest from upper management, the absence of a culture of sustaining efforts, and a lack of appreciation for such efforts. When organizations neglect sustainability, they fail to ensure project continuity, maintain long-term changes, and continue institutional development. Evaluation and sustainability are fundamental components of OD practice, requiring top management support and a paradigm change for the entire institution.
DISCUSSION QUESTIONS
1. What are the factors to be considered in ensuring the sustainability of any OD
intervention?
2. How can leaders affect the sustainability efforts of an organization?
3. What would hamper the evaluation of OD interventions in the following sectors: the academe, non-profit organizations, and the government?
4. Given that organizational life is heavily influenced by interpersonal relationships, what are the drivers of and challenges to evaluation and sustainability for the Filipino?
REFERENCES
Alzahmi, Rashed, William Rothwell, and Woocheol Kim. 2013. “A Practical
Evaluation Approach for OD Interventions.” International Journal of Research in
Management, Economics, and Commerce 3, no. 3: 43– 65.
Bates, Reid. 2004. “A Critical Analysis of Evaluation Practice: The Kirkpatrick
Model and the Principle of Beneficence.” Evaluation and Program Planning 27:
341–47.
Buchanan, David, Dianne Ketley, Rose Gollop, Jane Louise Jones, Sharon Saint
Lamont, Annette Neath, and Elaine Whitby. 2003. “No Going Back: A Review
of the Literature on Sustaining Strategic Change.” Research into Practice series:
NHS Modernisation Agency.
Cawrse, Scott and Ian D’Arcy Walsh. 2005. “Cultivating Communities of Practice
to Build Organisational Capacity: A Case Study of the Philippines-Australia
Basic Education Assistance for Mindanao (BEAM) Project.” Evaluation Journal of
Australasia 1–2: 22–26.
Coghlan, Anne, Hallie Preskill, and Tessie Catsambas. 2003. “An Overview of Appreciative Inquiry in Evaluation.” New Directions for Evaluation 100: 5–22.
Cooper, Elizabeth. n.d. “The Road to ERM: Using the Balanced Scorecard to Implement
Enterprise Risk Management.” White Paper, Balanced Scorecard Institute.
Cooperrider, David and Diana Whitney. 2005. Appreciative Inquiry: A Positive
Revolution in Change. San Francisco: Berrett-Koehler.
Kaplan, Robert S. and David P. Norton. 1992. “The Balanced Scorecard: Measures
that Drive Performance.” Harvard Business Review (January–February): 71–79.
Kaplan, Robert S. 2010. “Conceptual Foundations of the Balanced Scorecard.” Working Paper, March 2010. http://hbswk.hbs.edu/item/conceptual-foundations-of-the-balanced-scorecard.
Kaufman, Roger and John Keller. 1996. “Levels of Evaluation: Beyond Kirkpatrick.”
Human Resource Development Quarterly 5, no. 4: 371–80.
Kaufman, Roger, John Keller, and Ryan Watkins. 1996. “What Works and What
Doesn’t: Evaluation beyond Kirkpatrick.” Performance + Instruction 35, no. 2: 8–12.
Kirkpatrick, Donald. 1996. “Great Ideas Revisited: Techniques for Evaluating
Training Programs.” Training and Development 50, no. 1.
Kirkpatrick, Donald and James Kirkpatrick. 2006. Evaluating Training Programs: The
Four Levels. 3rd ed. San Francisco: Berrett-Koehler.
Kotter, John. 1995. “Leading Change: Why Transformation Efforts Fail.” Harvard
Business Review 73, no. 2: 59–67.
Lanuza, Godofredo and Yvette Wells. 2005. “The Paradoxes of Leadership: A Profile
of Successful Filipino Business Leaders.” In The Way We Work: Research and Best
Practices in Philippine Organizations, edited by Ma. Regina Hechanova and Edna
Franco, 86–106. Quezon City: Ateneo de Manila University Press.
Lewin, Kurt. 1947. “Frontiers in Group Dynamics.” Human Relations 1: 5–41.
Lippitt, Ronald, Jeanne Watson, and Bruce Westley. 1958. The Dynamics of Planned
Change. New York: Harcourt, Brace, and World.
Llaguno, Jennifer. 1998. “Organizational Factors in Implementing Gender and
Development: Focus on the Community Based Forest Management Program.”
Master’s thesis. University of the Philippines.
McLaughlin, John A. and Gretchen B. Jordan. 1999. “Logic Models: A Tool for Telling Your Program’s Performance Story.” Evaluation and Program Planning 22, no. 1: 65–72.
———. 2010. “Using Logic Models.” In Handbook of Practical Program Evaluation,
edited by Joseph Wholey, Harry Hatry, and Kathryn Newcomer, 55–80. 3rd ed.
San Francisco: Jossey-Bass.
McLean, Gary N. 2005. “Examining Approaches to HR Evaluation.” Strategic HR
Review 4, no. 2: 24–27.
Monette, Duane, Thomas Sullivan, Cornell DeJong, and Timothy Hilton. 2014.
Applied Social Research: A Tool for the Human Services. 9th ed. USA: Cengage
Learning.
Morato, Eduardo Jr. 2004. “Policies and Strategies for Promoting Entrepreneurship
and the Enterprise of Development.” Doctoral dissertation. University of the
Philippines.
Nabatar, Ma. Theresa. 2011. “Leadership Succession and Organizational
Development in Technical Vocational Education Organizations.” Master’s thesis.
University of the Philippines.
Nielsen, Karina and Johan Abildgaard. 2013. “Organizational Interventions: A
Research-Based Framework for the Evaluation of Both Process and Effects.”
Work & Stress 27, no. 3: 278–97.
Phillips, Jack. 1996. “Measuring ROI: The Fifth Level of Evaluation.” Technical &
Skills Training 7, no. 3: 10–13.
Preskill, Hallie and Valerie Caracelli. 1997. “Current and Developing Conceptions
of Use: Evaluation Use TIG Survey Results.” Evaluation Practice 18, no. 3: 209–
25.
Preskill, Hallie and Tessie Catsambas. 2006. Reframing Evaluation through
Appreciative Inquiry. Thousand Oaks: Sage Publications.
Preskill, Hallie and Rosalie Torres. 1999. “The Role of Evaluative Enquiry in
Creating Learning Organizations.” In Organizational Learning and the Learning
Organization: Developments in Theory and Practice, edited by Mark Easterby-
Smith, John Burgoyne, and Luis Araujo, 92–114. London: SAGE Publications.
Renger, Ralph and Allison Titcomb. 2002. “A Three-Step Approach to Teaching
Logic Models.” American Journal of Evaluation 23, no. 4: 493–503.
Rogers, Katrina and Barclay Hudson. 2011. “The Triple Bottom Line: The
Synergies of Transformative Perceptions and Practices for Sustainability.” OD
Practitioner 43, no. 4: 3–9.
Salvosa, Cristina. 2007. “Assessing Governance Performance of Selected Primary
Cooperatives in the Philippines.” Doctoral dissertation. University of the
Philippines.
Senge, Peter. 1990. The Fifth Discipline: The Art and Practice of the Learning
Organization. New York: Currency Doubleday.
Senge, Peter, Art Kleiner, Charlotte Roberts, Richard Ross, George Roth, Bryan
Smith, and Elizabeth Guman. 1999. The Dance of Change: The Challenges of
Sustaining Momentum in Learning Organizations. London: Nicholas Brealey.
Stufflebeam, Daniel and Anthony Shinkfield. 2007. Evaluation Theory, Models and
Applications. San Francisco: Jossey-Bass.
Taplin, Jessica, Dianne Dredge, and Pascal Scherrer. 2014. “Monitoring and
Evaluating Volunteer Tourism: A Review and Analytical Framework.” Journal of
Sustainable Tourism 22, no. 6: 874–97.
Taras, Maddalena. 2005. “Assessment—Summative and Formative—Some
Theoretical Reflections.” British Journal of Educational Studies 53, no. 4: 466–78.
Thomas, Craig, Liza Corso, and Harald Pietz. 2013. “Evaluation, Performance
Management, and Quality Improvement: Understanding the Role They Play
to Improve Public Health.” Division of Public Health Performance Improvement.
Office for State, Tribal, Local and Territorial Support: Centers for Disease
Control and Prevention.
Tsoukas, Haridimos and Robert Chia. 2002. “On Organizational Becoming:
Rethinking Organizational Change.” Organization Science 13: 567–82.
Wholey, Joseph S. 1996. “Formative and Summative Evaluation: Related Issues in
Performance Management.” Evaluation Practice 17, no. 2: 145–49.