UNIT V: CURRICULUM EVALUATION

Overview

Curriculum evaluation is a required and significant component of any national education system. It serves as the foundation for curriculum policy decisions, feedback on ongoing curriculum modifications, and curriculum implementation processes.
The main concerns of curriculum evaluation are:
• The effectiveness and efficiency with which government education policies are
translated into educational practice;
• The status of curriculum contents and practices in relation to global, national, and
local concerns; and
• The achievement of educational program goals and aims.

Student evaluation is an important part of curriculum evaluation since it aids in understanding the impact and outcome of educational programs. The quality of student
learning is a key indicator of a curriculum's success. Knowing how far students have
progressed toward the curriculum's objectives is critical for both improving teaching and
evaluating the curriculum.
http://www.ibe.unesco.org/fileadmin/user_upload/COPs/Pages_documents/Resource_Packs/
TTCD/sitemap/Module_8/Module_8.html

Learning Objectives

At the end of Unit V, I am able to:


1. Define the term "curriculum evaluation";
2. Determine the what, why and how of curriculum evaluation;
3. Determine the curriculum evaluation models; and
4. Differentiate between different levels of curriculum evaluation.
Setting Up

Direction: Read the scenario below and answer the following questions.
Scenario:

A local human rights non-profit organization started an after-school human rights education program for junior high school students, which is held at a church near the junior high school. It started off well, but enrollments began to diminish as time went on, and the drop was especially pronounced during the previous school year. The non-profit's board of directors concluded that an evaluation was necessary to determine: (a) if the program should be continued; and (b) if it should be continued, what needed to be done to re-energize it.
Questions:

1. Why is there a need for an evaluation in the scenario?

2. Do you think the evaluation will help solve the problem? Explain.

Lesson Proper

CURRICULUM EVALUATION
The process of making a value judgment is referred to as "evaluation." The term
"evaluation" is used in education to refer to operations involving curricula, programs, interventions,
instructional methods, and organizational aspects. Curriculum evaluation examines the influence of
implemented curriculum on student (learning) accomplishment in order to change the official
curriculum if necessary and to review teaching and learning processes in the classroom. The
evaluation of a curriculum establishes:
• Specific curriculum and implementation strengths and weaknesses;
• Critical data for strategic changes and policy decisions;
• Inputs for better learning and teaching; and
• Monitoring indicators.
Curriculum evaluation could be an internal activity and process carried out by different units
within the educational system for their own purposes. National Ministries of Education, regional
education authorities, institutional oversight and reporting systems, education departments, schools,
and communities are examples of these units.
External or commissioned review processes may be used for curriculum evaluation. They might
be research-based studies on the state and effectiveness of various areas of the curriculum and its
implementation, or they could be done on a regular basis by specific committees or task forces on the
curriculum. These methods could look into the efficacy of curriculum content, existing pedagogies and
instructional approaches, teacher training, and textbooks and instructional materials, among other
things.

Student Assessment
Curriculum evaluation's ultimate purpose is to guarantee that the curriculum is effective in
improving student learning quality. As a result, student assessment entails a review of the student's
progress. Student learning assessment has always had a strong influence on how and what teachers
teach, and is thus an important source of feedback on the appropriateness of curriculum material
implementation.
Different types of evaluation tools and procedures must be used to meet distinct goals in the areas of
diagnosis, certification, and accountability. Student learning assessment can be summative or formative,
and there are a variety of tests to meet different purposes, including standardized examinations,
performance-based assessments, ability tests, aptitude tests, and IQ tests.
(http://www.ibe.unesco.org/fileadmin/user_upload/COPs/Pages_documents/Resource_Packs/TTCD/
sitemap/Module_8/Module_8.html)
In addition, there are other concepts associated with curriculum evaluation. In his book, Pawilen
(2015) summarized many curriculum scholars' definitions of curriculum evaluation depending on how
they interpret curriculum, curriculum aims, curriculum influences, and curriculum implementation.
According to these scholars, curriculum evaluation is:
 The process of delineating, obtaining, and providing information useful for decisions and
judgments about curricula (Davis 1980);
 The process of reviewing a curriculum's goals, reasoning, and structure (Marsh 2004). (In this
book, curriculum assessment is defined as the process of assessing a curriculum's philosophy,
goals, and objectives, content, learning experience, and evaluation objectively.);
 The process of evaluating a program of studies, a course, or a topic of study for its merit and
worth (Print 1993);
 The method for determining whether or not a program is accomplishing its objectives (Tuckman
1985);
 The comprehensive and ongoing investigation into the consequences of using content and
processes to achieve clearly defined objectives (Doll 1992); and
 The process of identifying, acquiring, and disseminating relevant data for evaluating decision
alternatives (Stufflebeam 1971).
As a result, curriculum evaluation entails determining whether the curriculum is relevant and
responsive to the demands of society and learners. It is a dynamic and scientific technique for
determining the worth of any program.
Purposes of Curriculum Evaluation
Print (1993) identified several important purposes and functions of evaluation in the school setting:
 Essential in delivering feedback to students - gives useful information to help students improve
their performance while also assisting teachers in identifying students' strengths and
weaknesses.
 Helpful in determining how effectively students have met the curriculum's objectives—
describes whether students have learned or mastered the curriculum's desired outcomes and
objectives.
 To improve curriculum—the findings of the evaluation are used to improve the curriculum and
identify new ways to increase learning.
In addition, curriculum evaluation is also useful to administrators and teachers in many different
ways. For example:
 Evaluation aids in the decision-making process for enhancing teaching and learning processes
 It aids in the development of academic policies.
 It aids in the implementation of curricular changes and innovations.
 It assures that any curricular program is of high quality.
 It aids schools in aligning their curricula with a variety of sources and influences.
 It defines the degree to which the school's vision and mission are realized.
Curriculum evaluation is a key indicator of a school's or university's commitment to quality and
ongoing improvement. It demonstrates how committed a school is to achieving its philosophy, vision,
and goal. (https://www.coursehero.com/file/p3mqbv6/To-improve-curriculum-the-result-of-the-
evaluation-serves-as-basis-for/) Reyes et al. (2015), on the other hand, stated that the curriculum
must be reassessed in order to determine if it fulfills the current demands of educational reforms. The
findings of the evaluation would allow educational authorities to make necessary adjustments or
enhancements in the event of any potential gaps between the curriculum in use and the recognized
educational requirements.
WHAT TO EVALUATE?
According to Ornstein & Hunkins (1998), evaluation can be used to collect statistics and pertinent
information to help educators decide whether to accept, amend, or eliminate the curriculum in general
or a specific educational resource. The entire curriculum, or specific elements of it, such as goals,
objectives, material, techniques, and even the results or outcomes, could be assessed. The various stages
or phases of curriculum creation may also be the subject of evaluation.
1) Goals and Objectives
All of the processes and mechanisms needed to design a curricular or educational program
are based on the goals and objectives, so they must be evaluated, primarily to determine
whether these goals and objectives are worthwhile foundations for the program's development
and if they are achievable and result in the desired outcomes. It is also worth noting that a
curriculum's contents, materials, and procedures must align with the aims and objectives for
which the program was designed and constructed.

2) Content and Methodology
The developed curriculum or any educational program's contents must be examined and
evaluated to see if they correspond to the needs of the learners for whom the curriculum was
created, as well as to determine the methodology's congruency with the curriculum objectives
and the content's appropriateness (Gattawa 1990).

3) Outcomes/Results
The assessment of outcomes or results is linked to the assessment of objectives, content, and
technique. These outcomes or results serve as the final indicator of the curriculum's success or
effectiveness in attaining its aims and objectives. The purpose of outcome evaluation is to gather
information and data that can be used to improve the curriculum as a whole.
FORMS OF EVALUATION
Evaluation can take two forms, both of which can be used to give facts and information necessary for
making a choice.

1. Formative Evaluation is the process of determining whether a curriculum program, a syllabus, or a subject taught succeeded or failed during implementation, in order to enhance the program
(Glickman, Gordon, 2004). As the name implies, it refers to the information obtained during the
development or implementation of a curriculum that can be used to revise the current curricular
program. It is done concurrently with the program's ongoing operation to ensure that all
components of the curriculum being implemented are likely to yield the desired and expected
outcomes.

2. Summative Evaluation is a type of evaluation utilized at the end of a program's implementation. As the name implies, it entails the gathering of necessary data, which is
normally done after the end of the curricular program's execution. It is used to determine
whether a program, project, or even an activity functioned as intended when it was first created
or established. In most circumstances, this type of review is used to determine whether or not a
curriculum or program will be continued, improved, or revised, or even canceled.

CURRICULUM EVALUATION MODELS


Conducting a curricular program assessment necessitates the use of precise and systematic
procedures in accordance with certain approaches and methodologies that must be determined based
on what is being reviewed. The models listed below can help with the evaluating process.

1. Tyler's Objectives-Centered Model

During the creation of a curriculum, Tyler's Ends-Means Model starts with defining the teacher's philosophy, then identifying the desired outcomes in the form of educational goals, purposes, and objectives, and then creating and evaluating the curriculum accordingly by looking at three key elements: the learners, life in the community, and the subject matter.
Tyler's Objectives-Centered Model (1950) can be regarded as a rational and systematic movement through the evaluation process, considering the various steps involved (Glatthorn, 1987, p. 273), as indicated below:
 Begin with the behavioral goals that have already been established.
 Identify the scenarios that will allow the learner to display the behavior represented in the aim,
as well as those that will elicit or encourage it.
 Select, adapt, or build appropriate assessment instruments, and ensure that they are objective,
reliable, and valid.
 Obtain summarized or assessed results using the instruments.
 Compare the results obtained from the instruments before and after given time periods to evaluate the degree of change that has occurred (a simple illustration of this step follows below).
 Analyze the data to discover the curriculum's strengths and shortcomings, as well as possible
explanations for why this particular pattern of strengths and weaknesses exists.
 Make the required changes to the curriculum based on the findings.
Tyler's Objectives-Centered Model, which has been regarded as reasonable and systematic, has
been found to be beneficial in reviewing curricula because it is relatively simple to learn and
execute.
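To make the pre/post comparison step concrete, here is a minimal, purely illustrative sketch (in Python) of how scores gathered with the same instrument before and after instruction might be compared to estimate the degree of change for one behavioral objective. The scores, variable names, and the mastery cut-off of 75 are assumptions invented for this example, not part of Tyler's model.

# Hypothetical sketch of Tyler's "compare results before and after" step.
# All data and the mastery threshold below are invented for illustration.
pre_scores = [45, 52, 60, 48, 55]    # scores before instruction (assumed)
post_scores = [70, 78, 85, 66, 80]   # scores after instruction (assumed)
MASTERY = 75                         # assumed mastery cut-off for the objective

def mean(scores):
    """Average of a list of scores."""
    return sum(scores) / len(scores)

# Degree of change: difference between mean post- and pre-assessment results.
gain = mean(post_scores) - mean(pre_scores)

# Proportion of learners reaching the assumed mastery level after instruction.
mastery_rate = sum(1 for s in post_scores if s >= MASTERY) / len(post_scores)

print(f"Mean gain: {gain:.1f} points")
print(f"Post-assessment mastery rate: {mastery_rate:.0%}")

Patterns of strengths and weaknesses could then be examined objective by objective, as the remaining steps of the model suggest.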
2. Stufflebeam’s Context, Input, Process and Product Model (CIPP)
This model was developed in 1971 by a Phi Delta Kappa committee chaired by Daniel Stufflebeam. It appealed to educational leaders because it emphasizes the importance of producing evaluative data that can be used for decision-making; the committee that worked on the model believed that decision-making is the sole justification and rationale for conducting an evaluation. This methodology, according to Braden (1992), can be used for both formative and summative evaluation activities.
To respond more effectively to the needs of decision makers, this Stufflebeam model provides a
means for generating data relative to the four phases of program evaluation:
 Context Evaluation. The goal of context evaluation is to continuously examine needs and
challenges in the context of decision-making in order to assist decision-makers in
determining goals and objectives (Worthen and Sanders, 1987). This component of the CIPP
Model "is intended to describe the status, context, or setting in order to identify unmet
needs, potential opportunities, problems, or program objectives to be evaluated." (Pace &
Friedlander, 1987 as stated by Reyes and Dizon, 2015).
 Input Evaluation is used to evaluate different methods for reaching the identified goals and objectives in order to assist decision-makers in selecting the best option. To assist in the structuring of decisions (Worthen & Sanders, 1987), this component is where evaluators provide information that will help decision-makers select procedures and resources for devising or choosing appropriate methods and materials (Pace & Friedlander, 1987, as stated by Reyes and Dizon, 2015).
 Process Evaluation. The key job of this CIPP Model element is to monitor the processes,
both to confirm that the means are really implemented and to make any necessary changes.
It assists in making implementation decisions (Worthen & Sanders, 1987), as it ensures that
the program is running well and identifies any flaws or strengths in the procedures (Pace &
Friedlander, 1987, as stated by Reyes and Dizon, 2015).
 Product Evaluation. This is used to compare actual ends to desired or intended ends, which
leads to a sequence of modifying and/or recycling decisions. It is used in recycling decisions
(Worthen & Sanders, 1987), where a combination of progress and outcome evaluation
phases (Pace & Friedlander, 1987, as stated by Reyes and Dizon, 2015) is used to determine
and judge program outcomes.

The following particular steps are conducted throughout the four stages of the model,
according to Glatthorn (1987):
1. Determine the kind of decisions that will be made.
2. Determine the kind of information required to make judgments.
3. Gather the information you'll need.
4. Create a set of criteria for judging quality.
5. Use established criteria to analyze the data you've gathered.
6. Explicitly provide the needed information to decision-makers.

To summarize, the CIPP Model considers evaluation in terms of processes, products, and
outcomes not only at the program's conclusion, but also at numerous phases and stages of
implementation. Supposed outcomes are predicted to be offshoots of set aims, with discrepancies
between expected and actual outcomes highlighted. In effect, the CIPP Model allows decision makers to
keep, stop, or change the program. (Pace & Friedlander, 1987).
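As a purely illustrative aid (not taken from Stufflebeam's own materials), the short Python sketch below organizes the four CIPP components described above into a simple lookup of the decision each phase supports and a guiding question. The question wording is an assumption, and pairing Context with "planning decisions" is a common reading adopted here for the example; the other decision labels follow the text above.

# Hypothetical summary of the CIPP phases described above.
# Decision labels for Input, Process, and Product follow the text
# (structuring, implementation, recycling); "planning" for Context is assumed.
CIPP_PHASES = {
    "Context": {"supports": "planning decisions",
                "question": "What needs and problems should goals and objectives address?"},
    "Input": {"supports": "structuring decisions",
              "question": "Which procedures, resources, and materials best fit the objectives?"},
    "Process": {"supports": "implementation decisions",
                "question": "Are the chosen means actually being carried out as intended?"},
    "Product": {"supports": "recycling decisions",
                "question": "Do the actual outcomes match the intended outcomes?"},
}

for phase, info in CIPP_PHASES.items():
    print(f"{phase} evaluation -> {info['supports']}: {info['question']}")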
3. Stake’s Responsive Model. Robert Stake (1973) established this assessment model, which places a
greater emphasis on a complete description of the evaluation program as well as the evaluation process
itself. Stake thinks that the concerns of the stakeholders for whom the evaluation is conducted should
take precedence in determining all matters related to the evaluation process. This model is referred to
as a responsive evaluation strategy since it trades off some measurement precision in order to make the
findings more valuable to program participants.

Three Crucial Components


• Antecedents- these are the conditions that existed prior to the intervention;
• Transactions- these are the events or experiences that make up the program; and
• Outcomes- these are the program's results.
Two special aspects may also describe this particular model:
• The separation between intentions and observations; and
• The distinction between standards and judgments about the effects that occurred. In effect, the
model can be viewed as either comparative (is A better than B?) or non-comparative (does A do
what it's supposed to accomplish?) (Ogle, 2002, as quoted in Sumayo, 2012, and Dizon and
Reyes, 2015).
According to Worthen and Sanders (1987), as mentioned by Ogle (2002) and cited by Dizon and Reyes (2015), the evaluator would use this model by following these steps:
1. Give context, justification, and a summary of the program's rationale (including its necessity);
2. Make a list of the intended antecedents (inputs, resources, and current conditions), transactions (activities, processes), and outcomes;
3. Explicitly state the standards (criteria, expectations, performance of comparable programs) for judging program antecedents, transactions, and outcomes;
4. Record the observed antecedent conditions, transactions, and outcomes; and
5. Record judgments made about the antecedent conditions, transactions, and outcomes.
Stake himself, as cited in Glatthorn (1987, pp. 275-276), recommends the following steps in employing his model, which he considers an interactive and recursive evaluation process:
1. The evaluator meets with clients, staff, and audiences to learn about their perspectives
on the assessment and their goals for it.
2. The scope of the evaluation project is determined by the evaluators based on such
conversations and document analysis.
3. The evaluator closely monitors the program to obtain a sense of how it works and to
identify any unanticipated departures from the stated goals.
4. The evaluator learns the project's declared and true goals, as well as the various
audiences' worries about it and the evaluation.
5. The evaluator highlights the concerns and problems that should be addressed in the
evaluation. The evaluator creates an evaluation plan for each issue and problem,
outlining the types of data required.
6. The evaluator chooses the methods for obtaining the desired data. Human observers or
judges will most likely be the means.
7. The data-collection processes are carried out by the evaluator.
8. The evaluator organizes the data into topics and creates "portraits" that explain the
thematic reports in a natural way. Videotapes, artifacts, case studies, and other "faithful representations" may be used in the portrayals.
9. Using stakeholder concerns as a guide, the evaluator determines which audiences
require which reports and selects the most acceptable forms for each audience.
House (1980) as cited in Ogle (2002) points out very clearly that the essential components of
Stake’s Responsive evaluation are:
 Belief that there is no single true value (knowledge is context-dependent);
 Belief in the importance of stakeholder perspectives in evaluation; and
 Belief that case studies are the most effective way of portraying stakeholders' views
and values, as well as reporting and evaluating results.

The key benefit of this responsive model is that it is sensitive to the concerns and values of clients and stakeholders. This methodology, if applied correctly, should produce highly valuable evaluations for
clients.
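To visualize how these pieces fit together, the hypothetical Python sketch below arranges Stake's three components (antecedents, transactions, outcomes) against the model's paired distinctions (intended vs. observed, standards vs. judgments). The data structure, field names, and the sample entry are illustrative assumptions, not Stake's own notation; the sample content echoes the after-school program scenario from the start of this unit.

# Hypothetical record for a responsive evaluation: for each component, what was
# intended, what was observed, the standard applied, and the judgment reached.
COMPONENTS = ("antecedents", "transactions", "outcomes")

def empty_record():
    """One row of the evaluation record for a single component."""
    return {"intended": None, "observed": None, "standard": None, "judgment": None}

evaluation_record = {component: empty_record() for component in COMPONENTS}

# Sample entry (content invented for illustration only).
evaluation_record["outcomes"]["intended"] = "stable or growing enrollment"
evaluation_record["outcomes"]["observed"] = "enrollment declined sharply last year"
evaluation_record["outcomes"]["standard"] = "enrollment comparable to similar programs"
evaluation_record["outcomes"]["judgment"] = "program needs re-energizing or redesign"

for component, row in evaluation_record.items():
    print(component, row)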

4. Eisner's Connoisseurship Model. Drawing on his expertise in aesthetics and education, Elliot Eisner (1979) established this model, an approach to evaluation that emphasizes qualitative appreciation. Eisner argued that learning is too complex to be reduced to a list of objectives that are then measured quantitatively to find out whether those objectives have been attained or learning has taken place. In evaluating a program, therefore, it is important to get into the details of what is actually happening inside the classroom, instead of considering only small bits and pieces of information vis-a-vis the objectives of a particular learning episode. Eisner devised and advocated the Connoisseurship Model on the premise that a qualified evaluator, drawing on a mix of abilities and experience, can determine whether a given curricular program has been successful. The word connoisseurship is derived from the Latin cognoscere, which means "to know"
(https://www.coursehero.com/file/36747137/CurDev-ReportPreciouspptx/).
References

Books
Bilbao, Purita P., Dayagbil, Filomena T., & Corpuz, Brenda B. (2015). Curriculum Development for Teachers. Cubao, Quezon City: Lorimar Publishing Co., Inc.
De Guzman, Estefania S., & Adamos, Joel L. (2015). Assessment of Learning. Cubao, Quezon City: Adiana Publishing Co., Inc.
Palma, Jesus C. (1992). Curriculum Development System. Mandaluyong City: National Book Store.
Pawilen, Greg T. (2015). Curriculum Development: A Guide for Teachers. Manila, Philippines: Rex Book Store.
Reyes, Emerita D., & Dizon, Erlinda (2015). Curriculum Development. Cubao, Quezon City: Adiana Publishing Co., Inc.
Three Evaluation Scenarios (2000). Human Rights Resource Center, University of Minnesota. Retrieved from: http://hrlibrary.umn.edu/edumat/hreduseries/hrhandbook/part6C.html
Tyler, R. W. (1949). Basic Principles of Curriculum and Instruction. Chicago and London: University of Chicago Press.
Webliography
1. http://www.ibe.unesco.org/fileadmin/user_upload/COPs/Pages_documents/
Resource_Packs/TTCD/sitemap/Module_8/Module_8.html
2. http://www.allresearchjournal.com/archives/2015/vol1issue11/PartI/1-11-80.pdf
3. https://pdfs.semanticscholar.org/edcc/33f0d4099fcbc7a87a0cfeaafa0691c47563.pdf
4. https://files.eric.ed.gov/fulltext/EJ1180614.pdf
5. http://talc.ukzn.ac.za/Libraries/Curriculum/models_of_curriculum_evaluation.sflb.ashx
6. https://www2.education.uiowa.edu/archives/jrel/fall01/Johnson_0101.htm
7. https://www.slideshare.net/RizzaLynnLabastida/chapter-4-evaluating-the-curriculum-
67274672
8. https://www.nap.edu/read/10024/chapter/7
9. https://ctb.ku.edu/en/table-of-contents/evaluate/evaluation/framework-for-
evaluation/main
