Ped 4


Davao de Oro State College

ASSESSMENT OF LEARNING 1
MICHEL A. SARPAMONES, MAED-ELT
Educational measurement,
evaluation, testing, and
assessment
Objectives
At the end of the lesson, you are expected to:

a. explain, by providing examples, the following concepts in
educational measurement: measurement, assessment, testing, and
evaluation;
b. compare and contrast measurement, assessment, testing, and
evaluation; and
c. illustrate the application of these concepts in the
teaching-learning process and in instructional decision making.
Test
A test may be called a tool, a question, a set of questions,
or an examination that is used to measure a
particular characteristic of a person or a group of
individuals.
A test is a form of questioning or a measuring tool used
to assess the status of one’s skill, attitude, and fitness.

An instrument or activity used to accumulate data on
a person’s ability to perform a specified task.

It is an assessment intended to measure a test-taker’s
knowledge, skill, aptitude, performance, or
classification in many other topics.
Measurement
A process of collecting data on attributes of interest.

A measurement is an act or process that involves


the assignment of numerical values to whatever is being
tested. So it involves the quantity of something.

Measurement is the term used to describe the assignment


of a number to a given assessment. The number can be a
raw score or a score based on a normal distribution
curve. The process of quantifying this number is
separate from using this information to evaluate
student outcomes and achievement.
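The slide above notes that the number assigned in measurement can be a raw score or a score referenced to a normal distribution curve. As an illustration only (not part of the original material), the short Python sketch below converts a hypothetical raw score into a z-score and a T-score; the class mean, standard deviation, and score are made-up values.

```python
# Hedged illustration: expressing a raw score relative to a normal curve.
# All numbers below are hypothetical.

def z_score(raw, mean, sd):
    """How many standard deviations the raw score lies from the class mean."""
    return (raw - mean) / sd

def t_score(z):
    """A common linear transformation of z (mean 50, SD 10)."""
    return 50 + 10 * z

raw, class_mean, class_sd = 42, 35.0, 5.0
z = z_score(raw, class_mean, class_sd)
print(f"raw = {raw}, z = {z:.2f}, T = {t_score(z):.1f}")  # raw = 42, z = 1.40, T = 64.0
```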
Assessment
Assessment is the process of documenting, usually
in measurable terms, knowledge, skills,
attitudes, and beliefs.
Assessment in education is the process of
gathering, interpreting, recording & using information
about pupils’ responses to an educational task.
(Harlen et al., 1992)
Evaluation
Evaluation is concerned with a whole range of issues in
and beyond education; lessons, programs, and skills
can be evaluated. It produces a global view of
achievements usually based on many different types
of information such as observation of lessons, test scores,
assessment reports, course documents, or interviews with
students and teachers.
The process of making an overall judgement about one’s
work or a whole school work (Cameron).

Evaluation is a process of determining to what extent
the educational objectives are being realized.
Review your output.
Use of Assessment in
Learning
Objectives
At the end of the lesson, you are expected to:

a.explain what assessment is and why it is important in


planning learning goals;
b. demonstrate the ability to describe the different
principles of effective assessment;
c. select the appropriate type of assessment to use in
measuring intended learning outcomes; and
d. apply assessment strategies in planning future learning goals.
Assessment is the process of gathering evidence of a
student’s performance over a period of time to determine
learning and mastery of skills. Such evidence of learning can
take the form of dialogue records, journals, written work,
portfolios, tests, and other learning tasks.

The overall goal of assessment is to improve student learning


and provide students, parents, and teachers with reliable
information regarding student progress and the extent of
attainment of the expected learning outcomes.

Assessment results show more permanent learning and a
clearer picture of the student’s ability.
Principles of Good Practice in
Assessing Learning Outcomes
The assessment of student learning starts with the
institution’s missions and core values.

Assessment works best when the program has a clear


statement of objectives aligned with the institutional
mission and core values.

Outcomes-based assessment focuses on the student


activities that will still be relevant after formal schooling
concludes. The approach is to design assessment activities
which are observable and less abstract.
Principles of Good Practice in
Assessing Learning Outcomes
Assessment requires attention not only to outcomes but also
and equally to the activities and experiences that lead
to the attainment of learning outcomes.

Assessment works best when it is continuous, ongoing, and


not episodic. Assessment should be cumulative
because improvement is best achieved through a linked series
of activities done over time in an instructional cycle.
Principles of Assessment

Principle 1 - Assessment should be valid.

Validity ensures that assessment tasks and


associated criteria effectively measure student
attainment of the intended learning outcomes at the
appropriate level.

Principle 2 - Assessment should be reliable and


consistent.

There is a need for assessment to be reliable, and
this requires clear and consistent processes for the
setting, marking, grading, and moderation of assessment
tasks.
Principles of Assessment
Principle 3 - Information about assessment should be
explicit, accessible, and transparent.

Clear, accurate, consistent, and timely information


on assessment tasks and procedures should be made
available to students, staff, and other external assessors or
examiners.

Principle 4 - Assessment should be inclusive and equitable.

As far as is possible without compromising


academic standards, inclusive and equitable assessment
should ensure that tasks and procedures do not
disadvantage any group or individual.
Principles of Assessment
Principle 5 - Assessment should be an integral part
of program design and should relate directly to the
program aims and learning outcomes.
Assessment tasks should primarily reflect the nature of
the discipline or subject but should also ensure that students
have the opportunity to develop a range of generic
skills and capabilities.

Principle 6 - The amount of assessed work should


be manageable.
The scheduling of assignments and the amount of
assessed work required should provide a reliable and
valid profile of achievement without overloading staff or
students.
Principles of Assessment
Principle 7 - Formative and summative assessment should
be included in each programme.

Formative and summative assessment should be


incorporated into programmes to ensure that the purposes
of assessment are adequately addressed. Many
programmes may also wish to include diagnostic
assessment.
Principles of Assessment
Principle 8 - Timely feedback that promotes learning
and facilitates improvement should be an integral part
of the assessment process.

Students are entitled to feedback on submitted
formative assessment tasks, and on summative tasks,
where appropriate. The nature, extent, and timing of
feedback for each assessment task should be made clear
to students in advance.
Principles of Assessment
Principle 9 - Staff development policy
and strategy should include assessment.

All those involved in the assessment of students must


be competent to undertake their roles and responsibilities.
Variety of Assessment
Instruments

Objective examinations - The advantage of using this


type is that teachers are familiar with it, although
constructing high quality test questions may be difficult.

Essay examinations - Allow for student individuality


and expression although it may not cover an entire
range of knowledge.
Variety of Assessment
Instruments

Written work - This type allows learning in the process as


well as in the completion of the process. The disadvantage
is that plagiarism may occur and written work is difficult to
quantify. •

Portfolio assessment - May either be a longitudinal
portfolio, which contains reports, documents, and
professional activities compiled over a period of time,
or a best-case/thematic portfolio, which is specific to a
certain topic or theme.
Variety of Assessment
Instruments

Assessment rubrics - An authentic assessment tool
which measures students’ work. It is a scoring guide that
seeks to evaluate a student’s performance based on a
full range of criteria rather than a single numerical score.
Providing evidence of
achievement of outcomes
and standards
Objectives
At the end of the lesson, you are expected to:

a.categorize learning outcomes as to


cognitive, affective and psychomotor behavior;

b. classify the three domains of learning; and

c. provide examples in each domain in writing learning outcomes.


Three (3) Domains of
Learning
Benjamin Bloom and a committee of colleagues in
1956 identified three domains of educational
activities: cognitive, referring to mental skills; affective,
referring to growth in feeling or emotion; and
psychomotor, referring to manual or physical
skills. These domains are organized into
categories or levels and arranged in hierarchical
order from the simplest behavior to the most
complex behavior. To ensure that the learning
outcomes are measurable, demonstrable,
and verifiable, the outcomes should be stated using
concrete and active verbs.
COGNITIVE DOMAIN

The cognitive domain involves knowledge


and the development of intellectual skills.
This includes the recall or recognition of
specific facts, procedural patterns, and
concepts that serve in the development of
intellectual abilities and skills.
PSYCHOMOTOR DOMAIN

In the early seventies, E. Simpson, Dave, and
A. S. Harrow recommended categories for
the psychomotor domain, which include
physical coordination, movement, and use of
the motor skills of body parts.
AFFECTIVE DOMAIN

The affective domain refers to the way in
which we deal with situations emotionally, such as
feeling, enthusiasm, motivation, values,
appreciation, and attitudes.
AFFECTIVE DOMAIN
Learning Outcome
Construction
Now that you have recognized and understood the
three domains of learning, you are tasked to apply the
concepts in the given situation by crafting learning
outcomes for each of the three domains from the
simplest to the most complex level or category.

1. Cognitive: Topic – Investigative Project in Biological Science


1. Remembering
2. Understanding
3. Applying
4. Analyzing
5. Evaluating
6. Creating
Learning Outcome Construction

2. Psychomotor: Topic – Table Setting
1. Observing
2. Imitating
3. Practicing
4. Adapting

3. Affective: Topic – Developing and Nurturing Honesty
1. Receiving
2. Responding
3. Valuing
4. Organizing
5. Internalizing
Examples of Assessment of
Learning

Objectives
At the end of the lesson, you are expected to:

a. write examples of written assessment according to the


three domains.
CONCEPT MAPPING
In this activity, you are tasked to construct a concept
map showing your ideas about summative assessment.
1. If you have to combine your ideas into one, how would
you describe a summative assessment?
2. Why is summative assessment important?
3. Do you think that summative assessment is the sole
basis of knowing if the child has learned or not?
The predominant kind of assessment in schools is Assessment
of Learning, also known as Summative Assessment.

Assessment of Learning in classrooms is typically done at the


end of something (e.g., a unit, course, a grade, a Key Stage, a
program) and takes the form of tests or exams that include
questions drawn from the material studied during that time.
In Assessment of Learning, the results are expressed
symbolically, generally as marks across several content areas
to report to parents.
Summative assessment, or assessment of learning, can take
many forms. Here are some possible types of summative
assessment that can be used in the language classroom:

•Performance Task: students are asked to complete a task


that will test a specific set of skills and/or abilities and determine
what the students know and are capable of doing. A rubric,
checklist, or another form of the scoring guide should
accompany this type of assessment.
Written Product: students are asked to write an original selection.
There are many written forms that teachers can use to get
students to write. In addition, students may be asked to write about a
previous activity such as a field trip or guest speaker. Students may
also be asked to create a piece of persuasive writing or a reflection on
their learning experience. A rubric, checklist, or other forms of the
scoring guide should accompany this type of assessment.

Oral Product: students are asked to prepare an oral piece of work; this
can take the shape of any of the oral forms. A rubric, checklist, or other
forms of the scoring guide should accompany this type of assessment.
Test: the students are asked to write a test at the end of a
section, chapter, unit, theme, etc. to demonstrate what they know.

Standardized Test: students are asked to write a test


that is standardized in terms of content of the test and conditions
under which the test is written.
Conditions of Validity,
Reliability, and Quality of
Feedback
Objectives
At the end of the lesson, you are expected to:

a. describe elements that constitute high quality assessment; and

b.decide when constructs of validity, reliability,


and quality feedback are appropriate
How would you describe an assessment
that develops the child holistically?
When can you say that an
assessment is valid and reliable?
Is feedback important in assessment?
Why or why not?
Principles of High Quality Assessment

1. Clarity of learning targets
(knowledge, reasoning,
skills, products, affects)
Assessment can be made precise, accurate
and dependable only if what is to be achieved is
clearly stated and feasible. The learning targets,
involving knowledge, reasoning, skills, products
and effects, need to be stated in behavioral
terms which denote something which can be
observed through the behavior of the students.
Cognitive Targets
Benjamin Bloom (1956) proposed a hierarchy
of educational objectives at the cognitive level.
These are:
•Knowledge – acquisition of facts, concepts
and theories

•Comprehension - understanding, which


involves cognition or awareness of the
interrelationships

•Application – transfer of knowledge from one
field of study to another or from one concept to
another concept in the same discipline
Cognitive Targets
Analysis – breaking down a concept or idea
into its components and explaining the
concept as a composition of these concepts

Synthesis – opposite of analysis, entails


putting together the components in order to
summarize the concept

Evaluation and Reasoning – valuing
and judgment, or putting the “worth” of a
concept or principle.
Skills, Competencies and
Abilities Targets

Skills – specific activities or tasks that a


student can proficiently do
Competencies – a cluster of skills
Abilities – made up of related
competencies, categorized as:
• Cognitive • Affective • Psychomotor
Products, Outputs and
Project Targets

Tangible and concrete evidence of a
student’s ability; there is a need to clearly specify
the level of workmanship of projects:

i. expert ii. skilled iii. novice
2. Appropriateness of
Assessment Methods
A. Written-Response Instruments

Objective tests – appropriate for assessing


the various levels of the hierarchy of
educational objectives
Essays – can test the student's grasp of
the higher level cognitive skills
Checklists – list of several characteristics
or activities presented to the subjects of a
study, where they will analyze and place
a mark opposite the characteristics.
2. Appropriateness of
Assessment Methods
B. Product Rating Scales

Used to rate products like book reports,


maps, charts, diagrams, notebooks, creative
endeavors

Need to be developed to assess various


products over the years
2. Appropriateness of
Assessment Methods
C. Performance Tests - Performance
Checklist
-Consists of a list of behaviors that make up
a certain type of performance
-Used to determine whether or not an
individual behaves in a certain way when asked to
complete a particular task
D. Oral Questioning – appropriate
assessment method when the objectives are to:
- Assess the students’ stock knowledge and/or
-Determine the students’ ability to
communicate ideas in coherent verbal sentences.
2. Appropriateness of
Assessment Methods
E. Observation and Self-Reports
- Useful supplementary methods when used
in conjunction with oral questioning and
performance tests
3. Validity
Educational assessment should always have
a clear purpose. Nothing will be gained from
the assessment unless the assessment has
some validity for the purpose. For that reason,
validity is the most important single attribute of a
good test.

The validity of an assessment tool is the extent


to which it measures what it was designed
to measure, without contamination from
other characteristics. For example, a test of
reading comprehension should not require
mathematical ability
3. Validity
There are several different types of validity:

•Face validity: do the


assessment items appear to be
appropriate?
•Content validity: does the assessment content
cover what you want to assess?
•Criterion-related validity: how well does the test
measure what you want it to?
•Construct validity: are you
measuring what you think you're measuring?
4. Reliability
The reliability of an assessment tool is the extent to which
it consistently and accurately measures learning.

When the results of an assessment are reliable, we can


be confident that repeated or equivalent
assessments will provide consistent results. This puts us
in a better position to make generalized statements
about a student’s level of achievement, which is
especially important when we are using the results of
an assessment to make decisions about teaching and
learning, or when we are reporting back to students
and their parents or caregivers. No results,
however, can be completely reliable.
Factors which can affect reliability
4. Reliability
The length of the assessment – a longer assessment
generally produces more reliable results (see the sketch after this list).
The suitability of the questions or tasks for the
students being assessed.
The phrasing and terminology of the questions.
The consistency in test administration – for example,
the length of time given for the assessment, and
instructions given to students before the test.
The design of the marking schedule and moderation
of marking procedures.
The readiness of students for the assessment –
for example, a hot afternoon or straight after
physical activity might not be the best time for
students to be assessed.
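The effect of test length on reliability (the first factor in the list above) is commonly estimated with the Spearman-Brown prophecy formula. The sketch below is illustrative only; the reliability coefficient and lengthening factor are hypothetical values, not taken from the slides.

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test is lengthened (or shortened) by length_factor.
    Formula: r_new = k * r / (1 + (k - 1) * r)."""
    k, r = length_factor, reliability
    return (k * r) / (1 + (k - 1) * r)

# Hypothetical example: a 20-item test with reliability 0.60 is doubled to 40 items.
print(round(spearman_brown(0.60, 2), 2))  # 0.75 -> the longer test is more reliable
```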
5. Fairness
The concept that assessment should be 'fair'
covers a number of aspects.

• Student Knowledge and learning targets of assessment


• Opportunity to learn
• Prerequisite knowledge and skills
• Avoiding teacher stereotype
• Avoiding bias in assessment tasks and procedures
6. Positive
Consequences
Learning assessments provide students with
effective feedback and potentially improve their
motivation and/or self-esteem.

Moreover, assessments of learning give students the
tools to assess themselves and understand how to
improve.
7. Practicality and Efficiency

•Something practical is something effective in


real situations.
• A practical test is one that can be practically
administered.

Questions:
• Will the test take longer to design than to apply?
• Will the test be easy to mark?

Tests can be made more practical by making them
more objective (more controlled items).
8. Ethics

•Conforming to the standards of conduct of a


given profession or group

Ethical issues that may be raised

• Possible harm to the participants.


• Confidentiality.
• Presence of concealment or deception.
• Temptation to assist students.
Quality Feedback
Feedback is an important part of the assessment process.
It has a significant effect on student learning and has
been described as “the most powerful single
moderator that enhances achievement” (Hattie, 1999).

The main objectives of feedback are to:


• justify to students how their mark or grade was derived
• identify and reward specific qualities in student work
• guide students on what steps to take to improve
• motivate them to act on their assessment
•develop their capability to monitor, evaluate and
regulate their own learning (Nicol, 2010).
To benefit student learning,
feedback needs to be:
Constructive:

As well as highlighting the strengths and weaknesses of


a given piece of work, it should set out ways in which
the student can improve the work.
For the student, it:
•encourages them to think critically about their work and
to reflect on what they need to do to improve it
•helps them see their learning in new ways and
gain increased satisfaction from it
• helps promote dialogue between staff and students.
To benefit student learning,
feedback needs to be:
Timely:
Give feedback while the assessed work is still fresh in
a student’s mind before the student moves on to
subsequent tasks.

Meaningful: It should target individual needs, be


linked to specific assessment criteria, and be received by
a student in time to benefit subsequent work.
Effective feedback:
•guides students to adapt and adjust their
learning strategies
•guides teachers to adapt and adjust teaching to
accommodate students’ learning needs
•guides students to become independent and self-
reflective learners, and better critics of their own work
•stimulates reflection, interaction and dialogue
about learning improvement
•is constructive, so that students feel encouraged
and motivated to improve
•has consequences, so that it engages students by
requiring them to attend to the feedback as part of the
assessment
• is efficient, so that staff can manage it effectively.
Effective feedback:

Feedback is valuable when it is received, understood


and acted on. How students analyse, discuss and act on
feedback is as important as the quality of the feedback
itself (Nicol, 2010).

Through the interaction students have with feedback,


they come to understand how to develop their learning.
SITUATIONAL ANALYSIS

Assessment as Learning
and Metacognition

Objectives
At the end of the lesson, you are expected to:

a. explain assessment as learning;

b. compare assessment for, of and as learning; and

c. cite the importance of metacognition in


assessment
How important is self-monitoring in assessment?
Assessment as Learning is the use of
ongoing self- assessment by students in order
to monitor their own learning, which is
“characterized by students reflecting on their own
learning and making adjustments so that they
achieve deeper understanding.”

(Western and Northern Canadian Protocol for


Collaboration in Education [WNCP], 2006, p.41)
Self-assessment is the process of looking at oneself
in order to assess aspects that are important to
one's identity. It is one of the motives that drive
self-evaluation, along with self-verification and
self-enhancement.

Sedikides (1993) suggests that the self-assessment
motive will prompt people to seek information
to confirm their uncertain self-concept rather
than their certain self-concept, and at the same
time people use self-assessment to enhance their
certainty of their own self-knowledge.
Metacognition according to Schraw (1998) is,
"thinking about one's own mental processes" or
the "regulation of cognition." Thus, if cognition is
defined as the knowledge or act of knowing then
metacognition is understanding one's own
knowledge.

For students, this means that they understand what


they do and do not know. With teacher guidance,
they can learn to monitor this; they also learn to seek
out the knowledge or develop their skills with this new
sense of self-awareness.
The Role of Teachers
and Students in
Assessment as Learning

Objectives
At the end of the lesson, you are expected to:

a. define the roles of the teacher and the student in assessment as learning;
and

b. cite examples of assessment as learning


SITUATIONAL ANALYSIS

In this activity, you are tasked to think of a solution that
will help you as a teacher to understand your students
and how you will improve the situation, if not totally solve
the problem.

1. Teacher John presented his learning outcomes to
the students in Algebra. Most of his students did not
understand them. Thus, most of them failed in the first
assessment.
SITUATIONAL ANALYSIS

2. Paul, a student of Teacher John, compiled all the quizzes
and tasks of their class as a requirement. He
submitted the compilation project to his teacher without
even taking time to look at it. On the final exam, he failed
because he was not able to monitor that he had been
failing all his quizzes.
1. What are the factors you have considered in
the solutions you have come up with?
2. Why is it important to consider
the factors you have mentioned?
The Role of the Teacher
Ensuring assessment methods are appropriate
and the purpose is clear to students ensures
quality and fair assessment practices, as per the
Principles for Fair Student Assessment in Canada (1993).

Beyond choosing the learning outcomes to be covered,
the activities to follow, and the assessment
methods, in Assessment as Learning the teacher
engages the students in this process.
The Role of the Teacher
In Assessment as Learning, the teacher is a guide,
“Giving them [students] the tools to undertake their
own learning wisely and well.” (WNCP, p. 42)

Students learn to monitor their own learning and
make adaptations as required. In addition to
monitoring learning and guiding instruction through
assessment for learning, the teacher is assessing
the students’ ability to assess themselves as
they learn how to assess their own learning.
The Role of the Teacher
Teachers can follow the following model in order to
practice Assessment as Learning in their classroom:
(adapted from WNCP, p. 42-43)
1. Discuss the learning outcomes with the students.
2.Create criteria with the students for the various tasks
that need to be completed and/or skills that need to be
learned or mastered.
3.Provide feedback to students as they learn and ask
them guiding questions to help them monitor their own
learning.
The Role of the Teacher
4.Help them set goals to extend or support their
learning as needed in order to meet or fully meet the
expectations.
5.Provide reference points and
examples for the learning outcomes.

Teachers are also responsible for ensuring that
students have a learning environment in
which they feel comfortable and safe to learn,
as well as ample time to practice what is
being taught.
The Role of the Student
Beyond completing the tasks assigned to them by
their teacher, students move from passive learners to
active owners of their own learning. Initially,
with teacher guidance and tools, students learn to
monitor if they have understood the learning outcome
being explored and the metacognitive process.

Once the metacognitive skills have been acquired,


students can independently adjust their learning
accordingly and demonstrate the “self-reflection,
self-monitoring and self-adjustment.” (WNCP,
2006, p.85)
The Role of the Student
Extensive and relevant modeling in the questions below
can help students reach this point: Monitoring
Metacognition (Protocol adaptation of Schraw,
“Promoting General Metacognitive Awareness” in
WNCP)

1.What is the purpose of learning these concepts and


skills?
2. What do I know about this topic?
3. What strategies do I know that will help me learn this?
4. Am I understanding these concepts?
5. What are the criteria for improving my work?
6. Have I accomplished the goals I set for myself?
Classroom Examples

Literacy Mentoring Among Students
Teacher Mike and his fellow teachers at Pag-asa Elementary
School used Assessment as Learning as a tool to
review reading strategies and metacognitive skills in
reading for grade 4/5 students and to have them, in turn,
mentor grade 1 students. Through the process, "Both sets
of students learned to clarify their thinking, and were
using similar language to describe their learning
processes." The grade 4/5 students became adept at
using both teacher-created criteria and their own criteria
and were able to mentor grade 1 students through the
process. Koehn observed that "They [grade 4/5
students] naturally began each lesson with a
stated learning intention."
Attendance Procedures

The Bagum-Buhay National High School suggests


having students record their own attendance as late or
absent on a class posted list. The teacher would
have continued discussions around class expectations
for attendance and the impact of tardiness or being
absent on learning. Students will then have continual
opportunities to reflect upon and make changes to
their attendance and punctuality.
Physical Education Work Habits

Teacher Kyle’s rubrics in Physical Education


concerning work habits in Physical Education can help
clarify teacher expectations and increase students'
abilities to self-monitor thus developing their
metacognitive skills. This also serves the dual purpose
of making a class that is sometimes stressful and
unmanageable more ordered and manageable.
E-Portfolios

An Electronic Portfolio encourages "self-guided learning"
according to Tuttle (2007). Students start with an
understanding of the outcomes to be met throughout
the year or term and then gather evidence of
learning throughout the term to complete a finalized
digital project. This ability to select the assignments that
best demonstrate their abilities in a given area
demonstrates the metacognition necessary for
Assessment as Learning. Tuttle reinforces this argument
by stating, "Self-assessment becomes a regular part
of learning as students frequently select or reevaluate
which of their work is the best evidence of their skills and
strive to create even better evidence in future work."
ASSESSMENT FOR LEARNING

Using Assessment to Classify
Learning and Understanding
Using Assessment to Classify
Learning and Understanding

Objectives
At the end of the lesson, you are expected to:

a. distinguish assessment of learning from assessment for
learning and assessment as learning;
b. state the purpose of assessment to classify learning and
understanding; and
c. discuss the features of assessment and their
implications to planning, implementing, and improving learning.
Assessment for learning is an approach to teaching
and learning where feedback is utilized to improve
students’ performance. This refers to an
assessment that is given while the teacher is
in the process of student formation. Generally, it
is commonly referred to as formative assessment
– that is, an assessment designed to inform instruction.
Formative assessment also includes the pretest
and posttest that a teacher gives to ensure learning
(Corpuz, 2015). The assessment before instruction
(diagnostic assessment or pretest) describes the
entry knowledge or skills, pushing teachers to
revise planned instruction accordingly. Though
posttests are generally treated as tools for
assessment of learning, they may also become means
for assessment for learning if the results are used as a
basis in accurately planning future learning episodes.
Assessment for learning is reflective in nature. Results
and findings, when used by the teacher in giving
constructive feedback, would let students
recognize their achievements and difficulties.
This realization will motivate learners to become
more productive students and to work with the
teacher to resolve the learning discrepancies.

Assessment for learning intends to close the gap
between a learner’s current situation and where
they want to be in their learning and
achievement.
Providing Evidence of
Improved Learning
Performance
Assessment for learning pertains to diagnostic
and formative assessment tasks which are used to
determine learning needs, monitor academic progress
of students during a unit of instruction and guide
instruction. Students are given on-going and immediate
feedback concerning their performance (de Guzman, E. &
Adamos, J., 2015).

Teachers utilize the results of these assessments in
reaching various instructional decisions such as selecting
a teaching strategy, designing appropriate tasks that
consider individual differences, and improving
assessment strategies.
Assessment for learning envisions the creation of a learner-
centered classroom with a supportive environment, where
students regard mistakes as opportunities to learn. Effective
formative assessment mechanisms integrate and embed the
following practices shown below:
Assessment methods can be categorized according to the
nature and characteristics of each method. McMillan (2007)
identified four major categories: selected response,
constructed response, teacher observation, and student self-
assessment.

Selected response: This pen-and-paper test format


requires students to choose from a given set of options to
answer a question or a problem. Teachers assess students
using multiple- choice, alternate response (true/false), and
matching type items. This method is deemed ideal for
measuring knowledge and simple understanding.
Constructed-response: This subjective format demands
that students supply their answers to a question, problem,
or task. Included in this category are brief-constructed
response items, performance assessments, essay items, or
oral questioning. Examples of brief constructed responses
include sentence completion, short-answer to questions,
labeling a diagram, or a simple solution to a problem.
Performance assessment expects students to complete
a task with an emphasis on authenticity. Performance tasks
are mostly preferred in assessing skills and products.

Essay assessment requires students to respond to a
question or proposition in a logically written form and is very
appropriate for measuring deep understanding and reasoning.

Oral questioning is commonly done during instruction to
check on understanding and probe deeper.
Teacher Observation: This is a form of ongoing assessment
where teachers are required to be watchful of the student's
behavior, strengths, and weaknesses. Teachers may use a
checklist while the activity is ongoing or while a student performs
a task. Observation is ideal for measuring mastery of skills.

Student Self-Assessment: This is a process where
students are given the opportunity to reflect on and rate their
own work based on an agreed set of criteria. This may be done
through an activity checklist, journals, and self-report
inventories.
Without regard to the category of assessment
instruments prepared by teachers, each instrument has its
own unique significance in determining the learning of
students (Buenaflor, 2012).

The choice of instrument must be decided based on


its appropriateness to measure the learning targets. In other
words, there must be constructive alignment between the
instrument and the outcomes
Examples of
Assessment
for Learning
The aim of assessment in the classroom is to help
students perform well in relation to the learning standards
– content standards, performance standards, and learning
competencies (DepEd, 2015).

Content standards refer to the essential knowledge


and understanding that every student should learn.

Performance standards describe the expected abilities


and skills that learners must demonstrate.

Learning competencies refer to the knowledge, skills and


attitudes that students need to display in every learning
episode.
Learners are assessed in the classroom through
various processes and measures appropriate
to and congruent with learning competencies
defined in the curriculum guide. Assessment for
learning or formative assessment may be conducted
in the different parts of the lesson – before the
lesson, during the lesson, and after the lesson
(DepEd, 2015).
Giving feedback is imperative after conducting
the formative assessment. These activities should
not be treated as mere strategies to kill time.
Specific and informative feedback increases the
chance of students attaining the target competencies
and aids them in being accountable for their own
learning. It also prepares them for the
summative assessment. Formative assessments
make a teacher aware of the patterns of learning of
the students. However, using them for grading purposes
is highly discouraged.
FORMATIVE
ASSESSMENT
ACTIVITY NO. 1
ENGLISH LEARNING COMPETENCY:

Give the appropriate


communicative styles for
various situations (intimate,
casual, conversational,
consultative, frozen).
EN9V-IIb-27
ENGLISH LEARNING COMPETENCY:

Give the appropriate communicative


styles for various situations (intimate,
casual, conversational, consultative,
frozen).
EN9V-IIb-27
MATH LEARNING COMPETENCY:

Solves linear equations or inequality in


one variable involving absolute value by:
(a) graphing; and (b) algebraic
methods.
M7AL-IIi-j - 1
AP LEARNING COMPETENCY:

Nasusuri ang mahahalagang


pangyayaring naganap sa Unang
Digmaang Pandaigdig

AP8AKD-IVb 2
a. Before the lesson:

b. During the
lesson:

c. After the lesson:


MATH LEARNING COMPETENCY:

Solves linear equations or inequality in


one variable involving absolute value by:
(a) graphing; and (b) algebraic
methods.
M7AL-IIi-j - 1
Conditions of Validity,
Reliability, and Quality
of Feedback
A culture of data has become prevalent in schools all
over the country. This implies the integration of data
into the day-to-day operations of a school in order to
achieve instructional goals. However, one big
challenge is determining what data will accurately
reflect those goals and how schools can generate such
information. A wrong assessment tool may yield
meaningless or uninterpretable data, blurring the quality
of decisions reached.
When conducting the assessment, the validity
and reliability of the instruments are of utmost
importance. Validity describes how well the test
measures what it purports to measure. It pertains to
the correspondence between the purpose of the
assessment and which data the assessor chooses to
quantify that purpose. From the classroom assessment
perspective, content validity is of prime value. This
means the questions asked in the assessment
should really be about what was actually discussed
in class.
To ensure validity of assessment tasks, consider
these guidelines:
Reliability, on the other hand, asks whether the test used
to collect data produces consistent results. It is
concerned with whether the results could be replicated or not.
Since it is not at all concerned with the intent of the
assessment, it is possible to have an instrument which is
simultaneously reliable and invalid.
Hence for schools, validity will generally take
precedence over reliability. To ensure the reliability of
assessment tasks, consider these guidelines:

• Instruction for each task must be clearly written.


• Questions and tasks must capture the material taught.
•Seek feedback regarding the thoroughness of
the assessment from students and colleagues.
•When checking essays, grade item by item and
grade anonymously.
•When assessing performance tasks, prepare a well-
defined rubric.
Feedback and assessment go hand-in-hand as
a successful strategy for learning and
improvement. Effective feedback provides
learners with an insight into their performance.
The four stages of feedback according to Duffy
(2013) are:

• Gauge the student’s expectation of feedback.
• Gather information on student practice.
• Act immediately.
• Be specific.
Quality feedback allows the student time to express
their views and to ask for clarification on the points
you have raised and any difference of opinion.

As the teacher giving the feedback, be tactful in
justifying your assessment of the situation and the
feedback given. Be accommodating toward feedback
on your own feedback practices and mechanisms.
Agree-Disagree: Read each statement carefully. Write A
if you agree with the statement. Otherwise, you write DA.

1. Feedback is most useful at the earliest opportunity


after the given behavior has occurred.
2. Receiving feedback is never frustrating for the receiver.
3. Humor is always inappropriate when giving feedback.
4. Useful content is important for effective feedback.
5.You should give students constructive feedback
even if they are performing to an adequate standard.
Case Analysis

Jordan is very playful in class. He keeps on sharpening
his pencil or asking permission to go to the comfort
room. However, when you ask him to answer the
arithmetic problems on the board, he is able to solve
them well. How would you deal with this kind of learner?
State how you would explain to him the effect of
his behavior on his performance.
Principles of
Test
Development
Validity
This refers to the evidence base that can be provided
about appropriateness of the inferences, uses, and
consequences that come from assessment (McMillan,
2001a).

Appropriateness has to do with the


soundness, trustworthiness, or legitimacy of the claims
or inferences that testers would like to make on the
basis of obtained scores. Validity also refers to whether
the test is actually measuring what it claims to measure
(Arshad, 2004).
Reliability
This means the degree to which an assessment
tool produces stable and consistent results. Reliability
essentially denotes ‘consistency, stability, dependability,
and accuracy of assessment results’ (McMillan, 2001a,
p.65 in Brown, G. et al, 2008).
Factors that affect the
reliability of a test
A) Test Factor
In general, longer tests produce higher reliabilities.
Due to the dependency on coincidence and
guessing, the scores will be more accurate if the
duration of the test is longer.
An objective test has higher consistency because it
is not exposed to a variety of interpretations.
A valid test is said to be reliable but a reliable test
need not be valid.
A consistent score does not necessarily measure what
is intended to measure.
In addition, the test items are samples of the
subject being tested; variation in the samples
may be found in two equivalent tests, and this
can be one of the causes of unreliable test
outcomes.
B) Teacher and Student Factor

In most tests, it is normal for teachers to construct


and administer tests for students.
Thus, any good teacher-student relationship would
help increase the consistency of the results.
Other factors that contribute to positive effects on
the reliability of a test include teacher’s
encouragement, positive mental and physical
condition, familiarity to the test formats, and
perseverance and motivation.
C) Environment Factor

An examination environment certainly influences


test- takers and their scores.
Any favorable environment with comfortable chairs
and desks, good ventilation, sufficient light, and
space will improve the reliability of the test.
On the contrary, a non-conducive environment
will affect test-takers’ performance and test reliability.
D) Test Administration Factor

Because students' grades are dependent on the


way tests are being administered, test administrators
should strive to provide clear and accurate
instructions, sufficient time, and careful monitoring
of tests to improve the reliability of their tests.
A test-retest technique can be used to determine
test reliability.
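As an illustration only, the test-retest technique mentioned above is usually reported as the correlation between scores from the two administrations. The sketch below computes a Pearson correlation from made-up scores; the variable names and data are hypothetical, not from the slides.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

first_administration = [35, 28, 40, 22, 31, 38]   # hypothetical scores, first sitting
second_administration = [33, 30, 41, 20, 29, 37]  # hypothetical scores, retest
print(f"test-retest reliability estimate: "
      f"{pearson_r(first_administration, second_administration):.2f}")
```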
E) Marking Factor

It is also common that different markers award


different marks for the same answer even with a
prepared mark scheme.
A marker’s assessment may vary from time to time
and with different situations.
Conversely, it does not happen to the objective type
of tests since the responses are fixed. Thus, objectivity
is a condition for reliability.
Objectivity
This refers to the degree to which equally competent
scorers obtain the same results.

To ensure high objectivity:


•Select assessment procedures most
appropriate for the learning goals being
assessed.
• Make the assessment procedure as objective as
possible
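Since objectivity is defined above as the degree to which equally competent scorers obtain the same results, one rough classroom check is the percentage of exact agreement between two scorers. The sketch below uses made-up marks and is only an illustration, not a procedure taken from the slides.

```python
# Hypothetical objectivity check: two scorers independently mark the same ten items.
rater_a = [4, 3, 5, 2, 4, 3, 5, 1, 4, 3]  # made-up marks from scorer A
rater_b = [4, 3, 4, 2, 4, 3, 5, 2, 4, 3]  # made-up marks from scorer B

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
print(f"exact agreement: {agreements}/{len(rater_a)} = {agreements / len(rater_a):.0%}")
# exact agreement: 8/10 = 80%
```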
DISCRIMINATION - does the test discriminate the skills of
the students?

COMPREHENSIVENESS - does the test
measure all of the content studied in
class?

EASE OF ADMINISTRATION - are there
any difficulties in administering the test?

PRACTICALITY AND SCORING - is the
test easy to mark? Does the marking
take a lot of time?

USABILITY - does the test provide
useful results?
Steps in Developing a Classroom Assessment Test
1. Determining the purpose of the assessment (pre-
test, formative, or summative)
2.Developing the test specifications (this is the table
you are creating)
3.Selecting the appropriate assessment tasks (form
and type)
4. Prepare the relevant assessment tasks
5. Assemble the assessment
6. Provide instruction
7. Evaluate the assessment
8. Use the assessment results
Table of Specifications
It is a chart or table that details the content and level
of cognitive domain assessed on a test as well as the
types and emphases of test items. (Gareis and Grant,
2008).

Construction of TOS

• Selecting the learning outcomes to be measured.


•Make an outline of the subject matter to be covered in
the test.
• Decide on the number of items per subtopic (one common approach is sketched below).
• Make the TOS.
• Construct the test items.
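One common way to decide on the number of items per subtopic is to allocate items in proportion to the class time spent on each subtopic; this proportional convention is an assumption here, not stated on the slide. The topic names, hours, and total item count in the sketch are hypothetical.

```python
# Hedged sketch: proportional allocation of test items for a Table of Specifications.
topics = {"Measurement": 3, "Validity": 4, "Reliability": 4, "Item writing": 5}  # hours taught (made up)
total_items = 40

total_hours = sum(topics.values())
allocation = {topic: round(total_items * hours / total_hours)
              for topic, hours in topics.items()}
print(allocation)
# Check that the rounded counts still sum to total_items; adjust manually if they do not.
```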
Fundamental Types, Purpose, and
Qualities of Good Tests and
Related Objectivity

Teacher-made tests are normally prepared and
administered for testing classroom
achievement of students, evaluating the method of
teaching adopted by the teacher, and other
curricular programs of the school. A teacher-made
test is one of the most valuable instruments in the
hands of the teacher to serve his purpose. It is
designed to solve the problems or meet the requirements
of the class.
Guidelines for Construction of Test Items
1. Begin writing items far enough in advance so that you
will have time to revise them.
2.Match items to the intended
outcomes at appropriate level of difficulty.
3.Be sure each item deals with
important aspect of the content area.
4. Be sure the problem posed is clear and unambiguous.
5. Be sure that each item is independent of all other
items.
Guidelines for Construction of Test Items
6. Be sure the item has one correct or best
answer on which experts would agree.
7.Prevent unintended clues to an answer in the
statement or question.
8. Avoid replication of the textbook in writing test items.
9. Avoid trick or catch questions in an achievement test.
10.Try to write items that require higher-order thinking
skills.
Guidelines for Construction of Multiple-choice
1. Make a test that is practical.
2. Use diagrams or drawings when asking questions
about application, analysis, or evaluation.
3.When asked to interpret or evaluate quotations,
present actual quotations from secondary sources.
4.Use tables, figures or charts when asking to interpret
and pictures when students are required to apply
concepts and principles.
5. List the choices/options vertically NOT horizontally.
Guidelines for Construction of Multiple-choice
6. Avoid trivial questions.
7. Use only one correct answer or the best answer format.
8. Use three to five options to discourage guessing.
9. Be sure that distracters are plausible and effective.
10. Increase similarity of the options.
11. Do not use “none of the above” when asking for the
best answer.
12. Avoid using “all of the above” options.
Guidelines for Construction of Stem
1. The stem should be written in question form or
completion form.
2.Do not leave blank at the beginning or middle of the
stem (for completion form).
3. The stem should pose the problem completely.
4. The stem should be clear and concise.
5. Avoid excessive and meaningless use of words.
6. State the stem in positive form.
Avoid using negative phrases like “not” and
“except”.
7. Avoid grammatical clues.
Guidelines for Construction of Options
1. There should be one correct answer or best
answer in each item.
2.List options in vertical order not horizontal order
beneath the stem.
3.Arrange the options in a logical
order and use capital letters.
4. No overlapping options; keep it independent.
5. All options must be homogeneous in content.
6.As much as possible the length of the options must be
the same or equal.
7. Avoid using the phrases “all of the above”,
“none of the above”, and “I don’t know”.
Guidelines for Construction of Distracters
1. The distracters should be plausible and equally popular
to all examinees.
2. Avoid using ineffective distracters.
3.Each distracter should be chosen by at least 5% of
the examinees but not more than the key answer.
4. Revise distracters that are over-attractive.
Advantages of Multiple-Choice Test
Measures learning outcomes from remembering
to creating.
Scoring is highly objective, easy, and reliable.
Scores are more reliable than those from a subjective type
of test.
Measures a broad sample of content within a short
span of time.
Item analysis can reveal the difficulty of an item and
can discriminate the good and poor performing
students.
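The last advantage above mentions item analysis. As an illustration only (not taken from the slides), the sketch below computes the two indices usually involved: a difficulty index (proportion of examinees answering correctly) and a discrimination index (proportion correct in the upper-scoring group minus the proportion correct in the lower-scoring group). All responses are made up.

```python
def difficulty_index(item_correct):
    """item_correct: list of 1 (correct) / 0 (wrong) for every examinee on one item."""
    return sum(item_correct) / len(item_correct)

def discrimination_index(upper_correct, lower_correct):
    """1/0 responses of the high-scoring and low-scoring groups on the same item."""
    return sum(upper_correct) / len(upper_correct) - sum(lower_correct) / len(lower_correct)

upper = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # top scorers on the whole test (hypothetical)
lower = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # bottom scorers on the whole test (hypothetical)
print("difficulty:", difficulty_index(upper + lower))                 # 0.55
print("discrimination:", round(discrimination_index(upper, lower), 2))  # 0.5
```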
Disadvantages of Multiple-Choice Test
Time-consuming to construct a good item.
Difficult to find effective and plausible
distracters.
Scores can be influenced by the reading ability of
the examinees.
In some cases, there is more than one justifiable
correct answer.
Ineffective in assessing the problem solving skills of
the students.
Not applicable when assessing the student’s ability to
organize and express ideas.
Guidelines for Construction of Matching Type
1.The options and descriptions must be short
and homogeneous.
2.The descriptions must be written on the left side
and marked with Column A and the options must be
written at the right side and marked with Column B.
3.There should be more options than descriptions
or indicate in the directions that each option may be
used more than once.
4. Matching directions should specify the basis for
matching.
5. Avoid too many correct answers.
Guidelines for Construction of Matching Type
6.When using the name, always
include the complete name.
7.Use numbers for the descriptions and capital letters
for the options
8.Arrange the options in
chronological or alphabetical order.
9. The descriptions and options must be written on one
page.
10. A minimum of three and a maximum of seven items
for elementary and a maximum of seventeen
items for secondary and tertiary levels.
Advantages of Matching Type
•It is simpler to construct than the multiple-choice type
of test.
•It reduces the effect of guessing compared to multiple
choice or true-or-false type of tests.
• It is appropriate to assess the association between facts.
•Provides easy, accurate, efficient, objective, and
reliable scores.
• More content can be covered in a given test.
Disadvantages of Matching Type
•It measures only simple recall or memorization of
information.
•It is difficult to construct due to problems in
selecting descriptions and options.
•It assesses only low level of cognitive domain such
as knowledge and comprehension.
Guidelines for Construction of True-or-False Test
1. Avoid writing very long statements.
2. Avoid trivial questions.
3.It should contain only one idea in each item except
for a statement showing cause and effect.
4. Avoid opinion-based statements.
5. Avoid using negative or double negative.
6.Avoid specific determiner such as never, always, all
and none for they tend to appear in the statements
that are false.
Guidelines for Construction of True-or-False Test
7.Avoid specific determiners such as some, sometimes
and may for they tend to appear in the statements that
are true.
8. Avoid grammatical clues.
9. Avoid statements directly taken from the book.
10.Avoid arranging the
statements in patterned order (TTTTTT,
FFFFFF, TFTFTF).
11. Directions should indicate where and how the
students should mark their answers.
Advantages of True-or-False Test
• It covers a lot of content in a short span of time.
•It is easier to prepare compared to the multiple-

choice and matching type of test.


•It is easier to score because it can be scored
objectively compared to test that depends on the
judgment of the rater/s.
• It is useful when there are two alternatives only.
• The score is more reliable than essay test.
Disadvantages of True-or-False Test
•Limited only to low levels of thinking skills such as
knowledge and comprehension, or recognition and recall
of information.
• High probability of guessing the correct answer.
Guidelines for Construction of Completion/Short Answer
Test
1.The item should require a single
answer or brief and definite statement.
2.Be sure that language used in the statement is
precise and accurate in relation to the subject matter
being tested.
3. Be sure to omit only key words.
4.Do not leave blank at the beginning
or within the statement.
Guidelines for Construction of Completion/Short Answer
Test
5. Use direct questions rather than
incomplete statements.
6.Be sure to indicate units in which to be expressed
when the statement requires numerical answer.
7.Be sure that the answer the student is required to
produce is factually correct.
8. Avoid grammatical clues.
9. Do not select textbook sentences.
Advantages of Completion/Short Answer Test

• It covers a broad range of topics in a short span of time.


•It is easier to prepare and less time consuming
compared to multiple choice and matching type of test.
•It can assess effectively the lower level of
Bloom’s taxonomy.
• It reduces the possibility of guessing the correct answer.
Disadvantages of Completion/Short Answer Test
•It is only appropriate for questions that can be
answered with short responses.
•There is difficulty in scoring when questions are not
prepared properly and clearly.
•It can assess only knowledge, comprehension
and application in Bloom’s taxonomy of cognitive domain.
•It is not adaptable in measuring complex
learning outcomes.
• Scoring is tedious and time consuming.
Essay
It consists of a few questions wherein the examinee
is expected to demonstrate the ability to recall
factual knowledge, organize knowledge, and present
knowledge in logical and integrated answers.

Advantages of Extended Response Essay:


•Demonstrate learning outcomes at synthesis
and evaluation levels.
•Provides more freedom to give responses to the
question and provide creative integration of ideas.
Disadvantages of Extended
Response Essay:
• More difficult to construct
• Scoring is time consuming
Advantages of Restricted Response Essay:
• It is easier to prepare questions
• It is easier to score
• It is more directly related to specific learning outcomes

Disadvantages of Restricted Response Essay:


•It provides little opportunity for students to organize
ideas, integrate materials, and develop new patterns.
•It measures learning outcomes at
comprehension, application and analysis levels.
Guidelines for Construction of Essay Test
1.Construct essay questions to measure complex
learning outcomes and relate them to the
outcomes to be measured.
2.Formulate essay questions that present a clear task
to be performed.
3.An item should be stated precisely and it must
clearly focus on the desired answer.
4.All students should be required to answer the
same question.
5.Number of points and time
spent in answering the question must be
indicated in each item.
6.Specify the number of words, paragraphs, or the
number of sentences for the answer.
7.The scoring system must be discussed or presented
to the students.
Advantages of Essay Test
• It is easier to prepare and less time-consuming.
• It measures higher-order thinking skills (HOTS).
•It allows students’ freedom to express individuality in
answering the question
• The students have a chance to express their own ideas.
• It reduces guessing answer
• It presents more realistic task
• It emphasizes on integration and application of ideas
Disadvantages of Essay Test
•It cannot provide an objective measure of
the achievement of the students
•It needs so much time to grade and prepare
scoring criteria
• The scores are usually not reliable
• It measures a limited amount of objectives and contents
• Low variation in scores
• It encourages bluffing
Suggestions for Grading Essay Test
•Decide on the policy for dealing with incorrect, irrelevant,
or illegible responses.
• Keep the scores of previously read items out of sight
•The student’s identity should remain anonymous
while his/her paper is being graded
•Read and evaluate each student’s answer to the
same question before grading the next question.
• Provide students with general grading criteria
• Use analytic or holistic scoring
•Answer the test question yourself by writing the
ideal answer.
• Write your comments on their papers.
