Module in Assessment 2 For 2nd Sem 2021 2022
ASSESSMENT OF STUDENT LEARNING 2
MODULE
by
DR. ANNELYN A. TUNG
DR. ALMA R. DEFACTO
Chapter I 21ST CENTURY ASSESSMENT
Learning Outcomes:
Pre-Test: (Write your answers on yellow pad paper and submit them on April 11, 2022
at the Main Campus. Drop them in the box with my name. Don't forget to write
your name, course, year and section, and subject on each page of your yellow
pad paper. Thank you very much.)
To thrive in this constantly changing and extremely challenging period, the
acquisition of 21st century skills is necessary. It is imperative that the educational
system sees that these skills are developed and honed before the learners graduate,
and that they are integrated into the program of each discipline. More than just
acquiring knowledge, applying it is important. To ensure that education has really
done its role, ways to measure or assess the learning process are necessary. Thus,
assessment processes and tools must be suited to the needs and requirements of the
21st century. In this lesson, the characteristics of 21st century assessment, how
assessment is used as one of the inputs in making instructional decisions, and
outcome-based assessment will be discussed.
Learning Outcomes:
Inevitably the 21st century is here, demanding a lot of change, development, and
re-engineering of systems in different fields for this generation to thrive. In the field
of education, most of the changes are focused on teaching and learning. Preparing
and equipping teachers to cater to the needs of 21st century learners is part
of the adjustments being done in the education system. Curricula are updated to
address the needs of the community in relation to the demands of the 21st century.
This aspect of teaching and learning has been given its share of focus, with the
various components and factors analysed and updated to ensure that students' learning
will be at par with the demands of the 21st century. Although a lot of changes have
been made to the different facets of education, some members of the
educational community are calling for a corresponding development or change in
educational assessment. Viewing educational assessment as an agent of educational
change is of great importance. This belief, coupled with the traditional focus on
teaching and learning, produces a strong and emerging imperative to alter
our long-held conceptions of these three parts: teaching, learning, and
assessment (Greenstein, 2012). Twenty-first century skills must build on the core
literacy and numeracy that all students must master. Students need to think
critically and creatively, communicate and collaborate effectively, and work globally
to be productive, accountable citizens and leaders. These skills, once honed,
must be assessed, not simply to get numerical results but, more so, to take the
results of assessment as a guide for further action. Educators need to focus on
what to teach, how to teach it, and how to assess it (Greenstein, 2012;
Schmoker, 2011).
1.2 Self-reflection - peer feedback and opportunities for revision will be a natural
outcome.
1.4 Informative - the desired 21st century goals and objectives are clearly stated and
explicitly taught. Students display their range of emerging knowledge and
skills. Exemplars routinely guide students toward achievement of targets.
Learning objectives, instructional strategies, assessment methods, and
reporting processes are clearly aligned. Complex learning takes time. Students
have opportunities to build on prior learning in a logical sequence. As students
develop and build skills (i.e., learning and innovation skills; information,
communication and technology skills; and life and career skills), the work
gets progressively more rigorous.
Demonstration of 21st century skills is evident and supports learning. Students show
the steps they go through and display their thought processes for peer and teacher
review.
During a Teaching Segment (information gathered | decision to be made):
• Observation of students' interaction
• Diagnosis of the types of errors the students made or erroneous thinking the students are using | Look for alternative ways to teach the material
• Identify if there are students who are not participating or acting appropriately

After a Teaching Segment (information gathered | decision to be made):
• How well students achieve the short- and long-term instructional targets | Strengths and weaknesses to be given as feedback to parents or guardians of students
• Grade to be given to each student for the lesson or unit, grading period, or end of the course | Effectiveness of teaching the lesson to the students
Based on what was presented, it can be inferred that there is a very close
relationship between assessment and instruction. The data from observation,
evidence, and other sources of information serve as the basis for the teacher to decide
what action he/she needs to take to help the learner achieve the desired learning
outcome. Note that the data used may come from informal assessments, such as
observation of the interaction between teacher and learner, or from formal ones, such
as giving an actual case or problem to solve, as mentioned in the example above.
CATEGORY PURPOSE
1. Placement Assessment Measures entry behavior
2. Formative Assessment Monitors learning progress
3. Diagnostic Assessment Identifies causes of learning problems
4. Summative Assessment Measures end-of-course achievement
On a greater scale, the use of assessment in decision-making is not confined to the
classroom; it extends to the whole education community. Results of assessment
may trigger updates to the existing curriculum and other policies governing the
school system. Or it may work the other way around, with planned changes or
developments in the school system guiding what changes in school assessments
are necessary and in what particular aspects.
Kubiszyn and Borich (2002) classified the different educational decisions into
eight (8) categories. These types of decisions are described briefly below.
3. Outcome-Based Assessment
Knowing what their teachers expect from them at the end of a particular lesson
helps learners meet those targets successfully. In turn, teachers who have set clear
targets for their lessons will be guided accordingly as they deliver the lesson
through instructional learning activities designed to meet the desired outcomes.
Thus, all assessment and evaluation activities must be founded on the identified
student intended learning outcomes (ILOs). These ILOs should be identified and
clarified with students so that the teaching-learning process is effective as the
teachers commence the learning activities through delivery of the lessons.
2. Focused on the learner: rather than explaining what the instructor will do in
the course, good learning outcomes describe knowledge or skills that the
student will employ, and help the learner understand why that knowledge
and those skills are useful and valuable to their personal, professional, and
academic future.
5. Good learning outcomes prepare students for assessment and help them
feel engaged in and empowered by the assessment and evaluation process.
During teaching, teachers not only have to communicate the information they
planned but also continuously monitor students’ learning and motivation in order to
determine whether modifications have to be made. Beginning teachers find this
more difficult than experienced teachers because of the complex cognitive skills
required to improvise and be responsive to students' needs while simultaneously
keeping in mind the goals and plans of the lesson. The informal assessment
strategies teachers most often use during instruction are observation and
questioning.
Observation
Effective teachers observe their students from the time they enter the
classroom. Some teachers greet their students at the door not only to
welcome them but also to observe their mood and motivation. Are Hannah
and Naomi still not talking to each other? Does Ethan have his materials with
him? Gaining information on such questions can help the teacher foster
student learning more effectively (e.g. suggesting Ethan goes back to his
locker to get his materials before the bell rings or avoiding assigning Hannah
and Naomi to the same group).
Questioning
Problem: Teachers' lack of objectivity about overall class involvement and
understanding. Teachers typically want to feel good about their instruction, so it is
easy to look for positive student interactions. Occasionally, teachers want to see
negative student reactions to confirm their beliefs about an individual student or
class. Strategy: Try to make sure you are not only seeing what you want to see.

Problem: Record keeping. Strategy: Keep records.
Anecdotal records often provide important information and are better than relying
on one’s memory but they take time to maintain and it is difficult for teachers to be
objective. For example, after seeing Joseph fall asleep the teacher may now look for
any signs of Joseph’s sleepiness—ignoring the days he is not sleepy. Also, it is hard
for teachers to sample a wide enough range of data for their observations to be
highly reliable.
Teachers also conduct more formal observations especially for students with special
needs who have IEPs (Individual Education Plans). An example of the importance of
informal and formal observations in a preschool follows:
The class of preschoolers in a suburban neighborhood of a large city has eight special
needs students and four students—the peer models—who have been selected
because of their well developed language and social skills. Some of the special needs
students have been diagnosed with delayed language, some with behavior disorders,
and several with autism.
The students are sitting on the mat with the teacher who has a box with sets of three
“cool” things of varying size (e.g. toy pandas) and the students are asked to put the
things in order by size, big, medium and small. Students who are able are also
requested to point to each item in turn and say “This is the big one,” “This is the
medium one,” and “This is the little one.” For some students, only two choices (big
and little) are offered because that is appropriate for their developmental level.
The teacher informally observes that one of the boys is having trouble keeping his
legs still, so she quietly asks the aide for a weighted pad that she places on the boy's
legs to help him keep them still. The activity continues and the aide carefully
observes the students' behaviors and records on IEP progress cards whether a child
meets specific objectives such as: "When given two picture or object choices, Mark
will point to the appropriate object in 80 percent of the opportunities." The teacher
and aides keep records of the relevant behavior of the special needs students during
the half day they are in preschool. The daily records are summarized weekly. If not
enough observations have been recorded for a specific objective, the teacher and
aide focus their observations more on that child and, if necessary, try to create
specific situations that relate to that objective. At the end of each month the
teacher calculates whether the special needs children are meeting their IEP
objectives.
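The monthly check against a criterion like "80 percent of the opportunities" is a simple tally. The sketch below is illustrative only; the helper function and the counts are invented, not drawn from the module:

```python
# Sketch of checking IEP progress-card tallies against an 80% criterion.
# The counts below are hypothetical examples, not data from the text.

def meets_criterion(correct, opportunities, criterion=0.80):
    """Return True if the child responded correctly in at least
    `criterion` of the recorded opportunities."""
    return opportunities > 0 and correct / opportunities >= criterion

# Hypothetical monthly summary for one objective:
correct, opportunities = 17, 20
print(meets_criterion(correct, opportunities))  # 17/20 = 85% -> True
```

If too few opportunities were recorded, the function simply returns False, mirroring the teachers' practice of gathering more observations before judging the objective.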
For example, if the goal is for students to conduct an experiment, then they should be
asked to do that rather than being asked about conducting an experiment.
Common problems
Selected response items are easy to score but are hard to devise. Teachers often do
not spend enough time constructing items and common problems include:
Table 2: Common errors in selected response items

True/False: Two ideas are included in the item.
Example: T/F: George H. Bush, the 40th president of the US, was defeated by
William Jefferson Clinton in 1992. (The first idea is false; the second is true,
making it difficult for students to decide whether to circle T or F.)

True/False: Irrelevant cues.
Example: T/F: The President of the United States is usually elected to that office.
(True items tend to contain qualifiers such as "usually" or "generally," whereas
false items contain terms such as "always," "all," or "never.")

Matching: Columns do not contain homogeneous information.
Example: Directions: On the line next to each US Civil War battle in Column A,
write the year or Confederate general from Column B.
Column A: Ft Sumter; 2nd Battle of Bull Run; Ft Henry
Column B: (mixes years and generals, so the information is not homogeneous)

Matching: Too many items.
Lists should be relatively short (4-7 items) in each column.

Matching: Responses are not in logical order.
In the example with Spanish and English words (Exhibit 1), the responses should be
in a logical order (there they are alphabetical). If the order is not logical, students
spend too much time searching for the correct answer:
1. Great Britain
2. Spain
3. France
4. Holland

Multiple Choice: Some of the alternatives are not plausible.
Example: Who is best known for their work on the development of the morality of
justice?
1. Gerald Ford
2. Vygotsky
3. Maslow
4. Kohlberg

Multiple Choice: Use of "all of the above."
If "all of the above" is used, then the other items must be correct. This means that a
student may choose it after recognizing only some of the alternatives as correct.
Directions: On the line to the right of the Spanish word in Column A, write the letter
of the English word in Column B that has the same meaning.
Column A
1. Casa ___
2. Bebé ___
3. Gata ___
4. Perro ___
5. Hermano___
Column B
A. Aunt
B. Baby
C. Brother
D. Cat
E. Dog
F. Father
G. House
While matching items may seem easy to devise, it is hard to create homogeneous
lists. Other problems with matching items, and suggested remedies, are in Table 2.
Multiple choice items are the most commonly used type of objective test item
because they have a number of advantages over other objective test items. Most
importantly, they can be adapted to assess higher-level thinking, such as application,
as well as lower-level factual knowledge. The first example in Exhibit 2 assesses
knowledge of a specific fact, whereas the second example assesses application of
knowledge.
There are several other advantages of multiple choice items. Students have to
recognize the correct answer, not just know that an answer is incorrect as in
true/false items. Also, the opportunity for guessing is reduced because four or five
alternatives are usually provided, whereas in true/false items students only have to
choose between two options. In addition, multiple choice items do not need
homogeneous material as matching items do. However, creating good multiple
choice test items is difficult, and students (maybe including you) often become
frustrated when taking a test with poor multiple choice items. Three steps have to be
considered when constructing a multiple choice item: formulating a clearly stated
problem, identifying plausible alternatives, and removing irrelevant clues to the
answer. Common problems in each of these steps are summarized in Table 3 (below).
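The reduced opportunity for guessing can be made concrete: the chance that a blind guess is correct is one over the number of alternatives. A minimal sketch (the item formats listed are just the ones discussed above):

```python
# Chance that a blind guess is correct for each item format discussed:
formats = {
    "true/false": 2,                   # two options
    "multiple choice (4 options)": 4,
    "multiple choice (5 options)": 5,
}
for name, n_alternatives in formats.items():
    print(f"{name}: {1 / n_alternatives:.0%}")
# true/false: 50%, 4-option MC: 25%, 5-option MC: 20%
```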
Formal assessment also includes constructed response items in which students are
asked to recall information and create an answer—not just recognize if the answer is
correct—so guessing is reduced. Constructed response items can be used to assess a
wide variety of kinds of knowledge and two major kinds are discussed: completion or
short answer (also called short response) and extended response.
Completion and short answer items can be answered in a word, phrase, number, or
symbol. These types of items are essentially the same only varying in whether the
problem is presented as a statement or a question (Linn & Miller 2005). Look at
Exhibit 3 for examples:
The teacher may expect the answer “in a log cabin” but other correct answers are
also “on Sinking Spring Farm,” “in Hardin County,” or “in Kentucky.” Common errors
in these items are summarized in Table 3.
Completion and short answer: There is more than one possible answer.
Example: Where was US President Lincoln born? The answer could be "in a log
cabin," "in Kentucky," etc.

Completion and short answer: Too many blanks are in the completion item, so it is
too difficult or doesn't make sense.
Example: In ________ theory, the first stage, ________, is when infants process
through their ________ and ________ ________.

Completion and short answer: Clues are given by the length of blanks in completion
items.
Example: Three states are contiguous to New Hampshire: ________ is to the West,
________ is to the East, and ________ is to the South.

Extended response: Choices are given on the test and some answers are easier than
others, creating equity problems. Testing experts recommend not giving choices in
tests because then students are not really taking the same test.
Extended response
Extended response items are used in many content areas and answers may vary in
length from a paragraph to several pages. Questions that require longer responses
are often called essay questions. Extended response items have several advantages
and the most important is their adaptability for measuring complex learning
outcomes— particularly integration and application. These items also require that
students write and therefore provide teachers a way to assess writing skills. A
commonly cited advantage to these items is their ease in construction; however,
carefully worded items that are related to learning outcomes and assess complex
learning are hard to devise (Linn & Miller, 2005). Well-constructed items phrase the
question so the task of the student is clear. Often this involves providing hints or
planning notes. In the first example below the actual question is clear not only
because of the wording but because of the format (i.e. it is placed in a box). In the
second and third examples planning notes are provided:
The owner of a bookstore gave 14 books to the school. The principal will give an
equal number of books to each of three classrooms and the remaining books to the
school library. How many books could the principal give to each classroom and to
the school library?
Show all your work on the space below and on the next page. Explain in words how
you found the answer. Tell why you took the steps you did to solve the problem.
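As a quick check of the arithmetic in this sample item (this check is not part of the original exercise), integer division gives the split directly:

```python
# 14 books split equally among 3 classrooms, remainder to the library.
books, classrooms = 14, 3
per_classroom, to_library = divmod(books, classrooms)
print(per_classroom, to_library)  # 4 books per classroom, 2 to the library
```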
Jose and Maria noticed three different types of soil, black soil, sand, and clay, were
found in their neighborhood. They decided to investigate the question, “How does
the type of soil (black soil, sand, and clay) under grass sod affect the height of grass?”
Plan an investigation that could answer their new question. In your plan, be sure to
include:
Prediction of the outcome of the investigation
Materials needed to do the investigation
Procedure that includes:
o logical steps to do the investigation
o one variable kept the same (controlled)
o one variable changed (manipulated)
o any variables being measured and recorded
o how often measurements are taken and recorded
Some people think that schools should teach students how to cook. Other people
think that cooking is something that ought to be taught in the home. What do you
think? Explain why you think as you do.
Planning notes
Choose One:
I think schools should teach students how to cook
I think cooking should be taught in the home
I think cooking should be taught in ____________ because
________________________.
A major disadvantage of extended response items is the difficulty in reliable scoring.
Not only do various teachers score the same response differently but also the same
teacher may score the identical response differently on various occasions (Linn &
Miller 2005). A variety of steps can be taken to improve the reliability and validity of
scoring. First, teachers should begin by writing an outline of a model answer. This
helps make it clear what students are expected to include. Second, a sample of the
answers should be read. This assists in determining what the students can do and if
there are any common misconceptions arising from the question. Third, teachers
have to decide what to do about irrelevant information that is included (e.g. is it
ignored or are students penalized) and how to evaluate mechanical errors such as
grammar and spelling. Then, point scoring or a scoring rubric should be used.
In point scoring, components of the answer are assigned points. For example,
students might be asked: What are the nature, symptoms, and risk factors of
hyperthermia?
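As an illustration of point scoring, each component of the model answer gets a fixed number of points and a response earns the points for the components it covers. The weights and component names below are hypothetical, not taken from the text:

```python
# Hypothetical point allocation for the hyperthermia item; the weights
# are invented for illustration, not specified in the module.
allocation = {"nature": 2, "symptoms": 4, "risk factors": 4}  # 10 points total

def point_score(components_covered):
    """Sum the points for each allocated component the answer covers."""
    return sum(allocation[c] for c in components_covered if c in allocation)

print(point_score({"nature", "symptoms"}))  # earns 2 + 4 = 6 of 10 points
```

A model-answer outline (step one above) is what makes such an allocation possible in the first place.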
Scoring rubrics
Assignment. Write about an interesting, fun, or exciting story you have read in class
this year. Some of the things you could write about are:
What happened in the story (the plot or events)
Where the events took place (the setting)
People, animals, or things in the story (the characters)
In your writing make sure you use facts and details from the story to describe
everything clearly. After you write about the story, explain what makes the story
interesting, fun or exciting.
Scoring Rubric
Teachers can use scoring rubrics as part of instruction by giving students the rubric
during instruction, providing several responses, and analyzing these responses in
terms of the rubric. For example, use of accurate terminology is one dimension of the
science rubric in Table 4. An elementary science teacher could discuss why it is
important for scientists to use accurate terminology, give examples of inaccurate and
accurate terminology, provide that component of the scoring rubric to students,
distribute some examples of student responses (maybe from former students), and
then discuss how these responses would be classified according to the rubric. This
strategy of assessment for learning should be more effective if the teacher (a)
emphasizes to students why using accurate terminology is important when learning
science rather than how to get a good grade on the test (we provide more details
about this in the section on motivation later in this chapter); (b) provides an
exemplary response so students can see a model; and (c) emphasizes that the goal is
student improvement on this skill not ranking students.
0 The student has no understanding of the question or problem. The response is completely
incorrect or irrelevant.
Performance assessments
These examples all involve complex skills but illustrate that the term performance
assessment is used in a variety of ways. For example, the teacher may not observe all
of the process (e.g. she sees a draft paper but the final product is written during out-
of-school hours) and essay tests are typically classified as performance assessments
(Airasian, 2000). In addition, in some performance assessments there may be no clear
product (e.g. the performance may be group interaction skills).
There are several advantages of performance assessments (Linn & Miller 2005). First,
the focus is on complex learning outcomes that often cannot be measured by other
methods. Second, performance assessments typically assess process or procedure as
well as the product. For example, the teacher can observe if the students are
repairing the machine using the appropriate tools and procedures as well as whether
the machine functions properly after the repairs. Third, well designed performance
assessments communicate the instructional goals and meaningful learning clearly to
students. For example, if the topic in a fifth grade art class is one-point perspective
the performance assessment could be drawing a city scene that illustrates one point
perspective. This assessment is meaningful and clearly communicates the learning
goal. This performance assessment is a good instructional activity and has good
content validity—common with well-designed performance assessments (Linn &
Miller 2005).
One major disadvantage with performance assessments is that they are typically very
time consuming for students and teachers. This means that fewer assessments can
be gathered so if they are not carefully devised fewer learning goals will be assessed
—which can reduce content validity. State curriculum guidelines can be helpful in
determining what should be included in a performance assessment. For example,
Eric, a dance teacher in a high school in Tennessee, learns that the state standards
indicate that dance students at the highest level should be able to demonstrate
consistency and clarity in performing technical skills by:
Eric devises the following performance task for his eleventh grade modern dance
class:
In groups of 4–6 students will perform a dance at least 5 minutes in length. The
dance selected should be multifaceted so that all the dancers can demonstrate
technical skills, complex movements, and a dynamic range (Items 1–2). Students will
videotape their rehearsals and document how they improved through self-evaluation
(Item 3). Each group will view and critique the final performance of one other group
in class (Item 4). Eric would need to scaffold most steps in this performance
assessment. The groups probably would need guidance in selecting a dance that
allowed all the dancers to demonstrate the appropriate skills; critiquing their own
performances constructively; working effectively as a team, and applying criteria to
evaluate a dance.
Another disadvantage of performance assessments is they are hard to assess reliably
which can lead to inaccuracy and unfair evaluation. As with any constructed response
assessment, scoring rubrics are very important. An example of holistic and analytic
scoring rubrics designed to assess a completed product are in Exhibit 4 and Table 4. A
rubric designed to assess the process of group interactions is in Table 5.
Score | Staying on task | Use of roles | Consideration of ideas
0 | Group did not stay on task, so the task was not completed. | Group did not assign or share roles. | A single individual did the task.
1 | Group was off-task the majority of the time, but the task was completed. | Group assigned roles but members did not use these roles. | Group totally disregarded comments and ideas from some members.
2 | Group stayed on task most of the time. | Group accepted and used some but not all roles. | Group accepted some ideas but did not give others adequate consideration.
3 | Group stayed on task throughout the activity and managed time well. | Group accepted and used roles and actively participated. | Group gave equal consideration to all ideas.
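Analytic rubric levels like these are often summed into an overall score. The sketch below is a hypothetical illustration; the dimension names paraphrase the rubric's columns and are not labels from the text:

```python
# Combining analytic rubric levels (0-3 per dimension) into a total.
# Dimension names are paraphrased from the group-process rubric.
DIMENSIONS = ("staying on task", "using roles", "considering ideas")

def total_score(levels):
    """levels: dict mapping each dimension to a 0-3 rubric level."""
    assert set(levels) == set(DIMENSIONS)
    assert all(0 <= v <= 3 for v in levels.values())
    return sum(levels.values())  # overall score from 0 to 9

print(total_score({"staying on task": 3, "using roles": 2,
                   "considering ideas": 2}))  # prints 7
```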
This rubric was devised for middle grade science but could be used in other subject
areas when assessing group process. In some performance assessments several
scoring rubrics should be used. In the dance performance example above Eric should
have scoring rubrics for the performance skills, the improvement based on self-
evaluation, the team work, and the critique of the other group. Obviously, devising a
good performance assessment is complex and Linn and Miller (2005) recommend
that teachers should:
Portfolios
“A portfolio is a meaningful collection of student work that tells the story of student
achievement or growth” (Arter, Spandel, & Culham, 1995, p. 2). Portfolios are
a purposeful collection of student work not just folders of all the work a student
does. Portfolios are used for a variety of purposes and developing a portfolio system
can be confusing and stressful unless the teachers are clear on their purpose. The
varied purposes can be illustrated as four dimensions (Linn & Miller 2005):
A final distinction can be made between a finished portfolio, perhaps used for a job
application, and a working portfolio that typically includes day-to-day work
samples. Working portfolios evolve over time and are not intended to be used for
assessment of learning. The focus in a working portfolio is on developing ideas and
skills so students should be allowed to make mistakes, freely comment on their own
work, and respond to teacher feedback (Linn & Miller, 2005). Finished portfolios are
designed for use with a particular audience and the products selected may be drawn
from a working portfolio. For example, in a teacher education program, the working
portfolio may contain work samples from all the courses taken. A student may
develop one finished portfolio to demonstrate she has mastered the required
competencies in the teacher education program and a second finished portfolio for
her job application.
Portfolios used well in classrooms have several advantages. They provide a way of
documenting and evaluating growth in a much more nuanced way than selected
response tests can. Also, portfolios can be integrated easily into instruction, i.e. used
for assessment for learning. Portfolios also encourage student self-evaluation and
reflection, as well as ownership for learning (Popham, 2005). Using classroom
assessment to promote student motivation is an important component of
assessment for learning which is considered in the next section.
However, there are some major disadvantages of portfolio use. First, good portfolio
assessment takes an enormous amount of teacher time and organization. The time is
needed to help students understand the purpose and structure of the portfolio,
decide which work samples to collect, and to self-reflect. Some of this time needs to
be conducted in one-to-one conferences. Reviewing and evaluating the portfolios out
of class time is also enormously time consuming. Teachers have to weigh if the time
spent is worth the benefits of the portfolio use.
Second, evaluating portfolios reliably and eliminating bias can be even more
difficult than in a constructed response assessment because the products are more
varied. The experience of the state-wide use of portfolios for assessment in writing
and mathematics for fourth and eighth graders in Vermont is sobering. Teachers used
the same analytic scoring rubric when evaluating the portfolio. In the first two years
of implementation samples from schools were collected and scored by an external
panel of teachers. In the first year the agreement among raters (i.e. inter-rater
reliability) was poor for mathematics and reading; in the second year the agreement
among raters improved for mathematics but not for reading. However, even with the
improvement in mathematics the reliability was too low to use the portfolios for
individual student accountability (Koretz, Stecher, Klein & McCaffrey, 1994). When
reliability is low, validity is also compromised because unstable results cannot be
interpreted meaningfully.
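Inter-rater reliability of the kind reported in the Vermont study is commonly quantified as percent agreement or Cohen's kappa (which corrects agreement for chance). The sketch below is illustrative only; the ratings are invented, and these functions are not part of the module:

```python
# Percent agreement and Cohen's kappa for two raters' scores.
# The ratings below are hypothetical, not data from the Vermont study.

def percent_agreement(r1, r2):
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohen_kappa(r1, r2):
    """Agreement corrected for chance; undefined if chance agreement is 1."""
    n = len(r1)
    po = percent_agreement(r1, r2)                      # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)      # chance agreement
             for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

rater1 = [4, 3, 3, 2, 4, 1, 3, 2]
rater2 = [4, 3, 2, 2, 4, 2, 3, 3]
print(round(percent_agreement(rater1, rater2), 2))  # 0.62
print(round(cohen_kappa(rater1, rater2), 2))        # 0.47
```

Low kappa, like the first-year Vermont results, signals that scores depend too much on who did the rating to support individual student accountability.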
If teachers do use portfolios in their classroom, the series of steps needed for
implementation is outlined in Table 6. If the school or district has an existing portfolio
system these steps may have to be modified.
Table 6: Steps in implementing a classroom portfolio program
1. Make sure students own their portfolios. | Talk to your students about your ideas
of the portfolio, the different purposes, and the variety of work samples. If possible,
have them help make decisions about the kind of portfolio you implement.

3. Decide what work samples to collect. | For example, in writing, is every writing
assignment included? Are early drafts as well as final products included?

4. Collect and store work samples. | Decide where the work samples will be stored.
For example, will each student have a file folder in a file cabinet, or a small plastic
tub on a shelf in the classroom?

6. Teach and require students to conduct self-evaluations of their own work. | Help
students learn to evaluate their own work using agreed-upon criteria. For younger
students, the self-evaluations may be simple (strengths, weaknesses, and ways to
improve); for older students a more analytic approach is desirable, including using
the same scoring rubrics that the teachers will use.
Determining whether students are achieving the educational outcomes faculty have
established for graduates of their programs is a critical part of the teaching-learning
process. UNC-Chapel Hill requires academic programs to develop student learning
outcomes assessment plans and to report on how they have used assessment results
to enhance their programs. Appendix A displays the policy adopted by the University
to ensure that these processes take place regularly for purposes of continuous
improvement as well as accountability. There are a number of reasons for measuring
and assessing student learning outcomes at the program level:
While often these terms are used interchangeably, an outcome differs from a goal or
objective in terms of specificity and focus. Learning outcomes describe measurable
knowledge, skills, and behaviors that students should be able to demonstrate as a
result of completing the program. Goals and objectives are typically broader
statements of program purpose that are more difficult to measure, such as
“providing a comprehensive liberal arts education,” “producing quality scientists for
the twenty-first century,” etc.
To be useful in this context, however, the performance data would need to: (1) be
rated using agreed-upon, standard criteria, and (2) be “rolled up” and analyzed at the
program level. More on how to assess student performance so that it can be used to
evaluate the program is contained in later sections of this document.
What is a “Program”?
For purposes of student learning outcomes assessment, the University of North
Carolina at Chapel Hill has defined a “program” as a credit-bearing course of study
that results in a degree or a stand-alone professional certificate.
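The "roll-up" of performance data described above can be sketched as a small aggregation. This is a minimal sketch under assumptions: the record fields, the 1-4 rubric scale, and the target mean of 3.0 are invented for illustration, not UNC-Chapel Hill policy.

```python
# Hypothetical sketch: each record is one student's rating on one program
# learning outcome, scored with the same agreed-upon rubric (1-4 scale).
# Field names, scores, and the 3.0 target are illustrative assumptions.
from collections import defaultdict

ratings = [
    {"student": "s1", "outcome": "written_communication", "score": 3},
    {"student": "s2", "outcome": "written_communication", "score": 4},
    {"student": "s1", "outcome": "research_design", "score": 2},
    {"student": "s2", "outcome": "research_design", "score": 3},
]

def roll_up(records, target=3.0):
    """Average the standardized ratings per outcome and flag any outcome
    whose program-level mean falls below the target."""
    by_outcome = defaultdict(list)
    for r in records:
        by_outcome[r["outcome"]].append(r["score"])
    return {
        outcome: {
            "mean": sum(scores) / len(scores),
            "n": len(scores),
            "meets_target": sum(scores) / len(scores) >= target,
        }
        for outcome, scores in by_outcome.items()
    }

summary = roll_up(ratings)
```

The point of the sketch is the two requirements from the text: every rating uses the same standard criteria, and analysis happens per outcome across the whole program rather than per student.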
The following guidance is provided to help determine what programs are required to
submit assessment plans and reports:
• Include all undergraduate, master’s, and doctoral degree major programs, and
free-standing certificate programs. Exclude certificate programs consisting only of
courses from existing degree programs offered to matriculated students.
• Within degree programs, the focus of learning outcomes assessment is the major.
Minors, concentrations, program tracks, and certificates offered only to degree-
seeking students may be assessed separately at the discretion of the dean or chair,
but the results do not need to be reported outside of the department.
• A program with multiple degrees at the same level and a common core curriculum
(e.g., BA and BS in Biology) may submit one report, but should include at least one
unique measure for each degree.
• Graduate programs that only admit students to pursue a doctoral degree but are
approved to award a master’s degree as students progress toward the doctorate
may prepare one report. The outcomes should reflect what students know or can do
upon completion of the doctoral degree.
• Programs with residential and distance education versions of the same degree may
submit joint or separate reports, but either way, need to present evidence that
graduates demonstrate equivalent knowledge and skills, regardless of mode of
delivery.
An assessment plan has three components:
1. a mission statement,
2. intended learning outcomes, and
3. a description of the methods that will be used to gather data to measure student
achievement of each outcome.
Begin with a brief statement of the mission and general goals for the program
• A brief description of the purpose of the program (usually a paragraph)
• Can include statements about:
o Educational values;
o Major bodies of knowledge covered in the curriculum;
o What the program prepares students for (e.g., graduate study, professional positions)
• An example taken from UNC-Chapel Hill websites: Curriculum in Toxicology
(Ph.D.). The Curriculum in Toxicology is an interdisciplinary program dedicated to
the development of future scientists who are knowledgeable in the basic principles of
toxicology and environmental health sciences, with in-depth experience in the design,
execution, and publication of research relevant to toxicology and human health. An
annual assessment report describes learning outcomes and how they have been used
for program improvement.
Identify the intended student learning outcomes of the program
• The faculty should clearly define learning outcomes for each major in terms
of what a student should know, think, do, or value as a result of completing the
program. Note that the focus is on measuring what students actually learn, not what
the faculty intend to deliver.
• Learning outcomes must be stated in measurable terms. Producing
“educated persons,” “ethical individuals,” or “good citizens” might be worthy
goals, but such terms need to be operationalized in order to be measured.