PRACTITIONER RESEARCH IN HIGHER EDUCATION 1 (1)
Student assignment feedback
Practitioner Research
in Higher Education
Copyright © 2007
University of Cumbria
Vol 1 (1): pages 46-50
Mark Plater
Bishop Grosseteste University College
Lincoln
LN1 3DY
Abstract
This paper presents the results of a small-scale action research project in which
a group of seven Year 2 undergraduate initial teacher education (ITE) students
experienced three interventions concerned with feedback on essay assignments:
provision of traditional written tutor feedback; completion of a tutor feedback
analysis worksheet; and an experience of peer marking, with an opportunity for
resubmission of work. After each intervention, students were asked to reflect on
its effectiveness. The paper presents a brief overview of recent research on the
topic of assignment feedback, and the results of this project are considered in
that context.
Introduction
After hours of concentrated work I would often wonder, just what, if anything, all of this detail meant to Joe
Student-Teacher. I had read, corrected, and offered suggestions for improvement, then completed a six-point
analysis, along with key strengths and future targets. Was I wasting my time? When I was his age I cared only
about the final grade. All I really wanted to know was, had I done enough to get me through and on to the
next hurdle towards qualification?
One year into my work as an HE lecturer, and needing to find a topic for my Postgraduate Certificate in
Teaching and Learning in Higher Education (PgCTLHE) action research, I decided that this would be a helpful
area for further exploration. My core question was this:
How can I ensure that my students take seriously, and learn from, my written assignment feedback?
And, as a corollary: what could I learn from my students about giving better feedback?
My procedural method was to experiment with various feedback interventions, and then to question students
about which of these was most effective.
Literature review
There is a growing literature on the subject of student assessment and feedback. Swann and Arthurs (1998),
Ecclestone (1988), and Becker et al (1968) all claim that students have an instrumental view of learning,
seeing assignments as obstacles to be overcome in the pursuit of a university qualification. Ding’s 1998 work
concludes that even if tutor comments are read, little is actually done with them to affect future learning.
More recent work by Higgins et al (2002) proposes that, although students may rightly be considered as
having an instrumental or mechanistic view of education, they are actually conscientious consumers, who
really do care about the learning process. My own experience with a small group of students at St Martin’s
College, now the University of Cumbria, Lancaster, was to confirm this more optimistic perspective.
Another body of research tries to understand and explain the possible reasons for the dissonance between
what is written by tutors and what is received by students. Winter (1993) points out that we cannot assume
that feedback is a straightforward ‘transmission of information’. Rather the tutor is in a position of both
power and authority. Chanock (2000) shows that certain words and ideas (such as 'too much description;
not enough analysis') carry quite different meanings for different tutors and their students, and indeed
across different academic disciplines. Rust (2003) and Higgins (2000) show that much that is presented
by tutors as learning objectives and assessment criteria is of a complex, tacit nature, which tutors find very
difficult to pass on to students, and students find very difficult to grasp. Lea and Stierer (2000) describe this
as a specific form of ‘academic literacy’ which academics assume, but that students struggle to understand.
Many of the more recent studies on assessment are concerned with the importance of getting students
engaged with the processes of assessment. Elwood and Klenowski (2002) propose that only as students
themselves participate in the typical activities of academic communities of shared practice can they be
absorbed into the culture of those communities. So, for instance, as students engage in peer assessment of
ideas and practise other conventions used by the community, they become inducted as full participants
and practising members. Rust (2003), in a study of a Business Studies group, suggests that activities
which actively engage students with assignment feedback result in improved grades in other similar
assignments. Bloxham and West (2004) suggest that students, with appropriate preparation, are able to use
marking schemes accurately, and that peer marking helps them to better understand what is required in set
tasks. Furthermore, Bloxham and West claim that students are very positive about the experience, and gain
a greater appreciation for the usefulness of marking criteria. Van den Berg et al (2006) and Rust et al (2003)
both warn, however, that peer feedback is likely to focus on content and style rather than structure, and so
may not be relied on to fully replace assessment by the tutor.
Development of the action research
My initial research proposal consisted of three interventions and then an analysis of the impact of each. Firstly,
students would be provided with essay feedback in the format traditionally used with our secondary Religious
Education (RE) trainees: a typed, or sometimes hand-written, feedback sheet with a breakdown of the
assignment into five or six elements, each graded on a scale from Very Good to Limited, followed by a section
for strengths of the assignment and another for targets for future work. Secondly, they would receive oral
tutorial feedback in addition to the traditional written feedback. Thirdly, they would be given a
task requiring them to engage with the feedback from several past assignments, by asking what they could
learn from past feedback, and requiring them to set targets for future work.
In the course of my research, however, I decided to replace the oral feedback with a peer-marking activity.
This was largely on the basis of time considerations (the time taken to organise and offer individual tutorials),
but also because peer-marking as an activity seemed to hold several other benefits for trainee teachers as a
means of inducting them into the community of shared practice (Elwood and Klenowski, 2002) referred to
above. I hoped that it would offer students practice in assessment and in giving feedback; allow them to
become engaged with the use of learning outcomes and marking criteria; nurture skills and attitudes
in collaborative working; encourage self-assessment and self-evaluation; and foster greater ownership of
their own and others' learning.
This new activity replicated a similar intervention carried out at the University of St Andrews in 1995 (Juwah
et al, 2004, Case Study 6, pp. 28-30). Students were asked to grade and give feedback on an anonymous piece
of work by a fellow student. Detailed guidance for the activity was provided, including an ethical framework,
proposals for use of the marking criteria and suggestions for procedure. The peer-marked work was then
returned to its owner, and the students were allowed a further period of time to resubmit their work if they so
wished. As an added incentive, they were informed that their best piece of work (out of the original or the
resubmitted one) would count as their final grade. In other words, they had nothing to lose from the activity,
but possibly something to gain.
Results of the three interventions
Rather unexpectedly, feedback after the first intervention (traditional written feedback from the tutor) revealed
that the majority of students (92% of the five Year 2 undergraduate and twenty postgraduate students surveyed)
found the feedback helpful or very helpful, and 88% said that they read and re-read the feedback very carefully.
In addition, all of them could
remember aspects of the feedback, and all of them were able to note down one specific thing from the
feedback that they remembered. Students said that they felt helped (20), encouraged (18), pleased (11),
and relieved (4) by the feedback. These results were not as I had expected and were not in keeping with the
research described above. However, this was the first piece of RE tutor feedback received by both groups of
students and may have been exceptional in that respect.
My second and third interventions were with the smaller group of seven Year 2 BA/QT undergraduate students
only. The second intervention consisted of a worksheet activity in which students summarised the feedback
from three or more past essays, identified any elements of the feedback which they could not understand,
and then identified 2-3 specific things which they could do to help improve their grades on future essays.
I also made copies of the actual tutor feedback used, so that I could assess the accuracy of the students’
own summaries. Figure 1 shows a typical example of the summary of tutor feedback written by one of the
students, and, alongside it, my own summary of the same tutor feedback. The student summaries provided
a generally accurate reflection of what tutors had written. In the example provided, it can be seen that the
student had identified the main gist of what tutors said, even if s/he had missed some of the specific details
(shown in italics in the tutor summary section). In response to the question about elements of the feedback
which students had not understood, 4 references were made to legibility, 3 to inability to understand the
meaning of tutor comments, and 1 to an inconsistency between the grade given and the accompanying
comments.
Figure 1 Chart showing the results of student 3 for intervention 2, question 1, in which students were asked to
summarise the tutor feedback provided on three or more past essays.

Student 3 (Student summary)
The feedback says that…
• Well research and written
• Organised. Should argue more. Light Bibliography.
• Good research/referencing. Attention required in areas of primary sources/literacy.

Student 3 (Tutor summary)
The feedback says that…
• Well researched and written. Good knowledge/understanding and critical discussion.
• Clearly organised. Bibliography a little light. Read more, to uncover your position in order to critically
defend it, and to penetrate issues deeper so that you can argue alternative positions.
• Detailed and well researched/referenced work. Pay attention to aspects of punctuation and sentence
structure. Engage more with primary sources and provide evidence of critical reflection.

Writing in italics indicates elements of tutor feedback which were less accurately summarised by the
student.
In my second questionnaire, each of the students concluded that this had been a helpful activity, and that it had
helped to clarify their thinking about what lecturers had said about their work. They were very positive about
such an activity being conducted with all students at the end of Year 1.
Unfortunately, my third intervention (voluntary peer marking) was carried out at a very busy time of year for
both students and tutor, and centred on an assignment for which three of the seven students sought, and
were granted, extensions. As a result, only two of the students carried out the full peer-marking activity, and
only one of them chose to resubmit the assignment. The resubmitted essay did not result in an improved grade,
although there was some improvement in its general literacy and presentation. Analysis of the peer
feedback provided by students confirmed the claims of Van den Berg et al (2006) and Rust et al (2003) that such
feedback is likely to focus on content and writing style rather than on issues of accuracy and essay structure.
A final questionnaire revealed that those students who did complete the activity found it very helpful. They were
also much more positive about completing the activity again than were those who had not fully participated.
Asked what they gained most (or thought they would gain) from the activity, students highlighted: the chance
to see how others had approached the essay (4); practice in using marking criteria (4); and the chance to
resubmit the work after feedback (3).
However, the most telling element of the questionnaire feedback for me was the responses given to the final
question on each of the last two questionnaires. After the second intervention, students were asked, “Which
of the following have you found important in helping you to learn from tutor feedback on your assignments?”
Figure 2 shows the responses.
Figure 2 Student responses to the question, 'Which of the following have you found important in helping you
to learn from tutor feedback on your assignments?' (number of responses in brackets)

• Opportunity to comment back to the tutor on the feedback (0)
• Tutor comments which relate directly to the course learning outcomes and marking criteria (2)
• Tutor comments which offer specific targets for improving future essays (3)
• Comments on the essay draft itself to show specific mistakes or weaknesses (1)
• Opportunities to discuss and compare tutor feedback with other students (0)
• Opportunities to discuss the feedback comments with the tutor concerned (2)
• Opportunities to see 'model' answers to the essay questions, in order to compare these with my own work (3)
• Opportunities to 'work with' the tutor feedback, such as the above exercise (2)
• Opportunities to discuss the essay in class after it is marked and returned (2)
• Feedback is received as soon as possible after the essay is completed (0)
• Other (please give example/s) (0)
In spite of previous claims that they had benefited from such engagement, here the students indicate a lack
of interest in activities requiring them to work with the feedback, and a clear preference for explicit feedback
from the tutor. Likewise, after the third intervention, in spite of claiming that the peer marking was (or could
be) beneficial, when asked to rate what would be most helpful for improving their essay-writing skills, their
preferences were, in order:
normal written tutor feedback; seeing ‘model’ answers to similar questions; tutorial feedback with the
tutor; more activities like the peer-marking one; class discussion on the marked/returned essays
Clearly, these students feel very dependent on the tutor’s feedback, and very uncertain about the benefits of
alternative individual or peer-related engagement.
Conclusions
It would be very unwise of me to attempt to draw any major conclusions from such a small-scale study as the
above. Likewise, the lack of full participation by students in my final intervention (peer marking) precludes any
firm claims about the potential benefits of such interventions. However, I would tentatively propose the
following, both from the evidence collected and from my own more qualitative (though informal) encounters
with the students. Firstly, this brief study accords with the research of Winter (1993), Chanock (2000),
Rust (2003) and others in finding that students struggle to understand the written feedback of tutors. Although
they are reasonably proficient at summarising feedback, there are some details of the feedback that they miss,
but, more importantly, there are some comments which they are aware of not understanding, even though
they may do nothing to pursue this with the tutor concerned. Secondly, the study supports Higgins et al’s
(2002) more optimistic view of students, that they are conscientious consumers, who are concerned about
improved learning rather than just seeking to achieve satisfactory grades. This second claim may seem to be
in contradiction to the first. However, in this study students indicate that they often do go back to tutors for
clarification of feedback - particularly if this relates to issues that were relevant to future modules. Where the
feedback was not followed up this was usually because the feedback was late in coming and the student was
now busy with other modules and other assignments. Thirdly, the work supports the claims of Van den Berg et
al (2006) and Rust et al (2003) that peer feedback is likely to provide feedback of a particular type, and so may
not be considered as an alternative to tutor assessment and feedback but as a complement to it.
Finally, the work accords with the conclusions of Bloxham and West (2004) that students are generally positive
about the experience of peer marking, believing that this supports their understanding of assessment tasks,
and gives them a greater appreciation of the usefulness of marking criteria. On the other hand, there is a
suggestion that students would not appreciate this as an alternative to tutor feedback, which they value most
highly. Rather, they appreciate it as an additional means of support in the task of writing college assignments.
Because of the somewhat incomplete nature of the third intervention in this study, I propose to experiment
further with the practice of peer marking, using it as a way of providing additional systematic feedback to
students for ongoing skill development in the writing of academic essays. I also plan to take this one stage
further, by allowing my students to engage in a more open discussion of their academic writing through the
use of academic writing workshops, much in the style used by creative writing teachers (Flann, 2006). These
sessions will be incorporated into the planned weekly programme, and will allow students space to reflect
together on the nature of good academic writing in their subject specialism.
References
Becker, H. et al. (1968). Making the grade: the academic side of college life. London: John Wiley.
Bloxham, S. and West, A. (2004). Understanding the rules of the game: marking peer assessment as a
medium for developing students' conceptions of assessment. Assessment and Evaluation in Higher Education.
29 (6). 721-733.
Chanock, K. (2000). Comments on essays: do students understand what tutors write? Teaching in Higher
Education. 5 (1). 95-105.
Ding, L. (1998). Revisiting assessment and learning: implications of student perspectives on assessment
feedback. Referred to in Higgins et al 2002 (below).
Flann, K. (2006). Are you talking to me? Using the 'workshop' method to energise student writing.
Workshop presentation at the St Martin's College, now the University of Cumbria, Learning and Teaching Fest,
June 2006.
Higgins, R. (2000). 'Be more critical': rethinking assessment feedback. Quoted in Bloxham and West
2004 (above).
Higgins, R. et al. (2002). The conscientious consumer: reconsidering the role of assessment feedback in
student learning. Studies in Higher Education. 27 (1). 53-64.
Juwah, C. et al. (2004). Enhancing student learning through effective formative feedback. Higher Education
Academy.
Lea, M. R. and Stierer, B. (eds) (2000). Student writing in higher education. Buckingham: OUP.
Rust, C. et al. (2003). Improving students' learning by developing their understanding of assessment criteria
and processes. Assessment and Evaluation in Higher Education. 28 (2). 147-164.
Swann, J. and Arthurs, J. (1998). Empowering lecturers: a problem-based approach to improving assessment
practice. Higher Education Review. 31 (2). 50-74.
Van den Berg, I. et al. (2006). Designing student peer assessment in higher education: analysis of written and
oral peer reports. Teaching in Higher Education. 11 (2). 135-147.
Winter, R. (1993). Education or grading? Arguments for a non-subdivided honours degree. Studies in Higher
Education. 18 (3). 90-116.