Perceptions of Peer Assessment in University Teamwork
P. Willmot*, K. Pond**, S.P. Loddington† and O.A. Palermo**
* Loughborough University/School of Mechanical and Manufacturing Engineering, UK (E-mail: P.Willmot@lboro.ac.uk)
** Loughborough University/Business School, UK
Index Terms: Peer Assessment, Teamwork, Project Based Learning.
I. ABSTRACT
Self and peer assessment systems can provide a convenient solution to the very real problem of awarding
fair marks for team members undertaking group assignments. The developing methodology has numerous
benefits for enhanced student learning and transferable skill development. Peer Assessment is not, however,
universally embraced: critics cite potential drawbacks including collusion and unfair or vindictive marking.
This paper provides a comprehensive review of the state of the art and then describes a web-based peer
assessment tool from Loughborough University. The paper goes on to outline a research methodology that
embraced interviews, a web survey and data analysis, through which staff and student experiences and
perspectives were collected.
Whilst much of the data tends to confirm, update and strengthen previous literature on this subject,
important new insights are gained into the thoughts of students who appear to recognise and value the
fairness they believe peer mark moderation can offer. Statistical data verifies the lack of collusion associated
with the web-based system and students comment positively on qualities of anonymity and the relatively
accurate recognition of the different levels of achievement within teams. Individual and group marking
behaviours also suggest that most peer review marking is “honest” but can be influenced by group size,
selection method and the year of study.
II. INTRODUCTION
From cars to computers, aircraft engines to vacuum cleaners, products and projects worked on by
engineers in industry today are created by skilled teams of people. Today’s engineer must be well grounded
in appropriate science but must also operate efficiently in the world of problem solving, decision making,
and cooperative enquiry while functioning effectively as a member of a team. It has been said that an
engineer is hired for his technical skills and fired for his people skills[1]. Notwithstanding the inherent
benefits of practising working with others and learning to deal with human attitudes and frailties, it is well
researched that collaborative forms of learning, such as group tasks, can help foster lifelong learning
skills [2]. Little wonder, then, that universities are concentrating their efforts ever more on team projects and
facilitated problem-based learning in the twenty-first century. Group work is commonly seen as an
activity that stimulates students, and its use within higher education courses is correspondingly widespread:
at Loughborough University, for example, group work now takes place in over 500 modules, representing
every department at the institution [3].
Of all the problems associated with team-based education, the difficulty of assessing individuals precisely
is perhaps the greatest. Academics who feel comfortable setting examinations and individual coursework
assignments are often deterred from devising team assessments because the student-centred learning
approach dictates that they have only limited knowledge of the real contribution that each team member
made to the team effort.
The most common system of assessment for group work has been to mark the finished product of the
project and award that group mark to everyone in the group. This has, however, been a cause for concern
for many educators because it takes no account of group members' different levels of effort and the
quality of their contributions. The unfairness of an identical mark awarded to every group member has been
widely recognised and a great deal of effort has been put into correcting this unjust system. Educators
have tried various methods to resolve this, one of which is 'self and peer' assessment.
III. PEER ASSESSMENT
Context
In this context, the term '(self and) peer assessment' describes the process undertaken by students to assess
the performance and contribution of themselves and their peer group in relation to a group task. This has
been described by some as peer-moderated marking. Peer assessment is similar to, yet distinct from, peer
review, in which students typically assess a piece of work produced by a different individual or group.
Whilst the two are separate, there are similarities in the feedback and learning opportunities on offer
because they share the common desirable element that students are actively engaged in the assessment
process.
Falchikov [4] identifies two distinct types of peer assessment: peer assessment of product and peer
assessment of performance (also referred to as peer assessment of process). Peer assessment of product is
where students assess other students' work: either a finished product, in the case of summative assessment,
or work in progress in the case of formative assessment; hence peer assessment can be used summatively or
formatively within a course. Peer assessment of performance is where students assess how their peers
performed during the task, i.e. their contribution to the group process. Peer review is almost always focussed
on product, while peer mark-moderation of group work could conceivably address product and/or
performance, as defined by the criteria, but is most commonly focussed on the contribution/performance of
students working within the group. Clearly, the tutor may choose to assess once
or more than once at the end of the work or at various stages along the road. There are a number of examples
that show peer assessment can be used formatively or summatively, with the latter being the most reported in
the literature and from our investigations. Some examples of summative peer assessment include case studies
by Robinson[5] and Loddington et al[6], while Wheater et al.[7] compare two case studies; one summative
and one formative, to show the success for both types of assessment.
The Benefits of Peer Assessment
It is, of course, a simple strategy to treat teams of students like teams in the sporting arena, where the whole
team benefits, or otherwise, equally from the team's promotion or relegation in the league. Following this
argument would suggest that, in the case of team assignments, individuals must be prepared to entrust their
future to the collective outcome. This argument is not easily accepted by either students or teaching quality
assessors. The fairness of allocating equal marks to all team members was questioned by Willmot &
Crawford [8], who concluded that this was not the correct approach and stated the common belief that "a
lazy student might benefit from the efforts of team-mates or particularly diligent students may have their
efforts diluted by weaker team members". Pond et al [9] found that "bunched group marks often show a low
standard deviation and the use of peer review {assessment} can help to spread" the marks awarded, a spread
that is generally seen as desirable in academia.
There is much concern, not least amongst the student body, over 'free riders' in group work. The term
'free rider' is frequently used to describe a student who relies on others to carry out a large proportion of the
group work. Unfortunately, tutors or project supervisors cannot be relied upon to identify and penalise free
riders, who may present very well in front of the teacher but shy away from any real contribution.
Moreover, it can be very difficult, or near impossible, for a tutor to assess students' individual effort on a
group task when the majority of the work necessarily takes place during non-contact periods. One solution,
in an attempt to make assessment fairer, is to involve students in the assessment process.
Peer assessment allows us to provide students with individual scores for group work activities. Some
rightly regard assessment as the responsibility of academics and tutors, whose job it is to assess students.
Whilst this may be true, it would be extremely difficult, or near impossible, for an academic to assess each
member's contribution to a group output or task. Race [10] (2001, p.17) identifies that "when it comes to
measuring an individual's relative contribution to group work, the only people who really know what the
relative contributions are, are the students themselves". Involving students in the assessment allows teachers
to gain an insight into group dynamics and to measure things that are not possible without student
assistance. It has indeed been argued that tutor assessment of this type of work is not sufficiently valid and
that students are better placed to assess their own or each other's work [10]. The validity of peer assessment
has mostly been evaluated by surveying participants, and various studies find the assessment to be fair
[9,11].
In addition to providing a convenient solution to the problem of unfair group marks, peer assessment has
been recognised as contributing to student-centred learning. Self and peer assessment systems can have
numerous benefits for enhanced student learning and skills development. Russell et al [12] explore the
potential benefits of group work and identify that peer assessment can improve a number of transferable
skills including "decision making, negotiation, communication, empathy and delegation", while Falchikov
[4] described improved reflective skills and higher levels of thinking. In a wider sense, Boud et al [2]
declared that “assessment is the single most powerful influence on learning in formal courses”.
Somervell[13] embraces the need for a shift in educational methods and argues that self, peer and
collaborative assessment should be part of a process of change towards a student-centred approach. Such a
strategic leap highlights the significance of designing assessments that stimulate the student learning process
whilst achieving the aims and objectives of the course. In respect of assessment, it requires a change in
emphasis from the norm-referenced to the criterion-referenced, from the purely summative to the formative
and summative, from external to internal and from the assessment of product only to the assessment of
process as well.
Potential Drawbacks
Peer Assessment is not universally embraced as a solution: critics cite potential drawbacks including
collusion, and unfair or vindictive marking. Nevertheless, there are some powerful administrative drivers that
continue to attract academics to both team assignments and peer assessment; these were identified by
Hughes[14]. One commonly cited drawback is the need to prepare students for peer assessment and to
properly explain the assessment process. Discussions of the criteria beforehand might be helpful [15] and
students need to understand how to apply the assessment criteria [16]. Of course, this assumes that the
methods employed actually have explicit criteria and indeed, this is not always the case: it is not uncommon
for team members to be simply asked to rate each other at the end of a project through some simple metric,
even though this mechanism clearly offers little pedagogic validity. A reliable and valid assessment should
measure against specific targets that are aligned to the intended learning outcomes and course content.
Research into reliability has mostly concerned peer assessment of product rather than of team-member
performance, but the validity of peer assessment can be tested for both types. Langan & Wheater [17] report
a strong correlation between tutor marks and student marks, while others [16] argue that they have not found
sufficient reliability in peer assessment.
The necessity of staff training is also frequently mentioned in the literature as a potential criticism,
particularly in reference to web-based peer assessment systems. Pond et al [9] discuss some possible
drawbacks with peer assessment systems in general but confirm that there are ways to alleviate or remove
these problems. They investigate the potential for group collusion and highlight the extreme subjectivity a
student could bring to marking their friends, as well as the influence of personal dislike. Some have
suggested that peer assessment can have a negative effect on students' personal relationships within a group,
but this problem appears to grow or diminish depending on the detailed methods employed. It is anecdotally
reported that there is very little variation in marks allocated by team members where the method requires
students to sit together and agree 'each other's contribution', because students can be afraid to speak up.
Indeed, some report that this just serves to increase the number of complaints of unfairness after the process;
this characteristic was demonstrated by Willmot and Crawford in a national workshop for engineering
lecturers in 2003 and later reported at ICEE 2004 [18].
Another obstacle, suggested by Falchikov [4], is that peer assessment might be time-consuming for students
and that they would object to this imposition. The time taken for the process is clearly dependent on the
design of the system and is therefore largely in the hands of the course designer. Orsmond et al. [19] believe
that, in comparison to traditional assessment methods, peer assessment can be too demanding of students and
too time-consuming, and that criteria setting can be problematic. Whilst most authors who have reported on
peer assessment note general student acceptance of the methodology, some question whether students have
an appropriate understanding of individual assessment criteria [20].
Literature Summary
It is clear that there are many potential benefits of self and peer assessment to both teachers and students.
The main benefit for tutors is that it can save a huge amount of marking and reduce an ever-growing
workload, without the complaints of unfair grading associated with simple team mark allocation. Students
generally see well-presented peer assessment as a fair way of assessing group work and feel more involved
compared with other assessment methods. Peer assessment is not without its problems, and it is clear that
academics have a number of things to address before running such an assessment. Some of the most
important are: setting the criteria to be used, forming the groups, making adequate provision for handling and
reporting the peer assessment data, and making the whole process transparent to students. Clearly, any
quality automated system should provide assistance to the user at both the setup and reporting stages.
IV. INTRODUCING WEB-PA
Web-PA is an online peer assessment system or, more specifically, a web-based peer-moderated marking
system. It is designed for teams of students doing group work, the outcome of which earns an overall group
mark. Each student in a group grades their team-mates' (and their own) performance, and these grades are
then used, along with the supervisor's overall group mark, to provide each student with an individual grade
reflecting their contribution to the team effort.
It is currently in use in over half the departments across the Loughborough University campus and has been
embedded into the university quality system as the recommended mechanism for group mark moderation. An
open-source variant has now been developed and adopted by a number of other UK universities, including
Hull and Manchester Metropolitan. In May 2008, the project was shortlisted for an IMS Global Learning
Impact Award (Austin, Texas). The software incorporates a number of significant enhancements that help to
integrate good practice being developed locally and nationally, to the benefit of lifelong learning, and builds
upon existing evaluation of assessment practices across a range of subject disciplines. The system remains
under constant development.
Web-PA was developed from an original paper-based peer assessment system with a view to making data
entry and analysis more convenient and providing flexibility for many types of group assessment. Web-PA is
flexible on team size and constitution and allows the tutor to define any number of assessment criteria, or
'form elements', that can be aligned to the learning outcomes of the module or unit. It invites objective
marking statements which guide students as to what performance should be associated with a given mark.
The tutor selects teams directly from the central university database and defines the timeframe within which
the students must enter their data. Students are simply required to visit a terminal between the specified dates
and complete a very simple form using clickable menus. Data entry is confidential: only the entry points for
their own team members appear on screen, and students rate each member in turn, including themselves,
against the stated criteria. The assessment may be applied at the end of a project or at any time during it, and
more than once if required.
Put simply, the system calculates a variation factor for each team member (the Web-PA factor) based on the
total scores received by an individual divided by the normalised average score for the whole team. The tutor
or supervisor marks the team submission in the usual way and this mark, or part of it at the supervisor's
discretion, is multiplied by the factor for each individual. Where all team members score equally, the
Web-PA factor is 1.0, so all members gain the unmodified team mark. After the deadline, the tutor can
retrieve a complete set of data in a variety of customisable formats and still retains the option of intervening
if foul play is suspected.
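As an illustration of this calculation, the following minimal sketch in Python implements the simplified
description above. It is not the production Web-PA code: the function names, the weighting parameter and
the example data are illustrative assumptions.

from statistics import mean

def webpa_factors(scores):
    """scores[assessor][assessee] = mark awarded; self-assessment included.
    Each member's factor is their total received score divided by the
    team-average total, so equal scoring gives every member a factor of 1.0."""
    members = list(scores)
    totals = {m: sum(scores[a][m] for a in members) for m in members}
    team_average = mean(totals.values())
    return {m: totals[m] / team_average for m in members}

def moderated_marks(group_mark, factors, weighting=1.0):
    """Apply each factor to the tutor's group mark. 'weighting' is the fraction
    of the mark the supervisor chooses to moderate (1.0 = moderate all of it)."""
    return {m: group_mark * ((1 - weighting) + weighting * f)
            for m, f in factors.items()}

# Hypothetical three-person team in which 'chris' contributed most:
scores = {
    "alex":   {"alex": 3, "billie": 4, "chris": 5},
    "billie": {"alex": 2, "billie": 4, "chris": 5},
    "chris":  {"alex": 3, "billie": 4, "chris": 4},
}
print(moderated_marks(65.0, webpa_factors(scores)))
# 'alex' falls below the group mark of 65, 'chris' rises above it.

The weighting parameter reflects the supervisor's discretion noted above: with a weighting below 1.0, only
part of the group mark is redistributed by the factor and the remainder is awarded equally.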
This rapidly maturing online tool has been developed over a period of years and more recently, design and
development of the software has been supported by the Engineering Centre for Excellence in Teaching and
Learning (engCETL) at Loughborough University and by JISC. The project site can be found at
www.webpaproject.com where visitors can access a discussion forum and a demonstrator.
V. A MULTI-FACETED RESEARCH METHODOLOGY
The research benefited from unique access to copious good-quality, consistent data captured by the
Web-PA system itself. This facility also gave e-mail links to students, so that the student survey could be sent
out to a large number of students studying a diverse range of degree subjects. In 2005/6 a Higher Education
Academy (HEA) "Small Grants make a difference" fund provided for a number of student focus groups [9].
This produced some high-quality insights into the peer assessment (PA) process from Business School
students and aided both the design of the wider student survey in 2007 and the focus of the quantitative
analysis of the PA data. A further small 'Academic Practice' grant in 2006 provided for further research into
staff and student interactions with Web-PA and informed modification and upgrades of the software
platform. During 2007 the engCETL was awarded £200,000 over three years by JISC to further develop the
system for sharing with the wider academic community.
Staff interviews were used to help form the student survey. Usage of Web-PA was limited to a small
number of Loughborough staff in Semester 1 of 2006/07, so these interviews were themselves limited in that
they only reflected the views of 'champions' and 'early adopters'; repeating the interviews in 2007/8 would
have provided a much wider population. The interviews were used to sharpen the focus of the survey
questions, much as the earlier focus groups had been.
Fig. 1. Research methodology: student focus groups, staff interviews, Web-PA data and web survey.
A student survey was carried out at the end of Semester 2, 2006/07 using the commercial online tool
'SurveyMonkey'. With a £40 book token 'prize-draw' as an inducement, the survey was sent out to 2209
students studying on 36 modules across 14 departments. There was ultimately an overall response rate of
13%, with 284 usable responses. The survey used 27 Likert-scale questions as well as a number of static
data questions such as department, year and gender. The Likert-scale questions focused on the friendliness
of the system, the benefits of peer assessment, the fairness of marking, the students' own feelings and the
extent of collusion at the point of data entry.
The final part of the research was to separately analyse the raw Web-PA data captured by the system
during the second semester of 2006/7. Data was collected from 6 modules across 3 departments, and
included group assignments taken by all undergraduate years: this data reflects 730 student interactions. The
analysis focused on perceptions of fairness and honesty of marking and, again, on collusion within teams.
VI. COMPOSITION AND KEY FINDINGS OF THE STUDENT SURVEY
The majority of the student teams or groups were formed by tutor selection (77%), while a significant
minority (19%) had been formed by the students themselves; it can be assumed that in these cases the
students knew each other before the activity started. The remainder (14%) used the 'seeding' method of
team formation: teams formed by students around a seed member predetermined by the tutor concerned.
Both undergraduate and postgraduate students took part in the survey, with a reasonable spread across all
year groups. Loughborough University has a particularly large engineering faculty, and it was from here that
the peer assessment facility originated, so it is perhaps not surprising that there was a numerical bias towards
males (62%) in the survey. A breakdown of year group and gender of the respondents is given in Figure 2.
Figure 3 demonstrates the breadth of the survey and names all departments that provided at least 5%
(rounded) of the total response.
Fig. 2. Age and gender profiles of the survey respondents.
Fig. 3. Breakdown of respondents by department.
Standard statistical analysis tools have been used to analyse this significant survey. Techniques such as
ANOVA, regression and bivariate analysis have been applied, but such a detailed treatment is beyond the
scope of this paper (the authors hope to publish it elsewhere in due course). There follows a broad discussion
of the key findings.
Using a linear regression model with a stepwise variable-selection method (a sketch of such a procedure is
given after the list below), it was found that the most significant variables (at the 95% level) that helped to
explain the overall positive acceptance of Web-PA were:
• Anonymity of marking
• The opportunity to reward higher achievers: ‘Stars’
• The feeling that Web-PA provided fair marks
• The absence of instances of group collusion – adding to the feeling that the Web-PA system provided
honest marks.
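The stepwise selection referred to above can be illustrated with a minimal sketch, assuming the survey
responses have been coded numerically into a pandas DataFrame. The column names below are hypothetical
and the paper's actual model specification is not reproduced here.

import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, response, candidates, alpha=0.05):
    """Greedy forward selection: at each step, add the candidate predictor
    with the smallest p-value in an OLS fit alongside those already chosen;
    stop when no remaining candidate is significant at 'alpha' (95% level)."""
    selected = []
    while True:
        best_p, best_var = alpha, None
        for var in (v for v in candidates if v not in selected):
            X = sm.add_constant(df[selected + [var]])
            p_value = sm.OLS(df[response], X).fit().pvalues[var]
            if p_value < best_p:
                best_p, best_var = p_value, var
        if best_var is None:
            return selected
        selected.append(best_var)

# Hypothetical usage with Likert responses coded 1-5:
# survey = pd.read_csv("webpa_survey.csv")
# print(forward_stepwise(survey, "overall_acceptance",
#                        ["anonymity", "reward_stars", "fair_marks", "no_collusion"]))

Forward selection of this kind adds predictors greedily, so it identifies a parsimonious set of significant
variables rather than a uniquely best model.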
Using a means comparison technique we sought levels of significance in excess of 95% in the responses
given by students. This showed that in the sample:
• There were no significant differences between the departments represented.
• Final-year students and males, overall, were more discriminating and used the system to identify
'free-riders': they appear more protective of their final grade.
• Females preferred anonymity more than males, but a majority overall preferred this feature as it aided
honest marking.
Delving more deeply into the gender-specific results indicates that men seem more characterised by a
sense of camaraderie: they reported finding it more difficult to give a low mark to their own friends in the
group, even when it was deserved. On the other hand, there is evidence that women students are more
inclined to value the importance of Web-PA for understanding their role within the team.
Through the survey we also delved into the area of feedback, i.e. attitudes to feeding back the peer group's
assessment of a student's work. Clearly this is a sensitive area that needs careful treatment, but it is within
this concept that the frequently heard claim that peer review can develop key skills finds its basis. The
analysis showed that, in general, little difference exists in real terms between the various departments in the
sample; the only significant difference that emerged concerned the possibility of offering other group
members feedback on the mark that a student had given them (an optional routine for this exists within
Web-PA). Postgraduate students of all departments appear keen to share their inter-group feedback, and
undergraduate students of some departments, such as Business School/Economics and Politics and
International Relations, would appreciate such feedback; students from English & Drama and Engineering
would prefer not to disclose or receive peer marks or any indicator of the mark.
VII. QUANTITATIVE (WEB-PA) DATA ANALYSIS
The live data for this section was extracted from the online peer assessment system. Six modules from the
academic year 2006/2007 at Loughborough University were selected to encompass a variety of year groups
and departments.
TABLE I. SUMMARY OF DATA ANALYSIS FOR SIX MODULES

Column key: A = Module; B = Method of group selection; C = Year; D = Number of students; E = Average
team size; F = Non-submission %; G = Zero standard deviations %; H = 'Self' lower than 'Peer' mark %;
I = Teams with >95% of available marks awarded.

A | B                    | C | D   | E    | F     | G     | H     | I
1 | Tutor - random       | 1 | 286 | 6.00 | 7.0%  | 6.4%  | 18.8% | 2.1%
2 | Tutor - alphabetical | 3 | 87  | 3.90 | 1.2%  | 40.0% | 15.1% | 31.8%
3 | Seeding              | 3 | 69  | 4.60 | 4.4%  | 40.9% | 13.6% | 26.7%
4 | Seeding              | 1 | 109 | 5.70 | 4.6%  | 1.9%  | 27.9% | 0.0%
5 | Self-selecting       | 2 | 63  | 3.20 | 12.7% | 60.0% | 18.2% | 42.1%
6 | Seeding              | 2 | 116 | 4.30 | 0.9%  | 2.6%  | 31.3% | 3.7%
Firstly, we consider what 'honest' marking looks like from a data point of view. 'Honest' marking implies
a willingness to discriminate between team members, and we would expect there to be engagement with the
process. So, for 'honest' marking, there will be a reasonable chance of a student marking him/herself lower
than others in the team: 'self-mark < peer-mark' (Column H, Table I). The opposite would be where a
student seriously overestimates his/her own scores.
We would also expect the groups not to give out 100% of the available marks. There should be some
variation in the marks awarded against different criteria, and this is shown by a low percentage of zero
standard deviations (Column G, Table I). A zero standard deviation occurs when an individual awards all
members of the group the same mark, probably applying little genuine thought to the process. Finally, if
there is engagement with the process, there should be few non-submissions (Column F, Table I).
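To make these three indicators concrete, the sketch below computes them from raw peer-mark records. It is
a minimal sketch assuming a simple record layout (one record per assessor, with None for a non-submission),
not the actual Web-PA export format.

from statistics import mean, pstdev

def honesty_indicators(records):
    """records: one dict per assessor, e.g.
    {"assessor": "alex", "marks": {"alex": 3, "billie": 4, "chris": 5}},
    with "marks" set to None where the student never submitted."""
    submitted = [r for r in records if r["marks"] is not None]
    # Column F analogue: engagement with the process implies few non-submissions.
    non_submission = 1 - len(submitted) / len(records)
    # Column G analogue: awarding every member an identical mark gives a zero
    # standard deviation, suggesting little genuine thought.
    zero_sd = mean(pstdev(r["marks"].values()) == 0 for r in submitted)
    # Column H analogue: an 'honest' marker has a reasonable chance of rating
    # their own contribution below the average awarded to their peers.
    def self_below_peers(r):
        peers = [v for k, v in r["marks"].items() if k != r["assessor"]]
        return r["marks"][r["assessor"]] < mean(peers)
    self_lower = mean(self_below_peers(r) for r in submitted)
    return {"non_submission_pct": 100 * non_submission,
            "zero_sd_pct": 100 * zero_sd,
            "self_lower_than_peer_pct": 100 * self_lower}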
The data collected shows that sometimes there is honesty, as defined above, and sometimes not. For all the
groups there is a reasonable chance of a 'self less than peer' score, but there is variation in the other three
measures. The interviews with staff and the survey of student users suggested that the method of group
selection, the year group and the group size all have an effect on 'honesty'.
Comparing module 5 (self-selecting) with module 6 (where the tutor predetermined the seed member), we
see they are both second-year groups of a similar group size. However, for module 5 we see a high
proportion of non-submissions (13%), 60% zero standard deviations, and over 40% of the groups allocating
over 95% of the available marks. This leads us to suspect there is less honesty, i.e. less willingness to
discriminate and engage with the process, in groups that are self-selecting.
Considering modules 1 and 4, which are both from the first year and of similar group size, the key
difference is that in module 1 the tutor allocated teams randomly while module 4 was seeded. Both of these
modules appear to demonstrate 'honest' marking, with the seeded module having a particularly low
percentage of zero standard deviations. Now considering modules 2 and 3 (both third year and of similar
group size), again the key difference is that module 2 is random and module 3 is seeded. These modules also
exhibit similar marking behaviour to each other but would not meet the criteria for 'honest' marking. So we
can conclude there is no apparent difference between marking behaviour for random and seeded groups,
whether that behaviour is honest or not; other factors are clearly having an influence here.
When comparing modules 1 and 2, which differ principally in year of study, the test suggests much less
honesty in module 2 (year 3). Module 3 seems to confirm the suggestion of a loss of honesty in finalists.
However, marking behaviour could be influenced by the fact that, by the third year, the students are likely to
know each other well regardless of how the teams were selected, so it is possible that these groups are
behaving like 'self-selected' groups. Further research would be needed to establish if this is the case.
Of particular interest are modules 4 and 6, which are both seeded, are of similar size, are from years 1 and 2
respectively, and have the same Responsible Examiner. These groups both exhibit 'honest' marking
behaviour with particularly low percentages of zero standard deviations. Module 6 students had experienced
the peer review process in their first year and appear to have confidence in, and a commitment to, the
process, there being only one non-submission. Another explanation might be the style of introduction to the
process that this lecturer uses.
In short, analysis of the peer review data suggests that:
• Self-selecting groups are less discriminating and potentially less 'honest' in their marking.
• Early-years students show more marking 'honesty' than finalists, who show a greater number of zero
standard deviations in marks at group level.
VIII. CONCLUSIONS
There is considerable anecdotal evidence that students undertaking team projects where no rational
measure is taken of the individual’s contribution express concern about the way in which marks are
awarded. As a consequence, the benefits of group work have sometimes been overshadowed by such
concerns especially where students within a team are allocated the same mark.
This work has determined that there is wide interest in peer assessment: the idea is not new, but the
intensity of its use and the support for its pedagogic validity as a system are growing, and the method is very
applicable to a world where recruiters demand graduates with enhanced interpersonal and transferable skills.
The peer assessment method has been applied in a wide variety of formats with varying degrees of success.
The Web-PA self and peer online mark-moderation method described here has met with a very enthusiastic
and rapidly growing following. Whilst much of the data it has generated supports previous literature on this
subject, important new insights are gained into the thoughts of the student participants. After experiencing
Web-PA, there is much support for the fairness that students believe peer review can offer. More specifically,
they comment positively on the qualities of anonymity and the recognition of 'stars' and 'free-riders', and
they point to a perhaps surprising lack of collusion associated with the web-based system. Individual and
group marking behaviours also suggest that most peer review marking is 'honest' but can be influenced by
group size, selection method and the year of study.
Overall, marking is found to be credible and, while free riders are known to mark themselves up, the
overall system appears to compensate for this and generate an acceptable, lower-than-team-average grade.
Final-year undergraduate students seem bound to an individualistic approach to study which is heavily
focussed on maximising their own grade rather than on developing team-working skills or making maximum
use of any developmental benefits. There are detectable differences in the peer review data according to how
the teams were originally formed but, as yet, there is insufficient evidence to offer concrete conclusions,
except to note that self-selecting groups appear to generate a smaller variance in the marks they allocate.
Anonymous marking is strongly preferred by all except postgraduates, and female students in particular
express a desire for anonymity. Females are also more inclined to allocate a larger range of marks. In
addition, the generally more mature postgraduate students show a stronger appreciation of peer assessment
as an educational support tool for developing and refining their own team-working skills.
IX. REFERENCES
[1] Russell, J.S. and Yao, J.T.P., "Consensus! Students Need More Management Education," Journal of Management in Engineering, 12(6), 1996, pp. 17-29.
[2] Boud, D., Cohen, R. and Sampson, J., "Peer learning and assessment," Assessment and Evaluation in Higher Education, 24(4), 1999, pp. 413-426.
[3] Blease, D., "The organisation and assessment of group work at Loughborough University," internal report, Loughborough University, 2006.
[4] Falchikov, N., "Peer feedback marking: Developing Peer Assessment," Innovations in Education and Training International, 32, 1995, pp. 175-187.
[5] Robinson, C.L., "Self and peer assessment in group work," The European Society for Engineering Education: SEFI MWG Seminar, Kongsberg, Norway, June 2006.
[6] Loddington, S., Wilkinson, N., Glass, J. and Willmot, P., "An Examination of Academic and Student Attitudes to the Peer Assessment of Group Work using WebPA in Engineering," Proceedings of the International Conference on Innovation, Good Practice and Research in Engineering Education (ee2008), Loughborough, UK, July 2008.
[7] Wheater, P., Langan, M. and Dunleavy, P.J., "Students assessing student: case studies on peer assessment," Planet: the Journal of the Higher Education Academy Subject Centre for Geography and Earth Sciences, 15, 2005, pp. 13-15. Available online at http://www.gees.ac.uk/planet/p15/p15.pdf
[8] Willmot, P. and Crawford, A.R., "Peer Review of team marks using a web-based tool: an evaluation," Engineering Education: the Journal of the Higher Education Academy Engineering Subject Centre, 2(1), 2007, pp. 59-66, ISSN 1750-0044. Available online at http://www.engsc.ac.uk/journal/index.php/ee
[9] Pond, K., Coates, D.S. and Palermo, O., "Student Experiences of Peer Review Marking of Team Projects," International Journal of Management Education, 6(2), 2007, pp. 30-43, ISSN 1472-8117.
[10] Race, P., "Self, Peer and Group Assessment," LTSN Generic Assessment Series, Briefing Paper, 2001.
[11] Crockett, G. and Peter, V., "Peer assessment in a second year macroeconomics unit," The Higher Education Academy Economics Network, Extended Case Study, 2003. Available online at http://www.economicsnetwork.ac.uk/showcase/crockett_peer.htm
[12] Russell, M., Haritos, G. and Combes, A., "Individualising students' scores using blind and holistic peer assessment," Engineering Education: the Journal of the Higher Education Academy Engineering Subject Centre, 1(1), 2006, pp. 50-59. Available online at http://www.engsc.ac.uk/journal/index.php/ee
[13] Somervell, H., "Issues in assessment, enterprise and higher education: the case for self-, peer and collaborative assessment," Assessment and Evaluation in Higher Education, 18(3), 1993, pp. 221-233.
[14] Hughes, I.E., "But isn't this what you're paid for? The pros and cons of peer and self assessment," Planet (the LTSN Centre for Geography, Earth and Environmental Sciences Bulletin), June 2001, pp. 20-23.
[15] Juwah, C., "Using peer assessment to develop skills and capabilities," The Journal of the United States Distance Learning Association, 17(1), 2003. Available online at http://www.usdla.org/html/journal/JAN03_Issue/article04.html
[16] Cheng, W. and Warren, M., "Having second thoughts: Student perceptions before and after a peer assessment exercise," Studies in Higher Education, 22(2), 1997, pp. 233-239.
[17] Langan, A.M. and Wheater, C.P., "Can students assess students effectively? Some insights into peer-assessment," Learning and Teaching in Action, 2(1), 2003.
[18] Willmot, P. and Crawford, A.R., "Online peer assessed marking of team projects," Proceedings of the International Conference on Engineering Education (ICEE 2004), Paper WA7/4, Gainesville, Florida, USA, October 2004, pp. 1-7.
[19] Orsmond, P., Merry, S. and Reiling, K., "The importance of marking criteria in the use of peer assessment," Assessment and Evaluation in Higher Education, 21, 1996.
[20] Lin, S.S.J., Liu, E.Z. and Yuan, S.M., "Student attitudes toward networked peer assessment: Case studies of undergraduate students and senior high school students," International Journal of Instructional Media, 29(2), 2002.