CHAPTER ONE
INTRODUCTION
Background to the Problem
One of the principal pillars of the Government of Nigeria's vision is to create an
educated and informed nation, using, amongst other means, modern information and
communication technologies (ICT) to provide access to tertiary education and to
increase its quality and relevance in a growing global information era. The goal has
become easier to implement as ICT and the Internet have developed rapidly into a
platform for online course delivery that is relatively affordable and user-friendly. Many
universities are embracing the technology, and those institutions which seem slow in
adopting it may be left behind in the race for globalization, internationalization of
higher education and technological development (Volery and Lord, 2000).
The rapid change in every aspect of our lives influences, directly or indirectly, the
systems that control our knowledge, skills and behaviour. The evolution of our culture
is one of the major indicators of this change. Our educational system has been
influenced by this rapid change over time, and technology is increasingly used in
learning settings. Assessment, as a part of the educational system, is exposed to the
same changes. Using computers to assist assessment tasks has been an interesting
research topic for decades. However, developments have mainly transferred traditional
assessment approaches into computer environments in order to grade students'
assignments automatically, and such assessment approaches have remained limited
(Elliot, 2008). Consequently, the rapid increase in the use of technology in learning
settings also expedites the need for new technology-based assessment. Our lives have
been influenced by a revolution in the field of information technology. As a result,
people's mentality has changed significantly in recent years, pedagogy has been
affected, and educationalists have started redesigning educational systems (Prensky,
2001).
Learning is no longer divided: there is no separation between school education
and workplace experience. This is because students are exposed to computer
knowledge, and the use of Internet facilities to facilitate teaching and learning has
bridged the gap between the school and the world at large. Acquiring knowledge is a
continuous learning process. According to Jegede (2005), learning is a continuous
process over a lifetime; it is a lifelong process. Therefore, a new paradigm for
assessment in lifelong learning is becoming important. Changing education from
memorizing facts to higher levels of comprehension and synthesis requires building
and assessing critical-thinking skills. According to Hayes (1999), measuring knowledge
is important but is not enough.
The term e-assessment is a broadly-based one, covering a range of activities in which
digital technologies are used in assessment. Such activities include the design and
delivery of assessments, marking (by computers, or by humans assisted by scanners
and online tools) and all processes of reporting, storing and transferring the data
associated with public and internal assessments. e-Assessment is the end-to-end
electronic assessment process in which ICT is used for the presentation of assessment
activity and the recording of responses. This covers the end-to-end assessment
process from the perspective of learners, tutors, learning establishments, awarding
bodies and regulators, and the general public (JISC/QCA, 2009).
E-assessment is often seen as providing a partial solution to the problem of
assessing increasing numbers of students amid declining staff-to-student ratios
(Sim, Holifield, and Brown, 2004). A distinction can be drawn between Computer-Based
Assessment (CBA) and Computer-Assisted Assessment (CAA), terms which are often
used interchangeably and somewhat inconsistently. CBA can be understood as the
interaction between the student and the computer during the assessment process: the
test delivery and feedback provision are done by the computer. CAA is more general;
it covers the whole process of assessment, involving test marking, analysis and
reporting (Charman & Elms, 1998). The assessment lifecycle includes the following
tasks: planning, discussion, consensus building, reflection, measuring, analyzing, and
improving based on the data and artifacts gathered about a learning objective (Martell
& Calderon, 2005).
Ayo, Akinyemi, Adebiyi and Ekong (2007) also define e-examination as a
system that involves the conduct of examinations through the web or the intranet.
Though the Wikipedia definition is that of e-assessment, it is related to e-examination:
e-assessment in its broadest sense is the use of information technology for any
assessment-related activity. The origin of e-examination would naturally be traced to
the further deployment of the potentials of the Internet and the intranet. El Emary
and Al Sondos (2006) state that as schools around the world establish connections to
the Internet, and as teachers and students gain proficiency in navigating the vast
quantity of readily available information, the true educational potential of the World
Wide Web can finally begin to be understood. The Web can be a dynamic tool capable
of assisting educators in propelling learning to exciting and competitive levels and of
bringing education to any student, anywhere, at any time.
One of the potentials of the web is the ability to conduct examinations through
electronic means, and many scholars have hinted at the advantages of deploying
e-examination. Awosiyan (2010), quoting Prof. Olu Jegede, the Vice-Chancellor of
NOUN, says that e-examination was introduced to address a series of anomalies
encountered in manual tests. He said that e-examination would remove all the human
errors recorded in manual examinations and create the opportunity for students to
access their results immediately.
A number of studies do not focus on students' perceptions of a specific mode
of assessment but investigate students' perceptions of assessment more generally.
The study of Drew (2001) illustrates students' general perceptions of the value and
purpose of assessment. Within the context of new modes of assessment, the
Northumbria Assessment studies are often referred to. In these studies, different
aspects of students' perceptions of new modes of assessment are elaborated upon,
particularly the consequential validity of alternative assessment and its (perceived)
fairness.
Student perceptions of the fairness of assessment: the issue of fairness, from
the student perspective, is a fundamental aspect of assessment which is often
overlooked or oversimplified by staff. To students, the concept of fairness frequently
embraces more than simply the possibility of cheating: it is an extremely complex and
sophisticated concept which students use to articulate their perceptions of an
assessment mechanism, and it relates closely to our notions of validity. Students
repeatedly expressed the view that traditional assessment is an inaccurate measure of
learning. Many made the point that end-point assessments or evaluations, particularly
examinations which took place on only one day, came down considerably to luck
rather than accurately assessing present performance.
Often, students expressed concern that it was too easy to leave out large
portions of the course material when writing essays or taking exams and still do well
in terms of marks. Many students felt quite unable to exercise any degree of control
within the context of the assessment of their own learning. Normal assessment was
something done to them, rather than something in which they could play an active
role. In some cases, students believed that what exams actually measured was the
quality of their lecturer's notes and handouts. Other reservations that students
blanketed under the banner of 'unfairness' included whether one was fortunate
enough to have had a lot of practice in a particular assessment technique in
comparison with one's peers (Sambell, McDowell & Brown, 1997).
When discussing e-assessment, many students believed that success more
fairly depended on consistent application and hard work, not on a last-minute burst of
effort or sheer luck. Students use the concept of fairness to talk about whether, from
their viewpoint, the e-assessment method in question rewards, that is, looks likely to
attach marks to, the time and effort they have invested in what they perceive to be
meaningful learning. E-assessment was seen as fair because it was perceived as
rewarding those who consistently make the effort to learn rather than those who rely
on cramming or a last-minute effort. In addition, students often claimed that
e-assessment represents a marked improvement, firstly in terms of the quality of the
feedback students expected to receive, and secondly in terms of successfully
communicating staff expectations.
Many felt that openness and clarity were fundamental requirements of a fair
and valid assessment system. There were some concerns about e-assessment, even
though students valued the activity (Sambell, McDowell & Brown, 1997). It can be
concluded that students' perceptions of poor learning, lack of control, and arbitrary
and irrelevant tasks in relation to traditional assessment contrasted sharply with
perceptions of high-quality learning, active student participation, feedback
opportunities and meaningful tasks in relation to e-assessment (Sambell, McDowell &
Brown, 1997).
The University of Ilorin, in its strategic plan, deliberately adopted the use of ICT in
the delivery of its academic programmes, especially at the undergraduate level. The
rationale for the development and integration of educational technologies in
e-assessment at the University was to fast-track two priority areas in the University's
five-year strategic plan: Priority Area One, Expanding Access and Participation, and
Priority Area Two, Enriching Quality Academic Programmes, through the infusion of
ICT in e-assessment. It is envisioned that the use of ICT-based techniques will expand
access and also enrich the quality of academic programmes. To support ICT for
assessment purposes at Unilorin, the executive management of the institution
committed resources to the development and improvement of the learning and
teaching environment.
In the last four years, the adoption of e-assessment at Unilorin seems to have
been encouraging, as shown by the increasing number of academic staff who have
developed e-examinations. There are, however, several issues and questions that need
to be addressed. These include, but are not limited to, students' perception of the
technology; capital and running cost provision; system maintenance and availability;
quality, standards and benchmarking; copyright, archiving and curation of materials;
and rewards for developing online assessment.
The Nigerian educational system has likewise been influenced by this rapid
change, and technology is increasingly used in its learning settings. Assessment, as a
part of the educational system, is exposed to the same changes. Specialists have
taken care to adapt the learning system to cultural changes in society, but
unfortunately they have not focused properly on performance measures and feedback.
Assessment research is trying to keep pace with modern learning settings, but it still
leaves room for interesting and challenging research.
Academic programmes should work on building and assessing students'
critical-thinking skills. In general, assessment has different strategies according to its
purposes. The two basic types of these strategies are formative and summative
assessment. Formative assessment is part of the learning process; it is used to give
feedback to both students and lecturers in order to guide their efforts toward
achieving the goals of the learning process. Summative assessment, on the other
hand, is performed at the end of a specific learning activity and is used to judge
students' progression and also to discriminate between them (Bransford, Brown and
Cocking, 2000). According to Bennett (2002), technology is an essential component of
a modern learning system. As a result, technology is also increasingly needed for the
assessment process to be authentic.
The issue of gender influence on e-assessment perception has been of interest
to researchers (Bebetos and Antonio, 2008; Kadel, 2005). Gender refers to the social
attributes and opportunities associated with being male or female and the mutual
relationships between them. These attributes and relationships are socially
constructed and are learned through socialization processes, while technology
development serves as a forum for exploring the linkage between changing gender
relations and technological development (Ewhrudjakpor, 2006).
Statement of the Problem
While ICT has been effectively applied to almost all human endeavours, such as
banking, health, postal services and, of course, education, where it takes the form of
what is better known as e-assessment, there still remains a very big vacuum to be
filled in many areas, especially in the area of university education in Nigeria (Ayo et
al., 2007). The present information technology means of examining students is the
use of electronic systems in place of the manual or paper method, which was
characterized by massive examination leakages, impersonation, demands for
gratification by teachers, and bribe-taking by supervisors and invigilators of
examinations.
Today, e-assessment is being used in almost every human endeavour:
employers conduct aptitude tests for job seekers through electronic means;
universities and other tertiary institutions register students and conduct electronic
examinations through the Internet, intranets and other electronic and networking
gadgets; and various examination bodies in the country, such as the West African
Examinations Council (WAEC), the National Examinations Council (NECO) and the
National Business and Technical Examinations Board (NABTEB), among others, now
register their candidates through electronic means. Recently, electronic examination
has been widely adopted by nearly all Nigerian universities for the post Unified
Tertiary Matriculation Examination (Post-UTME), otherwise called pre-admission
screening (Olawale and Shafi'i, 2010).
Richard and Thomas (2008) worked on computer-mediated exams: student
perceptions and performance. Their findings reveal that students believed the
traditional pen-and-paper examination enhanced their performance, while the
computerized exam had a negative effect.
Olubiyi (2010) carried out research on the perception of learners of electronic
examination in open and distance learning institutions, a case study of the National
Open University of Nigeria. His findings revealed that the differences in students'
perception lie in the reduction of examination malpractice, the wide coverage of the
scheme of work, students' academic performance, the age factor in the use of IT, and
inadequate facilities.
Tenson (1999) opined that information and communication technology (ICT)
for female users has been cited as an important factor in determining their attitudes
and anxieties towards its usage. Some studies have found gender disparity in ICT
achievement in favour of males (Ajunwa, 2000; Awodeji, 1997, among others), while
others have found none (Anaekwe, 1997; Madu, 2004). Hence, findings on gender
differences in ICT perception are inconclusive.
However, despite the fact that the University of Ilorin has operated
e-assessment examinations since the 2007/2008 academic session, no research has
been carried out to determine whether the platform is generally accepted by students
across faculties and whether students' field of specialization influences their
perception of it. Hence, this study seeks to find out students' perception of
e-assessment in the University of Ilorin, Ilorin, Nigeria.
Purpose of the Study
The purpose of this study is to find out students' perception of e-assessment in the
University of Ilorin. Specifically, the study will seek to find out:
1. The perception of students in the University of Ilorin of e-assessment.
2. The perception of male and female students in the University of Ilorin of e-assessment.
3. The influence of students' area of specialization on their perception of e-assessment.
4. The influence of e-assessment on students' performance in the University of Ilorin.
Research Questions
In this study, answers would be sought to the following research questions:
1. How do students of the University of Ilorin perceive e-assessment?
2. Does students' gender influence their perception of e-assessment?
3. Does students' area of specialization influence their perception of e-assessment?
4. Does e-assessment influence students' performance in the University of Ilorin?
Research Hypotheses
1. There is no significant difference in the perception of male and female students of e-assessment in the University of Ilorin.
2. There is no significant difference in the perception of students across faculties.
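Hypothesis one is of the kind conventionally tested with an independent-samples t-test on perception scores. The following is a minimal sketch of such a test, assuming hypothetical Likert-scale perception scores; the data and variable names are invented for illustration and are not the study's actual data.

# Minimal sketch of testing hypothesis one with an independent-samples
# t-test. The scores are hypothetical Likert-scale values invented for
# illustration; they are not the study's actual data.
from scipy import stats

male_scores = [4.2, 3.8, 4.0, 3.5, 4.5, 3.9, 4.1]    # hypothetical
female_scores = [3.9, 4.1, 3.7, 4.0, 3.6, 4.2, 3.8]  # hypothetical

t_stat, p_value = stats.ttest_ind(male_scores, female_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Retain the null hypothesis at the 0.05 level unless p_value < 0.05.
if p_value < 0.05:
    print("Reject H0: perception differs significantly by gender.")
else:
    print("Retain H0: no significant gender difference in perception.")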
Scope of the Study
This study shall be limited to the perception of e-assessment by students in the
University of Ilorin. Students would be selected from the Humanities and the Sciences.
A total of 500 students from the Humanities and the Sciences shall form the
population for this study. Twenty-five (25) male and female respondents shall be
selected from each faculty for the study, and a simple random sampling technique will
be used to select the respondents (a sketch of the procedure follows below).
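As an illustration of the sampling procedure, simple random sampling can be performed with a random number generator over the faculty rosters. The sketch below makes this concrete under stated assumptions: the rosters and registration numbers are hypothetical placeholders, not actual University of Ilorin records.

# Minimal sketch of simple random sampling of respondents per faculty.
# Rosters and registration numbers are hypothetical placeholders.
import random

rosters = {
    "Humanities": [f"HUM/{i:03d}" for i in range(1, 251)],
    "Science": [f"SCI/{i:03d}" for i in range(1, 251)],
}

random.seed(42)  # fixed seed so the draw is reproducible
sample = {
    faculty: random.sample(students, 25)  # 25 respondents, drawn without replacement
    for faculty, students in rosters.items()
}

for faculty, respondents in sample.items():
    print(faculty, respondents[:5], "...")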
Significance of the Study
The outcome of this study would be of use to lecturers, students, university
management, curriculum developers, school administrators, professional organizations
and researchers. Significantly, this study may create awareness among lecturers in
Nigerian universities of students' perception of e-assessment. The study would also
help universities to find out students' constraints in the use of computer-based tests
or any other form of e-assessment, and to improve their mode of setting questions in
e-assessment so as to measure the course contents. This study would also provide
universities with useful information on the level of ICT literacy of students, thereby
assisting them in the training and retraining of lecturers to help teach these skills to
their students.
The outcome of this study could also be beneficial to various professional bodies
and trade unions, such as the Nigeria Association for Educational Media and
Technology (NAEMT), the Science Teachers Association of Nigeria (STAN), the
Mathematical Association of Nigeria (MAN) and the Academic Staff Union of
Universities (ASUU), among others, in the areas of providing members with the
necessary ICT infrastructure and research facilities, and of organising workshops and
seminars to update their professional standards, especially in the area of
e-assessment. The study would provide curriculum developers with information on the
advantages of e-assessment and enable them to formulate policy that would make
e-assessment a standard way of assessing students.
Finally, the findings of this study could eventually be a source of reference for
all stakeholders in the area of educational processes and products when planning
befitting educational programmes for our nation in the near future. The study may
provide researchers in all areas of study with the opportunity to access empirical
evidence in their quest for further studies on the perception of students and lecturers
of e-assessment in Nigerian universities.
Clarification of Major Terms and Variables
E-Assessment: e-Assessment is the use of computers and computer software to
evaluate skills and knowledge in a certain area. It can range from on-screen testing
systems that automatically mark learners' tests, to electronic portfolios where learners'
work can be stored and marked.
Perception: Perception is the process by which organisms interpret and organize
sensation to produce a meaningful experience of the world.
Computer-Based Assessment (CBA): used in this research work to refer to
assessments delivered and marked by computer.
Computer-Assisted Assessment (CAA): used to refer to practice that relies in part on
computers, for example the use of online discussion forums for peer assessment,
audience response systems in group work, completion and submission of work
electronically, or storage of work in an e-portfolio. However, it should be noted that
these two terms are often viewed as interchangeable.
CHAPTER TWO
REVIEW OF RELATED LITERATURE
The literature review for this study is discussed under the following headings:
1. History, nature and meaning of e-assessment
2. Students' perception of e-assessment
3. Methods of e-assessment
4. Merits and demerits of e-assessment
5. Influence of gender on the perception of e-assessment
6. Unilorin computer-based test mode and authoring style
7. Appraisal of reviewed literature
History, nature and meaning of e-assessment
Assessment is not new to academia, with the roots of the current movement
dating back over two decades (Martell & Calderon, 2005). But two decades hardly take
us back to the origins of educational assessment in the United States. According to
Pearson, Vyas, Sensale and Kim (2001), assessment of student learning has been
gaining and losing popularity for well over 150 years. In K-12 education, assessment
first emerged in America in the 1840s, when an early pioneer of assessment, Horace
Mann, used standardized written examinations to measure learning in Massachusetts
(Pearson et al., 2001).
After losing momentum, the scientific movement of the 1920s propelled the
use of large-scale testing as a means of assessing learning (Audette, 2005). The
1960s saw further support for standardized testing when the National Assessment of
Educational Progress was formed, which produced the Nation's Report Card (Linn,
2002). But perhaps no initiative has had as broad and pervasive an impact as the No
Child Left Behind Act of 2001 (NCLB), which formally ushered in an age of
accountability. The NCLB Act is a sweeping piece of legislation that requires regularly
administered standardized testing to document student performance.
The NCLB Act is based on standards and outcomes, measuring results, and
holding schools accountable for student learning (Audette, 2005). In 2006, Congress
was required to reauthorize the Higher Education Act, and it was predicted that NCLB
would lead to changes in higher education assessment requirements (Ewell & Steen,
2006). Assessment experts point to pioneers of the assessment movement, Alverno
College and Northeast Missouri State University, which have both been committed for
over three decades to outcomes-based instruction. Kruger and Heisser (1987), who
evaluated the Northeast Missouri State University assessment program, found that the
variety of assessments and questionnaires employed, as well as the use of a
longitudinal database that supports multivariate analysis, makes this institution an
exemplar in the effective use of quality assessment to support sound decision-making.
The oldest recognized undergraduate assessment program in the United States
can be found at the University of Wisconsin, which has reported on some form of
student outcomes assessment continuously since 1900 (Urciuoli, 2005). The
assessment movement is not limited to the United States. In the United Kingdom, the
Higher Education Funding Council was established following the Further and Higher
Education Act of 1992, requiring the assessment of the quality of education in funded
institutions. In 2004, the Higher Education Act was passed with the goals of widening
access to higher education and keeping UK institutions competitive in the global
economy (Higher Education Funding Council for England, 2005).
The history did not stop in the Western world but continued in Nigeria, for, as a
worldwide phenomenon, Open and Distance Education (ODE) has also become an
acceptable mode of education in Africa and particularly in Nigeria (Adekanmbi, 2004).
As far back as 1977, the idea of an open university was already reflected in the
National Policy on Education, which states that "maximum efforts will be made to
enable those who can benefit from higher education to be given access to it. Such
access may be through universities or correspondence courses, or open universities,
or part-time and work study programme" (FRN, 1977, p. 6). It was this policy
statement that paved the way for the National Open University (NOU), the forerunner
of NOUN.
After a prolonged debate in the National Assembly, an act establishing the
Open University of Nigeria was passed. The NOU was formally established on 22nd
July, 1983, but before it could take off, the act was suspended via a budgetary
pronouncement made by General Muhammadu Buhari, the then military head of
state, on April 25, 1984, after the military junta took over (Blueprint, 2002). However,
in 2002, another democratically elected government, which had assumed power in
1999, lifted the suspension, and the university took off with the name National Open
University of Nigeria (NOUN). NOUN started with four schools and later added
another, making five to date.
Furthermore, the Nigerian National IT Policy, which was formulated in the year
2000, is responsible for the monumental developments across the various sectors of
the economy. The vision is to make Nigeria an IT-capable country in Africa and a key
player in the information society. Its primary mission is to "use IT" for education,
creation of wealth, poverty eradication, job creation, governance, health, agriculture,
and so on (Ajayi, 2005).
Ayo, Akinyemi, Adebiyi and Ekong (2007) opined that the advent of web
applications in computing technology has brought about a significant revolution in our
social life, including in the traditional system of education and examination. Many
institutions are beginning to re-evaluate their traditional methods and have considered
providing pedagogical materials through the Internet. Ayo et al. (2007) define
e-examination as a system that "involves the conduct of examinations through the
web or the intranet" (p. 126).
The Joint Information Systems Committee (JISC, 2007) defined the term
e-assessment as a broadly-based one, covering a range of activities in which digital
technologies are used in assessment. Such activities include the design and delivery
of assessments, marking (by computers, or by humans assisted by scanners and
tools) and all processes of reporting, storing and transferring the data associated with
public and internal assessments. e-Assessment is the end-to-end electronic
assessment process in which ICT is used for the presentation of assessment activity
and the recording of responses. This covers the end-to-end assessment process from
the perspective of learners, tutors, learning establishments, awarding bodies and
regulators, and the general public (www.jisc.ac.uk).
E-assessment, also called Computer-Based Testing and Assessment or
Computer-Assisted Assessment, means many things to many people and comes under
many different names and titles. It embraces the use of information technology for
any activity which involves the assessment of skills, knowledge, understanding,
competency or aptitude. It is used in formal qualifications, to support learning, to
collect evidence of competency and achievement, in diagnostic testing of learning, and
in many other similar applications.
In the broadest view, it covers virtually all aspects of assessment activity where
the computer is used to deliver a task, or a set of tasks and questions, and then to
collect and store the responses and allow them to be evaluated or marked. This
includes set assignments and coursework as much as the more obvious and common
online test. It could also involve the capture of work originally done on paper, which is
scanned into a computer and then marked by some combination of human and
electronic markers.
Where questions or tasks are delivered to candidates via a computer terminal, this
typically involves some combination of the following six stages (a code sketch of the
full chain follows the list):
1. Develop: author, develop and store questions or tasks in an item bank or repository.
2. Produce: assist the selection of a subset of questions or tasks, and gather them together in an electronic paper or assignment.
3. Deliver: display the computer-stored questions or tasks.
4. Process: collect responses from candidates in a controlled and secure manner.
5. Mark: mark by computer, or support human marking of responses.
6. Feedback: return results to candidates and administration systems.
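To make the flow of these stages concrete, the sketch below chains them into a single pipeline. It is a minimal illustration under simplifying assumptions: the item bank, question format and function names are all invented, and candidate responses are simulated rather than collected from a real terminal.

# Minimal sketch of the six stages chained as a pipeline. All names and
# data are invented for illustration; responses are simulated.
import random

def develop():
    """Stage 1 (develop): store questions in an item bank."""
    return [{"id": i, "text": f"Question {i}?", "answer": "A"} for i in range(1, 21)]

def produce(item_bank, size=5):
    """Stage 2 (produce): select a subset into an electronic paper."""
    return random.sample(item_bank, size)

def deliver(paper):
    """Stage 3 (deliver): display the stored questions."""
    for question in paper:
        print(question["text"])

def collect(paper):
    """Stage 4 (process): collect candidate responses (simulated here)."""
    return {q["id"]: random.choice("ABCD") for q in paper}

def mark(paper, responses):
    """Stage 5 (mark): mark the responses by computer."""
    return sum(1 for q in paper if responses[q["id"]] == q["answer"])

def feedback(paper, score):
    """Stage 6 (feedback): return the result to the candidate."""
    print(f"Score: {score}/{len(paper)}")

bank = develop()
paper = produce(bank)
deliver(paper)
responses = collect(paper)
feedback(paper, mark(paper, responses))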
In fact, it fits into a range of areas of work, among others including:
1. e-Learning, as the method of measuring progress on a course of study;
2. electronic portfolios, as a means of developing and holding coursework or material for a set assignment;
3. computer-based examination administration, to hold registrations, entries and results of qualifications, examinations and tests;
4. EDI and data transfer, as a means of transferring large volumes of data between computer systems in a quick, foolproof and auditable manner; and
5. OMR response/data collection systems.
Computer-based assessment and testing can be used as part of high-stakes
qualifications (such as publicly accredited qualifications) and of low-stakes assessment
within the classroom, the workplace or the home.
Though the definition given by Wikipedia is that of e-assessment, it is related to
e-examination. E-assessment in its broadest sense is the use of information technology
for any assessment-related activity. This definition embraces a wide range of student
activity, ranging from the use of a word processor to on-screen testing. Due to its
obvious similarity to e-learning, the term e-assessment is becoming widely used as a
generic term to describe the use of computers within the assessment process. Specific
types of e-assessment include computerized adaptive testing and computerized
classification testing (Wikipedia, 2011). However, the researcher holds that
e-assessment is the use of computers and computer software to evaluate skills and
knowledge in a certain area, and that it can range from on-screen testing systems that
automatically mark learners' tests to electronic portfolios where learners' work can be
stored and marked.
The nature of e-examination would naturally be traced to the further
deployment of the potentials of the Internet and the intranet. El Emary and Al Sondos
(2006) say that as schools around the world establish connections to the Internet, and
as teachers and students gain proficiency in navigating the vast quantity of readily
available information, the true educational potential of the World Wide Web can finally
begin to be understood. The Web can be a dynamic tool capable of assisting educators
in propelling learning to exciting and competitive levels and of bringing education to
any student, anywhere, at any time (p. 1715).
Students' perception of e-assessment
Improving the quality of the student learning experience is a key issue in the higher
education sector, and it has been widely recognised that e-assessment can contribute
to this. However, it is interesting that whilst much research has been carried out into
perceptions of e-assessment on the part of instructors, e-learning experts and
educational technologists (e.g., Bull & McKenna, 2004; Stephens & Mascia, 1995;
Warburton & Conole, 2003), there is relatively little research into what students think.
Whilst we often make assumptions about what students feel, it would be useful
and interesting to put these to the test and gain some first-hand data from students
themselves. Moreover, the perceptions and opinions of test candidates are always
important because these affect an assessment's face validity (Anastasi, 1982).
The repertoire of assessment methods in use in higher institutions of learning
has expanded considerably in recent years. New modes of assessment have enriched
the 'conventional' evaluation setting, formerly characterized by both the multiple-choice
examination and the traditional evaluation by essay (Sambell et al., 1997).
More recently, portfolios, self and peer assessment, simulations and other innovative
methods have been introduced in higher educational contexts. These concepts make
up the current evaluation context. Students' perceptions of these recent formats of
e-assessment and of the more common multiple-choice and essay examinations
constitute an important part of this review.
Entwistle (1991, 1999) found that it is the student's perception of the learning
environment that determines how he or she learns, not necessarily the educational
context in itself. Reality as experienced by the often forgotten student is an
intervening variable which cannot be neglected if a full understanding of student
learning is the purpose of our educational research and practices. Student learning,
moreover, is related to evaluation practices. This provides the rationale for the primary
focus of the present inquiry into students' perceptions of evaluation practices and
assessment methods in our current learning environments.
If students do not have confidence in a test, it will affect their levels of
engagement and cooperation (Domino & Domino, 2006). A survey was carried out at
the University of Bradford in April and May 2008 to measure student attitudes towards
e-assessment: this consisted of the Student Perceptions of e-Assessment
Questionnaire (SPEAQ), which was delivered to the students who had taken part in
online assessment during the academic year 2007-2008. The survey had several aims:
to identify drivers of and obstacles to the uptake of e-assessment, to test various
widely-held assumptions about student attitudes, to anticipate risks so that they might
be managed more effectively in future, and, more specifically, to inform an e-Learning
Pathfinder Project to establish support systems for e-assessment (Dermo, 2008;
Dermo & Eyre, 2008).
According to Phillip, Wheater, Langan and Dunleavy (2005), the success of
e-assessment depends greatly on how the process is set up and managed. Greater
understanding is needed of the effects of inexperienced markers on assessment, of
the involvement of students in the development of marking criteria, and of how this
affects the final mark. On the other hand, Papinczak, Young and Groves (2007)
implemented the development of student-involved criteria, but students' perceptions
of the e-assessment experience remained quite negative. They suggested that
students may need years of practice in e-assessment in order to become comfortable
with the process.
Methods and Purpose of Assessment
Assessment is one of the most significant areas of an educational system. It
defines what students take to be important, how they spend much of their academic
time and, in many ways, how they value themselves. Rowntree (1987) says that if we
wish to discover the truth about an educational system, we must look to its
assessment procedures. In addition, assessment is important because students cannot
avoid it, and it is now the established mode of judging students' performance. Boud
(1995) says that students can, with difficulty, escape from the effects of poor
teaching, but they cannot (by definition, if they want to graduate) escape the effects
of poor assessment.
Birenbaum and Feldman (1998) examined the relationships between students'
learning patterns and their perceptions of e-assessment characterized by multiple-choice
examinations, among students in higher education. The results reveal two
patterns of relationships between the learning-related variables and the assessment
perception. Dochy, Katrien and Steven (2002) agreed that students with good learning
skills, who have high confidence in their academic ability, tend to prefer the
constructed-response type of assessment over the multiple-choice type; vice versa,
students with poor learning skills, who tend to have low confidence in their academic
ability, prefer the choice type over the constructed-response type of assessment.
The other pattern shows that low test-anxiety measures were related to
positive attitudes towards the open-ended e-assessment (OE) format. Students with
high test anxiety have more unfavourable attitudes towards the OE format and a
preference for the choice-response type, probably because the latter places less
demanding requirements on their information-processing capacity during the testing
situation, where that capacity is occupied by worries and test-irrelevant thoughts (e.g.
Hembree, 1988).
This underlines the significance of getting our assessment practices right for
our students. Rowntree (1987) declared that assessment procedures offer answers to
the following questions: What student qualities and achievements are actively valued
and rewarded by the system? How are its purposes and intentions realised? To what
extent are the hopes and ideals, aims and objectives professed by the system ever
truly perceived, valued and striven for by those who make their way within it?
Assessment has two main purposes. The first is to assist learning: here we
must always strive to make the assessment relevant to the overall goals of the subject
matter and to make our assessment part of the learning process. The second is to
determine the effectiveness of the education system, for only with this can we as
educators improve the education of our students. We must be able to determine not
only the overall learning but also which areas are not effective and need modification.
As tutors we assess for a variety of reasons:
1. To pass or fail a student.
2. To grade or rank a student.
3. To select for future courses.
4. To predict success in future courses.
5. To provide a profile of what a student has learnt.
6. To diagnose students' strengths and weaknesses.
7. To provide feedback to students and improve their learning.
8. To help students to develop their skills of self-assessment.
9. To motivate students to provide feedback to teachers.
10. To evaluate a course's strengths and weaknesses.
We must then question what we are assessing in the first place. A number of
assessment points must be considered, among which are:
1. What do we want to assess? Basic knowledge, skills, higher cognitive skills.
2. For what purpose? Diagnostic, formative, summative.
3. In which mode?
There is a need to be specific. On the part of the students, there is also a need to
ask why, what and how, and to relate these to the objectives of our courses and the
learning outcomes devised for students. One must ask these questions to make sure
that assessment matches educational purposes. Lecturers should then find the most
appropriate assessment method for the set assignment or for the desired learning,
and when considering the assessment tasks, there is a need to consider their
strengths and weaknesses.
It is important to appreciate that students expect to receive much of their
information, whether educational or social, online, and so we should be moving
towards assessing them by congruent means. As student numbers have increased,
and staff contact hours have in many cases decreased, students have asked for
supplementary support. An example of this is given by Clarke et al. (2004): students
had requested additional ways in which to learn and judge their progress during
periods of low contact time with their tutors, especially in the lead-up to examinations.
The INQUIRE evaluation strongly indicated that reinforcing the content of the lectures
through formative assessment can act to cement students' understanding of key
concepts and ideas (Clarke et al., 2004).
Ramsden (1992) says it will be rare to find one assessment method which will
satisfy the assessment of all the intended learning outcomes for a course, so it will be
necessary to consider a range of assessment methods for our students. Weavers
(2003) concurs: diversity decreases dependency on the traditional formal examination,
a method that does not suit the learning styles of many students.
As its name suggests, diagnostic assessment is used to diagnose the level of
learning that has been achieved by our students. It is generally used at the beginning
of course units for lecturers to determine the level at which they should be aiming
their teaching, or to suggest to lecturers (or to students themselves) the level of
support that may be required. Lecturers may use diagnostic assessment at the end of
a lecture, or of a series of lectures, to see if students have comprehended the
information conveyed, and students appear to like this, as it is a way for them to keep
track of their learning.
However, diagnostic assessment does not provide a tool to enhance student learning
unless it has an element of feedback within it, that is, unless it becomes formative.
Assessment that is formative occurs during a course and provides feedback to
students to help them improve their performance. The feedback need not necessarily
be derived only from the lecturers; it can come from students' peers or from external
agents such as clinical tutors or placement supervisors. It is important that the
feedback be given in relation to the criteria against which the work is being assessed.
Involving students in peer assessment aids students in understanding and using the
assessment criteria (Bradford, 2003). Indeed, giving feedback on another student's
work, or being required to determine and defend one's own, not only increases a
student's sense of responsibility and control over the subject matter; it often reveals
the extent of one's misunderstandings more vividly than any other method (Ramsden,
1992).
Assessment that is summative may or may not include feedback. The main
difference between this form of assessment and that which is purely formative is that
grades are awarded. The grade will indicate performance against the standards set for
the assessment task, and can be part either of in-course assessment or of assessment
at the end of a course or module.
Boud (2000) says that assessment activities have to encompass formative
assessment for learning and summative assessment for certification. We should move
away from providing merely summative assessments of our students' learning,
especially when these occur at the end of units of study, because students will not be
able to use them to improve their learning. Summative and formative are not types of
assessment but rather purposes to which assessment is put.
Merits and Demerits of e-Assessment
The characteristics of a good e-assessment technique include strong, well-evaluated
pedagogy as well as support for both staff and students; and of course, online
assessment has all the other advantages of remote access and choice of time and
place of assessment, although the latter may be limited for summative assessments
that require security (JISC, 2008).
When looking to use e-assessment, we find that swift grading is one of its
strongest points. Test feedback can be given on a question-by-question basis, and
with the use of a 'knowledge tracking system' students can follow their progression
and determine their weaknesses (and strengths) for themselves. This aspect of
tracking progression, combined with careful nurturing of student expectations, can
assist in developing students as autonomous learners (JISC, 2008).
Feedback to students is also an issue of quality assurance and quality
enhancement: institutions should ensure that appropriate feedback is provided to
students on assessed work in a way that promotes learning and facilitates
improvement (QAA Code of Practice for the Assurance of Academic Quality in Higher
Education, Section 6, May 2000). The importance of feedback for student learning
cannot be overstated (Gipps, 2003).
Improving formative feedback has been shown to raise standards in
assessment, a conclusion based on a review of over 250 papers from several countries
by Black and Wiliam (1998). They have also shown that the giving of marks can have
a negative effect, as students ignore feedback comments when marks or grades are
given. Clarke et al. (2004) have shown that formative assessment can reinforce the
content of lectures and can 'act to cement students' understanding of key concepts
and ideas' (Clarke et al., 2004, p. 259).
There is a lot of advice given about feedback: that it should be timely to be
effective, that it should provide constructive information to help with learning, that it
should be related to assessment criteria that are clearly understood by the students,
and that it should make explicit to students what is required for high-quality work
(Black et al., 2002; Cowan, 2003; Sadler, 1998). Comments on student work are only
useful as feedback if students can use them to help them improve in similar further
work, and Black et al. (2002) say that, to be effective, feedback should cause thinking
to take place.
Feedback should be provided in a timely manner: the longer the gap between
the assessment performance and the feedback on that assessment, the more students
are likely to treat the feedback as summative, as they have already moved on to new
knowledge and new learning experiences. The information provided to the student
must be of use to them; feedback 'functioned formatively only if the information fed
back to the learner was used by the learner in improving performance' (Black et al.,
2002). Feedback should focus on what needs to be done, providing the motivation
that improvement is possible, rather than focusing on ability, which can damage the
self-esteem of low attainers.
For students to be able to make use of the feedback, they have to be able to
understand and apply the assessment criteria to their work. Once they can do this,
they should be able to start making assessments of their own performance and begin
to manage their own learning. Sadler (1989) argued that assessment criteria do not in
themselves help in judging performance, but that students have to be helped to
interpret the criteria for any piece of assessed work. Involving students in peer
assessment, where they actively engage in using the criteria, is one way of helping
students to understand them and then apply them in their own work.
Analysis of the feedback that staff give to their students can reveal more about
the nature of the assessment task. Black et al. (2002) describe this when talking
about work with teachers in schools: they found that some tasks were useful in
revealing pupils' understandings and misunderstandings, but that others focused
mainly on conveying information. From this analysis, the teachers decided to modify
some of the activities, remove some, and find others which assessed the outcomes
that were intended for their pupils. In the same way, staff in higher and further
education can analyse their feedback comments to evaluate the assessment tasks that
they construct for their students. In addition, they can use an analysis of their
feedback comments to provide information about the teaching that has been
happening, and the evaluation may show areas of misunderstanding across the
student body that require further attention in teaching situations.
One of the potentials of the web is the ability to conduct examinations through
electronic means, and many scholars have hinted at the advantages of the deployment
of e-examination. Awosiyan (2010), quoting Prof. Olu Jegede, the Vice-Chancellor of
NOUN, says that e-assessment was introduced to address a series of anomalies
encountered in manual tests. He said that e-assessment would remove all the human
errors recorded in manual examinations and create the opportunity for students to
access their results immediately: "... With this, we have removed so many hiccups in
the compilation of answer scripts and movement of examination papers from one part
of the country to another. The examination is conducted now through the net." It
would also be difficult for students to carry out any form of examination malpractice.
From the above statement, the following can be said to be the advantages of
e-examination:
1. Removal of human errors involved in the process of examination.
2. The eradication of the compilation and physical movement of examination scripts, especially in the NOUN, where examination scripts had to be exchanged between the 39 study centres to avoid the halo effect and guarantee the quality of the assessment.
3. The eradication of examination malpractice on the part of the students.
There are many further advantages provided by e-assessment, offering benefits to
learner, instructor and administrator alike:
1. Provides more flexibility than pen and paper.
2. Question types can include multimedia material.
3. Document management techniques can be used to organise and store questions and question papers.
4. Marking can be speeded up, and even automated in many situations.
5. Easy and secure distribution of assessment material.
6. Capability of providing instant feedback to the candidate, both for individual questions and for the whole test or task if marked automatically.
7. Automatic links possible to central record-keeping systems.
8. Ability to deliver on-demand formative or even summative tests.
9. New educative and/or motivational experience for the candidate.
10. Provides opportunity for adaptive tests that respond to the candidate's answers (see the sketch after this list).
11. Re-use and cloning of tests.
12. Use of randomisation and individualisation of tests and tasks.
13. Link to e-Learning facilities.
14. Professional feedback to administrators, teachers or trainers.
15. Analysis of responses down to question level.
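Point 10, adaptive testing, is illustrated below. This is a minimal sketch under simplified assumptions: a three-level bank of invented questions and a step-up/step-down difficulty rule; production adaptive engines typically rely on item-response theory rather than this simple rule.

# Minimal sketch of an adaptive test (point 10 above): difficulty steps up
# after a correct answer and down after a wrong one. The question bank is
# invented; real engines use item-response theory.
bank = {
    1: ("2 + 2 = ?", "4"),        # easy
    2: ("12 x 12 = ?", "144"),    # medium
    3: ("17 x 23 = ?", "391"),    # hard
}

def run_adaptive_test(answer_fn, start_level=2, rounds=3):
    """Ask one question per round, adjusting the difficulty level each time."""
    level, score = start_level, 0
    for _ in range(rounds):
        question, correct = bank[level]
        if answer_fn(question) == correct:
            score += 1
            level = min(level + 1, 3)   # step up
        else:
            level = max(level - 1, 1)   # step down
    return score

# Example: a candidate who always answers "144" gets the medium item right
# but misses the hard one, so the test oscillates between levels 2 and 3.
print(run_adaptive_test(lambda q: "144"))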
Though e-assessment is good, it has some demerits which seem to militate
against the smooth running of the new assessment format. E-assessments take time
and money to produce: traditional assessments are quick and simple to produce but
slow and expensive to use, whereas e-assessment shifts the effort to the front of the
process, being time-consuming and expensive to create but quick and simple to use.
The pay-back for e-assessment is long-term, but educational budgets are short- to
medium-term (www.jiscinfonet.ac.uk). E-assessment cannot be used for every type of
assessment: although e-assessment can be used to assess most types of knowledge
and understanding, it has limitations. For example, essay marking is currently not well
done by a computer, and e-assessment is (currently) not very good at measuring
creative skills.
E-assessment requires more support: centres need to be tooled up for online
assessment. The infrastructure for traditional assessment is already in place; the
infrastructure for e-assessment is not. Depending on the specific e-assessment system
in use, centres may require dedicated assessment facilities, and this involves a lot of
money (Wikipedia, 2011).
Influence of gender on students’ perception of e-assessment
Gender refers to the social attributes and opportunities associated with being male
or female and the mutual relationships between them. These attributes and
relationships are socially constructed and are learned through socialization processes,
while technology development serves as a forum for exploring the linkage between
changing gender relations and technological development (Ewhrudjakpor, 2006).
Traditionally, technology is a male sphere, and research has previously shown
that males have a greater interest in technology itself than females, whereas females
mainly want to use the technology (e.g., Durndell et al., 1995; Turkle, 1988). In
addition, Birenbaum and Feldman's (1998) study indicated gender differences, with
males having more favourable attitudes towards the choice-response format than
females. These gender differences were attributed to a personality dimension of
risk-taking, with females being more reluctant than males to guess on multiple-choice
questions and more likely to leave items blank (e.g. Ben-Shakhar & Sinai, 1991).
Tapscott (1997), on the contrary, says that he cannot see any differences
between how males and females use the Internet when he studies what he calls the
Net Generation, or N-Gen. N-Geners are people born after 1977; they have grown up
in the digital age, and he predicts that when the N-Gen takes over, there will at last
be equality between the sexes on the Internet. More recent statistics have shown
that, unlike earlier statistics, females are as frequent Internet users as males (Carlsson
& Facht, 2002). Jackson et al. (2001) got the same result, but in their study women
used e-mail more than men did, and men searched the Web more than women did.
This cannot be seen in Swedish statistics, where the use of these functions is
shared equally between the sexes. An interesting trend in Sweden, though, is that
males to a greater extent have access to the Internet in their homes (Carlsson &
Facht, 2002), and it is more common for boys than for girls to have their own
computer (Sjöberg, 2002). Girls' Internet use, on the other hand, is often highlighted
in the media in connection with men seeking sexual contact with teenage girls, which
increases parents' worries about girls' use of the Internet. This shows that the
question of equality between the sexes concerning computer use is very complex.
The results of a computer competency test which included both theoretical and
practical knowledge (Bain et al., 1999) showed that females were slightly less
competent than males. Jackson et al. (2001) found that females reported more
computer anxiety, less computer self-efficacy, and less favourable and less
stereotypical computer attitudes. It has also been reported that males show a more
positive attitude toward computers than females (Kadijevich, 2000). It seems that
research has found that technology is no longer reserved for males, but that females
react somewhat differently to computers and also have to deal with different
conditions in society regarding this issue.
Unilorin computer-based test mode and authoring style
The study of Olawale and Shafi'i (2010) shows that very few universities in
Nigeria have started using the e-exams system for their tests/exams; these include
the Federal University of Technology, Minna, the University of Ilorin, Covenant
University, Ota, and the National Open University of Nigeria (NOUN), to mention but a
few. Their study revealed that all the universities studied operate in almost the same
way: only NOUN uses the Internet for its e-exams, while the others use intranets set
up within their university environments. In the University of Ilorin, the intranet was
set up in e-exam centres containing 500 computer systems and a server, and the CBT
centre is managed by COMSIT.
Ayo et al. (2007) and Akinsanmi (2010) considered the architecture of the existing system and presented a 3-tier architecture comprising the presentation tier, the logic tier and the database tier. The presentation tier offers an interface to the user; the logic tier serves as the middleware responsible for processing the user's requests; and the database tier serves as the repository of a pool of thousands of questions. The system also contains modules for authentication (using user name/registration number and password) and for computing results.
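To make the division of labour between the tiers concrete, a minimal illustrative sketch in Python is given below. It is not the implementation reviewed by Ayo et al. (2007) or Akinsanmi (2010); every class name, credential and question in it is hypothetical.

class DatabaseTier:
    """Repository tier: holds user credentials and the pool of questions (all hypothetical)."""
    def __init__(self):
        self.users = {"09/55HA001": "secret"}  # registration number -> password
        self.questions = [
            {"text": "2 + 2 = ?", "options": ["3", "4"], "answer": "4"},
            {"text": "Capital of Nigeria?", "options": ["Lagos", "Abuja"], "answer": "Abuja"},
        ]

class LogicTier:
    """Middleware tier: authenticates users, serves questions, computes results."""
    def __init__(self, db):
        self.db = db

    def authenticate(self, reg_no, password):
        return self.db.users.get(reg_no) == password

    def draw_questions(self, n):
        # A production system would draw a random subset from thousands of questions.
        return self.db.questions[:n]

    def score(self, questions, responses):
        correct = sum(q["answer"] == r for q, r in zip(questions, responses))
        return 100.0 * correct / len(questions)

class PresentationTier:
    """User-facing tier: a console stand-in for the CBT examination screen."""
    def __init__(self, logic):
        self.logic = logic

    def take_exam(self, reg_no, password, responses):
        if not self.logic.authenticate(reg_no, password):
            return "Login failed."
        questions = self.logic.draw_questions(len(responses))
        return f"Score: {self.logic.score(questions, responses):.1f}%"

ui = PresentationTier(LogicTier(DatabaseTier()))
print(ui.take_exam("09/55HA001", "secret", ["4", "Abuja"]))  # Score: 100.0%

In the systems described above, the presentation tier would be the examination screen served over the intranet, while the logic and database tiers would reside on the centre's server.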
In preparing e-examination questions, the first step is for the lecturer in charge of the course to submit the questions to the administrator at the centre, via the faculty/school exams officer, some days before the commencement of the actual exams. The second step is for the administrator (often a private operator) to enter the pool of questions into the database. The last step is to set the timing for the exams. The implication is that when examination questions pass through so many hands, they are likely to leak, especially when a private individual is involved (Olawale and Shafi'i, 2010).
Appraisal of the Literature Reviewed
The literature reviewed in this work has provided in-depth knowledge of the various parameters involved in this study, such as the history, nature and meaning of e-assessment; students' perception of e-assessment; methods of e-assessment; merits and demerits of e-assessment; the influence of gender on students' perception of e-assessment; and the Unilorin computer-based test mode and authoring style. This review has contributed tremendously to giving this study a focus.
The literature considered the world history, nature and meaning of e-assessment. Assessment itself is not new to academia: it had been part of our educational system before e-assessment was introduced to enhance the conventional (paper-and-pencil) method, and different definitions of e-assessment given by different authors were examined to form the basis for the study. Scholars whose works were cited include Martell and Calderon (2005); Pearson, Vyas, Sensale and Kim (2001); Pearson (2001); Audette (2005); Linn (2002); Ewell and Steen (2006); FRN (1977); Adekanmbi (2004); and JISC (2007), among others. The review in this section has acquainted the researcher with the history, nature and meaning of e-assessment and will guide the researcher in carrying out the study within its proper scope.
Literature was also reviewed on students' perception of e-assessment. This review focuses on students' view of e-assessment, since if students do not have confidence in a test, their levels of engagement and cooperation will be affected. In support of this point, different authors who had worked on related topics were cited, including Anastasi (1982); Domino and Domino (2006); Sambell et al. (1997); Entwistle (1991, 1999); Bull and McKenna (2004); Stephens and Mascia (1995); and Warburton and Conole (2003). The review in this section provides the researcher with basic information about students' various perceptions of e-assessment and will consequently assist the researcher in carrying out the study adequately.
Literature reviewed on the methods and purposes of assessment critically examined the various ways in which assessment is carried out, such as diagnostic, formative and summative assessment. Works of many researchers who had investigated the methods and purposes of assessment were cited, including Rowntree (1987); Boud (1995); Birenbaum and Feldman (1998); Dochy, Katrien and Steven (2002); Hembree (1988); Clarke et al. (2004); Weavers (2003); and Boud (2000).
Literature reviewed on the merits and demerits of e-assessment critically examined the advantages and disadvantages of e-assessment over the traditional paper-and-pencil method. For instance, Awosiyan (2010), quoting Prof. Olu Jegede, notes that e-assessment was introduced to address a series of anomalies encountered in manual tests, that it would remove the human errors recorded in manual examinations, and that it would create the opportunity for students to access their results immediately. JISCinfonet (2006) opined that, good as e-assessment is, it has some demerits which seem to militate against the smooth running of the new assessment format. E-assessments take time and money to produce: whereas traditional assessments are quick and simple to produce but slow and expensive to use, e-assessment shifts the effort to the front of the process, being time-consuming and expensive to create but quick and simple to use. The pay-back for e-assessment is long term, but educational budgets are short to medium term (www.jiscinfonet.ac.uk). Moreover, e-assessment cannot be used for every type of assessment.
The literature also considered the influence of gender on students' perception of e-assessment. Bebetos and Antonio (2008) and Kadel (2005) liken gender to the social attributes and opportunities associated with being male or female and the mutual relationship between the sexes. Works such as Durndell et al. (1995) and Turkle (1988) indicated that males have a greater interest in technology than females, whereas Tapscott (1997), on the contrary, says that he cannot see any differences between how males and females use the Internet when he studies what he calls the Net generation, or N-Gen: people born after 1977.
E-exams result presentation/checking: in most of the centres visited in this research work, students do not get to see their results immediately after the exams. In some cases, the results may take weeks or even months before they are made available to the students. This violates one of the major objectives of e-exams (instant access to results) and may give room for the alteration of students' results. There is also no provision for users to see the corrections to their tests if they so wish.
CHAPTER THREE
RESEARCH METHODOLOGY
This chapter describes the procedures the researcher would follow in the course of carrying out this study. These are discussed under the following sub-headings: Research Type, Sample and Sampling Techniques, Research Instrument, Validation of Instrument, Procedure for Data Collection and Data Analysis Techniques.
Research Type
This research is of the descriptive survey type. Such research deals with conditions that exist, practices that prevail, and sometimes with how and what is, or what exists, in relation to some preceding events that have influenced or affected a present condition or event. The descriptive survey design is adopted based on its appropriateness for a large population sample. The survey will involve the use of a researcher-designed questionnaire to elicit information on students' perception of e-assessment.
Sample and Sampling Techniques
The target population for this study shall be all undergraduate students of the University of Ilorin. The sample for the study will be randomly drawn from the humanities and the sciences, and the total sample will be three hundred (300) respondents. Fifty (50) students shall be drawn from each faculty using the simple random sampling technique. Of these, twenty-five (25) male and twenty-five (25) female students shall be selected from each faculty to determine the influence of gender on the perception of e-assessment in the University of Ilorin.
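As an illustration only, the sketch below expresses this sampling plan in Python; the faculty names and student rosters are hypothetical stand-ins for the real student records.

import random

def sample_faculty(roster, per_gender=25):
    """Simple random sample of per_gender male and per_gender female students from one faculty."""
    males = [s for s in roster if s["sex"] == "M"]
    females = [s for s in roster if s["sex"] == "F"]
    return random.sample(males, per_gender) + random.sample(females, per_gender)

# Hypothetical rosters: six faculties of 200 students each, evenly split by sex.
faculties = {name: [{"id": f"{name}-{i}", "sex": "M" if i % 2 == 0 else "F"}
                    for i in range(200)]
             for name in ["Arts", "Science", "Education", "Law", "Engineering", "Agriculture"]}

sample = [s for roster in faculties.values() for s in sample_faculty(roster)]
print(len(sample))  # 300 respondents: 50 per faculty, 25 male and 25 female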
Research Instrument
The instrument to be used for this study is a researcher-designed questionnaire on students' perception of e-assessment in the University of Ilorin. The questionnaire is designed in such a way that it provides answers to the research questions raised and the hypotheses which the study seeks to answer and test respectively. The questionnaire will have an instructive section on how the respondents should answer the questions, as well as a section for the respondents' biodata covering sex, age, faculty, department and programme of study.
Validation of Instrument
According to Kerlinger (1973), content validity is the most important property of an instrument. Also, Casley and Lury (1987) have argued that validating an instrument is one of the most important steps a researcher can take when constructing it. In view of the above, care has been taken by the researcher in the process of validating the questionnaire.
After the questionnaire had been drafted, it was taken to four experts for validation: one from the Department of Computer Science, one from the Computer Services and Information Technology Unit (COMSIT), and two from the Department of Science Education, for content and face validity.
Procedure for Data Collection
Data will be collected by direct administration. Firstly, permission will be sought from the University administration. The instrument will then be administered to the respondents and retrieved immediately after the respondents have completed it. If for any reason some copies of the questionnaire are not immediately retrievable, the researcher will return to the faculties concerned to retrieve them.
Data Analysis Techniques
The data obtained would be analysed using frequency counts and percentages to answer the research questions, while the t-test would be used to test the hypotheses.
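A sketch of this analysis plan in Python is shown below; the responses and scores are made-up illustrative values, not findings of the study.

from collections import Counter
from scipy import stats

# Frequency counts and percentages for one questionnaire item (illustrative data).
responses = ["SA", "A", "A", "D", "SA", "SD", "A", "A"]
counts = Counter(responses)
for level in ["SA", "A", "D", "SD"]:
    print(f"{level}: {counts[level]} ({100.0 * counts[level] / len(responses):.1f}%)")

# Independent-samples t-test for the gender hypothesis (illustrative scores).
male_scores = [62, 70, 58, 66, 74]
female_scores = [60, 68, 64, 59, 71]
t, p = stats.ttest_ind(male_scores, female_scores)
print(f"t = {t:.2f}, p = {p:.3f}")  # reject the null hypothesis at the 0.05 level if p < 0.05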
References
Aborisade, Akinwale 2010, "NOUN students grumble about poor academic environment", The Punch, 30 May, p. 8.
Adekanmbi, G 2004, The transformation of distance education in Africa. Viewed 16
July, 2010. http://www.col.org/forum
Ajayi, G. O. (2005): E-Government in Nigeria's e-Strategy. Paper presented at the 5th Annual African Computing and Telecommunications Summit, Abuja, Nigeria.
Anastasi, A. (1982). Psychological testing. London: Macmillan.
Ashcroft, K. and Palacio, D. (1996) Researching into Assessment and Evaluation. London: Kogan Page.
Ashworth, P., & Bannister, P. (1997). Guilty in whose eyes? University students'
perceptions of cheating and plagiarism in academic work and assessment.
Studies in Higher Education, 22 (2), 187-203.
Askar, P., Usluel, Y. K., and Mumcu, F. K. (2006) Logistic regression modeling for predicting task-related ICT.
Awosiyan, Kunle 2010, “Stress and success of NOUN examination”, Nigerian Tribune,
July 1, p. 10.
Ayo, C. K., Akinyemi, I. O., Adebiyi, A. A. & Ekong, U. O. (2007), "The prospects of e-examination implementation in Nigeria", Turkish Online Journal of Distance Education, vol. 8, no. 4, pp. 125-134.
Batane, T. & Mafote, S. (2007). The Impact of WebCT on Learning: A Student's Perspective. The Proceedings of the International Association of Science and Technology for Development (IASTED), Computers and Advanced Technology in Education (CATE), Beijing, China. http://www.actapress.com/Abstract.aspx?paperId=31896.
Bebetos, C. and Antonio, S. (2008). Why use information and communication technology in schools? Some theoretical and practical issues. Journal of Information Technology for Teacher Education, 10(1&2), 7-18.
Ben-Shakhar, G., & Sinai, Y. (1991). Gender differences in multiple-choice tests: the role of differential guessing tendencies. Journal of Educational Measurement, 28, 23-35.
Bennett, R. E. (2002). Inexorable and inevitable: The continuing story of technology
and assessment. Journal of Technology, Learning, and Assessment, 1 (1).
Biesta, G. & Burbules, N. C. (2003). Pragmatism and educational research. Oxford:
Rowman and Littlefield.
Birenbaum, M. (1990). Test anxiety components: comparison of different measures.
Anxiety research, 3, 149-159.
Birenbaum, M. (1996). Assessment 2000: towards a pluralistic approach to assessment. In M. Birenbaum & F. J. R. C. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge. Evaluation in education and human services (pp. 3-29). Boston, MA: Kluwer Academic Publishers.
Birenbaum, M. (1997). Assessment preferences and their relationship to learning
strategies and orientations. Higher Education, 33, 71-84.
Birenbaum, M., & Feldman, R. A. (1998). Relationships between learning patterns and attitudes towards two assessment formats. Educational Research, 40 (1), 90-97.
Birenbaum, M., Tatsuoka, K. K., & Gutvirtz, Y. (1992). Effects of response format on
diagnostic assessment of scholastic achievement. Applied psychological
measurement, 16 (4), 353-363.
Blackboard Inc. Retrieved on 7 July 2008 from www.blackboard.com.
Bloom, B.S. (1956). Taxonomy of educational objectives, Handbook I: the cognitive
domain. David McKay Co Inc., New York.
Boes, W., & Wante, D. (2001). Portfolio: het verhaal van de student in ontwikkeling/
Portfolio: the story of a student in development [Unpublished dissertation].
Katholieke Universiteit Leuven, Department of Educational Sciences.
Boud, D. (1995a). Assessment and learning: contradictory or complementary? In P.
Knight (Ed.) Assessment for Learning in Higher Education. London: Kogan
Page, 35−48.
Braak, J. V. (2001) Factors influencing the use of computer mediated communication
by teachers in secondary
Bradford M., A view from the top. Exchange, 4, Spring 2003: 8.
Bransford, J.D., Brown, A.L., Cocking, R.R. (Eds.) (2000). How People Learn: Brain,
Mind, Experience,and School. Expanded Edition. Washington DC: National
Academies Press.
Bridgeman, S., Goodrich, M.T., Kobourov, S.G., Tamassia, R., (2000). PILOT: An
Interactive Tool for Learning and Grading, In Proc. of SIGCSE 3/00 Austin, TX,
USA, ACM Press, pp. 139-143.
Brosnan, M. (1999). Computer anxiety in students: Should computer−based
assessment be used
Brown S and Knight P 'Assessing Learning in Higher Education' Kogan Page 1994.
Brown, G. and Peterson, N. (2001), E-Learning, the Library Learning Centre-Space &
Strategies Consultative Report to the University of Botswana.
Brown, S., Race, P. (1996). 500 Tips on assessment. Kogan Page, London, UK.
Bryman, A. & Cramer, D. (2001). Quantitative analysis with SPSS release 10 for
windows: a guide for social scientists. London: Routledge.
Bucci, T. T., Copenhaver, LJ, Lehman, B, and O‟Brien, T (2003) Technology
integration: Connections to CBT Publications, Ankara, Turkey (in Turkish).
Bull, J. & McKenna, C. (2001). Blueprint for Computer Assisted Assessment . London:
RoutledgeFalmer.
Bull, J. & McKenna, C. (2004). Blueprint for computer-assisted assessment. London:
Routledge-Falmer.
Carroll, J. (2002) A Handbook for Deterring Plagiarism in Higher Education. Oxford:
Oxford Centre for Staff and Learning Development
Challis, M. (2001). Portfolios and assessment: meeting the challenge. Medical Teacher,
23 (5), 437-440.
Chambers, E. (1992). Workload and the quality of student learning. Studies in Higher Education, 17 (2), 141-154.
Chapman, G. (2005). Drivers and Barriers to the Adoption of Computer Assisted Assessment for UK Awarding Bodies. Proceedings of the 2005 CAA Conference (Ed: Danson, M). Available at http://www.caaconference.com/pastConferences/2005/proceedings/ChapmanG.pdf (Accessed 9/11/06).
Charman, D., Elms, A. (1998). Computer Based Assessment: A guide to good practice, Volume I, University of Plymouth.
Chun, M. (2002). Looking where the light is better: A review of the literature on
assessing higher education quality. Peer Review. Winter/ Spring.
Clariana, R. B. & Wallace, P. E. (2002). Paper−based versus computer−based assessment: key factors associated with the test mode effect. British Journal of Educational Technology, 33 (5), 593−602.
Clarke, S., Lindsay, K., McKenna, C. & New, S. (2004). INQUIRE: a case study in evaluating the potential of online MCQ tests in a discursive subject. ALT−J, Research in Learning Technology, 12(3).
Cohen, L., Manion, L. & Morrison, K. (2003). Research methods in education. London:
Routledge-Falmer.
Conole, G. (2002). The evolving landscape of learning technology research. ALT-J:
Research in Learning Technology, 10, 3, 4–18.
Conole, G. (2003). Understanding enthusiasm and implementation: e-learning research questions and methodological issues. In J. K. Seale (Ed.), Learning technology in transition: from individual enthusiasm to institutional implementation (pp. 129–146). Abingdon: Swets and Zeitlinger.
Cope, C. and Ward, P. (2002) Integrating learning technology into classrooms: The
Technology 106
Crisp, G. (2007). The e-assessment handbook. London: Continuum.
Culwin F.,(1998). Web hosted assessment: possibilities and policy, Proceedings of the
6th annual Conference on the Teaching of Computing/3rd Annual ITiCSE
Conference on Changing the Delivery of Computer Science Education, Pages
55–58.
Dawson, V., Forster, P., and Reid, D. (2006) ICT integration a science education unit
for preservice science
De Corte, E. (1996). Active learning within powerful learning environments/ Actief
leren binnen krachtige leeromgevingen. Impuls, 26 (4), 145-156.
De Vaus, D. A. (2002). Research design in social research. London: Sage.
Demetriatis, S., Barbas, A., Molohides, A., Palaigeorgiou, G., Psillos, D., Vlahavas, I.,
Tsoukalas, I.,(2004)
Demiraslan, Y. and Usluel, Y. K. (2008) ICT integration processes in Turkish schools:
Using activity theory to
Dermo, J. & Eyre, S. (2008). Secure, reliable and effective institution-wide e-assessment: paving the ways for new technologies. In F. Khandia (Ed.), Proceedings of the 12th International CAA Conference (pp. 95–106). Loughborough: University of Loughborough.
Dermo, J. (2007). Benefits and obstacles: factors affecting the uptake of CAA in
undergraduate courses. In F. Khandia (Ed.), Proceedings of 11th International
CAA Conference (pp. 155–162). Loughborough: University of Loughborough.
Dermo, J. (2008). Implementing online assessment: finding the right path for an HE
institution.In A. Ladwa (Ed.), E-learning in HE (pp. 8–9). Leeds: JISC Regional
Support Centre Yorkshire and Humber.
Dillman, D. A. (2007). Mail and internet surveys: the tailored design method.
Hoboken, NJ: Wiley.
Dochy, F., Segers, M., & Buehl, M. M. (1999). The relation between assessment
practices and outcomes of studies: the case of research on prior knowledge.
Review of educational research, 69 (2), 147-188.
Dochy, Katrien & Steven (2002): Students' perceptions about assessment in higher
education: a review Katrien Struyven, K.U.Leuven, Centre for Research on
Teacher and Higher Education, Vesaliusstraat 2, 3000 Leuven, Belgium
Domino, G. & Domino, M. L. (2006). Psychological testing: an introduction.
Cambridge: Cambridge University Press.
Douce C., Livingstone D., and Orwell J., (2005). Automatic test-based assessment of
programming: A review, Journal on Educational Resources in Computing, ACM.
Retrieved on 28 May 2010 from: http://www.scribd.com
Drew, S. (2001). Perceptions of what helps learn and develop in education. Teaching
in Higher Education, 6 (3), 309-331.
Edelstein, R. A., Reid, H. M., Usatine, R., & Wilkes, M. S. (2000). A comparative study
of measures to evaluate medical students' performances. Academic Medicine,
75 (8), 825-833.
Eizenberg, N. (1988). Approaches to learning anatomy: developing a programme for
preclinical medical students. In P. Ramsden (Ed.), Improving learning: new
perspectives. London: Kogan Page.
El Emary, I. M. M. & Abu, J. A. A. 2006, "An online website for tutoring and e-examination of economic course", American Journal of Applied Sciences, vol. 3, no. 2, pp. 1715-1718.
Elliot B., (2008). Assessment 2.0: Modernising assessment in the age of web 2.0.
Scottish Qualifications Authority, Evaluation, 23 (4), 279-298.
Entwistle, N. J. (1991). Approaches to learning and perceptions of the learning
environment. Introduction to the special issue. Higher Education, 22, 201-204.
Entwistle, N. J., & Entwistle, A. (1991). Contrasting forms of understanding for degree
examinations: the student experience and its implications. Higher Education,
22, 205-227.
Entwistle, N. J., & Ramsden, P. (1983). Understanding student learning. London:
Croom Helm.
Entwistle, N. J., & Tait, H. (1990). Approaches to learning, evaluations of teaching,
and preferences for contrasting academic environments. Higher Education, 19,
169-194.
Entwistle, N., & Entwistle, A. (1997). Revision and experience of understanding. In F.
Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning.
Implications for teaching and studying in higher education [second edition] (pp.
146-158). Edinburgh: Scottish Academic Press.
Entwistle, N., & Tait, H. (1995). Approaches to studying and perceptions of the
learning environment across disciplines. New directions for teaching and
learning, 64, 93-103.
Entwistle, N., McCune, V., & Walker, P. (2001). Conceptions, styles, and approaches within higher education: analytical abstractions and everyday experience. In Sternberg and Zhang (Eds.), Perspectives on cognitive, learning and thinking styles (pp. 103-136). NJ: Lawrence Erlbaum Associates.
Federal Ministry of Education 2002, Blueprint and implementation plan for the national
open and distance learning programmes. Abuja.
Federal Republic of Nigeria. 1977, National Policy on Education. Lagos
Flint, N. (2000, December). Culture club. An investigation of organisational culture.
Paper presented at the Annual Meeting of the Australian Association for
Research in Education, Sydney.
Frankfort-Nachmias, C. & Nachmias, D. (1996). Research methods in the social
sciences. London:Edward Arnold.
Franklyn- Stokes, A., & Newstead, S. E. (1995). Undergraduate cheating: who does
what and why? Studies in Higher Education, 20 (2), 159-172.
Friedman Ben- David, M., Davis, M. H., Harden, R. M., Howie, P. W., Ker, J., &
Pippard, M. J. (2001). AMEE Medical Education Guide No. 24: Portfolios as a
method of student assessment. Medical Teacher, 23 (6), 535-551.
Gaytan, J. (2007) Vision Shaping the Future of Online Education: Understanding Its Historical Evolution, Implications, and Assumptions. Accessed 15 June 2007. www.westga.edu/~distance/ojdla/summer102/gatyan102.htm
Giannini-Gachago, D., Molelu, G.B, Uys, P.M. (2005). Analysing the impact of
introducing e-Learning in higher education within developing settings: the case
of the University of Botswana. In Grabowska, A., Cellary, W. (Eds). E-learning:
experiences, cases, projects. Poznan: The Poznan University of economics
publishing house.
Gibbs, G. and Simpson, C. (2004-5). Does your assessment support your students' learning? Learning and Teaching in Higher Education (on-line), 1(1), 3-31. Available at http://www.glos.ac.uk/adu/clt/lathe/issue1/index.cfm (Accessed 9/11/06).
Gill, L. and Dalgarno, B. (2008) Influences on pre-service teachers‟ preparedness to
use ICTs in the classroom.
Gipps, C. (2003) Should universities adopt ICT−based assessment Exchange Spring
2003. 26−27.
Greasley, P. (2008). Quantitative data analysis using SPSS: an introduction for health
and social studies. Maidenhead: Open University Press.
Guetl C. (2007). Moving towards a Fully Automatic Knowledge Assessment Tool,
extended paper of the IMCL 2007 paper, iJET International Journal of
Engineering Technologies in Learning.
Harun, M. H. (2001) Integrating e-Learning into the workplace. Internet and Higher Education, 4 (3&4), 301.
Hayes, B. G. (1999). Where's the data? Is multimedia instruction effective in training counselors? Journal of Technology in Counseling, 1.1 [On-line]. Retrieved on 7 July 2008 from: http://jtc.colstate.edu/vol1_1/multimedia.htm.
Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review
of Educational Research, 58, 47-77.
Holden, H., Ozok, A. A. and Rada, R. (2008) Technology use and perceptions in the
classroom:
Hollingsworth, J. (1960). Automatic graders for programming classes. Commun. ACM,
3(10):528–529
Hounsell, D. (1997a). Contrasting conceptions of essay- writing. In F. Marton, D.
Hounsell, & N. Entwistle (Eds.), The experience of learning. Implications for
teaching and studying in higher education [second edition] (pp. 106-126).
Edinburgh: Scottish Academic Press.
Hounsell, D. (1997b). Understanding teaching and teaching for understanding. In F.
Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning.
Implications for teaching and studying in higher education [second edition] (pp.
238-258). Edinburgh: Scottish Academic Press.
Iyilade, J. S. and Odekunle, W. O. (2005) A Web-Based Student Testing and
Assessment System. Proceedings of the International Conference on Application
of ICT to Teaching, Research, and Administration, AICTTRA, vol. 1 pp. 16 – 24.
Jackson, D. & Usher, M. (1997). Grading student programs using ASSYST. Proceedings of the 28th SIGCSE Technical Symposium, pp. 335–339.
James, R., McInnis, C. and Devlin, M. (2002) Assessing Learning in Australian Universities. Canberra: Australian Universities Teaching Committee.
Janssens, S., Boes, W., & Wante, D. (2001). Portfolio: een instrument voor toetsing en
begeleiding/ Portfolio: an instrument for evaluation and coaching. In F. Dochy,
L. Heylen, & H. Van de Mosselaer (Eds.), Assessment in onderwijs/ Assessment
in Education. Utrecht, Netherlands: LEMMA.
Jedeskog, G. and Nissen, J. (2004) ICT in the classroom: Is doing more important
than knowing?, Education and society
Jegede O. (2005). Towards a New Paradigm for Assessment in Lifelong Learning,
Presented at the International Association for Educational Assessment (IAEA)
Conference held at Abuja from 5-9 September 2005.
Kadel, C. (2005). Innovation in education: the increasing digital world - issue of today and tomorrow. New Jersey: Lawrence Erlbaum Associates.
Kennewell, S. (2001) Using affordances and constraints to evaluate the use of
information and communications
Kerlinger, F. N. (1973). Foundations of behavioural research. New York: Holt, Rinehart and Winston, Inc.
Kniveton, B. H. (1996). Student perceptions of assessment methods. Assessment and
Evaluation in Higher Education, 21 (3), 229-238.
Krathwohl, D.R., Bloom, B.S., and Bertram, B.M., (1973). Taxonomy of educational
objectives,the classification of educational goals. Handbook II: affective
domain. David McKay Co. Inc., New York.
Lei He. (2006). A novel web-based educational assessment system with Bloom‟s
taxonomy.Current Developments in Technology-Assisted Education, Published
by FORMATEX, Badajoz, Spain,Volume III, 1861-1865.
Lomax, R. G. (1996). On becoming assessment literate: an initial look at preservice
teachers' beliefs and practices. Teacher educator, 31 (4), 292-303.
Loveless, A. M. (2003) The interaction between primary teachers‟ perceptions of ICT
and their pedagogy.
M'hammed Abdous and Wu He (2007) Streamlining Forms Management Process in a Distance Learning Unit. Accessed 15 June 2007. www.westga.edu/~distance/ojdla/summer102/gatyan102.htm
Mackenzie, D. (2003) Assessment for E−Learning: What are the Features of an Ideal
E−Assessment System? CAA Conference Proceedings 2003. 185−194.
Mapoka, K., Eyitayo, O. (2005). Staff views on the use of WebCT in teaching a
computer literacy course: a case study of the University of Botswana.
Proceedings of 35th Southern African Computer Lecturers Association (SACLA)
2005. Gaborone, Botswana: Department of Computer Science, University of
Botswana.
Marlin, J. W. Jr. (1987). Student perception of End-of-Course-Evaluations. Journal of
Higher Education, 58 (6), 704-716.
Marsh, C. (1988). Exploring data. An introduction to data analysis for social scientists .
Cambridge: Polity Press.
Martell, K., & Calderon, T. (2005). Assessment of student learning in business schools: What it is, where we are, and where we need to go next. In K. Martell & T. Calderon (Eds.), Assessment of student learning in business schools: Best practices each step of the way (Vol. 1, No. 1, pp. 1-22). Tallahassee, Florida: Association for Institutional Research.
Martin, S. and Vallance, M. (2008) The impact of synchronous inter-networked teacher
training in information May 2004).
Marton, F. (1976). On non- verbatim learning. II. The erosion of a task induced
learning algorithm. Scandinavian Journal of Psychology, 17, 41-48.
Marton, F. (1981). Phenomenography- describing conceptions of the world around us.
Instructional Science, 10, 177-200.
Marton, F., & Säljö, R. (1997). Approaches to learning. In F. Marton, D. Hounsell, & N.
Entwistle (Eds.), The experience of learning. Implications for teaching and
studying in higher education [second edition] (pp. 39-59). Edinburgh: Scottish
Academic Press.
Meyer, D. K., & Tusin, L. F. (1999). Pre-service teachers' perceptions of portfolios:
process versus product. Journal of Teacher Education, 50 (2), 131-139.
Milliken, J. and Barnes, L. P. (2002) Teaching and technology in higher education:
student perceptions and utilization of ICT
Mires, G. J., Friedman Ben- David, M., Preece, P. E., & Smith, B. (2001). Educational
benefits of student self- marking of short- answer questions. Medical Teacher,
23 (5), 462-466.
Mitchell, P. D. (1997). The impact of educational technology: a radical re-appraisal of
research methods. ALT-J: Research in Learning Technology, 5, 1, 49–54.
Molelu, G.B, Uys, P.M. (2003). Development of eLearning at the University of
Botswana: Challenges, Achievements, and Pointers to the Future. eLearning
Conference, MORUO Communications, Fourways, Johannesburg, South Africa.
Muir-Herzig, R. G. (2004) Technology and its impact in the classroom, Computers &
Education, 42(2), 111-131.
Mumtaz, S. (2000) Factors affecting teachers‟ use of Information and Communications
Technology: A review of ICT
Myers R. (1986). Computerized Grading of Freshman Chemistry Laboratory
Experiments, Journal of Chemical Education, Volume 63, Pages 507-509.
Nicol, D. J. and Macfarlane-Dick, D. (2004). Rethinking formative assessment in HE: a theoretical model and seven principles of good feedback practice. Available at http://www.heacademy.ac.uk/resources.asp?process=full_record&section=generic&id=353 (Accessed 9/11/06).
Nicol, D. J., Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Studies in Higher Education, 31(2): 199-218.
Nolen, S. B., & Haladyna, T. (1990). Personal and environmental influences on
students' beliefs about effective study strategies. Contemporary Educational
Psychology, 15 (2), 116-130.
Noyes, J. M., Garland, K. J. & Robbins, E. (2004). Paper−based versus computer−based assessment: is workload another test mode effect? British Journal of Educational Technology, 35(1), 111−113.
O‟Mahony, C. (2003) Getting the information and communications technology formula
right: access + ability.
Olawale and Shafi'i (2010). E-Exams System for Nigerian Universities with Emphasis on Security and Result Integrity.
Oliver Osuagwu (2003): Nigeria's Open University: Expanding Educational Opportunities to Hinterlands through VSAT-based Internet Technology. Conference Proceedings of NCS, vol. 14, pp. 233-234.
Oppenheim, A. N. (2000). Questionnaire design, interviewing and attitude measurement. London: Continuum International.
Orsmond, P., Merry, S., et al. (1997). A study in self- assessment: tutor and students'
perceptions of performance criteria. Assessment and Evaluation in Higher
Education, 22 (4), 357-369.
Osuji S. N (2005): The Mass Media in Distance Education in Nigeria in the 21st
Century, Turkish Online Journal of Distance Education (TOJDE), vol 6,
no 2. Article 5.
Pelgrum, W. J. (2001) Obstacles to the integration of ICT in education: results from a worldwide educational assessment. Educational Technology & Society, 5(1), 67-70.
Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1–15. NCB University Press.
PriceWaterhouseCoopers (2005) Review of e-assessment process models. Qualifications and Curriculum Authority. Available at http://www.qca.org.uk/downloads/PDF-05-1892_reviewprocessmodels.pdf (Accessed 9/11/06).
Proulx, M. and Campbell, B. (1997) The professional practices of faculty and the diffusion of computer
Ramsden, P. (1981). A study of the relationship between student learning and its
academic context [Unpublished Ph.D. thesis]. University of Lancaster.
Ramsden, P. (1997). The context of learning in academic departments. In F. Marton,
D.,Hounsell, & N. Entwistle (Eds.), The experience of learning. Implications for
teaching and studying in higher education [second edition] (pp. 198-217).
Edinburgh: Scottish Academic Press.
Ramsden, P., (1992) Learning to Teach in Higher Education. London: Routledge
Reimann, P. & Zumbach, J. (2003). Supporting virtual learning teams with dynamic feedback. In K. T. Lee & K. Mitchell (eds.), The "Second Wave" of ICT in Education: from Facilitating Teaching and Learning to Engendering Education Reform (pp. 424-430). Hong Kong: AACE.
Reiser, R. A. (2001). A History of Instructional Design and Technology: Part I: A
History of Instructional Design. ETR&D, 2001, Vol. 49, No. 1, 53–64.
Richardson, J. T. E. (1995). Mature students in higher education: II. An investigation
of approaches to studying and academic performance. Studies in Higher
Education, 20 (1), 5-17.
Ricketts, C., and Wilks, S. J. (2002) Improving student performance through
computer−based assessment: insights from recent research. Assessment and
Evaluation in Higher Education. 27 (5) 475−479.
Rickinson, B. (1998). The relationship between undergraduate student counseling and
successful degree completion. Studies in Higher Education, 23 (1), 95-102.
Roblyer, M. D. (2003) Integrating Educational Technology into Teaching (3rd ed.). New Jersey: Merrill Prentice Hall.
Rottmann R. M. and Hudson H. T. (1983). Computer Grading As an Instructional Tool,
Journal of College Science Teaching, Volume 12, Pages 152-156
Rovai, A. P. (2000) 'Online and traditional assessments: What is the difference?' The
Internet and Higher Education 3, 3, 141−151
Säljö, R. (1975). Qualitative differences in learning as a function of the learner's
conception of a task. Gothenburg: Acta Universitatis Gothoburgensis.
Sambell, K., & McDowell, L. (1998). The construction of the hidden curriculum:
messages and meanings in the assessment of student learning. Assessment
and Evaluation in Higher Education, 23 (4), 391-402.
Sambell, K., McDowell, L., & Brown, S. (1997). 'But is it fair?': an exploratory study of
student perceptions of the consequential validity of assessment. Studies in
Educational Evaluation, 23 (4), 349-371.
Sarason, I. G. (1984). Stress, anxiety and cognitive interference : reactions to tests.
Journal of Personality and Social Psychology, 46 (4), 929-938.
Schmelkin, L. P., Spencer, K. J., & Larenberg, L. J. (1997). Students' perceptions of
the weight faculty place on grading criteria. Perceptual and Motor Skills, 84 (3),
1444-1446.
Sclater, N., Boyle, E., Bull, J., Church, C., Craven, P., Cross, R., Danson, M., Halliday, L., Howie, I., Kelly, J. X., Lay, S., Massey, M., McAlpine, M., McDonald, D., MacDonald, M. Service. Proc. of the 2005 CAA Conf.
Segers, M., & Dochy, F. (2001). New assessment forms in Problem- based Learning:
the value- added of the students' perspective. Studies in Higher Education, 26
(3), 327-343.
Shemi, A. P. & Mgaya, K. V. (2003). Evaluation of Academics' views on Information and Communication Technology diffusion in Higher Education: Botswana Perspective. In Muuka, G. N. (ed.), 2003 IAABD Conference Proceedings, University of Westminster, London. ISSN: 0-9700504-0-5.
Sim, G., Holifield, P. and Brown, M. (2004) Implementation of computer assisted
assessment:lessons from the literature. ALT−J, Research in Learning
Technology. 12 (3) 216−229.
Slater, T. F. (1996). Portfolio assessment strategies for grading first- year university
physics students in the USA. Physics Education, 31 (5), 329-333.
Stina, B., Michael, T., Stephen, G. & Roberto, T. (2000) PILOT: An Interactive Tool for Learning and Grading. SIGCSEB: SIGCSE Bulletin.
Task Group on UB and Digital Scholarship (2008). Final report of the Task Group on
UB and Digital Scholarship,University of Botswana, Gaborone, Botswana.
Thomas, P. R., & Bain, J. D. (1984). Contextual dependence of learning approaches:
the effects of assessment. Human Learning, 3, 227-240.
Tinoco, L., Fox, E., and Barnette, D. (1997). Online evaluation in WWW-based
courseware. In Proe. 28th SIGCSE Tech. Syrup., pp. 194-198.
Toots, A and Idnurm, T. (2001) Tiger under Magnifying Glass: Study on ICT in
Estonian Schools in 2000. TÜSİAD-T/99-2/252, Istanbul, Turkey (in Turkish).
Traub, R. E., & MacRury, K. (1990). Multiple choice vs. free response in the testing of scholastic achievement. In K. Ingenkamp & R. S. Jager (Eds.), Tests und Trends 8: Jahrbuch der Pädagogischen Diagnostik (pp. 128-159). Weinheim und Basel: Beltz Verlag.
Treatwell, I., & Grobler, S. (2001). Students' perceptions on skills training in
simulation. Medical Teacher, 23 (5), 476-482.
Trigwell, K., & Prosser, M. (1991). Improving the quality of student learning: the
influence of learning context and student approaches to learning on learning
outcomes. Higher Education, 22, 251-266.
Usluel, Y. K., Mumcu, F. K., and Demiraslan Y. (2007) ICT in the learning and teaching
process: Teachers‟views on the integration and obstacles, Hacettepe University
Journal of Faculty of Education, 32, 164-179
Uys, P M. Nleya, P. and Molelu, G.B. (2004). Technological Innovation and
Management Strategies for Higher Education in Africa: Harmonizing Reality and
Idealism. International Council for Educational Media, EMI 41:1, pp67-80.
http://www.globe-online.com/philip.uys/200303uystechnologicalinnovation.pdf
(accessed on 16 Dec 2010)
Uziak, J. (2008), Acceptance of Blackboard Technology by Engineering Students.
Proceeding Conference on UB and Digital Scholarship.
Van der Merwe, M., Giannini-Gachago, D. (2005). Imparting lifelong learning skills through e-learning in undergraduate students – Experiences at the University of Botswana. Proceedings of the conference "What a Difference a Pedagogy Makes". Stirling, Scotland: The Centre for Research in Lifelong Learning.
Van Rossum, E. J., & Schenk, S. M. (1984). The relationship between learning
conception, study strategy and learning outcome. British Journal of Educational
Psychology, 54 (1), 73-83.
Volery, T. & Lord, D. (2000), Critical success factors in online education. The International Journal of Educational Management, 14(5), 216-223. http://www.essaybay.com/articles/linguistics4.pdf (accessed on 21 Dec 2010)
Warburton, W. (2006). Quick win or slow burn? Modelling UK HE CAA uptake. Proc. of the 2006 CAA Conference. Available from http://www.caaconference.com/pastConferences/2006/proceedings/index.asp (Accessed 9/02/11).
Warburton, W. and Conole, G. (2005). Wither e-assessment. Proceedings of the 2005 CAA Conference (Editor: Danson, M). Available online from http://www.caaconference.com/pastConferences/2005/index.asp (Accessed 9/11/11).
Weavers, C. (2003) What do students want from assessment? Exchange, Issue 4, Spring 2003: 12−13. http://www.exchange.ac.uk/files/eissue4.pdf
Weller, M. (2002). Assessment Issues on a Web−based Course. Assessment & Evaluation in Higher Education, 27(2), p. 109, 8p.
Whitelock, D. and Brasher, A. (2006). Roadmap for e-assessment. Joint Information Systems Committee Report, June 2006. Available from http://www.jisc.ac.uk/elp_assessment.html (Accessed 9/11/06).
Wikipedia http://en.wikipedia.org/wiki/Blended_learning (accessed on 18 Dec 2010)
Woolley, D. R. (1994). PLATO: The emergence of on-line community. Computer-Mediated Communication Magazine, 1(3), 5.
Yuan, Z., Zhang, L. & Zhan, G. (2003). A novel web-based examination system for computer science education. 33rd ASEE/IEEE Frontiers in Education Conference, S3F-7 – S3F-10.
Zeidner, M. (1987). Essay versus multiple-choice type classroom exams: the student's
perspective. Journal of Educational Research, 80 (6), 352-358.
Zoller, U., & Ben-Chaim, D. (1988). Interaction between examination-type anxiety
state and academic achievement in college science: an action- oriented
research. Journal of Research in Science Teaching, 26 (1), 65-77.
APPENDIX 1
QUESTIONNAIRE ON STUDENTS' PERCEPTION OF E-ASSESSMENT IN UNIVERSITY OF ILORIN, ILORIN, NIGERIA.
Dear respondent, this questionnaire is being administered to you in respect of research being conducted to find out students' perception of e-assessment/computer-based test (CBT). Kindly complete the questionnaire with absolute honesty, bearing in mind that your responses are strictly for research purposes and will be treated with absolute confidentiality.
SECTION A
Name of your Faculty..................................................................................................................
Level............................................................................................................................................
Sex: Male ( ) Female ( )
Your subject area of specialization ............................................................................................
SECTION B
Please tick ( ) the column that best suits your level of Information and Communication Technology (ICT) literacy. The response modes for this section are: Strongly Agree (SA), Agree (A), Disagree (D) and Strongly Disagree (SD).
How do students of the University of Ilorin perceive e-assessment?

S/N  Items (tick one: SA, A, D or SD)
1.   I prefer registering my courses online to manual registration.
2.   I prefer writing all examinations online (CBT), as it is a more modern approach than traditional paper and pencil.
3.   I spend less time when doing a CBT examination.
4.   I am more comfortable taking CBT exams than paper-and-pencil ones.
5.   CBT does not allow me to express my mind.
Does students' gender influence their perception of e-assessment?

S/N  Items (tick one: SA, A, D or SD)
6.   Using a computer adds to the stress of exams.
7.   I would feel more comfortable if the exam was on paper, not CBT.
8.   I'd rather do exams on a computer than on paper, because I am used to working online.
9.   I find it hard to concentrate on the questions when doing CBT exams.
10.  I expect computers to be used as part of assessment at university.
Does students' area of specialization influence their perception of e-assessment?

S/N  Items (tick one: SA, A, D or SD)
11.  CBT examination is appropriate for my subject area.
12.  Because you can guess the answer, online multiple choice questions don't really reflect your level of knowledge.
13.  The online test environment is appropriate for test taking and convenient.
14.  Using a computer makes examination writing easier for me.
15.  My subject area is too complex to be dealt with by online multiple choice questions.
Does e-assessment influence students' performance in the University of Ilorin?

S/N  Items (tick one: SA, A, D or SD)
16.  Marking is more accurate, because computers don't suffer from human error.
17.  CBT exams cover all aspects of the course, hence I read widely to pass.
18.  Online assessments favour some students more than others.
19.  Randomised questions drawn from a bank mean that sometimes you get easier questions.
20.  I am confident that my grades for online assessments are secure.