Question Preparation, Validation


SEMINAR

ON
QUESTION BANK
PREPARATION,
VALIDATION AND
MODERATION BY
PANEL
SUBMITTED TO:
PROF. K. CHANDRALEKHA
DEPT. OF MEDICAL SURGICAL NURSING
ICON
SUBMITTED BY:
MR. P. NEELAKANDAN
M.SC. NURSING I YEAR
ICON.

SUBMITTED ON:
QUESTION BANK - PREPARATION, VALIDATION AND MODERATION BY PANEL
INTRODUCTION:
The question bank makes available statistically sound questions of known technical worth, together with model question papers, and thus facilitates the selection of proper questions for a well-designed question paper. A large number of questions can be created and pooled, from which the faculty can select appropriate questions based on the test blueprint. Test banks are also created easily by using a word-processing program to store test items and make item revision easier. All the questions are vetted by course team members for quality and then indexed and banked. These questions, when pooled, are called a question bank.
The advantage of a question bank is that questions which work well in practice can be reused in a number of later situations. Thus, new questions do not have to be generated at the same rate from year to year, and the quality of the questions gradually improves. The question bank is thus a planned library of test items, pooled through cooperative effort under the aegis of an institution, for the use of evaluators, academics and students in the teaching-learning process.
DEFINITION:
“A relatively large collection of easily accessible test questions.”
The question bank may be defined as a kind of reservoir of a number of sets of questions on each subject in which an examination is to be held, from which a set for any particular examination can be picked out at random and at short notice and sent to the press.
It is a planned library of test items designed to fulfil certain predetermined purposes. It should cover the entire prescribed text.
PRINCIPLES:
 Bank planning: analysis of subject matter and content
 Collection of test items: from teachers and item writers specially trained for the purpose, and from past examination papers
 Try-out and item analysis
 Using item analysis data
 Banking selected items
 Administering sample tests

PURPOSES OF QUESTION BANK:


 To improve the teaching-learning process.
 To promote pupils' growth through instructional effort.
 To improve the evaluation process.
 A test tool can be used for formative and summative evaluation of pupils.
 It is a pool of ready-made quality questions made available to teachers and examiners, so that they may select appropriate questions to assess predetermined objectives.
PLANNING A QUESTION BANK:
Planning for a question bank involves the following:
 Defining processes for preparation of individuals who develop question bank.
 Preparatory work for the question bank.
 Identifying what has to be established with the question bank.
The two major objectives of planning a question bank can be:
 To increase the value of the measurement
 To increase the pedagogical value of evaluation.
PREPARATION OF QUESTION BANK:
 Spend an adequate amount of time developing the questions
 Match the questions to the content taught
 Try to make the questions valid, reliable and balanced
 Use a variety of testing methods
 Write questions that test skills other than recall:
 To measure knowledge
 To measure comprehension
 To measure application
 To measure analysis
 To measure synthesis
 To measure evaluation
PREPARING THE QUESTION CARDS
 It is the easiest method, and an institution can prepare the cards on its own. The finalized questions may be written on cards using various colours for essay, short-answer and objective-type questions, and also for various units, for easy identification.
 These cards can then be properly arranged with details of the topic, estimated difficulty level, subject specification, estimated time limit, etc.
DEVELOPMENT OF QUESTION BANK:
The questions in the question bank may range over written examination questions, oral examination questions and practical examination questions, or questions of all three types.
BLUEPRINTING FOR DEVELOPING QUESTION BANK:
The blueprint can be prepared for:
 The behavior/objective aspect.
 The content/subject area aspect.
The objective aspect refers to the expected learning outcomes in terms of abilities like
recall, recognition, translation, extrapolation, application, analysis, synthesis, evaluation and
any other abilities.
The content aspect includes the unit and the subunit. A good question pool will be one that contains questions on all the topics in a subject, testing all abilities. A mere collection of a large number of questions will not constitute a question pool of quality unless all such items fit into a predetermined structure.
Questions may be collected from old question papers set in various examinations, from standardised tests, and from experienced teachers, examiners and paper setters.
They can also be collected from practising teachers.
Preparation of question banks requires a great deal of cooperative effort. Expertise has to be tapped from all available sources (from within and outside the university) and pooled together.
Writers and reviewers of questions for the bank should have, besides their expertise in the subject content and teaching experience, sufficient grounding in evaluation methodology.
Even persons selected to act as paper setters, moderators or evaluators should have not only the prescribed experience of teaching the subject, but also an adequate background in modern evaluation methods.
SCREENING OF QUESTIONS:
After the questions are written, they can be passed among the members of the group for their comments. The comments are then passed on to the question paper setter or the author of the questions for corrections. Finally, the questions can be passed on to a committee or team to review and finalize them.
ITEM REVIEW:
Review, editing and revalidation of items submitted by the item writers should be done in the presence of the item writers, under the guidance of content and evaluation specialists. Generally, the optimum target number of items per course is taken as 10 times the total number of items in a question paper. There is no upper limit to question bank size. Each item deposited in the question bank for a particular course must record:
 The section/chapter and unit number of the book
 The type of item and its subtype
 The estimated level of difficulty from the point of view of the average learner
 The maximum marks
 The time (in minutes) required to answer the question
 The marking scheme for supply-type questions and the ‘key’ answer for selection-type questions
 The level of educational objective that the item is intended to test (in the taxonomy hierarchy).
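The metadata fields above can be kept with each banked item in a simple record. The following is an illustrative sketch only; the field names and types are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class QuestionItem:
    """One banked question with the metadata fields listed above."""
    text: str               # the question itself
    chapter: str            # section/chapter and unit number of the book
    item_type: str          # e.g. "essay", "short answer", "multiple choice"
    subtype: str            # e.g. "single best response"
    difficulty: float       # estimated difficulty for the average learner (0 to 1)
    max_marks: int
    time_minutes: int       # estimated time required to answer
    key_or_scheme: str      # marking scheme (supply type) or key answer (selection type)
    objective_level: str    # taxonomy level the item is intended to test
```

Recording these fields on every item is what later makes blueprint-driven selection and difficulty filtering possible.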
Items available for different courses should pass through the revalidation process, particularly content and evaluation editing, at the time of their use for paper setting.
Post-validation of questions used in evaluation involves determining descriptive statistics such as the mean, standard deviation, marks distribution and standard error of measurement of the marks distribution for the student population (sample) in the examination of each course, together with item analysis.
Item analysis is the statistical study of the group's performance on each question as against its performance on the question paper as a whole.
This provides the necessary feedback for future improvement of the question bank, and can be achieved with the help of a computer using standard methods of analysis.
Descriptive statistics from the test analysis are extremely useful in making decisions such as pass/fail, grace marks in borderline cases, grading and so on. The results of the analysis are essential for improvement of question banks, for deciding on the reuse of ‘good’ questions in future examinations, and for the improvement or rejection of poorly functioning questions.
PROCESS OF ITEM ANALYSIS:
Item analysis indicates which items may be too easy or too difficult and which may fail to discriminate between the better and poorer examinees. It also suggests why an item has not functioned effectively and how it might be improved. A test is more reliable if item analysis has been carried out. Item analysis consists of the following steps:
Step 1: conduct a test on the items prepared.
Step 2: evaluate the answer sheets objectively.
Step 3: arrange the answer sheets in descending order of the scores obtained.
Step 4: identify an upper group and a lower group. The upper group is the highest-scoring 25% and the lower group is the lowest-scoring 25%.
Step 5: for each item, count the number of examinees in the upper group who have chosen each response alternative. Do the same for the lower group.
Step 6: calculate the item's difficulty (called the p-value).
Step 7: calculate the index of discrimination (D value).
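Steps 6 and 7 can be sketched as follows; the counts in the example are hypothetical:

```python
def item_statistics(upper_correct, lower_correct, group_size):
    """Compute item difficulty (p) and discrimination (D) from the
    upper and lower 25% groups identified in step 4.

    upper_correct, lower_correct: examinees in each group answering correctly.
    group_size: number of examinees per group (the two groups are equal).
    """
    p = (upper_correct + lower_correct) / (2 * group_size)  # difficulty index
    d = (upper_correct - lower_correct) / group_size        # discrimination index
    return p, d

# Example: 20 examinees per group; 16 of the upper group and 8 of the
# lower group answered the item correctly.
p, d = item_statistics(16, 8, 20)
print(p, d)  # 0.6 0.4
```

A p near 0.5 indicates moderate difficulty, while a D of 0.4 would fall in the "good items" band discussed below.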
The discrimination index can be used to select the best items (i.e. the most highly discriminating) for inclusion in an improved version of the test. Analysis of a wide variety of classroom tests suggests that the indices of item discrimination can be evaluated in the following manner:
If the index is greater than or equal to 0.40, the items are good items.
If the index is between 0.30 and 0.39, the items are good but possibly need improvement.
If the index is between 0.20 and 0.29, the items are marginal and need improvement.
Below 0.19, the items are poor and liable for rejection. Hence it becomes clear that a test with a higher average index of item discrimination will always be more reliable. The relationship between the sum of the indices of discrimination for the items of a test and the variance of the scores on the test is expressed in a formula by Ebel.
Sx² = (∑D)² / 6
where D is the discrimination index and Sx is the standard deviation of the scores.
Hence, one can say that score variance is directly proportional to the square of the sum of the discrimination indices, (∑D)². The larger the score variance for a given number of items, the higher the reliability of the scores. The formula also indicates that the greater the average value of the discrimination indices, the higher the test reliability is likely to be.
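Ebel's relation can be applied directly to a set of item discrimination indices; the D values below are invented for illustration:

```python
def ebel_score_variance(d_values):
    """Estimate the variance of test scores from the item discrimination
    indices, using Ebel's relation Sx^2 = (sum of D)^2 / 6."""
    return sum(d_values) ** 2 / 6

# A hypothetical 10-item test: larger D values give a larger predicted
# score variance, and hence a higher likely reliability.
d_values = [0.4, 0.35, 0.5, 0.3, 0.45, 0.4, 0.25, 0.35, 0.5, 0.4]
variance = ebel_score_variance(d_values)
print(round(variance, 3))  # 2.535
```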
The possibility of preparing valid, reliable and useful questions is greatly enhanced if a few basic steps are followed. They are:
 Determining the purpose of testing
 Developing the test specification
 Selecting appropriate questions
 Assembling the test
 Using the results.
FILLING AND STORAGE OF QUESTIONS
Questions can be stored in a file cabinet kept under lock. It is good to have a duplicate: the duplicate can be issued to teachers or testers, whereas the original is kept intact for official use and reference.
REVIEW AND REMOVAL OF UNWANTED QUESTIONS
A question bank may become a store of outdated material after some years if it is not evaluated at regular intervals. Enrichment of the questions by updating, replacing, discarding, modifying, adding new questions, regrouping and reclassifying should be an ongoing process to keep the question bank dynamic.
Computer expertise is an essential requirement of question banking. One should be capable of modifying computer programmes, establishing a database system and running packaged programs. For planning a question bank, the evaluation pattern of the programme has to be specified.
COMPUTERIZED QUESTION BANKS:
One essential activity for the “on demand” examination system is the preparation of question banks. The difficulty of a test is determined by the difficulty of the items that comprise it, so item analysis is a good exercise. As long as differences in student learning exist, and as long as the purpose of testing is to identify such differences, the distribution of test scores should exhibit high variability.
The larger the standard deviation of the scores, the more successful the test constructor has been in capturing individual differences in achievement. The reliability of the scores of a group is an important measure of the quality of the scores.
All these characteristics are important to consider in evaluating the quality of a test, and evaluating each can provide clues as to how the test items might be revised and improved for further use.
Computerized question banks are very useful in test development. Items are classified according to relative difficulty. Once items are inserted in the question bank, new sets of question papers can be made with known or desired characteristics. The effects of including or excluding particular items can also be predicted. A question bank can store as many questions as needed, so that generation of randomized tests is done without any difficulty.
Question banking thus provides a substantial saving of time and energy over conventional test development. In a conventional setup, questions can only be described relative to the other items within the test and the students who took it, whereas question bank items are not test-specific: questions are described by their relative difficulty across grade levels, and drawing them from the question bank allows one to make fairly accurate predictions about composite test characteristics. A question bank also provides a platform for discussing curriculum goals and objectives. The items put in the question bank can be annotated with properties such as common mistakes made by students and their capabilities and incapabilities. This provides a way to discuss possible learning hierarchies and ways to better structure the curriculum.
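The randomized-test generation described above can be sketched as a simple draw from a classified pool. The item schema (a topic and a difficulty band per item) is an assumption for illustration:

```python
import random

def generate_paper(bank, blueprint, seed=None):
    """Draw a randomized question paper from a classified question bank.

    bank: list of dicts, each with "topic" and "band" (difficulty band) keys.
    blueprint: mapping of (topic, band) -> number of items required.
    """
    rng = random.Random(seed)  # a seed makes the draw reproducible
    paper = []
    for (topic, band), count in blueprint.items():
        pool = [q for q in bank if q["topic"] == topic and q["band"] == band]
        if len(pool) < count:
            raise ValueError(f"bank has too few {band} items on {topic}")
        paper.extend(rng.sample(pool, count))  # sample without replacement
    return paper

# Tiny illustrative bank and blueprint.
bank = [
    {"id": 1, "topic": "shock", "band": "easy"},
    {"id": 2, "topic": "shock", "band": "easy"},
    {"id": 3, "topic": "shock", "band": "hard"},
    {"id": 4, "topic": "burns", "band": "easy"},
]
paper = generate_paper(bank, {("shock", "easy"): 1, ("shock", "hard"): 1}, seed=1)
print(len(paper))  # 2
```

Because each item carries its classification, the same bank can serve many blueprints without re-describing the questions.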
TEST:
PURPOSE:
Before instruction:
 To determine readiness
 To place the candidate or categorize
 To assess existing knowledge.
During instruction:
 To assess learning
 To use as diagnostic tool.
After instruction:
 To assess the learning outcome
 To assess the level of mastery
 To grade.
General purpose:
 To direct, stimulate, motivate
 To assess teaching effectiveness.
TYPES OF TESTS:
Criterion referenced tests:
Criterion-referenced tests are those constructed and interpreted according to a specific set of learning outcomes. They are useful for measuring mastery of subject matter.
Norm referenced tests:
Norm-referenced tests are those constructed and interpreted to provide a relative ranking of students. This type of test is useful for measuring differences in performance among students.
Table of specification:
It is also called a test map, test grid or test blueprint. The purpose of developing a table of specification is to ensure that the test serves its intended purpose.
Step I. Define the specific learning outcomes to be measured. They can be derived from course and unit objectives and are written as statements specifying what the student should be able to do on completion of instruction. Bloom's taxonomy is used as a guide for developing and levelling general instructional and specific learning outcomes.
Step II. Determine the instructional content to be evaluated and the weightage to be assigned to each area. This is done by developing a content outline and using the amount of time spent teaching the material as an indicator for weighting. The allocation is calculated using the formula below.
Number of items per section = percentage of teaching time × total number of items.
The percentage of teaching time for each content area is calculated from the course plan. The total number of items is planned by the teacher.
A two-way grid is developed, with content areas listed down the left side and learning outcomes listed across the top. Each cell is assigned a number of questions based on the weighting of content and the cognitive level of the learning outcomes.
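The weighting formula above can be computed directly; the content areas, teaching times and test size below are hypothetical:

```python
def items_per_section(teaching_time_pct, total_items):
    """Number of items per section = percentage of teaching time x total items."""
    return round(teaching_time_pct / 100 * total_items)

# Hypothetical 50-item test over three content areas, weighted by the
# percentage of course time spent teaching each area.
teaching_time = {"cardiac": 40, "renal": 35, "endocrine": 25}
grid = {area: items_per_section(pct, 50) for area, pct in teaching_time.items()}
print(grid)  # {'cardiac': 20, 'renal': 18, 'endocrine': 12}
```

Rounding can make the per-section counts drift slightly from the planned total, so the allocation is usually checked and adjusted by the teacher.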
SELECTING ITEM TYPES:
Items may be selection type, which provides a set of responses from which to choose, or supply type, which requires the student to provide an answer. Common selection-type items include true-false, matching and multiple choice. Supply-type items include short answer and essay. The choice of item type depends on what we want to measure from the student.
Generally, lower-level outcomes (knowledge, comprehension and application) can be easily evaluated by selection-type items, whereas higher-level outcomes (analysis, synthesis and evaluation) require the use of supply-type items.
EDITING AND VALIDATING ITEMS:
It is essential to edit the items and make any needed corrections. At this stage, peer review of questions is helpful for refining the questions, ensuring accuracy, testing for reliability and eliminating grammatical errors.
ASSEMBLING AND ADMINISTERING A TEST:
Once the items are written and edited, they must be assembled into a test. This step includes arranging the items, writing the test directions, and reproducing and administering the test.
ARRANGING ITEMS:
The following points should be considered when arranging the items:
 Group similar item types together
 Place items within each group in ascending order of difficulty
 Begin the test with an easy question; follow simple to complex and known to unknown
THE QUESTIONS MAY BE ARRANGED AS FOLLOWS:
 Objective/behaviour aspect: abilities in the cognitive and affective domains.
 Content/subject aspect.
 Form of the question aspect, e.g. essay type, short answer.
 Weightage aspect.
DIMENSIONS OF QUESTION BANK:
I. STUDENT LEARNING:
 Student outcomes
 Collaborative/cooperative learning
 Student effort and involvement
II. TEACHING PRACTICE:
 Organization and preparation
 Communication
 Faculty/student interaction
III. COURSE ELEMENTS:
 Grading
 Examinations
 Textbook
 Assignments
 Audio-visual aids
 Technology usage
 Course difficulty, pace and workload
LEVELS OF QUESTION BANK:
Zero level: The question bank is just a collection of questions, classified according to the areas of the syllabus.
Level one: Certain details are arrived at in the form of guess-estimates by consensus of experienced teachers and subject matter experts.
Level two: The questions are classified according to the content or learning objectives that they test, and each question is pre-tested and item analysis carried out to give more accurate information, such as the facility index and discrimination index.
Level three: An extension of level two; at this level the questions, with their technical details, are stored in a computer, facilitating their retrieval and manipulation within a very short time.
WRITING TEST DIRECTIONS:
The written test directions should be self-explanatory. They should include the following information:
 Time allotted to complete the test
 Instructions for responding (e.g. choose the most appropriate answer)
 Instructions for recording the answers on the answer sheet
 Marks assigned to each question.
REPRODUCING THE TEST:
The test should be easy to read and follow. The following guidelines are suggested:
o Type the test neatly
o Space items evenly
o Number items consecutively
o Keep the item stem and options on the same page
o Place introductory material (graph or chart) before the item
o Keep matching items on the same page
o Proofread after compiling and before duplicating

ADVANTAGES:
 It stores a large number of questions.
 It saves time and energy over conventional test development.
 It provides a platform for discussing curriculum goals and objectives.
 A question bank makes ready-made test items available for use by every teacher.
 The cooperative efforts result in the improvement of item quality.
 Most examination weaknesses are minimized by using question banks.
DISADVANTAGES:
 A question bank is not a cure-all for measurement problems.
 It requires a great deal of time in the preparation, planning and development of the question bank.
 All items should be analysed before being included in the question bank.
 Item analysis involves the use of various mathematical and statistical procedures.
GRADES:
 In the evaluation system, grading is a recent phenomenon; earlier, and even now in many courses of study, the scoring system is used. Grading, when compared to the traditional system of scoring, has some pertinent advantages.
 Drawbacks of the traditional scoring system:
 Marking involves subjectivity and bias
 Results are declared as either pass or fail
 All scores have to be summated at the end for assigning a particular division
 The traditional grading system was to assign a single letter grade for each subject: A, B, C, D, E
BASIC DRAWBACKS OF THE TRADITIONAL GRADING SYSTEM:
 They are a combination of achievement, effort, work habits and good behavior
 The proportion of students assigned each letter grade varies from teacher to teacher
 They do not indicate the student's specific strengths and weaknesses in learning
While assigning grades, the following queries have to be clarified:
 What should we include in a letter grade?
 How do we convert the collected data into grades?
 How do we create a frame for grading?
Basically, letter grades are meaningful if they represent achievement; but if achievement is mixed up with external factors such as effort and work habits, the grade gets contaminated, as these factors do not reflect the quality of learning.
However, teachers feel that even these factors should be considered while giving grades. It is very difficult to assess a student's effort or potential exactly; it is also hard to distinguish between aptitude and achievement.
Some students are good at certain aspects and others are good at other aspects. This means that if grading is applied differently to different students, it may send wrong messages and be unfair at times.
While converting scores into grades, different components have to be given different weightages, and a composite score should be generated from which the grade is decided.
For instance, 60% weightage may go to the final examination, 20% to the presentation and 20% to another component, and together the composite is generated out of 100.
But in the above example, because the weightages given to the different components differ, the strengths of the student in each area may vary and affect the aggregate score. The other way may be to take a smaller number of aspects, e.g. two aspects, or to assign equal weightage to the components; thus the scoring and grading become more objective. In fact, a more refined weighting could use the standard deviation as a measure of variability. If teachers are equipped with computing skills, then measurement becomes very simple and more objective.
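The weighted composite described above can be computed directly; the 60/20/20 split follows the text's example, while the student's scores are invented:

```python
def composite_score(scores, weights):
    """Weighted composite of assessment components, each scored out of 100.

    scores and weights are parallel sequences; weights must sum to 1.0
    (e.g. 0.6 final examination, 0.2 presentation, 0.2 third component).
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 100%")
    return sum(s * w for s, w in zip(scores, weights))

# A student scoring 70 on the final exam, 85 on the presentation and 80
# on the remaining component:
print(composite_score([70, 85, 80], [0.6, 0.2, 0.2]))
```

With equal weights of one third each, the same student's composite would rise, illustrating how the choice of weighting shifts the aggregate.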
According to Jacobs and Chase (1992), the characteristics of a good grading system
include:
 Informing students of the specific grading criteria at the beginning of the course (stated clearly in the prospectus/syllabus)
 Assigning grades based only on learning outcomes, without taking into consideration factors such as attendance and effort
 Collecting sufficient data before assigning a grade
 Recording the data collected for grading purposes quantitatively (e.g. 90%, not A)
 Following a uniform grading system for all students
 Using statistically sound principles for assigning grades
ASSIGNING GRADES:
Grading the students not only gives feedback but also motivates them. Each institution may have its own grading policy or scale. The two basic methods of assigning grades are the absolute scale and the relative (comparative) scale.
ABSOLUTE SCALE:
Absolute grading is very convenient when the course objectives have been clearly specified and the standards and mastery levels appropriately set. The letter grades in an absolute system may be defined by the degree to which the objectives have been attained. The student's earned marks, in percentage, are compared with the standard and grades are assigned.
They can be discussed in the following ways:
 Pre-established percentage score
 Criterion-referenced grading
 Numerical rating
I. PRE-ESTABLISHED PERCENTAGE SCORE:
Different ranges of percentage marks are used for assigning letter grades. The test and the assessment are designed to yield scores in terms of the percentage of correct answers; absolute grading can then be decided as given below:
A =95% to 100% correct
B =85% to 94%
C =75% to 84%
D =65% to 74%
E =below 65% correct
For example:
Assigned grades
90-100 A
80-89 B
70-79 C
60-69 D
<60 E
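A pre-established percentage scale like the 90/80/70/60 example above maps directly to a lookup; the bands are configurable:

```python
def absolute_grade(percentage, bands=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Assign a letter grade on a pre-established percentage scale.

    bands: (minimum percentage, grade) pairs in descending order; any
    score below the lowest band receives grade E. The defaults follow
    the 90-100 A, 80-89 B, 70-79 C, 60-69 D example above.
    """
    for cutoff, grade in bands:
        if percentage >= cutoff:
            return grade
    return "E"

print(absolute_grade(84), absolute_grade(92), absolute_grade(55))  # B A E
```

Passing the 95/85/75/65 bands from the first scale instead would shift the same raw scores to lower grades, which is why the cut-offs must be fixed and published in advance.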
II. CRITERION-REFERENCED GRADING:
The criterion-referenced standard is fixed beforehand, either by the teacher or by the authorities, in view of the difficulty of the test and the standard or quality of learning performance needed from the learners, i.e. a decision with regard to the students' performance in terms of behavioural changes, on the basis of which letter grades are assigned.
This can be represented as:
Grade Performance Level of performance
A Outstanding Student has mastered all of the course's major and minor instructional goals
B Very good Student has mastered all of the course's major instructional goals and most of the minor ones
C Satisfactory Student has mastered all of the course's major instructional goals but just a few of the minor ones
D Very weak Student has mastered just a few of the course's major instructional goals but has the essentials needed for the next highest level of instruction
E Unsatisfactory Student has not mastered any of the major instructional goals and lacks the essentials needed for the next higher level of instruction; remedial work is needed

THE GRADING SYSTEM:


A+, A, A-:
Full mastery of the subject; for the grade of A+, the student must show extraordinary distinction.
B+, B, B-:
Good comprehension of the course material; a good command of the skills needed to work with the course material; and the student's full engagement with the course requirements and activities.
C+, C, C-:
Adequate and satisfactory comprehension of the course material; the skills needed to
work with the course material; the student has met the basic requirements for completing
assigned work and participating in class activities.
D+, D, D-:
Unsatisfactory, but some minimal command of the course materials; some minimal
participation in class activities that is worthy of course credit toward the degree
E:
Unsatisfactory and unworthy of course credit towards the degree
Range of marks Letter grade
85-100 A+
75-84 A
70-74 A-
65-69 B+
60-64 B
55-59 B-
50-54 C+
45-49 C
40-44 C-
35-39 D+
30-34 D
25-29 D-
00-24 E

III. NUMERICAL GRADING:


Here, each objective or benchmark is represented with a numerical rating. This can
be as follows:
Acquired proficiency:
 4 = skill developed, good proficiency
 3 = skill developed satisfactorily; proficiency could be improved
 2 = basic skill developed, low proficiency; needs additional work
 1 = basic skill not acquired
ADVANTAGES:
o The objective-based grades are determined by the student's performance
o A performance-based grade is assigned
o If all students demonstrate the same level of mastery, all of them will receive high grades
o A comprehensive report generally includes a checklist of objectives to inform both parents and students
o Reports provide detailed information indicating which skills have been acquired and which have not
o All standard procedures are followed.

LIMITATIONS:
o Teachers have no flexibility to adapt to local needs
o Objective-based performance may not be seen at all times.

RELATIVE SCALE:
o A relative scale rates students according to their ranking within the group.
o The faculty has to record the scores of all students in a descending order.
o Grades may then be assigned using a variety of techniques.
o One method is to assign the grades using natural “breaks” in the distribution. This
method has disadvantage of being subjective.
o The other method is to find the measure of central tendency (mean or mode). In a bell-shaped curve (normal distribution) the mean and mode will be the same; in a skewed distribution, the median can be used as the measure of central tendency.
o Then determine the standard deviation. The C grade is set as the mean plus or minus one half of the SD (encompassing about 40% of the scores).
Grade Calculation
A > upper limit of B
B upper limit of C + 1 SD
C mean ± 0.5 SD
D lower limit of C − 1 SD
E < lower limit of D
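The relative scale above (C spanning the mean ± 0.5 SD, with 1-SD steps on either side for B and D) can be sketched as follows; the score list is invented:

```python
import statistics

def relative_grade(score, mean, sd):
    """Norm-referenced grade: C spans mean +/- 0.5 SD, with one further
    SD step on either side for B and D, as in the table above."""
    if score >= mean + 1.5 * sd:
        return "A"
    if score >= mean + 0.5 * sd:
        return "B"
    if score >= mean - 0.5 * sd:
        return "C"
    if score >= mean - 1.5 * sd:
        return "D"
    return "E"

# Hypothetical class scores, ranked and graded against the group itself.
scores = [35, 48, 55, 60, 62, 65, 70, 78, 85, 92]
mean, sd = statistics.mean(scores), statistics.stdev(scores)
grades = {s: relative_grade(s, mean, sd) for s in scores}
```

Note that under this scheme a score's grade depends entirely on the group: the same 65 would earn a different grade in a stronger or weaker cohort.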

ADMINISTERING THE TEST ROLE OF FACULTY:


Provide conducive physical environment
Conducive physical environment include adequate lighting, comfortable room
temperature, and sufficient space for each candidate, minimal interruptions,
away from loud noise, comfortable furniture, and ventilation. Faculty should
maintain nonthreatening attitude and avoid unnecessary conversation before
and during examination.
Faculty should avoid giving unintentional clues to the students
Maintain confidentiality
Maintain test security (lock up the question papers)
Do not give the same paper each year
Inform students in advance of the consequences of cheating
Ensure close supervision throughout the examination
Ensure careful seating arrangements and spacing of students
VALIDATION AND MODERATION BY PANEL
The term validation is used to describe the process whereby expert panels meet to discuss
and offer critical comment on test materials. Before marked work is handed back to the
students, it undergoes two kinds of moderation:
Internal moderation.
External moderation.
INTERNAL MODERATION
Internal moderation can be carried out in a number of ways, but the principles remain
the same, i.e. to ensure fairness and consistency of marking across the program.
Internal moderation normally consists of the scrutiny of a sample of students' work across the range of marks.
Internal moderation and double-marking (second-marking): It is important to
distinguish internal moderation from double marking. Double or second marking is a process
whereby two tutors independently mark a student's work and come to an agreement about the
final mark awarded. However, this process has limited value: as it focuses on the work of
individual students it does not tell us anything about the overall consistency of a given
marking tutor across their entire range of marking. Moderation, on the other hand, involves
one marker evaluating another marker’s judgment.
Internal moderation process: One effective way to carry out internal moderation is
to select from each marking teacher a sample of marks that they have awarded at each grade
or percentage band. Of course, it is important that markers avoid moderating their own work.
The moderators will have a sample of each grade or percentage band for every marking
teacher and are then in a position to ascertain the degree of consistency between markers for
each grade or percentage band. If only one unit is being assessed, the internal moderation will
sample only in relation to that unit.
When internal moderation has been completed, a sample of assessments should be
sent to the external examiner for further scrutiny.

EXTERNAL MODERATION
External moderation is undertaken by the external examiner, who will inform the program leader of the procedures that he or she would like to adopt.
Normally the external examiner would receive all referred papers and all papers
awarded the highest grade, as well as a sample of papers from each grade or
percentage band. The programme leader needs to ensure that the external examiner
has a time scale sufficient for adequate scrutiny of the papers.
Once the moderation system has been completed, assessment work can be returned to students. Assessments are made available for collection at the departmental office.
Students should note that any mark awarded is provisional at this stage. The final
mark is determined when the board of examiners meets, and a unit pass list is sent to
each successful student as soon as possible after that meeting. Students who were
unsuccessful are informed individually by letter.
PURPOSE OF MODERATION
The purpose of moderation is to ensure consistency and fairness of marking amongst
the unit markers.
Inter-marker reliability, i.e. consistency, is notoriously low, and internal moderation
seeks to expose marked papers to a second scrutiny so as to determine the consistency
of marking standards between different unit markers.
External moderation is carried out by the external examiner appointed to the program,
and the aim here is also to monitor consistency and standard of marking to ensure that
students are being assessed fairly
SUBJECT ASSESSMENT PANELS
Some higher education institutions allow board of examiners to delegate
responsibility for the assessment of groups of units to subject assessment panels. The chair of
such panels is normally the head of the school that houses the majority of units, and
membership consists of internal examiners, i.e. staff who taught the units under
consideration, and external examiners.
CHAIRING THE BOARD OF EXAMINERS
The board of examiners is normally chaired by the head of department or his or her
nominee, and the approach should be formal and rigorous. Each meeting should be
numbered, for example ‘fifth meeting of the board of examiners’; this facilitates review of
decisions taken at previous meetings.
The chair should never take the minutes of the meeting, as this would distract him or
her from conducting the meeting in an appropriate manner. The minutes should be headed
‘confidential’, and recorded by an experienced administrator who should identify the chair
and secretary by name.
VALIDATION BY PANEL
The validation panel is comprised of key stakeholders in early childhood education.
The panel plays an important role. The validity of an instrument refers to the degree to which
the instrument measures what it is supposed to measure. For example, a temperature-measuring
instrument is supposed to measure only temperature; it cannot be considered a
valid instrument if it measures any attribute other than temperature.
According to Treece and Treece, validity refers to an instrument or test actually
testing what it is supposed to be testing.
The number and choice of panelists will depend on circumstances and resources.
The need for confidentiality must be stressed. There will be a panel chairperson, who
might also act as coordinator for distributing the materials well in advance.
The other panelists are chosen for their expertness, the variety of viewpoints they can
contribute, and their number should include some representation from those who will
eventually use the results.
Gender balance should be maintained where possible.
Each item writer whose work is being reviewed acts as secretary to the panel while
those items are being discussed. The item writer then uses the panel meeting records,
written or taped, to edit or vet the draft items as meticulously and carefully as possible.
The panel will have engaged in a variety of discussions and made many suggestions:
they will have offered hunches about the validity or difficulty of items, have given
their perceptions about the plausibility of distracters, and have pointed out actual
errors of fact or language use.
It is now up to the item writer to respond to, and accommodate, as many of these
comments as seem sensible from a professional viewpoint.
SUMMARY:
So far we have discussed question bank preparation, validation and moderation by
panel, covering: introduction, definition, principles, purpose of the question bank, planning a
question bank, preparation of a question bank, preparing question cards, development of the
question bank, the process of item analysis, filing and storage of questions, review and removal
of unwanted questions, the computerised question bank, tests, the purpose of tests, types of tests,
selecting item types, editing and validating items, assembling and administering a test,
arranging items, arrangement of questions, dimensions of the question bank, levels of the question bank,
writing question directions, reproducing the test, advantages, disadvantages, grading,
drawbacks of the traditional grading system, assigning grades, absolute grades, the grading
system, the relative scale, and the role of faculty in administration.
CONCLUSION:
Teachers have to play multifaceted roles as part of their job requirements. They can
contribute significantly to setting a question paper, evaluating the answer scripts and awarding
marks/grades, thereby contributing to a healthy educational environment and social order.
BIBLIOGRAPHY:
1) B. T. Basavanthappa; "Nursing Education"; 1st edition, 2004; EMMESS Medical
Publishers.
2) K. P. Neeraja; "A Textbook of Communication and Education"; 2nd edition; 2006.
3) J. J. Guilbert; "Educational Handbook for Health Personnel"; 6th edition; CBC
Publishers.
4) Marilyn E. Parker; "A Textbook of Nursing Education"; 1st edition, 2003; Jaypee
Brothers Publishers.
5) Elakuvana Bhaskara Raj; "Textbook of Nursing Education"; EMMESS
Medical Publishers.
6) Jaspreet Kaur Sodhi; "Textbook of Nursing Education"; 1st edition, 2017; Jaypee
Brothers Publishers.

JOURNAL
1) THE CREATION OF "QUESTIONS BANK" AND INTRODUCTION OF
EXAMINATION SESSION 2.0
Authors: ANDRZEJ FILIP, PIOTR DRAG
Journal: Information Systems in Management
Distance Learning Centre, Jagiellonian University
The Institute of American Studies and Polish Diaspora, Jagiellonian University
In the Institute of American Studies and Polish Diaspora at the Jagiellonian University, with
the support of the university's Distance Learning Centre, an innovative method of
examination based on empowering the students was introduced. During the 2014 session, the
students were invited to create test questions, and questions accepted by the lecturer were
used in the exam. The extensive "Questions Bank" may be used in subsequent 2.0
examinations. The authors of the paper present practical advice on how to prepare and
carry out such an examination, sharing practical suggestions, from pedagogical to technical
aspects, on moving from teaching to learning while using the idea of a Questions Bank. They
discuss the impact on the motivation and creativity of students, the principles of achievement
and assessment, methods of verifying the content of the questions, and technical measures for
constructing questions and hindering cheating. This innovative method of preparing and
conducting the exam had a positive impact on the mobilisation and involvement of students,
which resulted in very good results in the lecturer's performance evaluation questionnaires.
2) AUTOMATIC GENERATION OF QUESTION BANK BASED ON PRE-DEFINED
TEMPLATES
Author: AHMED EZZ AWAD, MOHAMED EHIA DAHAB
Journal: International journal of innovation and advancement in computer science
The preparation of a question bank is a difficult and time-consuming task. This paper
explains an algorithm that provides a solution for the automatic generation of a question bank
based on a set of pre-defined templates. All possible questions are generated by parameterising
concepts from the set of pre-defined templates. The generated questions cover all selected
topics at all levels of difficulty in the form of multiple-choice questions (MCQs). The outputs
can be used for both examinations and asynchronous training; if the output is used in
asynchronous training, it is augmented with a set of brief explanations. A successful automatic
question bank generator has been developed for a biology course (Bio110) at KING
ABDULAZIZ UNIVERSITY (KAU).
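The template-and-parameter idea this abstract describes could be sketched roughly as follows. Everything here, the template text, the parameter values and the answer key, is invented for illustration and is not taken from the paper itself.

```python
from itertools import product

# All template text, parameters and answers below are illustrative inventions.
TEMPLATE = "Which organelle is primarily responsible for {function}?"
PARAMS = {"function": ["protein synthesis", "ATP production", "photosynthesis"]}
ANSWERS = {"protein synthesis": "ribosome",
           "ATP production": "mitochondrion",
           "photosynthesis": "chloroplast"}

def generate_mcqs(template, params, answers, n_distractors=2):
    """Expand every parameter combination of a template into an MCQ dict.

    Assumes the template varies a single concept named "function", whose
    value keys the answer lookup; distractors are drawn from the other answers.
    """
    questions = []
    for combo in product(*params.values()):
        filled = dict(zip(params.keys(), combo))
        correct = answers[filled["function"]]
        distractors = [a for a in answers.values() if a != correct][:n_distractors]
        questions.append({
            "stem": template.format(**filled),
            "options": sorted([correct] + distractors),
            "answer": correct,
        })
    return questions
```

With the sample data above, one template and three parameter values yield three complete MCQs, each with a stem, an option list and a keyed answer.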
3) VALIDATION OF A QUESTION BANK AS PREPARATION FOR THE
EMERGENCY MEDICINE IN- TRAINING EXAMINATION
Author: JOMASELLI P, GOVERNATORI N.
Journal: Western journal of emergency medicine
These results suggest that a question bank may be useful for predicting performance
on in-training exam scores. Major limitations of the study include the small sample size and the
use of one particular question bank. Further research is necessary to compare different study
preparation materials.
