Training Evaluation - PPT 6
6-1
Objectives
After reading this chapter, you should be able to:
6-2
Objectives (continued)
5. Choose the appropriate evaluation
design based on the characteristics of
the company and the importance and
purpose of the training.
6. Conduct a cost-benefit analysis for a
training program.
6-3
Introduction
Walgreen Company wanted to determine if the
time, money, and effort devoted to training
technicians actually made a difference.
It was interested in assessing the effectiveness of the
training program.
6-4
Introduction (continued)
Training effectiveness refers to the benefits that
the company and the trainees receive from training.
Training outcomes or criteria refer to measures
that the trainer and the company use to evaluate
training programs.
Training evaluation refers to the process of
collecting the outcomes needed to determine if
training is effective.
Evaluation design refers to from whom, what,
when, and how information needed for
determining the effectiveness of the training
program will be collected.
6-5
Reasons for Evaluating Training
Companies are investing millions of
dollars in training programs to help
gain a competitive advantage.
Training investment is increasing
because learning creates knowledge,
and that knowledge differentiates
successful companies and employees
from unsuccessful ones.
6-6
Reasons for Evaluating Training
(continued)
6-7
Training evaluation involves:
Formative evaluation – evaluation
conducted to improve the training
process.
Summative evaluation – evaluation
conducted to determine the extent to
which trainees have changed as a result
of participating in the training program.
6-8
Why Should A Training Program Be
Evaluated?
To identify the program’s strengths
and weaknesses.
To assess whether content,
organization, and administration of the
program contribute to learning and the
use of training content on the job.
To identify which trainees benefited
most or least from the program.
6-9
Why Should A Training Program Be
Evaluated? (continued)
To gather data to assist in marketing
training programs.
To determine the financial benefits and
costs of the programs.
To compare the costs and benefits of
training versus non-training investments.
To compare the costs and benefits of
different training programs to choose the
best program.
6 - 10
The Evaluation Process
Conduct a Needs Analysis
6 - 11
6 - 12
Training Outcomes: Kirkpatrick’s Four-Level
Framework of Evaluation Criteria
Level  Criteria   Focus
1      Reactions  Trainee satisfaction
2      Learning   Acquisition of knowledge, skills, attitudes, behaviors
3      Behavior   Improvement of behavior on the job
4      Results    Business results achieved by trainees

Outcomes Used in Evaluating Training Programs:
Cognitive Outcomes
Skill-Based Outcomes
Affective Outcomes
Results
Return on Investment
6 - 13
Outcomes Used in Evaluating Training
Programs: (continued)
Cognitive Outcomes
Determine the degree to which trainees are
familiar with the principles, facts,
techniques, procedures, or processes
emphasized in the training program.
Measure what knowledge trainees learned
in the program.
Skill-Based Outcomes
Assess the level of technical or motor skills.
Include acquisition or learning of skills and
use of skills on the job.
6 - 14
Outcomes Used in Evaluating Training
Programs: (continued)
Affective Outcomes
Include attitudes and motivation.
Trainees’ perceptions of the program
including the facilities, trainers, and
content.
Results
Determine the training program’s payoff
for the company.
6 - 15
Outcomes Used in Evaluating Training
Programs: (continued)
Return on Investment (ROI)
Comparing the training’s monetary benefits
with the cost of the training.
Direct costs
Indirect costs
Benefits
6 - 16
How do you know if your outcomes are
good?
Good training outcomes need to be:
Relevant
Reliable
Able to discriminate
Practical
6 - 17
Good Outcomes: Relevance
Criteria relevance – the extent to which
training programs are related to learned
capabilities emphasized in the training
program.
Criterion contamination – extent that training
outcomes measure inappropriate capabilities or
are affected by extraneous conditions.
Criterion deficiency – failure to measure
training outcomes that were emphasized in the
training objectives.
6 - 18
Criterion deficiency, relevance, and contamination:
[Figure: two overlapping sets – the outcomes identified by the needs
assessment and included in the training objectives, and the outcomes
measured in the evaluation. Their overlap is relevance; objectives-only
outcomes are deficiency; measured-only outcomes are contamination.]
6 - 19
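The relationships among relevance, contamination, and deficiency can be made concrete as set operations. In this sketch, the outcome names are invented purely for illustration; the logic compares the outcomes named in the training objectives with the outcomes actually measured in the evaluation:

```python
# Hypothetical sketch: criterion relevance, contamination, and deficiency
# modeled as set relationships. All outcome names are illustrative.

objectives = {"product knowledge", "customer greeting", "register operation"}
measured   = {"product knowledge", "customer greeting", "general IQ score"}

relevance     = objectives & measured   # intended AND measured
contamination = measured - objectives   # measured but never emphasized in training
deficiency    = objectives - measured   # emphasized in objectives but never measured

print(sorted(relevance))      # the relevant criteria
print(sorted(contamination))  # e.g. an extraneous IQ measure
print(sorted(deficiency))     # e.g. register operation was never assessed
```

An outcome measure is fully relevant only when the contamination and deficiency sets are both empty.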
Good Outcomes (continued)
Reliability – degree to which outcomes
can be measured consistently over time.
Discrimination – degree to which
trainees' performance on the outcome
actually reflects true differences in
performance.
Practicality – the ease with which
the outcome measures can be collected.
6 - 20
Evaluation Designs: Threats to Validity
6 - 21
Threats to Validity
Threats To Internal Validity
Company
Persons
Outcome Measures
Threats To External Validity
Reaction to pretest
Reaction to evaluation
Interaction of selection and training
Interaction of methods
6 - 22
Methods to Control for Threats to Validity
Random Assignment
6 - 23
Types of Evaluation Designs
Posttest-only
Time series
6 - 24
6 - 25
Factors That Influence the Type of Evaluation
Design
Factor              How Factor Influences Type of Evaluation Design
Change potential    Can the program be modified?
6 - 26
To calculate return on investment (ROI),
follow these steps:
1. Identify outcome(s) (e.g., quality, accidents)
2. Place a value on the outcome(s)
3. Determine the change in performance after
eliminating other potential influences on
training results.
4. Obtain an annual amount of benefits
(operational results) from training by
comparing results after training to results
before training (in dollars)
6 - 27
To calculate return on investment (ROI),
follow these steps: (continued)
5. Determine training costs (direct costs +
indirect costs + development costs + overhead
costs + compensation for trainees)
6. Calculate the total savings by subtracting the
training costs from benefits (operational
results)
7. Calculate the ROI by dividing benefits
(operational results) by costs.
The ROI gives you an estimate of the
dollar return expected from each dollar
invested in training.
6 - 28
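The seven ROI steps above can be walked through with numbers. All dollar figures in this sketch are hypothetical, chosen only to make the arithmetic concrete:

```python
# Hedged worked example of the ROI calculation steps; every dollar
# figure is invented for illustration.

# Step 4: annual benefit = operational results after training minus before
annual_benefits = 220_000 - 160_000

# Step 5: total training costs
direct_costs = 18_000        # trainers, materials, travel
indirect_costs = 4_000       # administrative support
development_costs = 10_000   # program design
overhead_costs = 3_000
trainee_compensation = 15_000
training_costs = (direct_costs + indirect_costs + development_costs
                  + overhead_costs + trainee_compensation)

# Step 6: total savings = benefits minus costs
total_savings = annual_benefits - training_costs

# Step 7: ROI = benefits divided by costs
roi = annual_benefits / training_costs

print(f"training costs: ${training_costs:,}")   # $50,000
print(f"total savings:  ${total_savings:,}")    # $10,000
print(f"ROI: {roi:.2f} dollars returned per dollar invested")  # 1.20
```

With these figures, each dollar invested in training returns an estimated $1.20 in operational benefits.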
6 - 29