Training Metrics Matter:
Developing Measurements That Drive Business Results
From Your Training Efforts
by
Joe Thomas, Ph.D.
Orgwide Services™
165 N. Main St., Suite 202
Collierville, TN 38017
901.850.8190
Before we explore each of these reasons (excuses?) in more depth, let’s quickly
review why we should even care about improving our understanding of
quantitative and qualitative measurements of outcome and impact. So let’s ask
the question that all managers ask: What’s in it for me?
Training metrics matter. Metrics are the language of operations managers and
thus, should be the language of training managers as well. And, most training
managers, despite their lack of confidence, are quite capable of developing
meaningful training measurements, applying them to their training initiatives,
and learning to “listen” to what those metrics say about the efficacy of their
training activities, efforts, and programs.
Let’s look at each of the three general reasons that managers typically offer to
avoid adopting and applying measurements to their training programs. Then
we’ll review the basics of developing training metrics in a short primer.
REASON #1: Training managers don’t trust their own capabilities to collect and
understand metrics.
Training managers often cite the excuse, “It’s too technical and specialized” to
explain why they believe they are incapable of understanding basic training
measurement systems. In most instances, this reasoning reflects an acceptance
of the mystery that surrounds statistics and applied math, a fallacy that has been
perpetrated by measurement specialists themselves. It has always been in the
best interest of statisticians to create an aura of difficulty around the practical
use of numbers to describe natural phenomena—I suppose in the interest of
“job security.” In reality, applied statistics make perfect sense and are quite
easily understood by the typical non-statistician.
Another reason frequently offered is that training statistics are generally “not
well understood.” Nothing could be further from reality. Training measurement
systems are among the most prolific and well-understood numerical descriptors
used in the business world today, second only to financial metrics. This excuse
falls flat in the face of the plethora of journals, books, and magazine articles that
succinctly describe and explain how to measure training effectiveness. From the
Kirkpatrick Model of Training Evaluation to basic operational efficiency
measures, training measurement systems are most certainly not a mystery.
Finally, we often hear that capturing training metrics is “just too costly.” That’s
like saying that balancing your checkbook is too costly an exercise, or that
taking the time to capture financials about your operation is not worth the time
and expense. What’s the true cost of not measuring your training activities and
initiatives? How long can training managers offer courses that purport to
improve a business unit using guesswork, hoping everything will work itself
out? Hope is not a good training strategy.
REASON #2: It’s been their experience that training metrics aren’t going to be used
anyway, so why bother?
It’s easy to understand why managers adopt a “Why bother?” attitude. Many
numbers are bandied about in an operation; few are acted upon. The reason for
this excuse is understandable: other than commonly used financial information,
many of the numbers published are worthless and don’t tell executives a thing
about the operation. So the managers are actually correct in assuming
executives will ignore their “numbers.” But that is because experienced
executives ignore trivial, nonsensical numerical descriptions of a training
department; they know which numbers deserve their attention. As a result,
these managers enter a never-ending cycle: show meaningless numbers, be
ignored. Technically, this excuse is reasonable, but the cycle need not continue.
Managers have the power to apply the right measurements to their training
efforts to get their senior management team to pay attention. But, they have to
know which metrics to apply.
REASON #3: Training managers are afraid of what the metrics might reveal about
their training efforts.
This is a reason I can finally get my head around. If I were concerned about my
competence as a training manager and afraid that I didn’t have the skills to
develop and deliver training that improves my organization, I would assiduously
avoid using numbers to describe my training initiatives. This is the unspoken,
real reason that many training managers avoid quantifying their training efforts:
they are uncertain what the metrics would say about their training activities—
what the metrics would reveal about their ability to provide meaningful and
purposeful training.
A Short Primer on Developing Training Metrics
First, what are the most popular measurements used by training managers?
They fall into two categories: (1) audience-based metrics and (2) outcome-
based metrics.
I’m going to present the remainder of this primer in an inductive manner, and
I’m going to use examples that you’ll instantly recognize. It’s rather unfair that
training departments are among the most often maligned for being unable to
quantify their contribution to organizations when there are so many good
training metrics available to build on. I’m going to build your understanding on
the most fundamental concept: the distinction between measuring inputs and
outputs of a training department.
The very basic numbers that describe any training effort can be characterized in
terms of either input or output descriptions. Simple? Well, actually, it really is
this simple. If you think in terms of what characterizes an input to your training
efforts, and how you can characterize output, you have most of the problem
defined. In the context of training, the clever term for this combination of
measures is “productivity measurement.” In the simplest—and most useful—
terms, productivity can be defined as results as a function of effort. Or, for the
mathematically inclined:

Productivity = Results ÷ Effort (that is, Outputs ÷ Inputs)
Note the subtle difference between two ratios built on this definition from the
same measure of results: they differ only in how the denominator is formed,
“number of instructors” vs. “number of instructor hours.” They each measure
something different, and each is a reasonable productivity metric. The first
describes the efficiency of the training department as a whole, while the second
describes the efficiency of the trainers themselves. Some might even interpret
the second metric as a measure of the efficiency of the scheduling mechanism
used to deliver instruction by those trainers. You, the training manager, will be
the best judge of how to interpret such a metric. Note that each measurement
tells a different story, but both metrics together tell a more complete story.
Combining output and input measures into ratios yields metrics that are directly
comparable across time periods and business units, and that can be used as
baselines and benchmarks for larger measurement initiatives.
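To make the arithmetic concrete, here is a minimal Python sketch of these two
ratios. The figures and the choice of “learners trained” as the results measure
are hypothetical assumptions for illustration, not recommended values.

# A minimal sketch of the two productivity ratios discussed above.
# All figures are hypothetical; substitute your own input and output data.

learners_trained = 480    # output: a hypothetical measure of training results
instructors = 6           # input: headcount of trainers
instructor_hours = 320    # input: total hours spent delivering training

# Department-level efficiency: results per instructor
learners_per_instructor = learners_trained / instructors

# Trainer-level (or scheduling) efficiency: results per instructor hour
learners_per_instructor_hour = learners_trained / instructor_hours

print(f"Learners trained per instructor:      {learners_per_instructor:.1f}")
print(f"Learners trained per instructor hour: {learners_per_instructor_hour:.2f}")

The point of the sketch is simply that the same output, divided by two different
inputs, tells two different stories about the same training effort.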
To get started, ask yourself three questions:

1. What inputs are useful for measuring the effort behind your training
operation? (Think “effort”: the resources you invest.)

2. What outputs are also useful for measuring your training operation’s
success? (Think “results,” but don’t worry about whether they match
your input units of measure…yet.)

3. Can you collect input and output data that are accurate? (Inaccurate
measures will corrupt your metrics and ruin the credibility of related
measurement initiatives.)
If you have specific questions about how your training team is performing, try
combining your input and output measures in a logical way to answer your
questions. If you don’t know where to start, try combining your output and
input measures in various ways and see what training metrics emerge. Many
ratios of outputs to inputs yield valuable
information and insight, often with surprising results. Be creative—but always
strive for meaning.
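If a worked example helps, the short Python sketch below pairs every output
measure with every input measure to list candidate ratios. The measure names
and figures are hypothetical placeholders, not recommended metrics.

# A sketch that pairs every output measure with every input measure
# to surface candidate productivity ratios. All names and figures are hypothetical.

outputs = {"learners trained": 480, "courses delivered": 24, "assessments passed": 410}
inputs = {"instructors": 6, "instructor hours": 320, "training budget ($)": 18000}

for out_name, out_value in outputs.items():
    for in_name, in_value in inputs.items():
        ratio = out_value / in_value
        print(f"{out_name} per {in_name}: {ratio:.2f}")

Scan the resulting list and keep only the ratios that carry real meaning for your
operation; discard the rest.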
A Few Cautions
“Meaning” must rule. If you can’t easily explain your training metric to another
manager, the metric is probably either too complex or too esoteric. Either way,
it’s useless. Drop it and find another metric to replace it.
Consider the cost of collecting the output/input data. Just because you can
doesn’t mean you should. Some metrics that describe training are very
interesting but very expensive to collect. Weigh the cost of the data collection
against how you might use the metric to improve your training efforts. Calculate
whether you can even recoup the cost of the data collection as a function of
potential improvements directly related to the training. Make informed—and
numbers-based—decisions!
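As a back-of-the-envelope illustration of that calculation, consider the sketch
below; the dollar figures are invented for the example.

# Back-of-the-envelope check: does a metric's data collection pay for itself?
# All dollar figures are hypothetical.

annual_collection_cost = 2400.0   # e.g., staff time spent gathering and validating data
expected_annual_savings = 6000.0  # improvements you expect the metric to help drive

net_benefit = expected_annual_savings - annual_collection_cost
print(f"Net annual benefit of collecting this metric: ${net_benefit:,.0f}")
print("Worth collecting" if net_benefit > 0 else "Reconsider this metric")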
Ensure your data are accurate. If you are not collecting the data yourself,
validate where the data are coming from. If you are relying on other
departments to provide data, find a way to validate their data. Take nothing for
granted about the accuracy of the data you are using. Well-intended suppliers of
data can be inadvertently providing mischaracterized information.
What’s Next?
Okay, so what are you going to do with your new-found skills and understanding
of how to build measurements that describe your training efforts? Simple:
If you’ve bothered to collect data and calculate any metrics, you’re doing it for a
reason—to manage your training department better. Start by selecting the key
inputs/outputs for your training program(s) and activities, and begin
measuring them. Then, create baseline data and measure over a period of time.
By definition, you’ll be monitoring your training metrics. Finally, by using your
metrics intelligently, you’ll find yourself managing what you are able to manage.
Not every aspect of your training department will be in your control. That’s
okay. There are enough activities within your control for you to make a
difference with your metrics.
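To make the measure-baseline-monitor loop concrete, here is one possible
sketch. The quarterly figures and the choice of metric are assumptions for
illustration only.

# Sketch of baselining and monitoring a single training metric over time.
# The metric (learners trained per instructor hour) and figures are hypothetical.

quarterly_metric = {
    "Q1": 1.38,  # baseline period
    "Q2": 1.45,
    "Q3": 1.52,
    "Q4": 1.41,
}

baseline = quarterly_metric["Q1"]
for quarter, value in quarterly_metric.items():
    change = (value - baseline) / baseline * 100
    print(f"{quarter}: {value:.2f} ({change:+.1f}% vs. baseline)")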
What are the instant and obvious benefits of adopting this simple practice of
measuring, monitoring, and managing? You’ll be able to convey your training
department’s value to your organization (and by proxy, YOUR value) easily and
unambiguously. You’ll find yourself in a position where you are able to make
adjustments to your training programs more quickly to respond to discovered
or alleged inefficiencies. You’ll develop a vernacular that is common to senior
management—the language of numbers. You’ll become supremely confident in
your ability to understand your own training department and describe its
benefits to others. Now, isn’t that last reason alone motivating enough to begin
to explore and apply training metrics to your department’s training activities?
A Final Note
There are a number of excellent resources available to help you get started in
your discovery (or re-discovery) of the domain of training metrics. Consider
SAGE Publications’ Applied Social Research Methods series for deeper
background information. Of course, any Harvard Business Review
practitioner-oriented book would be a good place to search. Also consider books
or articles related to Six Sigma, Total Quality Management (TQM),
activity-based costing (ABC), and economic value added (EVA).
For more than 20 years, Dr. Joe Thomas has created and evaluated technology-
based learning and assessment solutions for education and workplace training.
Joe is a former member of senior management at NIKE, Inc. where he worked
domestically and internationally to implement Change Management and training
solutions in Asia and Europe. Prior to joining NIKE, he served as a senior
industrial psychologist with Federal Express Corporation where he led the
Testing and Training Technologies department’s efforts in developing scientific
measurement models and evaluating employee competencies for all customer
service-based jobs.
Joe earned a B.A. and M.A. in experimental psychology from California State
University at Fullerton, and an M.A. and Ph.D. in quantitative psychology from
the Johns Hopkins University. Because of his passion for teaching, he has
maintained a number of adjunct teaching positions during his professional
career. He also maintains memberships in many professional organizations and
is a highly regarded presenter on the topics of innovative eLearning, assessment
methodologies, and performance improvement.
Orgwide Services™ was founded in response to a need for a more efficient and
cost-effective way to deliver mission-critical training, news, and information
across organizations. Leveraging our expertise and extensive experience across
multiple industries, we will help you significantly compress the time it takes to
achieve the transfer of knowledge across your geographically dispersed and
intricate organization. Orgwide Services™ is your one-stop partner for
eLearning and classroom training, team-member surveys, and internal
communication development and delivery. We’re with you from “Needs
Assessment through Evaluation,” helping to enhance the ways in which you
engage your audience, exchange ideas, and empower your entire organization
with the certainty to succeed.