
Training Metrics Matter:
Developing Measurements That Drive Business Results From Your Training Efforts

by
Joe Thomas, Ph.D.

Orgwide Services™
165 N. Main St., Suite 202
Collierville, TN 38017
901.850.8190

enhance  engage  exchange  empower

May 2011 www.orgwide.com



Table of Contents

Why Should I Care About Measuring My Training Efforts?
Reasons Training Metrics Are Often Shunned
Types of Training Measurements
A Primer on Training Metrics
A Simple Guide for Creating Training Metrics
A Few Cautions
What’s Next?
A Final Note
About the Author
About Orgwide Services™


TRAINING METRICS MATTER:
Developing Measurements That Drive Business Results From Your Training Efforts

Why are training managers so afraid of training metrics?

A business question has plagued me for 20 years. As an organizational psychologist, I have reviewed and consulted with many service and manufacturing organizations, and personally managed and led a major distribution center operation as well. During that time, I encountered a near-paranoid attitude toward the use and adoption of training metrics by many training managers. By “training metrics,” I’m referring to those specific measurements that quantitatively describe the impact, effect, and outcome of activities directly related to a training initiative. This includes not only the job-specific training required to perform successfully in a position, but also regulatory compliance training (e.g., sexual harassment, diversity, ethics) and new-hire orientation training.

Discussions with many managers have led me to characterize their disenchantment with measuring their training efforts (an activity that could clearly improve their ability to run their respective operations) into three general categories:

1. Training managers don’t trust their own capabilities to collect and understand metrics.

2. It’s been their experience that the measurements aren’t going to be used anyway, so why bother?

3. They’re afraid of what the “numbers” might show.

Before we explore each of these reasons (excuses?) in more depth, let’s quickly
review why we should even care about improving our understanding of
quantitative and qualitative measurements of outcome and impact. So let’s ask
the question that all managers ask: What’s in it for me?


Why Should I Care About Measuring My Training Efforts?

Operationally speaking, many organizational leaders cite three “pain points” in their training departments: 1) lack of strategic alignment among departments/divisions regarding training; 2) inability to agree or focus on specific training goals; and 3) lack of accountability or ownership of training programs. Interestingly, each of these so-called pain points could be easily ameliorated by a better understanding of how to develop a meaningful measurement system and then apply it to interpreting outcome data.

1. Lack of strategic alignment among departments/divisions regarding training. One of the most commonly noted questions at the mid-management level is: How can I manage my training dollars when I don’t really know what’s important to my boss or our business unit’s success? Without a clear vision of what success looks like in the organization, managers are forced to establish their own training priorities. This approach inevitably results in a veritable Tower of Babel around training programs, with managers needlessly expending budgets on non-mission-critical training activities.

2. Inability to agree or focus on specific training goals. As a corollary to the lack of strategic alignment about training, and as a direct result of it, managers are unable to agree on specific training goals. Improved customer service? Reduced costs? Improved compliance? The inability to develop S.M.A.R.T. (Specific, Measurable, Achievable, Results-oriented, Time-bound) objectives around training efforts results in wasted training dollars.

3. Lack of accountability or ownership of training programs. It’s easy to dodge accountability if there are no numbers or measurements against which performance can be judged. Even if team members accepted responsibility for their efforts to drive corporate performance through training initiatives, the absence of a meaningful measurement system would force them to rely on intuition and guesswork to ascertain the impact of the training activities.


Training metrics matter. Metrics are the language of operations managers and thus should be the language of training managers as well. And most training managers, despite their lack of confidence, are quite capable of developing meaningful training measurements, applying them to their training initiatives, and learning to “listen” to what those metrics say about the efficacy of their training activities, efforts, and programs.

Reasons Training Metrics Are Often Shunned

Let’s look at each of the three general reasons that managers typically offer to avoid adopting and applying measurements to their training programs. Then we’ll review the basics of developing training metrics in a short primer.

REASON #1: Training managers don’t trust their own capabilities to collect and
understand metrics.

Training managers often cite the excuse “It’s too technical and specialized” to explain why they believe they are incapable of understanding basic training measurement systems. In most instances, this reasoning reflects an acceptance of the mystery that surrounds statistics and applied math, a fallacy perpetuated by measurement specialists themselves. It has always been in the best interest of statisticians to create an aura of difficulty around the practical use of numbers to describe natural phenomena, I suppose in the interest of “job security.” In reality, applied statistics make perfect sense and are quite easily understood by the typical non-statistician.

Another frequently quoted reason, related to the first, is “I’m not a mathematician.” This is a lame excuse. The fact is, basic training metrics require only the four fundamental mathematical operations: addition, subtraction, multiplication, and division. Nothing more complex than that. And I’ll prove it in the primer that follows at the end of this discussion.

Another reason frequently offered is that training statistics are generally “not well understood.” Nothing could be further from reality. Training measurement systems are among the most prolific and well-understood numerical descriptors used in the business world today, second only to financial metrics. This excuse falls flat in the face of the plethora of journals, books, and magazine articles that succinctly describe and explain how to measure training effectiveness. From the Kirkpatrick Model of Training Evaluation to basic operational efficiency measures, training measurement systems are most certainly not a mystery.

Finally, we often hear that capturing training metrics is “just too costly.” That’s like saying that balancing your checkbook is too costly an exercise, or that taking the time to capture financials about your operation is not worth the time and expense. What’s the true cost of not measuring your training activities and initiatives? How long can training managers offer courses that purport to improve a business unit using guesswork, hoping everything will work itself out? Hope is not a good training strategy.

REASON #2: It’s been their experience that training metrics aren’t going to be used
anyway, so why bother?

It’s easy to understand why managers adopt a “Why bother?” attitude. Many numbers are bandied about in an operation; few are acted upon. The reason for this excuse is understandable: other than commonly used financial information, many of the numbers published are worthless and don’t tell executives a thing about the operation. So the managers are actually correct in assuming executives will ignore their “numbers,” because experienced executives know which numbers to pay attention to and ignore trivial, nonsensical numerical descriptions of a training department. As a result, these managers enter a never-ending cycle: show meaningless numbers, be ignored. Technically, this excuse is reasonable, but the cycle need not continue. Managers have the power to apply the right measurements to their training efforts to get their senior management team to pay attention. But they have to know which metrics to apply.

REASON #3: They’re afraid of what the “numbers” might show.

This is a reason I can finally get my head around. If I were concerned about my
competence as a training manager and afraid that I didn’t have the skills to
develop and deliver training that improves my organization, I would assiduously
avoid using numbers to describe my training initiatives. This is the unspoken,
real reason that many training managers avoid quantifying their training efforts:
they are uncertain what the metrics would say about their training activities—
what the metrics would reveal about their ability to provide meaningful and
purposeful training.


Senior management should insist on the use of training metrics to measure every training initiative and program. While it has nearly become cliché, Dr. W. Edwards Deming’s (paraphrased) expression still rings true 30 years after it was introduced in the context of his “seven deadly diseases”: “You can’t manage what you don’t measure.” And, interestingly, most training managers would be surprised (and pleased) by what a clear, unambiguous numerical description of their training activities would tell them. Good metrics would simply confirm what they already believe they know about the efficacy of their training activities, and the efficiency with which they are designed, developed, and deployed. Metrics can, and should be, a training manager’s first line of defense.

Types of Training Measurements

The following discussion is relevant to training managers attempting to establish the value of their training initiatives. In addition to recommending metrics that can be employed to describe your training programs, we’ll also remind you of (or, for some, introduce you to) the basic principles of developing and understanding training metrics.

First, what are the most popular measurements used by training managers? They fall into two categories: (1) audience-based metrics and (2) outcome-based metrics.

Audience-Based Metrics. These are the typical measurements used by training managers to justify their existence. Some are meaningful; others are just noise and don’t provide an actionable understanding of the department’s impact. Examples include:

SAMPLE AUDIENCE-BASED METRICS

Number of employees who indicate they need training
Number of employees who actually take the training
Number of training participants in a course
Ratings from “smile sheets” at the end of training (Kirkpatrick Level 1)
Passing scores for training participants
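
To show how such counts become simple ratios, here is a minimal sketch in Python. The figures and the derived “completion rate” and “pass rate” are illustrative assumptions, not metrics named in this paper.

    # Hypothetical counts pulled from an LMS or course roster.
    employees_needing_training = 180
    employees_who_took_training = 153
    participants_passing = 139

    # Two simple ratios built from these audience-based counts
    # (the ratio names are illustrative, not taken from the paper).
    completion_rate = employees_who_took_training / employees_needing_training
    pass_rate = participants_passing / employees_who_took_training

    print(f"Completion rate: {completion_rate:.0%}")  # share of identified need actually trained
    print(f"Pass rate: {pass_rate:.0%}")              # share of trainees with passing scores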


Outcome-Based Metrics. These measurements focus on the outcomes of the training and are more meaningful in determining the value of training efforts to an organization. They can be aligned directly to an organization’s expressed mission by way of explicit training strategies and outcome measures. Some examples are:

SAMPLE OUTCOME-BASED METRICS

Retention of training knowledge N weeks following training
Demonstrated ability to apply training knowledge to problems
A measurable change in behavior at the end of training
Increase in performance as measured before and after training
Return on investment (yes, it can be done!)

Notice that the outcome-based measurements are demonstrable, tangible, and can be measured objectively. Self-reports of training satisfaction and improvement are notoriously over-rated and terribly unreliable. Seek, instead, to focus your energies on developing and reviewing objective measurements, measurements that can’t be unconsciously and unintentionally biased.
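
As a concrete illustration of the last two items in the list above, here is a minimal sketch in Python. The scores, costs, and benefit figure are hypothetical, and the ROI formula shown (net benefit divided by cost) is one common convention rather than a method prescribed by this paper.

    # Hypothetical before/after assessment scores for one cohort (one pair per participant).
    before = [62, 70, 58, 75, 66]   # scores before training
    after = [78, 81, 72, 83, 74]    # same assessment, N weeks after training

    # Outcome-based metric 1: average performance change, before vs. after.
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    improvement_pct = (avg_after - avg_before) / avg_before * 100
    print(f"Average score improved {improvement_pct:.1f}% after training")

    # Outcome-based metric 2: a simple return-on-investment estimate.
    training_cost = 12_000.00        # assumed design, delivery, and participant time
    estimated_benefit = 18_500.00    # assumed dollar value of the performance gain
    roi_pct = (estimated_benefit - training_cost) / training_cost * 100
    print(f"Estimated ROI: {roi_pct:.0f}%")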

A Primer on Training Metrics

Let’s look more closely at how to develop and understand training measurements. The most important principle is related to the application of a potential metric. Look past the apparent, obvious meaning of a metric and ask, “How does my understanding and use of this metric help me do my job better?” While this appears at first blush to be an esoteric question, it is indeed the most practical question you should ask. There are hundreds (some would argue thousands) of training metrics that you might encounter that purport to measure or describe your training efforts. And many, if not all, will in fact do so in some fashion. But not all metrics are equally important or useful. Some do a better, and quicker, job of describing how effectively your training initiatives are performing.

I’m going to present the remainder of this primer in an inductive manner, and I’m going to use examples that you’ll instantly recognize. It’s rather unfair that training departments are among the most often maligned for being unable to quantify their contribution to organizations when there are so many good training metrics available to build on. I’m going to build your understanding on the most fundamental concept: the distinction between measuring the inputs and outputs of a training department.

Input and Output

The very basic numbers that describe any training effort can be characterized in
terms of either input or output descriptions. Simple? Well, actually, it really is
this simple. If you think in terms of what characterizes an input to your training
efforts, and how you can characterize output, you have most of the problem
defined. In the context of training, the clever term for this combination of
measures is “productivity measurement.” In the simplest—and most useful—
terms, productivity can be defined as: results as a function of effort. Or, for the
mathematically inclined:

Productivity = Results ÷ Effort   (results divided by effort)

Could it be simpler than that? Productivity is classically defined as the ratio of output to input, or the output realized divided by the input required. For example, each of the following ratios yields a productivity measure for training activities (although not all are equally useful in managing a training department):

number of students trained ÷ hours of instruction
number of training courses ÷ number of hours to create
number of seat hours conducted ÷ number of instructors
number of seat hours conducted ÷ number of instructor hours

Note the subtle difference in the last two ratios: they differ only in how the denominator is formed, “number of instructors” versus “number of instructor hours.” Each measures something different, and each is a reasonable productivity metric. The third ratio describes the efficiency of the training department as a whole, while the fourth describes the efficiency of the trainers themselves. Some might even interpret the fourth ratio as a measure of the efficiency of the scheduling mechanism used to deliver instruction by these trainers. You, the training manager, will be the best judge of how to interpret such a metric. Note that each measurement tells a different story, but both metrics together tell a more complete story. Combining output and input measures into ratios yields metrics that are directly comparable across time periods and business units, and they can be used as baselines and benchmarks for larger measurement initiatives.
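
Here is a minimal sketch, in Python, of how those last two ratios might be computed from simple departmental counts; the figures and the helper function are illustrative assumptions, not part of the paper.

    def productivity(output, effort):
        """Classic productivity ratio: output realized divided by input required."""
        if effort == 0:
            raise ValueError("Effort (the denominator) must be non-zero")
        return output / effort

    # Hypothetical monthly figures for a training department.
    seat_hours_conducted = 1_200
    number_of_instructors = 8
    instructor_hours = 640

    # Department-level efficiency: seat hours conducted per instructor.
    print(productivity(seat_hours_conducted, number_of_instructors))  # 150.0
    # Trainer/scheduling efficiency: seat hours conducted per instructor hour.
    print(productivity(seat_hours_conducted, instructor_hours))       # 1.875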

A Simple Guide for Creating Training Metrics

To start your own training metrics initiative, or to improve your current measurement practices, ask yourself the following four questions, then relentlessly pursue their answers. Focus on data and metrics that allow you to identify indicators of training success and optimize training results.

1. What inputs are important to you for measuring your training operation’s success? (Think “effort.”)

2. What outputs are also useful for measuring your training operation’s success? (Think “results,” but don’t worry about whether they match your input units of measure…yet.)

3. Can you collect input and output data that are accurate? (Inaccurate measures will corrupt your metrics and ruin the credibility of related measurement initiatives.)

4. Can you collect your input and output measures in a reasonable time period? (Don’t waste your time collecting data that can’t be used.)

If you have specific questions about how your training team is performing, try combining your input and output measures in a logical way to answer them. If you don’t know where to start, try combining your output and input measures in various ways and see what training metrics the combinations yield. Many ratios of outputs to inputs provide valuable information and insight, often with surprising results. Be creative, but always strive for meaning.
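
One way to explore those combinations is sketched below in Python; the measure names and values are hypothetical, and exhaustively pairing every output with every input is simply my illustration of the “try various combinations” advice.

    from itertools import product

    # Hypothetical quarterly measures: outputs are "results," inputs are "effort."
    outputs = {"students trained": 430, "courses delivered": 18, "seat hours conducted": 1_200}
    inputs = {"instructor hours": 640, "development hours": 210, "training dollars": 55_000}

    # Form every output/input ratio, then judge which candidates carry real meaning.
    for (out_name, out_value), (in_name, in_value) in product(outputs.items(), inputs.items()):
        ratio = out_value / in_value
        print(f"{out_name} / {in_name}: {ratio:.3f}")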


A Few Cautions

It’s easy to get training-metric crazy once the fundamentals of designing meaningful training metrics are understood and practiced a few times. There are, however, a few pitfalls to avoid as you begin developing your metrics library.

“Meaning” must rule. If you can’t easily explain your training metric to another
manager, the metric is probably either too complex or too esoteric. Either way,
it’s useless. Drop it and find another metric to replace it.

Consider the cost of collecting the output/input data. Just because you can
doesn’t mean you should. Some metrics that describe training are very
interesting but very expensive to collect. Weigh the cost of the data collection
against how you might use the metric to improve your training efforts. Calculate
whether you can even recoup the cost of the data collection as a function of
potential improvements directly related to the training. Make informed—and
numbers-based—decisions!

Ensure your data are accurate. If you are not collecting the data yourself,
validate where the data are coming from. If you are relying on other
departments to provide data, find a way to validate their data. Take nothing for
granted about the accuracy of the data you are using. Well-intended suppliers of
data can be inadvertently providing mischaracterized information.

Beware of spurious relationships. When you are interpreting your metrics, be particularly careful about making inferences that your data don’t support. For example, if you’re reviewing training data and notice that as the amount of training dollars increases, so does retention, are you justified in concluding that there is a causal relationship between the two variables? Not without a lot more study, you’re not! It’s easy to want to link the two activities causally, but resist doing so until you’ve researched the relationship more extensively.


Remember, there’s a near-perfect relationship between the sale of ice cream and crime rates in all cities. But halting the sale of ice cream will not succeed in driving down crime. (Can you figure out the common factor between the two? If not, the answer appears in a short note at the end of this paper.) In most instances, the not-so-obvious conclusion will be that “more data are required.” But that’s a good thing, because it spurs the training manager to interact more with, and understand more deeply, the data sources behind the metrics.
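
The point about a hidden common factor can be made concrete with a small Python sketch using synthetic data; the numbers are invented, and temperature is wired in as the driver of both series, so the strong raw correlation between them is entirely non-causal.

    import random

    random.seed(42)

    # Synthetic daily data: temperature drives BOTH ice cream sales and crime,
    # so the two outcomes correlate strongly without any causal link between them.
    temps = [random.uniform(40, 95) for _ in range(365)]        # daily temperature (F)
    ice_cream = [3.0 * t + random.gauss(0, 15) for t in temps]  # sales rise with heat
    crime = [0.8 * t + random.gauss(0, 10) for t in temps]      # incidents rise with heat

    def correlation(xs, ys):
        """Pearson correlation coefficient, computed from scratch."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
        sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
        return cov / (sd_x * sd_y)

    # High correlation between ice cream and crime, yet banning ice cream
    # would change nothing: temperature is the common factor.
    print(f"corr(ice cream, crime): {correlation(ice_cream, crime):.2f}")
    print(f"corr(temperature, ice cream): {correlation(temps, ice_cream):.2f}")
    print(f"corr(temperature, crime): {correlation(temps, crime):.2f}")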

Keep your training metrics uncomplicated. Metrics should be immediately understandable, obvious, and actionable. Look for opportunities to develop metrics that “converge” on an answer. That is, look for training measurements that provide you the same, or similar, information in several different ways. Convergence is another form of validity, and all training metrics require some form of validation to be of use to the training manager, the team being trained, and the training manager’s superiors.

What’s Next?

Okay, so what are you going to do with your new-found skills and understanding
of how to build measurements that describe your training efforts? Simple:

Measure. Monitor. Manage.

If you’ve bothered to collect data and calculate any metrics, you’re doing it for a
reason—to manage your training department better. Start by selecting the key
inputs/outputs for your training program(s) and activities, and begin
measuring them. Then, create baseline data and measure over a period of time.
By definition, you’ll be monitoring your training metrics. Finally, by using your
metrics intelligently, you’ll find yourself managing what you are able to manage.
Not every aspect of your training department will be in your control. That’s
okay. There are enough activities within your control for you to make a
difference with your metrics.
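
A minimal sketch of the measure, monitor, manage loop in Python; the metric, baseline period, and alert threshold are hypothetical choices a training manager might make, not prescriptions from this paper.

    # Measure: the chosen productivity metric for several consecutive months
    # (hypothetical values: seat hours conducted per instructor hour).
    monthly_metric = {"Jan": 1.60, "Feb": 1.72, "Mar": 1.55, "Apr": 1.31, "May": 1.28}

    # Monitor: compare each month against a baseline built from the first three months.
    baseline_months = ["Jan", "Feb", "Mar"]
    baseline = sum(monthly_metric[m] for m in baseline_months) / len(baseline_months)
    alert_threshold = 0.85 * baseline  # flag drops of more than 15% below baseline

    # Manage: flag the months that deserve a closer look.
    for month, value in monthly_metric.items():
        status = "investigate" if value < alert_threshold else "ok"
        print(f"{month}: {value:.2f} (baseline {baseline:.2f}) -> {status}")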


What are the instant and obvious benefits of adopting this simple practice of
measuring, monitoring, and managing? You’ll be able to convey your training
department’s value to your organization (and by proxy, YOUR value) easily and
unambiguously. You’ll find yourself in a position where you are able to make
adjustments to your training programs more quickly to respond to discovered
or alleged inefficiencies. You’ll develop a vernacular that is common to senior
management—the language of numbers. You’ll become supremely confident in
your ability to understand your own training department and describe its
benefits to others. Now, isn’t that last reason alone motivating enough to begin
to explore and apply training metrics to your department’s training activities?

A Final Note

There are a number of excellent resources available to help you get started in your discovery (or re-discovery) of the domain of training metrics. Consider the Sage (publisher) Applied Social Research series for deeper background information. Of course, any Harvard Business Review “practitioners” book would be a good place to search. Also consider books or articles related to Six Sigma, Total Quality Management (TQM), activity-based costing (ABC), and economic value added (EVA).


About the Author

For more than 20 years, Dr. Joe Thomas has created and evaluated technology-
based learning and assessment solutions for education and workplace training.
Joe is a former member of senior management at NIKE, Inc. where he worked
domestically and internationally to implement Change Management and training
solutions in Asia and Europe. Prior to joining NIKE, he served as a senior
industrial psychologist with Federal Express Corporation where he led the
Testing and Training Technologies department’s efforts in developing scientific
measurement models and evaluating employee competencies for all customer
service-based jobs.

Joe earned a B.A. and M.A. in experimental psychology from California State
University at Fullerton, and an M.A. and Ph.D. in quantitative psychology from
the Johns Hopkins University. Because of his passion for teaching, he has
maintained a number of adjunct teaching positions during his professional
career. He also maintains memberships in many professional organizations, and
is a highly-regarded presenter in the topics of innovative eLearning, assessment
methodologies, and performance improvement.

About Orgwide Services™

Orgwide Services™ was founded in response to a need for a more efficient and
cost-effective way to deliver mission-critical training, news, and information
across organizations. Leveraging our expertise and extensive experience across
multiple industries, we will help you significantly compress the time it takes to
achieve the transfer of knowledge across your geographically dispersed and
intricate organization. Orgwide Services™ is your one-stop partner for
eLearning and classroom training, team-member surveys, and internal communication development and delivery. We’re with you from “Needs Assessment through Evaluation,” helping to enhance the ways in which you engage your audience, exchange ideas, and empower your entire organization with the certainty to succeed.

To learn more about Orgwide Services™, please contact us at 901.850.8190 or visit us at www.orgwide.com.

Answer to the ice-cream question: As summer temperatures rise, so do both the sale of ice cream and crime rates. While these two variables show an uncanny correlation, one of the highest in the statistical literature, they are not causally related.

© 2011 OrgWide Services. All rights reserved.