PART III
Developing a Model

CHAPTER 10
Designing Competency Studies
This chapter describes three alternative methods for the design of competency
studies:

A. Classic studies based on criterion samples of superior and average performers

B. Short studies based on expert panels

C. Studies of single incumbent and future jobs where there are not enough
jobholders to offer samples of superior and average performance
PREPARATORY WORK
This process refers to the steps an organization takes to identify its goals and
critical success factors and to develop strategic plans for reaching the goals.
For example, if Company X identifies strategic business unit Y as a significant
source of much of the firm's future growth, the presumption is that the growth
depends on the firm's ability to attract, develop, and retain innovative techni-
cal managers with entrepreneurial skills for unit Y.
Organizational Structure/Design
This factor refers to how the firm will organize itself to carry out its plans,
with the emphasis on identifying critical jobs. Critical jobs are those value-
added "make or break" positions held by the people who will make the biggest
difference in whether the firm succeeds. Typically these are the jobs that
define strategy and direction, or carry responsibility for achieving major
strategic outcomes, for controlling critical resources (labor, capital, and
technology), or for managing relationships with key markets or customers.
Competency studies (and human resource management in general) are most
cost-effective when they focus on these "value added" jobs.
This preparatory step is most critical for full-scale classic studies (Method
A), which are relatively expensive. Expert panel based studies (Method B) are
more suitable for analysis of large numbers of less critical jobs.
[Figure: Classic competency study design. The flowchart runs from defining
performance effectiveness criteria (hard data such as sales, profits, and
productivity measures; supervisor nominations; peer ratings; subordinate
ratings, e.g., of managerial styles and morale; customer ratings), to
identifying a criterion sample, to collecting data, and on to applications:
selection, training and professional development, performance appraisal,
succession planning, and evaluation of training and professional development
programs.]
For military officers, good criteria would be unit performance outcomes, such
as combat inspection scores or reenlistment rates. For human service workers,
the best criteria are client outcomes. For example, for alcoholism counselors,
the best measure of performance is percentage of clients who are still "dry,"
regularly employed, and have had no arrests for drunkenness in the year fol-
lowing counseling.
Sometimes it is necessary to develop criteria for a job. For example, to iden-
tify effective doctors, a measure of accurate diagnosis and treatment could be
developed. A panel of expert physicians would evaluate the symptoms of a
group of patients and formulate a diagnosis and treatment plan. A sample of
doctors could then be asked to examine the same patients. The criterion for
superior doctors would be how close their diagnoses and treatment plans for
this group agreed with those of the experts.
Competitive simulations can also be used as performance criteria. An ex-
ample would be military units participating in highly realistic war games. The
leaders of units that consistently win mock battles could be considered supe-
rior officers.
If hard criteria aren't available, nominations or ratings by bosses, peers,
subordinates, and/or customers and clients can be used. Research 1
indicates
that peer ratings have high criterion validity, that is, they do predict hard job
performance outcomes. Studies consistently show that the subordinates of su-
perior managers report higher morale, as measured by organizational climate
or job satisfaction surveys. 2
Defining effectiveness criteria, and the right effectiveness criteria, for a
job is extremely important. A competency model based on superior performers
cannot be any better than the criteria on which these people were selected.
If the wrong criteria are used (for example, personal popularity instead of
actual job performance), the model will identify the wrong competencies.
The job effectiveness criteria or ratings developed in Step 1 are used to iden-
tify a clear group of superstars and a comparison group of average performers.
A third group of poor (ineffective or incompetent) performers can also be
identified if the purpose of the study is to establish competency levels that
predict minimal success in a job (e.g., to set a cut-off score for hiring).
In some organizations, it is politically impossible to get a sample of people
doing a poor job. Supervisors insist that "there's no such thing as a bad offi-
cer," that "poor doctors don't work at this hospital," or that they "fire people
who perform badly." Sometimes it is even difficult to get people to identify
average colleagues. When told "all of our people are good," the interviewer
can agree gently, but say, "Yes, but some must be especially outstanding.
Who are the best?"
The hard criteria and nominations and ratings gathered in Step 1 are in-
valuable in identifying a good criterion sample. Nominations all but force
identification of two or three top people. The best way to be absolutely sure
you have identified the best superstars is to use several criteria and select only
those people who are rated highly on all the criteria.
Some employees come out well on hard criteria such as sales, but are so
insensitive or politically naive that they anger their managers or co-workers.
Others may be rated highly on the basis of their personality but really don't
like their jobs. These people are not likely to get promoted or even keep their
jobs. The real superstar is someone who does well on all the hard criteria and
who the boss perceives as a "comer" and who is genuinely liked and respected
by co-workers, subordinates, and customers.
Sometimes the real stars are those who do well on two different criteria.
For example, some Navy officers achieved high inspection scores by working
their people so hard that most of them left the service as soon as the ship
returned to port. During periods when the Navy has severe personnel short-
ages, a very important measure of a good officer is the unit retention rate:
the number of sailors working for the officer who choose to stay in the service.
The real mark of a superstar Navy officer is top scores on all inspections and
a crew with high morale and retention. If an officer was high on both these
measures and was rated highly promotable by his or her boss, the officer was
put in the superstar group.
Ideally, each job study sample should include at least 20 subjects: 12 su-
perior and eight average performers. This number permits simple statistical
tests of hypotheses about competencies (such as t-tests, chi-square, ANOVA,
or Discriminant Function Analysis of the difference between mean level of
competence shown by superior versus average subjects). Smaller nonstatisti-
cal samples (e.g., six superior and three average performers) can provide
valuable qualitative data on the expression of competencies in a given orga-
nization, such as how influence is used effectively in a specific job. Small
samples should include two superior performers for every average performer. A
rule of competency research is "you always learn most from your
superstars."
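For readers who want to see what such a test looks like in practice, here is a minimal sketch in Python of a t-test comparing how often a coded competency appears in the interviews of superior versus average performers. The scores, group sizes, and use of SciPy are illustrative assumptions, not data from any study described here.

# Hypothetical competency frequencies coded from interviews (counts per person).
# Group sizes follow the 12 superior / 8 average design mentioned above.
from scipy import stats

superior = [7, 9, 6, 8, 10, 7, 9, 8, 6, 9, 8, 7]   # superior performers
average = [4, 5, 3, 6, 4, 5, 4, 3]                  # average performers

# Welch's t-test does not assume equal variances in the two groups.
t, p = stats.ttest_ind(superior, average, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value would suggest the competency distinguishes superior
# from average performers in this illustrative sample.

The same comparison could of course be run as a chi-square or discriminant function analysis; the t-test is shown only because it is the simplest case.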
Data Type (a) Behavioral Event Interviews. Superior and average perform-
ers are interviewed using the in-depth "Behavioral Event Interview (BEI)"
technique developed by David C. McClelland, a professor of psychology at
Harvard University, and colleagues at McBer and Company. 3
Asking people to focus on the most critical situations they have faced produces
data on the most important skills and competencies. Interviewees tell vivid
"short stories" about how they handled the toughest, most important parts of
their jobs, and, in doing so, reveal their competencies to do the job.
Freedom from Racial, Gender, and Cultural Bias. The BEI approach has
been adopted by many organizations because it is predictively valid
without being biased against minority candidates. 5
Missed Job Tasks. Because the BEI focuses on critical job incidents, BEI
data may miss less important but still relevant aspects of a job.
Impractical for Analysis of Many Jobs. Labor time, expense, and exper-
tise requirements make BEI studies impractical for analyzing a large
number of jobs.
Data Type (b) Expert Panels. A panel of experts is asked to brainstorm per-
sonal characteristics employees need to perform the job at an adequate (mini-
mally acceptable, or threshold, level) and a superior level.
These experts can be supervisors for the positions being studied, superstar
performers in the job, or outside experts, perhaps human resource profession-
als who know the job well. (Average incumbents should not be included in
these panels because, by definition, they do not know what it takes for supe-
rior performance.) The expert panel prioritizes the characteristics according
to importance to job success.
ment methods, and variables; and their involvement can develop consen-
sus about and support for study findings.
choice was mentioned only a few times. The officers did not have to face
moral issues very often, did not find these decisions critical in doing
their jobs, or (what the study indicated) perceived them to be managerial
rather than moral issues. A variant of the BEI method was used to find
out just what ethical and moral competencies military officers actually
used. Instead of the usual procedure of letting interviewees focus on
what they thought were their most critical job incidents, this time inter-
viewees were asked to tell about the hardest moral or ethical decision
they had to make in their career. Analysis of those incidents again
showed that management competencies were the real issue.
Omission of Critical Competency Factors for Which Panel Members
Lack Psychological or Technical Vocabulary. For example, superior fur-
niture salespeople have a competency called "eliciting visual and tac-
tile imagery," which means they think in terms of color (mauve, taupe,
rust) and textures (nubby, silky, scratchy). They also get their cus-
tomers to think in these terms and thus can steer the prospect to
specific pieces of furniture. Expert panel members may not know a
concept such as "eliciting imagery" and hence would miss this impor-
tant competency.
Experience indicates that experts' hypotheses about the competen-
cies needed to do a job are about 50 percent accurate, when compared
with BEI data. Experts suggest competencies that are not validated by
BEI data 25 percent of the time, and also miss competencies found in
analysis of BEI data 25 percent of the time. For this reason, competen-
cies are best verified by BEI or direct observation data.
Data Type (c) Surveys. Expert panel members and others in the organiza-
tion rate competency items (competencies or behavioral indicators) according
to importance in effective job performance, how frequently the competency is
required, and the like.
Typically, a survey focuses on specific skills one at a time and asks:
1. How much the skill distinguishes superior from average performers. For
example, since achievement orientation distinguishes superstar from
average salespeople, this would be an important competency to select
for or teach potential salespeople.
2. Whether failure is likely if employees don't have the skill. For example,
honesty and basic numeracy are important competencies for bank
tellers.
tiative are hard to develop, for example, while specific product knowl-
edge is easier to teach.
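As a rough illustration of how such survey ratings might be summarized, the following Python sketch averages each competency's "distinguishes superior from average" ratings across respondents. The competency names, rating scale, and numbers are invented for the example, not taken from any study cited here.

# Hypothetical expert-survey ratings: how much each competency distinguishes
# superior from average performers (1 = not at all, 5 = a great deal).
ratings = {
    "Achievement Orientation": [5, 4, 5, 5, 4],
    "Customer Service Orientation": [4, 4, 3, 5, 4],
    "Product Knowledge": [3, 2, 3, 3, 2],
}

means = {comp: sum(r) / len(r) for comp, r in ratings.items()}

# List competencies from most to least distinguishing.
for comp, mean in sorted(means.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{comp}: mean rating {mean:.1f}")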
Advantages of Surveys
This method facilitates quick and cheap collection of sufficient data for
statistical analyses. Large numbers of jobs can be studied efficiently and
at different times to identify trends in competency requirements.
Filling out a survey permits many employees to have an input and builds
consensus for study findings.
Disadvantages of Surveys
Data are limited to items and concepts included in the survey and there-
fore often miss competencies not included by those who constructed the
survey. Surveys cannot identify new competencies or provide detailed
information about the nuances of competency expressed by different
people in different parts of the organization. Survey data may reinforce
folklore or motherhood competencies not predictive of performance.
The method can be inefficient. Surveys often ask the same 100 questions
of everyone from the CEO to the janitor, when only a subset of items is
relevant to any one job.
great detail each task, function, or action the jobholder performs in a given
period of time. Data are collected using written questionnaires, time logs, in-
dividual or panel interviews, or direct observation.
Produces very complete job descriptions useful for job design, compen-
sation analysis, and by inference, some competency analysis. For exam-
ple, specification of the technical tasks required in a job can be used to
deduce the cognitive skills needed for the job.
Provides characteristics of the job rather than those of the people who do
the job well.
Task lists tend to be too detailed (e.g., 3,002 motions needed to drive a
car) to be practical and do not separate the truly important tasks from
the routine activities.
Data Type (f) Direct Observation. Employees are directly observed per-
forming (critical) job tasks, and their behaviors are coded for competencies.
The Army trains soldiers in very realistic mock battles called "REALTRAIN"
exercises. One group of soldiers attacks a hill, and another defends it. Soldiers
wear clothing that changes color if they are "hit" by opponents, whose weapons
fire a harmless beam of light.
Observing a REALTRAIN battle from the hill of the defending unit, one of
the authors saw, on one side, a lonely private muttering, "Man, we gonna get
blown away, we gonna get BLOWN away."
Asked why, he said, "No one's covering my flank, this side of the hill. The
enemy's gonna come right up here and wipe us out."
Asked why he didn't tell the Captain, he said, "Wouldn't do no good; that
dumb SOB never listens to a word I say."
Sure enough, 30 seconds after the battle started, the attackers came right up
the undefended side of the hill and "blew away" the defenders.
cal incidents a year on their jobs. It will take a lot of observer time to
have a chance of seeing something important. Like job task analysis, ob-
servation risks sweeping up a lot of routine "chaff" to find a few grains
of competency "wheat."
In this step, data from all sources and methods are analyzed to identify the
personality and skill competencies that distinguish superior from average per-
formers. This process is called hypothesis generation, thematic analysis, or
concept formation.
Two or more trained analysts start by laying data about superior and aver-
age performers side by side. Then they search for differences —motives, skills,
Diplomat A
Despite the troubles we had with them, I never stopped liking and respecting
[the people of country X]. I could understand that they needed to rebel against
us, to stand up to us, even throw us out, even when they wanted to burn my
library! I told them that and invited them to use our facilities to hold some of
their meetings. I've got good contacts with some of the student leaders now.
And we haven't been burned down yet!
Diplomat B
I finally came
to the conclusion that [people of country X] were just stupid,
dumb, and unmotivated. I kept trying to schedule English classes, so these kids
could learn enough to go to the United States to study, which is what they all
said they wanted. But fewer and fewer showed up. So finally I canceled the
classes. What can you do with people like that?
The differences are obvious. The superior diplomat expresses positive ex-
pectations and accurate empathy toward others, and the average performer
does not.
These competencies predict superior performance. In the State Department
study, analysts found these same negative and positive patterns in several
hundred superstar and average diplomats' stories.
Analysts keep refining the definition of competencies seen in behavioral
events until each can be recognized with acceptable interrater reliability.
"Interrater reliability"means that two or more people can read the same
story and agree on whether or not it contains a competency. Stories are re-
peatedly rated or scored until interrater reliability meets desired standards.
Empirical coding of interviews can be done with high interrater reliability
[Rs = .80-.90] 7 and provides quantitative data that can be used in standard
statistical tests of significance.
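To make the idea of interrater reliability concrete, the following Python sketch compares two coders' judgments of whether each story contains a given competency. The codings are hypothetical, and percent agreement with Cohen's kappa is simply one common way to express agreement; the text above reports reliability as correlations (Rs).

# Hypothetical presence/absence codings of the same ten stories by two coders.
rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

n = len(rater_a)
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa corrects raw agreement for the agreement expected by chance.
p_yes_a = sum(rater_a) / n
p_yes_b = sum(rater_b) / n
p_chance = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
kappa = (agreement - p_chance) / (1 - p_chance)

print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")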
The final task is to develop a behavioral codebook that describes the com-
petencies predictive of job performance. This codebook defines each compe-
tency and the criteria for scoring it and provides examples from BEIs of when
the competency is noted or not. Competencies scaled in just-noticeably-different
(JND) intervals permit precise definition of job competency
requirements, as well as assessment of individuals at any level in a job family.
The behavioral codebook provides the competency model for the job. This
model can be used for selection, training, performance appraisal, career
planning, and the like.
A short job competency assessment (JCA) process using primarily data from
an expert panel consists of these steps (see Figure 10-2):
[Figure 10-2. The short job competency assessment process. An expert panel
identifies (brainstorms), for the current and the future job: accountabilities
and results measures (used to identify the criterion sample); baseline
(threshold) competencies essential to do the job and the competencies that
distinguish superior performers; and obstacles to performance. The panel then
fills out a Competency Requirements Questionnaire (CRQ) on the job and
responds to Expert System questions as a group, reaching a consensus when
there are disagreements. Behavioral Event Interviews are conducted, and the
findings are validated.]
For each target job or job family, knowledgeable human resource specialists,
managers, and superior job incumbents identify:
Data from the expert panels, surveys, expert system, and BEIs are content ana-
lyzed to identify behaviors and personality characteristics that (a) distinguish
superior from average job incumbents, or (b) are demonstrated by all incum-
bents adequately performing the job.
Outputs of a Short JCA. The outputs of a short JCA are one or more job
description "Competency Models" that include:
b. [Optional] Career paths for the job, with some estimate of when,
where, and how key competencies for the job are developed.
Future Jobs
Three approaches for studying future jobs (in inverse order of desirability) are
(a) expert panel "guesstimates," (b) extrapolation from job elements with
Expert Panels. An expert panel analysis of future jobs is similar to that de-
scribed for the Short Competency Model Process. Experts first list the ac-
bine elements of diplomatic and high-tech sales jobs. Competency models al-
ready exist for both diplomats and high-tech salespeople. From the diplomatic
model, competencies for the "technical ambassador" job included "cross-cultural
interpersonal sensitivity," "overseas adjustment" (adaptability, liking for
novelty, resistance to stress caused by living overseas), and "speed of learning
(foreign) political networks"; from the high-tech sales models, competencies
included "achievement orientation" and "consultative selling skills."
points in time.
For example, "knowledge engineers" (persons who debrief human experts
and translate their expertise into artificial intelligence "expert system" com-
puter programs) now represent fewer than 1 percent of employees in data pro-
cessing but are expected to make up 20 percent of data processing jobs after
the year 2000. A competency study might show superior "knowledge engi-
neers" have both higher level cognitive competencies such as pattern recogni-
tion, conceptualization, and analytic thinking (the ability to recognize and
state problem-solving algorithms used by human experts in computer-
programmable "if-then" rules) and the interpersonal interviewing skills
needed to establish rapport with and debrief subject matter experts. 8 These
findings suggest selection and training criteria for EDP personnel to be hired
and developed over the next decade.
Even if an organization lacks people with the competencies needed to do a
future job, people may be doing the job in another organization. For example, a
sleepy neighborhood thrift institution wanted to become a marketing-oriented
commercial bank. Its existing branch managers were kindly sixtyish gentlemen
who stamped little old ladies' passbooks and chatted about grandchildren. The
thrift wanted marketing-oriented branch managers who would sell savings cus-
tomers additional financial services (i.e., grab the little old lady and sell her a
trust for her grandchildren).
The thrift did not have anyone with the competencies of an aggressive,
cross-selling branch manager. So it gave a grant to a banking industry associa-
tion, which then hired a consulting firm to study superior branch managers in
banks that the thrift had identified as the best marketers in its area (i.e., the
superstars of its future competitors).
A final question on competency analysis of future jobs is whether characteristics
that predict superior performance in 1990 will still predict superior
performance in 2001. Coding of historical data sources 9 and longitudinal
studies in the U.S. Navy between 1976 and 1987 10 suggest that, while the
behavioral indicators (e.g., "uses computer to conduct factor analyses") for
competencies may change, the underlying competencies (e.g., "conceptual
thinking") do not. Achievement Orientation has the same predictive accuracy
for economic activity in Greece in 300 B.C. and in many cultures in 1991,
although business practices have obviously changed. Random passages of
Caesar's Gallic Commentaries, ca. 30 B.C., coded for leadership and management
this job were identified by conducting BEIs with superiors (the hospital's CEO
and directors), peers (other functional and operating vice presidents), key sub-
ordinates, and customers (union leaders and prominent members of the commu-
nity who dealt with human resource issues with the hospital). Respondents were
asked to identify critical incidents in which they had seen previous VPs of hu-
man resources be particularly effective or ineffective. If they could not think of
incidents involving a previous job incumbent, respondents were asked for inci-
dents involving any health care VP of human resources.
For example, asked to cite an instance of effective performance, after a
long pause, the CEO said:
Well, there was this very tense meeting with the nursing staff, who were about to
go out on strike ... X (the previous HR VP) came in and cracked a joke. Ev-
eryone laughed, and that sort of broke the ice . . . the meeting was less tense
after that.
The worst thing I ever saw X do was his absolutely disastrous presentation at our
top management "vision for the future" retreat. Everyone was supposed to
present where he or she thought we should be going in the next 10 years, based
on labor force demographics, economic, technological, industry, market, etc.
trends. X was the kind of person who lives in the "now"; I don't think he could
think as far ahead as next week. So he had his staff write a speech for him, but
he didn't bother to read the speech before trying to give it! He embarrassed
himself and all of us. And then, when he got negative feedback, his response
was to go back and punish his staff for writing a lousy speech!
Asked for critical incidents involving any health care HR VP he had seen as
particularly effective, the CEO said:
The best one I know —the head of a university health system — really thinks
ahead and has pulled off some really incredibly innovative staffing. He couldn't
get enough qualified nurses here, so he thought of recruiting from Indian medi-
cal and nursing schools: He found he could get first-rate people who thought a
chance to come to the United States and make $12,000 a year was a "died and
gone to heaven" opportunity. He even worked a deal with Immigration to get
them green cards (U.S. residency and work permits) by lobbying local congres-
sional representatives that only this way would they get better care for their el-
derly constituents . . ."
NOTES
1. Lewin, A. Y., & Zwany, A. (1976), Peer nominations: A model, literature critique, and a
paradigm for research, Springfield, VA: National Technical Information Service; Kane, J., &
Lawler, E. (1979), Methods of peer assessment, Psychological Bulletin, 85(3), 555-586.
2. Caldwell, D. F. (1991, April 12), Soft skills, hard numbers: Issues in person-job/person-
organization fit, Paper presented at the Personnel Testing Conference of Southern Califor-
nia Spring Conference. Ontario, CA.
3. McClelland, D. (1976), A guide to job competence assessment, Boston: McBer.
4. Flanagan, J. C. (1954), The critical incident technique, Psychological Bulletin, 51(4), 327-358.
5. Austin, A. W., Inouye, C. J., & Korn, W. S. (1986), Evaluation of the GAEL Student Potential
Program, Los Angeles: University of California, Los Angeles.
38290-38309.
7. Boyatzis, R. (1982), The competent manager, New York: Wiley. Also see data on reliability
of interview coding in Chapter 18.
9. McClelland, D. C. (1976), The achieving society, New York: Irvington; Zullow, H. M., Oettingen, G.,
Peterson, C., & Seligman, M. E. (1988), Pessimistic explanatory style in the historical record,
American Psychologist, 43(9), 673-682.
CHAPTER 11
Conducting the Behavioral Event Interview
This chapter explains how the Behavioral Event Interview (BEI) method dif-
fers from traditional interviewing methods and provides step-by-step instruc-
tions on how to conduct a BEI.
The Behavioral Event Interview is the heart of the Job Competency Assess-
ment process. BEI data are the richest source of hypotheses about competen-
cies that predict superior or effective job performance. To do competency
research, it is essential to know how to conduct and analyze a BEI.
In addition, properly conducted BEIs can be used as psychometric tests to
assess competencies for selection and other human resource applications (see
Chapter 18).
view probes such as "Tell me about your background," "What are your
strengths and weaknesses?" "What jobs have you liked and not liked?" are
ineffective for two reasons.
First, most people don't know what their competencies, strengths and weak-
nesses, or even their job likes and dislikes really are. It's not unusual to find that
managers who earnestly believe their greatest strength is "dealing with people"
are disliked and distrusted by their co-workers. Artists who say they "hate
business" and think selling is "degrading" can become first-rate salespeople if
they are high in achievement motivation. Harvard psychologist Chris Argyris
has shown that people's "espoused theories of action" (what they say they do)
bear no relation to their "theories in use" (what they actually do). 2
Second, people may not reveal their real motives and abilities. Most inter-
view questions are "leading" and most people can give the "socially desirable"
answer: what they think the interviewer wants to hear. As a result, people's
self-reports of background, strengths, and preferences do not provide reliable
information about their competencies.
The basic principle of the competency approach is that what people think or
say about their motives or skills is not credible. Only what they actually do, in
the most critical incidents they have faced, is to be believed. The purpose of
the BEI method is to get behind what people say they do to find out what they
really do. This is accomplished by asking people to describe how they actually
behaved in specific incidents. The following interview examples may help ex-
plain this difference.
Example 1
Most managers in industry have been told for years that they should be
"Theory Y, democratic-participative" leaders. They should listen, let sub-
ordinates participate in decisions, and manage by consensus. This is their
espoused "theory of action," how they think they manage. An interview
with a manager might go:
Example 2
In the military, the espoused theory of management is just the opposite.
Leaders are expected to be authoritarian, to give direct orders that are
immediately obeyed. An interview with a military officer might go like
this:
Navy Officer: When you take over a command, you have to step on them
hard right off the bat. It's like a kindergarten: If the teacher doesn't
show who's boss the first day, the kids won't respect her and she'll
never have any control of the class. So I come in tough. I scare them
and punish anyone who doesn't get the message. I think if you don't
create a little bit of fear, you'll never get any respect.
Interviewer (using the BEI method): Can you give me a specific exam-
ple of a time you used this approach?
Officer: Sure. When I took over here, the ship had just come out of a
major overhaul in the yard. It was still torn up something terrible, dirt
and debris everywhere. As weapons officer, I've got 33, maybe 34,
spaces to maintain. I only had four men, and the skipper was on my tail
to get everything shipshape in a couple of weeks for a major inspection.
My guys were working 16 hours a day, down on their hands and knees
in 100 degree heat, scraping and painting. They were totally demoral-
ized —they thought it was hopeless.
Interviewer: What did you do?
they had any ideas. One of my chiefs said he knew where there were
some other sailors from another department who weren't doing anything
who we might steal. Which I did. We got a realistic plan together which
I sold to the skipper. Also I pitched in myself —
showed 'em I wasn't too
proud to do some work. You gotta be visible. Once they got about 3-4
spaces cleaned up, they saw it wasn't hopeless and morale started going
up. It's sky high now —
and we maxed the inspection!
Interviewer: That's an example of "stepping on them hard right off the
bat?"
Officer: Sure the hell is. They knew who was boss from the moment I
arrived.
By asking for an actual incident and a very detailed example of real behavior,
the BEI method gets much closer to the truth.
The Fact Finder. The fact finder asks for specific information about peo-
ple's background. Typical probes are "What was your college grade point av-
erage?" "How many people did you manage?" "What type of course did you
design?"
The problem with facts of this type is that they say little about a person's
motives, values, self-concept, or cognitive skills. They reveal nothing about
why the person received good or bad grades, what motivates him or her, or
how he or she behaves in critical situations. Fact-finding probes control the
responses of the interviewee. These may be data, but they are not about many
important competencies.
The Therapist. The therapist asks about people's underlying feelings, atti-
tudes, and motives. Typical probes are "Tell me about yourself . . ." followed
by "reflections" of what the interviewee is saying: "So in that situation you
felt . . ."
Data from therapist interviewers depend very much on the therapist's inter-
pretation of the interviewee's reactions, and these interpretations are notori-
ously unreliable. "Feeling" data usually say little about what a person can do
or actually does. A person might feel negatively about a task but do it well
because he or she is high in achievement motivation, or is highly skilled. An-
other person may feel great about a task and his or her ability but, in fact, lack
both motivation and skill for that task. From a competency assessment stand-
point, the feeling may be irrelevant. The competencies are the achievement
motive and the skill — and the therapist will miss these.
The Theorist. The theorist asks people for their espoused beliefs or values
about how they do things. Typical probes are "Why did you . . . ?"
The problem with this approach is that it gets theories or after-the-fact ra-
tionalizations of why a person thinks he or she did something, not actual be-
havior. As shown in the preceding examples, people's theories about what they
do often bear scant relation to their actual behaviors or competencies.
Whenever someone starts with "My general approach to management (or any-
thing else) is . . ." be very skeptical; ask for a specific example.
The objective of the BEI is to get very detailed behavioral descriptions of how
a person goes about doing his or her work. The interviewer asks other ques-
tions, but these are either designed to set the stage or to lead people to provide
critical-incident "short stories." The interviewer's job is to keep pushing for
complete stories that describe the specific behaviors, thoughts, and actions the
interviewee has shown in actual situations.
Because most personnel professionals have been trained in one of the tradi-
tional approaches, the BEI may not be as easy as it sounds. Interviewing habits
can be hard to break, particularly for psychologists and others trained in coun-
seling methods. 4
1. Know Who You Will Be Talking To. Learn the name of the person to be
interviewed and how to pronounce it correctly, his or her job title and some-
thing of what the job involves, and what the person's organization does.
Interviewers should not, however, know whether the person they are inter-
viewing is rated as a superior or average performer. This can bias the inter-
view. If the person is known to be a superstar, you may ask leading questions
that give them an unequal opportunity to say how good they are. If they have
been identified as average, you may not interview them with equal interest or
support and thus limit their opportunity to provide useful data.
2. Arrange a Private Place and 1½-2 Hours of Uninterrupted Time for the
Interview. The interview should not take place where others can overhear you.
It may be best for the interview to be away from the interviewee's office and
interruptions from the telephone or visitors.
interviewer's version of the facts than the interviewee's. BEI transcripts can
also provide a valuable source of training materials, such as case studies, role
plays, and simulations.
When interviewer, respondent, and transcription time are taken into ac-
count, each BEI represents an investment of several hundred dollars. It is well
worth your while to use a good tape recorder, with fresh batteries and tapes.
Check it before the interview and then test with the interviewee to be sure it is
operating correctly. Labeling of tapes on the spot can prevent mix-ups later.
4. Know What You Will Say. Memorize the scripts provided in the following
sections for each step of the Behavioral Event Interview. Interviewers have
found that preparing a prompt to remind them of what to say at each step is helpful.
BEIs contain five steps. Most of the interview should focus on Step 3 —the
Behavioral Events themselves. The steps are as follows:
job —two or three "high points" or major successes, and two or three
"low points" or key failures.
Here are detailed objectives and statements or questions that can serve as a
script for each step of the BEI. Pointers on techniques and dealing with prob-
lems are provided for each step.
Step 1. Introduction and Explanation. The real purpose of this step in the
BEI is to establish a sense of mutual trust and good will between yourself and
the interviewee so he or she is relaxed, open, and ready to talk to you. Specific
objectives are:
The purpose of this interview is to (or more personally, "I've been asked to try
to") find out what it takes to do your job. The way we do this is by asking
people like you, the ones who are actually doing the job, how you do it. You
have been selected by (the organization, your supervisor, etc.) as someone who
can tell me what I need to know about the kind of work you do. Since you are
the obvious expert about what it takes to do your job, all I will do is ask you
some questions about how you do your work. The best way we have found to
do this is to ask you to describe some of the most important incidents you
have encountered on your jobs —
what the situations were and what you actu-
ally did.
Alternatively, you can give the interviewee a printed outline of the BEI
and say:
I will be asking you about your duties and responsibilities; and about some
"critical incidents:" some "high" or success incidents, and some "low" or failure
incidents you have had on your job in the past 12 to 18 months. We've found it
helpful to give you a few minutes to reflect and jot down your most important
responsibilities and some critical incidents on this outline.
I'll give you a few minutes to think while I set up.
Busy yourself getting your notes and tape recorder ready to avoid giving the
interviewee the idea you are impatiently standing over him or her. When he or
she looks up from the outline, continue:
Everything you say in this interview will be kept strictly confidential and will
not be shared with anyone else in your organization. Your data will be tran-
scribed "blind" — without your name or anyone else's attached — and included
with data from everyone else we are interviewing.
With your permission, I would like to record the interview so I can pay more
attention to you and not have to take so many notes. Again everything you say
will be kept confidential. But if there is anything you want to say off the record
or don't want me to record, just let me know and I'll turn off the tape.
Pause briefly, to see if there is any objection, then say immediately, enthu-
siastically:
Almost everyone gives this permission and soon forgets the tape recorder.
Pointers on Technique
Establish trust with another person by openly explaining who you are,
what you are doing and why, and then asking the person's help. If you are
open, informal, and friendly, the interviewee is likely to respond in kind.
Asking someone for his or her views minimizes the status differential
between the interviewee and you as the "expert researcher." Taking the
role of an "inquirer" and being genuinely interested will establish your
respect for the interviewee's knowledge and the value of what he or she
has to say. Treating the interviewee as an expert on his or her job is
empowering — it makes him or her feel strong, safe, and in control. Most
people find it rewarding to talk about themselves, their jobs, what they
know well.
To deal with this, repeat the purpose of the interview, emphasizing that it is
to get data about the job, not to evaluate the interviewee personally. Reassure
the person that he or she is only one of many people being interviewed. Em-
power the interviewee by acknowledging his or her expertise.
Optionally, depending on the interviewee's curiosity, you can say:
This is part of a research program that should lead to better selection and train-
ing for the job. If we can identify the skills and abilities you use to do your job,
we can better select and train people for jobs like yours.
To deal with this, repeat the promise of confidentiality and what will be
done with data from the interview. Emphasize that the tape recorder is only to
help you take notes. Offer to turn it off if the interviewee requests.
You can say:
Everything you say will be kept strictly confidential. Your interview data will be
put together, without your name attached, with data from everyone else we will
be talking to. The tape recorder is just to help me take notes. If there is anything
sensitive you want to say "Off the record," I'll turn the recorder off.
Optional Step 1a. Career Path. The objective of this step is to identify
"feeder" jobs, education, and life experiences that may have developed the
interviewee's competencies to do his or her present job. These data can be
helpful in designing career paths and succession planning systems.
This fact finding can also be a low-threat way of encouraging the intervie-
wee to start talking specifically about what he or she actually does on the
job or has done in his or her career. Occasionally interviewees will mention
a major event in the past that they feel has had a major influence on their
personality or life. You may want to ask more about this event, using it as a
critical incident.
Specific questions would focus on educational background, major jobs be-
fore the current job and their most important responsibilities, and how the in-
terviewee got the current job.
Pointers on Technique
2. "Whom do you report to?" Note the supervisor's title and/or position.
You can say, "I don't need his or her name, just his or her title."
Pointers on Technique
You can do this by asking clarifying questions and by asking for specific
examples. For example, a police captain may say, "Well, I supervise the lieu-
tenants." So you ask him or her to explain a little more what he or she means
by "supervise," and what actually is involved in the supervising. The response
may range from reading reports written by subordinates to working with them
in critical situations.
Similarly, if a staff person says, "I prepare long-range strategic plans," you
should ask what he or she does to prepare a plan. Again, responses might range
across tasks requiring very different skills, from reading technical reports to
interviewing top executives.
Often in the course of describing their work, interviewees will use techni-
cal jargon and acronyms or say things that puzzle you and that you want clari-
fied. For example, an aircraft radar technician says, "I repair 102DZ FCS
'black boxes.'"
Always ask the meaning of anything you don't understand: "What is a
102DZ FCS 'black box'? What does 'FCS' mean?" "Naive" interviewers often
elicit better data because they draw people out when they ask many questions.
Ask for moderate detail so that you're clear about how much time the
person spends on what activities.
Listen for possible incidents you may want to ask the interviewee about if
Interviewees often start telling a critical incident on their own: "I handle all
To deal with this, you can interrupt the interviewee and ask for a specific
example. You can say, "Could you choose one of your most important tasks or
responsibilities and give me a specific example of how you handled it?" Or
more specifically, "You mentioned that you have to make all the tough hiring
decisions. Can you think of a particularly tough decision you had to make, and
tell me about it?"
Step 3. Behavioral Events. The central objective of the BEI is getting the
interviewee to describe in detail at least four and preferably six complete sto-
ries of critical incidents. Some respondents provide as few as four incidents,
and others as many as ten. This section should take up the bulk of the inter-
view time and should provide specific details. A good rule of thumb is that you
have sufficient detail if you could stage a videotape (with voice-over for the
interviewee's thoughts) of the incident without having to invent much of it.
Now, I'd like to get a complete example of the kinds of things you do on your job.
Can you think of a specific time or situation which went particularly well for
you, or you felt particularly effective ... a high point?
To get a complete story, you want the answers to five key questions:
3. "What did you (the interviewee) think, feel, or want to do in the situa-
tion?" Here you are particularly interested in the person's perceptions
and feelings about the situation and people involved in it.
How was the person thinking about others (e.g., positively or nega-
tively?) or about the situation (e.g., problem-solving thoughts?)?
What did the person want to do, that is, what motivated him or her in the situation?
4. "What did you actually do or say?" Here you are interested in the
skills that the person showed.
5. "What was the outcome? What happened?"
Get the Story in Proper Time Sequence. Try to get the interviewee to
begin at the beginning and take you through the story as it unfolded.
Otherwise you may get confused about what happened and who did
what. This may be difficult, because the interviewee will usually start
by remembering the outcome of an event. Think of a time line running
from a starting point to a conclusion point. Do not proceed until you are
clear about those two. You can say:
That's exactly the kind of incident I'm looking for. Now could you walk me
through it, starting at the very beginning, and continuing to the end, so that I can
understand what happened, in what order?
Fill in all the gaps in the narrative by asking the interviewee for the data
you need to get a complete story.
If the interviewee gives you a complex incident, ask about the most impor-
tant or memorable subincidents within it. For example, if he or she says, "Over
the past three years I 'blue-skyed', sold, spec'd, developed and installed a $50
million inventory control system in our 90 offices worldwide," you can ask:
What was the single most important step in the overall project? What stands out
for you as being most memorable?
The response will likely be: "The presentation I made to the Board of Di-
rectors where I asked for the $50 million!"
When the interviewee identifies a critical subincident, continue by asking
the BEI questions: "What led up to that presentation . . ." and so on.
Ask Questions That Shift the Interviewee into Discussing an Actual Situa-
tion. Focus the interviewee on real past occurrences rather than on hypo-
thetical responses, philosophizing, abstractions, and espoused behaviors.
Keep your probes short —no more than 6 to 10 words — and in the past
tense. Often, all you need to ask is "Who did that?" "What happened?"
"How did you do that?" "When did you do it?" or "What was going
through your mind at that time?" Use "why" carefully: It often elicits
a person's theory about a situation, not what he or she actually did.
Similarly, questions in the present ("What do you do in that situation?")
and future ("What will you do next time?") invite hypothetical re-
sponses. Questions longer than a sentence tend to confuse and block
interviewees or become leading questions, which bias their responses.
He said:
She replied:
He then said:
If the interviewee says he/she can't remember the actual words, say:
"Just give me the flavor of it. What sort of thing did you say?" Getting
interviewees to re-create the dialogue almost always triggers recall of
actual behavior.
Probe for Thoughts behind Actions. Probe for thought processes in tech-
nical problem solving, pattern recognition, strategic planning. In
"knowledge worker" jobs, 75 percent or more of the job is thinking.
Even in simple jobs, much behavior is "covert." For example, an auto
mechanic tightens nuts when mounting wheels. The important part of
this task is knowing when the nut is tight enough. Good mechanics will
have an algorithm or rule: "Tighten the nut finger-tight, then a further
three-quarters of a turn with a wrench. A quarter-inch less and the nut
loosens, a quarter of a turn too far and its threads are stripped — and
the wheel falls off the car." Good competency research identifies these
algorithms. You can ask:
"How did you know to do that? That that was the case?"
"How did you reach that conclusion?"
A better probe: In every case you should ask for an actual incident:
"Tell me about someone you had a particularly good or bad interview
—
you about his or her motivation or skill in using power. In the actual
incident, the interviewee may not have thought of or wanted to influence
someone else at all. Your leading probe may bias the interview data by
introducing a competency the interviewee doesn't really have. Similarly,
jumping to a conclusion by saying to the interviewee "so you succeeded
in selling the prospect" may lead the person to tell you what you want to
hear and give you an outcome to the incident that didn't happen. Don't
assume you know what is happening, or who is involved, unless the inter-
viewee specifically states it. When in doubt, probe!
Interviewer: How did you feel at that point (in the incident . . .)?
Avoid Probes That Restrict the Interviewee's Domain of Subjects. For ex-
ample, avoid this kind of statement: "Tell me about a critical incident in
which you had to deal with a people problem."
In competency research studies where the BEI is used for hypothesis gen-
eration (to identify competencies important to doing a job), it is better to
cast the widest net possible (i.e., ask simply for "a critical incident" without
restricting the incident to "dealing with people"). What interviewees choose
to talk about is what is salient to them; what they consider "critical" is an
important clue to their competencies. Often superior and average intervie-
wees' choice of critical incidents is so different it sounds as if they were in
different jobs. For example, average salespeople talk about keeping their pa-
perwork straight; stars talk about client contacts. Average operations man-
agers talk about interpersonal conflicts; stars talk about planning. Average
chief engineers talk about solving engineering problems; stars talk about in-
fluence strategies and organizational politics. (An exception to this rule is
the "focused" BEI used to assess specific competencies for selection, dis-
cussed in Chapter 18.)
To get a failure or low-point incident, you can say, "That helps me understand much better
what you do in your job. Now, can you think of an instance in which you feel
you weren't as effective as you could be, when things didn't go well, when you
were particularly frustrated —
a real low point?"
If the interviewee balks, you can add, "We're interested in your worst experiences,
the toughest situations you've had to face, because these are things we
would want to prepare anyone coming into this job to face."
Asking for particularly "tough" or "frustrating" experiences is a useful in-
direct way of getting ineffective or failure incidents.
When the interviewee comes up with a specific event, you again want to get the complete story.
Stay with One Situation at a Time. Don't let the interviewee change the
topic or go on to a new incident until you have a complete behavioral
event.
Look for Patterns. As the interviewee tells you additional incidents, you
are learning things about him or her. You should ask questions that will
verify or double-check inferences you are beginning to draw about his or
her competencies. For example, if several of an interviewee's incidents
deal with conflict situations, you can be on the alert to probe how the
person feels about, views, and deals with others in conflicts.
that went particularly well or poorly. The interviewee just can't seem to
think of anything important. He or she may begin to get frustrated or an-
noyed about not being able to do what you want. In this case you should use
other approaches to get the interviewee to talk.
Things you can do:
Tell about an experience of your own in behavioral-event story form
to illustrate the kind of material you want.
Give an example of a good behavioral event from someone else you
have interviewed with which the interviewee can empathize (but be
careful not to lead the interviewee too much).
I'd like to come back to something you said earlier. Could you tell me
more about that?
Ask, "Is there anything else you do in your job?," "Was there any-
thing else you did during that time?," or "Do you work with anyone
else?" When the interviewee recalls something, let him or her de-
scribe it in general terms for a few moments, then zero in by asking,
"Could you give me a specific example?" or "Could you tell me about
a specific time when you did that/dealt with that person?"
Remain silent. The interviewee will usually break the silence with
new material.
Continue to Step 4 and ask "What do you think it takes to do this
job? What would you look for if you were going to hire someone to do
what you do?" When the interviewee mentions something (e.g.,
"integrity" or "I have to be good with figures"), immediately ask for
an example: "Can you think of a situation on the job that called for
integrity/using figures?" Continue with the critical-incident format.
I don't need any names. Just tell me what happened, (or) It's okay to disguise
the organization and people's names; I'm only interested in what basically
happened and your part in it.
Ask for examples (see suggestions for dealing with vagueness, above).
The Interviewee Asks You for Advice. The interviewee may try to get your
feedback or your conclusions (e.g., "Have you ever been in a situation
like that? What should I have done? How do you think I handled it?").
Don't get sucked in. Anything you say is likely to elicit hypothetical re-
sponses ("What could have been done") or turn into an abstract bull ses-
sion.Try to turn the interviewee's question back into another incident:
"Have you ever encountered that problem before? How did you deal with
it that time?"
Step 4. Characteristics Needed to Do the Job. This step has two objectives:
1. To get additional critical incidents in areas that may have been over-
looked.
2. To leave the interviewee feeling strong and appreciated by asking for his
or her expert opinion.
What to say:
The final thing I'd like to ask you is what characteristics, knowledge, skills, or
abilities you think are needed to do your job. If you were hiring or training
someone to do your job, what would you look for?
This question appears to ask for the very hypothesizing that the BEI
method tries to avoid. In fact, it is a strategy to get additional critical incidents
that may shed light on some of the organization's espoused or folklore values.
Pointers on Technique
viewee has not been able to come up with 5 or 6 incidents before this point.
can say, "What do you think you know, what skills do you have, that
enable you to do the job well?"
the characteristic, or how it has made a difference, on the job. Often you
will find that what the person means by the characteristic is very differ-
ent from what it sounds like.
Summary Write-Up. After the interview is over it is a good idea to sit down
quietly for an hour and summarize what you have learned. If there is time, this
is the best point to write up the entire interview, while your memory is still
fresh. This may include a brief characterization of the person you have just
interviewed. Use the write-up to define things about which you are still un-
clear. Note any hypotheses you may have about competencies needed to do the
job, so that you can check them in later interviews.
Summarize the data from the interview. It may help to note answers to
questions such as:
The physical appearance of the interviewee and his or her office (e.g.,
neat/messy).
How the interviewee made you feel (e.g., uncomfortable/ relaxed), and
what he or she was doing to have this effect.
Any difficulty you had getting the interviewee relaxed or able to talk
about high and low points.
NOTES