
AD-A241 125

TASK IDENTIFICATION AND EVALUATION SYSTEM (TIES)

Submitted To:

Air Force Human Resources Laboratory
Brooks Air Force Base, Texas 78235-5601

Submitted By:

The Texas MAXIMA Corporation
8301 Broadway, Suite 212
San Antonio, Texas 78209

This document has been approved for public release; its distribution is unlimited.
Walter E. Driskill
The MAXIMA Corporation
San Antonio, Texas 78209

Edward Boyle
Logistics and Human Factors Division
Air Force Human Resources Laboratory
Wright-Patterson AFB, Ohio 45433

Sharon K. Garcia
Manpower and Personnel Division
Air Force Human Resources Laboratory
Brooks AFB, TX 78235-5601

Contract No. F33615-83-C-0030

31 July 1986

NOTICES

This report is published as received and has not been edited by the technical
editing staff of the Armstrong Laboratory.

Publication of this report does not constitute approval or disapproval of the ideas
or findings. It is published in the interest of scientific and technical information
(STINFO) exchange.

When Government drawings, specifications, or other data are used for any
purpose other than in connection with a definitely Government-related procurement,
the United States Government incurs no responsibility nor any obligation whatsoever.
The fact that the Government may have formulated or in any way supplied the said
drawings, specifications, or other data, is not to be regarded by implication, or
otherwise in any manner construed, as licensing the holder, or any other person or
corporation; or as conveying any rights or permission to manufacture, use, or sell
any patented invention that may in any way be related thereto.

The Office of Public Affairs has reviewed this report, and it is releasable to the
National Technical Information Service, where it will be available to the general
public, including foreign nationals.

This report has been reviewed and is approved for publication.

JODY A. GUTHALS, 2Lt, USAF
Contract Monitor

WILLIAM E. ALLEY, Technical Director
Manpower and Personnel Research Division

ROG. ORD, Lt Col, USAF
Chief, Manpower and Personnel Research Division
REPORT DOCUMENTATION PAGE (OMB No. 0704-0188)

1. AGENCY USE ONLY (Leave blank)

2. REPORT DATE: August 1991

3. REPORT TYPE AND DATES COVERED: Interim - January 1986 - July 1991

4. TITLE AND SUBTITLE: Task Identification and Evaluation System (TIES)

5. FUNDING NUMBERS: C - F33615-83-C-0030; PE - 62205F; PR - 7719; TA - 19; WU - 11

6. AUTHOR(S): Walter E. Driskill, Edward Boyle, Sharon K. Garcia

7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): The Texas MAXIMA Corporation,
8301 Broadway, Suite 212, San Antonio, TX 78209

8. PERFORMING ORGANIZATION REPORT NUMBER

9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): Armstrong Laboratory,
Human Resources Directorate, Manpower and Personnel Research Division,
Brooks Air Force Base, TX 78235-5000

10. SPONSORING/MONITORING AGENCY REPORT NUMBER: AL-CR-1991-0005

11. SUPPLEMENTARY NOTES: Armstrong Laboratory Technical Contact: 2Lt Jody A. Guthals, (512) 536-3551

12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited.

12b. DISTRIBUTION CODE

13. ABSTRACT (Maximum 200 words)

Five maintenance task data systems in the Air Force provide a variety of task information. Each was
developed for different purposes, but only two of these systems use any of the data from the others. The five
systems are the Occupational Survey Method, the Maintenance Data Collection System, the Logistics
Composite Model, the Logistic Support Analysis Records, and Instructional Systems Development data. Each
of these task data systems and their uses are described in detail. Data elements which would permit their linking
in a Task Identification and Evaluation System (TIES) are identified. While the different systems can be linked,
considerable subject matter specialist assistance will be required to automate the system. Nevertheless, the
potential importance to manpower, personnel, and training integration makes development of a TIES highly
desirable.

14. SUBJECT TERMS:
ASCII CODAP
Comprehensive Occupational Data Analysis Programs (CODAP)
Instructional Systems Development data (ISD)
(Continued)

15. NUMBER OF PAGES: 145

16. PRICE CODE

17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: UL

NSN 7540-01-280-5500                                   Standard Form 298 (Rev. 2-89)
14. SUBJECT TERMS (Concluded):
Logistics Composite Model (LCOM)
Logistic Support Analysis Records (LSAR)
Maintenance Data Collection System (MDCS)
Manpower
Occupational Survey Method (OSM)
Personnel
Small Unit Maintenance Manpower Analysis (SUMMA)
Task Identification and Evaluation System (TIES)
Training (MPT)

TABLE OF CONTENTS

I. INTRODUCTION ..................................................... 1

II. TASK DEFINITIONS ................................................ 3

III. OCCUPATIONAL ANALYSIS PROGRAM .................................. 5

Background .......................................................... 6
Development and Administration of Occupational Surveys ............. 7
Background Information Section ...................................... 8
Duty-Task List Section .............................................. 8
Administering USAF Job Inventories .................................. 20
Analyzing Occupational Survey Data .................................. 20
Task Difficulty Ratings ............................................. 28
Training Emphasis ................................................... 28
PRTMOD .............................................................. 28
Storage and Retrieval of OSM Data ................................... 29
Reliability of Occupational Analysis Data ........................... 29
Uses of Occupational Analysis Data .................................. 31
Critique of the Occupational Analysis Program ....................... 37

IV. MAINTENANCE DATA COLLECTION SYSTEM .............................. 39

Origin of MDC ....................................................... 39
Development of MDC Data Base ........................................ 40
Processing and Analyzing MDC Data ................................... 51
Reliability of MDC Data ............................................. 57
Uses of MDC Data .................................................... 67
Critique of MDC ..................................................... 68

V. LOGISTICS COMPOSITE MODEL DATA ................................... 69

Origin of LCOM ...................................................... 69
Origin of LCOM Task Data ............................................ 70
The LCOM Model ...................................................... 75
Analysis of LCOM Model Data ......................................... 80
Uses of LCOM ........................................................ 82
Critique ............................................................ 83

VI. LOGISTIC SUPPORT ANALYSIS ....................................... 84

Origin .............................................................. 84
Developing LSA ...................................................... 87
LSA Data Storage and Retrieval ...................................... 101
Reliability and Validity of LSA Data ................................ 109
Uses of LSA Data .................................................... 110
Critique of the LSA ................................................. 111

VII. INSTRUCTIONAL SYSTEMS DEVELOPMENT .............................. 112

Origin of ISD ....................................................... 112
Development of ISD Data ............................................. 113
ISD Data ............................................................ 113
Storage and Retrieval of ISD Data ................................... 114
Reliability of ISD Data ............................................. 114

VIII. CAN TASK DATA BASES BE LINKED? ................................ 114

IX. USER TASK DATA REQUIREMENTS ..................................... 117

X. TIES APPLICATIONS ................................................ 121

Interface Problems and Issues ....................................... 125

XI. SUMMARY AND CONCLUSIONS ......................................... 129

XII. REFERENCES ..................................................... 131

XIII. BIBLIOGRAPHY .................................................. 135

THE APPENDIX ........................................................ 136

List of Figures

Figure                                                              Page

1   Standard Background Page ........................................ 9
2   Work-Related Background Page .................................... 10
3a  Job Inventory Task List, AFS 326X0C/D ........................... 18
3b  Job Inventory Task List, AFS 423X0 .............................. 19
4   Alphanumeric Task Listing ....................................... 22
5   Dictionary of Variables ......................................... 23
6   Computerized Job Description - Task Level ....................... 26
7   Computerized Duty Description ................................... 27
8   STS-Task Matching (PRTMOD) ...................................... 30
9   AFTO Form 349 ................................................... 41
10  AFTO Form 350 ................................................... 42
11  MDC Data Flow ................................................... 52
12  MDC, MMICS, and EDITS in MODAS .................................. 53
13  MODAS Main Menu ................................................. 54
14  MODAS Airborne Menu ............................................. 55
15  MODAS EAD Menu .................................................. 56
16  MODAS Worst Case Menu ........................................... 58
17  MODAS Reliability Report ........................................ 59
18  MODAS Airborne Data Menu ........................................ 60
19  MODAS Record Type Menu .......................................... 61
20  MODAS Search Index Menu ......................................... 62
21  MODAS Detail Maintenance Report ................................. 63
22  MODAS Tables and Libraries Menu ................................. 64
23  MODAS Master Data List .......................................... 65
24  MODAS Tables Menu ............................................... 66
25  Data Preparation Subsystem Process Interfaces/Flow .............. 71
26  LCOM Simulation Subsystem Structure ............................. 76
27  Logistics Composite Model ....................................... 77
28  Overview of LCOM Network Logic .................................. 78
29  Example of a Task Network ....................................... 79
30  LSA Data Documentation Process .................................. 85
31  LSAR Data Flow and System Engineering Interface ................. 86
32  LSAR Data Record Utilization by Hardware Breakdown Level ........ 95
33  LSAR Data Record C .............................................. 97
34  LSAR Data Record D .............................................. 98
35  LSAR Data Record D1 ............................................. 99
36  LSAR Data Record E .............................................. 100
37  LSAR Data Record G .............................................. 102
38  LSA-001 Summary ................................................. 103
39  LSA-002 Summary ................................................. 104
40  LSA-006 Summary ................................................. 105
41  LSA-011 Summary ................................................. 106
42  LSA-014 Summary ................................................. 107
43  LSA-015 Summary ................................................. 108
44  Summary: USAF enlisted manpower, personnel, and training system . 123

List of Tables

Table                                                               Page

1   Task Data Systems and Level of Task Coverage .................... 3
2   Reliability of "Percent Performing" and "Percent Time Spent
    by Total Group" Vectors in Consolidated Job Descriptions ........ 32
3   Data Elements Utilized in the Maintenance Data Collection
    System .......................................................... 43
4   MDC Action Taken Conversion to LCOM Action Taken Codes .......... 72
5   Definitions Assumed for LCOM Action Codes ....................... 73
6   MDC Action Taken Code Definitions ............................... 74
7   LCOM Extended Form 11 Listing ................................... 81
8   Data Item Description (DID) Relationships to the LSAR ........... 88
9   Illustration of Task Data Systems and Data Provided ............. 115
Preface

This report of a Task Identification and Evaluation System (TIES)
describes five data systems that provide one or more of the manpower,
personnel, and training (MPT) functions with data relevant to their purposes.
The array of data provided by the five systems is varied, yet in aggregate,
this array is never used by an MPT function. A TIES would provide a universal
data base consisting of all of the variety of task information. Availability
of such a data base should be a first step in the integration of MPT. Since
each of the MPT functions is impacted differently by the occupational
classification structure, a TIES would provide much of the data needed to
optimize structure decisions.

In the preparation of this report, numerous Air Force personnel
contributed significant information. Sincere and deep appreciation is
accorded each of these personnel for their time, patience, and dedication in
providing the detailed information on which this report is based. These
personnel include:

Mr. Chuck Gross, AFIMC/MS, Wright-Patterson AFB, OH
Mr. Fred Fuertes, Wright-Patterson AFB, OH
Mr. Jim Harris, AFACI, Wright-Patterson AFB, OH
Ms. Janet Pissant, AFHRL/LR, Wright-Patterson AFB, OH
Ms. Sandra Y. Jeffcoat, System and Applied Sciences Corp.,
Wright-Patterson AFB, OH
Lt. Col. Paul Cunnia, Hq TAC, Langley AFB, VA
T/Sgt. Paul Bergeron, Hq TAC, Langley AFB, VA
Mr. Stacey Fenner, Hq TAC, Langley AFB, VA
Dr. Squy Wallace, USAFOMC, Randolph AFB, TX
Capt. Donna Graham, USAFOMC, Randolph AFB, TX
Mr. Jay Tartell, USAFOMC, Randolph AFB, TX
Mr. Jim Keeth, USAFOMC, Randolph AFB, TX
Ms. Elena Weber, USAFOMC, Randolph AFB, TX
Maj. T. Ulrich, ATC/TTX, Randolph AFB, TX
SMSgt Barry Graeber, Operating Location 1,
3306th Test and Evaluation Squadron, Randolph AFB, TX
Capt. Randy Hogberg, AFMPC, Randolph AFB, TX
Mr. Wayne S. Sturdevant, McDonnell Douglas, Bergstrom AFB, TX
Mr. Johnny Weissmuller, The MAXIMA Corporation, San Antonio, TX
Mr. Michael Staley, The MAXIMA Corporation, San Antonio, TX

In the final preparation of the report, Ms. Loretta Whitehead and Ms. Sonya
Brown were superb.
SUMMARY

Five task data systems supporting key Air Force manpower, personnel, and

training (MPT) analyses were reviewed in depth to show their origins,

implementation, and uses. These five systems were the Occupational Survey

Method (OSM), Maintenance Data Collection System (MDCS), Logistics Composite

Model (LCOM) data, Logistic Support Analysis Records (LSAR), and Instructional

Systems Development data (ISD). The purpose was to determine if these varied

systems could be integrated or reconciled to make MPT analyses more efficient,

informed, and comprehensive. Also, each system was assessed in terms of its

present purposes and how well it serves these purposes. The basic features of

each data system are described in detail.

The conclusion was that while there are formidable barriers to cross-

walking the data, task matching could be done. While much of this matching

could be done automatically, considerable manual work by subject matter experts

would still be required. In particular, new capabilities in the ASCII CODAP

(Comprehensive Occupational Data Analysis Programs) system offer a rich

potential for linking occupational analysis data with other data systems.

Further, no single system at present fully serves all MPT uses; hence the need for MPT

analysis integration. A TIES would provide much of the MPT data needed for new

weapons systems acquisition and for force management decisions, especially for

aircraft maintenance specialty reorganization.

Quite apart from TIES, movement toward standardization of task terminology

and greater discipline in the generation of task data is needed and should be

implemented immediately.

TASK IDENTIFICATION AND EVALUATION SYSTEM (TIES)

I. INTRODUCTION

Five task identification data systems, each providing a variety of
descriptive and analytic data useful for manpower, personnel, and training
(MPT) decisions, coexist in the U.S. Air Force. Since they each derive from
work performed by job incumbents, it is not unexpected that they duplicate one
another in some facets, while at the same time contributing unique information
about the work domain. These five task identification data systems are the
Maintenance Data Collection System (MDC), which supports LCOM, the Occupational
Survey Method (OSM), the Instructional Systems Development model (ISD), the
Logistics Composite Model (LCOM), and the Logistic Support Analysis System
(LSA).

Three of these systems have no mechanism for interfacing of their data
bases, while two of them derive data from the other systems. ISD has a basis
in both the OSM and LSA, while data employed in LCOM are partially derived
from MDC. For the remaining three systems (OSM, MDC, and LSA), the few
efforts to relate data from one to another have involved considerable data
transformation and use of subject matter specialists to provide linkages among
tasks (Crank, 1986; Monson, Wagner, & Eisenberg, 1985). That they do not
easily interface, of course, is not surprising. Each developed for different
reasons and at different times during the past two decades and, except for the
few linkage efforts cited above, the systems have functioned independently, to
the possible detriment of MPT integration.

Because they were developed independently, serving different purposes,
they differ in structure, content, reliability, and computer analysis
techniques, storage, and retrieval. The single feature common to all is the
identification of tasks performed, but even here there are wide differences of
specificity, organization, and scope. Since task identification is the primary
prerequisite for many of the various manpower, personnel, and training (MPT)
functions, each of the task identification systems, regardless of its original
purpose, has been employed in one or more of these functions. Rarely, if ever,
has an MPT function utilized data from more than one system for a single
purpose. Yet, each of the systems may provide relevant information for that
purpose, especially in the personnel and training functions. Consider, for
example, the training function. OSM data provide information about task
performance, difficulty, and training priority, each of which is essential for
decisions about whom to train and when and where they should be trained. For
training development, task analysis must be completed as a part of the
instructional system development process. These data, in many cases, are
available in LSA data. There is no evidence, however, that these two sources
have been employed in development of resident training courses.

There are two principal reasons why the several data bases are not
employed. First, there is a widespread lack of knowledge of what the various
task identification systems provide. Second, even if potential users know of
the systems, they would be faced with the very formidable task of accessing,
interfacing, collating, and analyzing the massive amount of data available from
the various systems. At present, the only option is to accomplish these
actions manually, and even here there would be considerable problems to
overcome.

A Task Identification and Evaluation System (TIES), which this report


addresses, would at least alleviate the second problem. In a TIES, the various
task identification and associated data systems would be interfaced so that
relevant data could be readily accessible, collated, analyzed, and reported in
a usable format. The Occupational Research Data Bank (ORDB) at the Air Force
Human Resources Laboratory (AFHRL) represents an initial effort at providing a
centrally located, user-friendly data system (Ellingsworth et al., 1985). But
the ORDB is restricted to OSM task data and information about the Air Force
personnel who perform the tasks. Project FOOTPRINT at the Training Development
and Analysis Center is another effort directed at assembling a data base of a
variety of MPT information from each of the military services. In the Navy,
HARDMAN is an aggregation of MPT data. TIES, for the Air Force, would not
duplicate any of these initiatives. Instead, it would permit the aggregation
of the various task data systems and the crossflow of these data for MPT use.
At present little, if any, crossflow occurs. Different MPT functions use
different sources, although each of the sources originates in the tasks that
Air Force personnel perform. The result is that a task, for example, is
defined differently in manpower, personnel, and training usage. TIES could
be a first step in a closer integration of MPT by providing a system for
interfacing and analyzing all of the task data. Each function could then
operate from the same basis.
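
The kind of interfacing envisioned here can be made concrete with a minimal
sketch. The Python fragment below is purely illustrative: the record layout,
task statement, and system codes are invented for the example, and the report
itself prescribes no data format.

    from dataclasses import dataclass, field

    @dataclass
    class CrosswalkEntry:
        """One TIES link: a generic OSM task tied to the records the other
        systems keep for the same unit of work (hypothetical layout)."""
        osm_task: str
        mdc_actions: list = field(default_factory=list)     # MDC actions taken
        lsa_tasks: list = field(default_factory=list)       # LSAR task codes
        lcom_tasks: list = field(default_factory=list)      # LCOM network tasks
        isd_objectives: list = field(default_factory=list)  # ISD objectives
        sme_verified: bool = False  # subject matter specialist review flag

    ties = [
        CrosswalkEntry(
            osm_task="Isolate malfunctions on AC generator systems",
            mdc_actions=["remove/replace generator control relay"],
            lsa_tasks=["LSA-C-0147"],
            lcom_tasks=["LCOM-G12"],
            sme_verified=True,
        ),
    ]

    # Any MPT function could then retrieve every system's view of one task.
    for entry in ties:
        if entry.sme_verified:
            print(entry.osm_task, "->", entry.mdc_actions, entry.lsa_tasks)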

Given the present situation in which the different systems provide data
about work accomplished in maintenance specialties and the lack of any means of
interfacing the data among systems, several important questions emerge:

1. Is there overlap in the data they provide and, if so, is it
justifiable?

2. Do the data from the various systems need to be interfaced?

3. Are the data adequate for the various MPT uses to which they are put?

4. What would it take to unify, crosswalk, or interface the data systems


and reconcile the differences among them?

The remainder of this report addresses these four basic questions.
Specifically, each system will be described, including the processes involved,
data collection and analysis relevant to MPT issues, data uses and reliability,
linkage mechanisms, and the uses to which data resulting from interfacing the
data systems might be put.

But, first, some discussion of the differences in the way maintenance


tasks are recorded in each of the systems is warranted. Task identification is
provided at different levels of detail. These differences are the source not
only of specific criticisms made of the various data but are the cause of some
of the problems to be faced in interfacing the data systems. MDC tasks are
very detailed, showing the action taken at the component level. This detail is
essential for identifying reliability and maintainability of equipment and
systems. LSA data, for the same reasons, are available at the same level of
detail. In addition LSA can provide the component steps or elements of each
action taken as well. LCOM uses MDC action taken data as an input for building
task networks and estimating maintenance probabilities and manhours. As
employed for manpower determinations, MDC actions taken are compressed. Thus,
LCOM tasks are slightly broader than MDC or LSA tasks (actions taken).
Occupational analysis tasks are more generic, infrequently providing any
actions taken at the component level. These tasks are most frequently used for
personnel and training purposes. ISD tasks take a variety of forms. Since
they are directly used in developing training courses and materials, the level
of description varies according to the particular situation. Sometimes they
are broadly stated, incorporating several occupational analysis tasks. Other
times, they are described in more detail than the MDC or LSA actions taken. In
any case, ISD uses require that each task, however described, must be analyzed
into its component steps. Table 1 briefly displays the levels of task
identification by the five systems.

Table 1. Task Data Systems and Level of Task Coverage

         Equipment,
         Weapons System   System   Subsystem   Component   Subcomponent

MDC            x             x         x           x
LSA            x             x         x           x             x
LCOM           x             x         x           x
OSM            x             x         x           *             *
ISD            x             x         x           x             x

* Infrequent
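
The coverage differences in Table 1 can be illustrated with a small sketch.
The Python below encodes the table and shows how a single generic OSM task
fans out into several component-level MDC actions taken; the task wording is
adapted from Figure 3b, but the component actions are hypothetical.

    # Levels of task coverage from Table 1 (OSM component-level coverage
    # is infrequent, so it is omitted here).
    COVERAGE = {
        "MDC":  ("equipment/weapons system", "system", "subsystem", "component"),
        "LSA":  ("equipment/weapons system", "system", "subsystem", "component",
                 "subcomponent"),
        "LCOM": ("equipment/weapons system", "system", "subsystem", "component"),
        "OSM":  ("equipment/weapons system", "system", "subsystem"),
        "ISD":  ("equipment/weapons system", "system", "subsystem", "component",
                 "subcomponent"),
    }

    # One generic OSM task may correspond to several MDC actions taken.
    osm_task = "Isolate malfunctions on battery charger system circuits"
    mdc_actions = [  # hypothetical component-level records
        ("battery charger relay", "remove and replace"),
        ("charger control circuit card", "bench check"),
    ]
    print(f"{osm_task!r} maps to {len(mdc_actions)} MDC actions taken")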

II. TASK DEFINITIONS

The word task and the terms task identification and task analysis have
different meanings depending upon the user, as is the case in the five systems.
Each is based on "task" performance, but those actions referred to as a "task"
are generic at best.

As Meister (1976) points out, a task is a construct. As a construct, it
may have a variety of meanings depending upon the purposes of task description.
The problem for any task identification system is to employ a definition that
is most powerful for producing the data desired of the system. As will be
noted in the discussion of the systems that follow, that which may be
identified as a "task" is consistent with the use to which data are employed.
Perhaps it should be stated at this point that the concept task is not explicit
in the MDC, LSA, or LCOM data. Rather, the term action taken is used.

While task definitions vary along two major dimensions (Fleishman &
Quaintance, 1984), the tasks of the five systems fit under only one dimension.
They state that "... task definitions vary greatly with respect to their breadth
of coverage" (p. 48). At one end of this dimension are definitions that view a
task as an integral part and indistinguishable from the larger work situation.
A task is the totality of the situation imposed upon the performer. Such a
definition is consistent with the practice in ISD.

At the other end of this dimension is the definition that treats a task as
a specific performance. MDC usage, in the form of action taken on a specific
component, is consistent with this definition. LSA usage, which parallels MDC
except that it is developed for new weapons systems, also is consistent with
this definition.

Ranging between these two extremes are the definitions employed by the OSM
and LCOM. These will be described in more detail later. Briefly, the OSM
tasks are more generic (Driskill & Gentner, 1978), LCOM less so, but not so
specific as MDC and LSA. LCOM tasks (actions taken) are derived from
combinations of action taken entries in MDC (DPSS working paper, 1986). Each
of the five systems is in fact a task identification system, not a task
analysis system, except for LSA when task analysis is also procured.

The term task analysis, too, has a variety of definitions (Fleishman &
Quaintance, 1984) and provides an even wider variety of information. In
practice the term has sometimes referred to data that are more appropriately
task identification, as in the frequent reference to OSM and LCOM data as task
analysis. These data have never been purported to be task analysis. More
correctly, the term applies to situations ranging from the analysis of tasks
into their component steps to more detailed analysis of such items as knowledge
and skill requirements and cognitive processes involved in task performance
(Horton, 1985). The scope of task analysis information is extensive; Reed
(1967), for example, cited 45 different types of such information.

In regard to task analysis, each of the five systems provides, first, the
source of the analysis--task identification. Second, each provides
information, such as equipment used, relevant to task analysis. Of the five
systems, only the LSA may provide the component steps of a task. None provide
all of the information needed for task analysis, however one might define the
term.

The relevance of the above discussion to TIES is evident in the
following:

1. Task statements in the five systems derive from different constructs,


whether explicitly stated or not, because of their varying purposes.
2. Task identification data are thus at different levels of specificity.

3. Use of the data from any of the systems is limited by its level of
specificity.

4. Linkage of data among systems becomes more difficult, and once

accomplished may be expected to have a hierarchical structure, ranging from a
generic task to the more detailed tasks appropriate to it.

5. No single system, nor any combinations of them, can be expected to


satisfy all of the purposes for which task identification or analysis data are
useful, simply because the task construct of these users likely will differ.

6. Yet, singly and in combination these systems provide a wide variety of


information essential to manpower, personnel, and training analysis and
decisionmaking.

7. There is no such thing as a truly unified MPT analysis capability at


present. Without such a capability, it is not surprising that MPT functions
are not fully integrated. Personnel decisions directly impact manning and
training requirements, yet they operate from different data sources and
premises.

8. Despite their different purposes, the task identification methods


overlap in some respects. For example, the LSA task analysis data, if it were
available to trainers, would eliminate some of the needs assessment analysis
required by the instructional development process. An interfacing of the task
identification systems would, in part at least, eliminate duplication of
effort.

9. A case in point is the Small Unit Maintenance Manpower Analysis (SUMMA)


work which clearly shows that M and P should and must interact with respect to
maintenance in dispersed small unit operations. How maintenance specialties
are structured is an important determinant of the manpower requirements. Some
modelling of manpower requirements reported in Moore and Boyle (1986) shows
that different specialty structures result in different manpower requirements.
Also, Dunigan et al. (1985) reported the same result. They used the Theater
Simulation of Airbase Resources (TSAR) and TSAR Inputs Using the Airbase Damage
Assessment (TSARINA) models to simulate combat maintenance requirements. They
concluded that reorganization of personnel into fewer, more broadly trained
types could improve the number of combat sorties launched by 13 percent for the
F-16 in their scenario. They also concluded that such reorganization could be
implemented in peacetime, but they do not show how to do it.

III. OCCUPATIONAL ANALYSIS PROGRAM

The Occupational Analysis Program is conducted by the USAF Occupational


Measurement Center (USAFOMC) under the provisions of AFR 35-2 and ATCR 52-22.
The OSM has produced data that have been used for a variety of MPT purposes.
Background

Research conducted by the Air Force Human Resources Laboratory (AFHRL) led
to the establishment of the OSM operationally. According to Christal (1985),
research began as a result of an Air Staff requirement for a job analysis
system that would provide detailed task data collected from a large number of
job incumbents. Initial efforts were directed at devising methods for
describing the tasks of an Air Force Specialty (AFS) and administering these
task lists to job incumbents to collect job data. Later efforts developed the
Comprehensive Occupational Data Analysis Programs (CODAP) to analyze data
collected from the incumbents. Although the OSM was implemented in July 1967,
research on job analysis methods and CODAP extensions has continued, extending
the technology greatly. The present OSM is an outgrowth of this research and
the application of the processes to most of the AFSs and to more than one and
one-half million Air Force enlisted, officer, and civilian job incumbents. In
addition, the procedures were adopted by the other U.S. Services and the
Canadian Armed Forces as operational programs, as well as by other allied
nations and civilian businesses.

In the beginning, the objective of the OSM was to provide data for use in
personnel classification, specifically for AFS structure and description in AFM
39-1. The importance of task data to training decisions soon became apparent,
especially for use in ISD, and was followed by the emergence of other
applications. These added applications caused subtle but significant changes
in task description methods and in CODAP. Still, the current program clearly
and closely resembles the one originally implemented in 1967.

The OSM is an AFSC or personnel-oriented system and, in contrast to MDC,


LSA, and LCOM, it is applied to most enlisted and officer AFS as well as some
civilian series positions. OSM survey techniques normally are not applied to
an AFS with a population smaller than 200, although there have been instances
when AFSs with as few as 20-30 incumbents have been surveyed.

As previously implied, the OSM relies on a task survey technique. The


technique involves five basic steps: development of a USAF Job Inventory (a
task list and background information) for the AFS being surveyed;
administration of the Job Inventory to a large sample (frequently, the
population) of AFS incumbents; CODAP analysis; interpretation of results; and
report writing and briefings of results.

While a wide array of computer-generated reports can be produced, these


reports provide six basic types of information:

1. Tasks comprising the AFS.

2. Percentage of incumbents performing each task.

3. Relative percentage of time spent on each task.

4. Relative difficulty of each task.

5. Relative training emphasis recommended for each task.

6. Summaries of background information--such as equipment used or


maintained, test or special equipment, and job satisfaction information.

The numbers and kinds of data reports available are almost unlimited and can be
overwhelming. It must be clearly realized, however, that these reports provide
the six kinds of information shown.

Initially, AFSs were scheduled for survey on a periodic basis. Now, the
schedule is the result of a priorities working group consisting of
representatives of the Air Force Military Personnel Center (AFMPC), the Deputy
Chief of Staff for Technical Training (ATC), the Air Force Human Resources
Laboratory, the ATC Surgeon General, the Air Staff (DPMPT), and the USAFOMC.
This group considers requests for surveys three times a year. It also reviews
AFSs for which surveys are dated (four to five years old). Requirements are
assigned priorities and scheduled in Section 8, Volume II, Program of Technical
Training (PTT). The PTT provides for a three-year schedule.

The PTT is the source document to determine which AFSCs are to be surveyed
and when the completion dates are projected. In addition, the PTT provides the
date the last survey of an AFSC was completed.

Development and Administration of Occupational Surveys

The basis of data produced by the OSM is the USAF Job Inventory for an AFS
being surveyed. A Job Inventory consists of two parts: a background
information section and a list of duties and tasks comprising the AFSC. The
Job Inventory task list is the instrument for collecting the percent members
performing and relative ratings of time spent performing tasks, task
difficulty, and task training emphasis. The background information section
provides the data from which computer-generated job description reports are
designed and the basis for summaries of work environment information. In
addition, the background information contains items to capture job incumbent
job satisfaction, sense of accomplishment, and reenlistment intentions (for
enlisted personnel).

Inventory development specialists at the USAFOMC develop the USAF Job
Inventory from research of previous job inventories for the AFS (if the AFS has
been surveyed earlier), publications, technical orders, Career Development
Courses (CDC), interviews with training specialists and 7-skill level
technicians at the prime technical training center and Air Staff functional
managers, and extensive interviews with 7-skill level technicians in the AFSC
at various operational sites. The purpose of this effort is threefold. First,
problem areas to which job data are applicable are sought so that the survey
will provide useful data for their resolution. Second, the research and
interviews provide the source of information for gathering items to include in
the background information section. Finally, the research and, particularly,
the interviews with technicians in operational organizations provide the list
of tasks and duties. All tasks in a USAF Job Inventory are generated by the
7-skill level technicians in the AFSC being surveyed.
Background Information Section

Two examples of data-gathering items from job inventories are shown in
Figures 1 and 2. Figure 1 represents standard demographic information
collected, such as name, social security number, and duty AFSC. These data are
collected in each survey, and all items must be completed by an incumbent before
the data are used. The second figure is an example of the kind of work-
environment information collected. In this case, the examples are from the
inventory for AFSC 423X0, Aircraft Electrical Systems. Each of the elements of
each item may be used alone or in combination with others to produce a job
description for personnel working on the elements.

Data collected from the Background Information Section are essential for
later analysis. These data serve as the basis for generating job descriptions.
Inadequacies in the content of this section limit data analysis, because
information essential for identifying job incumbents for whom job descriptions
are desired is not available.

A review of the USAF Job Inventories maintained by the USAFOMC reveals
that the Background Information Sections do not always include all the
pertinent work environment information for surveys of maintenance AFSCs. The
inconsistency of coverage among surveys occurs for several reasons. Task
inventory construction (to be discussed below) sometimes permits the writing of
tasks at more specific levels so that equipment, for example, can be included
in the task statement. This type of situation does not inhibit analysis,
although a different set of computer programs must be employed than is normally
used. Also, during the development stage, functional managers may indicate
they have no need for certain kinds of information, because it is available
through other sources. In this case, the items may be left out of the section.
This situation further inhibits later analysis and use of the data for unique
applications, such as the identification of specific weapons systems or
equipment for use in designing and computing job descriptions.

For analyzing the OSM data, a consistent policy of including work
environment items is desirable. Particularly from the viewpoint of TIES,
the availability of this information for generating equipment- or
weapons-system-specific information is vital.

Duty-Task List Section

This section contains a comprehensive list of tasks comprising an AFS.
These tasks are generated by 7-skill level technicians in the AFS being
surveyed during interviews with the inventory development specialist. It is
crucial that users understand that each task in a USAF Job Inventory is a
product of an experienced technician in the AFSC and that each has been
reviewed and verified by a number of other such skilled personnel.

Before discussing task writing rules and objectives, some description of
the organization of the list of tasks is needed. The tasks are organized into
groups, called Duties. The purpose of the Duties is to make responding to the
survey easier, research (Christal, 1985) having shown this type of organization
to be most effective.

Figure 1. Standard Background Page [scanned form; detail not reproduced]

Figure 2. Work-Related Background Page, AFS 423X0 [scanned multi-page form; detail not reproduced]
Tasks are listed alphabetically within a duty, research again revealing this
organization to be most effective.

The guideline for organizing duties is to align or group tasks in the


combination that workers are most likely to perform them. This decision is
also made by AFS technicians. Three generic structures are usually found:
geographic, functional, and a combination- of the two. In the geographic
structure, tasks are organized by the location in which they are performed--
e.g., flightline, shop, test cell. Functional structure occurs when tasks are
performed by functions--e.g., reclamation and repair, troubleshooting, general
tasks. Many maintenance AFSs fit one or the other of these two structures, but
occasionally tasks are organized in a combination of the two. In any of the
three, the structure of the work performed as described by technicians in the
AFS determines the USAF Job Inventory Duties.

Developing the list of tasks is also a crucial aspect of the survey


process. While there are some very specific rules that apply to task writing,
some other factors, primarily two, operate to affect the level of specificity
of the tasks. First, more than one AFS may be included in the survey, by
decision of the Priority Working Group and based on a request from a user; for
example, a request to determine the similarity of work performed among two or
more AFSs. Second, the work of an AFS may be very heterogeneous, comprised of
numerous jobs. In either the first or second case, the number of tasks needed
to describe the work of the AFS becomes so large that some compromises about
task specificity must be made. As the number of AFSs surveyed jointly
increases, or when there is a very broad AFS, tasks must be more general.
Otherwise, the task list is too long.

In regard to task specificity, OSM survey tasks tend to be more general


than MDC, LSA, or LCOM tasks. That is, the tasks infrequently describe work
performed at the component level. The principal purpose of survey tasks is to
provide information useful for decisionmaking for training and job
classification. The tasks are purposely written at a more general level than
are those in MDC. For such use, more general tasks are adequate--for
classification it has been generally considered sufficient to determine who is
doing what tasks. There is growing evidence, however, that personnel decisions
need to be based on more specific tasks (e.g., SUMMA). The same is generally
true for deciding what tasks are trained where, although over time, the policy
has been to write tasks as specific as possible. Such ancillary uses of the
data as the benchmarking of tasks for learning difficulty and determination of
walk-through performance measures would be facilitated if more specific tasks
were available. As Driskill and Gentner (1978) pointed out in a discussion of
the criteria for describing tasks, "Theoretically, each occupational specialty
should be described at the lowest level of work activity" (p. 205).

Within the parameters of number of jobs surveyed, the capabilities of


respondents to answer long task lists, and computer capacity limitations, the
occupational analyst is charged with writing tasks that meet the criteria
listed below. For a further discussion of them, see Driskill and Gentner
(1978).

a. Each task should be time-ratable--that is, the job incumbent can
reasonably estimate the relative amount of time he or she spends-on each task.
This criterion normally eliminates tasks that begin with such words as
"insure", "have responsibility for" and "understand", which make it
difficult or impossible to determine the relative time devoted to this
activity.

b. Each task should communicate in the language of the- specialty. The


task statement must be clear so that it is easily understood by career field
incumbents, the people who must answer the questionnaire. Terminology
consistent with current usage in the career field reduces the chance for error
or differing interpretations of task statements.

c. Each task should be mutually exclusive or independent of other tasks


in the inventory; that is, whether a job incumbent indicates that he or she
performs a task must be independent of his or her performance of all other
tasks in the inventory.

d. Each task should differentiate among workers where actual task


performance differs because of such factors as- differences of jobs, experience
level (apprentice, journeyman, technician), organizational level (command,
staff, base, flightline, or shop), and whether or not the person is a
supervisor.

Examples of tasks differing in specificity are shown in Figures 3a and 3b.

The tasks in Figure 3a are from USAF Job Inventory, AFPT 90-326-428A, for AFSC
326X0C and D, Avionics Aerospace Ground Equipment. In this survey, the tasks
could be developed for the A-7D, C-5A, and F-4/RF-4 avionics aerospace ground
equipment. The tasks are at a much greater level of detail than those shown in
Figure 3b, the tasks in this figure being taken from USAF Job Inventory, AFPT
90-423-501, for AFSC 423X0, Aircraft Electrical Systems. Since this career
ladder was not shredded, that is, not subdivided as is AFSC 326X0 C and D, the
tasks had to be written for electrical systems on all aircraft. The generality
of the AFSC 423X0 tasks, however, does not inhibit analysis, since persons
working on each aircraft (see Figure 2) can be identified from their background
information responses. Job descriptions can then be computed by aircraft
worked on.
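
A minimal sketch of that selection step follows, with an invented data layout
(the actual CODAP file formats are not described in this report): respondents
who reported a given aircraft in the background section are pulled into a
subgroup, and a job description is then computed for that subgroup only.

    # Hypothetical respondent records: background responses plus the
    # 9-point relative time-spent rating for each task checked.
    respondents = [
        {"id": 1, "aircraft": {"F-15", "F-16"}, "ratings": {"D241": 5, "D242": 3}},
        {"id": 2, "aircraft": {"C-130"},        "ratings": {"D241": 7}},
        {"id": 3, "aircraft": {"F-16"},         "ratings": {"D242": 6}},
    ]

    # Select the subgroup from background responses, e.g., F-16 workers.
    subgroup = [r for r in respondents if "F-16" in r["aircraft"]]
    print(f"{len(subgroup)} respondents worked on F-16 electrical systems")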

Interviewing of technicians continues at operational units until no more
tasks are being added. After extensive quality control reviews for coverage
and compliance with task writing rules, the background and task sections are
printed as a USAF Job Inventory.

While the development process is intended to provide comprehensive


coverage, the final product should not invariably be considered an exhaustive
listing of tasks that may be performed by personnel in the AFSC. The key word
is comprehensive, and most inventories achieve this quality. In so doing, USAF
Job Inventories include tasks, at a more general level, captured by MDC, LSA,
and LCOM, and equally important, many tasks that are not captured by the other
systems. These other tasks include supervisory, supply, administrative,
training, and extra duty tasks as well as tasks that may not appropriately-
belong to the AFS. An occupational survey task list is intended to describe

17
=C)) C) 1. checkc lais you oeflornow '~ CHECK TIME SPENT-
C= C C )C )= 2. it you don't do it - on't chieck It. Present Jot)-
C===C) 3. In th'e"TimetSDent" COlumA. -Y I"ta~kS an time Soent
1ate all Ch~ecked RATE__________
= C)~ ) C ) in otesent lot. It you chaecked it -Rate-it. I. Very. Amali amount.-
AFS 326XOC/D 3vc
2 1.s. avg.
C==(C=)) PNCILONL-PLASEIF -1. Slightly 60;0. *'5.
PENCILC=C DONE S. Abouta @,.
NOW 6. slightly 4-80. avg.-
S NOTE: If any task you perform under this 7. Above avg
dutY is not listed, please specify V~ uhu@*5
on blaak pages at end of booklet. thK9
Wl Very *
!3,.:mrq .

'Lit. R~eview base seli-suzziciency reports ==Z )XzC

i±o. aeview Nt reports

.1A±MA±LA.dilj A-ill SZ 1±AUTOLIATIC TEST SETSI-

.4 g aqcs.i inertiai measurement- set (1Z21S) test sets

12u. .- an rA-N1-4uz navigation -Weapons-oeiivery Computer I-


(YNwDC) test -sets
14j Aliga AN/AV&I-±A acs-up aispiay kUULD) test sets-

i .uIgn .~w1~arament- station- control unit kASCU)


test sets I I________
ii~.Aign 5U OM --ta.i-y puncnea tape reaaers -I -

12". tCaiibrate LNA~qk1-375 INS test Sets I~-


125. Cai.rate -W/ASM-403 NWDC test sets

! 2o. Caliorate AN/AVh-11A- iUD -test -sets


127. Calibrate AN/AWII1-55 ASCU test setsI -

128. Calibrate 5001L11 tally punched tape readersI-

129. Perform fault isolation of AIN/ASM1-375 IMS test sets -

130. Perf.orm fault isolation of AIN/ASLNI-403 ~NDqC test sets


131. Perform fault isolation of A/AVh-11AKHbD test sets --

132. ?erform fault isolation of U4/AWN-55 ASCU -test sets --

133. Perform fault isolation of 500 R.M tally punched tape I-


readers
134. ?er4form self-,tests of AN/ASMI-375 LI1S test setsJ J -

-135. Perf-orm self-tests of AN/ASM-403 NWDC test setsI -

136. ?e=rform self--tests of AIN/AVM1-11A HfLM test sets-

137. Perf-orm self-tests of AL%/AWL-S5 ASCU test sets

G. !MAI.T.T!ING A-7D_ ANUAL TEST SETS

138. Adjust SM-661/AS-388air data computer (ADlC) simulatorsj

CODE 99
Figure 3a. Job Inventory Task List, AFS 326X0C/D

18
[Figure 3b. Job Inventory Task List, AFS 423X0: reproduction of a survey booklet page showing generic electrical systems tasks (isolating malfunctions on aircraft electrical circuits), with the same check-and-rate instructions and 9-point relative time spent scale.]
An occupational survey task list is intended to describe comprehensively all
of the work and tasks that comprise the jobs actively performed by personnel
in the specialty. This characteristic sets the OSM product apart from the
other systems and makes it of considerable usefulness for training and
classification purposes.

Administering USAF Job Inventories

Job Inventories are administered to a large sample of incumbents of an


AFS. For specialties with fewer than 3,000 total manning, a 100 percent sample
is sought. For larger specialties, a stratified random sample based on number
of jobs, using commands, and operating locations is used. The sample is drawn
from the latest tape files of the Uniform Airman Records (for enlisted
personnel) compiled by AFMPC.

Inventories are mailed to occupational survey control officers in


Consolidated Base Personnel Offices. They administer the inventories by name
to job incumbents, and return the completed inventories to the USAFOMC. It is
in this administration process that a major criticism lies. Managers contend
that personnel, for a variety of reasons, do not give accurate information.
But consistency and stability analyses of survey data clearly show that this
criticism is not valid.

Job incumbents in the AFSC being surveyed respond directly in the Job
Inventory. They complete the standard background items and the work
environment information appropriate to each of them. They then complete the
duty-task list section. Each respondent first goes through the entire list,
checking those tasks performed in the present job. Next, the respondent rates
each task on a 9-point relative time spent scale indicating the relative amount
of time spent performing each task. The scale and instructions to the
respondent can be seen in Figures 3a and 3b. It should be noted that relative
time spent means time relative to that spent on all other tasks; it is not an
absolute task time estimate.

At the USAFOMC, an administrative specialist is designated as project


officer for each AFSC surveyed. The duties include suspensing returns,
confirming unit receipt, following up to obtain returns, and personally
reviewing each returned inventory. This review includes checking for
completion and reasonableness of the tasks performed, checking for collusion
among respondents, and returning inventories to incumbents for correction.
With this process it is not unusual for the sample return for processing to
exceed 90 percent. This return rate is extraordinarily high, even considering
the non-voluntary nature of the survey. Mailed surveys generally have a return
rate no higher than 50 percent.

Analyzing Occupational Survey Data

Background information section and task list data are entered into the
AFHRL Sperry 1180 for processing. Initial processing provides further quality
control of responses. Final input must be at least 99 percent of the initial
input.

Of special relevance to TIES are several CODAP computer reports (briefly
described in the Appendix). The first of these is a duty-task list (SKXX,
see Figure 4). This report is a complete listing of the tasks and duties in
alphanumeric order as they appear in the inventory. As the figure shows, each
task has a unique alphanumeric identifier (Q636, for example). This listing
would appear to be the primary source of OSM tasks for linking with tasks of
other systems.
Another important initial report is the dictionary of variables (DICTXX,
Figure 5). This catalogues by number each of the background information
section items. Those numbers preceded by V indicate items in the background
section (for example, V0011, Duty AFSC, in Figure 5). Those numbers preceded
by C are computed variables. In these computed variables, values of two or
more sets of items may be combined, or may represent variables generated from
the data. This report provides the format for each of the background
variables and is essential in designing subsequent analyses.
Also at this time, a job description for each survey respondent is
computed and stored for later retrieval as a job description for a single
individual, or for combination with other respondents to form group job
descriptions. These combinations of individual job descriptions are based on
combinations of variables selected from the background section items and
presented in the dictionary of variables.
Computation of individual job descriptions is straightforward. The time
ratings a worker provides for the tasks performed in his or her present job
are each divided by the sum of all of that worker's ratings and multiplied by
100 to yield a relative time spent percentage for each task. The job
description for any individual can be reported, along with the background
section items to which the worker responded.

Computation of group job descriptions is equally straightforward. For
whatever job description is designed, based on one or more background or
computed variables, the job descriptions for the individuals who responded to
those variables are combined. The relative percentage of time of all in the
group for each task is summed and divided by the number in the group to yield
an average percent time spent for each task.
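To make the arithmetic concrete, the following minimal sketch (in Python, with hypothetical ratings and task identifiers; it is not the actual CODAP code) computes an individual description and then a group description in the manner just described.

def individual_description(ratings):
    """ratings: {task_id: 1-9 relative time spent rating, tasks performed only}.
    Returns {task_id: relative percent time spent}; values sum to 100."""
    total = sum(ratings.values())
    return {task: rating / total * 100.0 for task, rating in ratings.items()}

def group_description(members):
    """members: list of individual descriptions. For each task, returns
    (percent of members performing, average percent time spent across all
    members, with non-performers counted as zero)."""
    n = len(members)
    tasks = {t for member in members for t in member}
    result = {}
    for t in sorted(tasks):
        times = [member.get(t, 0.0) for member in members]
        performing = sum(1 for x in times if x > 0)
        result[t] = (performing / n * 100.0, sum(times) / n)
    return result

a = individual_description({"Q636": 9, "Q637": 3})   # 75% / 25%
b = individual_description({"Q636": 5, "Q640": 5})   # 50% / 50%
print(group_description([a, b]))
# {'Q636': (100.0, 62.5), 'Q637': (50.0, 12.5), 'Q640': (50.0, 25.0)}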
The time spent performing tasks is not only a relative figure, but it is
expressed as a percentage. Since it is a relative figure, it is questionable
whether any attempt to convert it to absolute or real time spent should be
made. The purpose of the relative time spent totals is to rank order the tasks
performed by any group from the most to the least time-consuming. Since
incumbents may work shifts of differing lengths, or may only be working part
time on some or all of the tasks, conversion to real time spent is difficult.
There is, however, evidence that the ordinal relationship between
relative time spent and actual time spent is high. Garcia (1984) reported a
correlation of .81 between relative time spent on the 9-point scale used in
OSM and actual observed time. Earlier, McFarland (1974), in a study of the
use of occupational analysis data (OSM) by management engineering teams (MET),
21
coo
I -7=0

winn

w w

WlLii
_j = U
< =

L
C

0- IL C0
wA.0 -, Li
IL C, 4 . V4 41
.44 Ca a .. . a G w ~ in

3 3 A XI9
C E
aA ' .O I Ia4 L. &n4
o~~~~~~ 2.4 > IC CI L -
'4' * J
0.5 - L
I~4 .6 -. 1 Li 1,
AI 4, Ca. a0 U, -14..','~

*. L, C_ -1. i 91 M
a 0 W_- a-.- C uu4 0 L. 0-
CC al 0cnr. a a ),4,M *-i ).4 wo a I -0 M L L.- 4J
Li C Li L- IA -I>.JWA A ~ . I I CAA1- > >- a C..
- 00. . .4C0 4.4.. r 003C
. a* I L- L aC 4,

4..~ ~~~
4, 4. 4,9I-LC 4. 4, ~ ~ - .4A L L i- - I.4.

-0co 0 0 0- 0 I I CE--4.L 0 LI.IaII ICLi.-


c u U- L) ci Ui _u -1 I AL -I ' II 0
4-a~I<-- <-
I1 1. 4,a- fIEa, W -f..~4. M 4, - - IC:
C ) u) in u in m c C I, -l .4A4.@II IL
I *a--i m=4~4.' vC:
o CD(* CD lW C (12 (*2 I I CLI.. 4L40,, I I i,'C4C

a - - - -- 0-- iC
1~ a . U vlSq C CO41
MEAAA
co41 m I= a
4 - -- 0 ,4 - I - - .N C I I~ UUL iuE
(* I I
=z
,

0 w w
.
I; oC.JI-
V-
5I 1
I
~.4--;~ii
M W EaCCII
L4,
aAal tM in.
C'N3N3N r.
I I I
I
I L a 4
I u.
.4 U. wwU .0 LL. U. C-I ) 3..,:04U'a
c
C ~ ~ n.-00 A
MT_)
V D
I'
1.4iL
0C.(4 C
VA 0. 144, I AAIa aa
I-. U 0C4 a &iC 0 LC4L.~aL.4.4 z CC %..1, f4.W W
a00C a =.a i-a- aC0IL a00~I~ I Ii I IN-i L

a --.-ola. 0. 0.0 C. r W- . Z. 1-00AI 0C V-I I I . OW - 40


IC2 r- >C-
-
C ~ 4, ~
4,44IA4 4,44, ~ LL ~ :'
LiLi4'I. ~ -n
C
I'~a =i:i =..~...--.-2UCI
U.---.---------------
LL uI =i n~ -I
2
-m 4,) A- I I *(1l Cm X x a r. dI CI CrI I I t .I -U WLn 2Lm UDU
-wL IA

a . Uo . IA I r-
IA
Licfl-.0
4.
0
Li 0 Ci L
0 ai 0 0
L.
0
0
L. L.
-A
0
i0 C
I(n2 r 1
fmI I
I
UL6
L.
4,
-
00
4,
,2 a
A
-C
&-'=
I
W(4, 0,,.6
IL-4 W44.4..0
, IU
ED
~ I 1,
LL-L
. - 4-'
iI 1,44
.U WU.UI ULL>
,44-

01 11.411 i
r_ 0 VC O 41 N ('Ca~ -i -l U2'NO f f f '4 ('2> mC2 I -C '
4 a = 2, D E0 !. 'S-2 -10 1 0 0 .- O C a4--
>4 P4I~
win n l4
am I-, iIA I

Li I .4I CC-I
0I cow I 10--I
. '0 N co 4,
0.-0 W N ('2 i, am
4. I i c Ln Kq. o c- C''@
-r 'ON
-a.4K004N la-I ' 4 u
4.4 %0.~ 0 -0-0 -0 'a %a '0 vC 0 '00
-4G000''.' '0.00'o.%0
a.4 -'0.0
0 3 o 00 0 0D 0 i _ c a: c W w : 1 ui tn cn u wwf m
I II
x*
I I

'0I
ENI II

('2 I 22
000

(12 C2
W 0 fn
M UI

LLIW

U=0-

(LJ'- - - - -- - -' -'C2


CI- W tfl US IA TA W- tito
00~~~.
- *- A-.

- N-.44444
c a)4>=444 P' 4

C3
- ~r ~r ~
-O - 'r. t 'r,
r.'JU

3.. 0- , - - 0- 0Z
W tn-. 1W.4 s 4i -
L..0 C & = C 0 ..
o =- 44.4 a~ a.=.=-
tr -to ,. 0 AnC..J

..- 44 3.. 000 >4t


44 '00--
L ~> -
4-' - 0
m M L
1 W4
01.0.
404 (

= % t>t 0 -L.zL '0 0 N. J


00 LI f- L. C 0- W to M4
(120 W Ion - t'' -0. . , O

000 44 aOO00
-~~~~. 44(0 I IA-11

rd X0 P4' :1 1 4.L*-
40.4
04 &44 .-0C..C
44 U.U.L L & U U 4- IA %A M00000
4 IL ti ( M W44. =4 -0 . 0 '.
W . . W __ W I! Q
.... -, L - _'. 0 M..' ; 4 )I.0 cc44
0 =4 t-0 c.0nrC;f 0. _a0 C
Z . 3 . .U L44 L mC.C. -=.=-4.(
4 =EN-ala 4
X4 4c-
'0(
4, on a4 a0 L. §4 009 Li.0-~ I. L0 L
o0 a,4 W 0 U v ' 0 CL 09='.-44
3. IL a 41 41 .0..L
CY>>>L L0r 0If .<< .L .v0 - '0 L 29L 4- te44.
LL . 4La a= -'
00iW _W OLZ - - a4. a a,. a, C.>U9444490 OC- c
C2 W 6,q 044 0 44' a" > - - -- -N&02aa .. 004 04-4-4-4-4-4
20l6
to44444CU ot E1 E. 0
.4 :- * zZ:F M.0C> ot X30 3 0.
44J~it9I4. 49 0'0' E0_ a -. ,.o
L 0 ",.44 L 2.1 -2 z9904
4 0 4 L4"44-!-a44200C

-44 000000000000000.00440000 0000000 0000 000 0 000 00


'C o r~~~~~.3x
O , . ' o~~~~~~~o . . . . 4 4 4 4 0 . . - ' . 4 . 0.40. 0L0

0 UU..,1.L.I
4440 ' 40. VI.. 00 0 ~004444234
cin

C .

wn

- - -- - - -- - ----- ----
0j <
1:0

,..- I-. U. -- tWt

0.1
-- " N C

LUm .u a
I-

C C C C CC CC CC C CCC CCCCC
w

,.0, "If1r a, . ,
NfL. tlu
"" ', 2-. cI N7- -N. C rN-l_,
aCON N N_ N fl-- - I" -.P7
v.,NNNNNNNNN
aC =
Q w a, - m.'N -N L.. &-N1n
N NNN .

a, L- L ,4,J Ill. to L a G f- L

_a aSI
ll
VM
'<Z C'J(7JC'
II,-
----

NJL L: L.r
V
-toZ..
I C~C
MUC3IL
rIAnmL C"
U%,
I" :l. u,
" I -I,---
-
o,
IA
IIA0
IA-

.I.,_
=:+
%- IAIA
Vt
P7t

C -
-
Z'9.'-I I aA90 A I
c L
.wqC
A- 2.jI

IA
0a
IA eIA -.------.-------
-iV Ag
* -AI
LA-
vi2L
a.
I =
>

A
gu
LA41 0L ta.

LJJ M-
-AIA,I
=,=
4B

NC
ViL
-toC
IN-s
LAC
c-LA
IL
aI f
AA9
Il
a
-a I -A
aC-
IA LIA IA
- CN.J-- _W
_
-
0 C
A
.
IA IA -IA IA 4
--- LAnG.
I
L
L

0'
. CA
0a
---

NJIAI~a .. LAA. ;=AC; ula A I-0 .. L LA _LA


LA. JLA
J'JANJN%.0 n IA.. L C,..
.- NIA
= IL -LA a. =
IA L 0 L.
B. 4LA-B. . L>.> 44
LA
LA3J .!.. ta00
3A 3 3- a C3 V LA-C.a
NJ.NIJ I IAI IA - -10
c W
1--N 1A 1 I IJ I I- a,N I I_ . r0
a.IA I-I uI LA , '-IAI_ LA'
j" S" 0" i- LJ c-.I. s- f
"A "= WV
-w, 2
-f. -- =w -M- - 11 -r L: W: aP L. w. C_,
3 -3A - ZNB.C
3 2JN 3. LA IAI .. AA aIL J A @AJA N

-~ ~~~~~~~~~ . ->A
-A- -AJ-I
-JJ'INNN..
-- -aN
-CIA- -~B-A - -C IA
-J m

in o L4 M if7 III nAk


NL toA a 4IAE I -s 1 34'. @I.AB. IANJ LA wA ;,C ". C 0.
.L a9 %,11 B. " -N .aIA.LA-LA
11 -1 *%,%" " N %,AJALAA
NI "Ia %,1
ALA % 1 11N%
IAC.
Wt La'-yi E-
L1WM11 f -. - -- - - ------
C - -C~IL-JIA
IA-AJAIIII -. -AN-L -j

>
NJ ~ 00-
>>
~..J > ~
> rN r. M
~ AL 11 Z-ILa CL
-I a&aAN
9
a,~aN
L 41 -a
91JN
. LL INL =
. JUJLI ,41-
-
rIaLMA..L _ -IA
aIA
1 LII
a L r- -. mLA
CU
C.

IoA; NJ -A C -ZN
-- - -A-A
IA W O J J L LA aI - J~
UA 04 A9C%. A 0COC.A.AI-N. IO--ALLAAOAABICI.0 L L
a> =C2nI.CN2AI-~.CJ-N20-JLAIAAAA-
:2> OOJA0CAN

-- B A tI--
O~. .2 ..- 0L E I CI.0 A A AIA~ ALA 01 A . L224-.0
reported a similarly high relationship between relative time spent ratings
and the measured actual time provided by the MET. In the civilian community,
Page (1976) found a similar relationship between relative and actual percent
time spent. Since the ordinal relationship is quite good, it may be that some
linear transformation of relative time spent to actual time spent is possible.
If such a transformation were possible, use of OSM for manpower determinations
could be valuable. A relative-to-actual time spent transformation method would
be especially important for LCOM modelling of non-maintenance AFSs.
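If paired observations of relative and actual time were available for an AFS, the kind of linear transformation suggested above could be estimated by ordinary least squares, as in this illustrative Python sketch (the data points are invented for the example and carry no empirical weight):

import numpy as np

relative = np.array([2.1, 4.8, 7.5, 11.0, 16.4])   # hypothetical percent time spent
actual   = np.array([0.4, 1.1, 1.6, 2.5, 3.6])     # hypothetical observed clock hours

slope, intercept = np.polyfit(relative, actual, 1)  # least-squares line
r = np.corrcoef(relative, actual)[0, 1]
print(f"hours ~ {slope:.3f} * relative + {intercept:.3f}, r = {r:.2f}")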

An example of a job description (PRTJOB) is shown in Figure 6. This job
description is for the total sample of workers with duty AFSC 326X5B. It shows
a listing of all the tasks (truncated in the example) performed by these
workers, the percent of members performing each task, the percent time spent
by those performing and averaged across all members, and, in the last column,
the cumulative sum of time spent by all members. Every job description
computed by CODAP contains this basic information.

A duty description is also computed (see Figure 7). Percent performing in
a duty is determined by counting all members who perform at least one task in
the duty. Similarly, the amounts of time spent on the tasks in a duty are
summed to provide the percent time entries.

A fourth computation early in the data analysis effort is the hierarchical


clustering (OVRLAP, GROUP) of job incumbents based on the percentage of time
spent on tasks (Archer, 1966). The process is based on the amount of overlaps
of time spent on tasks reflected in any two job descriptions. In the first
step, the time-spent-on-tasks overlaps of each incumbent with every other
incumbent is computed. The two most similar job descriptions are merged, a new
2-person job description computed, overlaps determined and the next two most
alike merged. This process continues until there is only one job description
representing the entire sample. This clustering process is printed in a
diagram format for use by the occupational analyst.
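The following sketch illustrates the agglomerative logic just described. It is not the OVRLAP/GROUP code; the overlap measure shown (the summed minimum percent time two descriptions share across tasks) is one plausible formulation for ipsatized time data, and the input descriptions are hypothetical.

def overlap(d1, d2):
    """Summed minimum percent time two job descriptions share across tasks."""
    return sum(min(d1.get(t, 0.0), d2.get(t, 0.0)) for t in set(d1) | set(d2))

def merge(d1, n1, d2, n2):
    """Member-weighted average of two group descriptions."""
    n = n1 + n2
    return ({t: (d1.get(t, 0.0) * n1 + d2.get(t, 0.0) * n2) / n
             for t in set(d1) | set(d2)}, n)

def cluster(descriptions):
    groups = [(d, 1, (i,)) for i, d in enumerate(descriptions)]
    while len(groups) > 1:
        # find the most similar (highest-overlap) pair of current groups
        i, j = max(((a, b) for a in range(len(groups))
                    for b in range(a + 1, len(groups))),
                   key=lambda p: overlap(groups[p[0]][0], groups[p[1]][0]))
        (d1, n1, m1), (d2, n2, m2) = groups[i], groups[j]
        d, n = merge(d1, n1, d2, n2)
        groups = [g for k, g in enumerate(groups) if k not in (i, j)] + [(d, n, m1 + m2)]
        print("merged", m1, "+", m2)   # one line of the clustering diagram
    return groups[0]

descs = [{"A": 80.0, "B": 20.0}, {"A": 75.0, "B": 25.0}, {"C": 100.0}]
cluster(descs)   # the two similar descriptions merge first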

The analyst determines, from the diagram and from actual job descriptions
for groups selected from the diagram, the structure of the specialty. This
structure is expressed by the job types--those people who are doing the same
tasks--and job clusters, which are groups of highly related job types. These
data are important for classification purposes--deciding whether a career
ladder should be subdivided or merged with another--and for training
decisions--whether to channelize or not. The data also are useful for multiple
other purposes, such as in the benchmarking and performance measurement areas.
Of considerable importance is the existence of different jobs in every
specialty (Driskill & Mitchell, 1979). The similarity, or lack of similarity,
dictates classification structure and training programs. A significant problem
with the grouping process is the lack of an empirical basis for determining
clusters and job types or for defining the level of homogeneity or sameness of
the work performed by the members in a given group (for a discussion of this
issue, see Harvey, 1986). The product of the hierarchical grouping process is
provided in summary form in the occupational survey report prepared for each
specialty.

[Figure 6. PRTJOB group job description for duty AFSC 326X5B (truncated): task statements with percent of members performing, percent time spent by members performing, average percent time spent by all members, and cumulative average percent time.]

[Figure 7. Group duty description: percent of members performing and percent time spent in each duty.]
In addition to the job descriptions and background data for each of the
job groups (described above), a number of other analyses are available.
Typically, job descriptions for each skill level, for each using command, and
CONUS-overseas group are reported. Each of these is accompanied by a summary
of the background information section variables for each group.

Computation of these job descriptions is based on the variables listed in


the Dictionary. A job description can be computed for any of these variables
or a combination of them. Thus, if a user wanted a description for 5-skill
level airmen who worked in a shop area for a particular weapons system, the
combination of the appropriate variables from the Dictionary will produce the
description. It is this feature that makes development of the background
section so important. The job descriptions produced show the tasks performed
by the selected group, the percentage of the group performing them, and the
relative percentage of time spent on each task (see Figure 6).

When the background section does not include appropriate items for
computing a desired description, another option exists if the task list is
equipment specific. One or more tasks can be combined into a computed variable
used to identify persons performing the key tasks. This computed variable
then can be used to print a job description for those personnel.
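A minimal sketch of both selection options, assuming hypothetical respondent records and field names (none of which come from the actual system):

all_respondents = [
    {"skill_level": 5, "work_area": "shop", "weapon_system": "F-16",
     "tasks": {"Q123", "Q636"}},
    {"skill_level": 7, "work_area": "flightline", "weapon_system": "F-15",
     "tasks": {"Q640"}},
]

# Background-variable selection: 5-skill-level airmen in a shop on the F-16.
group = [r for r in all_respondents
         if r["skill_level"] == 5 and r["work_area"] == "shop"
         and r["weapon_system"] == "F-16"]

# Computed-variable selection: anyone performing the key equipment tasks.
key_tasks = {"Q123", "Q124"}
group = [r for r in all_respondents if key_tasks & r["tasks"]]
print(len(group))   # 1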

Task Difficulty Ratings

Using the USAF Job Inventory task list, the USAFOMC obtains responses from
a small number of senior technicians (30 to 50) about the relative difficulty
of each task, defined in terms of how long it takes a person to learn to do the
task relative to all other tasks in the inventory. The raters use a 9-point
relative difficulty scale. Results are averaged for each task across raters
and summarized, and an intraclass correlation of the responses is computed to
determine interrater agreement. A .90 level of agreement is the criterion,
and over the years few instances of agreement falling below .90 have occurred.
The task difficulty ratings can be reported for each task alone, or along with
other task data such as percentage performing. An important point to remember
is that the difficulty of tasks associated with specific equipment items is
not available if the task list specificity is not at the end item level.
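The interrater agreement computation can be sketched as an intraclass correlation of a tasks-by-raters matrix. The variant below, ICC(3,k) for the reliability of the k-rater mean, is one standard formulation; the exact variant used operationally is not specified in this report, and the ratings here are simulated.

import numpy as np

def icc_3k(x):
    """x: n_tasks x k_raters matrix. ICC(3,k): reliability of the k-rater
    mean under a two-way mixed model (consistency)."""
    n, k = x.shape
    grand = x.mean()
    row = x.mean(axis=1, keepdims=True)   # task means
    col = x.mean(axis=0, keepdims=True)   # rater means
    msr = k * ((row - grand) ** 2).sum() / (n - 1)                     # between-task MS
    mse = ((x - row - col + grand) ** 2).sum() / ((n - 1) * (k - 1))   # residual MS
    return (msr - mse) / msr

rng = np.random.default_rng(0)
true_difficulty = rng.uniform(1, 9, size=200)                # 200 tasks
ratings = np.clip(true_difficulty[:, None] + rng.normal(0, 1.2, (200, 30)), 1, 9)
print(round(icc_3k(ratings), 3))   # agreement of 30 raters' mean ratings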

Training Emphasis

The USAFOMC also administers the USAF Job Inventory to 30 to 50 senior


technicians in the AFSC to obtain training emphasis ratings for tasks. They rate on a
9-point scale the tasks they believe should be included in first-term training.
The same .90 criterion is applied, but smaller values are much more often
found. These data can be printed with each task, or be combined with other
task data.

PRTMOD

While numerous data displays are available from CODAP, the most important
for TIES may be the product of the program with the acronym PRTMOD. The run
stream producing this display permits the matching of survey tasks and
associated data items (such as percentage of incumbents performing and ratings

of task difficulty and training emphasis) with data items from other sources.
Most occupational survey reports include several of these displays--at least
one matching tasks and Specialty Training Standard (STS) elements and another
matching tasks to Plan of Instruction (POI) criterion objectives.

An example of a PRTMOD product is in Figure 8. This example is annotated,


so that it is self-explanatory. Although truncated, it reflects several
important features:

1. It shows how information and data from sources outside CODAP files can
be matched and displayed with survey task data.

2. It displays how data from job descriptions for various groups of


workers can be shown in summary format--as opposed to having to review the
descriptions individually.

3. It shows how training emphasis and task difficulty data can be


conveniently related to task statements as well as other data.

4. ASCII CODAP, developed by MAXIMA for the AFHRL, will permit various
statistical analyses of data from the different sources and display of these
analyses.

For TIES, the PRTMOD process will permit the display and, potentially, the
analysis of data from the various task identification systems. These data
would be grouped into modules that may be displayed individually or in
combination with other modules.
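A toy sketch of the PRTMOD matching idea, joining survey task records (with their factor data) to elements from another source by task identifier; all records, identifiers, and values below are hypothetical:

survey = {
    "Q636": {"stmt": "Align AN/ASM-375 INS test sets",
             "pct_performing": 41.2, "difficulty": 5.3, "emphasis": 6.1},
}
sts = {"Q636": "STS 326X0 para 14c"}   # task id -> matched STS element

for task_id, data in survey.items():
    # unmatched tasks would surface as training-coverage gaps
    print(task_id, data["stmt"], "|", sts.get(task_id, "UNMATCHED"),
          "|", data["pct_performing"], data["difficulty"], data["emphasis"])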

CODAP is a comprehensive set of computer programs that provide a flexible


means of analyzing job data. In ASCII CODAP, analytic and display capabilities
are vastly extended. Explanations of these capabilities are presently being
prepared. Implementation of ASCII CODAP is forecast for early 1987. Data
analyzed by the present CODAP system can be transformed into ASCII CODAP
structures.

Storage and Retrieval of OSM Data

OSM data are stored in four ways. Hardcopies of inventories, narrative


reports, and data displays are maintained at the USAFOMC. Raw data tape files
are retained at the AFHRL. Report data tapes, the source of the displays and
the written reports maintained by the USAFOMC, also are kept by the AFHRL. These
tape files are readily accessible through the Occupational Research Data Bank
(ORDB) at AFHRL or by request to the USAFOMC/OMY. No data analysis may be
performed on the ORDB or report files. Any analyses that may be required must
originate with the raw data tapes.

Reliability of Occupational Analysis Data

Reliability, especially for data from a survey technology, is a crucial


question. For the OSM data, there are three specific examples of data
consistency that are illustrative of the issue. First, Christal (1971)
selected 10 previously surveyed career ladders. The cases in each skill level

[Figure 8. Annotated PRTMOD display (truncated): survey tasks matched with Specialty Training Standard elements and with group job description data.]
The cases in each skill level within each of these ladders were randomly
divided in halves. Consolidated job descriptions were computed for each of
these half samples. The percent performing and percent time spent values for
each pair of job descriptions were correlated. Table 2 is reproduced from the
Christal report. The correlations, as the table shows, are very high. In
addition, Christal reported that the high values were applicable to job
descriptions for as few as 15 members.
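The split-half check can be sketched as follows (the data are illustrative; real analyses correlate vectors spanning the hundreds of tasks in an inventory):

import random
import statistics

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def describe(members):
    """Group description: task -> average percent time spent (zeros included)."""
    tasks = {t for m in members for t in m}
    return {t: sum(m.get(t, 0.0) for m in members) / len(members) for t in tasks}

def split_half_r(members, all_tasks):
    members = members[:]
    random.shuffle(members)
    half_a, half_b = members[: len(members) // 2], members[len(members) // 2:]
    da, db = describe(half_a), describe(half_b)
    return pearson([da.get(t, 0.0) for t in all_tasks],
                   [db.get(t, 0.0) for t in all_tasks])

members = [{"A": 60.0, "B": 40.0}, {"A": 55.0, "B": 45.0},
           {"A": 70.0, "B": 30.0}, {"A": 65.0, "B": 35.0}]
print(round(split_half_r(members, ["A", "B"]), 3))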
The second evidence of stability of the data over time is that reported
by Driskill and Bower (1978). They reported that of 76 career ladders
surveyed between 1 January 1977 and 30 June 1978, 71 represented resurveys.
Of these 71, 59 remained stable over the time since the previous survey.
They specifically cite two surveys as examples. No differences of structure
were found for either the Dental Laboratory or Recruiter specialties.
Further, for the Recruiter specialty, none of the participants in the second
survey were the same as in the first: inventory development specialist,
analyst, or respondents. Since specialty assignments are controlled tours,
none of the personnel with the AFSC who were in the initial survey were still
assigned the AFSC five years later when the resurvey occurred. Yet, the two
surveys yielded highly comparable results. Other evidence concerning this
consistency and stability can be found in Christal (1969).

The third significant evidence is that reported by Garcia (1984) on the
validation of the relative time spent rating scales. The criteria for the
validation effort consisted of direct field observation of actual time spent
and frequency of observed task performance. Computer programming specialists
(AFSC 51X1) completed a job inventory for the specialty consisting of 577
tasks. In the second phase, actual task performance and performance times
were observed and recorded. When the actual frequency and time spent data
were correlated with the same variables from the job inventory
administration, the correlations were .79 and .81, respectively. Stability
was achieved with as few as 15 members.
Uses of Occupational Analysis Data

Over the past 19 years of its existence, numerous uses of occupational
analysis data have been made. The major uses, and those with special
relevance to a TIES, are described below:

1. Occupational classification structure described in AFM 39-1 is based
in large part on survey data. Several thrusts are evident. First, the data
from the hierarchical grouping process, which reveals the job structure of a
career ladder, are the basis for decisions on the division of a ladder into
two or more ladders or shreds. Such decisions are made on the dissimilarity
of the work performed by the job types. Where there is little similarity,
some division of the career ladder, provided such criteria as promotion
potential and overseas rotation are met, may be required to minimize training
requirements, both resident and on-the-job, and to facilitate work
accomplishment.

The difficulty in making these decisions lies in the fact that the survey
data reflect only whether personnel in the job types are performing the same
tasks or not. The data do not directly address the questions of similarity of
the knowledge and skills involved in task performance.
Table 2. Reliability of "Percent Performing" and "Percent Time
Spent by Total Group" Vectors in Consolidated Job
Descriptions (From Christal, 1971)

                                          Percent       Percent Time Spent
Air Force Specialty         1/2 N(a)      Performing    by Total Group

Helicopter Mechanic
  43130                        39           .965             .955
  43150                       256           .996             .986
  43170                        81           .961             .828
  43190                        26           .957             .965
Medical Administrative
  90630                        76           .936             .894
  90650                       347           .985             .969
  90670                       189           .981             .951
  90690                        56           .968             .961
Management Engineering
  73331                        42           .971             .958
  73370                       101           .984             .957
  73371                       180           .994             .973
  73391                       133           .952             .889
Outside Wire/Antenna
  36150                       199           .987             .965
  36170                        92           .963             .919
  36190                        22           .958             .895
Electrical Power Production
  54330                        70           .953             .952
  54350                       457           .987             .986
  54370                       143           .981             .970
Radiology
  90350                       180           .997             .996
  90370                        78           .986             .975
Education and Training
  75132                       146           .983             .972
  75150                        45           .974             .918
  75170                        30           .960             .935
  75172                       381           .995             .992
  75190                        28           .891             .895
Medical Materiel
  91530                        63           .888             .813
  91550                       292           .978             .959
  91570                       137           .949             .908
  91590                        21           .931             .890
Preventive Medicine
  90750                       113           .979             .955
  90770                        63           .966             .917
Jet Engine Mechanic
  43230                        76           .979             .961
  43250                       473           .997             .992
  43270                       241           .985             .968
  43290                        35           .982             .974

(a) These values indicate the number of cases in each of the two subsamples.
Total number of cases entering into the computations reported in this table
was 9,822.
Second, two or more career ladders are sometimes surveyed jointly to
determine the degree to which members of one AFSC perform the tasks of another
AFSC. These data impact decisions to merge career ladders. The same
difficulty exists as for decisions for dividing an AFSC--the data do not
address similarity of knowledge and skills involved in task performance. The
question of whether members of one AFSC can effectively transfer to work in
another AFSC is not normally addressed by survey data. There have been a few
occasions when the question has been addressed in multi-ladder surveys by
employing concurrent interviews with members of each of the AFSC being
surveyed. During these interviews, each task for each AFSC is reviewed to
determine similarity -of knowledge and skills with the tasks of another AFSC.
This process is much more time-consuming and resource intensive than the normal
inventory development process. The process does, however, yield greater
information on which to base AFS merger decisions.

A further use for classification has been to use task difficulty data to
determine aptitude requirements for AFS (see, for example, Garcia, Ruck, &
Weeks, 1985). As a result of research by the AFHRL, difficulty ratings for the
tasks of a specialty are benchmarked so that they may be compared with the
ratings of other specialties. Based on this benchmarked difficulty, aptitude
requirements for an AFS are estimated. While the feasibility of operational
implementation was established, such action has not occurred.

The importance of the technology lies in its being at present the best
method of relating aptitude to work requirements. For TIES, it offers, in
combination with other data from the OSM, LSA, and MDC, a means of benchmarking
difficulty of tasks for new equipment and estimating aptitude requirements for
the tasks for this equipment.

Survey data also were the basis for categorizing career ladders according
to their mechanical,- administrative, general, and electronic requirements
(MAGE) (Driskill, Keeth, & Gentner, 1981; Bell & Thomasson, 1984). Each of the
four categories was broken down into smaller, more meaningful categories. For
example, for the administrative area, three types of administrative tasks were
discovered: clerical, computational, and office equipment operation. Subject
matter specialists from various administrative fields then identified a set of
tasks representative of each of these types.

The categorization was accomplished by subject matter specialists for each


AFS. These specialists each reviewed a sample of tasks from their AFSC. These
tasks were selected on the basis of a range of percent performing and
difficulty and training emphasis rating values. The specialists determined the
kinds of MAGE requirements in each of the tasks by comparing the AFSC tasks
with the benchmark tasks. As a result of the study, each AFSC was categorized
according to the primary requirements of the sample of tasks.

2. Occupational analysis data have been an important source for making


training decisions. In the early years of the OSM, training courses (POI and
STS) were compared against the percentage of airmen in their first Air Force
job performing the tasks of the specialty. Adjustments of course coverage were
made from this comparison. Later, ATCR 52-22 formalized this practice and
provided guidelines for making decisions.

More recently, the data have been a primary source for Utilization and
Training (U&T) Workshops. Representatives from the Air Staff, using commands,
and the training community review the job data to determine how the workforce
is being utilized. Once utilization issues are resolved, they then sort the
tasks, using ATCR 52-22 guidelines and task difficulty and training emphasis
ratings, into those tasks to be trained in the initial resident training
courses and those tasks to be trained on the job or in advanced and lateral
courses. This process is being modeled in some current research to develop a
Training Decision System (TDS), which will be addressed briefly below.

As a data base for making training decisions, the occupational analysis


data have provided significant input to the Instructional Systems Development
(ISD) model. This model is generally well known and need not be repeated here
(see AFR 50-8 and AFP 50-58). What is important is that survey data help
satisfy the requirements of steps 1 and 2 of the 5-step model:

a. Analyze system requirements

b. -Define education and training requirements

c. Develop objectives and tests

d. Plan, develop, and validate instruction

e. Conduct and evaluate instruction

It is important to note here that the data are employed for decisionmaking.
Interviews with training development personnel reveal that the task data
(percent performing, difficulty, and training emphasis) are the initial source
for ISD. They indicate that for actual development of training, task
specificity is an issue. In some cases, tasks are combined to represent what
more appropriately might be entitled jobs. In others, tasks are further
defined. In every case, detailed task analysis is required.

Nevertheless, in the computer-assisted task analysis program developed by


the Training Development Service, a division of the USAFOMC, the survey task is
the reference point. All task analysis data are keyed to appropriate survey
tasks and associated data.

Also, in the training area, occupational data have been the basis for
forecasting training requirements for developing systems for which real
data are not yet available. In one instance, a scenario-based approach was
used (Tartell, 1979). Highly qualified personnel were presented an extensive
list of tasks and equipment that might be employed to accomplish an
accompanying scenario. These personnel indicated which of the tasks and
equipment they believed would be involved. Interrater agreement was
exceedingly high (in excess of .90), and the results were used to establish
training.

In the second, the approach used a form of comparability analysis. After


an analysis of a new weapon system, analysts were able to find existing weapons
systems with similar mission capabilities and configurations. Using tasks that

described the comparable systems, subject matter specialists for the newer
system indicated tasks they believed appropriate for the newer system. Again,
agreement among the specialists was quite high, and their estimates- of
requirements were used to develop training programs. While the accuracy of
their estimates could not be assessed, their high level of agreement provided
the best basis for training decisions in the absence of any other source.

Partially as an outgrowth of these two forecasting efforts, the


Occupational Analysis Division of the USAFOMC created a branch to provide task
information and data for new weapon systems or for new functions to be
incorporated in non-maintenance AFSCs. This organization is a prime candidate
user of TIES.

3. While a variety of research studies have employed occupational


analysis data, three important current research efforts utilize these data and
would, it seems obvious, benefit from a TIES. The first of these is research
to develop a Training Decision System (TDS). The objectives are to identify
utilization patterns, model alternate patterns, determine the costs (probably
relative) of resident and on-the-job training, develop task training modules,
and provide data for decisions about where and when these modules should be
trained.

The task modules are developed initially from occupational analysis data
by hierarchical clustering of the tasks to show how they are co-performed--
that is, tasks that tend to be performed together by the same job incumbents.
These modules are then modified by subject matter specialist judgments. Since
the USAF Job Inventory tasks are the basis of the clustering, there are
instances (see, for example, the tasks in Figure 3b) when the tasks are very
generic. The array of systems or equipment on which job incumbents work is not
enumerated. Addition of MDC data, which are weapon system and even end item
specific, would add a level of detail that can be expected to enhance
decisions about the task modules.

A second current research initiative is directed at building an Automated
On-the-Job Training System (AOTS). This research calls for the development of
a master task list (MTL) for each AFSC. In addition, there is to be a local
task list for each unit. Interviews with the members of the research team
indicate the master task list would contain these kinds of items (a minimal
record sketch follows the list):
1. Task statements that describe the AFS knowledge and performance
requirements

2. Task identification numbers

3. Source identifications from which tasks were taken

4. Specialty Training Standard (STS) identifications containing


corresponding task statements

5. MTL user identification codes

6. Training material identifications and locations

7. Task certification-before-performance requirement codes

8. Task recertification requirement codes and frequencies

9. Common subtask requirement codes

10. Position task requirement codes

11. Task factors (percent members performing tasks, task difficulty,


training emphasis, etc.)

12. Weapon system or equipment that AFS supports

13. Support equipment required to perform tasks

14. Training/evaluation/development priority codes

15. Mandatory task training and performance requirement codes

16. Subtasks (identification codes)/performance sequence

17. Task steps/performance sequence
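As noted above, one way to picture an MTL entry is a record layout condensing the 17 elements; the Python sketch below is illustrative only, and the field names are invented rather than drawn from the AOTS design documents.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MasterTaskListEntry:
    task_id: str                      # 2. task identification number
    statement: str                    # 1. knowledge/performance requirement
    sources: List[str]                # 3. source identifications
    sts_refs: List[str]               # 4. corresponding STS identifications
    user_codes: List[str]             # 5. MTL user identification codes
    training_materials: List[str]     # 6. identifications and locations
    certify_before_performance: str   # 7. certification requirement code
    recertification: str              # 8. code and frequency
    common_subtasks: List[str]        # 9. common subtask requirement codes
    position_codes: List[str]         # 10. position task requirement codes
    pct_performing: float             # 11. task factors:
    difficulty: float                 #     percent performing, difficulty,
    training_emphasis: float          #     training emphasis, etc.
    weapon_system: str                # 12. system or equipment supported
    support_equipment: List[str]      # 13. equipment required for the task
    priority_code: str                # 14. training/evaluation priority
    mandatory: bool                   # 15. mandatory training/performance
    subtask_sequence: List[str] = field(default_factory=list)  # 16.
    step_sequence: List[str] = field(default_factory=list)     # 17.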

Occupational analysis data are a source for some of this information,
particularly items 4 (from PRTMOD) and 11. Many of the master task lists,
however, would consist of generic tasks and would not, for maintenance AFSCs,
reflect the many different weapon systems and kinds of equipment for which
task performance is required.

Local tasks are tasks that supervisors identify as required in specified
duty positions but which are not contained on MTLs or previously listed as new
task requirements. New tasks are knowledge and performance requirements
identified during reviews of new and revised technical references. Local AFS
task lists consist of the following data elements:

1. Task statements

2. Subtasks (identification codes)/performance sequence

3. Task identification numbers

4. Follow-up requirement statements (statements generated and displayed
within the AOTS that indicate local or new tasks have been identified and that
they must a) be matched against the MTL file, b) be coded if unmatched and
added to the appropriate MTL, c) have task-related data identified, cross-
referenced, and stored, d) be assessed to determine if they should be added to
the appropriate MTLs [based on the numbers and locations of users], e) have
sources of performance and proficiency data identified, cross-referenced, and
stored, and f) be deleted if found to duplicate entries on the MTL file)

5. Source identification from which tasks were taken

6. User identification codes
7. Training materials identification and locations
8. Task certification-before-performance requirement codes
9. Recertification requirement codes and frequencies
10. Common task requirement codes
11. Position task requirement codes
12. Task factors
13. Weapon system or equipment that AFS supports
14. Support equipment required to perform tasks
15. Training/evaluation/development priority codes
16. Task steps/performance sequence
Survey tasks, since they are not limited to equipment-specific tasks, can
provide some of these data. Depending upon the comprehensiveness of the
inventory, some tasks may not be included. Thus, additional local task
development effort can be anticipated. Availability of MDC and LSA data in a
system that would permit aggregation of similar task items could materially
improve the development and quality of the master and local task lists.
A third effort, Small Unit Maintenance Manpower Analysis (SUMMA), is
developing an F-16 data base oriented toward job specialty restructure. This
data base will permit redefinition of specialties without regard to existing
specialty structure (Moore & Boyle, 1986). Special purpose task analysis
efforts have been required (e.g., to assess similarity of knowledge and skill
requirements). OSM data tend to be general (in most cases), and not always
directly relatable to specific equipment, making cross-AFS and cross-Mission
Design Series analysis difficult. Specific equipment tasks can be identified,
however, in OSM by using combinations of background items to generate job
descriptions. The inability of existing task identification methods and data
bases to handle this urgent MPT problem (Boyle, Goralski, & Meyer, 1985) was
one of the reasons for the development of a TIES (Ruck & Boyle, 1983).
Critique of the Occupational Analysis Program

Occupational analysis data have several important features that need
summarizing:

1. The survey program extends to all AFSs, providing data for a variety
of personnel, training, and research uses. In contrast to LSA, MDC, and LCOM,
which are maintenance oriented, the data are AFS oriented.
2. Tasks in the survey data are at the highest level of generality of the
four data systems. While they have been extremely useful for classification
and training decisions, development of training materials depends on further
task analysis. All uses of the data could be enhanced if task description were
more consistently and specifically related to equipment maintained or operated.
In the LSA and MDC systems, a limited set of actions are related to equipment
items (by work unit codes).

3. Tasks are written in the language of the worker and are intended to
differentiate among workers. Thus, in the maintenance AFSCs, there is not a
close relationship between the action verbs used in surveys and those used in
MDC, LCOM, and LSA. Where task writing guidelines permit, closer correspondence
of survey task language with MDC, LCOM, and LSA tasks or actions taken is
desirable.

4. Especially when task lists are more generic, complete background


information to facilitate processing of job descriptions is essential. More
consistency of coverage is desirable.

5. Data analysis using CODAP is highly flexible and provides a wide array
of powerful analysis strategies.

6. Data that have been analyzed, as well as the raw data, are fairly easily
accessed. Since the raw data tapes are maintained by the AFHRL, further
analysis can be made.

7. The CODAP capability to display and analyze data from other sources
along with task data is an important asset directly usable in a TIES.

8. Data reproducibility has been repeatedly shown to be high in rigorous
tests.

9. Several points of controversy and misconceptions about survey data


exist.

a. The narrative reports prepared by the USAFOMC provide only a


limited part of the data available from occupational surveys. Many other kinds
of information are available, but, unfortunately, what this information may be
and how to retrieve and analyze the raw data requires an analyst expert in
CODAP. For this reason, many applications of the data may have been overlooked
by potential users.

b. Data many times are said to be out of date. While most
specialties are surveyed on an approximate 4-year cycle, the data remain stable
over time, except for some highly volatile AFSs. The USAFOMC has installed a
unit to investigate currency of data. In most cases, most of the data from
older surveys are stable. Certainly, if these data could be augmented by MDC
and LSA data, the currency question would be largely resolved. On the whole,
AFSs have been shown to change slowly over time.

c. The generic nature of the task statements also poses problems for
some applications. Foley (1980), while recognizing the power of the
occupational analysis methodology, nevertheless criticized the task statements

for their lack of hardware specificity and for their use of nonstandard action
verbs and functions.

d. Survey data, since they are collected from individual job
incumbents, provide no information about crew size or team performance
requirements.

e. Although there is a high ordinal relationship, the relative time
spent data for tasks cannot at present be equated with task time, that is, the
actual clock time it takes to do a task. This lack of comparability of
relative and actual times is unfortunate, since task time is an important
driver of manpower in maintenance analysis in LCOM. In addition, the
applicability of LCOM modelling to non-maintenance AFSs would be facilitated if
the relative task times could be used to estimate or determine actual time.

10. All the same, survey data, besides the task difficulty and training
emphasis information, provide information not available in other systems which
is exceedingly important for MPT. First, tasks can be related to skill
level. Second, the job typing analysis provides work structure information.
The power of these data would be enhanced if they could be related to MDC and
LSA data. It would be particularly attractive if job types could be assessed
with regard to the detailed MDC or LSA actions-taken data for the tasks
comprising them. The advantage would lie in the ability to assess whether
common knowledge and skill requirements exist--or whether there are sufficient
differences to warrant different training or classification and assignment
vehicles.

IV. MAINTENANCE DATA COLLECTION SYSTEM

The MDC is the oldest of the task identification and data systems
discussed in this report. It is an equipment or weapons system oriented data
base that does not easily lend itself to analyses of task performance by AFS.
No data are recorded for work performed by nonmaintenance AFSs. Data are
collected for aircraft, missiles, and communications equipment.

Origin of MDC

Prior to 1958, there was no formal reporting system except an
unsatisfactory reporting system for identifying equipment with low reliability.
No formal records of manhours, actions taken, or items maintained were
kept. In 1958, the Technical Failure Reporting System was initiated. In
the mid-1960s, AFTO Form 349 became the vehicle for reporting maintenance
manhours and actions taken. MMICS was implemented worldwide in 1974 to augment
maintenance personnel, manpower, and training accounting and management. At
present, the Core Automated Maintenance Data System (CAMS) is being
implemented. While CAMS does not alter the kinds of data reported, it does
provide for the automation of key input items, as opposed to the manual input
presently employed in the MDC.

MDC is described in AFR 66-1, TO 00-20-2, and AFM 66-267. Its purpose is
to collect detailed maintenance data at base level, including work center, work
unit code, action taken, how malfunctioned, maintenance time, crew size, and
worker identification. From these data, a variety of analyses providing
information about manhours, the reliability and maintainability of equipment
and weapons systems, product performance, weapons system readiness, product
improvement, and support requirements and costs may be obtained (Quick
Reference Guide, undated).

Development of the MDC Data Base

The MDC data base consists of detailed data about on-equipment, off-
equipment, and depot maintenance work. The data originate at base level (or
depot), the input originating on AFTO Form 349 (Figure 9), with the input flow
depending upon whether the maintenance is scheduled, unscheduled, or phase.
Unscheduled maintenance is a term used to indicate that something is broken or
malfunctioning. At that time, an AFTO Form 349 is prepared for the reported
failure and designated for flightline action in the case of an aircraft. If
the failure can be corrected on-aircraft, the work is accomplished. If off-
equipment maintenance is required, an AFTO Form 350 (Figure 10) is prepared and
routed along with the item to the appropriate shop. In some cases, the
maintenance work requires the activity of more than one shop, in which case the
receiving shop prepares another AFTO Form 350 to route the item to the other
shops. The flow for scheduled and phase maintenance is essentially the same,
except the initial AFTO Form 349 does not result from a reported failure.

In every case that work is performed, an AFTO Form 349 is completed by the
workers doing the work. Under systems like CAMS and the F-16 Centralized Data
System, the AFTO Form 349 will be produced automatically and workers
will make entries on a terminal. Under the present manual system, the AFTO
Form 349 is completed by the worker and the data entered into the system by
keypunching the AFTO Form 349 entries.

Figure 9 shows an AFTO Form 349. All data elements are supposed to be
completed. The sources for the entries are AFR 66-1, AFR 66-267, and the 00-20
series technical orders. Many of the codes to be entered are standard, but
there are subsystem and component entries required that must be obtained from
the 00-20 series technical order for the aircraft, or the appropriate technical
order series for the missiles or communications equipment, on which work is
being performed. The standard codes significant for a TIES are shown in
Table 3. Each of them can be related to the item number of the AFTO Form 349.

Inspection of these codes reveals that very detailed information is
provided on maintenance, including these important items: manhours, employee
identification, base identification, command, complete designation for
aircraft, missiles, and communications-electronics equipment, the work center
and work unit codes, and action taken to the component level. From this array
of information, plus the other items collected by the AFTO Form 349, a variety
of information can be summarized.
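For illustration, the following sketch rolls a few hypothetical AFTO Form 349 records up into manhours by work unit code and action taken; the field names and values are invented, and manhours are taken here as clock hours times crew size.

from collections import defaultdict

records = [
    {"wuc": "72117", "action": "R", "crew_size": 2, "hours": 1.5, "afsc": "326X0"},
    {"wuc": "72117", "action": "Y", "crew_size": 1, "hours": 0.8, "afsc": "326X0"},
    {"wuc": "42100", "action": "F", "crew_size": 2, "hours": 3.0, "afsc": "423X0"},
]

manhours = defaultdict(float)
for rec in records:
    # accumulate manhours (clock hours x crew size) per WUC and action code
    manhours[(rec["wuc"], rec["action"])] += rec["hours"] * rec["crew_size"]

for (wuc, action), mh in sorted(manhours.items()):
    print(f"WUC {wuc}  action {action}: {mh:.1f} manhours")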

[Figure 9. AFTO Form 349, Maintenance Data Collection Record.]
[Figure 10. AFTO Form 350, Reparable Item Processing Tag.]
Table 3. Data Elements Utilized in the Maintenance
Data Collection System

SOURCE: AFR 66-1, AFR 66-267, AND 00-20 SERIES TECHNICAL ORDERS.

JOB CONTROL NUMBER (JCN) - A UNIQUE SEVEN CHARACTER NUMBER USED TO CONTROL AND
IDENTIFY MAINTENANCE JOBS, AS WELL AS TO IMPROVE ANALYSIS CAPABILITY. EXAMPLE:
0410001; 041 IS THE JULIAN DATE AND 0001 IS THE FIRST JOB OF THE DAY.

PERFORMING WORKCENTER CODE (PWC) - A SPECIFIED FIVE CHARACTER CODE USED TO
IDENTIFY THE WORKCENTER ACCOMPLISHING THE MAINTENANCE ACTION.
EXAMPLE: U4500; THE FIRST POSITION IDENTIFIES DIVISIONS, WINGS, SEPARATE
SQUADRONS, OR COMMANDS LOCATED ON A BASE. THE SECOND POSITION SIGNIFIES THE
VARIOUS FUNCTIONS WITHIN THE MAINTENANCE COMPLEX. THE THIRD POSITION IN MOST
CASES IS THE SUBFUNCTION WITHIN A SQUADRON. THE FOURTH AND FIFTH POSITIONS
IDENTIFY A SPECIFIC BRANCH, SHOP, OR SITE. THE ASSIGNMENT OF THE SECOND
POSITION IS REQUIRED TO INSURE COMPUTER EDITS ARE CORRECT. THE ASSIGNED SECOND
POSITION WORKCENTER CODES ARE:

(1) 1 - CHIEF OF MAINTENANCE
(2) 2 - ORGANIZATIONAL MAINTENANCE
(3) 3 - FIELD MAINTENANCE
(4) 4 - AVIONICS AND AIRBORNE MISSILE MAINTENANCE
(5) 5 - MUNITIONS MAINTENANCE
(6) 6 - GROUND COMMUNICATIONS - ELECTRONICS MAINTENANCE
(7) 7 - NOT ASSIGNED
(8) 8 - GROUND LAUNCHED MISSILE MAINTENANCE
(9) 9 - NONREPORTING WORKCENTERS
(10) 0 - AWAY FROM HOME STATION MAINTENANCE
(11) P - GROUND PHOTOGRAPHIC EQUIPMENT MAINTENANCE
(12) M - CIVIL ENGINEERING/ICBM MAINTENANCE
(13) S - GROUND LAUNCHED MISSILE NONREPORTING WORKCENTERS
(14) G - FIRST ACFT GENERATION SQUADRON
(15) H - SECOND ACFT GENERATION SQUADRON
(16) E - EQUIPMENT MAINTENANCE SQUADRON
(17) R - COMPONENT REPAIR SQUADRON
(18) ALL REMAINING A - Z ARE AUTHORIZED DEPOT MAINTENANCE

IDENTIFICATION NUMBER (ID) - CONSISTS OF SIX CHARACTERS, AND IS USED TO IDENTIFY
EQUIPMENT ON WHICH WORK WAS PERFORMED OR FROM WHICH AN ITEM WAS REMOVED. THE
FIRST POSITION DESIGNATES WHO OWNS THE EQUIPMENT. THE SECOND POSITION IS THE
FIRST CHARACTER OF THE STANDARD REPORTING DESIGNATOR CODE. THE LAST FOUR
POSITIONS ARE NORMALLY THE LAST FOUR CHARACTERS OF THE EQUIPMENT SERIAL NUMBER.

STANDARD REPORTING DESIGNATOR (SRD) - CONSISTS OF THREE CHARACTERS AND IS
ASSIGNED TO IDENTIFY A SPECIFIC TYPE OR CATEGORY OF EQUIPMENT. THE FIRST
POSITION OF THE SRD CODE IDENTIFIES THE GENERAL TYPE OF EQUIPMENT AS LISTED
BELOW.

A - AIRCRAFT AND DRONES
B - GROUND RADIO EQUIPMENT
C -
F - GROUND METEOROLOGICAL EQUIPMENT
G - SUPPORT EQUIPMENT
H - PRECISION MEASUREMENT EQUIPMENT
J - GROUND SPECIAL ELECTRONICS
K - GROUND FIXED WIRE EQUIPMENT
L - MISCELLANEOUS GROUND COMMUNICATION EQUIPMENT
M - GROUND LAUNCHED MISSILES
N - AIR LAUNCHED MISSILES AND GUIDED WEAPONS
Q - ELECTRONICS SECURITY COMMAND MISSION EQUIPMENT
R - REAL PROPERTY INSTALLED EQUIPMENT, SHOP WORK, ECM PODS/VEHICLES, GEARBOXES
AND MODULES, SPECIAL PURPOSE PODS
S - AGE GAS TURBINES, AUXILIARY POWER UNITS
T - TRAINERS, MOBILE TRAINING SETS, AND RESIDENT TRAINING EQUIPMENT
U - COMMUNICATIONS SECURITY EQUIPMENT
X - ENGINES
Y - MUNITIONS
Z - MISCELLANEOUS LOCAL SUPPLIES
1 THROUGH 8 ARE NORAD COMBAT OPERATIONS CENTERS

TYPE MAINTENANCE CODES (TM) - A CHARACTER USED TO IDENTIFY THE TYPE OF WORK
ACCOMPLISHED. TYPE MAINTENANCE CODES ARE OBTAINED FROM THE APPLICABLE WORK UNIT
CODE MANUALS FOR THE TYPE OF EQUIPMENT WORK IS BEING PERFORMED ON. AIRCRAFT
TYPE MAINTENANCE CODES ARE LISTED BELOW.

A - SERVICING
B - UNSCHEDULED MAINTENANCE
C - BASIC POST FLIGHT OR THRUFLIGHT INSPECTION
D - PREFLIGHT INSPECTION
E - HOURLY POSTFLIGHT OR MINOR INSPECTION
H - HOME STATION CHECK
J - CALIBRATION OF OPERATIONAL EQUIPMENT
M - INTERIOR REFURBISHMENT
P - PERIODIC, PHASE OR MAJOR INSPECTION
Q - FORWARD SUPPORT SPARES
R - DEPOT MAINTENANCE
S - SPECIAL INSPECTIONS
T - TIME COMPLIANCE TECHNICAL ORDERS
Y - AIRCRAFT TRANSIENT MAINTENANCE

COMPONENT POSITION (CP) - A SINGLE NUMERICAL CHARACTER TO SIGNIFY THE INSTALLED
POSITION OF ENGINES AND ASSOCIATED COMPONENTS.

WORK UNIT CODE (WUC) - FIVE CHARACTERS USED TO IDENTIFY THE SYSTEM, SUBSYSTEM,
AND COMPONENT ON WHICH WORK IS REQUIRED OR PERFORMED. THE FOLLOWING SHOWS THE
BREAKDOWN OF A COMMON AIRCRAFT WUC:

EXAMPLE WUC: 72117 T-39A NAVIGATION RADAR DOPPLER DRIFT AMPLIFIER

72 = RADAR NAVIGATION SYSTEM
721 = AN/APN-131 DOPPLER SUBSYSTEM
72117 = DRIFT AMPLIFIER COMPONENT
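The positional structure lends itself to mechanical decoding, as in this illustrative Python sketch; the lookup entries reproduce only the 72117 example above, and complete tables would have to come from the applicable work unit code manual.

SYSTEM = {"72": "RADAR NAVIGATION SYSTEM"}
SUBSYSTEM = {"721": "AN/APN-131 DOPPLER SUBSYSTEM"}
COMPONENT = {"72117": "DRIFT AMPLIFIER COMPONENT"}

def decode_wuc(wuc):
    """Split a five-character WUC into its system (first two positions),
    subsystem (first three), and component (all five) designations."""
    return (SYSTEM.get(wuc[:2], "UNKNOWN SYSTEM"),
            SUBSYSTEM.get(wuc[:3], "UNKNOWN SUBSYSTEM"),
            COMPONENT.get(wuc, "UNKNOWN COMPONENT"))

print(decode_wuc("72117"))
# ('RADAR NAVIGATION SYSTEM', 'AN/APN-131 DOPPLER SUBSYSTEM', 'DRIFT AMPLIFIER COMPONENT')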

LISTED BELOW ARE THE BASIC STANDARD AIRCRAFT SYSTEMS AS INDICATED BY THE FIRST
TWO POSITIONS OF THE WUC

01 - GROUND HANDLING, SERVICING, AND RELATED TASKS


02 - ACFT CLEANING
03 - SCHEDULED INSPECTIONS
04 - SPECIAL INSPECTIONS
05 - STORAGE OF EQUIPMENT
06 - ARMING/DISARMING
07 - RECORDS PREPARATION
08 - NOT USED


09 - SHOP SUPPORT
10 - NOT USED
11 - AIRFRAME
12 - COCKPIT AND FUSELAGE COMPARTMENTS
13 - LANDING GEAR
14 - FLIGHT CONTROLS
17 - AERIAL RECOVERY
22 - TURBOPROP POWER PLANT
23 - TURBO-JET ENGINE
24 - AUXILIARY POWER PLANT
32 - HYDRAULIC PROPELLER
41 - AIR CONDITIONING, PRESSURIZATION, AND SURFACE ICE CONTROL
42 - ELECTRICAL POWER SUPPLY
44 - LIGHTING
45 - HYDRAULIC AND- PNEUMATIC POWER SUPPLY
46 - FUELS
47 - OXYGEN
49 - MISCELLANEOUS UTILITIES
51 - INSTRUMENTS
52 - AUTOPILOT
.55 - MALFUNCTION ANALYSIS AND RECORDING EQUIPMENT
56 - AUTOMATIC ALL WEATHER LANDING
61 - HF COMMUNICATIONS
62 - VHF COMMUNICATIONS
63 - UHF. COMMUNICATIONS
64 - INTERPHONE
65 - IDENTIFICATION FRIEND OR FOE
66 - EMERGENCY COMMUNICATIONS
68 - AIR FORCE SATELLITE COMMUNICATIONS
69 - MISCELLANEOUS COMMUNICATIONS EQUIPMENT
71 - RADIO NAVIGATION
72 - RADAR NAVIGATION
73 - BOMBING NAVIGATION
74 - FIRE CONTROL
75 - WEAPONS DELIVERY
76 - ELECTRONIC COUNTERMEASURE
77 - PHOTOGRAPHIC/RECONNAISSANCE
82 - COMPUTER AND DATA DISPLAY
89 - AIRBORNE BATTLEFIELD COMMAND CONTROL CENTER
91 - EMERGENCY EQUIPMENT
94 - METEOROLOGICAL EQUIPMENT
96 - PERSONNEL AND MISCELLANEOUS EQUIPMENT
97 - EXPLOSIVE DEVICES AND COMPONENTS
98 - ATMOSPHERIC RESEARCH EQUIPMENT

ACTION TAKEN (AT) - ONE CHARACTER USED TO IDENTIFY THE SPECIFIC MAINTENANCE
ACTION TAKEN AS LISTED BELOW.
A - BENCH CHECKED AND REPAIRED
B - BENCH CHECKED SERVICEABLE
C - BENCH CHECKED REPAIR DEFERRED
D - BENCH CHECKED TRANSFERRED
E - INITIAL INSTALLATION
F - REPAIR
G - REPAIRS AND/OR REPLACEMENT OF MINOR PARTS, HARDWARE AND SOFTGOODS
H - EQUIPMENT CHECKED NO REPAIR REQUIRED
J - CALIBRATED NO ADJUSTMENT REQUIRED
K - CALIBRATED ADJUSTMENT REQUIRED
L - ADJUST


M - DISASSEMBLE
N - ASSEMBLE
P - REMOVED
Q - INSTALLED
R - REMOVE AND REPLACE
S - REMOVE AND REINSTALL
T - REMOVED FOR CANNIBALIZATION
U - REPLACED AFTER CANNIBALI-ZATION
V - CLEAN
X - TEST-INSPECT-SERVICE
Y - TROUBLESHOOT
Z - CORROSION REPAIR
NOT REPAIRABLE THIS STATION CODES
1 - REPAIR NOT AUTHORIZED BY SHOP
2 - LACK OF EQUIPMENT, TOOLS, OR FACILITIES
3 - LACK OF TECHNICAL SKILLS
4 - LACK OF PARTS
5 - SHOP BACKLOG
6 - LACK OF TECHNICAL DATA
7 - LACK OF EQUIPMENT, TOOLS, FACILITIES, SKILLS, PARTS OR TECHNICAL DATA;
REPAIR IS AUTHORIZED BUT THE ABOVE IS NOT AUTHORIZED
8 - RETURNED TO DEPOT
9 - CONDEMNED

WHEN DISCOVERED (WD) - ONE CHARACTER USED TO IDENTIFY WHEN A DEFECT OR
MAINTENANCE REQUIREMENT WAS DISCOVERED. CODES ARE LISTED BELOW.

A - BEFORE FLIGHT ABORT
B - BEFORE FLIGHT, NO ABORT
C - IN-FLIGHT ABORT
D - IN-FLIGHT, NO ABORT
E - AFTER FLIGHT
F - BETWEEN FLIGHTS BY GROUND CREW
H - THRUFLIGHT INSPECTION
J - PREFLIGHT INSPECTION
K - MINOR INSPECTION
L - DURING TRAINING
M - MAJOR INSPECTION
N - REFURBISH
P - FUNCTIONAL CHECK FLIGHT
Q - SPECIAL INSPECTION
R - QUALITY CONTROL CHECK
S - DEPOT LEVEL MAINTENANCE
T - DURING SCHEDULED CALIBRATION
U - NON-DESTRUCTIVE TESTING
W - IN-SHOP REPAIR AND/OR DISASSEMBLY FOR MAINTENANCE
X - ENGINE TEST STAND OPERATION
Y - UPON RECEIPT OR WITHDRAWAL FROM SUPPLY STOCKS
2 - DURING OPERATION OF MALFUNCTION ANALYSIS AND RECORDING EQUIPMENT
3 - HOME STATION CHECK
4 - BASIC POSTFLIGHT INSPECTION

HOW MALFUNCTION CODE (HM) - THIS CODE CONSISTS OF THREE CHARACTERS AND IS USED
TO IDENTIFY THE NATURE OF THE EQUIPMENT DEFECT, OR THE STATUS OF THE ACTION
BEING ACCOMPLISHED. ONLY THOSE CODES THAT ARE APPLICABLE WILL BE LISTED IN EACH
WORK UNIT CODE MANUAL. DUE TO THE NATURE OF SUPPORT GENERAL TYPE WORK, THE
RECORDING OF ACTION TAKEN, WHEN DISCOVERED, AND HOW MALFUNCTION CODES IS NOT
REQUIRED WITH SUPPORT GENERAL WORK UNIT CODES. A COMPLETE LIST OF AUTHORIZED
CODES IS CONTAINED IN AFM 300-4 IN BOTH DEFINITION AND NUMERICAL CODE SEQUENCE.

CATEGORY OF LABOR (CLB) - THIS DATA ELEMENT IS USED TO DIFFERENTIATE THE TYPE OF
MAN-HOURS EXPENDED AS LISTED BELOW.

1 - MILITARY REGULAR DUTY HOURS


2 - MILITARY OVERTIME HOURS
3 - FEDERAL SERVICE EMPLOYEE REGULAR DUTY HOURS
4 - FEDERAL SERVICE EMPLOYEE OVERTIME HOURS
5 - LOCAL NATIONAL EMPLOYEE HOURS
6 - CONTRACTOR-LABOR HOURS

COMMAND/ACTIVITY IDENTIFICATION (CMD/AI) - TWO CHARACTERS USED TO IDENTIFY THE
OWNING COMMAND, OR MAY BE USED BY THE UNIT TO IDENTIFY SPECIAL PROJECTS, TENANT
SUPPORT, OR OTHER ACTIONS. OWNING COMMAND CODES ARE LISTED BELOW:

0A - ALASKAN AIR COMMAND
0B - U.S. AIR FORCE ACADEMY
0C - AEROSPACE DEFENSE COMMAND
0D - U.S. AIR FORCES IN EUROPE
0E - AIR FORCE ACCOUNTING AND FINANCE CENTER
0F - AIR FORCE LOGISTICS COMMAND
0H - AIR FORCE SYSTEMS COMMAND
0I - AIR RESERVE PERSONNEL CENTER
0J - AIR TRAINING COMMAND
0K - AIR UNIVERSITY
0L - USAF SOUTHERN COMMAND
0M - HQ AIR FORCE RESERVE
0N - HEADQUARTERS USAF
0O - AIR FORCE DATA AUTOMATION AGENCY
0P - HEADQUARTERS COMMAND, USAF
0Q - MILITARY AIRLIFT COMMAND
0R - PACIFIC AIR FORCES
0S - STRATEGIC AIR COMMAND
0T - TACTICAL AIR COMMAND
0U - ELECTRONIC SECURITY COMMAND
0V - AIR FORCE COMMUNICATIONS COMMAND
02 - AIR FORCE INSPECTION AND SAFETY CENTER
03 - AIR FORCE TEST AND EVALUATION CENTER
05 - AIR FORCE INTELLIGENCE SERVICE
06 - AIR FORCE AUDIT AGENCY
07 - AIR FORCE OFFICE OF SPECIAL INVESTIGATION
09 - AIR FORCE MANPOWER AND PERSONNEL CENTER
1W - AIR FORCE ENGINEERING AND SERVICES AGENCY
1X - AIR FORCE COMMISSARY SERVICE
40 - MILITARY ASSISTANCE COUNTRIES
41 - U.S. READINESS COMMAND
42 - ROYAL CANADIAN AIR FORCE
43 - ROYAL AIR FORCE, UNITED KINGDOM
44 - AIR FORCE TECHNICAL APPLICATIONS CENTER
45 - WEST GERMAN AIR FORCE
46 - OTHER FOREIGN GOVERNMENT
47 - COMMERCIAL AIRCRAFT
48 - SYSTEM SUPPORT MANAGER
49 - DEPARTMENT OF DEFENSE
4A - OTHER USAF ACTIVITIES
4B - FEDERAL AVIATION AGENCY
4C - OTHER U.S. GOVERNMENT


4D - BELGIAN AIR FORCE
4E - ROYAL DANISH AIR FORCE
4F - ROYAL NETHERLANDS AIR FORCE
4G - ROYAL NORWEGIAN AIR FORCE
4I - NATO AWACS PROGRAM
4J - EUROPEAN PARTICIPATING AIR FORCE
4W - MEDICAL MATERIEL FIELD OFFICE
4Z - AIR NATIONAL GUARD

MISSION DESIGN SERIES (MDS) - THIS 7-DIGIT ELEMENT IS THE COMPLETE DESIGNATION
FOR AIRCRAFT, MISSILES, AND C-E EQUIPMENT.
EXAMPLE: NKC135A

NKC = THE MISSION OF THE AIRCRAFT
135 = THE DESIGN OF THE AIRCRAFT
A = THE SERIES OF THE AIRCRAFT
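The same decomposition can be written for the MDS. The Python pattern below
is a sketch; it assumes the mission field is the leading letters, the design
the digits, and the series the trailing letters, as in the example. Real
designations may require a more careful rule.

    import re

    def parse_mds(mds: str) -> dict:
        """Split an MDS designation such as NKC135A into its mission,
        design, and series fields."""
        match = re.fullmatch(r"([A-Z]+)([0-9]+)([A-Z]*)", mds)
        if match is None:
            raise ValueError("unrecognized MDS: " + mds)
        mission, design, series = match.groups()
        return {"mission": mission, "design": design, "series": series}

parse_mds("NKC135A") returns {'mission': 'NKC', 'design': '135',
'series': 'A'}.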

SERIAL NUMBER (SN) - THE 8-DIGIT SERIAL NUMBER ASSIGNED TO THE ITEM. FOR
ENGINES AND RELATED PARTS THIS NUMBER IS CONTROLLED BY AFM 400-1.

ESTIMATED TIME IN COMMISSION (ETIC) - YEAR, DAY AND HOUR OF ESTIMATED TIME AN
ITEM WILL BE RETURNED TO OPERATIONAL STATUS.

UNITS PRODUCED (UP) - PERMITS THE IDENTIFICATION OF COMPLETED MAINTENANCE
ACTIONS; ACTIONS THAT WERE IN PROGRESS BUT NOT COMPLETED; OR ACTIONS IN WHICH A
WORKCENTER PARTICIPATED BUT WAS NOT THE WORKCENTER ASSIGNED PRIMARY
RESPONSIBILITY FOR THE COMPLETION OF THE ACTION.

DATE - YEAR AND DAY OF THE ACTION. EXAMPLE: 4099

STATION LOCATION CODE (SLC) - THIS IS A 4-DIGIT CODE LISTED WITHIN AFM 300-4 FOR
THE BASE, OPERATING LOCATION, OR SITE AT WHICH THE WORK WAS PERFORMED.

TAG NUMBER (TAG) - THE LAST THREE DIGITS OF THE AFTO FORM 350 TAG NUMBER THAT IS
PREPARED AND ATTACHED TO THE REMOVED ITEM IDENTIFIED WITH AN ASTERISK IN THE
WORK UNIT CODE MANUAL.

FEDERAL SUPPLY CLASS (FSC) - THE FIRST FOUR DIGITS OF THE NATIONAL STOCK NUMBER
OF THE ITEM- BEING REMOVED.

PART/LOT NUMBER (P/N) - THE PART NUMBER OF THE ITEM BEING MODIFIED OR REMOVED,
INCLUDING SLASHES AND DASHES BETWEEN NUMERICS ONLY. FOR CONVENTIONAL MUNITIONS
ITEMS THIS WILL BE THE LOT NUMBER OF THE ITEM. FOR ITEMS THAT DO NOT HAVE
PART/LOT NUMBERS, ENTER THE NATIONAL ITEM IDENTIFICATION NUMBER (NIIN) WHICH IS
THE LAST NINE CHARACTERS OF THE NATIONAL STOCK NUMBER (NSN).

REFERENCE SYMBOL - THE GRID LOCATION OF AN ITEM ON AN EQUIPMENT WIRING DIAGRAM
OR ITS COMMON NAME.


OPERATING TIME - THE HOURS A PIECE OF EQUIPMENT HAS OPERATED OR WILL OPERATE.

FLYING HOURS (F/H) - THE TIME AN AIRCRAFT HAS FLOWN.

Several points should be made about some of the data items.

1. Column H, AFTO 349, provides for AFSC/Employee Number. Typically,
employee number and not AFSC is entered in this column. Thus, there is
typically no record of AFSC in the MDC. AFSC could be generated at base level
by searching Maintenance Management Information and Control System (MMICS)
files for employee number and AFSC and then extracting data based on employee
number and matching to the employee's AFSC. This action is practical at base
level but not at higher levels of aggregation, because of duplicate employee
numbers among bases. Fortunately, there are other ways to identify AFSC in the
aggregated data, through the use of work unit codes.
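Such a base-level match amounts to a simple join on employee number. The
Python sketch below assumes hypothetical record layouts; neither the field
names nor the MMICS file format is specified here.

    def attach_afsc(mdc_records, employee_to_afsc):
        """Attach an AFSC to each MDC record by employee-number lookup.

        mdc_records: iterable of dicts with an 'employee_number' key.
        employee_to_afsc: mapping of employee number to AFSC, as might
        be extracted from base-level MMICS files.  The match is valid
        only within a single base, because employee numbers are
        duplicated among bases.
        """
        for record in mdc_records:
            record["afsc"] = employee_to_afsc.get(record["employee_number"])
        return mdc_records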

A further problem of identification of AFSC also exists. When more than
one person performs work to be recorded on the AFTO 349, the number of only one
worker is entered in column H. Usually this employee is the senior member of
the crew; or, in the case of Combat Oriented Maintenance Organization (COMO)
organizations, the number of the employee with the prime AFSC for the work
being accomplished is entered. While manhours are accounted for, as well as
number of employees, distribution of work time by AFSC cannot be obtained.

No indication of the skill level of the personnel performing the work is


provided. Thus, work performed cannot be differentiated as that which is
appropriate for 3-skill, 5-skill, or 7-skill level employees.

2. Column J elicits crew size for the action reported. This information
coupled with the work unit code and action reflects the number of personnel
required to perform the work. This number, however, is subject to
misinterpretation, because trainees are often included in the number of
personnel performing a job.

3. Columns C and D and item 5 are especially significant for a TIES.
Column C is the work unit code (WUC), column D is the action taken, and item 5
is the mission design series (MDS). Using the example cited under WUC above
and relating this example to MDS and action taken, one can see a task statement
not unlike that observed in other task identification systems, although often
in greater detail and concreteness. For example, reading downward:

English Translation                        AFTO 349 Coding

Remove and replace                         Column D, Code R (action taken)

(the) drift amplifier component            Column C, 4th & 5th digits (WUC)

(on the) AN/APN-131 Doppler Subsystem      Column C, 3rd digit (WUC)

(on the) Radar Navigation System           Column C, 1st & 2nd digits (WUC)

(on the) T-39A                             Item 5 (MDS)
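The translation is mechanical once the code tables are at hand. The Python
sketch below hard-codes only the table fragments needed for this example; a
working version would load the full action taken list and the B-4 master
file WUC dictionary.

    # Illustrative fragments of the code tables; not the full lists.
    ACTION_TAKEN = {"R": "remove and replace"}
    WUC_NAMES = {
        "72": "Radar Navigation System",
        "721": "AN/APN-131 Doppler Subsystem",
        "72117": "drift amplifier component",
    }

    def task_statement(action: str, wuc: str, mds: str) -> str:
        """Render one AFTO 349 entry as an English task statement,
        reading the WUC from component up to system."""
        return ("%s the %s on the %s on the %s on the %s"
                % (ACTION_TAKEN[action], WUC_NAMES[wuc],
                   WUC_NAMES[wuc[:3]], WUC_NAMES[wuc[:2]], mds))

task_statement("R", "72117", "T-39A") reproduces the task statement shown
in the table above.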

The reader should recall that the information in the third through fifth digits
of the WUC must be obtained from the 00-20 series technical order for the MDS.
Codes, however, are accessible in the MDC B-4 master file.

The WUC along with the MDS are the sources for matching actions taken with
AFS. Normally, a work center is manned by a single AFS. The exception is in
COMO flightline maintenance, where the WUC would match up only to the lead
worker.
The same process as described for aircraft also applies to communications-
electronics equipment and missile maintenance.

Processing and Analyzing MDC Data

As indicated above, MDC data originate with the employee accomplishing the
work. The raw data are either keypunched or entered via a terminal at base
level. Data are aggregated at base level and listings appropriate to the base
maintenance activity are provided daily.

Base level data are further aggregated at major command level. Every 30
days, major commands forward MDC data tapes to AFLC/MME-2 at Wright-Patterson
AFB. There the tape files are added to the Maintenance and Operational Data
Access System (MODAS), which is an interactive retrieval system for data for
the past two years. Provision is made for operational units to have on-line
access to MODAS.

The flow of the data from base level to AFLC is illustrated in Figure 11,
which is reproduced from the Quick Reference Guide to Maintenance Data
Collection (MDC). Notice that the data are aggregated in the D056A file, from
which MODAS output is generated.

The input for MODAS, a G063 file, is the D056A file. MODAS design is
shown in Figure 12. Notice that MMICS data also are input to the G063 file.
MDC data are also maintained by the contractor who developed the weapon system.

MODAS is a menu-driven access system that provides reliability and
maintainability, product performance, and product improvement data. Data are
available by weapon system or equipment, base, command, Air Logistics Center,
and Air Force-wide. AFSC are not typically available. MODAS provides on-line
trend plots, detailed reports, and user-designed reports.
Examples of MODAS menus are reproduced in Figures 13-24. Some brief
remarks about each of the figures:

Figure 13 - the array of data bases

Figure 14 - an example of option 1 from Figure 13 showing the array of
airborne data available

Figure 15 - menu for selecting aircraft system for display of data for
option 1, Figure 14

[Figures 11 through 15 are reproduced here in the original but are not
legible in this copy: Figure 11, the flow of MDC data from base level to
AFLC; Figure 12, the MODAS design; Figures 13 through 15, the MODAS menus
described above.]
Figure 16 - worst case menu for option 7, Figure 15

Figure 17 - reliability report for option 1, Figure 16; these data have
implications for deciding where training emphasis may be required

Figure 18 - airborne data menu requesting option 5, detail maintenance
data search

Figure 19 - menu for displaying detail maintenance data search, option 5
in Figure 18

Figure 20 - search options for detail maintenance data search. This
display provides for some key searches: options 1, aircraft; 6, WUC; 7, action
taken; 10, base; 11, command; 23, crew.

Figure 21 - a detail maintenance data report showing aircraft, type
maintenance, WUC, and action taken. The English translation of the first line,
based on MDC codes: unscheduled maintenance (B) to troubleshoot (Y)
instruments (51000) on a given F-16A

Figure 22 - important menu for obtaining work unit codes; displays the
master B-4 file

Figure 23 - example of WUC 65XXX for F-16A

Figure 24 - menu for identifying action taken and other codes.

These figures represent examples of the menus, searches, and reports


provided by MODAS. Users may generate a variety of reports of these and other
kinds.

Reliability of MDC Data

MDC are frequently maligned because of inaccuracies, but what is
frequently overlooked is the fact that at least some parts of the data base are
sufficiently reliable for many purposes. Much of the criticism levied pertains
to the recording of manhours and the accuracy of action taken codes. The time
criticism stems from two contentions, and, indeed, these criticisms were
verified during interviews with maintenance technicians. First, units vary in
their interpretation of what time is to be reported for a maintenance action.
Some supervisors insist that an employee account for the time for his full
shift. Others require actual maintenance times. The other criticism is that
the MDC codes do not provide for accounting for such time consuming activities
as travel, preparation, and set up. Interviews with members of a manpower shop
employing the LCOM model revealed that they make operational audits of MDC data
to derive the manhour parameters used in the model. While this audit will be
described later, it is important to note that these specialists reported that
they did not find a large disparity between the times determined by the
operational audit and MDC data.

The assertion is sometimes heard that workers may not have ready access to
the codes when they are completing the AFTO 349, and this situation reduces the
accuracy of the data. Sometimes, interviews revealed, the worker logs the work
against some code recalled from memory, or logs the work against the vaguely
defined "support-general" code. A large number of support-general entries was
reported during interviews, and at least one command is testing the deletion of
the code.

[Figures 16 through 24, the MODAS menus, searches, and reports described
above, are reproduced here in the original but are not legible in this copy.]

Despite the possibility of incorrect WUC entries, the frequency of such
errors should not affect the adequacy of a list of actions taken on a given
subsystem or component. What may be affected is the estimate of time workers
spend taking certain actions.

A legitimate criticism can be levied against the manual system in which


raw data are keypunched by non-maintenance personnel. A review of MDC listings
at base level often reveals obvious keypunch errors. Under CAMS, however,
maintenance personnel believe these errors will be reduced, since maintenance
personnel will be directly entering the data. Thus, readability and
interpretation errors made by entry personnel will be eliminated. It should be
noted also that all F-16 bases worldwide are using the Centralized Data System
(CDS) to automate form generation and reduce paperwork. In addition, the CDS
links all F-16 bases to a host mainframe computer in Boston which aggregates
all F-16 maintenance data weekly.

There is provision for base level review of listings on a daily basis to
detect and correct errors. Visits at operational sites indicated that
sometimes the listings are reviewed and sometimes they are not.

In summary, the question of how reliable MDC data are is best answered by
stating that reliability is suspect in the minds of many maintenance personnel.
No empirical evidence of unreliability was discovered in this study. Perhaps
the time has come to resolve the question once and for all. Such a study, for
example, could determine variances in manhours among units and investigate the
sources of any variance uncovered. But there is considerable opportunity for
AFTO 349 entries to be inaccurate, for the reasons cited above.

Uses of MDC Data

The primary uses of MDC data appear to be in the reliability and


maintainability and product improvement areas. Clearly, the data are important
for identifying components whose reliability is low, or for which maintenance
is expensive in terms of maintenance manhours and resource replacement.
Further, especially at base level, the data can be the basis for maintaining
bench stock.

In the MPT arena, the data are typically used in base level maintenance
shops to assess overall productivity, to track maintenance actions, and to
determine manpower impacts of shifting workloads. They are especially
important for maintenance managers to forecast manning requirements, according
to interviews with these personnel. On-the-job training, through MMICS, is
tracked by the experience that unit personnel acquire, and records of their
training and certification are documented.

Above base level there is little evidence of the use of the data, except
as a basis for input to the LCOM model, to be discussed below. In the 1977-78
time frame there was an abortive effort to use MDC data for designing resident
training, especially in determining requirements for electronic principles
training. The volume of material as well as the lack of a means of aggregating
and analyzing the data made its use for training unsuccessful.

That MDC are not being used to any noticeable extent for training or
classification and related personnel uses does not mean that the data cannot be
used for such purposes. Indeed, a review of the MDC such as reported here
reveals the data base to be a rich source of information for training and
personnel decisions.

Critique of MDC

Several observations about MDC are apparent.

1. A massive amount of detailed data about the maintenance of weapons


systems, communication-electronic equipment, and missiles is readily
accessible, although for MPT uses some means of analyzing and displaying these
data do not presently exist, except as employed in LCOM for manpower
requirements.

2. MDC are weapons system specific. AFSC analysis is not possible with
the present computer analysis capability--e.g., MODAS.

3. It is not possible to infer with any confidence who is doing the work,
since neither skill level nor grade is reported on AFTO Form 349.

4. Accessibility of the data base is excellent through MODAS.

5. Reliability of the data is suspect in the opinion of most maintenance


personnel. Task time is quite difficult to accept at face value, since there
is no standard way of reporting job or task time. Does it include set up,
clean up, and so on, or does it indicate just active repair time?

6. While MDC data are not presently being used for personnel or training
purposes, the utility of the occupational analysis data (described in another
section) that are presently used for decisionmaking in these areas could be
considerably enhanced if MDC data were linked with the occupational analysis
data. For example, data about reliability of specific subsystems or components
could augment occupational analysis training emphasis data for making decisions
about what to train in initial resident training courses. Also, in the
personnel field the occupational analysis data are based on more generally
defined tasks than- MDC. If MDC tasks were linked with these more general
tasks, a much clearer picture of the scope of work of an AFS would emerge,
giving more substance on which to base decisions to merge or shred career
ladders. In addition, programs that are now in the research stage and are
discussed under the OSM section (i.e., the Training Decisions System, Advanced
On-the-Job Training System) would be facilitated by the availability of a
system for organizing and analyzing an exhaustive and detailed task
identification and data system. Certainly, the efforts in SUMMA would be
enhanced by a TIES.

V. LOGISTICS COMPOSITE MODEL DATA

As indicated in the previous section, MDC data are a primary input to the
LCOM modelling process, either in the form of -maintenance tasks and manhours
for operational systems or in the form of comparability analysis for a new
weapons system. One of the inputs -to LCOM is a listing of tasks with
associated -manhours. These tasks are weapon system specific, but the tasks are
identifiable- with their appropriate AFS. While task identification data are an
input of LCOM, it is not the modelling process that produces them. The tasks
and associated data are produced from MDC and operational audits -and task
networking created for input into the LCOM model.

Origin of LCOM

Because the human resources required to support and maintain a weapons
system are a major element in the total cost of the system, a need exists to
identify and develop estimates of these resources early in the development of a
new system. A pilot project was initiated in 1971 to develop, test, and employ
a simulation model that would effectively predict manpower requirements for a
new weapon system. The AX (now A-10) Close Air Support Weapon System was
selected as the simulation test bed (Maher & York, 1974). The intent was to
transition the model to the operational command for use in determining manpower
authorizations. The LCOM, developed earlier by RAND, was the model chosen for
the pilot project. This transition occurred, and at present several commands
are employing LCOM to simulate manpower requirements.

In terms of task identification, an input to the LCOM process today yields
tasks performed on operational systems as well as tasks to be required of new
systems. The task identification and associated data are generated by the
Data Preparation Subsystem (DPSS) for one input into LCOM. For the new
systems, the tasks are the result of comparability analysis, although Cronk
(1985) reported an instance in which Logistics Support Analysis data (see
Section VI) were employed as input in LCOM modelling of a new system.

It should be reiterated that the LCOM model is a simulation process. The
task identification data are simply input data, which serve as one of the
parameters for modelling manpower and other factors.

The office of primary responsibility for the LCOM model for manpower is
HQ AFMEA/MEXL, Randolph AFB, Texas. Their brochure (AFMEA, 1986) describes
LCOM as a multi-functional computer modelling system designed to determine
through simulation the resource requirements of a system. The major user of
LCOM for modelling manpower is the Tactical Air Command. AFMEA and interviews
with TAC personnel are principal sources of the information on LCOM in this
report.

It should also be noted that LCOM simulation is not restricted to weapons


systems or equipment or even aircraft maintenance, provided such parameters as
the tasks performed and the time required to perform them can be described.
There is nothing in the LCOM logic or software that limits its use to aircraft
maintenance or manpower. The software is extremely flexible both in the amount

of detail that can be input and in the range of resources and "systems" that
can be studied. LCOM is merely a resource counter, and it runs on a queuing
concept. At present, the Army Research Institute is evaluating LCOM, along
with TSAR, for use in analyzing manpower and unit performance. The AFMEA
Primer lists such uses as vehicle maintenance operations, space shuttle
operations, and aerial port operations. Interviews at Tactical Air Command
(TAC) headquarters in June 1986 revealed that manpower requirements for supply
and hospital operations were modelled in LCOM.

Origin of LCOM Task Data

For operational weapons systems, LCOM task data originate from historical
data from the ". . . most current computerized maintenance records . . ."
(AFMEA, 1986), i.e., MDC, and from operational audits of these data. The task
data for emerging weapons systems, except for the instance cited by Cronk
(1985), are derived from comparability analysis.

The Air Force Systems Program Office (SPO) determines the comparability of
the hardware developed for the new system with other systems currently in the
inventory. Engineers compare the designs of similar aircraft, drawing on the
experience of associates who have worked on various programs, contractor data,
and Air Force Technical Orders, as necessary. The results are then written up
by subsystem work unit code, and include: identification of comparable
aircraft and subsystem work unit code(s); any additional Line Replaceable Units
(LRU) in the new subsystem or LRUs by work unit code in the comparable system
that are not applicable; any factors that are applied to the comparable
subsystem failure rates or task times in estimating for the new subsystem; and
narrative analysis specifying the criteria used and supporting rationale for
choosing the comparable subsystem and factors. Any scheduled maintenance
considerations should also be mentioned. In some cases an item is so new or so
changed that there is nothing reasonably comparable. In that case, the best
source of data (e.g., contractor) should be identified, and appropriate factors
and degree of confidence discussed. Study results should be reviewed in
conjunction with experienced maintenance personnel to be sure no maintenance
considerations are missed.

The next step is to obtain MDC data tapes on the aircraft with comparable
subsystems. For operational systems, acquisition of the MDC data tapes is also
one of the first steps. Data from these files are processed through the DPSS,
which alters the form of the tasks and generates task sequence networks.
Processing of MDC is shown in Figure 25. This figure shows the input of MDC
data and use of the B-4 master file WUC dictionary.

The alteration of form of the task consists of a compression of the MDC
action codes into a smaller number of actions. The action taken conversion
table is shown in Table 4 and the LCOM code definitions in Table 5. It can be
seen that MDC Action Taken Codes (ATC) A, F, G, J, K, L, and X convert, for
off-equipment work, to LCOM Action Taken Code (LATC) W. For the reader's
convenience, Table 6 gives the MDC ATC definitions so a comparison with the
LATC can be made; a small conversion sketch follows Table 6. The process
reduces the number of tasks identified.

[Figure 25, Data Preparation Subsystem Process Interfaces/Flow (Source:
DPSS Working Paper, 1986), is reproduced here in the original but is not
legible in this copy.]
Table 4. MDC Action Taken Conversion to LCOM
Action Taken Codes (Source: DPSS Manual)

[The body of Table 4 is not legible in this copy. It maps each MDC action
taken code to an on-equipment and an off-equipment LCOM action taken code.]
Table 5. Definitions Assumed for LCOM Action Codes
(Source: DPSS Manual)

LATC  Definition

----- on-equipment work -----
V     Verify (normally assumed to be a post-installation ops check)
T     Troubleshoot
R     Unscheduled remove and replace (for failed parts)
RT    Scheduled remove and replace
M     Minor maintenance or repair in place (for failed parts)
H     Cannot-duplicate-malfunction troubleshoot
X     Remove and replace (of a good part to facilitate other maintenance)
----- off-equipment work -----
N     Bench checked, not reparable this station (NRTS)
K     Bench checked, no repair required
W     Bench checked and repaired
C     Bench checked, repair deferred
D     Disassemble/reassemble

Table 6. MDC Action Taken Code Definitions
(from AFM 300-4, Vol. XI)

MDC ATC  Definition

A   Bench checked and repaired
B   Bench checked - serviceable (no repair required)
C   Bench checked - repair deferred
D   Bench checked - transferred to another base or unit
E   Initial installation
F   Repair
G   Repair and/or replacement of minor parts
H   Equipment checked - no repair required
J   Calibrated - no adjustment required
K   Calibrated - adjustment required
L   Adjust
M   Disassemble
N   Assemble
P   Remove
Q   Install
R   Remove and replace
S   Remove and reinstall
T   Removed for cannibalization
U   Replaced after cannibalization
V   Clean
X   Test-inspect-service
Y   Troubleshoot
Z   Corrosion repair
1   Bench checked - NRTS (not reparable this station) - repair not authorized
2   Bench checked - NRTS - lack of equipment, tools, or facilities
3   Bench checked - NRTS - lack of technical skills
4   Bench checked - NRTS - lack of parts
5   Bench checked - NRTS - shop backlog
6   Bench checked - NRTS - lack of technical data
7   Bench checked - NRTS - excess of base requirements
8   Bench checked - returned to depot
9   Bench checked - condemned
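As a small sketch of the conversion, the only row fully confirmed in the text
above is the off-equipment compression of MDC codes A, F, G, J, K, L, and X
into LATC W; the remaining rows of Table 4 would be entered the same way.

    # Off-equipment conversion row stated in the text: these seven MDC
    # action taken codes all compress to LCOM code W ("bench checked
    # and repaired").  Other rows are omitted here because Table 4 is
    # not legible in this copy.
    OFF_EQUIPMENT_ATC_TO_LATC = {code: "W" for code in "AFGJKLX"}

    def convert_off_equipment(mdc_atc):
        """Return the LATC for an off-equipment MDC action, or None
        where the conversion row is not reproduced here."""
        return OFF_EQUIPMENT_ATC_TO_LATC.get(mdc_atc)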

As a result of the conversion process, every maintenance task performed on
the aircraft is identified. Accompanying data for each of them are crew size,
AFSC, and task time. These items are a part of an output vital to a TIES.

The LCOM Model

The LCOM model is displayed in Figure 26. It consists of three primary
modules:

1. The input module is used in the initialization process, in which the
events and resource data are transformed into a format acceptable to the
simulation module.

2. The main module accomplishes the simulation.

3. The postprocessor module provides summary data of the simulation
output.

To build a model (AFMEA, 1986), the analyst, through the input module
software, characterizes the operation being simulated. This activity includes
the data and rules describing each aspect of the operation under study. Three
kinds of data (Figure 27) are entered in the input module:

1. One input in building the model is the development of the operations
scenario. The analyst describes the logic of the operation and any rules or
constraints that will be used for the actual operation. Operations data
describing the scenario would include, for a flight operation, such factors as
the sortie rate, sortie type, takeoff time, sortie duration, number of
aircraft, weapon configuration, weather factors, and rules for scheduled
maintenance. A typical operations scenario for one aircraft sortie is shown in
Figure 28. It describes the sequence of events to fly a sortie.

2. Another input used to build the model is specific data on maintenance.
The analyst uses historical data from the most current computerized maintenance
records and also gathers field-measured work standards for each maintenance
task. These standards are derived from an operational audit. Using MDC data
as a strawman for work time, analysts interview personnel in operational units.
They obtain from these specialists estimates of the time required to perform
tasks. Also, these specialists provide input for the task networking process.
One major LCOM user employs a sample of five specialists at one location to
obtain the task time estimates.

Maintenance data identify every maintenance task that is performed on
the aircraft. The task sequence is described by networks including task
durations, personnel and resource requirements, and test equipment
requirements. Figure 29 shows a maintenance network for the repair of the VHF-
FM radio. Note the sequence of subtasks, durations, probabilistic branching,
and resource requirements; a small sketch of such a network structure follows
this list.

3. Supply data are the final input used to build the model. Supply data
identify resource type, cost, authorization, valid substitutes, failure rates,
stock levels, and other factors.
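The network form of the maintenance input can be pictured as a small data
structure. The Python sketch below is illustrative only; the node names,
times, probabilities, and AFSC are invented, not taken from Figure 29.

    from dataclasses import dataclass, field

    @dataclass
    class TaskNode:
        """One task in an LCOM-style maintenance network (sketch)."""
        name: str             # e.g. an LATC-WUC combination
        duration_min: float   # task duration in minutes
        crew_size: int
        afsc: str
        # probabilistic branching: (probability, successor) pairs
        # that should sum to 1.0
        successors: list = field(default_factory=list)

    # Hypothetical two-branch network: troubleshooting leads to a
    # remove-and-replace 70 percent of the time and to a repair in
    # place 30 percent of the time.
    remove = TaskNode("R62AAA", 40.0, 2, "328X1")
    repair = TaskNode("M62AAA", 25.0, 1, "328X1")
    troubleshoot = TaskNode("T62AAA", 15.0, 1, "328X1",
                            successors=[(0.7, remove), (0.3, repair)])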

[Figures 26 through 29 are reproduced here in the original but are not
legible in this copy: Figure 26, the LCOM model; Figure 27, the three kinds
of input data; Figure 28, a typical operations scenario for one sortie;
Figure 29, a maintenance network for repair of the VHF-FM radio.]
After the analyst has prepared the data using the Input Module software,
the analyst builds a model which then is simulated in the Main Module of the
program software (see Figure 27). The Main Module takes the model scenario and
actually simulates the operation that was described. Individual aircraft are
preflighted, loaded with munitions, taxied, flown, recovered, and maintained.
Maintenance is divided into scheduled, unscheduled, and phase (periodic
inspection). The simulation tracks the number of personnel and physical
resources needed to run the operation, as each aircraft is flown and turned
according to the operations scenario. With large simulation models with many
maintenance tasks, the simulation can take hours of computer CPU time.
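Because LCOM is at bottom a resource counter operating on a queuing concept,
the heart of the Main Module can be suggested in a few lines. The following
Python fragment is a toy sketch of the idea, not the LCOM software:

    import heapq

    def crew_hours_expended(tasks, crews_available):
        """Toy queuing loop.  Each task is (arrival_hour, crews_needed,
        duration_hours); tasks start in arrival order and wait when
        crews are busy.  Returns total crew-hours expended, the kind
        of statistic the postprocessor would summarize."""
        clock, total, free = 0.0, 0.0, crews_available
        busy = []                    # heap of (finish_time, crews)
        for arrival, crews, duration in sorted(tasks):
            clock = max(clock, arrival)
            while busy and busy[0][0] <= clock:   # release finished crews
                free += heapq.heappop(busy)[1]
            while free < crews:                   # wait for completions
                finish, n = heapq.heappop(busy)
                clock, free = max(clock, finish), free + n
            free -= crews
            heapq.heappush(busy, (clock + duration, crews))
            total += crews * duration
        return total

Constraining crews_available, or varying the task list, is the miniature
analogue of the constraint-and-variation studies described below.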

Depending on the requirements of the study, the analyst uses the
Postprocessor Module to provide the output to analyze the system operations and
resource interaction. Statistics describing the simulated operations are put
in a format that can be used by the manager to make decisions. Specifically,
the manpower manager uses manpower statistics, by work centers and maintenance
skills requirements, as an important determinant of maintenance manning
requirements. Output statistics for applications other than manning
requirements can be generated. For example, supply requirements or operations
factors can be output for analysis.

Analysis of LCOM Model Data

The output of a simulation includes a wide variety of data. First,
resources must be constrained in various ways, or, for example, sortie
requirements can be varied. For each such constraint or variation, output is
available. Included are such items as manpower requirements by AFS,
utilization rates, sortie capability, and supply requirements. One of the very
important applications of LCOM is to answer "what if" questions. For example:

"If logistics or manpower resources are limited, how is sortie generation


affected?"

"If a designated sortie rate is required, what are the manpower


requirements for specific AFSC?"

"If a new weapons system is being designed with stated mission, logistic,
and maintenance requirements, what are the life cycle costs?"

For a TIES, a key document is produced by the Data Preparation Subsystem.
In the modeler's terms, it is called an Extended Form 11. A truncated example
is shown in Table 7. The first column is a line count number. The Prior Node
column reflects the sequencing of the task in the network. In the third column
are the crucial entries: for line number 32500, the code Q82CLF is made up of
the LATC (the letter Q representing "install") and the WUC (82CLF, representing
a subsystem--third digit--and component--fourth and fifth digits--of the
computer and display system--82). In effect, task identification is
established. In SUMMA, this combination of action taken and WUC was employed
for identifying tasks. Obviously, the LATC-WUC combination produces a "task"
looking very much like an MDC "task" and an OSM task.

Table 7. LCOM Extended Form 11 Listing

Line     Prior    Task      Next     Selection   Sub-     Time to Repair    Crew
Number   Node               Node     Parameter   system   Mean   Variance   Size   AFSC

32500    IH200G   Q82CLF             28          82C**    5.6    02         2      305X4
32510    IH200G   G82CLF    IH2COH               82C**
32520    IH2COH   N82CLF                         82C**    19     05         1      305X4
32680    IH2COM   Q82CLU             37          82C**    56     02         2      305X4
32690    IH2COM   G82CLU    IH2COP               82C**

Note. Column titles were supplied by TAC LCOM specialists; the data are
extracted from a TAC-generated Extended Form 11. Times to repair are in
minutes.
The Subsystem column reflects the aircraft subsystem (82--computer and
display). Under Time to Repair, the mean time for the install action on the
component is 5.6 minutes, with a variance of 2 minutes. The last two columns
reflect that two personnel with AFSC 305X4 are required. Contrary to the usual
convention, the X, the fourth character of the AFSC, does not stand for skill
level. Rather, the X reflects that the task is performed on the flight line.
Typically in the Extended Form 11 document, the fourth character is used to
designate where the task is performed. For a TIES, the key data elements are
the action taken, work unit code, and the AFS. A significant problem is the
lack of a central data base.
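Splitting such an entry back into its parts is straightforward. The Python
sketch below assumes single-letter LATCs except for RT, the one two-letter
code in Table 5:

    def split_task_code(code: str):
        """Split an Extended Form 11 task entry such as 'Q82CLF' into
        the LCOM action taken code and the work unit code."""
        if code.startswith("RT"):
            return "RT", code[2:]
        return code[0], code[1:]

split_task_code("Q82CLF") returns ('Q', '82CLF'): install (Q) the component
identified by WUC 82CLF within the computer and display system (82).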

At present, the most up-to-date LCOM analyses are available from the
organization accomplishing the LCOM process. Mr. Charles Begin, ASD/ENSCC,
Wright-Patterson AFB, indicates, however, that he maintains a collection of
LCOM data bases. The data are provided voluntarily, but Mr. Begin surveys LCOM
users semiannually to obtain new data files.

Uses of LCOM

A major use of the LCOM model is to determine maintenance manpower
requirements. At the LCOM model steering committee meeting at Randolph AFB in
May 1986, MAC, TAC, ATC, USAFE, and ASD reported employing LCOM modelling of
manpower requirements as follows.

MAC reported LCOM projects in progress on the C130E/H, AC130/MC130, C5A,
C-17, and for fleet services. The MAC representative stated that 50-55 percent
of manpower forecasting is accomplished through LCOM. MAC also has employed
LCOM to assess RIVET WORKFORCE impacts on manning. Future studies will be
directed at the H53/HC130 and C141 aircraft.

TAC reported that all major TAC weapons systems have had LCOM modelling:
F15, F16, A10, F4E, RF4C, F111A, EF111A, T38, and A7. Studies of surge,
sustained, peacetime, reserve, and guard operations have been completed.
Studies are reaccomplished every 4 to 5 years.

ATC indicated simulation of sorties for flying training. Regression
equations are derived to project manpower requirements. About 41 percent of
ATC maintenance manning is determined by LCOM modelling. The ATC
representative indicated that LCOM provides a better basis for determining
manning requirements than standard methods. The ATC representative did,
however, indicate problems of accuracy of the MDC used in the input module.

USAFE reported using LCOM for "traditional studies." At present, USAFE
uses TAC-produced F-15 staffing results.

Before addressing ASD use of LCOM, it should be pointed out that
representatives at the conference indicated that LCOM is not just a manpower
model. One of LCOM's principal virtues is that the interactive impacts of
manpower, supply, reliability and maintainability, basing mode, etc., can be
studied at once. The capability of relating these to unit performance, e.g.,
sortie generation, is a big advantage. Usage of the model was reported for
alternate fighter engine analysis, system reliability, determining factors
affecting manpower, reliability and maintainability needed in a component or
subsystem, reallocation of tasks among work centers, utilization of personnel
(which is a manpower, personnel, and user issue), and in an availability-
readiness model.

Mr. Cronk, LCOM Group Leader for ASD/ENSCC, Wright-Patterson AFB,
indicated ASD has accomplished some LCOM studies, for example, for the T-46 and
for LANTIRN. In addition, ASD is investigating development of an LSA-LCOM
interface program. More about this effort will be addressed in Section VI. It
is sufficient to say at this point that LCOM modelling from an LSA data base
has presented a number of problems.

Another significant use of LCOM is in the SUMMA project. Here the


initiative is to optimize manpower and personnel requirements with respect to
maintenance in dispersed small unit operations. LCOM has been used to model
impacts of alternative maintenance occupational structures on manpower
requirements- for F-16 maintenance.

Critique

1. LCOM converts information about the tasks to be performed by
maintainers into required manhours, hence numbers of people, to support a given
weapon system at a given level of performance. Its principal advantage as an
analysis tool is the ability to model the impact on unit performance (e.g.,
sortie rate) of changes in operational requirements, logistics support, and
manpower. By varying any of these parameters, singly or in combination, their
impacts on unit performance can be determined. It is a simple matter to
illustrate, through LCOM or any other queuing technique, that total manning
requirements are driven by workload (R&M) and by the way work is assigned to
AFSs. Other things equal, if equipment did not break as often (R) or if it
took less time to fix (M), the workload, and hence manpower, would decline.
Also, if fewer separate specialties were defined, manpower would also decline,
with no sacrifice of total unit performance. These two examples have in fact
been studied extensively with LCOM over the years. The equipment R&M issue is
readily dealt with by LCOM, as is the AFS or work center consolidation issue.
What LCOM will not do, though, is tell you how or whether these changes can be
accomplished. It can only provide an assessment of the payoffs or penalties
that can be expected if they were implemented.

A common complaint against LCOM in this regard is that task performance is


invariant, that a uniform "5-skill level" assumption is invoked. In fact, LCOM
could be used to evaluate the impacts of different "skill levels" task-by-task
if these data could be supplied in a form usable by LCOM; that is, in terms of
differentiated task performance times. So far, credible data on this question
have not been produced.

2. LCOM model usage is required in developing data about emerging weapons


systems, usually through comparability analysis. Interfacing LSA and LCOM has
been difficult.

3. As for the task identification data used in the modelling process,
task statements resemble MDC tasks. There is compression of the MDC action
taken items, resulting in a more concise but less detailed listing of tasks.
An important feature of the associated data that differs from available MDC
output from MODAS is the indication of the AFS required to perform the actions
taken and crew size.

4. The key data elements for a TIES are found on the listing of the
"Extended Form 11" and consist of LATC, WUC, and AFSC.

VI. LOGISTIC SUPPORT ANALYSIS

An elaborate system has been developed for regulating the collection and
reporting of the logistical requirements for new weapons systems. For TIES,
a key component is the LSA, the general requirements of which are contained in
MIL-STD-1388-1A. This MIL-STD specifies 72 tasks that can be purchased from
the contractor and which describe various aspects of the logistical support
required for the new system. Procurement of some of these 72 tasks is
mandatory; others are optional. Unfortunately, some of the more useful tasks
for MPT purposes are optional.

Origin

While data required for LSA have been developed over the past two decades,
it was not until the publication of MIL-STD-1388-2A, DOD Requirements for a
Logistic Support Analysis Record, 20 July 1984, that Department of Defense
standard requirements, data element definitions, data field lengths, and data
entry requirements for Logistic Support Analysis Record (LSAR) data were
defined. The standard allows delivery of LSAR data in a manual, automated, or
combination manual and automated format. The standard Joint-Service LSAR
Automated Data Processing (ADP) software may be used, or industry-developed
LSAR ADP systems may be used. These industry ADP systems, however, must comply
with minimum standards in MIL-STD-1388-2A.

MIL-STD-1388-2A applies to all system and equipment acquisition programs,
major modification programs, and applicable research and development through
all phases of the system or equipment life cycle. LSAR requirements are
tailored by the requiring authority, who establishes the LSA documentation
requirements based upon the elements of those tasks procured.

The LSA process is intended to be conducted on an iterative basis through
all phases of the life cycle. The LSA documentation process is illustrated in
Figure 30, and the data flow and system engineering interface in Figure 31
(both extracted from MIL-STD-1388-2A). Figure 30 represents the general data
record generation process. The flow should not be interpreted as meaning that
one type of data record must be completed in its entirety before the next data
record can be completed. Generation of some LSAR data, however (particularly
important to MPT uses), is dependent upon the design engineering process and
the release of drawings: preliminary, development, and final. The LSAR data
flow is required for every reparable item comprising a system or equipment end
item.
[Figures 30 and 31, the LSA documentation process and the LSAR data flow
and system engineering interface (both extracted from MIL-STD-1388-2A), are
reproduced here in the original but are not legible in this copy.]
Developing LSA

Some LSA products are developed by the requiring authority, usually a
limited volume of data to define system-level requirements. A large part of
the data are procured. Data important for MPT uses are produced by
contractors. Some LSAR result from reliability and maintainability analyses of
end items, e.g., manhour estimates and task identification, while other LSAR
are developed by engineers or technicians from these initial task
identifications.

The requiring authority tailors the LSA data requirements by specifying
the analysis tasks described in MIL-STD-1388-1A. Next, the engineering and
Integrated Logistics Support functional element requirements which interface
with the LSA and which generate LSAR data must be identified. Scheduling LSAR
is then a complex process of determining when products of these elements will
be available for development of the LSAR. Also involved is the establishment
of the scheduled completion dates for data products that utilize LSAR data,
specifically for the Data Item Descriptions (DID), which are reports that the
requiring authority may procure. For example, a DID that may be procured is
DI-H-7068, Task Skill and Analysis Report. This report provides "timely and
accurate" identification of technical tasks which will be performed by operator
or maintenance personnel, job description, and manpower requirements. All of
the data in this DID are obtained from various LSAR records and reports.
Required delivery dates for the products specified by the DID should be
established in conjunction with preparation of the solicitation package and
should take into account the significant milestones of the development effort.

Since there are a number of DID that may be scheduled for delivery with
relevance to MPT, a list of them with a short description of their purpose and
LSAR application and record source is shown in Table 8. These DID are not
available from LSA data tapes and are not automated.

LSA is initiated in the preconcept and concept exploration phase of the
acquisition process. This effort is directed at establishing support-related
factors and constraints which must be used in developing design guidelines and
trade study plans: reliability and maintainability, life cycle cost, logistics
problems, and projection of logistic resource requirements and their costs. A
limited volume of LSAR is produced from this effort.

In the demonstration and validation phase, information about repair and
support requirements for all levels down to major subsystems is developed. Of
primary concern is the acquisition of data that would influence design
guidelines and design characteristics and the refinement of logistic support
planning data.

During full scale development, LSAR data records are completed to the
hardware indenture level identified in Figure 32. The data are used to develop
logistic support requirements for testing, operation, and deployment. The
iterative nature of the LSAR process culminates at the end of this phase. A
completed LSAR data base is now in place.

[Table 8, the MPT-relevant Data Item Descriptions with their purposes, LSAR
applications, and record sources, and Figure 32, the hardware indenture
levels to which LSAR data records are completed, are reproduced here in the
original but are not legible in this copy.]
LSAR data established during the development phase are retained for use
during the production and deployment phase. They are used to support logistics
analyses and as a basis for design change and requirements for succeeding
generations of the materiel acquisition.

LSAR data of specific use to MPT are produced by hardware item. As a
result, data are produced by the prime contractor as well as by subcontractors
who are developing subsystems or components.

Nine sets of standardized LSAR data records may be prepared, depending
upon the options exercised by the requiring authority. As indicated above,
some of the data records may not be procured. The nine sets are:

1. Operations and Maintenance Requirements

2. Item Reliability (R) and Maintainability (M) Characteristics

3. Task Analysis Summary

4. Maintenance and Operator Task Analysis

5. Support and Test Equipment or Training Material Description and
Justification

6. Facility Description and Justification

7. Skill Evaluation and Justification

8. Supply Support Requirements

9. Transportability Engineering Characteristics

In the succeeding paragraphs, the LSAR data records significant for MPT are
listed. Specific instructions for completing these records are in Appendix A,
MIL-STD-1388-2A.

1. Data Record C, Operation and Maintenance Task Summary (Figure 33), is used to consolidate the operations and maintenance tasks identified for each reparable assembly. Detailed analyses of the tasks are provided on Data Record D.

2. Data Record D, Operation and Maintenance Task Analysis (Figure 34), provides a detailed step-by-step description of how tasks identified on the C record are to be performed, the specific skill specialty requirements, and task time (manhours and elapsed time). This record is initiated during the detailed design effort. It is not completed until the development stage.

3. Data Record D1, Personnel and Support Requirements (Figure 35), identifies training, personnel, support equipment, and supply support requirements for accomplishing the individual tasks. The D1 record is initiated and completed at the same time as the D record.

[Figure 33. Data Record C, Operation and Maintenance Task Summary]

[Figure 34. Data Record D, Operation and Maintenance Task Analysis]

[Figure 35. Data Record D1, Personnel and Support Requirements]
4. Data Record E, Support Equipment or Training Material Description and Justification (Figure 36), is structured to consolidate information related to new support or test equipment and training material.

5. Data Record G, Skill Evaluation and Justification (Figure 37), is used to describe and justify any new or modified personnel skills. A separate G record must be completed for each new or modified skill identified during the task analyses recorded on the D record.

LSAR reports with potential significance for MPT are given below:

1. LSA-001, Direct Annual Man-hours by Skill Specialty Code and Level of Maintenance (Figure 38), is a summary of manhour requirements. The manhour values are derived by multiplying the task frequency by the manhours involved and summing those values for each task performed by a particular specialty code (a computational sketch appears following this list of reports).

2. LSA-002, Personnel and Skill Summary (Figure 39), reports the manhours by skill specialty code, expended on each task.

3. LSA-006, Critical Maintenance Task Summary (Figure 40), reports a list of all tasks that exceed a specific value for task frequency, elapsed time, manhours, or annual manhours.

4. LSA-011, Requirements for Special Training Device (Figure 41), reports operator or maintenance tasks identified as requiring a special training device.

5. LSA-014, Training Task List (Figure 42), provides the rationale for training requirements and training location requirements.

6. LSA-015, Sequential Task Description (Figure 43), provides a task analysis for each task. This summary may be requested to identify all task descriptions applicable to the total system, task identification level, a specific maintenance level, or a specific specialty skill code.

Each of these reports includes significant data identification elements useful in a TIES. These elements include the skill specialty code recommended for task performance, the work unit code, and the task identification.
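
The LSA-001 roll-up described above is simple to express in code. The fragment below is a minimal sketch in Python, assuming illustrative field names ("ssc", "frequency", "manhours") rather than the actual MIL-STD-1388-2A data element titles.

    # Minimal sketch of the LSA-001 roll-up: annual manhours for a skill
    # specialty code (SSC) are the sum, over that specialty's tasks, of
    # task frequency times manhours per occurrence. Field names here are
    # illustrative, not the MIL-STD-1388-2A data element titles.
    from collections import defaultdict

    def annual_manhours_by_ssc(task_records):
        """task_records: dicts with 'ssc', 'frequency' (occurrences per
        year), and 'manhours' (manhours per occurrence)."""
        totals = defaultdict(float)
        for rec in task_records:
            totals[rec["ssc"]] += rec["frequency"] * rec["manhours"]
        return dict(totals)

    records = [{"ssc": "423X0", "frequency": 12.0, "manhours": 1.5},
               {"ssc": "423X0", "frequency": 4.0, "manhours": 0.8},
               {"ssc": "326X2", "frequency": 2.0, "manhours": 3.0}]
    print(annual_manhours_by_ssc(records))  # approx {'423X0': 21.2, '326X2': 6.0}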

LSA Data Storage and Retrieval

As indicated earlier, the LSA data are developed by the contractor. While MIL-STD-1388-2A requires specific formats, data may be in a manual, an automated, or a combined mode. Most large development efforts should be recorded in the automated mode.

Interviews revealed that most data are stored at contractor facilities. Further, for past development efforts, there are differences in the kinds of data recorded. When LSA data production and recording are complete, data tape files are forwarded to AFLC, but there is no means of accessing them at the present time.

[Figure 36. Data Record E, Support Equipment or Training Material Description and Justification]

[Figure 37. Data Record G, Skill Evaluation and Justification]

[Figure 38. LSA-001, Direct Annual Man-hours by Skill Specialty Code and Level of Maintenance]

[Figure 39. LSA-002, Personnel and Skill Summary]

[Figure 40. LSA-006, Critical Maintenance Task Summary]

[Figure 41. LSA-011, Requirements for Special Training Device]

[Figure 42. LSA-014, Training Task List]

[Figure 43. LSA-015, Sequential Task Description]
The most readily available access at present is through contractor records.

The Unified Data Base (UDB) project, programmed for completion in 1987, should provide easy access to LSAR. Plans call for the UDB to provide an on-line system featuring comprehensive editing and help capabilities, user-defined function keys, narrative text editing, illustrated parts breakdown, and data indexing, among many other capabilities. The plan calls for a dial-up capability to access current and historical LSAR. Observation of a demonstration of UDB development efforts as of January 1986 revealed that menu-driven access should make LSAR data easily and quickly accessible.

Reliability and Validity of LSA Data

Two factors affect the stability and accuracy of LSA data. First, the data are supposed to be developed iteratively, with provisions made for updating the various records as development indicates changes are required. Second, estimates of such factors as skill specialty codes and manhours are merely subjective estimates. The technician supplying skill specialty codes may or may not, for a variety of reasons, select the appropriate code. Further, equipment reliability and maintainability entries have little basis in experience until late in the life cycle. Interviews revealed, for example, that reliability estimates for some equipment items were grossly incorrect. Also, interviews indicated that the AFS codes frequently must be changed. In regard to the frequent changes reported as needed in AFS designation, it should be noted that AFS decisions for a new weapons system are the result of negotiations between the personnel classification functions in the Air Force Military Personnel Center (AFMPC) and the major command gaining the new system. The contractor has every incentive to minimize requirements for new MPT resources. On the other hand, the gaining command has incentives to capture resources by advocating new AFSs or shreds of existing AFSs. The AFMPC role is to reconcile the requirements consistent with such guidelines as similarity of task and job requirements, assignment equity, and promotion. While the contractor may be entirely correct in the designation of the AFS to perform a task, these major command and AFMPC considerations may change the designation.

Mulligan and Bird (1980) make this statement about LSAR data in their publication of guidelines for maintenance task identification and analysis: "Analysts may be aware that the quality and technical integrity of LSAR data varies considerably from one program to another" (p. 37). Of most use are the LSAR data bases that are required in full by contract and are implemented completely by the contractor with a continuously updated data base. Many times, however, a limited LSA is procured. In some cases the LSA is performed during the conceptual and developmental phases, in which case the LSA data are limited and do not remain current. In other cases, LSA data relevant to the TIES problem are not procured. Also, data sheets E, F, and G are not usually entered into the computer data base.

Uses of LSA Data

Aside from use of the LSA data for design changes, as a basis for future design considerations, and for development of technical manuals and publications, use of the data varies. There obviously is a wealth of data for manpower purposes. For each task identified, manhour and elapsed time estimates are provided. Summaries of these requirements by skill specialty code, organizational level, and annual maintenance manhour requirements are available. There have been efforts to utilize these data for LCOM modelling of manpower. Cronk (1986) reported that an LCOM conversion program for inputting data provided by the LSA preprocessor is being developed.

The Aeronautical Systems Division (ASD) of the Air Force Systems Command has accomplished LCOM manpower studies using LSAR for the T-46. Cronk's conclusion is that LSAR summary reports are inadequate for LCOM modelling. The C and D data records, however, contained approximately 90 percent of the data required by LCOM, at least for unscheduled maintenance. He stated that the LSA master file from the contractor, if processed by the joint ADP system, would probably satisfy all of the data requirements, in which case an interface program could be developed.

Problems in using LSAR in LCOM, aside from programming language, include the following (a screening sketch follows the list):

1. Blank data fields

2. Data fields with zeros or dummy values

3. Data fields not in prescribed format, because of waivers granted to the contractor

4. Unexpected data entries, e.g., new LSA control numbers

5. Records that do not contain authorized work unit codes

6. Variable record lengths

7. Government furnished equipment data not in the LSA data base

8. Difficulty in producing task networks
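
Most of these problems are mechanical and could be flagged automatically before an LCOM conversion is attempted. The fragment below is a hedged sketch of such a screen; the record layout, field names, and authorized WUC list are hypothetical stand-ins, not the joint ADP system format.

    # Illustrative screen for some of the LSAR-to-LCOM problems listed
    # above. Field names and the authorized WUC list are hypothetical.
    AUTHORIZED_WUCS = {"11A", "23B"}   # stand-in for the authorized WUC list
    EXPECTED_LENGTH = 80               # stand-in fixed record length

    def screen(record):
        """Return problem flags for one parsed LSAR record (a dict)."""
        problems = []
        if not record.get("task_frequency"):
            problems.append("blank or zero frequency field")
        if record.get("manhours") in (0, 9999):   # zeros or dummy values
            problems.append("zero or dummy manhour value")
        if record.get("wuc") not in AUTHORIZED_WUCS:
            problems.append("missing or unauthorized work unit code")
        if record.get("length", EXPECTED_LENGTH) != EXPECTED_LENGTH:
            problems.append("variable record length")
        return problems

    print(screen({"task_frequency": 0, "manhours": 9999, "wuc": "99Z"}))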

In the personnel community, data about recommended skill specialty codes can be useful. They should be used with caution, but they could be the starting point for making classification decisions.

Also, there is a considerable amount of data useful for training development, especially the rationale for requiring training on a task, the task analysis, and tasks requiring special training devices. Unfortunately, these data are available only if the training development personnel know where to obtain hardcopies of the data. Because of the volume of hardcopy data, their use is difficult. At present the primary users are ISD specialists in the 3306th Test and Evaluation Squadron, Edwards AFB, California.

This organization is charged with developing training materials and providing the initial cadre of trainers who are assigned at the first site implementing a new system operationally. The training development process should begin at least three years before the implementation for such systems as the B-1. In every case, the development process must have a long lead time.

Using specialists from various AFSCs who are T-prefix qualified, the 3306th builds course training standards, plans of instruction, and materials. The ISD process is employed and requires detailed system analysis as its first step. The task analysis reported on LSA-015 satisfies much of the system analysis requirement. Other LSA reports also provide valuable input.

An interview with 3306th T and E Squadron personnel revealed that these data are frequently available too late for their ISD use. Instead, their team members spend time at the contractor site on temporary duty to obtain the same kind of information that later appears in the LSA reports.

While the data developed during this TDY are available for training development for initial resident technical training, their use for this purpose is uncertain. One inhibiting reason is the volume of the hardcopy data. Personnel in the Training Development Service, USAFOMC, were unaware of the scope and content of the data.

Because of their immediate need for the LSA data, the 3306th T and E Squadron is acquiring a terminal for access to the UDB. While data may not be complete, UDB access will provide these personnel with the data available, its updating, and relatively easy access for limited analysis. Any such analysis appears to be limited to a given data record; analysis across data records is not available.

Critique of the LSA

1. The LSA data base provides a wide variety of information highly relevant to developing MPT requirements for new and existing systems.

2. Caution in using these data is needed, because they may not be complete. Also, since some of the data items usually are developed by a single person, they may not be entirely accurate.

3. Attempts to use LSA data in LCOM modelling have encountered problems that need resolving, but manpower modelling can certainly be done. In theory, about 90 percent of the data needed for LCOM simulation is provided by LSA.

4. MPT uses have been limited. Some agencies whose functions could be enhanced by use of the data were unaware of its scope and content.

5. Criticisms have to do less with the LSA and LSAR per se than with data availability, completeness, and awareness in the MPT community that they exist. Certainly, MIL-STD-1388-2A provides the mechanism and guidance for generation and display of data relevant to MPT.

6. One may get a false sense of security about LSAR, because it has an "engineering" veneer. In fact, LSA data relevant to MPT are quite subjective and not dealt with as rigorously as OSM data or the operational audits employed to develop input into LCOM.

7. LSAR figures importantly in front-end MPT analysis for new systems. HARDMAN, for example, depends upon the availability of LSAR, either for the new system or from an older system using comparability analysis, to calculate manpower requirements and assorted training system trade-offs. In some respects, then, LSAR can be thought of as a prototype MPT data base. But obtaining the data in the first place, and updating the data as development progresses, present serious problems for the MPT analyst.

VII. INSTRUCTIONAL SYSTEMS DEVELOPMENT

The first step of the Air Force five-step ISD process is to analyze system requirements. This analysis requires a needs assessment (Goldstein, 1978) which includes task analysis, organizational analysis, and person analysis. Results of these analyses are used in the second step of the model to define education and training requirements. It is in this step that decisions about what is to be trained are made. Thus, it seems apparent that prerequisite to the first two steps are the requirements to identify tasks that will be performed by those who are trained, as well as information to decide the emphasis to be placed on training each of the tasks and how training should be developed.

Origin of ISD

AFR 50-8, AFMs 50-2 and 50-62, and AFP 50-58 specify the requirements and methods for developing instructional programs according to the five-step ISD model. While policy requires that the ISD process be used in developing training programs, the five-step model is most frequently applied to new courses or programs. Interviews revealed that while most of the steps of the ISD model are applied to some degree in every development effort, there is no audit trail of the development efforts. In terms of the first two steps of the model, occupational analysis data, as they are available, serve as the task identification source. When these data are unavailable, training personnel must generate their own task identification data or find some other source. Information from the occupational analysis data base, such as task difficulty and training emphasis, is also extremely valuable for deciding which tasks are to be trained in resident courses. The OSM data are also useful for other technical training (e.g., advanced courses), but not nearly so much as for resident training, for which ATCR 52-22 provides guidelines for applying the OSM data to decisions about initial resident technical training.

The use of Utilization and Training (U&T) Workshops grew out of a Hasty Grad initiative in 1977. U&T workshops are made up of users, trainers, personnel representatives, and occupational analysts. This group, using OSM data and subject matter expert guidance, defines training requirements, satisfying the first two steps of the ISD model.

More recently, AFR 50-58 (6 Aug 1984) in paragraph 3b directs that "Anticipated training requirements for each stage in the life cycle of a weapon or support system or for Air Force personnel career qualification are defined by a multicommand Training Planning Team (TPT). The results of TPT efforts are documented in a Training Development Plan (TDP)." Members of a TPT include Hq ATC, using command, and AFMPC representatives.

A TPT can be initiated by the Air Staff (functional manager or other major staff element) through issuance of any of the normal change directives in the Planning, Programming, and Budgeting System (Driskill, Mitchell, & Ballentine, 1985). When such a directive is issued, Hq USAF/DPPT will direct Hq ATC to form a TPT to develop career ladder TDPs. A TPT can be required for a variety of reasons:

1. AFR 36-1 or 39-1 changes requiring significant revision of resident, correspondence, on-the-job, field, or unit level training.

2. U&T workshops or system acquisition plans which identify significant changes in existing training programs.

3. Internal or external evaluation data which identify a training need of a major training user.

The TPT is a highly formalized structure to accomplish ISD. Implementation of this initiative is the first instance in which the ISD model is to be applied to the development of training across a career ladder. Previous ISD applications were directed at segments of an AFS, i.e., initial resident training or command-sponsored training.

Development of ISD Data

As indicated above, personnel involved in application of ISD rely on OSM task identification data when they are available. Other data sources are also used, and when no sources are available, the ISD specialists must develop their own data. They use interviews with subject matter experts and questionnaires administered in person or by mail. In the case of the 3306th Test and Evaluation Squadron, development of tasks and associated data is accomplished at the contractor site or from LSAR if these data are available.

ISD Data

In conducting a systems analysis and determining training requirements, ISD specialists require a variety of information about the jobs that the trained personnel are expected to perform. This information includes:

1. Task identification

2. Task criticality

3. Percentage performing

4. Number performing

5. Frequency of performance

6. Learning difficulty

7. Training development time

8. Task analysis (action taken, item acted on, steps of the task, knowledges, and skills)

Proficiency requirements, task criticality, and frequency of performance are difficult to capture either in interviews or by questionnaire. Criticality may have a variety of meanings to the respondents or interviewees. Unless a clear, precise definition is employed, the reliability of these data can be questioned. Frequency of performance presents two problems. First, frequency data must be captured in an equal interval format; otherwise the data cannot be analyzed arithmetically. Second, decisionmaking from frequency data is difficult. How is a decision rule developed? Should an infrequently performed task be trained in preference to a frequently performed task, or vice versa?

Storage and Retrieval of ISD Data

Data relevant to ISD are recorded on a variety of forms. These data are
not automated. Furthermore, interviews revealed that documentation is
inadequate. The one exception is the task analysis data being developed by the
USAFOMC/OMT where the efforts are systematically documented and accessible
through the USAFOMC IBM 4341 system. Data developed by the 3306th TES are
available, but they must be manually accessed.

Reliability of ISD Data

When data from the other task identification systems are employed, the reliability of the data of those systems applies. For ISD specialist-developed data, reliability is not assessed. Reliability would depend upon the replicability of the data through successive SME interviews or panels.

VIII. CAN TASK DATA BASES BE LINKED?

Across the various task data bases is a wide variety of information that, if a method of aggregation can be developed, could be highly useful for MPT purposes. Table 9 displays these various kinds of information, identifying each type of data and the task system or systems that produce them. For each information type, the table shows which MPT function can make use of the data. When an M, P, or T is underlined, the function is at least partially making use of the kind of data provided.

Given the variety of information and uses, can the task data systems be linked so that all of the information would be available? The answer is yes. Data elements and literal translations identifying tasks exist in each of the data bases that will permit linking tasks and associated data among systems.

Table 9. Illustration of Task Data Systems and Data Provided

                                 TASK IDENTIFICATION METHOD

DATA PROVIDED                  OSM    MDC    LCOM   LSA    ISD

Task Identification            MPT    MPT    MPT    MPT*   PT
AFSC                           MPT    MPT           MPT*   T
Weapons System                 MPT    MPT    MPT    MPT*   T
Sub System                     MPT    MPT    MPT    MPT*   T
Component                             MT     MT     MT*    T
Crew Size                             M      M      MT*
Percent Performing             PT                          T
Job Structure                  MPT                         T
Actual or Estimated Time              M      M      M*
Relative Time                  PT                          T
Estimated Difficulty                                PT*
Relative Difficulty            PT                          T
Consequences                                        PT*    T
Relative Training Emphasis     PT                          T
Task Analysis                                       T*     PT
Support Equipment                            T      T*     PT
Job Satisfaction               P

* In the acquisition process
But, because of the different ways tasks are described among the systems, some manual methods of linking will inevitably be required to supplement an automated linking methodology.

There are several precedents for linking data from different systems. The first of these precedents occurs in the linking of specialty training standard elements and plan of instruction criterion objectives with occupational analysis program tasks and related data. Using the PRTMOD option of CODAP, the USAFOMC provides a matching of survey tasks and the training documents in each survey report. The matching is accomplished by subject matter specialists in the career ladder to which the data apply. Recently an automated matching process based on the semantic content of the different sets of data to be matched was developed to enhance the task and training document matching activity. Subject matter specialist input to fine-tune the matching is still required; it is not entirely automated (a simple illustration of such matching follows).
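
The actual matching software is not described in the sources reviewed; as a toy illustration only, the fragment below scores candidate pairs of task statements by word overlap (Jaccard similarity) and keeps the high-scoring pairs for subject matter specialists to confirm. Nothing in it is drawn from the CODAP/PRTMOD implementation.

    # Toy illustration of matching on semantic content: score candidate
    # pairs of survey tasks and training standard elements by word
    # overlap and keep the best-scoring pairs for specialist review.
    # This is not the CODAP/PRTMOD algorithm, only the general idea.
    def jaccard(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    def candidate_matches(survey_tasks, sts_elements, threshold=0.3):
        pairs = []
        for task in survey_tasks:
            for element in sts_elements:
                score = jaccard(task, element)
                if score >= threshold:
                    pairs.append((score, task, element))
        return sorted(pairs, reverse=True)

    print(candidate_matches(
        ["Remove and install jet engine starters"],
        ["Remove and install starters", "Inspect fuel pumps"]))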

A second example of matching data is reported by Monson, et al. (1985). Their purpose was (1) to produce performance-based position descriptions for AFSCs from actual records of maintenance actions, and (2) to link these descriptions with occupational analysis tasks and specialty training standards. Extracting historical maintenance data from the F-16 Centralized Data System, Monson, et al. formatted these data into action verb-noun task statements by AFSC. They then obtained occupational analysis tasks performed on the F-16 on the flightline. The third step was to obtain specialty training standards for F-16 AFSCs.

The matching of the three task data sources consisted of three steps. First, subject matter specialists mapped the maintenance tasks onto occupational analysis tasks. Next, the specialists linked the occupational analysis tasks to training standard tasks. Finally, the results of the mapping procedure were validated by additional subject matter specialists.

Monson, et al. state that the mapping methodology "successfully linked maintenance tasks, OS tasks, and STS statements" (p. 8). They reported some need, however, to discard occupational analysis tasks and specialty training standard elements. They eliminated from the occupational task list those tasks that had no correlates in the maintenance task data inventory. These tasks were those that would pertain to maintenance on other aircraft and the tasks that maintenance personnel perform in their jobs that are not recorded under the MDC. They also removed tasks that represent some of the power of the OSM. These were tasks that their subject matter specialists identified as not being performed on the F-16 or as being tasks personnel assigned to this aircraft do not perform. The evidence over the years in which the OSM program has functioned clearly shows that personnel do perform tasks not assigned to their AFSC or thought to be performed by some subset of members of the AFSC. For example, incumbents are sometimes called upon to work on transient aircraft, in which case the historical record of their work could only be found under that aircraft in MDC. Given present inabilities to identify this work in MDC, capturing these tasks in the Monson, et al. study was impossible. But eliminating these tasks is a mistake that must be avoided in a TIES. MDC tasks may not be linkable with all OSM tasks, but this does not mean that the unmatched tasks are not performed or important.

Specialty training standards contain elements that are not task specific, such as understanding of some principle of operation, or safety training. Monson, et al. eliminated this type of element. In the PRTMOD match of OSM tasks and training standard elements, however, the subject matter specialists link the non-task elements with task statements.

To identify the AFSC of incumbents who performed work on the F-16, Monson, et al. established an identification code for each participant. Such a code is required if the data are extracted from AFTO 349, since AFSC is not recorded.

The result of the Monson, et al. study was the production of job description reports. These reports could be generated in terms of the maintenance, occupational analysis, or STS tasks. What is not clear in their report is whether there was a one-for-one matching of the tasks, or whether one OSM task or specialty training standard element was mapped to more than a single MDC task. Experience with PRTMOD matches and the difference in the level of detail of MDC as opposed to OSM tasks suggest that a one-to-one match is unrealistic.

IX. USER TASK DATA REQUIREMENTS

While members of MPT organizations speak clearly about their missions and organizational objectives, they do not so clearly articulate the needs for data to support their decisions. Most of the organizations rely to varying degrees on data from one of the task data systems. Only one part of the MPT structure, however, has any written guidelines for applying data to the decisionmaking process. ATCR 52-22 establishes rules for applying occupational analysis data in making decisions about what tasks are to be trained and where they are to be trained. Other aspects of the training establishment, as well as the manpower and personnel functions, are free to use, or not to use, products of the OSM, MDC (except as it is required in LCOM), LCOM, and LSA in the manner they choose.

LCOM is used for determining some maintenance manpower requirements. Interviews with manpower personnel reveal their greatest concern to be inaccuracy of maintenance data. They use the operational audit as a means of obtaining more accurate assessments of task time and crew size. The operational audit is labor intensive and cannot be employed on as wide a scale as necessary to obtain the quality of data the manpower community desires. For the maintenance tasks, little use of occupational analysis or LSA data is conceived by manpower specialists in modelling maintenance manpower requirements. The manpower function requires, for current weapons systems, data on the time required to perform maintenance on a particular end item and the frequency with which this maintenance must be performed. These data are readily available from MDC. At least two observations about the present manpower practices, however, seem appropriate. First is a report (McFarland, 1974) of a study of the potential uses of occupational analysis (OSM) data by management engineering teams (MET). The study was requested by Hq USAF, Manpower and Organization, since both the OSM and MET programs are concerned with task level descriptions of time spent to perform tasks required in the Air Force. The Computer Systems career field (AFSC 51XXX) was chosen as the objective of the study. A job inventory was administered to personnel in the AFSC 51XXX workcenters to obtain the relative time spent estimates. At the same time, the MET performed its time measurements. As noted earlier, the correlation between the two time measures was .79. More importantly for using OSM with LCOM for manpower determinations were these findings:

1. Hierarchical grouping of the OSM data became the basis for a MET recommendation to restructure workcenters.

2. The OSM job descriptions were "extremely useful in the development of workcenter descriptions" (p. 10). McFarland indicated that it was felt that significant savings in manhours could be realized by MET during the preliminary phase by using current OSM job descriptions.

McFarland cautioned that OSM data are more useful for large workcenters and the more homogeneous AFSs than for smaller workcenters and heterogeneous AFSs.

A second observation is that manpower specialists occupy a unique position compared to personnel and training specialists. They view the work performed in workcenters firsthand. While they are concerned with identifying the proper AFSC for the workcenter, they also are able to observe special qualification requirements. This information would be valuable if it could be aggregated for personnel classification. In addition, the manpower specialists are a valuable source for mapping functional account codes to AFM 39-1 job descriptions and OSM data.

The McFarland findings have significant implications for integrating manpower and personnel issues. While M and P, as is noted in a subsequent section, operate from different task definitions, an application of OSM data in manpower studies would bring the two functions to a more common meaning. The significance of using OSM in maintenance specialties is probably not so great as its use for LCOM modelling of nonmaintenance specialties. Here there is no source like MDC to build a strawman for operational audit. Coupled with better defined tasks in the OSM and a linear transformation of relative time spent to clock time (sketched below), LCOM modelling of nonmaintenance manpower needs could be greatly facilitated.
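
One such transformation, sketched below under an assumption of our own (that total duty hours are the scaling anchor), simply distributes an incumbent's available clock hours across tasks in proportion to the OSM relative time spent values.

    # Sketch of a linear transformation of OSM relative time spent to
    # clock time: distribute available duty hours across tasks in
    # proportion to relative time spent. The anchor (duty hours per
    # month) is an illustrative assumption, not an established method.
    def relative_to_clock(relative_time, duty_hours_per_month):
        """relative_time: dict task -> relative percent time spent."""
        total = sum(relative_time.values())
        return {task: duty_hours_per_month * pct / total
                for task, pct in relative_time.items()}

    print(relative_to_clock({"bench check": 40, "repair": 35, "inspect": 25},
                            160.0))
    # {'bench check': 64.0, 'repair': 56.0, 'inspect': 40.0}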

Personnel has responsibility for determining classification structure and defining the overall task, aptitude, educational, and strength and stamina requirements of AFSs. The most efficient classification structure is that which provides the broadest task coverage consistent with the similarity of task skill and knowledge requirements. A certain homogeneity of skill and knowledge requirements must exist within an AFS. As the disparity of knowledge and skill requirements (or lack of homogeneity) among jobs increases, so grows the inefficiency with which AFS incumbents move through assignments. Initial and on-the-job training increase; aptitude, educational, and strength and stamina requirements determinations may also be adversely affected.

The classification function has long relied on occupational analysis data and subject matter specialist input for making job structure decisions. Interviews revealed that classification policy makers would welcome any additional data, such as a TIES could provide, that would assist them in assigning tasks to AFS structure on the basis of similarity of knowledge and skills.

At least two weaknesses of present data sources for making classification determinations are apparent. First, OSM data provide only some of the information needed for classification decisions. The data provide the tasks performed and their work structure, but not the similarity of knowledge and skill requirements. Also, the generic nature of the tasks describing many AFSs frequently does not permit an assessment of the scope of the work required by a specific task, that is, the many different weapons system specific tasks represented by a general task. It is easy to assume, for example, that the knowledge and skills to "install starters on jet engines" are the same, if the number of different jet engines and types of starters is not obvious. In the performance measurement research to develop measures of hands-on performance, however, task analysis revealed the knowledge and skills to be sufficiently different that separate hands-on measures of installing starters on the J-57, J-79, and TF33 had to be developed.

This performance measurement incident also illustrates a second weakness of present data sources. A subject matter specialist conference prior to the task analysis effort had indicated the knowledge and skills involved in installing starters to be so similar that a single hands-on measure could be applied. Obviously, the task analysis revealed otherwise. The point is that subject matter specialists made an incorrect recommendation for the measurement of the generic task. What would have been their recommendation had they had the opportunity to assess the differences of task performance by engine type? One, unfortunately, can only hypothesize that such consideration would have improved their analysis of the task requirement and hence their recommendation. This instance calls into question the whole notion of subject matter expert judgments. Reliability and validity of their subjective judgments must somehow be determined.

Another weakness of subject matter specialist judgment, particularly those judgments based on more general circumstances, is the variation of judgments from one group to another. A case in point is a maintenance specialty (AFSC 326XX) created in 1970 that is undergoing its fourth reorganization under the present effort to combine work centers, which is employing the same methodology as the previous reorganizations.

The crux of the matter is that the personnel classification function is provided only some of the data sources useful to perpetuation of an efficient structure. Information from a TIES should enhance present use of occupational analysis data by providing very specific information about tasks.

Personnel in the various parts of the training function readily appreciated the potential of a TIES. Training personnel must provide the initial, specialized, and on-the-job training for the tasks of AFSs. Whether they are involved in developing training for new systems or equipment or on-the-job training for current systems, or are members of staffs of technical training courses or major commands, training personnel require certain basic data about tasks:

1. A statement of each task.

2. A definition of who performs each task.

3. Information on which to base decisions about when and where each task should be trained.

4. Analysis of tasks to be trained into their component steps, knowledges, and skills.

For new equipment or systems, the ISD personnel in the 3306th Test and Evaluation Squadron use LSA data when they are available early enough in the development cycle. Otherwise, the 3306th personnel develop necessary information from other sources, usually through on-site interviews and observation with contractor personnel developing the new equipment. They need access to the LSA data as early as possible.

Trainers in the technical school as well as in the Training Development Service rely on OSM data and utilization and training workshops to decide the content of initial training. Once decisions are made on the tasks to be trained, task analysis to define knowledge and skill and task steps is completed by subject matter experts.

Field Training Detachment (FTD) and on-the-job training are position-specific training efforts. FTD training is designed to provide specific instruction on a system or equipment at a given Air Force base. Decisions about what to train in FTD courses present little problem. A greater problem is the sketchy audit trail between initial resident training and FTD training. Accomplishing an audit trail could be made easier by the existence of a TIES that would permit matching of equipment-specific tasks trained with related specialty training standard elements and occupational analysis tasks.

On-the-job training is intended to be based on job qualification standards (JQS) developed for various positions of an AFS. These positions may represent job types and corresponding tasks from occupational analysis data. Tasks comprising the JQS are selected to be those on which persons who work in a position covered by the JQS should receive training. Completion of training for all of the tasks results in task certification of the incumbent. In many cases, this certification leads to the skill upgrading of the incumbent. Since survey tasks or an equally generic type of task comprise a JQS, any further definition of them would improve communication. For example, in the install starter case, it would be much more meaningful to know which starter an incumbent had been trained to install.

In summary, each of the MPT functions utilizes data from at least one of the task identification systems. They each need timely, accurate, and complete data at the most specific level possible.

X. TIES APPLICATIONS

While the development of a TIES presents many challenging issues, the benefits to be derived from the implementation of such a system are also many. In addition, there are some actions that can be taken that will facilitate TIES development.

The benefits to derive from a TIES originate in two related sources. First, a TIES will provide task identification from generic to specific levels of detail, including the analysis of tasks into component steps. Any MPT system whose decisionmaking function is based on task information would certainly be enhanced by the more detailed data from TIES. Second, a TIES would provide for a system of detailed audit trails of personnel and training activities and a mapping of their interrelationships.

More specific task information than is now available to MPT functions would be generally useful across the board. Some specific enhancements of MPT activities are these: decisions about content of resident technical training courses, specific task identification for FTD and major command training courses, and delineation of job qualification standards (JQS) in terms of specific equipment or systems for which training is provided for certification purposes. Of special importance is the fact that the content of the training delivery systems shares no common mapping or interfacing technology. A TIES would provide the technology so that, for example, the tasks on a specific system trained in an FTD could be mapped to the specialty training standard or to tasks trained. A similar analogy also applies to development of a JQS for on-the-job training use.

AOTS and TDS would also be served by more detailed task information. AOTS especially stands to gain from a TIES because of the requirement for master and local task lists. The first could be generated by a combination of OSM and MDC data as a first-cut master task list in maintenance AFSs (a merging sketch follows). Local job descriptions could be the product of the base-level generated MDC combined with other task identification processes at base level.
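
A first-cut merge of the two sources could be as simple as the sketch below, which unions OSM survey tasks with MDC-derived task statements and tags each entry with its source. Exact-text matching is assumed here for brevity; in practice the semantic matching discussed in Section VIII would be required.

    # Minimal sketch of a first-cut master task list for a maintenance
    # AFS: union OSM survey tasks with MDC-derived task statements,
    # tagging each entry with its source(s). Exact-text matching only;
    # real task statements would need semantic matching.
    def master_task_list(osm_tasks, mdc_tasks):
        merged = {}
        for task in osm_tasks:
            merged.setdefault(task, set()).add("OSM")
        for task in mdc_tasks:
            merged.setdefault(task, set()).add("MDC")
        return merged

    print(master_task_list(
        ["Install starters on jet engines"],
        ["Install starters on jet engines", "Replace starter clutch"]))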

Development of the TDS technology is based on the creation of task training modules from hierarchical and non-hierarchical grouping of tasks of an occupational survey of an AFS. The scope of these modules would be revealed if MDC data were mapped onto the survey tasks. In fact, without the array of systems, subsystems, and components whose maintenance is reflected by the task training modules, one may be misled about the complexity of the training issue.

When classification is considered, the need for a TIES becomes important also. Classification should put together those activities that have similar knowledges and skills and separate those activities with unlike ones. At the level of occupational survey data, and using subject matter opinion about what activities should go together, the issue of similarity of knowledges and skills can easily be lost. Survey data make no provision for the knowledge and skill data, while the interviews infrequently pay sufficient attention to the issue. The problem is that the judgments of senior experts in a field about the ease of performing certain tasks are tempered by their familiarity with the tasks and related knowledge and skills. Unfortunately, what may be easy for an experienced expert is not necessarily easy for someone who is just beginning to learn the work of a specialty. There is a need for a methodology of estimating the degree of difficulty of performing tasks of one kind when the job incumbent has experience with other kinds of tasks, or no experience at all.

A TIES would be especially useful in initiatives like Project SUMMA. The variety of data from TIES, especially if methods of determining the transferability of skills among AFSs were fruitful, would provide a large data base showing job structure, job satisfaction, actions taken at the component level, and other data relevant to altering AFS structure to optimize requirements of each of the MPT segments.

For trainers, the idea of a TIES was exciting. The concept of an OSM task referenced to MDC tasks, which in turn are referenced to LSA task analysis, offers trainers the promise of a large amount of information that must otherwise be developed for each task trained. Equally attractive is the idea that a clear audit trail from training standard to job requirements can be documented.

There also is considerable promise of a TIES for new weapons system MPT issues, especially where comparability analysis is used to arrive at manpower and reliability and maintainability estimates. If the MDC task data of the comparable systems have been mapped to OSM and LSA tasks, then job type, training emphasis, task difficulty, specialty training standard information, and the task analysis results are available for sorting out questions of AFS designations and definitions of aptitude, educational, strength and stamina, and training requirements.

A significant question for a TIES relates to its potential contribution to MPT integration. The formal interactions of the three functions and the agencies involved were concisely reflected by Armstrong and Moore (1981). This interaction, displayed in Figure 44, is complex and illustrates clearly the variety of impacts that the action of one function has on the other two.

While Armstrong and Moore clearly illustrate the formal structure, the variety of perspectives each of the communities has on MPT issues is not shown. MPT integration will require accommodation of these perspectives. Specifically, each of the MPT functions views the issue of accomplishing work in different terms, and, indeed, there are other players, namely the user and the worker, who are crucial. The user generally will have three variations of the same perspective. At the highest level, the desire is to acquire and train personnel to perform whatever job must be accomplished in the shortest time possible (although the present initiative in SUMMA suggests some softening of this position). At intermediate levels, the perspective is more limited, incorporating the need for job incumbents who can accomplish the work required by the intermediate level and to retain them as long as possible. At the lowest level, the concern is much more specific: the training and retention of personnel to do the specific jobs to which they are assigned. This perspective, while varying in degree, has strong implications for their view of occupational classification structures.

[Figure 44. Formal MPT interactions (Armstrong and Moore, 1981)]
Workers have different perspectives. They are concerned with their sense of accomplishment and job satisfaction, the meaningfulness of the work they perform, promotion potential, career progression, and assignment desirability. For example, one has only to look at numerous occupational surveys of AFSs, especially in the maintenance specialties, to see how job incumbent satisfaction has varied within and among specialties as a function of changes in classification structure. This perspective, too, has strong occupational classification implications.

The manpower function must address two basic considerations. First, jobs must be identified and described so that they can be matched to an appropriate AFR 36-1 or 39-1 classification code. Second, the manpower resources necessary to accomplish workload must be determined. As SUMMA shows, as does the TSAR modelling of combat maintenance capabilities reported by Dunigan, et al. (1985), occupational classification structure has a major impact on manpower requirements.

From the personnel perspective, aside from the selection issue, there are several important criteria. Occupational classification optimally would combine into a single structure those jobs in which knowledges and skills are highly similar and where transfer of them from one job to another is at a maximum. There are additional considerations, however, to structuring. Assignment equity, especially overseas rotation, career progression, promotion potential, and meaningfulness of work (all of essential concern to workers) must be accommodated in occupational classification.

Training is also impacted by the occupational structure. Broad classifications present a very difficult training problem. More narrowly defined classification structure reduces the problem. In a broad career ladder, it simply is cost prohibitive to train all who enter initial resident technical training on all aspects of the AFS. ATCR 52-22 came into existence because of this situation, specifying that training should be provided on the tasks for which the probability of a job incumbent performing them during the first assignment is high. Under this provision, many tasks, especially in broadly structured AFSs, are not trained initially. There is a consequent increased on-the-job training burden. In narrowly defined AFSs, training can be more economically provided and reduce future training requirements. While the TDS, as do U&T workshops, is addressing this issue, the problem is unlikely to be significantly reduced with regard to initial training without concurrent changes in occupational structures.

It seems clear that occupational classification structure has highly significant MPT impacts. The problem is to optimize the issues or, better, the criteria that M, P, and T, as well as the user and worker, apply to the classification issue. As MPT have functioned, these criteria have not been optimized. In U&T workshops and in TPTs, manpower representation is not required. SUMMA is attempting to optimize M and P, but T is not a player. In the present effort to combine work centers under RIVET WORKFORCE, the major players are involved, but the efficiency of this effort may be inhibited because of the lack of necessary data about similarity of knowledges and skills.

But what does this discussion have to do with a TIES? TIES will not
solve the issue of MPT integration. A TIES will, however, bring into a single
data base a number of sources of information that should facilitate
integration. At present, none of M, P, or T uses more than one data base--and
those data bases are different. A "task" to manpower, or to maintenance
personnel, is not the same as a "task" to personnel or to training in deciding
what to train. A TIES would not necessarily change their respective
definitions of a task, but it would match the different "tasks." Further, a
TIES would aggregate a variety of information relevant to MPT issues in a
single source, i.e., such factors as tasks, job satisfaction data, job
structure, task difficulty, etc. It also is possible that over time the values
of the various criteria can be developed, so that they can be weighted
according to various situations. If these values and weights can be
determined, modelling of occupational classification structure to optimize
MPT, user, and worker criteria can become a reality.
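
To make the weighting idea concrete, a minimal sketch follows. The criterion
names, values, and weights are invented for illustration and do not come from
any Air Force data system or from this report.

    # Minimal sketch of weighted-criteria scoring for candidate
    # occupational structures. Criterion names, values, and weights
    # are hypothetical, not operational data.
    WEIGHTS = {"skill_transfer": 0.4, "training_cost": 0.3, "job_satisfaction": 0.3}

    candidates = {
        "broad structure":  {"skill_transfer": 0.9, "training_cost": 0.3, "job_satisfaction": 0.5},
        "narrow structure": {"skill_transfer": 0.5, "training_cost": 0.8, "job_satisfaction": 0.7},
    }

    def score(criteria):
        """Weighted sum of criterion values for one candidate structure."""
        return sum(WEIGHTS[name] * value for name, value in criteria.items())

    # Rank candidate structures from best to worst under the chosen weights.
    for name, criteria in sorted(candidates.items(), key=lambda c: score(c[1]), reverse=True):
        print(f"{name}: {score(criteria):.2f}")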

Building a TIES is not the most difficult of the problems that lie ahead.
Typically, MPT personnel are not data oriented, and time and effort will be
necessary, first, to acquaint them with a TIES and, second, to equip them to
make use of the data. It also is not yet clear how the various factors should
be weighted and integrated.

Interface Problems and Issues

The idea of a TIES for use in MPT is seductive, but its creation and
implementation are not without problems and issues of some concern.

1. Historical data, i.e., data presently in the OSM, MDC, LCOM, and LSA
files, are difficult to access and were not originally developed to interface
with any other data system. For these data, whose interfacing will be resource
intensive, it is suggested that a TIES be developed on request only. A TIES
initiative seems better directed at the actions necessary to align the
different systems for interfacing than at the creation of new data.

2. Incorrect or missing WUC or other key identification data will limit
the amount of linking that the initial automatic processes can accomplish.

3. As with any new or expanded data system, initial use will be limited.
Time and training will be required for potential users to become aware of the
data system.

4. The variable schedules of occupational analysis surveys and LCOM
modelling create a problem of keeping the matched data up to date. Also, there
is the question of when LSA tasks should be matched--at the end of the
development phase?

5. Differences of task specificity will require considerable subject
matter specialist input to resolve task mapping questions.

Some actions that can be taken in the near term to facilitate future
interfacing are suggested below:

1. Increase compatibility of USAF Job Inventory verb and noun task
statements with MDC, LCOM, and LSA tasks. This effort can be approached in two
ways. First, actions taken statements from the MDC, LSA, and LCOM data bases
can be adapted to the survey tasks during the inventory development process.
Here the USAFOMC may need research assistance to aggregate the different data
sources and develop a methodology; under the pressure to produce inventories
and survey reports, the USAFOMC has few resources for developmental efforts.
Second, the inventory format for data collection should be studied from a
research perspective. Foley (1980) suggested a revised format that potentially
could extend the range of equipment-specific data collected in occupational
surveys as well as resolve the incompatibility of action verbs. The importance
of Foley's suggestion is that it illustrates an alternative approach to data
collection format. What Foley's work is not is a job analysis system devised
from research findings; it is a suggestion for another approach to collecting
maintenance task data through surveys. As such, it should not be employed
operationally until it is subjected to the same kinds of tests of accuracy and
stability of data as the present program underwent. For the success of a TIES,
however, the incompatibility of action taken verbs between the OSM and the
MDC, LSA, and LCOM must be reduced.

2. Increase consistency of task description from one job survey to
another, especially between surveys of the same specialty.

3. Devise quality control procedures to assure consistency of coverage of
background information items in USAF Job Inventories.

4. Explore feasibility of including workers' AFSC on the AFTO 349.

5. Improve discipline in completing LSA data records and MDC AFTO 349s to
assure that correct data codes, especially WUC, are entered.

If data from the four primary systems are successfully related, output
which displays the matched data will inevitably raise several crucial
questions. First, given the array of equipment included under a certain
occupational analysis task, a logical question is "How similar are the
knowledges and skills involved in the maintenance action taken among the
different equipment?" A related question concerns how a job incumbent,
especially a first-termer, can effectively move from performing one set of
tasks in an AFS to another set--on different equipment or at different
locations. These two questions are at the very heart of the classification
problem. They, unfortunately, are not easily resolved, but research which
looks at the ability of subject matter specialists to estimate the similarity
of knowledges and skills required of tasks among job types or specialties is
urgently needed.

In some unreported efforts, known to the author, to obtain such estimates,
the thrust was to identify subject matter specialists expert in two or more
specialties to provide the estimates. Unfortunately, the supply of such
experts was too limited for the purpose. Given, however, a study conducted at
the USAFOMC (reported by Falle & Knight, 1983) and the rather extensive work in
developing task taxonomies (see, for example, Fleishman & Quaintance, 1984),
there is sufficient reason to suggest that a methodology could be developed for
obtaining estimates useful for determining similarity of jobs.

In the USAFOMC study, senior technicians for three career ladders
estimated the amount of training required for a job incumbent in one ladder to
perform each of the tasks of the other two ladders. Agreement among raters
exceeded .90, and the results revealed that the experts considered a merging of
two specific AFSCs much more appropriate than any other merger combination.
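
The report does not state which agreement index the USAFOMC study used. As a
hedged illustration only, the sketch below computes one plausible index, the
average pairwise Pearson correlation among raters, over invented training-time
estimates.

    # Hedged sketch: one plausible agreement index (average pairwise
    # Pearson correlation among raters). The study's actual index is
    # not stated in this report; the ratings below are invented.
    from itertools import combinations
    from statistics import correlation  # requires Python 3.10+

    ratings = {  # rater -> training-time estimate per task (hypothetical)
        "rater_a": [1, 3, 5, 2, 4],
        "rater_b": [1, 2, 5, 2, 5],
        "rater_c": [2, 3, 4, 1, 4],
    }

    pairs = list(combinations(ratings.values(), 2))
    mean_r = sum(correlation(x, y) for x, y in pairs) / len(pairs)
    print(f"average interrater correlation: {mean_r:.2f}")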

There also is available some evidence that subject matter experts can
provide certain detailed information that, when aggregated, yields acceptable
estimates of a factor about tasks. A case in point is a study reported by
Rose et al. (1984) in which the factor to be developed was the complexity of
tasks. Rather than obtain estimates of the complexity of each task, Rose et
al. obtained specific kinds of information about the characteristics of the
tasks from the subject matter specialists, e.g., the quality of the technical
manual describing a task and the logical sequencing of the steps of the task.
An algorithm for combining these data into a single complexity rating was
developed. As a result, Rose et al. were able to rank a set of tasks from
most to least complex.
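
Rose et al.'s actual combining algorithm is not reproduced in this report. The
sketch below merely illustrates the general approach, with invented task
characteristics, scales, and weights: several narrow ratings per task are
combined into one complexity score, and the tasks are then ranked by that
score.

    # Illustrative only: the characteristics, scales, and weights are
    # invented; Rose et al.'s actual algorithm is not given here.
    WEIGHTS = {"manual_quality": -0.5,   # better manual -> less complex
               "step_sequencing": -0.3,  # more logical sequence -> less complex
               "num_steps": 0.2}         # more steps -> more complex

    def complexity(characteristics):
        """Combine narrow characteristic ratings into one complexity score."""
        return sum(WEIGHTS[k] * v for k, v in characteristics.items())

    tasks = {
        "remove starter":  {"manual_quality": 4, "step_sequencing": 5, "num_steps": 12},
        "adjust governor": {"manual_quality": 2, "step_sequencing": 3, "num_steps": 20},
    }

    # Rank tasks from most to least complex.
    for name, chars in sorted(tasks.items(), key=lambda t: complexity(t[1]), reverse=True):
        print(f"{name}: {complexity(chars):.1f}")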

The point of this detour is that evidence is developing to show that
in-depth knowledge of two or more AFSs may not be necessary to estimate how
efficiently a worker could move from one job to another consisting of different
tasks. If research could develop an estimation methodology, the importance of
a TIES for MPT would multiply dramatically.

Still another question can arise from the interfacing of an occupational
analysis task with its MDC, LSA, and LCOM counterparts. Does the task
difficulty rating of the generic OSM task accurately represent the difficulties
of the counterpart tasks? In short, using the earlier install starter task
example, is the difficulty of installing starters (of different kinds) the same
across all jet engine types? Since the task difficulty rating is an important
element of aptitude determination for an AFS, resolution of the question is
important. There may be differences of difficulty for a given generic task--as
some unreported evidence known to the authors tentatively suggests--depending
upon the specific equipment on which the task is performed, in which case
aptitude determinations may be affected.

Since the OSM task statements tend to be fairly general, they provide a
high-level summary for cross-referencing MDC, LCOM, and LSA items across data
systems. It is anticipated that many MDC, LCOM, and LSA data elements will map
into each OSM task. Any mapping of MDC, LCOM, and LSA data to OSM task
statements will have to be accomplished by a methodology yet to be developed.

The most obvious methodology is to have subject matter experts manually
provide this matching or mapping among the elements in all systems. As this
process is potentially very time consuming and costly, it is suggested that
data element descriptions in each system be tentatively matched using an
automated approach. Based on some recent related research, perhaps as much as
60 percent of the effort can be accomplished with automated support. The
remaining 40 percent of the effort will consist of subject matter specialist
evaluation and correction of the automated matching and completion of the
mapping which requires substantive expertise. One method of obtaining a
preliminary mapping is to require, for all future occupational surveys, that
inventory development specialists obtain subject matter expert input about MDC
actions taken as they relate to tasks during inventory development interviews.
This requirement should be expected to improve task list consistency and
coverage, as well as provide the initial and crucial step in interfacing the
systems.
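
What such automated support might look like is sketched below. The task
statements, the tokenization, and the acceptance threshold are all invented
for illustration; this report prescribes no particular matching algorithm.

    # Minimal sketch of automated tentative matching by token overlap.
    # Statements and threshold are invented for illustration.
    def tokens(statement):
        return set(statement.lower().replace(",", "").split())

    def jaccard(a, b):
        """Share of tokens the two statements have in common."""
        return len(a & b) / len(a | b)

    osm_tasks = ["Install jet engine starters", "Adjust fuel controls"]
    mdc_actions = ["REMOVE AND INSTALL STARTER, JET ENGINE",
                   "ADJUST FUEL CONTROL UNIT"]

    THRESHOLD = 0.25  # candidates above this go to specialists for review
    for t in osm_tasks:
        for a in mdc_actions:
            s = jaccard(tokens(t), tokens(a))
            if s >= THRESHOLD:
                print(f"tentative match ({s:.2f}): {t!r} <-> {a!r}")

Note that "starter" and "starters" do not match under this naive tokenization;
stemming or a verb/noun synonym table would be an obvious refinement, which is
one reason specialist review of the automated output remains essential.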

The first step in the process would be a cross-matching and mapping of
task statements in occupational analysis inventories with each other. This
process would raise the OSM tasks above the AFSC level, allowing the selection
of tasks from across AFSCs by equipment or by system and subsystem,
corresponding at least to the MDS and the first two levels of the work unit
codes in MDC, LCOM, and LSA. This action creates a flexibility that permits
MPT users to access data at whatever level they desire, including the AFSC
level. Then, for whatever level is chosen, the user could generate a variety
of tasks, combat or peacetime, for example, that could be aggregated on a wide
variety of criteria, such as task difficulty, percentage performing equipment,
mission criticality, crew requirements, training emphasis or training standard
requirements, and aptitude requirements. "What if" questions could be
answered, such as "What if a certain set of tasks were aligned to be performed
along with another set of tasks?" Would aptitude requirements change, or would
job difficulty increase or decrease? The only limit to the analysis potential
is imagination.
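
A minimal sketch of this kind of multilevel access follows. The records, field
names, WUC values, and difficulty numbers are hypothetical; an operational
TIES would carry many more linked elements per record.

    # Hypothetical linked TIES records; WUCs and difficulties are invented.
    records = [
        {"afsc": "426X2", "wuc": "23A1B", "task": "Install starter",     "difficulty": 5.1},
        {"afsc": "426X2", "wuc": "23A2C", "task": "Adjust fuel control", "difficulty": 6.3},
        {"afsc": "431X1", "wuc": "23B1A", "task": "Inspect tailpipe",    "difficulty": 3.8},
    ]

    def select(prefix):
        """All records whose WUC starts with the chosen system/subsystem prefix."""
        return [r for r in records if r["wuc"].startswith(prefix)]

    def mean_difficulty(rows):
        return sum(r["difficulty"] for r in rows) / len(rows)

    subsystem = select("23A")  # subsystem level, cutting across AFSCs
    print(len(subsystem), round(mean_difficulty(subsystem), 2))
    system = select("23")      # broader system level
    print(len(system), round(mean_difficulty(system), 2))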

The format of a TIES is conceptualized as resembling the product of the
Specialty Training Standard--OSM data matching produced in occupational
analysis reports (see Figure 8). In the OSM matching, the Specialty Training
Standard element is more general than the OSM tasks; the element, as a result,
is the basis for the matching of the tasks. In a TIES, OSM tasks are usually
more general than any of the actions taken in MDC, LCOM, or LSA. Because of
their more general nature, OSM tasks would be the basis for the matching of
the other task data.

Questions to be answered involve LCOM and ISD tasks. LCOM tasks, because
action taken codes in MDC are aggregated in the DPSS, are more general than MDC
tasks. This situation suggests that LCOM tasks would precede a listing of the
appropriate MDC actions taken at the component level. Also, in LCOM data
preparation, often only the first three digits of the WUC are used, the last
two digits reflecting component-level maintenance being omitted. A second
issue is the placement of ISD tasks. They range from a consolidation of OSM
tasks, making them very general in nature, to the MDC actions taken at the
component level.

These issues of format will need resolving. Nevertheless, the general
guideline seems to be the arrangement of the tasks (or actions taken) in a
hierarchical structure, aligned from the most general to the most specific.

Since the individual task data systems address the pressing needs of their
various users, what is the value of a TIES? TIES would integrate the separate
systems into a full system capable of accommodating and summarizing very
detailed information into more global descriptions and, when necessary,
identifying the specific sources contributing to each global description.

XI. SUMMARY AND CONCLUSIONS

Four of the five task identification systems (OSM, MDC, LCOM, and LSA)
provide the MPT community with a variety of task data. These data vary in
their content, currency, ease of access, accuracy, and reliability. While
each system generally satisfies the uses for which it was designed, a
combination of the data from two or more of the data systems would increase
the decision data base for MPT issues.

Although tasks are identified in each of the systems at different levels
of specificity, there are data elements in each of the systems that would
permit development of a system for linking the data bases. There is sufficient
task identification data in the individual systems to permit as much as an
estimated 60 percent of the linking effort to be accomplished automatically.
The remaining 40 percent of the effort would involve subject matter specialists
to refine the product of the automatic linking and to map those data that
cannot be automatically linked.

Because of the differences in data design, it is recommended that matching
historical data be approached on a request basis. Beginning now, however, some
actions could be implemented that would facilitate the future linking of the
systems. These actions involve making occupational analysis task statements
more compatible with MDC task statements, achieving greater consistency of
background information section items in occupational surveys, using the AFSC
on the AFTO 349 in MDC, and exercising greater discipline in the completion of
LSA data records to provide more timely and accurate data, particularly work
unit codes. In addition, collaboration of OSM and LCOM manpower specialists on
reconciling task performance and time issues would be an immediate step toward
increasing OSM usefulness for LCOM modelling.

The LSA is an excellent source of task analysis information for use in
technical publication development and in designing training. Unfortunately,
some trainers report that the data are completed too late in the life cycle to
be useful for developing the training materials to be used at implementation of
a new weapons system. Also, accessibility of the data has been a problem,
although the development of the Unified Data Base (UDB) promises to relieve the
accessibility issue. In this regard, the 3306 TES is presently embarked on a
trial use of the UDB for ISD in concert with the AFHRL. Preliminary
indications are that the data handling capabilities of the UDB can contribute
greatly to ISD for new systems as practiced by the 3306 TES. Even so, the
LSAR must still be bought, and made available earlier than it usually is, for
the UDB (or any other data base management system) to be useful in ISD for
emerging weapons systems (or in any other MPT application of LSAR data).

While a linking of the task data systems can be expected to create a more
comprehensive data base for MPT purposes, there will yet be missing information
vital to these purposes. None of the systems provides for determining the
similarity of the knowledges and skills requirements among tasks. This
information is crucial to personnel and training activities, especially those
like RIVET WORKFORCE (Boyle, 1985) and SUMMA (Boyle, 1986) in which structuring
jobs into specialties is the issue.

A TIES is feasible and should be developed. It is a system that could be
consistent with the FOOTPRINT and HARDMAN initiatives. It is also suggested
that the system be based on the occupational analysis tasks as the highest
level of linkage. Development should start with the development of a
methodology which would result in raising the OSM data base above the AFSC
level and provide accessibility to these data, and the data with which they
are linked, from a variety of perspectives: e.g., weapons system, subsystem,
AFSC, actions taken. Linking of MDC, LCOM, and LSA data with survey data would
proceed by automatic and manual processes.

XII. REFERENCES

AFMEA Logistics Composite Model Primer (1986). Randolph AFB, TX: Hq
AFMEA/MEXL.

Archer, W.B. (1966). Computation of group job descriptions from occupational
survey data. PRL-TR-66-12, Lackland AFB, TX: Personnel Research
Laboratory.

Armstrong, B., & Moore, S.C. (1980). Air Force manpower, personnel, and
training roles and interactions. R-2429-AF, Santa Monica, CA: The Rand
Corporation.

Bell, J.M., & Thomasson, M.C. (1984). Special report, Job Categorization
Project. (Available from the U.S. Air Force Occupational Measurement
Center, Randolph AFB, TX.)

Boyle, E. (1986, August). Small Unit Manpower Analysis (SUMMA). Proceedings
of the Society of Logistics Engineers, 21st Annual Symposium, Baltimore,
MD.

Boyle, E., Goralski, S., & Meyer, M. (1985). The aircraft maintenance
workforce now and in the 21st century. J. of Log., 4-6.

Christal, R.E. (1985, May). Keynote address at the Fifth International
Occupational Analysis Workshop. Randolph AFB, TX.

Christal, R.E. (1971, August). Stability of consolidated job descriptions
based on task inventory survey information. AFHRL-TR-71-48, Brooks AFB,
TX: Air Force Human Resources Laboratory.

Christal, R.E. (1969). Comments by the chairman. In Proceedings of Division
19, Division of Military Psychology Symposium: Collecting, analyzing, and
reporting information describing jobs and occupations. 77th Annual
Convention of the American Psychological Association.

Cronk, R. (1986, May). Logistic Support Analysis--Logistic Composite Model
conversion. Briefing at the Logistics Composite Model Steering Committee
Meeting, Randolph AFB, TX.

DPSS Working Paper (1986, May). From the LCOM Steering Committee Meeting,
Randolph AFB, TX.

Driskill, W.E., & Bowers, F. (1978). The stability over time of Air Force
enlisted career ladders as observed in occupational survey reports.
Proceedings of the 20th Annual Conference of the Military Testing
Association, Oklahoma City, OK.

Driskill, W.E., Mitchell, J.L., & Ballentine, R.D. (1985). Using job
performance as a criterion for evaluating training effectiveness. Draft
technical report prepared under contract number F33615-83-C-0030, Brooks
AFB, TX: Air Force Human Resources Laboratory.

Driskill, W.E., & Gentner, F.C. (1978). Four fundamental criteria for
describing the tasks of an occupational specialty. Proceedings of the
20th Annual Conference of the Military Testing Association, Oklahoma
City, OK.

Driskill, W.E., & Mitchell, J.L. (1979). Variance within occupational fields:
Job analysis vs. occupational analysis. Proceedings of the 21st Annual
Conference of the Military Testing Association.

Driskill, W.E., Keeth, J.B., & Gentner, F.C. (1981). Work characteristics: A
task-based, benchmark approach. In Proceedings of the 23rd Annual
Conference of the Military Testing Association, Arlington, VA.

Dunigan, J.M., Dickey, G.E., Borst, M.B., Navin, D., Parham, D.P., Weimer,
R.E., & Miller, T.M. (1985). Combat maintenance capability: Executive
summary. AFHRL-TR-85-35, Brooks AFB, TX: Air Force Human Resources
Laboratory.

Ellingsworth, M.E., Olivier, L.F., & Pfeiffer, G.J. (1985). Occupational
research data bank. Paper presented at the 26th Annual Conference of the
Military Testing Association.

Falle, I., & Knight, J.R. (1983). The assessment of common skills among
similar career ladders. In Proceedings of the 25th Annual Conference of
the Military Testing Association, Gulf Shores, AL.

Fleishman, E.A., & Quaintance, M.K. (1984). Taxonomies of human performance.
Orlando, FL: Academic Press.

Foley, J.P. (1980). Occupational analysis technology: Expanded role in
development of cost-effective maintenance systems. AFHRL-TR-80-39,
Brooks AFB, TX: Air Force Human Resources Laboratory.

Garcia, S.K., Ruck, H.W., & Weeks, J. (1985). Benchmarking learning difficulty
technology: Feasibility of operational implementation. AFHRL-TR-85-33,
Brooks AFB, TX: Air Force Human Resources Laboratory.

Garcia, S.K. (1984). Validation of relative-time-spent rating scales.
AFHRL-TR-84-11, Brooks AFB, TX: Air Force Human Resources Laboratory.

Goldstein, I.L. (1978). The pursuit of validity in evaluation of training
programs. Human Factors, 20(2), 131-144.

Harvey, R.J. (1986). Quantitative approaches to job classification: A review
and critique. Pers. Psych., 39, 267-289.

Horton, T. (1985, May). Using cognitive task analysis to examine
troubleshooting tasks. Proceedings of the 5th International Occupational
Measurement Workshop. Randolph AFB, TX: USAF Occupational Measurement
Center.

Maintenance and Operational Data Access System (MODAS), undated. (Available
from Hq Air Force Logistics Command, AFLC/MME-2, Wright-Patterson AFB,
OH.)

Mayer, F.A., & York, M.L. (1974). Simulating maintenance manning for new
weapon system development. AFHRL-TR-74-97(I), Wright-Patterson AFB, OH:
Air Force Human Resources Laboratory.

McFarland, B.P. (1974). Potential uses of occupational analysis data by Air
Force management engineering teams. AFHRL-TR-74-54, Lackland AFB, TX:
Air Force Human Resources Laboratory.

Meister, D. (1976). Behavioral foundations of system development. New York:
Wiley.

Monson, L., Wagner, M., & Eisenberg, M. (1985). Training requirements
determination: Are occupational survey data adequate? Presentation at
the 93rd Annual Convention of the American Psychological Association, Los
Angeles, CA.

Moore, S.C., & Boyle, E. (1986). Exploratory analysis of aircraft specialty
restructure. AFHRL-TR-86-XX, Brooks AFB, TX: Air Force Human Resources
Laboratory (in press).

Mulligan, J.F., & Bird, J.B. (1980). Guidance for maintenance task
identification and analysis: Organizational and intermediate maintenance.
AFHRL-TR-80-21, Brooks AFB, TX.

Page, R.C. (1976). Concurrent validation of police promotional examinations:
An interjurisdictional study (Technical Report Number 18). Minneapolis:
Minnesota Department of Personnel.

Quick Reference Guide to Maintenance Data Collection (MDC), undated.
(Available from Hq Air Force Logistics Command, AFLC/MME-2,
Wright-Patterson AFB, OH.)

Reed, L.E. (1967). Advances in the use of computers for handling human factors
data. AMRL-TR-67-16, Wright-Patterson AFB, OH: Aerospace Medical Research
Laboratories.

Rose, A.M., Czarnolewski, M.Y., Gragg, F.H., Austin, S.H., Ford, P., Doyle, J.,
& Hagman, J. (1984). Acquisition and retention of soldiering skills.
Report prepared for the U.S. Army under contract MDA 903-81-C-A01.

Ruck, H., & Boyle, E. (1985). Task identification and evaluation system: A
proposal for AFHRL interdivisional research. Unpublished manuscript.

Tartell, J. Security police career field airbase ground defense tactics.
AFPT 90-811-137 and 90-812-138. Randolph AFB, TX: U.S. Air Force
Occupational Measurement Center.

XIII. BIBLIOGRAPHY

Handbook for Designers of Instructional Systems (Volumes 1-5). Air Force
Pamphlet 50-58.

Maintenance Data Collection System. Air Force Technical Order 00-20-2.

Maintenance Data Collection System (MDC) User's Manual. Air Force Manual
66-267.

Maintenance Management Policy. Air Force Regulation 66-1.

Logistics Composite Model (LCOM). Air Force Regulation 25-8.

Occupational Analysis Program. Air Training Command Regulation 52-22.

Occupational Analysis. Air Force Regulation 35-2.

Policy and Guidance for Instructional Systems Development. Air Force
Regulation 50-8.

Airman Classification Regulation. Air Force Regulation 39-1.

Logistics Support Analysis. MIL-STD-1388-1A.

DOD Requirements for a Logistic Support Analysis Record. MIL-STD-1388-2A.

THE APPENDIX

DESCRIPTIONS OF ASCII CODAP PROGRAMS

Descriptions of ASCII CODAP Programs

DICTXX: Lists a description for each computed or background variable
currently available for use in the analysis of an AFSC. Background
variables consist of information collected directly from the job
incumbents and/or raters. Computed variables are created by
interacting the background variables and/or task information.

GROUP: Interprets the information contained in a similarity matrix created
by the OVRLAP program and produces a clustering solution. This
solution may be used by other programs in the system to control the
order in which the cases and tasks are reported.

OVRLAP: Creates a similarity matrix for either cases or tasks that can be
used by the GROUP program to group together individuals into a
totally nested hierarchical clustering solution. (A sketch
illustrating this kind of computation follows these program
descriptions.)

PRTJOB: Prints a job description for any desired group of job incumbents.
The job description is an ordered list of task or duty statements.
These descriptions are referred to as either a task or duty job
description, respectively. The percentage of the members in the
group who perform each task or duty, and the average percentage of an
incumbent's time that is spent performing the task or duty, are
listed with each statement.

PRTMOD: Reports task-level data along with summary values for identified sets
of tasks called modules.

TASKXX: Lists the tasks contained in the job inventory for the AFSC
specified. The task statements are grouped in the listing into
modules or duties.
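
As a hedged illustration of the kind of computation the OVRLAP and GROUP pair
performs, the sketch below builds a case-by-case similarity matrix from
percent-time-spent profiles and merges the most similar pair first. The actual
ASCII CODAP algorithms are not documented in this report; the overlap measure
and the data shown are assumptions for illustration only.

    # Hedged sketch of an OVRLAP/GROUP-style computation. The actual
    # ASCII CODAP algorithms are not documented here; data are invented.
    cases = {  # incumbent -> percent time spent on each task
        "a": [50, 30, 20, 0],
        "b": [45, 35, 15, 5],
        "c": [5, 10, 40, 45],
    }

    def overlap(p, q):
        """Shared percent time: sum of the minimum time on each task."""
        return sum(min(x, y) for x, y in zip(p, q))

    names = list(cases)
    matrix = {(i, j): overlap(cases[i], cases[j])
              for i in names for j in names if i < j}

    # Agglomerative grouping would merge the most similar pair first.
    (i, j), sim = max(matrix.items(), key=lambda kv: kv[1])
    print(f"most similar pair: {i}, {j} (overlap {sim}%)")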
