AIVR2019 Bellarbi


Towards Method Time Measurement Identification

Using Virtual Reality and Gesture Recognition


Abdelkader Bellarbi, Jean-Pierre Jessel, Laurent da Dalto

To cite this version:


Abdelkader Bellarbi, Jean-Pierre Jessel, Laurent da Dalto. Towards Method Time Measurement
Identification Using Virtual Reality and Gesture Recognition. IEEE International Conference on
Artificial Intelligence & Virtual Reality (AIVR 2019), Dec 2019, San Diego, California, United States.
⟨hal-02413200⟩

HAL Id: hal-02413200


https://hal.science/hal-02413200
Submitted on 16 Dec 2019

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Towards Method Time Measurement Identification
Using Virtual Reality and Gesture Recognition
Abdelkader Bellarbi¹, Jean-Pierre Jessel², Laurent Da Dalto³
¹,² IRIT, University of Toulouse, France
³ Mimbus, France
¹ abdelkader.bellarbi@irit.fr, ² jean-pierre.jessel@irit.fr, ³ laurent.dadalto@mimbus.com

Abstract—Methods-Time Measurement (MTM) is a predetermined motion time system used primarily in industrial settings to analyze the methods used to perform any manual operation. In this paper, we introduce a system for the automatic generation of MTM codes using only 3D tracking of the head and both hands. Our approach relies on the division of gestures into small elementary movements; a decision tree then aggregates these elementary movements in order to generate the realized MTM code.

The proposed system does not need any pre-learning step, and it can be useful both in virtual environments, to train technicians, and in real industrial workshops, to assist experts with MTM code identification. The obtained results are satisfying and promising. This work is in progress, and we plan to improve it in the near future.

Keywords—Methods-Time Measurement (MTM), Gesture Recognition, 3D Tracking, Virtual Reality.

I. INTRODUCTION

Designing a production line involves several steps, including the development of operating procedures. Here, the designer relies mainly on the temporal aspect: the objective is to ensure that the duration of the tasks assigned to each operator does not exceed the time defined by the production objectives. A second aspect that is important to take into account is the ergonomics of the work situation, in order to ensure operator comfort and to avoid health problems due to poor working conditions.

In this sense, work analysis methods have been in use in the industrialized countries on a larger scale since the 1930s [1]. Predetermined time systems make it possible to assign, a priori, durations to elementary actions such as reaching, grasping, placing, pressing, walking, etc. The times attributed to these actions vary notably according to the distance of the movement, the difficulty of carrying out the action, and the effort exerted by the operator.

Among the most famous systems of this kind is Methods-Time Measurement (MTM), developed by Maynard in the United States [2]. This method serves as an instrument for the formal modelling, analysis, planning and design of work systems. It analyzes the methods used to perform any manual task and sets the standard time in which a worker should complete that task.

MTM comprises various methods, such as MTM-1, MTM-2, MTM-UAS and MTM-MEK, that may be utilized depending on the characteristics of the underlying manual work process [3]. For the production of medium to large product series (characterized by, e.g., moderate cycle times and several repetitions), the MTM-UAS method may be applied, as it requires a relatively low effort for the process analysis. This is achieved by combining numerous different motions into relatively long single motion sequences (e.g. reaching, grasping, positioning and releasing an object).

MTM-UAS offers the possibility to identify the weaknesses of work processes in a more direct and concise way. In addition, the use of standard basic movements makes it possible to automatically detect different motion sequences by identifying the typical characteristics of each sequence of movements. Therefore, MTM-UAS serves as a basis for performing time studies automatically by identifying time data and process modules.

However, the observation of technicians and the generation of MTM codes are still done manually by experts. Most existing software only facilitates recording, management and report generation.

In this paper, we propose to automate this practice using a motion capture system that allows us to analyze the movements of a technician during a task or a simulation, in order to automatically generate the MTM-UAS codes.

This paper is organized as follows. In Section II, we present a brief state of the art of existing MTM analysis systems. In Section III, we describe our proposed approach. The system implementation and some results are presented in Section IV. Finally, conclusions are drawn and future plans are discussed.

II. RELATED WORKS

Since the works of Frank Bunker Gilbreth and then Asa Bertrand Segur in the 1920s, numerous time measurement systems have appeared, such as MTS (Motion Time Survey, 1928), WF (Work Factor, 1936), MTM (Methods-Time Measurement, 1948 for the initial MTM-1 version, 1965 for the second version MTM-2, and from 1975 to 1984 for the MTM-UAS, MTM-MEK and MTM-SAM versions), MODAPTS (Modular Arrangement of Predetermined Time Standards, 1966) and MOST (Maynard Operation Sequence Technique, 1980) [4]. These different versions appeared in order to adapt to changes in production methods.

From the 1990s, with the development of computer science, several software tools were developed to help MTM experts. These tools are complementary aids that let experts record workers' motions, input the observed MTM codes, and process these data in order to improve times and working conditions and to avoid any potential risk for the operators, by taking into consideration the cycle time and the physical load associated with the planned operating modes. Thus, the Motion Analysis and Index Derivation (MAID) software was developed by Kuhn and Laurig [5] to extract from the MTM codes percentages of targeted



cycle times, using the hands, arms and trunk, but also effort, visual control and precision. The EMMA software [6] aimed to minimize the cycle time while maximizing criteria such as the proportion of operations performed in the field of view (no head movement) and the proportion of operations performed using both hands. For MTM time studies, it suffices to observe one cycle of work content. However, performing the analysis takes up to 200 times the observation time, depending on the level of the MTM method. The duration of the analysis also depends on the experience of the employee and requires extensive training in MTM.

In addition, ergonomic evaluators are integrated into this kind of software. Thus, in 1993 appeared ErgoMOST, which combines a predetermined time analysis with an ergonomic evaluator that takes into account risk factors such as strength, posture and vibration. ErgoMOST provides risk assessments for the neck, upper limbs, lower limbs and back [7].

However, these software tools do not allow automatic generation of MTM codes; experts still have to observe and input MTM codes manually. The use of computer vision-based approaches for gesture recognition might be useful for the automatic generation of MTM codes [8, 9, 10].

Recently, Benter and Kuhlang [11] proposed an approach to detect body motions in accordance with MTM-1 using motion capture data from 3D cameras. In [12], the authors proposed an automatic generation of the MTM-1 code from motion capture data using a convolutional neural network.

Both approaches seem interesting, in the sense that they are based on gesture recognition in order to generate the MTM code. However, they require full visibility of the operator by the motion capture system, which is often not possible: objects may obscure the line of sight between the camera and the worker's body, so the system either misses the joints or computes faulty coordinates. Furthermore, even with an unobstructed view, such systems do not always interpret the 3D data correctly.

In this paper, we propose an approach for the automatic generation of MTM-UAS codes by tracking only the head and the two hands. Thus, the system can be used in real situations at workstations as well as in virtual reality simulations. In addition, the proposed technique does not require any training or learning step. In the following, we describe our proposed system.

III. PROPOSED APPROACH

A. Setup Design

In order to reach the stated aim, we first designed a virtual environment simulating the technician's workstation, built on a virtual reality platform (HTC Vive). Replicating the real workplaces allows us to adjust the system parameters and to represent various woodworking tasks. In order to maintain the naturalness of the replicated movements, we designed the equipment, tools, materials and environment to be as close as possible to the actual workplaces (Figure 1).

Fig. 1. The designed 3D environment to simulate a carpentry workshop

As a result, the replicas are equipped with professional tools (saw, hammer, drill …). The goal is to cover all basic movements according to MTM-UAS and their corresponding forms.

B. Motion Analysis and MTM-UAS Codes Recognition

MTM-UAS is based on seven basic movements, which can be grouped into body movements (such as walk and kneel), hand movements (such as operate and place) and eye movements (such as visual control). Each movement is defined by a symbol. Table I illustrates these seven basic movements.

TABLE I. DESCRIPTION OF MTM-UAS BASIC MOVEMENTS

Basic movements     MTM-UAS movement    Symbol (first character)
Body movements      Body Motion         K
Hand movements      Get and Place       A
                    Handle              H
                    Place               P
                    Cycle Motion        Z
                    Operate             B
Eye movement        Visual Control      V

These symbols (characters) are followed by further characters and numbers according to details such as the precision of the action, the mass of the handled object, or the distance traveled by the hand. Hence, according to these details, each MTM-UAS code has a predetermined time, expressed in TMU (time measurement units), where 1 TMU = 36 milliseconds and 1 hour = 100,000 TMU.

In total there are 29 MTM-UAS codes. Figure 2 illustrates the different MTM-UAS codes with their predetermined times, as defined by the International MTM Association [13].

Fig. 2. MTM-UAS codes [12]
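To make the code-and-time bookkeeping above concrete, the sketch below shows one way to store predetermined times and convert TMU to seconds. It is written in Python purely for illustration (the system described later in this paper is implemented in C#/Unity), and the per-code TMU values are placeholders, not the official MTM-UAS table:

```python
# 1 TMU = 36 ms, hence 1 hour = 100,000 TMU (see the conversion above).
TMU_IN_SECONDS = 0.036

# Hypothetical subset of the 29 MTM-UAS codes. The TMU values here are
# placeholders; the real figures come from the MTM-UAS data card.
PREDETERMINED_TMU = {
    "AA1": 20,  # get and place, approximate, short distance (placeholder)
    "KA": 25,   # body motion: walk (placeholder)
    "VA": 9,    # visual control (placeholder)
}

def duration_seconds(code: str) -> float:
    """Predetermined time of one code, converted from TMU to seconds."""
    return PREDETERMINED_TMU[code] * TMU_IN_SECONDS

def cycle_time_seconds(codes: list[str]) -> float:
    """Total predetermined time of a recognized sequence of codes."""
    return sum(duration_seconds(c) for c in codes)
```

With such a table, summing the durations of the generated codes yields the standard time of the observed work cycle.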
[Figure 3: decision tree of the proposed approach. Elementary movements (3D positions X, Y, Z of the head and both hands over time t) are routed by their origin into eye movement (Visual Control, VA), body movements (Walk, KA; Bend, Kneel, Arise, KB; Sit and Stand, KC) and hand movements (Handle Tool, Place, …), which are further subdivided by precision (approximate, loose, tight) into leaf codes such as HA1–HA3, HB1–HB3 and HC1–HC3.]

Fig. 3. Description of the proposed approach
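As a purely illustrative sketch of the traversal that Figure 3 depicts, the fragment below routes one elementary movement to a leaf code. It is in Python rather than the authors' C#, and the feature names and thresholds are assumptions, not values from the paper:

```python
from dataclasses import dataclass

@dataclass
class Elementary:
    """One short slice of head/hand tracking data (assumed feature set)."""
    head_speed: float   # m/s, derived from successive HMD positions
    hand_speed: float   # m/s, derived from successive controller positions
    hand_dist: float    # metres travelled by the active hand in the slice
    decelerating: bool  # slow-down near the end suggests a precise action

def classify(m: Elementary) -> str:
    """Descend from the root: first the origin of the motion (body, eyes,
    hands), then distance class and precision, down to a leaf code."""
    if m.head_speed > 0.8 and m.hand_speed < 0.2:
        return "KA"   # whole-body motion: walking
    if m.hand_speed < 0.05 and m.head_speed < 0.1:
        return "VA"   # hands at rest, head steady: visual control
    # Hand movement: distance class 1-3, then approximate vs tight placement.
    dist_class = 1 if m.hand_dist <= 0.2 else 2 if m.hand_dist <= 0.5 else 3
    family = "HC" if m.decelerating else "HA"
    return f"{family}{dist_class}"
```

The paper's tree additionally stores 3D rotation and timing at each internal node and rejects slices that match no branch; the sketch keeps only a few scalar features.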

In order to allow the recognition of all these codes, we propose a decision-tree-based approach. We created a tree by grouping the motions according to their origin (body, hands and eyes), then generated from each branch further detailed movements according to the MTM-UAS parameters shown in Figure 2.

The leaf nodes of this tree represent the MTM-UAS codes. Each internal node of the tree contains specific data about the two hands and the head, namely 3D position, 3D rotation and time.

We then implemented an algorithm that, at each time t, captures the data of both hands and the head and extracts from each a small movement sequence (an elementary movement), then compares these to the data in the tree in order to decide which branch to follow, until a leaf node (an MTM-UAS code) is reached. Figure 3 gives a description of this approach. The computation of elementary movements helps to estimate the acceleration and deceleration of the hand motion, which is used to detect the precision of the movement. After a series of tests, we fixed the sequence length at 5 TMU.

Once a correct MTM code is detected, the algorithm restarts from the root node in order to detect the next MTM-UAS code.

IV. IMPLEMENTATION, TEST AND DISCUSSION

We implemented our proposed system in Visual Studio C# 2017 and Unity3D 2018.3.3, running on a Windows PC with an Intel Xeon 4410. We used an HTC VIVE Pro for virtual environment visualization and interaction, as well as for motion capture (the two Vive controllers for the hands and the HMD for the head). For real cases, VIVE Trackers or reflective markers with infrared cameras can be used for head and hands tracking.

In order to test and evaluate our prototype, we proposed the following woodworking scenario: the worker gets a piece of wood from his left (mass: 4 kg, distance: 60 cm) and places it on the table in front of him. Then he uses the saw on the shelf in front of him to saw the piece of wood (~10 times). He gets the electric drill in front of him and drills the piece of wood (Figure 4). Then he controls the drilled wood. Finally, he gets that piece of wood and walks to the table on his right to place it.

Fig. 4. Example of a worker performing some actions of the scenario.

We tested this scenario with 10 participants and generated the MTM-UAS codes using the proposed technique. Table II shows the results obtained for five of the ten participants (false detections are marked with an asterisk), together with the codes given by an expert (ground truth). For the other five participants, the system detected their motions correctly and generated exactly their MTM-UAS codes.

TABLE II. RESULTS OF MTM CODES DETECTION

Movement description                    Ground.    Part. 1    Part. 2    Part. 4    Part. 7    Part. 8
Get and place the piece of wood         AH3        AH3        AH2*       AH3        AH3        AH3
Get the saw and place it on the wood    HB2        HB2        HB2        AC2*       HB2        HB2
Cut the wood (10 times)                 ZB1 (x10)  ZB1 (x10)  ZB1 (x10)  ZB2* (x10) ZB1 (x10)  ZB2* (x10)
Get the drill and place it on the wood  HC2        HC2        HC2        HC2        HC2        HA2*
Drill the wood (Operate)                BA1+PT     BA1+PT     BA1+PT     BA1+PT     BA1+PT     BA1+PT
Get the drilled wood                    AH2        AH2        AH2        AH2        AH2        AH2
Control it                              VA         null       VA         VA         null       VA
Walk to the table                       KA         KA         KA         KA         KA         KA
Place the wood                          PA2        PA2        PA2        PA2        PA2        PA2

We note that some actions cannot be detected by this algorithm, namely "Operate" and "Processing Time (PT)". Thus, we implemented them as events in the VR simulation. In real cases, a physical event trigger should be added to detect these actions.

Nevertheless, the proposed algorithm detected and generated correctly up to 94% of the codes, not counting the Operate and Processing Time codes. It produced false detections only 5 times among the 80 motions performed by the 10 participants.
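The reported rate can be cross-checked with a quick calculation: each of the 10 participants performed 8 codable motions (the 9 scenario steps minus the Operate/Processing Time step), giving 80 motions with 5 false detections:

```python
participants = 10
codable_motions_each = 8          # 9 scenario steps minus Operate/PT
total_motions = participants * codable_motions_each

false_detections = 5              # false detections reported in Table II

accuracy = (total_motions - false_detections) / total_motions
print(f"{accuracy:.1%}")          # 75/80 = 93.75%, i.e. the ~94% reported
```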
The weak point of this technique is that it rejects, or falsely detects, an action if the worker makes another gesture (e.g. scratching, adjusting glasses) while performing an MTM motion.

V. CONCLUSION

In this paper, we introduced an approach for the automatic generation of MTM-UAS codes. This technique is based only on the head and hand positions and does not require any learning step. In addition, the proposed system can be used in real cases as well as in virtual simulations for technician training.

An evaluation of this system, with participants using a virtual simulation of a carpentry workshop, led to good results in terms of MTM-UAS code recognition, which were validated by MTM experts. Thus, this system might be useful in industrial workshops to assist experts with MTM code identification and to reduce the processing time.

In the near future, we will improve the recognition algorithm and perform tests in real cases. We also plan to extend this work to include the study of ergonomic and health factors.

ACKNOWLEDGMENT

This work is part of the VILCANIZER project, funded by the Occitanie Region in France.

The authors would like to thank M. Alain Extramiana, CEO of Amber Innovation and member of the France MTM Association, for his help as an MTM expert.

REFERENCES

[1] Laring, J., Forsman, M., Kadefors, R., et al. "MTM-based ergonomic workload analysis." International Journal of Industrial Ergonomics, vol. 30, no. 3, 2002, pp. 135-148.
[2] Maynard, H. B., Stegemerten, G. J., and Schwab, J. L. Methods-Time Measurement. 1948.
[3] Karger, D. W., and Bayha, F. H. Engineered Work Measurement: The Principles, Techniques, and Data of Methods-Time Measurement. Industrial Press Inc., 1987.
[4] Sumanth, D. J. Productivity Engineering and Management: Productivity Measurement, Evaluation, Planning, and Improvement in Manufacturing and Service Organizations. McGraw-Hill, New York, 1984.
[5] Kuhn, F. M., and Laurig, W. "Computer-aided workload analysis using MTM." In Computer-Aided Ergonomics: A Researcher's Guide. Taylor and Francis, London, 1990.
[6] Braun, W. J., Rebollar, R., and Schiller, E. F. "Computer aided planning and design of manual assembly systems." International Journal of Production Research, vol. 34, no. 8, 1996, pp. 2317-2333.
[7] Zandin, K. B., et al. "ErgoMOST: An engineer's tool for measuring ergonomic stress." Occupational Safety and Health, New York, vol. 27, 1996, pp. 417-430.
[8] Chen, C., Jafari, R., and Kehtarnavaz, N. "A survey of depth and inertial sensor fusion for human action recognition." Multimedia Tools and Applications, vol. 76, no. 3, 2017, pp. 4405-4425.
[9] Bellarbi, A., Benbelkacem, S., Zenati-Henda, N., et al. "Hand gesture interaction using color-based method for tabletop interfaces." In 2011 IEEE 7th International Symposium on Intelligent Signal Processing. IEEE, 2011, pp. 1-6.
[10] Bellarbi, A. Toward Mobile Immersion in Augmented Reality: An Approach Based on Robust Natural Feature Tracking and 3D Interaction. Doctoral thesis, University of Évry Val d'Essonne, France, 2017.
[11] Benter, M., and Kuhlang, P. "Analysing body motions using motion capture data." In International Conference on Applied Human Factors and Ergonomics. Springer, Cham, 2019.
[12] Burggräf, P., et al. "Automation configuration evaluation in adaptive assembly systems based on worker satisfaction and costs." In International Conference on Applied Human Factors and Ergonomics. Springer, Cham, 2019.
[13] International MTM Association: http://mtm-international.org/
