
Sigifredo Laengle

Practical
Business Process
Engineering
BETA EDITION 1.1

Publisher Sigifredo Laengle


Copyright © 2023 Sigifredo Laengle

Published by Sigifredo Laengle

tufte-latex.googlecode.com

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in com-
pliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/
LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the
License is distributed on an “as is” basis, without warranties or conditions of any kind, either
express or implied. See the License for the specific language governing permissions and limitations under
the License.

First printing, October 2023


Contents

1 The Real Time Enterprise 23


1.1 Summary 23
1.2 Advantages of a RTE and How to Implement it 23
1.3 The RTE and Coordination Mechanisms 31
1.4 Questions and Problems 31

2 Strategic Process Management 33


2.1 Introduction 33
2.2 Strategic Process Modelling 33
2.3 Initiatives Identification 45
2.4 Initiatives Evaluation 46
2.5 Initiatives Evaluation by Using Analytical Hierarchical Process 64

3 Modelling in BPMN and DMN 79


3.1 Introduction 79
3.2 Problems and Questions of BPMN 79
3.3 Decision Modelling and Notation Correctness 108
3.4 Decision Modelling and Implementation 114

4 Analysis and Optimisation 121


4.1 Stochastic Simulation of BPMN Models 121
4.2 Standardise Information, Automate, and Plan Capacity 139

5 Process to Applications 153


5.1 Introduction 153
5.2 Workflow Implementation in Workflow Signavio 153

6 Process Mining 163


6.1 Introduction 163
6.2 Process Mining Concepts 163
6.3 Technical Appendix 173
6.4 Process Mining Software 174

Index 181
List of Figures

1.1 Pit stop for the Scuderia Ferrari during a Formula 1 race. 23
1.2 The McLaren team’s telemetry unit at the Formula 1. 24
1.3 The process innovation cycle. 26
1.4 Schematic diagram of real-time procedure adopted by Spevi. 28
1.5 Part of the manually completed event log file. 29
1.6 As-is model of current process generated by ProM from event log
and exported to Yasper. 29
1.7 To-be model of process modelled and optimised in Yasper for export
to Yawl. 30

2.1 Solution Prob. 2.1. The objective tree is subdivided into four objective
groups (in this example we have only two). The objectives are written in a
succinct way, with few words and with a verb. In this example, the objectives
found belong to the first two objective groups. 33
2.2 Solution Prob. 2.2. The output tree represents the main two strate-
gies of the Bank: to maintain the current outputs and to evolve to
electronic services. 34
2.3 Solution Prob. 2.2. A general objective tree model of an organisation
has four groups of objectives related by causality. The objectives found
belong to the first three levels. 35
2.4 Solution Prob. 2.2. The initiative allocation diagram model corresponds
to the two initiatives described in the text. 35
2.5 Solution Prob. 2.3. The objective tree shows new objects, namely four
objectives which do not exist in the original text. Thus, the model
is simplified and made more readable. 36
2.6 Solution Prob. 2.4. The objective tree differentiates causes and effects.
The diagram can be read top-down (from effects to causes) by an-
swering the question how or bottom-up (from causes to effects) by
answering the question why. 37
2.7 Solution Prob. 2.5. The objective tree of the company Codelco. This
is again an example of the objectives identification of a complete com-
pany, i. e. we try to identify four levels of objectives. In this case, we
only found objectives related to the process performance space. 38

2.8 Simplified representation of the processes of Soprole company. 38


2.9 Solution Prob. 2.6. The value added chain of the company Soprole
is obtained easily from Fig. 2.8. 38
2.10 Solution Prob. 2.6. The output tree of the company is obtained im-
mediately from Fig. 2.8. 39
2.11 The value added chain concept proposed and popularised by Michael Porter
in 1985. 39
2.12 Simplified representation of the process of the company Soprole with
objectives assigned to different levels. 41
2.13 Solution Prob. 2.8. The extended value added chain of the company
Soprole is obtained easily from Fig. 2.8 and from 2.12. 41
2.14 Solution Prob. 2.8. The objective tree of the company Soprole for the
Milk process is obtained from Fig. 2.8 and from 2.12. We added the
corresponding level to each objective. 42
2.15 Solution Prob. 2.8. The initiative allocation diagram of the company
Soprole is obtained from Fig. 2.8 and from 2.12. We selected three
objectives of level three associated to two different initiatives. 42
2.16 Solution Prob. 2.9. The left output tree represents a grouping asso-
ciated to the productive system and less linked to the customer. Al-
though the difference is not absolute, the right model is more asso-
ciated with the customer. 42
2.17 Simplified representation of the value chain of agricultural prod-
ucts in Africa (Source: https://africanharvesters.com). 43
2.18 Solution Prob. 2.9. The value added chain of Fig. 2.17 is easy to model.
One object of the figure was not considered a process. 43
2.19 Solution Prob. 2.11. Two output trees of the organisation Spevi ac-
cording to its web page www.spevi.cl. The left one is more oriented
to the production, while the right one is a proposal more customer
oriented. 44
2.20 Solutions of the problem Palace Hotel. maximax assumes that nature
plays to favour the decision maker. In maximin it plays against him. In
minimax regret nature plays to maximise his regret. 48
2.21 maximax assumes that nature plays in favour of the agent. The best
decision is III. 51
2.22 maximin assumes that nature plays against the agent. The best de-
cision is I. 52
2.23 maximax assumes that nature plays to favour the agent. According to
this criterion the best decision is IV. 53
2.24 Problem Rich and Co. maximin assumes that nature plays against the
agent. According to this criterion, the best strategy is III. 53
2.25 Problem Rich and Co. minimax regret assumes that nature plays
against the agent, maximising his regret. The best decision according to
this criterion is I. 54

2.26 The best decision according to the Expected Value (EV) is warehouse
management system with value $11.2. 57
2.27 Expected Opportunity Loss. The best decision is integrated mate-
rial requirement planning with value $1.2. 58
2.28 The decision tree and identification of the best decision under per-
fect information for calculating the Expected Value of Perfect Infor-
mation (EVPI) is 22.0 − 11.2 = $10.8. 58
2.29 Decision tree of the problem of the machine owner with the addi-
tional information of the consultant. The strategies are warehouse
management system in case that the consultant recommends con-
tract (C) and integrated material requirement planning in case that
the consultant recommends not contract (NC). 62
2.30 Banco las Américas is determined to improve university students’
service. The bank has identified four criteria university students value
and three projects to prioritise 65
2.31 Pair comparison matrix according to the criterion “improve account
visibility” (C1 ). An Excel sheet can be downloaded from the ahp.xls. 71
2.32 Pair comparison matrix and normalised matrix for our four criteria. 72
2.33 This step consists of computing the weights of each project ac-
cording to the corresponding weights of the projects for each crite-
rion and the weights of all criteria. 72
2.34 The last step consists of computing the consistency index. RI is
determined according to Table 2.23. In general, the degree of con-
sistency is satisfactory if CI/RI is less than 0.10, which is the case here. 73

3.1 This poster contains the objects and their definitions of the standard
BPMN 2.0 published by the OMG Group (https://www.omg.org/).
The poster was elaborated by Berliner BPM–Offensive in 2015. You
can access this poster in Spanish on the link poster-ES. 80
3.2 Hardware Retailer. This is the process Shipment Process of a Hard-
ware Retailer. We are trying to understand the behaviour of differ-
ent gateways in the process flow. 83
3.3 Order Process for Pizza. This example is about Business-To-Business-
Collaboration, because we want to model the interaction between
a pizza customer and the vendor explicitly. 85
3.4 Fulfilment and Procurement. This example tries to explain the be-
haviour of boundary events attached to an activity. Let us observe
that the exclusive gateway that opens both paths is not closed by an-
other gateway. We say that XOR-Split is mandatory (so the XOR-Join
gateway is optional). 86
3.5 Stock Level Management. This example tries to explain the behaviour
of boundary events attached to an activity. It is similar to the former
example. 87

3.6 Procurement. We now zoom into the global sub-process procurement


that is used by both order fulfilment and stock maintenance. 88
3.7 High Level Incident Management. This model describes a software
manufacturer, who is triggered by a customer requesting help from
her account manager because of a problem in the purchased prod-
uct. This is the current process. 89
3.8 Detailed Collaboration Incident Management. The model takes a
closer look at the interaction of account manager, support agents, and
software developer by switching from a single-pool-model to a col-
laboration diagram. 91
3.9 Whole Collaboration. This model describes the software manufac-
turer described in the last problem. It contains a detailed technical
description of a to-be process, i.e., an improved process. 92
3.10 Use of a Signal Start Event (Above Model). A signal start event
carries information sent without a specific addressee, whereas a message
has one. Weakly Structured Process (Below Model). These types
of processes are modelled by using ad-hoc sub-processes. They con-
tain tasks that can be executed in any order, several times, or not ex-
ecuted at all. 95
3.11 Modelling External Decision and Iterations. We recommend a start
event; the task receive confirmation can, in this case, be replaced by
an intermediate event. The task send reminder is repeated every 3
days, starting after 2 days. 96
3.12 Modelling While-Do and Repeat-Until Iterations. Both cases are
common iterations. The first one checks the condition first (while-
do), and the second one checks the condition at the end (repeat-until). 97
3.13 A Simple Iteration Model with a Terminate end Event. This model
describes the situation in parallel of a cycle activity and the end con-
dition. 97
3.14 Incorrect Optional Iteration Model. Compare this model with the
one of Fig. 3.13. This model also represents sequential iteration, but
without the 24-hour pause. 97
3.15 Optional Iteration Model. This model describes a typical situation
of an inclusive gateway use. The inclusive gateway that synchronises
both paths at the end is mandatory. 98
3.16 Modelling a Send Automatic Task. In this case we are modelling
the sending automatic task which is triggered by the catching timer
intermediate event called 6days. 98
3.17 Modelling a Boundary Event with Loop. The terminate end event
is absolutely necessary, because it kills the existing instances, thus
avoiding an infinite loop. 99
3.18 Sending 3,000 Invitations for the Nobel Prize. The iteration is modelled
by using the multiple instance activity, because the number of iter-
ations is known. 99

3.19 Collecting Completed Forms. The iteration is modelled by using the


cycle activity, because the number of iterations is not known. 100
3.20 User Task Without Normal End. Let us observe that the task is ex-
ecuted by a user: it is triggered somehow by a person and finishes
14 days after starting. It does not have a normal end. 100
3.21 Manufacturer and Customer. This model describes the collabora-
tion diagram between a manufacturer and its customer. It is possi-
ble to receive a cancellation message . The pool customer is option-
ally collapsed. 101
3.22 Manufacturer and Customer (Exclusive Event-based Gateway). This
model describes again the collaboration diagram between a manu-
facturer and its customer. Note the use of the exclusive event-based
gateway. 102
3.23 Manufacturer and Customer (Closing Exclusive Event-based Gate-
way). Note that the exclusive event-based gateway that opened the
three threads is closed by a standard exclusive gateway according to
the text. 102
3.24 Ad-hoc Sub-process. For modelling weakly structured processes.
An ad-hoc sub-process must contain at least one task, may contain
sequence flows, and must contain neither start nor end events. 103
3.25 Event-based Exclusive Gateway. The paths are controlled by three
external events. We added standard (plain) start and end events, because
we do not have information on how the process is triggered and how it con-
tinues. 104
3.26 From Manual to Automatic. We observe three situations, from man-
ual, through computer-supported, to fully automatic. 105
3.27 Event-based Exclusive Gateway. The above model is a fully automatic
process, except for the user task. We need an exclusive event-based
gateway, because the machine waits for an external decision, not an
internal one. We need an exclusive gateway for receiving the flow after
the reminder. The model below represents the situation that avoids
an infinite loop in case the customer does not answer. 106
3.28 Event-based Exclusive Gateway. This is a fully automatic process. We
need an exclusive event-based gateway, because the machine waits for
an external decision, not an internal one. We need an exclusive gate-
way for receiving the flow after the reminder. 106
3.29 Implementation of a Booking Process. The sub-process is imple-
mented in a call activity. It contains event sub-processes, which are
triggered through the corresponding events. 107
3.30 A Simple DMN Table. It describes the decision about the creditwor-
thiness of a customer. It has four rules and two inputs. Every DMN
Table requires only one value for the output. 108

3.31 A Simple DMN Table. It describes the decision about the creditwor-
thiness of a customer. It has four rules and two inputs. Every DMN
Table requires only one value for the output. 109
3.32 DMN Table for the Hit Policies. It presents the rules for selecting
rules in a DMN Table. 109
3.33 A Simple DMN Table. It describes the decision about the creditwor-
thiness of a customer. It has four rules and two inputs. Every DMN
Table requires only one value for the output. 110
3.34 The extensive Table With Simulation. It presents an extensive ver-
sion of the original DMN Table of Fig. 3.30. The left table contains
the partitions of the outputs values. The right one contains the cal-
culation for representative customers. 111
3.35 The Applicant Risk Rating. It classifies applicants according to age
and medical history. 111
3.36 The Applicant Risk Rating. Extensive table of the DMN Table of Fig.
3.35. We show the formula in the Numbers sheet. 112
3.37 Personal Loan Compliance. It classifies loan applicants according to per-
sonal rating, card balance, and education loan balance. 112
3.38 Personal Loan Compliance. The extensive table of the DMN Table
of Fig. 3.37. The best hit policy is ANY. 112
3.39 The Applicant Risk Rating (2nd Version). It classifies applicants ac-
cording to applicant age and medical history. 112
3.40 The Applicant Risk Rating (2nd Version). The extensive table of the
table of Fig. 3.39. Because there is at least an input tuple (see employee
6) that corresponds to many different outputs, the best hit policy is
FIRST. 113
3.41 Special Discount. It assigns discount to a customer. 113
3.42 Special Discount. Extensive table of the DMN Table of Fig. 3.41. The
best hit policy is COLLECT. 113
3.43 Holidays. Rules for calculating the number of holiday days. 113
3.44 Holidays. The extensive table of the DMN Table of Fig. 3.43. Because
the company sums the total number of holiday days, the best
hit policy is COLLECT. 114
3.45 Holidays (2nd Version). It assigns the number of holiday days to
an employee. 114
3.46 Holidays (2nd Version). The extensive table of the DMN Table of
Fig. 3.45. Because the company sums the total number of holiday
days, the best hit policy is COLLECT. 115
3.47 Student Financial Package Eligibility. It assigns a discount on study
fees according to the student's GPA, extra-curricular activities,
and national honor society membership. 115

3.48 Student Financial Package Eligibility. The extensive table of the DMN
Table of Fig. 3.47. Assume that the university will assign only one
discount. Because the student 11 will obtain two discounts, the best
hit policy will be FIRST. 115
3.49 Final Price. The DMN Table contains only three rules according to
the description of the case. 116
3.50 Final Price. The extensive table with the corresponding simulation.
Let us observe that the case silver and true gets no answer from
the DMN Table, so it is not possible to apply any hit policy. Thus
the table must be modified for applications. 116
3.51 Pre-qualification. The DMN Table contains five rules according to
the description of the case. 117
3.52 Pre-qualification. The extensive table with the corresponding sim-
ulation. Let us observe that there are cases with more than one
output, but the outputs are the same. Therefore, the hit policy ANY is
applicable. 117
3.53 Holidays. The DMN Table contains five rules according to the de-
scription of the case. 118
3.54 Holidays. The extensive table with the corresponding simulation. Let
us observe that there are cases with more than one output.
Even when they are the same, they must be summed. Therefore, the
hit policy COLLECT is the best. 118
3.55 Nobel Prize Process. This model describes the process of the Medicine
Nobel Prize. Five participants are part of the collaboration diagram. 119

4.1 Components of a Simple Queue. The most important components
we will study for the stochastic simulation are taken from queueing
theory. 122
4.2 Simple Queue System. The system satisfies the assumptions of a
simple queue system: FIFO customer discipline and an infinite customer
population. The time between arrivals and the processing time are
exponentially distributed. 125
4.3 Base Case. Each task satisfies the assumptions of a simple queue sys-
tem: FIFO customer discipline, infinite customers, and time between
arrivals and processing time that are exponentially distributed. Each
task can be approximated as a simple queue system. 126
4.4 Flexibilisation Strategy. Consists of assigning different tasks to a pool
of resources, which are allocated randomly if the resource is not busy. 128
4.5 Parallelisation Strategy. Consists of breaking the task sequence. This
strategy is obviously better than the base case, but rarely possible. 128
4.6 Composition Strategy. Consists of composing several tasks into one. This
strategy is better only if the service time of the composed task is less
than the sum of the processing times of the tasks that compose it. 129

4.7 Triage Strategy. Consists of assigning special resources to particu-
lar cases. Triage is a standard strategy used in emergency rooms
for prioritising patients according to the severity of their condition. 130
4.8 Process Model. This model contains two different paths. The first one
has a frequency of 0.7 (in case <=5000 EUR), while the frequency of
the second path is 0.3. 135
4.9 Process Model. This model contains several different paths. The prob-
lem consists of determining them and calculating the total processing
time without the queue times. 137
4.10 Process Model. This model contains three different paths. The first
one is 1-3-4-2, the second 1-3-4-2-5-(6,7), and the third 1-3-4-5-(6,7).
The frequencies of each path are 0.3 · 0.5, 0.3 · 0.5, and 0.7 respectively. 138
4.11 Initial review issues list model. After an interview with the issue list
manager, the initial model of the process is obtained. 143
4.12 As-is issue list management model. More precise investigation de-
termines how the process is executed, how tasks are triggered, pro-
cessing times and eventually costs. 144
4.13 Issue list management data model. We simply group the data and
information into a tree model that will serve as the basis for the ERM
model. 145
4.14 To-be issue list management model. It is assumed that the review
of the issue list and the determination of those issues ready for dis-
cussion are supported by the issue management system and will be
done in a composed task. 145
4.15 To-be issue list management alternative model. We propose to fully
automate the review issue list task and determine if issues ready, since
it has already been sufficiently standardised in a previous automa-
tion phase. 146
4.16 Initial invoice and payment model. The initial model of the process
is obtained by using Process Mining techniques. 146
4.17 As-is invoice and payment model. After an investigation or interview,
a more advanced model is obtained in which all the tasks are user
tasks. The type of each event is also confirmed and obtained. 147
4.18 Simplified data model of invoice and payment. The data and infor-
mation are grouped in a tree model that will serve as the basis for
the ERM model. 147
4.19 Decision rule for invoice classification. This rule considers the prod-
uct type, customer type, and amount for the invoice classification. 148
4.20 To-be invoice and payment. We automate the decision rule and the
release of the invoice. In addition, we create an intermediate mes-
sage for sending a request authorisation. The next task (trigger pay-
ment) remains unchanged. 148
4.21 Initial model order processing. The initial model of the process is ob-
tained through personal interviews. 149

4.22 As-is order processing model. All tasks are manual and have addi-
tional time because the staff member does not work continuously.
The client’s actions have not been modelled. It was determined that
the frequency of arrival of cases is every 2 hours as well as the times
of each task and the frequencies of each path. 150
4.23 Simplified data model of invoice and payment. The data and infor-
mation are grouped in a tree model that will serve as the basis for
the ERM model. 151
4.24 Decision rule for the feasibility decision. This rule considers total amount,
product availability, and quantity. 151
4.25 To-be order processing model. We automate the decision rule; a user
task will clarify the order, and a clerk will decide whether the
request is possible or impossible. A 24-hour intermediate timer
represents the time it takes the user to start the next task. 151

5.1 Workflow respond to an e-mail 153


5.2 Workflow approve a document 154
5.3 Workflow fill job vacancy 155
5.4 Workflow sign contract 156
5.5 Workflow process claim 158
5.6 Workflow approve and close invoice 160

6.1 Model discovered from the event log of Table 6.1 165
6.2 Model discovered from the event log of Table 6.3 165
6.3 Model discovered from the event log of Table 6.4 166
6.4 Model with duplicated activities 166
6.5 Model with a free-choice construct that cannot be discovered with
the α algorithm 167
6.6 Model with a free-choice construct that can be discovered with
the α algorithm 168
6.7 Model with short loops. Task C can be executed an arbitrary number
of times 168
6.8 Spreadsheet for calculating Pearson's correlation coefficients 175
6.9 Network of relations between originators whose arcs represent Pearson's
correlation coefficient distances greater than 0 176
6.10 Hierarchy obtained according to the joint activities criterion measured
with the Pearson's correlation coefficient metric 176
6.11 Simple network of relations between 4 originators 176
6.12 Simple network of relations between 5 originators 177
List of Tables

1.1 Performance of current process. 30


1.2 Performance of new optimised and implemented process. 30

2.1 Direct decision patterns for the initiatives evaluation. Let us note
that the number of the pattern corresponds to the number of nature
variables. 46
2.2 Implicit decision patterns examples for the initiatives evaluation.
The relationships between the variables or nodes are not explicit. As
in the case of explicit patterns, the number of the pattern corresponds
to the number of nature variables. 47
2.3 The CEO of the Palace Hotel in Lima is evaluating two Information
Technology projects. The economic benefits of each initiative depend
on whether the projects are successful or not. 48
2.4 Regret Table for the example Hotel Palace. This Table shows the
opportunity loss of bad decisions. If the CEO of the Palace Ho-
tel decides on standard software, the opportunity loss will be $0 if
nature decides on very successful and $60 for moderate success.
The criterion for selecting the best decision will be minimax. 49
2.5 The Business Process Department of Asta Co. is trying to decide which
alternative project to implement. Each project has customer impact
categorised from A (best) to F (worst), which depends on the imple-
mentation type. 51
2.6 Rich and Co. is considering implementing one of four different pro-
cess projects. Each project has an implicit impact on the variation of the
cycle time of its processes. 52
2.7 Problem Rich and Co. This table is the opportunity loss measured as
the percentage increase of the cycle time for each decision and
different levels of process performance. 53
2.8 Microcomp is a Chilean-based software company. It is planning to
invest in different projects. The unitary cost reduction of its prod-
uct depends on different scenarios. 55

2.9 The governmental agency for environmental protection is evaluating
whether to invest in business process projects. The implicit effect of each
project on pollution is calculated depending on the strategy policy for
the implementations. 55
2.10 A machine shop owner is attempting to decide whether to invest.
The profit or loss from each investment and the probabilities are associ-
ated with each contract. 56
2.11 The opportunity loss table for the problem machine shop owner. This
table is obtained from the Table 2.10. 57
2.12 Computation of the a posteriori probabilities. The tables correspond
to the cases with recommendation contract (C) and not contract (NC),
respectively. The given values are shown in red. 61
2.13 The environment agency reviewed the impacts on the pollution re-
duction of its projects and assigned new probabilities to the imple-
mentation policies. 63
2.14 Miramar Co. is going to introduce one of three new improvements
to its products. The market conditions (favourable, stable, or unfavourable)
will determine the profit or loss the company realises. 64
2.15 Preference scale for pairwise comparison. Values 2, 4, 6 and 8 are in-
termediate values between two adjacent judgements. 66
2.16 Matrix of pair comparison between projects A, B, and C according
to criterion C1 . 66
2.17 The first step is to add the preference values in each column of the
pair comparison matrix. 67
2.18 The second step is to compute the standardised matrix. 67
2.19 The next step is to average the values in each row. 67
2.20 Matrix of the weights of each project according to each cri-
terion. 68
2.21 Pair comparison matrix to determine the relative importance or weight-
ing of the criteria. 68
2.22 Normalised matrix for criteria with row average. 69
2.23 You can determine an acceptable level of inconsistency by compar-
ing CI with a random index, RI, which is generated from randomly
filled pairwise comparisons. This indicator has been generated in the
laboratory and depends on the number n of items to be compared. 71
2.24 Pair comparison matrix for each criterion of Problem 2.21. 73
2.25 Pair comparison matrix for each criterion of Problem 2.23. 74
2.26 Pair comparison matrix for the criteria of Problem 2.23. 74
2.27 Pair comparison matrix for each criterion of Problem 2.25. 75
2.28 Pair comparison matrix for the criteria of Problem 2.25. 75
2.29 Pair comparison matrix for each criterion of Problem 2.27. 76
2.30 Pair comparison matrix for the criteria of Problem 2.27. 76
2.31 Pair comparison matrix for the criterion of Problem 2.28. 77
2.32 Pair comparison matrix for the criteria of Problem 2.28. 77

2.33 Pair comparison matrix for the criteria of Problem 2.29. 77


2.34 Data associated to each expert of Problem 2.29 78

3.1 Definition of the credit score rating categories. The output of this ta-
ble is used as input in the DMN Table pre-qualification. 116

4.1 Resource Table. It shows the results associated with each resource. n_i
is the number of resources for resource i and U_i is the utilisation
rate of resource i. 132
4.2 Task Table. It shows the results associated with each task for the base case
and its improvements. W_i is the number of resources for task
i. 132
4.3 Summary of Results. It summarises the results of n, W and U for the
models of Tables 4.1 and 4.2. The values W_q, L_q can equally be ob-
tained, but they do not add concepts relevant to our objectives. 133
4.4 Pareto Optimal Solutions. They are obtained by a standard proce-
dure. Alternatives 1 and 3 are dominated by alternative 2,
while alternative 5 is dominated by alternative 4. 133
4.5 Summary of Results. It summarises the results of n, W and U for alter-
native models. 134
4.6 Computation of Pareto Optima. It contains the eliminated alternatives
and shows the Pareto optimal solutions: alternatives 3, 4, and 5. 135
4.7 Task Table obtained by simulating the model of Fig. 4.8. 136
4.8 Task Table obtained by simulating the model of Fig. 4.10. 139
4.9 The SAC method allows for phase-controlled changes. Phase (S) in-
troduces improvements in information and decisions at low cost and
low risk. Phase (A), on the other hand, involves radical improvements
with greater risk, less technical feasibility and a higher investment
cost. Finally, phase (C), adjusts the resources normally released in
the previous step. 143
4.10 Average cycle time of the as-is model. The first path has a cycle time
of 5 hours and occurs 70% of the time, while the second path takes
13 hours and occurs 30% of the time. This gives a weighted average
of 7h 24'. 148
4.11 Average cycle time of the to-be model. The standardisation and au-
tomation of the process radically reduces the average cycle time to
4h 48'. 149
4.12 Average cycle time of the as-is model. The model has four alterna-
tive paths with a weighted average of 3d 3h 10’. 150
4.13 Average cycle time of the to-be model. The model has four alterna-
tive paths with a weighted average of 1h 13’. Note that the frequen-
cies of the paths have changed due to improvements in the decision
rule. 151

5.1 Fields generated by the task Write response for workflow of Fig. 5.1 153
5.2 Trigger with a form for workflow of Fig. 5.2 154
5.3 Roles and tasks for workflow of Fig. 5.2 154
5.4 Trigger with a form for workflow of Fig. 5.3 155
5.5 Roles and tasks for workflow of Fig. 5.3 155
5.6 Manual trigger for workflow of Fig. 5.4 156
5.7 Roles and tasks for workflow of Fig. 5.4 156
5.8 Fields processed by tasks for workflow of Fig. 5.4 157
5.9 Form Trigger for workflow of Fig. 5.5 158
5.10 Roles and tasks for workflow of Fig. 5.5 158
5.11 Fields processed by tasks for workflow of Fig. 5.5 159
5.12 Form Trigger for workflow of Fig. 5.6 160
5.13 Roles and tasks for workflow of Fig. 5.6 160
5.14 Fields processed by tasks for workflow of Fig. 5.6 161

6.1 Event log of Problem 6.1 163
6.2 Instances obtained from the event log of Problem 6.1 164
6.3 Event log of Problem 6.2 165
6.4 Event log of Problem 6.3 166
6.5 Simple case of an event log with originator data 169
6.6 Originator × task matrix of Table 6.5 169
6.7 Simple case of an event log with originator data 170
6.8 Originator × task matrix of Table 6.7 170
6.9 Shortest paths between all pairs of nodes for computing c_B(3).
For the computation, row 3, column 3, and the diagonal have been
removed 171
6.10 Shortest paths between all pairs of nodes for computing c_B(3).
For the computation, row 1, column 1, and the diagonal have been
removed 172

Dedicated to my wife.
Introduction

This book was written for professors, practitioners, and professionals
of Business Process Management. It contains a set of articulated
methods, methodologies, and techniques for planning, designing,
optimising, implementing, and monitoring business processes.
The seminal work on this book was started at the invitation of
Prof. Dr. Rudolf Vetschera of the University of Vienna. The book is
used by the author as teaching material for the BPM course at the
University of Chile and at the University of Vienna.
1
The Real Time Enterprise

1.1 Summary

This Section was published in¹.

¹ Roberto de la Vega, Mathias Kirchmer, and Sigifredo Laengle. “The real-time
enterprise”. In: Industrial Management 50.4 (2008). Available at 2008-rte
(7.2015), pp. 16–19.

Becoming a real-time enterprise is an important challenge for
companies who operate in a changing and highly competitive en-
vironment. Real-time enterprises not only design, implement and
monitor their processes, they do it at high speed. In this article we
show, with the help of an actual case, how tools and techniques taken
from industrial engineering can be used to systematise the design-
implementation-monitoring cycle and ensure it functions quickly and
accurately.

1.2 Advantages of a RTE and How to Implement it

Figure 1.1: Pit stop for the Scuderia Ferrari during a Formula 1 race.

The Speed Challenge. To really understand the challenge of building
a real-time enterprise, think for a moment of the Formula One auto
race. How does a team win the F1? What factors are key to its suc-
cess? The obvious answer is high speed, and the same is true of a
company’s ability to satisfy its customers. But how do you achieve
that high speed on the racetrack?
A large part of any race is played out in the pit stops, where a
whole series of complex manoeuvres have to be executed simultane-
ously in a matter of seconds (see Figure 1.1). This requires coordina-
tion supported by sophisticated communication systems. Working
quickly and precisely is absolutely fundamental, with nothing left to
chance. Speed and accuracy are of the essence.
The brutal reality of the F1 is evident at the end of the race. A
stroke of luck and the team grabs a winning position; an engine
breakdown or technical error and they end up well back in the field.
A huge effort goes into the search for mechanical and aerodynamic
improvements, for every second saved is precious. Discouragement
is a constant danger as the teams pull out all the stops to achieve an
optimum speed and score points.

Figure 1.2: The McLaren team's telemetry unit at the Formula 1.

But that’s not all. Figure 1.2 shows the McLaren team’s telemetry
data unit, whose job is to monitor vehicle behaviour from second
to second. Sensors measure engine temperature, oil composition,
the rpm-speed ratio and other vehicle performance parameters, and
the unit’s engineers closely analyse the information to determine
exactly what’s happening and why at every moment. They, too, leave
nothing to chance, deciding when the driver takes a pit stop and
what adjustments to make in the few seconds the car is off the track.
Speed is everything, of course, but speed is only possible if the pit
crew can count on a rigorous and detailed performance monitoring
system to back them up.
It is these performance measurements that make it possible to adjust
the vehicle to the changing conditions of the race. Without such
any-time, any-place information captured by the data-acquisition
technology, fine-tuning the original strategy would be out of the
question. Fortunately, in the business world the technology for imple-
menting rapid strategy adjustments is available. How well it works
will depend on the following three factors:

• Rapid redesign (adjustment) of long-term plans to meet the latest needs.

• Rapid implementation of the redesigns.

• Effective and accurate monitoring.

One further question for top management: where does the real
challenge lie? In our view the heart of the matter is choosing how
fast to carry out process innovation, which means determining the
speed of the redesign-implementation-monitoring cycle. A 1960 Volk-
swagen Beetle may have been an economical alternative in its time,
but today it would be a slow solution. A Formula 1 vehicle is a fan-
tastic choice for speed, but it’s likely to be expensive, not to mention
risky. Today, however, companies who go for the Formula 1 option
have the technology they need at their disposal; the test of their man-
agerial abilities will be to elect a process innovation speed that will
meet the challenges of their environment.

The redesign-implementation-monitoring cycle: increasing the optimum
speed. The key to the real-time enterprise approach is the speed at
which the redesign-implementation-monitoring cycle is applied to
business processes. How, then, can we ensure the cycle is imple-
mented effectively and without delays? How should each of the three
tasks be coordinated?
In modern companies, two functional units often lead an uneasy
coexistence: processes and information technology. An example of
this is the banking sector, where the speed at which consumer credit
is granted is part of a bank’s competitive advantage. Not long ago,
the market standard for approving a loan was 25 days; now, it's just
15. Typically, a firm’s process engineers design tasks, standardise ad-
ministrative processes and calculate human resources capabilities to
meet this deadline, and its IT experts then have to implement these
technical requirements with the available technology. Each group
works separately, and the final result is often a failure. With two
teams using two methods and two languages side by side, what fre-
quently happens is that the process designers are still studying how
to optimise when the IT experts begin implementing the software.
Not only does each team have its own method, they're also using two
different sets of tools.

Figure 1.3: The process innovation cycle.

Clearly, what this team needs to win the race is coordination. And
that implies a tool that can model and analyse processes and coordi-
nate and monitor their execution. It must also be able to effectively
articulate the requirements of the business using the implemented
technology. Fortunately, companies today can choose from a variety
of tools and techniques that provide solid support for these tasks.
In the area of process design, for instance, there are a series of
graphical and markup languages (in XML format) that assist com-
munication across different providers and design tools. One of the
best-known graphical languages is BPMN (see http://bpmn.org),
promoted by the non-profit Business Process Management Initiative
consortium that brings together the most prominent process design
software firms. All of these languages contain totally integrated user-
friendly dynamic simulation tools for calculating capabilities and
determining cycle times and costs. For example, giant software com-
pany SAP offers ARIS (Architecture of Integrated Information Sys-
tems) in its NetWeaver suite, Oracle’s BPEL also provides ARIS for
modelling, IBM recommends using WebSphere for the design, simu-
lation and integration of information systems, and Microsoft has its
BizTalk platform that allows systems to be integrated independently
of the platform they’re built on.
Major business software providers have also given their de facto ap-
proval to the use of Service Oriented Architecture (SOA), a standard
that allows information systems to be designed using software ser-
vices from different systems. Even more important, this technology
makes it possible to orchestrate these services during execution as
a business process. This integration and orchestration technology is
also known as a process engine. Thus, the e-commerce functions of


Siebel’s Customer Relationship Management system can be utilised
with mySAP.com’s billing system, transforming the two platforms
into a single orchestrated process that can be optimised in an inte-
grated fashion.
Finally, the market also offers tools for process monitoring. Known
generically as process mining, these techniques permit information
to be extracted from information systems. For example, the audit
trails of a workflow management system or the transaction logs of an
enterprise resource planning system can be used to discover models
describing processes, organisations, and products. Moreover, process
mining can be used to monitor deviations (e.g., comparing observed
events with predefined models or business rules).
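
As a rough illustration of this deviation-monitoring idea, the following sketch compares each trace of an event log against a set of allowed directly-follows transitions. The model and the traces are hypothetical examples, not taken from any of the tools mentioned above.

# A minimal sketch of deviation monitoring (conformance checking) on an
# event log. The allowed transitions and the observed traces below are
# hypothetical examples.

ALLOWED = {
    ("place order", "analyse feasibility"),
    ("analyse feasibility", "make offer"),
    ("analyse feasibility", "clarify order"),
    ("clarify order", "make offer"),
    ("clarify order", "decline order"),
}

def deviations(trace):
    """Return the directly-follows pairs of a trace that the model does not allow."""
    return [pair for pair in zip(trace, trace[1:]) if pair not in ALLOWED]

observed_traces = [
    ["place order", "analyse feasibility", "make offer"],
    ["place order", "make offer"],  # skips the feasibility check
]

for trace in observed_traces:
    print(trace, "->", deviations(trace) or "conforms")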
Process mining is closely related to BAM (Business Activity Mon-
itoring), BOM (Business Operations Management), BPI (Business
Process Intelligence), and data/workflow mining. Unlike classical
data mining techniques, the focus is on processes and questions
that transcend the simple performance-related queries supported by
tools such as Business Objects, Cognos BI, and Hyperion. SAP of-
fers Process Performance Management (PPM) as a process mining
tool that helps answer questions such as what is the cycle time of a
process from the moment a customer places an order to the moment
it’s filled, what is the cost of the process, and why do processes in
different branches have different performance levels.

Spevi in the Formula 1. Let's now look at a real example of real-time
enterprise process innovation. Spevi is a Chilean company that began
life 18 years ago installing sound equipment, and today offers a wide
portfolio of related services. Over the last five years Spevi’s business
has exploded thanks to the boom in construction of large buildings
in Chile and neighbouring Argentina and Peru, with annual billings
of US$8 million in the form of some 300 orders that range from the
building of specialised products to large equipment installations in
public buildings.
The variety and complexity of the services provided by Spevi
made the handling of its work orders particularly difficult to manage,
and the average processing time between receiving an order and
responding to it was 2½ days. The company was convinced this was
not good enough, however, and a project was initiated to speed up its
order response. The plan called for the adoption of a new, three-stage
procedure: (1) obtain automatically a model of the current processing
system and determine the performance indicators, (2) propose a new
optimal process that specifies the technological support, and (3) then
implement the optimised process.

This real-time procedure departs radically from the more traditional
approach, which starts by documenting the existing pro-
cess by hand, then models and optimises, and finally, implements
the new process. This difference is the key to the real-time enter-
prise: instead of modelling the current situation manually, the task
is done automatically using the techniques of process mining. In
our case, the process mining is handled with the help of ProM (see
www.processmining.com), a free software developed at the Eind-
hoven Technical University in the Netherlands.

Figure 1.4: Schematic diagram of real-time procedure adopted by Spevi.

The entire procedure is presented schematically in Figure 1.4. It
begins with the retrieval of the event log file, which contains the ac-
tual order process data. On the basis of the event log data, ProM
generates an “as-is” model of the current order process by induction.
The model is then exported to Yasper, another freeware program de-
veloped at Eindhoven (see www.yasper.org). Once the to-be model of
the new process has been designed and optimized using Yasper it is
exported directly to Yawl, yet another freeware program developed at
Eindhoven (see www.workflowspattern.org). In Yawl the execution of
the new process is orchestrated, meaning that the program functions
as a process engine. Since all the programs are based on Petri net
formalism, they can easily share the model.
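
To make the induction step more concrete, the following minimal sketch counts the directly-follows relations of a toy event log, grouped by case. This is an illustration only, not how ProM works internally; real discovery algorithms such as the α algorithm derive a Petri net from exactly this kind of relation. The case identifiers and activity names are hypothetical.

# Minimal sketch of model discovery from an event log: count how often one
# activity directly follows another within the same case. The log rows are a
# hypothetical toy example; ProM's discovery algorithms do considerably more.
from collections import defaultdict

event_log = [  # (case id, activity), ordered by timestamp within each case
    ("order-1", "customer places order"),
    ("order-1", "analyse order feasibility"),
    ("order-1", "email to co-worker"),
    ("order-2", "customer places order"),
    ("order-2", "analyse order feasibility"),
    ("order-2", "clarify the order"),
    ("order-2", "email to customer"),
]

traces = defaultdict(list)
for case_id, activity in event_log:
    traces[case_id].append(activity)

directly_follows = defaultdict(int)
for activities in traces.values():
    for pair in zip(activities, activities[1:]):
        directly_follows[pair] += 1

for (a, b), count in sorted(directly_follows.items()):
    print(f"{a} -> {b}: {count}")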
Let us revise the procedure in detail. A fragment of file event log
is shown in Figure 1.5. Since not all of the company’s order process-
ing tasks use electronic databases, the event log had to be completed
using data acquired manually. If the process were completely auto-
mated, the file could be obtained automatically. This will be possible
in future iterations of the process improvement cycle, and once this is
the case the entire procedure will be greatly accelerated.

Figure 1.5: Part of the manually completed event log file.

Figure 1.6: As-is model of current process generated by ProM from event
log and exported to Yasper.

A simplified representation of the model obtained by ProM is
shown in Figure 1.6, omitting certain details such as the databases
and specification of the roles or employees involved. Yasper provides
the necessary support for this job via two key capabilities, dynamic
simulation and publication of the model. The first of these calculates
performance indicators and determines the resources for minimizing
cycle time.
A brief overview of the current process depicted in Figure 1.6
should help explain what will be involved in the modeling and
optimization phase using Yasper. It begins with the customer places or-
der event, in which a customer makes out an order and an employee
inputs it into a database stored on a computer. The employee then
uses this data to analyze order feasibility, deciding whether filling the
order is possible (60% of cases), may be possible (22% of cases), or is
impossible (18% of cases). If the order is in the may be possible state, the
employee tries to clarify the order with the customer by telephone. This
clarification will have one of two outcomes: the order is possible (88%
of cases) or impossible (12% of cases). If the order turns out to be pos-
sible, the employee makes a tentative offer and informs a co-worker
by email; if it is found to be impossible, the employee sends an email
directly to the customer declining the order.
As can be seen in Figure 1.6, there are four alternative paths in
the current process from start to finish. The performance indicators
for each of them are listed in Table 1.1. The most likely path (prob-
ability of 60%) begins with the customer places order event and ends
with email to co-worker (without clarification) while the least probable
one (2.6%) begins with customer places order and ends with email to
customer (with clarification).

Table 1.1: Performance of current process.

Path                                                                        Cycle Time               Probability of Occurrence
From customer places order to email to co-worker (without clarification)   2 days 5 hours 20 min    0.60
From customer places order to email to co-worker (with clarification)      2 days 13 hours 40 min   0.22 × 0.88
From customer places order to letter to customer (without clarification)   2 days 5 hours 25 min    0.18
From customer places order to email to customer (with clarification)       2 days 13 hours 35 min   0.22 × 0.12

Figure 1.7: To-be model of process modelled and optimised in Yasper for
export to Yawl.

The performance results of the optimised process design (To-be
model) are laid out in Table 1.2, and demonstrate clearly the reduc-
tion in cycle times for all four paths and the change in each one’s
frequency (probability of occurrence). This optimisation is the result
of two concepts in the solution: standardising order information via
an online services catalogue, and the automation of certain tasks.
The proposed process can now be implemented and its execution
orchestrated by Yawl (see Figure 1.7).

Table 1.2: Performance of new optimised and implemented process.

Path                                                                             Cycle Time             Probability of Occurrence
From make out and input order to make tentative offer (without clarification)   ½ day 4 hours 25 min   0.90
From make out and input order to make tentative offer (with clarification)      1 day 8 hours 15 min   0.05 × 0.90
From make out and input order to decline order (without clarification)          1 day 1 hour 10 min    0.05
From make out and input order to decline order (with clarification)             1 day 3 hours 20 min   0.05 × 0.10
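
The weighted average cycle times behind Tables 1.1 and 1.2 can be reproduced with a small calculation. The sketch below assumes calendar days of 24 hours; this convention is our assumption and is not stated in the text.

# Weighted average cycle times of the as-is (Table 1.1) and to-be (Table 1.2)
# paths. Assumption: a "day" is 24 hours; the book does not state a convention.

def hours(days, hrs, mins):
    return days * 24 + hrs + mins / 60

as_is = [  # (cycle time in hours, probability of occurrence)
    (hours(2, 5, 20), 0.60),
    (hours(2, 13, 40), 0.22 * 0.88),
    (hours(2, 5, 25), 0.18),
    (hours(2, 13, 35), 0.22 * 0.12),
]
to_be = [
    (hours(0.5, 4, 25), 0.90),
    (hours(1, 8, 15), 0.05 * 0.90),
    (hours(1, 1, 10), 0.05),
    (hours(1, 3, 20), 0.05 * 0.10),
]

def weighted_average(paths):
    return sum(time * prob for time, prob in paths)  # probabilities sum to 1

print(f"as-is average cycle time: {weighted_average(as_is):.1f} hours")
print(f"to-be average cycle time: {weighted_average(to_be):.1f} hours")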

Lessons Learned. The case we have just described holds a number of
lessons both for company managers and industrial engineers. As well
as being responsible to shareholders for the value of the business,
managers should also ensure its processes operate at the required
speed in order to meet the needs of ever more demanding customers.
This will mean obtaining the technical skills to design and optimise
processes efficiently. As for industrial engineers, they should have the
necessary tools, or set of integrated tools, for responding to the chal-
lenge of building the real-time enterprise: convert business strategies
into optimised designs, implement these designs and monitor process
performance.

1.3 The RTE and Coordination Mechanisms

1.4 Questions and Problems

Problem 1.1. Explain why the title of the article is Real Time Enterprise
and what the difference is with Business Process Management.

Solution 1.1 Business Process Management is a set of management
activities consisting of modelling, analysing, optimising, implementing,
and monitoring business processes. A Real Time Enterprise is an
organisation that executes these management activities at high velocity.
A Real Time Enterprise will try to combine management concepts and
technical methods efficiently.

The definition of business process will be studied in the next Chapter.
The author tries to show the advantages of having an organisation
that manages its processes at high velocity. For this reason RTE is the
title of the article (or Chapter).

Problem 1.2. Explain how the improvement process (improvement
initiatives or projects) in a Real Time Enterprise context differs from
the traditional mode of improvement.

Solution 1.2 Every organisation manages its business processes, i.e.
each company models, analyses, optimises, implements, and mon-
itors its processes. This is a question of methods, techniques, and
speed for improvement processes. Not all organisations are RTEs.
A traditional organisation manages its processes slowly, without so-
phisticated techniques and with poor communication between these
management phases. Instead, RTEs work very fast with coordinated
phases of BPM and, particularly, use automatic techniques of moni-
toring.

Problem 1.3. Explain the essential competitive advantage that the
concept of Real Time Enterprise can deliver to a company.

Solution 1.3 BPM ensures that the right people execute the right tasks at
the right moment. What, then, is the advantage of an RTE? Answer:
The advantage is that the execution of the process that manages the
processes (i.e., BPM) is carried out at high speed. In other words, the
changes to the operational processes are very fast. That is, an RTE gives
flexibility and agility to the company for changing its structures
and processes.

Problem 1.4. Explain also the role of technicians and that of senior
management.

Solution 1.4 Technicians require the technical skills and tools to de-
sign, analyse, optimise, implement, and monitor processes efficiently.
Senior management, instead, should ensure the processes operate at
the required speed in order to meet the needs of ever more demand-
ing and changing customers.
2
Strategic Process Management

2.1 Introduction

2.2 Strategic Process Modelling

Problems
Problem 2.1. Identify objects called objectives and the relationships
between them. Model the objective tree:
The industrial laundry service known as LavaBien lost important cus-
tomers during the last year. The owner asked his loyal customers about
the causes of the contract terminations (which he himself had considered
safe). The reason given is that the competition is offering the same service
for a lower price. Apparently, the competition can make a better offer
as it invested in the most advanced washing technology. With this infor-
mation, he consulted his suppliers about purchasing new machinery;
however, he had to give this up due to the high investment level it
implied. This is why he sought another solution. He devised a way
to offer a differentiated service to his customers, which consisted in
lowering the re-position time by 50% and reducing the price by 20% for
the customers.

Figure 2.1: Solution Prob. 2.1. The objective tree is subdivided into four
objective groups (in this example we have only two). The objectives are
written in a succinct way, with few words and with a verb. In this example,
the objectives found belong to the first two objective groups.

Solution 2.1 This is a simple example, but it may have modelling
complexities. The exercise is about understanding what the organisation's
strategic objectives are and discovering the causal relationships
between them. To answer, we will assume fundamental causal rela-
tionships that are repeated in practice in any organisation: the final
objectives of the organisation come first in the hierarchy. At the sec-
ond level are the objectives closely related to customers or users, which
account for the outputs, and their specification, that the organisation
offers. The objectives associated with clients assume that their fulfil-
ment causes the achievement of the objectives of the organisation. In
the third level or group we find the objectives related to the perfor-
mance of the processes that produce the outputs identified above.
Finally, at the fourth level, we identify objectives regarding how to
ensure that the processes have the expected performance. These
fourth-level objectives normally have to do with human and technical
capacities.

It is possible to present a typology of fundamental objectives depending on
whether the environment of the organisation is a competitive one, whether
it is a public service organisation, or an organisational unit within the
hierarchy that contains it.

The previous explanation allows us to identify a typical causal struc-
ture of the objectives of an organisation. Now we will recommend
some aspects of form or, if you like, syntax. We suggest that the
objectives be succinct, clear, and sufficiently concrete. Therefore, we
propose objectives written with a verb, of no more than 10 words, and
not abstract.
The solution of the exercise is shown in Fig. 2.1.
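
For readers who want to keep an objective tree in a machine-readable form, the following sketch shows one possible (and entirely optional) representation: each objective carries its level and the higher-level objective it contributes to, so the tree can be read top-down (how?) and bottom-up (why?). The concrete objectives are paraphrased from the LavaBien case and are illustrative only; they are not the content of Fig. 2.1.

# Illustrative sketch of an objective tree as a data structure. The four-level
# grouping follows the text; the concrete objectives are paraphrased from the
# LavaBien case and are only an example, not the book's Fig. 2.1.

objectives = {
    # objective: (level, higher-level objective it contributes to, or None)
    "Recover lost revenue": (1, None),
    "Retain customers with a differentiated service": (2, "Recover lost revenue"),
    "Reduce the re-position time by 50%": (3, "Retain customers with a differentiated service"),
    "Reduce the price by 20%": (2, "Recover lost revenue"),
}

def how(objective):
    """Read the tree top-down: which lower-level objectives answer 'how?'."""
    return [name for name, (_, parent) in objectives.items() if parent == objective]

def why(objective):
    """Read the tree bottom-up: which higher-level objective answers 'why?'."""
    return objectives[objective][1]

print(how("Recover lost revenue"))
print(why("Reduce the re-position time by 50%"))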

Problem 2.2. Identify objectives, products and relationships in the
following strategic formulation. For this, model the objective tree, the
product tree, and the product allocation diagram:
The Banco las Américas has recently entered the Chilean mar-
ket by acquiring an entire local bank and opening new branches in addition
to the ones that the purchased bank already owned. The bank has made
efforts to introduce services for young university students, as the uni-
versity population is expected to triple in the next six years. The main
products to be offered are: current accounts, loans and insurance for
higher education. All of this is strongly focused on internet banking.
Technical units consider that the e-banking platform is deficient. There-
fore, the bank intends to outsource the best Web content services and call
centre services.

Figure 2.2: Solution Prob. 2.2. The output tree represents the two main
strategies of the Bank: to maintain the current outputs and to evolve to
electronic services.

Figure 2.3: Solution Prob. 2.2. A general objective tree model of an organ-
isation has four groups of objectives related by causality. The objectives
found belong to the first three levels.

Figure 2.4: Solution Prob. 2.2. The initiative allocation diagram model
corresponds to the two initiatives described in the text.

Solution 2.2 The output tree of Banco las Américas is shown in Fig.
2.2. The output tree represents the two main strategies of the Bank:
to maintain the current outputs and to evolve to electronic services.
The objective tree was modelled considering the text without fur-
ther interpretation. Like Example 2.1, the model has four groups
of objectives related by causality. The objectives found belong to
the first three levels: main (financial) objectives, customer-related
objectives, and process-related objectives. It is shown in Fig. 2.3.
The initiative allocation diagram describes the two initia-
tives mentioned in the text. It is shown in Fig. 2.4.
The following problem was taken from 1 , page 37. 1
John S. Hammond, Ralph L. Keeney,
and Howard Raiffa. Smart choices: A
Problem 2.3. Some parents have to choice the best option for their practical guide to making better decisions.
Harvard Business Review Press, 2015
children. Because they consider a difficult decision, they write a list
about what they want for their children. The list is as following:

• Learn the fundamentals.
• Enjoy school.
• Develop creativity.
• Develop discipline.
• Learn good work habits.
• Learn to work with other people.
• Learn to function in our society.
• Develop option for the future (secondary schools).
• Develop lasting friendships.
• Deepen a commitment to basic values (honesty, helping others, empathy).
• Minimising annual cost.
• Participate in physical activities.
• Learn about different people.
• Be intellectually challenged.
• Know the joy of learning and knowledge.
• Participate in and develop an appreciation for art.
• Minimising travel time to school.
• Encourage diversity in styles of life (dress, interests).
• Discourage competitive behaviour for material possessions (clothes, bikes).
• Encourage respect for and understanding of all kids, regardless of family circumstances.

Could you identify an objective tree? Propose a reasonable model


for understanding the objectives of the parents.

Figure 2.5: Solution Prob. 2.3. The objective tree shows new objects, namely four objectives which do not exist in the original text. Thus, the model is simplified and made more readable.

Solution 2.3 The solution is shown in Fig. 2.5. This example is very interesting because it introduces a particular concept of causality, namely the relation is part of, is contained in, or composition. This is a very recurrent relationship between objects. Its meaning is clear, but it can also be interpreted as a causal relationship in the following sense: the different parts of an object are the cause of the existence of such an object.
Another interesting idea in this example is the criterion used for grouping objectives. We propose four general objectives which were not explicit in the original text. In other words, we recommend grouping the objectives into similar concepts for simplicity and readability. Too many objects in the same model are very difficult to read. In this case we group objectives according to intellectual abilities, creativity and physical abilities, social abilities, and others related to the general concept of costs.

The following example is taken from John S. Hammond, Ralph L. Keeney, and Howard Raiffa. Smart choices: A practical guide to making better decisions. Harvard Business Review Press, 2015, page 39.

Problem 2.4. The U.S. Environmental Protection Agency (EPA) uses the objective minimise emissions for evaluating many proposed programs, but the agency declares other objectives too:

• Minimise emissions.

• Reduce pollutant concentrations.

• Limit human exposure to the pollutants.

• Avoid health damage.

Model the objective tree.

Figure 2.6: Solution Prob. 2.4. The objective tree differentiates causes and effects. The diagram can be read top-down (from effects to causes) by answering the question how, or bottom-up (from causes to effects) by answering the question why.

Solution 2.4 This is a typical example where it is necessary to identify causal relationships. Sometimes it is difficult to differentiate between causes and effects in real situations. We solve this example in Fig. 2.6. Let us observe that the diagram can be read top-down (from effects to causes) by answering the question how, or bottom-up (from causes to effects) by answering the question why.

Problem 2.5. The 2016 strategy declaration of the Chilean copper concern Codelco can be found in the document downloaded from the link codelco-mision on page 19. Model the objective tree.

Figure 2.7: Solution Prob. 2.5. The objective tree of the company Codelco. This is again an example of the identification of the objectives of a complete company, i.e. we try to identify four levels of objectives. In this case, we only found objectives related to the process performance space.

Solution 2.5 The solution is shown in Fig. 2.7. This is again an example of the identification of the objectives of a complete company, i.e. we try to identify four levels of objectives. In this case, we only found objectives related to the process performance space. We identified neither financial, nor customer, nor resource-related objectives. The solution is immediate.

Figure 2.8: Simplified representation of the processes of the Soprole company.

Problem 2.6. Identify and interpret the objects of Fig. 2.8. Model the
value added chain and the product tree.

Figure 2.9: Solution Prob. 2.6. The value added chain of the company Soprole is obtained easily from Fig. 2.8.

Figure 2.10: Solution Prob. 2.6. The output tree of the company is obtained immediately from Fig. 2.8.

Solution 2.6 The models are obtained immediately from Fig. 2.8 and
expressed in a standard form in Fig. 2.9 (value added chain) and in
Fig. 2.10 (output tree).
The following problem considers the concept of the Value Added Chain popularised by M. Porter in 1985 (M.E. Porter. Competitive advantage: creating and sustaining superior performance. New York: Free Press, 1985).

Figure 2.11: The value added chain concept proposed and popularised by Michael Porter in 1985.

Problem 2.7. Consider Fig. 2.11. It represents the value added chain, a concept described and popularised by Michael Porter in 1985. A value added chain is understood as a set of activities that deliver a product valued by customers. Looking at the figure, answer the following questions:

1. What are the differences between primary and secondary activities? What is meant by core processes? What are core competences?

2. Reflect on what would be – in principle – the core process of the University of Chile, of the retail industry, and of any SME of your selection.

3. Consider also the question of outsourcing activities: which activities (primary or secondary) are in principle candidates to be externalised or to be kept within the organisation?

Solution 2.7
1. Primary activities are (according to Fig. 2.11) service, marketing & sales, outbound logistics, operations, and inbound logistics, while secondary activities (or support activities) are firm infrastructure, human resource management, technology, and procurement. Primary activities are those directly related to the final customer, while secondary activities are related indirectly. From the point of view of the organisational objectives, whether an activity is near to the customer may or may not be important. What is really important is the decision of a company to manage the knowledge related to a determined process. In other words, there are deliberate efforts to develop and create knowledge (of that process) which is very difficult to imitate. Processes where the company deliberately develops and manages knowledge are known as core processes. In this case, we say that the organisation has a core competence (or ability) in this process.

2. In the case of highly complex universities, the core processes are those related to the management of professors. The reason is the following: the professors’ work consists basically of doing research and teaching what is researched. This work is very complex and requires highly trained professionals. Thus, professors’ activities are difficult for an external manager to understand and to standardise. For this reason, the main coordination mechanism of a group of professors is the standardisation of their abilities or, moreover, the management of their abilities. Management of abilities consists of planning, executing, and monitoring the resource professor, or the management of professor abilities. In other words, the core processes in complex universities are to hire, to evaluate, to promote, to compensate, and to fire professors (or their abilities).
The case of the retail industry is different from the previous one. Here tasks are so simple that they can be understood by a planner. The planner designs a set of activities of logistics, storage, and production. He specifies exactly when, who, and what to do. The objective of the planner is to standardise the activities in order to optimise the material flow. Therefore, the core processes in the case of retail are logistics, storage, and production.
In the case of SMEs, the core process is neither the professional abilities (as in the case of a complex university), nor the standardisation of activities (as in the case of retail), because SMEs are very flexible and adaptable organisations that solve simple problems (not complex professional tasks), but very variable and changing ones. The main coordination mechanism in this case is the ability of the owner to adapt. All activities depend on the owner’s brain! We say that the main coordination mechanism is direct supervision. If we strictly follow this reasoning, we can say that the core process in the case of SMEs is what occurs in the brain of the owner.

3. Given the definition of a core process (that process where the organisation makes a deliberate effort to create and develop knowledge), every process that does not belong to the set of core processes is a candidate to be externalised.

Figure 2.12: Simplified representation of the process of the company Soprole with objectives assigned to different levels.

Problem 2.8. This problem continues Problem 2.6. Using Figures 2.8 and 2.12, model the extended value added chain and the objective tree. Propose a reasonable project and model the respective initiative allocation diagram.

Figure 2.13: Solution Prob. 2.8. The extended value added chain of the company Soprole is obtained easily from Fig. 2.8 and from Fig. 2.12.

Solution 2.8 This is a simple example whose solution is obtained from Fig. 2.8 and from Fig. 2.12. We added the corresponding level to each objective. In addition, we selected three objectives of level three associated with two different initiatives. The value added chain model is in Fig. 2.13, the objective tree is in Fig. 2.14, and finally the initiative allocation diagram is the model in Fig. 2.15.

Figure 2.14: Solution Prob. 2.8. The objective tree of the company Soprole for the Milk process is obtained from Fig. 2.8 and from Fig. 2.12. We added the corresponding level to each objective.

Figure 2.15: Solution Prob. 2.8. The initiative allocation diagram of the company Soprole is obtained from Fig. 2.8 and from Fig. 2.12. We selected three objectives of level three associated with two different initiatives.

Problem 2.9. Consider the well-known company Ikea and review its Web page on ikea-webpage (consulted on October 10, 2023). Let us observe that the main menu has the options Productos and Estancias. In fact, a modern Ikea store is designed such that visitors move from one space to another: they visit, for example, the bedroom space, then the dining room space, and next the children’s space. Each space or estancia is designed assuming a global customer concept and contains a set of diverse products associated with that space. Each grouping concept responds to a different concept of closeness to the customer needs. Considering this situation, propose two product trees obtained from the Web page (draw only some 10 objects in each model for this exercise).

Figure 2.16: Solution Prob. 2.9. The left output tree represents a grouping associated with the productive system and less linked to the customer. Although the difference is not absolute, the right model is more associated with the customer.

Solution 2.9 Two output trees of the company Ikea were obtained from its web page. The left model represents a grouping associated with the productive system and less linked to the customer. Although the difference is not absolutely clear, the right model is more associated with the customer. Each model expresses an orientation of the company strategy, and both are shown in Fig. 2.16.

Figure 2.17: Simplified representation of the value chain of agriculture products in Africa (Source: https://africanharvesters.com).

Problem 2.10. Consider the value chain schema conceptualised by https://africanharvesters.com and shown in Fig. 2.17. The value chain of the figure explains the flow of agriculture products in Africa. Model the corresponding value added chain.

Figure 2.18: Solution Prob. 2.10. The value added chain of Fig. 2.17 is easy to model. One object of the figure was not considered as a process.

Solution 2.10 The value added chain of Fig. 2.17 is easy to model. The object harvesting technology of the figure was not considered as a process. The model is shown in Fig. 2.18.

Problem 2.11. Consider the Chilean enterprise Spevi (www.spevi.cl). It offers products and projects related to acoustic systems for big facilities or buildings. In the menu products, investigate the outputs and past projects it has carried out. Model the current product tree and propose a new product tree that could change the strategy for dealing with the market of acoustic systems.

Figure 2.19: Solution Prob. 2.11. Two output trees of the organisation Spevi according to its web page www.spevi.cl. The left one is more oriented to production, while the right one is a proposal that is more customer oriented.

Solution 2.11 Two output trees of the organisation Spevi were obtained according to its web page www.spevi.cl. The left one is more oriented to production, while the right one is a proposal that is more customer oriented. Both depend on the Spevi strategy. The models are shown in Fig. 2.19.

2.3 Initiatives Identification

The goal of strategic process modelling is to describe objectives, processes, outputs, and initiatives. Thus, the models help to decide about the most important initiatives, projects, or actions on the processes satisfying organisational objectives. The models help to identify causal relationships. In fact, initiatives have effects on customer satisfaction, which in turn impacts the organisation’s objectives.
The analysis can be carried out considering the complete company or an organisational unit of a company. The analysis is similar in both cases.
The following six steps are proposed to identify relevant initiatives (i.e., initiatives that impact the organisation’s objectives):

1. Determine the organisation’s outputs and model them in the product tree. Also identify the relevant output(s). Argue with evidence why the output chosen is relevant.

2. For the relevant output, identify the client and their requirement(s). Model the product allocation diagram. Here too, one must argue with evidence why the requirement chosen for that output is relevant. We call this customer requirement (O1 − E) to denote that it is a type of objective of the external client that we will demand of the complete process. This objective O1 − E now coincides with an internal performance objective (O1 − I, or denoted simply as O1), i.e. what the customer requires from the process (O1 − E) is the same as what we will require as process performance (O1 − I).

3. Model the value added chain (VAC). This model indicates the processes and sub-processes that produce the relevant output of choice. It is usually necessary to disaggregate to a level 3 or 4 to ensure a good analysis.

4. Model the extended value added chain (eVAC). This is to identify performance objectives in the value added chain (obtained in the previous step) that must be achieved to meet the requirements of the external customer identified in step 2.

5. Model the objective tree containing the causally ordered objectives from the external client requirement (O1 − E) to the objectives supported by the lower levels of the value chain (O1, O2, O3, and eventually O4).

6. Identify the initiatives, normally associated with the objectives of type O3 or with groups of such objectives. Model the initiative allocation diagram.

2.4 Initiatives Evaluation

The past two subsections addressed the problem of modelling and


determining (or finding) improvement initiatives. In this Subsection
we will try to evaluate them.
We will propose an evaluation framework that comes from De-
cision Theory. The general problem assumes that the evaluation
problem has three main components:

• A set of decision variables that take different values depending on the decision-making agent. In its simplest form it consists of the decision to invest or not to invest in an initiative.

• A set of variables of nature (or chance variables) that do not depend on the decision maker. In general, these nature variables are related to the performance of the process (caused by the introduction of the initiative) and the customer satisfaction (caused by a change in the performance of the process).

• Finally, the variable that depends on both the variables of nature and the decision, called the consequence (or terminal variable). Generally speaking, these will be the values of the organisational performance for each decision and each occurrence of the nature variables.

All these variables (nature, decision, and consequence) are causally related to each other and can follow various patterns that we identify in Table 2.1. These relationships are explicit. Pattern 1 relates the decision explicitly to the process performance, measured, for example, in terms of time, cost, or quality. Pattern 2 relates the decision to the effect on customer satisfaction through changes in the process performance, also explicitly. Finally, pattern 3 adds the variable performance of the organisation at the end of the chain. Let us note that the number of the pattern corresponds to the number of nature variables.

Table 2.1: Direct decision tree patterns for the initiatives evaluation. Let us note that the number of the pattern corresponds to the number of nature variables.

pattern nr.   decision tree   meaning
1             (diagram)       initiative changes the process performance.
2             (diagram)       initiative improves customer satisfaction.
3             (diagram)       initiative improves organisation performance.

It can be quite complex to obtain such relationships between variables. We usually have a discrete probability distribution that describes them; in many cases, we will not even have such a distribution. Patterns 1 and 2 are simpler, but they are clearly incomplete, since we are interested in the effect on the organisation and it is not enough to recognise the effect of the project on intermediate variables. But they might be the only thing we have. Patterns 1, 2, and 3 in Table 2.1 will be called explicit patterns, as they describe explicit relationships between the variables.
The complexity of the relationships can mean that there are no explicit relationships like the ones in Table 2.1 and there are only implicit relationships. For example, the decision to introduce an initiative is related to the economic impact; thus the relations between the changes in the process and in the customers are hidden. Patterns that describe implicit relationships will be called implicit patterns.

Table 2.2: Examples of implicit decision tree patterns for the initiatives evaluation. The relationships between the variables or nodes are not explicit. As in the case of explicit patterns, the number of the pattern corresponds to the number of nature variables.

pattern nr.   decision tree   meaning
1             (diagram)       initiative changes the customer satisfaction.
2             (diagram)       initiative changes the organisation performance.

Examples of implicit patterns are shown in Table 2.2. Implicit relationships probably hide underlying cause-effect relationships, unlike explicit relations, which try to show clear and detailed relationships. On the other hand, a model that describes all the relationships in detail may be more precise, but more difficult to understand. A fair balance between precision and complexity is recommended.
Now, we will propose a set of problems and exercises.

Problem 2.12. The CEO of the Palace Hotel in Lima is evaluating two Information Technology projects for the passenger administration. She has two alternatives: buying and implementing standard software, or continuing with the current system. An economic evaluation of both alternatives should consider that the standard software alternative has some risks. The economic benefits of both alternatives are shown in Table 2.3. Select the best decision using the following decision criteria: (a) maximax, (b) maximin, (c) minimax regret, (d) Hurwicz (α = 0.4), and (e) Laplace (or equal likelihood). (f) Recognise the pattern number of the decision and whether it is explicit or implicit.

Table 2.3: The CEO of the Palace Hotel in Lima is evaluating two Information Technology projects. The economic benefits of each initiative depend on whether the projects are successful or not.

                     Economic Benefits
Decision             Very Successful    Moderate Success
Standard software    $120.000           $60.000
Legacy system        85.000             85.000

Solution 2.12 We are going to solve decision problems by modelling the corresponding decision trees. A decision tree is a tree structure that contains three types of nodes: decisions, random variables of nature, and consequence values. The random variables of nature obviously do not depend on the decision maker. The consequence value depends on the combination of the decisions and the variables of nature. The random variables of nature can be interpreted as external decisions (or decisions of nature). It is customary to model these variables as tree nodes, where the decision variables are square nodes and the nature variables are circles. The values of the consequence variables corresponding to each combination of decisions and nature variables are modelled as triangles. The arcs between the nodes (or variables) represent the values taken by the decision and nature variables respectively.

Figure 2.20: Solutions of the problem Palace Hotel: (a) maximax, (b) maximin, (c) minimax regret. maximax assumes that nature plays to favour the decision maker; in maximin it plays against him; in minimax regret nature plays against her regret.

In this problem we have two variables: the decision variable, called simply decision, and the nature variable, which we will call economic benefits. The consequences are the values in the table. The tree representing the decision of Table 2.3 is shown in Fig. 2.20. Note that each tree describes the dynamics of events. First, the decision maker plays by choosing which project to carry out: introduce the standard software or continue with the legacy system. The top branch of the tree continues with the standard software decision and the nature variable economic benefits. This branch also has information about a possible extra expense (or benefit) for choosing that branch. It is $0 in this case, because we have no information about it. Then the node that represents the economic benefit variable takes two values: very successful and moderate success. The values of $120 and $60 that appear below each branch come from the values taken by the respective consequence variable, which was represented as a terminal triangle in each case. The same interpretation follows for the lower branch, the legacy system decision.
(a) Let us note that the left tree shows the decision under the maximax criterion. maximax is an optimistic criterion for making a decision. This criterion assumes that nature will play in favour of the agent, that is: since the agent made the decision to introduce standard software, nature will play in favour of the decision maker, that is, it will play for an economic benefit of $120. For this reason, the arc between the economic benefit nature variable and the value of the consequence that corresponds to $120 is marked with a thick green line. In the lower branch, both arcs are marked, because they have the same values. So far nature has played in favour of the agent, hence the second max of the word max-i-max. Next, the value taken by the economic benefits variable is entered under the corresponding circle as $120. The same occurs in the branch that corresponds to the legacy system decision, now with the value of $85. Finally, the CEO of the Palace Hotel will make the decision by choosing between the maximum of $120 and $85 (hence the prefix max of the word max-i-max). The agent will clearly choose the standard software decision, which gives her $120, the final value of the decision.
(b) The maximin decision criterion assumes an opposite attitude, that is, a pessimistic position. The agent assumes that nature will always play against the agent; hence the name of the max-i-min criterion. The decision tree in the centre of Fig. 2.20 presents the solution with this criterion, with a value of $85.
(c) A third criterion that is acceptable to consider is called minimax regret. This criterion tries to minimise the loss of opportunity of having made a bad decision, which the agent would therefore regret. To do this we also build a decision tree, but from another table, the opportunity loss table shown in Table 2.4.

Table 2.4: Regret table for the example Hotel Palace. This table shows the opportunity loss of bad decisions. If the CEO of the Palace Hotel decides for standard software, the opportunity loss will be $0 if nature decides for very successful and $60 for moderate success. The criterion for selecting the best decision will be minimax.

                     Opportunity Loss
Decision             Very Successful    Moderate Success
Standard software    $0                 $60.000
Legacy system        0                  0

To determine the table, proceed as follows. Suppose the agent decides on standard software. If nature plays very successful, the opportunity loss is $0, but if nature plays moderate success, the loss will be $60. In the event that the agent decides on the legacy system, the opportunity loss will be $0 regardless of nature’s decision. The final decision table is shown in Table 2.4. It is suggested to use a pessimistic decision criterion to determine the decision using this table, which translates into a min-i-max (or minimax) rule; that is, nature will try to play maximising the agent’s opportunity losses and the agent will try to minimise the effects of nature’s plays.
(d) The fourth and fifth decision-making criteria are intermediate between the optimistic and pessimistic criteria (maximax and maximin respectively).
The fourth criterion considers a degree of optimism called the optimism factor, traditionally denoted α. This factor weights the consequence in favour of the decision maker by α and weights the bad consequence by (1 − α). This criterion is called Hurwicz. Assume in this example that α = 0.4, so the α-weighted value of the standard software decision is α120 + (1 − α)60 = $84 and the value of the legacy system decision is $85. Therefore, the decision under the Hurwicz criterion with α = 0.4 is legacy system and reaches $85.
(e) Finally, the fifth criterion, also called Laplace or equal likelihood, weights the outcomes equally; with two states of nature, as in this problem, it coincides with the Hurwicz criterion with α = 0.5, that is, it is neither optimistic nor pessimistic but located in the middle ground. Then the weighted value of the standard software decision is 0.5 × 120 + 0.5 × 60 = $90 and the value of the legacy system decision is $85. Therefore, the decision under the Laplace criterion is standard software and reaches $90.
(f) The decision tree pattern is of type 1 implicit, because it relates the cause (the decision) to the final effect (organisational performance measured by economic benefits) through a single nature variable, leaving the intermediate relationships implicit.

Problem 2.13. The Business Process Department of Asta Co. is trying to decide which alternative project to implement. There are multiple benefits associated with each project, such as cost and time reduction, as well as customer satisfaction due to a higher service quality. Furthermore, there are three strategies to implement them: internal development, outsourcing, and a mix of both. The implementation policy depends on a decision of the board of the company, and the Business Process Department does not know it. For that reason, the department has prepared an analysis of the different customer impacts of each alternative and proposes a discrete scale for the benefits from A to F, where A is the best and F is the worst customer impact. The analysis is summarised in Table 2.5. Select the best decision using the following decision criteria: (a) maximax, (b) maximin. In addition, determine, if possible, the decision using the following criteria: (c) minimax regret, (d) Hurwicz with α = 0.6, (e) Laplace. Finally, (f) classify the pattern of the problem and whether it is explicit or implicit.

Table 2.5: The Business Process Department of Asta Co. is trying to decide which alternative project to implement. Each project has a customer impact categorised from A (best) to F (worst), which depends on the implementation type.

            Customer Impact
Decision    Intern    Outsourcing    Mix
I           B         D              D
II          C         B              F
III         F         A              C

Solution 2.13 (a) and (b). The solution follows the same procedure as the previous one, except that the consequence variable takes non-numeric values; therefore they can be neither summed nor multiplied by another number. But their values have an order. Therefore, we assign an ordered scale to each value, i.e., 5 for A, 4 for B, down to 0 for F, so that the scale has the usual order. With this consideration, we obtain the corresponding decision trees for the criteria maximax and maximin, shown in Fig. 2.21 and Fig. 2.22 respectively.

Figure 2.21: maximax assumes that nature plays in favour of the agent. The best decision is III.

(c), (d), and (e). Because the elements of the set A to F, represented by the numeric scale 0 to 5, can be neither summed (e.g., what is the meaning of 1 + 2? Does it mean E + D?) nor multiplied by a number, the criteria minimax regret, Hurwicz, and Laplace cannot be applied.
(f) The pattern of this problem is of type 1 implicit.
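As a small illustration (a sketch with our own names, not part of the original text), the ordinal mapping and the two applicable criteria of this problem can be reproduced in Python as follows.

# Minimal sketch for Table 2.5: map the ordinal scale A..F to 5..0 and
# apply only the order-based criteria (maximax and maximin).
scale = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}

impacts = {                        # intern, outsourcing, mix
    "I":   ["B", "D", "D"],
    "II":  ["C", "B", "F"],
    "III": ["F", "A", "C"],
}

ranks = {d: [scale[x] for x in row] for d, row in impacts.items()}

best_maximax = max(ranks, key=lambda d: max(ranks[d]))   # III (it contains an A)
best_maximin = max(ranks, key=lambda d: min(ranks[d]))   # I (its worst value is D)
print(best_maximax, best_maximin)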

Figure 2.22: maximin assumes that nature plays against the agent. The best decision is I.

Problem 2.14. Rich and Co. is considering implementing one of four different process projects, named I, II, III, and IV. Each project has an implicit impact on customer satisfaction or dissatisfaction, but an explicit impact on the process performance measured by cycle time. The variation of cycle time for each project is estimated, in percentage, in Table 2.6. Determine the best decision using the following criteria: (a) maximax, (b) maximin, (c) minimax regret, (d) Hurwicz (α = 0.6), and (e) equal likelihood. Determine (f) the decision problem pattern of the problem and whether it is explicit or implicit.

Table 2.6: Rich and Co. is considering implementing one of four different process projects. Each project has an impact on the variation of the cycle time of its processes.

            Process Performance (%)
Decision    Low    Moderate    High
I           -32    13          44
II          -51    18          63
III         -27    7           58
IV          -63    -16         96

Solution 2.14 The solution follows the same procedure as Problem 2.12.
(a) and (b). The solutions using the criteria maximax and maximin are shown in Fig. 2.23 and Fig. 2.24 respectively. Let us observe that the corresponding decisions are IV and III respectively.

Figure 2.23: maximax assumes that nature plays in favour of the agent. According to this criterion, the best decision is IV.

Figure 2.24: Problem Rich and Co. maximin assumes that nature plays against the agent. According to this criterion, the best strategy is III.

(c) The application of the criterion minimax regret requires determining the table of opportunity loss, which is shown in Table 2.7. For example, the opportunity loss for decision I and low process performance is 44 − (−32) = 76%. With Table 2.7 we modelled the decision tree of Fig. 2.25, to which the criterion minimax is applied. The best decision is I.

Table 2.7: Problem Rich and Co. This table is the opportunity loss, measured in terms of the percentage of increase of the cycle time, for each decision and different levels of process performance.

            Opportunity Loss (%)
Decision    Low    Moderate    High
I           76     31          0
II          114    45          0
III         85     51          0
IV          159    112         0

Figure 2.25: Problem Rich and Co. minimax regret assumes that nature plays against the agent, maximising her regret. The best decision according to this criterion is I.

(d) Now we will apply the Hurwicz criterion with α = 0.6. Remember that α represents an optimism factor; then for each decision we multiply the best process performance of Table 2.6 by 0.6 and the worst by 0.4. That is, the value for decision I is 0.44α − 0.32(1 − α) = 13.60%, for decision II it is 17.40%, for decision III the result is 24.00%, and for decision IV it is 32.40%. Therefore, the best decision is project IV.
(e) The Laplace criterion is applied as before, but with the factor 0.5. Thus, the value for decision I is equal to 0.44 × 0.5 − 0.32 × 0.5 = 6.00%, for decision II it is 6.00%, for decision III the cycle time reduction is 15.50%, and for decision IV it is 16.50%. Therefore, the best decision is project IV.
(f) The pattern of this decision problem is of type 1 explicit.
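The computations of parts (c) and (d) can be checked with a short script. The sketch below (illustrative names, data from Table 2.6) rebuilds the opportunity-loss table as it is constructed in this text, i.e. the loss is measured against the best outcome of the same decision, and then evaluates the Hurwicz values.

# Minimal sketch for Problem 2.14: regret table as built in the text, and Hurwicz values.
performance = {                 # low, moderate, high (% change of cycle time)
    "I":   [-32, 13, 44],
    "II":  [-51, 18, 63],
    "III": [-27, 7, 58],
    "IV":  [-63, -16, 96],
}

# Opportunity loss relative to the best outcome of the same decision (Table 2.7).
regret = {d: [max(row) - x for x in row] for d, row in performance.items()}
print(regret["I"])                                    # [76, 31, 0]
print(min(regret, key=lambda d: max(regret[d])))      # I, as in Fig. 2.25

def hurwicz(row, alpha):
    return alpha * max(row) + (1 - alpha) * min(row)

for d, row in performance.items():
    print(d, round(hurwicz(row, 0.6), 2))             # I: 13.6, II: 17.4, III: 24.0, IV: 32.4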

Problem 2.15. Microcomp is a Chilean-based software company. It is planning to invest in one of several cost reduction projects: downsizing, process automation, changing strategy levels, process outsourcing, or product redesign. It will take approximately 2 years to build the necessary working capital for each project. The eventual cost of the initiative will differ between projects and will even vary within a project depending on financial, labour, and risk conditions, including technical support. The company has estimated the facility cost and the cost reduction (in unitary costs) of each project under three different scenarios, as shown in Table 2.8. Determine the best decision using the following decision criteria: (a) maximax, (b) maximin, (c) minimax regret, (d) Hurwicz (α = 0.4), (e) equal likelihood (or Laplace). (f) Determine the decision pattern and whether it is explicit or implicit.

Table 2.8: Microcomp is a Chilean-based software company. It is planning to invest in different projects. The unitary cost reduction of its product depends on different scenarios.

                          Scenario
Project                   Good     Regular    Bad
Downsizing                $21,7    $19,1      $15,2
Process automation        19,0     18,5       17,6
Changes strategy levels   19,2     17,1       14,9
Process outsourcing       22,5     16,8       13,8
Product redesign          25,0     21,2       12,5

Solution 2.15 (a) maximax: The decision is product redesign with value $25. (b) maximin: The decision is process automation with value 17.6%. (c) minimax regret: The decision is process automation with an opportunity loss of $−1.4. (d) Hurwicz (α = 0.4): process automation with value 18.6%. (e) Laplace: product redesign with value $18.75.

Problem 2.16. The governmental agency for environment protection is evaluating investments in business process projects. Each project has an implicit impact on pollution reduction, which is very difficult to evaluate. The impacts depend not only on the project type, but also on the type of policy for the implementations, whether they are long range measures or not. The engineering department determined the impacts of each project according to the implementation policy, categorised into five items, as shown in Table 2.9. The policies are categorised according to the time duration of the implementation: very long range (I), long range (II), normal (III), short range (IV), and very short range (V). (a) If the agency employs an innovative strategy, it will use the maximax criterion. What will be its best decision? (b) If the agency employs a conservative strategy, it will use the maximin criterion. What will be its best decision? (c) What is the best decision if the agency applies the criterion minimax regret? (d) What is the best decision by applying the Hurwicz criterion with α = 0.7? (e) What will be its best decision if the agency considers any of the five policies equally likely (or Laplace)? (f) Finally, determine the pattern of the decision problem and whether it is explicit or implicit.

Table 2.9: The governmental agency for environment protection is evaluating investments in business process projects. The implicit effect on pollution of each project is calculated depending on the policy for the implementations.

                                    Reduction (%)
Decision                            I     II    III   IV    V
Cost reduction                      13    8     19    17    9
Automation                          9     18    8     19    22
Legacy systems                      16    26    5     13    24
Standard software                   8     14    13    20    7
Quality management system           18    30    22    3     2
Business intelligence initiative    5     8     18    13    26

Solution 2.16 (a) maximax: The decision is quality management system with value 30%. (b) maximin: The decision is cost reduction or automation with value 8%. (c) minimax regret: The decision is cost reduction with an opportunity loss of 11%. (d) Hurwicz (α = 0.7): quality management system with value 21.6%. (e) Laplace: quality management system with value 16%.

Problem 2.17. A machine shop owner is attempting to decide whether to invest in a warehouse management system, a workflow management system, or an integrated material requirement planning system. The return from each will be determined by whether the company succeeds in getting a government military contract. The profit or loss from each purchase and the probabilities associated with each contract outcome are shown in Table 2.10. Determine the best decision according to (a) the Expected Value (EV) and (b) the Expected Opportunity Loss (EOL). In addition, compute (c) the Expected Value of Perfect Information (EVPI).

Table 2.10: A machine shop owner is attempting to decide whether to invest. The profit or loss from each investment and the probabilities are associated with each contract outcome.

                                            Economic Benefits
                                            Contract    No contract
Decision                                    0.4         0.6
Warehouse management system                 $40.000     $−8.000
Workflow management system                  20.000      4.000
Integrated material requirement planning    12.000      10.000

Solution 2.17 The difference between this problem and the previous ones is that we have a probability distribution for the random variables of nature. In other words, we have imperfect information about the value that the variable of nature will take. The imperfect information is modelled through a probability distribution. In this case, the variable of nature is the economic benefit, which depends on whether or not a contract is obtained. This variable has a probability of 0.4 for the positive occurrence and a probability of 0.6 in the negative case. (Probabilities are sometimes confused with Hurwicz’s α optimism factor. Not only are they conceptually different, but they are applied differently: the α-factor multiplies the maximum value, while the probabilities are associated with the events, whether maximum or not.) With this distribution we will use two decision criteria to evaluate each decision, and we will calculate the value of having perfect information in the following paragraphs.
(a) The first decision criterion is the Expected Value (EV). The EV is the weighted value of the random variables, and the decision rule will be the decision with maximum EV. As before, we will also use the decision tree to apply this criterion. The solution is shown in Fig. 2.26. Unlike the previous decision trees, each variable of nature (represented by a yellow circle) has a probability assigned to each value it can take. Note that we will assume that the probability distributions are discrete. As before, the value below the yellow circle is the weighted average of all the branches, while the value below the decision node is the value obtained by choosing the branch with the highest value. Thus, the decision for this problem is warehouse management system with an EV of $11.2.

Figure 2.26: The best decision according to the Expected Value (EV) is warehouse management system with value $11.2.

(b) The second decision criterion is the Expected Opportunity Loss (EOL), which we can also call the expected value of the opportunity loss. Like EV, EOL consists of choosing the decision that has the lowest expected value of opportunity losses. We discussed the concept of opportunity loss earlier, when we built the opportunity loss tables and applied the minimax regret rule. Here we will use the same kind of table as before, but we will compute the expected value of the losses (previously we applied the minimax rule to this table). In this problem, the opportunity losses for each decision are shown in Table 2.11.

Table 2.11: The opportunity loss table for the machine shop owner problem. This table is obtained from Table 2.10.

                                            Opportunity Loss
                                            Contract    No contract
Decision                                    0.4         0.6
Warehouse management system                 $0          $48
Workflow management system                  0           16
Integrated material requirement planning    0           2

The corresponding decision tree is shown in Fig. 2.27. In this case we must choose the minimisation criterion. Therefore, the decision for this problem is integrated material requirement planning with an EOL of $1.2.
(c) The following computation is very interesting, because it tries
to measure how much the machine owner would be willing to pay
for having perfect information. That is, the question is: if she has the
information on whether the contract will be approved before mak-
ing the decision, what would be the average value of the decision?
Figure 2.27: Expected Opportunity Loss. The best decision is integrated material requirement planning with value $1.2.

Figure 2.28: The decision tree and identification of the best decision under perfect information for calculating the Expected Value of Perfect Information (EVPI), which is 22.0 − 11.2 = $10.8.

To make this computation we will use the decision tree again, but, unlike the actual sequence of events (in which the decision occurs before the instantiation of the variables), we will assume that the instantiation of the variables occurs before the decision. This tree is shown in Fig. 2.28. Observe in this figure that the decisions are maximisations and that the tree has an expected value of $22.0. This would be the expected value if we had perfect information; if we subtract the EV obtained before, we obtain a value of $22.0 − EV = 22.0 − 11.2 = $10.8, which corresponds to the Expected Value of Perfect Information (EVPI). That is, the EVPI is the value that the decision maker is willing to pay to count on perfect information in the context of a decision problem with imperfect information.
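These three quantities are straightforward to compute. The following minimal Python sketch (our own helper names; data from Table 2.10 and the opportunity-loss table of Table 2.11) reproduces the EV, the EOL as used in this text, and the EVPI.

# Minimal sketch for Problem 2.17: EV, EOL (with the text's opportunity-loss table) and EVPI.
probs = [0.4, 0.6]                                  # contract, no contract
payoffs = {                                         # in thousands of dollars
    "warehouse management system":              [40, -8],
    "workflow management system":               [20, 4],
    "integrated material requirement planning": [12, 10],
}

def expected(values, p):
    return sum(x * q for x, q in zip(values, p))

ev = {d: expected(row, probs) for d, row in payoffs.items()}
print(max(ev, key=ev.get), round(max(ev.values()), 1))      # warehouse management system, 11.2

# Opportunity loss relative to the best outcome of the same decision (Table 2.11).
loss = {d: [max(row) - x for x in row] for d, row in payoffs.items()}
eol = {d: expected(l, probs) for d, l in loss.items()}
print(min(eol, key=eol.get), round(min(eol.values()), 1))   # integrated material requirement planning, 1.2

# EVPI: expected value when the state is known before deciding, minus the best EV.
ev_perfect = expected([max(row[i] for row in payoffs.values()) for i in range(2)], probs)
print(round(ev_perfect - max(ev.values()), 1))              # 22.0 - 11.2 = 10.8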
Thus, we have solved the questions of the problem.
So far we have discussed the concept of the expected value of perfect information. We noted that if perfect information could be obtained regarding which state of nature would occur in the future (before the decision), the agent could obviously make better decisions. Although perfect information about the future is rare, it is often possible to gain some amount of additional (imperfect) information that will improve decisions, particularly with the help of engineering or consulting projects.
Suppose that the machine shop owner has decided to hire a professional economic analyst who will provide additional information about future economic conditions and thus about whether the contract will be obtained or not. The analyst constantly studies the economic and political situation, and the results of this research are what the machine owner will be paying for. The consultant will provide the machine owner with a report predicting one of two outcomes. The report will be either contract, indicating that good economic conditions are most likely to prevail in the future and that the contract is probable, or not contract, indicating that poor economic conditions will probably occur and the contract will not be obtained.
Now, based on the analyst’s past record in forecasting future economic conditions, the machine owner has determined conditional probabilities of the different report outcomes, given the occurrence of each state of nature in the future. We will use the following notation to express these conditional probabilities: c: the contract will happen, nc: the contract will not happen, C: the consultant reports that the contract will happen, and NC: the consultant reports that the contract will not happen. Let us remember that the probability assigned by the machine owner that the contract occurs is P(c) = 0.40 and that the contract does not occur P(nc) = 0.60, as we saw in Problem 2.17. This probability is called the a priori probability, i.e., the probability prior to having the consultant’s recommendation.

Thus, the conditional probability of each report outcome, given the occurrence of each state of nature, follows. The conditional probability that the consultant reports C given that the contract will, in fact, happen is P(C|c) = 0.80, and that he reports NC given that the contract will happen is P(NC|c) = 0.20. Additionally, the conditional probability that the consultant reports NC given that the contract will, in fact, not happen is P(NC|nc) = 0.90, and that he reports C given that the contract will not happen is P(C|nc) = 0.10. Observe that P(C|c) + P(NC|c) = 1.0 and P(C|nc) + P(NC|nc) = 1.0.
Given the a priori probabilities (P(c) and P(nc)) and the conditional probabilities (P(C|c), P(NC|c), P(C|nc), and P(NC|nc)), the a priori probabilities can be revised by means of Bayes’ rule. The revised probabilities are called a posteriori probabilities. They can be determined using Bayes’ rule as follows:

P(c|C) = P(c, C) / P(C) = P(C|c) P(c) / [P(C|c) P(c) + P(C|nc) P(nc)]
       = (0.80 × 0.40) / (0.80 × 0.40 + 0.10 × 0.60) = 0.32 / 0.38 = 0.84.

In the formula, we have calculated the joint probability P(c, C) = 0.32. In addition, the probability that the consultant reports contract is P(C) = 0.38; thus the probability that the consultant reports not contract is P(NC) = 1 − P(C) = 1 − 0.38 = 0.62.
Remember that the prior probability that the contract will occur in the future was estimated by the agent as P(c) = 0.40. However, by obtaining the additional information of a positive report from the analyst, the investor can revise the prior probability of good conditions to a probability P(c|C) = 0.84 that the contract will occur given that the consultant reports that the contract will happen. A substantial change from 0.40 to 0.84! The remaining posterior (revised) probabilities are P(nc|C) = 0.16, P(nc|NC) = 0.87 and P(c|NC) = 0.13.
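As a quick check of the computation above, the posterior probabilities can be reproduced directly from Bayes’ rule (a minimal sketch; the variable names are our own):

# Minimal sketch: posterior probabilities for the machine shop problem via Bayes' rule.
p_c, p_nc = 0.40, 0.60                      # a priori probabilities
p_C_c, p_C_nc = 0.80, 0.10                  # P(C|c) and P(C|nc)

p_C = p_C_c * p_c + p_C_nc * p_nc           # probability the consultant reports "contract"
p_c_C = p_C_c * p_c / p_C                   # P(c|C)
p_c_NC = (1 - p_C_c) * p_c / (1 - p_C)      # P(c|NC)

print(round(p_C, 2), round(p_c_C, 2), round(p_c_NC, 2))     # 0.38 0.84 0.13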
Mathematically, the problem is as follows: given the a priori probabilities P(c) and P(nc) and the conditional probabilities P(C|c), P(NC|c), P(C|nc), and P(NC|nc), we calculated (by using Bayes’ rule) the a posteriori probabilities P(c|C), P(nc|C), P(c|NC), P(nc|NC) and the probabilities of the report values P(C) and P(NC). Recall that the a priori probability is the probability without the consultant’s information, while the conditional probabilities are obtained from the past experience or expertise of the consultant.

Table 2.12: Computation of the a posteriori probabilities. The tables correspond to the cases with recommendation contract (C) and not contract (NC) respectively. The given values are shown in red in the original.

Recommendation C:
              (a) a priori     (b) conditional     (c) = (a)·(b) joint    (c)/(d) a posteriori
contract      P(c) = 0.40      P(C|c) = 0.80       P(c, C) = 0.32         P(c|C) = 0.84
no contract   P(nc) = 0.60     P(C|nc) = 0.10      P(nc, C) = 0.06        P(nc|C) = 0.16
sum (d)                                            P(C) = 0.38

Recommendation NC:
              (a) a priori     (b) conditional     (c) = (a)·(b) joint    (c)/(d) a posteriori
contract      P(c) = 0.40      P(NC|c) = 0.20      P(c, NC) = 0.08        P(c|NC) = 0.13
no contract   P(nc) = 0.60     P(NC|nc) = 0.90     P(nc, NC) = 0.54       P(nc|NC) = 0.87
sum (d)                                            P(NC) = 0.62

One of the difficulties that can occur with additional information is that, as the size of the problem increases (i.e., as we add more decision alternatives and states of nature), the application of Bayes’ rule to compute the posterior probabilities becomes more complex. In such cases, the posterior probabilities can be computed by using tables. We need two tables for computing the posterior probabilities, one for a contract report (C) and one for a not contract report (NC). The tables are shown in Table 2.12. Let us observe that the upper table corresponds to the recommendation contract (C) of the consultant; thus we need to compute the same number of tables as recommendations. You can download a Numbers (macOS) sheet for computing complex a posteriori probabilities from this link, free to use.
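For larger problems, the tabular computation described above can be automated. The following sketch (our own helper, not the spreadsheet mentioned in the text) builds the joint and marginal probabilities and the a posteriori probabilities for every possible recommendation, mirroring the layout of Table 2.12.

# Minimal sketch: a posteriori probability table for each recommendation, as in Table 2.12.
priors = {"c": 0.40, "nc": 0.60}
conditional = {                      # conditional[report][state] = P(report | state)
    "C":  {"c": 0.80, "nc": 0.10},
    "NC": {"c": 0.20, "nc": 0.90},
}

def posterior_tables(priors, conditional):
    tables = {}
    for report, likelihood in conditional.items():
        joint = {s: priors[s] * likelihood[s] for s in priors}   # column (c) = (a)·(b)
        marginal = sum(joint.values())                           # sum (d) = P(report)
        posterior = {s: joint[s] / marginal for s in priors}     # column (c)/(d)
        tables[report] = (marginal, posterior)
    return tables

for report, (p_report, posterior) in posterior_tables(priors, conditional).items():
    print(report, round(p_report, 2), {s: round(p, 2) for s, p in posterior.items()})
# C 0.38 {'c': 0.84, 'nc': 0.16}
# NC 0.62 {'c': 0.13, 'nc': 0.87}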
In summary, in the previous problem we determined the best decision by using the Expected Value and Expected Opportunity Loss criteria. In fact, the best decision was warehouse management system with an expected benefit of $11.2 under the criterion EV, and integrated material requirement planning under the criterion EOL with an expected opportunity loss of $1.2. In addition, the Expected Value of Perfect Information (EVPI) was $10.8. Now, we will determine the information value of the consultant, the Expected Value of Sample Information (EVSI).

Problem 2.18. Regarding the previous problem, the following conditional probabilities are given: the consultant reports C given that the contract will, in fact, happen with P(C|c) = 0.80, and reports NC given that the contract will not happen with P(NC|nc) = 0.90. (a) What is the economic value of this additional information, i.e. the Expected Value of Sample Information (EVSI)? (b) Let us suppose that another consultant (consultant-3) is offering another report: C: contract, U: undefined, and NC: not contract. The machine owner has a register of qualifications of consultant-3 with the following conditional probabilities: P(C|c) = 0.9, P(U|c) = 0.05, P(NC|c) = 0.05, and P(C|nc) = 0.1, P(U|nc) = 0.2, P(NC|nc) = 0.70. What is the EVSI of consultant-3? (c) Which consultant is better?

Solution 2.18 (a) To solve the problem, we use the computation of the a posteriori probabilities and the probabilities of the reports. Then we model the decision tree shown in Fig. 2.29. Let us observe the sequence of events: first the occurrence of the report, represented by a random variable, then the decision according to the report, and finally the occurrence of the nature variable that takes the values contract or not contract.
According to the decision tree, the Expected Value of this additional information is $18.64. The strategies are warehouse management system in case the consultant recommends contract (C) and integrated material requirement planning in case the consultant recommends not contract (NC). Then the Expected Value of Sample Information is EVSI = $18.64 − EV = $18.64 − $11.2 = $7.44; that is, the value that the machine owner is willing to pay for the consultant is $7.44. Recalling that the EVPI is $10.80, the EVSI is 70% of the EVPI. This fraction is called the informational efficiency of the consultant.

Figure 2.29: Decision tree of the problem of the machine owner with the additional information of the consultant. The strategies are warehouse management system in case the consultant recommends contract (C) and integrated material requirement planning in case the consultant recommends not contract (NC).

(b) In the case of the additional information provided by consultant-3, we compute the corresponding a posteriori probabilities by using the Numbers (macOS) sheet from this link. The a posteriori probabilities are P(c|C) = 0.86 = 1 − P(nc|C), P(c|U) = 0.14 = 1 − P(nc|U), and P(c|NC) = 0.05 = 1 − P(nc|NC), while the probabilities of the recommendations are P(C) = 0.42, P(U) = 0.14, and P(NC) = 0.44. After these computations, we obtain an expected value of $19.86. Then the Expected Value of Sample Information is EVSI = $19.86 − EV = $19.86 − $11.2 = $8.66; that is, the value that the machine owner is willing to pay for consultant-3 is $8.66. Recalling that the EVPI is $10.80, the EVSI is 80% of the EVPI.
(c) Comparing the informational efficiency of the consultant (70%) and consultant-3 (80%), we can easily see that consultant-3 is better than the consultant.
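The expected value with sample information and the EVSI can also be computed mechanically: for each possible report, take the decision with the best posterior expected value, weight it by the probability of that report, and subtract the EV without information. The sketch below (our own names; data from Table 2.10 and the first consultant's conditional probabilities) follows this recipe; the consultant-3 case is obtained by replacing the conditional probabilities.

# Minimal sketch for Problem 2.18: EVSI and informational efficiency of a consultant.
priors = {"c": 0.40, "nc": 0.60}
conditional = {"C":  {"c": 0.80, "nc": 0.10},     # P(report | state)
               "NC": {"c": 0.20, "nc": 0.90}}
payoffs = {"warehouse management system":              {"c": 40, "nc": -8},
           "workflow management system":               {"c": 20, "nc": 4},
           "integrated material requirement planning": {"c": 12, "nc": 10}}

ev_without = max(sum(priors[s] * v[s] for s in priors) for v in payoffs.values())   # 11.2

ev_with = 0.0
for report, likelihood in conditional.items():
    p_report = sum(priors[s] * likelihood[s] for s in priors)
    posterior = {s: priors[s] * likelihood[s] / p_report for s in priors}
    best = max(sum(posterior[s] * v[s] for s in priors) for v in payoffs.values())
    ev_with += p_report * best

evsi = ev_with - ev_without
print(round(ev_with, 2), round(evsi, 2))   # approx. 18.68 and 7.48; the text's $18.64/$7.44
                                           # use posteriors rounded to two decimals
print(round(evsi / 10.8, 2))               # about 0.69, i.e. roughly the 70% efficiency above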

Problem 2.19. In Problem 2.16, the agency reviewed the pollution impacts of each project and determined the probabilities shown in Table 2.13. (a) Using the expected value, rank the agency’s decisions from best to worst. (b) Using the EOL, rank the agency’s decisions from best to worst. (c) Let us suppose a new revision of the probabilities, assigning 60% to the very short range policy (policy V) and a 10% chance to each of the other four policies. What decision would be made?

Table 2.13: The environment agency reviewed the impacts on the pollution reduction of its projects and assigned new probabilities to the implementation policies.

Implementation Policy    Probability
I                        0.4
II                       0.1
III                      0.2
IV                       0.2
V                        0.1

Solution 2.19 (a) EV: The best solution is quality management system with value 15.4%. (b) EOL: The best solution is quality management system with value −14.6%. (c) New probability distribution: The best solution is legacy systems with value 20.4%.
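The rankings in (a) and the re-evaluation in (c) are easy to verify with a short script (a sketch with our own structure; the data come from Tables 2.9 and 2.13).

# Minimal sketch for Problem 2.19: expected-value ranking under two probability distributions.
reduction = {                                    # policies I..V (% pollution reduction)
    "cost reduction":                   [13, 8, 19, 17, 9],
    "automation":                       [9, 18, 8, 19, 22],
    "legacy systems":                   [16, 26, 5, 13, 24],
    "standard software":                [8, 14, 13, 20, 7],
    "quality management system":        [18, 30, 22, 3, 2],
    "business intelligence initiative": [5, 8, 18, 13, 26],
}

def rank(probs):
    ev = {d: sum(p * x for p, x in zip(probs, row)) for d, row in reduction.items()}
    return sorted(ev.items(), key=lambda kv: kv[1], reverse=True)

print(rank([0.4, 0.1, 0.2, 0.2, 0.1]))   # quality management system first, approx. 15.4
print(rank([0.1, 0.1, 0.1, 0.1, 0.6]))   # legacy systems first, approx. 20.4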

Problem 2.20. Miramar Co. is going to introduce one of three new improvements to its products: a new web page, more personal contact, or business automation. The market conditions (favourable, stable, or unfavourable) will determine the profit or loss the company realises, as shown in Table 2.14. (a) Compute the expected value for each decision and select the best one. (b) Develop the opportunity loss table and compute the expected opportunity loss for each product. (c) Determine how much the firm would be willing to pay to a market research firm to gain better information about future market conditions. (d) Let us suppose that the company can contract an information systems consultant. Let f, s, and u denote the events favourable, stable, and unfavourable, and F, S, and U the respective recommendations. According to company registers, the following probabilities are obtained: P(F|f) = 0.80, P(S|f) = 0.10, P(S|s) = 0.70, P(F|s) = 0.05, P(U|u) = 0.90, and P(F|u) = 0.01. Determine the informational efficiency of the consultant.

Table 2.14: Miramar Co. is going to introduce one of three new improvements to its products. The market conditions (favourable, stable, or unfavourable) will determine the profit or loss the company realises.

                         Market Conditions
                         Favourable    Stable     Unfavourable
Decision                 0.2           0.7        0.1
Web page                 $120.000      $70.000    $−30.000
More personal contact    60.000        40.000     20.000
Automation               35.000        30.000     30.000

Solution 2.20 (a) EV: The best solution is web page with $70. (b) EOL: The best solution is web page with −$50. (c) EVPI = $6. (d) EVSI = $0.22, i.e. an informational efficiency of approximately 4%.

2.5 Initiatives Evaluation by Using Analytical Hierarchical Pro-


cess

Evaluating decision alternatives using decision theory is a robust


method that provides many analytical capabilities and powerful
results. However, building a model of this type requires knowing
parameters that are difficult to obtain, such as, for example, the
probabilities of occurrence of unknown events. The Analytical Hi-
erarchy Process (AHP), on the other hand, is a method that seeks to
determine the weights of solution alternatives according to various
decision criteria, ensuring that the result is consistent and meets the
transitivity relationship of prioritization. The transitivity relation be-
tween decision alternatives says that if an option A is preferred to
another B and this, in turn, is preferred to C, then it must be that A
is preferred to C. The AHP method is convenient and widespread. A
decision must also satisfy another pair of logical axioms: reflexivity
and antisymmetry; however, they are less debatable in practical cases.
AHP is a method for ordering or weighing decision alternatives.
It also serves to select the best one when the decision maker has
multiple objectives or criteria. The process assumes that relevant al-
ternatives and selection criteria are known. The question is then to
determine consistent weights for the criteria and alternatives. For ex-
ample, the manager of a company must choose between investing in
one technology or another. In another case, the software development
unit, which wants to purchase or contract a service, has several alter-
native suppliers from which to choose. However, in both examples,
the manager or head of the engineering unit must compare alterna-
tives based on specific predefined criteria. For example, the manager
who must choose a technology to develop has criteria of cost, tan-
gible benefits, intelligible benefits and potential. The engineering
manager will have criteria for profitability, cost and supplier expe-
strategic process management 65

rience. Then, the AHP allows finding consistent numerical weights


for each decision criterion and weighting the decision alternatives
for each criterion. As a result, you can weigh options based on both
weights.
Let’s consider the following example. Banco las Américas is a
bank in Chile that is determined to improve university students’
service. The bank has identified four criteria university students
value and three projects to prioritise. Figure 2.30 shows the problem
objective tree. The four essential criteria that students value are (1)
checking account visibility, (2) reduced college credit cycle time, (3)
low student insurance prices, and (4) better travel insurance
offers for foreign study. The bank also identified three projects, not necessarily
exclusive, to execute: (1) invest in data management projects, (2)
invest in hardware infrastructure, and (3) invest in business process
management. The question arises whether all projects have similar
importance or there is some priority between them.

Figure 2.30: Banco las Américas is determined to improve university students' service. The bank has identified four criteria university students value and three projects to prioritise.

Pairwise comparison. In a pairwise comparison, the decision maker


compares two alternatives (the pair) according to a decision crite-
rion and indicates a preference. For example, the decision maker
compares invest in data management projects (A) and invest in hardware
infrastructure (B). These comparisons are made using numerical val-
ues that indicate different levels of preference and are shown in Table
2.15.
Experienced AHP researchers have determined the scale and
meaning of each level of importance (or preference) in Table 2.15.
Each level of preference is interpreted as follows: if A is “moder-
ately” important relative to B according to criterion C1 , then it is
assigned a value of 3. This number measures the degree of prefer-
ence of A over B. Obviously, opposite comparisons are no longer
necessary, that is, it is not necessary to compare B with A according

intensity of importance            value
equal importance                   1
moderate importance                3
essential or strong importance     5
very strong importance             7
extreme importance                 9

Table 2.15: Preference scale for pairwise comparison. Values 2, 4, 6 and 8 are intermediate values between two adjacent judgements.

to the same criterion, because we know that B is not moderately more
important than A. The preference of B over A is the reciprocal of the
preference of A over B, so that the preference of B over A is exactly 1/3. Thus, the
pairwise comparisons of each of the alternatives (A, B and C) accord-
ing to the criterion C1 (improve account visibility) are expressed in a
square matrix as shown in Table 2.16.

improve account visibility (C1)
project    A      B    C
A          1      3    2
B          1/3    1    1/5
C          1/2    5    1

Table 2.16: Matrix of pair comparison between projects A, B, and C according to criterion C1.
This matrix shows that project A is "moderately more important" than
project B under criterion C1. Likewise, C is of "essential or strong im-
portance" over B. Note that the diagonal of the matrix is 1, which
logically indicates "equally preferred". The resulting matrices ac-
cording to the other criteria, improve credit process cycle time (C2),
reduce processing costs of students insurance (C3) and extend offer to
foreign study travel (C4), are shown below:
     
\[
\begin{pmatrix} 1 & 6 & 1/3 \\ 1/6 & 1 & 1/9 \\ 3 & 9 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 1/3 & 1 \\ 3 & 1 & 7 \\ 1 & 1/7 & 1 \end{pmatrix}, \quad \text{and} \quad
\begin{pmatrix} 1 & 1/3 & 1/2 \\ 3 & 1 & 4 \\ 2 & 1/4 & 1 \end{pmatrix}.
\]

Determination of preference weights. The next step is to prioritise the


decision alternatives within each criterion, that is, to determine consistent
weights; in our example we do this for each of the four criteria. This step is called synthesis.
The mathematical synthesis procedure is quite complex and goes
beyond the scope of this book. The first step is to add the preference
values in each column, as shown in Table 2.17:
We then divide each value in each column by the corresponding
sum in that column. This result allows us to obtain a standardised
matrix as shown in Table 2.18:
Let’s notice that the values in each column add up to 1. The next
step is to average the values in each row, as shown in Table 2.19:

improve account visibility (C1)
project    A      B    C
A          1      3    2
B          1/3    1    1/5
C          1/2    5    1
sum        11/6   9    16/5

Table 2.17: The first step is to add the preference values in each column of the pair comparison matrix.

improve account visibility (C1)
project    A      B     C
A          6/11   3/9   5/8
B          2/11   1/9   1/16
C          3/11   5/9   5/16

Table 2.18: The second step is to compute the standardised matrix.

improve account visibility (C1)
project    A        B        C        average
A          0.5455   0.3333   0.6250   0.5012
B          0.1818   0.1111   0.0625   0.1185
C          0.2727   0.5556   0.3125   0.3803
Sum                                   1.0000

Table 2.19: The next step is to average the values in each row.

The averages of the rows in Table 2.19 give us the preferences of


the projects according to the criterion “improve account visibility”
(C1). The most preferred is "invest in data management projects" (A),
then "invest in business process management" (C) and, finally, "invest
in hardware infrastructure" (B). We can write these preferences as a
vector:
\[
\begin{pmatrix} 0.5012 \\ 0.1185 \\ 0.3803 \end{pmatrix}.
\]

Let us observe that the sum of the elements of the vector is 1. In the
same way we calculate the vectors according to the other criteria:
     
\[
\begin{pmatrix} 0.2819 \\ 0.0598 \\ 0.6583 \end{pmatrix}, \quad
\begin{pmatrix} 0.1790 \\ 0.6850 \\ 0.1360 \end{pmatrix}, \quad \text{and} \quad
\begin{pmatrix} 0.1561 \\ 0.6196 \\ 0.2243 \end{pmatrix}.
\]

These four preference vectors are summarised in the following


Table 2.20:

project    C1       C2       C3       C4
A          0.5012   0.2819   0.1790   0.1561
B          0.1185   0.0598   0.6850   0.6196
C          0.3803   0.6583   0.1360   0.2243

Table 2.20: Matrix representing the weights of each project according to each criterion.
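The synthesis step is easy to reproduce in code. The following Python snippet is a minimal sketch (it is not part of the book's Excel material); it takes the pairwise comparison matrix of Table 2.16, normalises each column, and averages the rows to obtain the preference weights. The same function can be applied to the matrices of the other criteria.

    from fractions import Fraction

    # Pairwise comparison of projects A, B and C under criterion C1 (Table 2.16).
    C1 = [
        [Fraction(1), Fraction(3), Fraction(2)],
        [Fraction(1, 3), Fraction(1), Fraction(1, 5)],
        [Fraction(1, 2), Fraction(5), Fraction(1)],
    ]

    def preference_weights(matrix):
        """Column-normalise the matrix and average each row (synthesis step)."""
        n = len(matrix)
        col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
        normalised = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
        return [float(sum(row) / n) for row in normalised]

    print([round(w, 4) for w in preference_weights(C1)])
    # Prints approximately [0.5013, 0.1185, 0.3803], matching Table 2.19 up to rounding.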

Weighting of criteria. The next step is to determine the relative im-


portance or weighting of the criteria, that is, order the criteria from
most important to least important. This is done in the same way as it
was done to weight the projects previously. The following Table 2.21
shows such a pairwise comparison:

criteria   C1     C2     C3    C4
C1         1      1/5    3     4
C2         5      1      9     7
C3         1/3    1/9    1     2
C4         1/4    1/7    1/2   1

Table 2.21: Pair comparison matrix to determine the relative importance or weighting of the criteria.

The following table (2.22) shows the normalised table and the
average vector that is obtained in the last column.
Clearly, “improve credit process cycle time” (C2 ) is the highest
priority criterion, “improve account visibility” (C1 ) is the second, etc.
The next step is to combine the matrices obtained.

criteria   C1       C2       C3       C4       average
C1         0.1519   0.1375   0.2222   0.2857   0.1993
C2         0.7595   0.6878   0.6667   0.5000   0.6535
C3         0.0506   0.0764   0.0741   0.1429   0.0860
C4         0.0380   0.0983   0.0370   0.0714   0.0612
                                                1.0000

Table 2.22: Normalised matrix for the criteria with the row average.

Obtaining the final weights. Let us remember that the preferences of
each project for each criterion are shown in the following matrix:
\[
\begin{pmatrix}
0.5012 & 0.2819 & 0.1790 & 0.1561 \\
0.1185 & 0.0598 & 0.6850 & 0.6196 \\
0.3803 & 0.6583 & 0.1360 & 0.2243
\end{pmatrix},
\]

while the pairwise comparisons of the four criteria obtained in the
last computation yielded the following result:
\[
\begin{pmatrix} 0.1993 \\ 0.6535 \\ 0.0860 \\ 0.0612 \end{pmatrix},
\]

thus the multiplication of the matrix by the vector yields the final
weights of each project:
\[
\begin{pmatrix}
0.5012 & 0.2819 & 0.1790 & 0.1561 \\
0.1185 & 0.0598 & 0.6850 & 0.6196 \\
0.3803 & 0.6583 & 0.1360 & 0.2243
\end{pmatrix}
\cdot
\begin{pmatrix} 0.1993 \\ 0.6535 \\ 0.0860 \\ 0.0612 \end{pmatrix}
=
\begin{pmatrix} 0.3091 \\ 0.1595 \\ 0.5314 \end{pmatrix}.
\]
Therefore, the most important project is C (invest in business process management), then A and, finally, B.
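As a cross-check of this aggregation step, the following Python snippet is a minimal sketch (again not part of the book's Excel material) that hard-codes the values of Tables 2.20 and 2.22 and multiplies the project-by-criterion weight matrix by the vector of criteria weights:

    # Project weights per criterion (rows A, B, C; columns C1..C4), Table 2.20.
    W = [
        [0.5012, 0.2819, 0.1790, 0.1561],
        [0.1185, 0.0598, 0.6850, 0.6196],
        [0.3803, 0.6583, 0.1360, 0.2243],
    ]
    # Criteria weights (C1..C4), Table 2.22.
    c = [0.1993, 0.6535, 0.0860, 0.0612]

    # Matrix-vector product: overall weight of each project.
    final = [sum(w_ij * c_j for w_ij, c_j in zip(row, c)) for row in W]
    for project, weight in zip("ABC", final):
        print(project, round(weight, 4))
    # Prints approximately A 0.3091, B 0.1595, C 0.5314, so project C ranks first.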

Consistency of the AHP method. The analytical hierarchy process is


fundamentally based on the pairwise comparison that the decision
maker uses to establish preferences between decision alternatives
among the different criteria. The normal procedure is to develop
these pairwise comparisons by interviewing the decision maker and
obtaining the comparisons using the preference scale in Table 2.15.
However, when a decision maker has three or more alternatives to
compare, he may get lost in the comparisons he previously made.
Suppose, for example, that according to the income level criterion,
"A is of very strong importance relative to B" and "B is of very strong
importance relative to C". This is fine, but suppose the decision maker
also says that A is of "equal importance" relative to C. This comparison is not
consistent with the initial preferences. A more logical comparison would be
to say that A is preferred to C to some degree to be specified. This
type of inconsistency can cause the method to not work. However, in
practice, it has been found that this is generally not a serious problem
and that some degree of inconsistency can be expected. To check this we
calculate an inconsistency index.
To demonstrate, we calculate the consistency index (CI) for the comparison
of the criteria. First, we multiply the pair comparison matrix of the criteria
by the vector of criteria weights:
     
\[
\begin{pmatrix}
1 & 1/5 & 3 & 4 \\
5 & 1 & 9 & 7 \\
1/3 & 1/9 & 1 & 2 \\
1/4 & 1/7 & 1/2 & 1
\end{pmatrix}
\cdot
\begin{pmatrix} 0.1993 \\ 0.6535 \\ 0.0860 \\ 0.0612 \end{pmatrix}
=
\begin{pmatrix} 0.8328 \\ 2.8524 \\ 0.3474 \\ 0.2474 \end{pmatrix},
\]
then, each of these values is divided by the corresponding weight in the
criteria preference vector:
\[
\begin{pmatrix} 0.8328/0.1993 \\ 2.8524/0.6535 \\ 0.3474/0.0860 \\ 0.2474/0.0612 \end{pmatrix}
=
\begin{pmatrix} 4.1786 \\ 4.3648 \\ 4.0401 \\ 4.0422 \end{pmatrix}.
\]

If the decision maker was perfectly consistent, then each of these
ratios should be exactly 4, that is, equal to the number of items com-
pared. Then, we average these values by adding them and dividing by 4:
\[
\frac{4.1786 + 4.3648 + 4.0401 + 4.0422}{4} = 4.1562.
\]
Thus the consistency index is calculated using the following formula:
\[
\mathrm{CI} = \frac{4.1562 - n}{n - 1},
\]
where n is the number of items to be compared and 4.1562 is the
value we just calculated. The CI is then
\[
\mathrm{CI} = \frac{4.1562 - 4}{4 - 1} = 0.0521.
\]
If CI = 0, then the decision maker is perfectly consistent. Since the
decision maker is not perfectly consistent, the question is what de-
gree of consistency is acceptable. You can determine an acceptable
level of inconsistency by comparing CI with a random index, RI, which
is generated from randomly filled pairwise comparisons and is given in Table 2.23.
This indicator has been generated in the laboratory and depends on the number
n of items to be compared.
The degree of consistency of the pairwise comparison is deter-
mined as:
\[
\frac{\mathrm{CI}}{\mathrm{RI}} = \frac{0.0521}{0.90} = 0.0580.
\]

n     2   3      4      5      6      7      8      9      10
RI    0   0.58   0.90   1.12   1.24   1.32   1.41   1.45   1.51

Table 2.23: You can determine an acceptable level of inconsistency by comparing CI with a random index, RI, which is generated from randomly filled pairwise comparisons. This indicator has been generated in the laboratory and depends on the number n of items to be compared.

In general, the degree of consistency is satisfactory if CI/RI is less
than 0.10, which is the case here. If this does not happen, there is a high
probability of inconsistency and the result may not be significant.
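The consistency check can also be scripted. The following Python snippet is a minimal sketch (not part of the book's Excel material); it hard-codes the criteria matrix of Table 2.21, the criteria weights of Table 2.22, and the RI values of Table 2.23, and reproduces CI and CI/RI:

    # Random index RI by number of compared items n (Table 2.23).
    RI = {2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.51}

    # Pair comparison matrix of the four criteria (Table 2.21) and their weights (Table 2.22).
    A = [
        [1.0, 1 / 5, 3.0, 4.0],
        [5.0, 1.0, 9.0, 7.0],
        [1 / 3, 1 / 9, 1.0, 2.0],
        [1 / 4, 1 / 7, 1 / 2, 1.0],
    ]
    w = [0.1993, 0.6535, 0.0860, 0.0612]

    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]  # matrix-vector product
    lam = sum(Aw[i] / w[i] for i in range(n)) / n                   # average of the ratios
    CI = (lam - n) / (n - 1)
    print(round(CI, 3), round(CI / RI[n], 3))
    # Prints approximately 0.052 and 0.058; both stay below the 0.10 threshold.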
AHP in Excel sheets. Figure 2.31 shows the pairwise comparison
matrix according to the criterion "improve account visibility" (C1).
Let’s note that the format of the pairwise comparison is a fraction.
Fractions can be formatted in Excel by choosing the sequence of
Format-Cells-Number-Fraction commands (the Excel sheet can be
downloaded from the file ahp.xls). Figure 2.31 is self-explanatory.
Let us remember that the average of the rows represents the vector
of preferences according to the “improve account visibility” (C1 ) cri-
terion. To complete this stage in AHP, it is necessary to calculate the
preference vectors for the other three criteria: C2 , C3 and C4 .

Figure 2.31: Pair comparison


matrix according to the crite-
rion “improve account visibil-
ity” (C1 ). An Excel sheet can be
downloaded from the ahp.xls.

The next step is to develop the pair comparison matrix and nor-
malised matrix for our four criteria. The spreadsheet that corre-
sponds to this step is shown in Figure 2.32. These two matrices are
constructed in a similar way to the two matrices in Figure 2.31.
The following step consists of computing the final weights of each
project from the corresponding weights of the projects for each
criterion and the weights of all criteria. The multiplication of the matrix by
a vector is shown in Figure 2.33. Let us observe that we copied a column
as a row to facilitate the multiplication of a matrix by a vector using
the Excel command =SUMAPRODUCTO() (the Spanish-locale name of =SUMPRODUCT()), as shown in the Figure.
Finally, Figure 2.34 shows the worksheet to evaluate the degree of consistency of the pairwise comparisons. Figure 2.34 is self-explanatory.

Figure 2.32: Pair comparison matrix and normalised matrix for our four criteria.

Figure 2.33: This step consists of computing the final weights of each project from the corresponding weights of the projects for each criterion and the weights of all criteria.

Figure 2.34: The last step consists of computing the consistency index. RI is determined according to Table 2.23. In general, the degree of consistency is satisfactory if CI/RI is less than 0.10, which is the case here.

            profits              growth potential
company     A     B     C        A     B     C
A           1     1/3   7        1     1/2   1/5
B           3     1     9        2     1     1/3
C           1/7   1/9   1        5     3     1

Table 2.24: Pair comparison matrix for each criterion of Problem 2.21.

Problem 2.21. “oct2019” is a consulting firm that specialises in tech-


nological projects. The consultancy is currently considering investing
in three automation technologies. The basic decision criteria are profits
and growth potential. "oct2019" considers the company's potential growth
of "moderate importance" over profits. The pairwise comparisons of the
alternatives under each criterion are shown in Table 2.24. Develop the
ranking for the three alternatives using the AHP method.

Solution 2.21 To solve the problem we use the Excel Sheet ahp.xls
and enter the corresponding values. Let us observe that this problem
has 2 criteria and 3 alternatives. In the sheet we have the table for
obtaining the weights in case of three alternatives, but not in case of
2, which, however, is trivial to calculate. The first step is to calculate
the weights of each alternative under both criteria. According to the
Table 2.24 the weights are: (0.2946, 0.6486, 0.0714) under the criterion
“profits“ and (0.1222, 0.2299, 0.6479) under the criterion “potential
growth". Now, we calculate the weights for each criterion. Since
"oct2019" considers the company's potential growth of "moderate importance"
over profits, according to Table 2.15 this preference corresponds to the
value 3, which gives weights 0.75 and 0.25 for profits and growth potential
respectively (why?). We leave this computation to the students. The third
step consists of computing the final weights of the alternatives: for A it is
0.2946 × 0.75 + 0.1222 × 0.25 = 0.2515, for B it is 0.5439, and for C it is
0.2155, calculated similarly. The last step of the AHP method is to determine
whether each prioritisation is consistent, which is the question of the
following problem.

            curriculum           experience          references
experts     A     B     C        A     B     C       A     B     C
A           1     2     1/3      1     3     1/2     1     3     6
B           1/2   1     1/5      1/3   1     1       1/3   1     2
C           3     5     1        2     1     1       1/6   1/2   1

Table 2.25: Pair comparison matrix for each criterion of Problem 2.23.

criteria      curriculum   experience   references
curriculum    1            3            5
experience    1/3          1            2
references    1/5          1/2          1

Table 2.26: Pair comparison matrix for the criteria of Problem 2.23.

Problem 2.22. Check whether the pairwise comparisons in the previ-


ous problem are consistent.

Solution 2.22 This corresponds to the last step of the AHP method.
We will compute the consistency index for each pair comparison.
The first table, which represents the comparison of alternatives
according to the first criterion, gives CI/RI = 0.0701. The second ta-
ble (the comparison of alternatives according to the second criterion)
gives CI/RI = 0.0032. Finally, the consistency of the criteria is obvious,
because there are only two of them. Thus, we say that the weights of the
alternatives are consistent, because each index CI/RI gives a value less
than 0.10.

Problem 2.23. The technology unit is evaluating three consultants:


P. Aranguiz, J. Bustos and T. Concha. To this end, it has formed a
commission made up of three internal experts. The commission has
defined three basic evaluation criteria: curriculum, experience, and
references. The comparison matrices of each consultant and the three
criteria are shown in the tables 2.25 and 2.26. Determine the ranking
of the three consultants.

Solution 2.23 The computation of the weights is trivial by using the


Excel Sheet ahp.xls in case of 3 criteria and 3 alternatives. In fact, the
final weights of each alternative are (0.3147, 0.1600, 0.5254).

Problem 2.24. Check whether the pairwise comparisons in the previ-


ous problem are consistent and indicate whether the level of consis-
tency is acceptable.

           potential return       risk                  popularity
fund       global  blue  green    global  blue  green   global  blue  green
global     1       1/4   2        1       2     1/3     1       1     1/3
blue       4       1     6        1/2     1     1/5     1       1     1/3
green      1/2     1/6   1        3       5     1       3       3     1

Table 2.27: Pair comparison matrix for each criterion of Problem 2.25.

criteria      return   risk   popularity
return        1        3      5
risk          1/3      1      2
popularity    1/5      1/2    1

Table 2.28: Pair comparison matrix for the criteria of Problem 2.25.

Solution 2.24 We need to compute the consistency index of each pair
comparison. For the comparison of alternatives under the criterion
“curriculum”, CI/RI = 0.0032; under the criterion “experience”, CI/RI =
0.3188; and under the criterion “references”, CI/RI = 0.0000. The consis-
tency index for the pair comparison of the criteria is CI/RI = 0.0032.
Let us observe that the pair comparison according to the criterion
“experience” is not consistent, since the value is greater than or equal to
0.10. This has an explanation: A is of “moderate importance” with re-
spect to B, and C is of “equal to moderate importance” with respect
to A. But the decision maker affirms that B and C have “equal im-
portance”, which is obviously a contradiction. The comparison of
alternatives according to the criterion “references” is perfectly con-
sistent (CI/RI = 0.0000). The other comparisons have acceptable
consistency index. If the problem is inconsistent, what must we do?

Problem 2.25. Maritza Fuentes is a sales representative for a software


company. Her earnings in the last year were considerable, so she
wants to invest money in a mutual fund. Maritza studies three funds:
global fund, blue fund, and green fund. She has thought about three
selection criteria: potential return (based on historical trends and
forecasts), risk, and popularity of the fund. The pairwise comparison
of the funds for each of the criteria and the pairwise comparison of
the criteria with each other are given in Tables 2.27 and 2.28.
Determine which fund is the most convenient to invest in.

Solution 2.25 This case again has 3 criteria and 3 alternatives and
is solved like the previous problem. In fact, the final weights of each
alternative are (0.2027, 0.5060, 0.2913). Therefore, the most convenient
is the blue fund.

Problem 2.26. In the previous problem, assume that Maritza has


$85,000 to invest and wants to diversify by investing in all three
funds. How much should she invest in each of them?

            flexibility                        scaling                          cost
project     airbnb-A  milton-B  harriot-C      airbnb-A  milton-B  harriot-C    airbnb-A  milton-B  harriot-C
airbnb-A    1         1/2       1/5            1         5         3            1         2         5
milton-B    2         1         1/3            1/5       1         1/4          1/2       1         2
harriot-C   5         3         1              1/3       4         1            1/5       1/2       1

Table 2.29: Pair comparison matrix for each criterion of Problem 2.27.

criteria      flexibility   scaling   cost
flexibility   1             2         4
scaling       1/2           1         3
cost          1/4           1/3       1

Table 2.30: Pair comparison matrix for the criteria of Problem 2.27.

Solution 2.26 Before using the weights resulting from the previous
problem, we must verify the consistency of each pair comparison.
For "potential return", the consistency index gives CI/RI = 0.0079;
for "risk", CI/RI = 0.0032; for "popularity", CI/RI = 0.0000; and for
the criteria, CI/RI = 0.0032. Since all the consistency indexes are less
than 0.10, we can use the weights properly. In fact, a good recommendation
for the investment is $85,000 × 0.2027 ≈ $17,230 for the global fund,
$43,010 for the blue fund, and $24,761 for the green fund.
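The three allocations follow directly from multiplying the budget by the fund weights obtained in Problem 2.25 (rounded to the nearest dollar):
\[
85{,}000 \times 0.2027 \approx 17{,}230, \qquad
85{,}000 \times 0.5060 = 43{,}010, \qquad
85{,}000 \times 0.2913 \approx 24{,}761.
\]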

Problem 2.27. Rosa Cifuentes is evaluating three projects to develop


a housing reservation system that manages relationships between
guests and hosts. The projects are Airbnb-A, Milton-B and Harriot-
C. The selection criteria are three: flexibility, scaling, and cost. The
pairwise comparison matrices of the projects and of the criteria among
themselves are given in Tables 2.29 and 2.30. Develop an AHP to help Rosa
choose the project.

Solution 2.27 This problem has 3 criteria and 3 alternatives. We


use the Excel Sheet ahp.xls for computing the weights and con-
sistency indexes. The weights of the alternatives for the criterion
“flexibility” are (0.1386, 0.3682, 0.4932), for the criterion “scaling” are
(0.6194, 0.0964, 0.2842), and for the criterion “cost” are (0.5949, 0.2766, 0.1285).
The weights for each criterion are (0.5571, 0.3202, 0.1226), which gives
final weights for the alternatives of (0.3485, 0.2699, 0.3816). The con-
sistency index for the pair comparison under “flexibility” is 0.0816, under
“scaling” 0.0251, and under “cost” 0.048. Finally, the consistency in-
dex for the pair comparison of criteria is 0.0158. Since all consistency
indexes are less than 0.10, then the AHP is well defined. Therefore,
the best alternative for Rosa is Harriot-C.

Problem 2.28. Research and Development is evaluating three devel-


opment projects: A, B and C. R+D is not sure how to choose the best
among the three projects and maintains that three criteria must be
considered: benefits, probability of success (PE), and costs. The pair-
wise comparison matrices for each project and for the criteria are shown
in Tables 2.31 and 2.32. Sort the projects using the AHP.

           benefits           PE                 cost
project    A     B    C       A     B    C       A     B    C
A          1     4    6       1     2    1/3     1     1/3  1/4
B          1/4   1    2       1/2   1    1/6     3     1    1/2
C          1/6   1/2  1       3     6    1       4     2    1

Table 2.31: Pair comparison matrix for each criterion of Problem 2.28.

criteria    benefits   PE     cost
benefits    1          2      6
PE          1/2        1      4
cost        1/6        1/4    1

Table 2.32: Pair comparison matrix for the criteria of Problem 2.28.

Solution 2.28 This problem has 3 criteria and 3 alternatives. We only


show the final weights of the alternatives: (0.4937, 0.1781, 0.3281). The
consistency index for the first matrix is 0.0079, for the second one
0.0, for the third 0.0158, and for the criteria matrix 0.0079. Since
all consistency indexes are less than 0.10, then the weights are well
defined and can be used for decisions.

Problem 2.29. The investment program selects the experts who ap-
ply to the program according to three criteria: Years of experience,
number of projects, and client reports. The program must choose among
three applicants for the last two vacancies; the chief wonders which appli-
cants to choose. Tables 2.33 and 2.34 show the pair comparisons and the
data associated with each expert. Help to select the best expert.

criteria    years   numbers   reports
years       1       3         1/3
numbers     1/3     1         1/6
reports     3       6         1

Table 2.33: Pair comparison matrix for the criteria of Problem 2.29.

experts    years   numbers   reports
A          3       6         good
B          4       7         regular
C          3       6         bad

Table 2.34: Data associated to each expert of Problem 2.29.

Solution 2.29 This problem has 3 criteria and 3 alternatives. How-
ever, the pair comparison of the experts under each criterion is not
available, because we only have the data values of each variable. An
option is to develop a comparison matrix for each criterion from the
given data, keeping the consistency index of each matrix as small as
possible. We propose the following matrices:
\[
\begin{pmatrix} 1 & 1/3 & 1 \\ 3 & 1 & 3 \\ 1 & 1/3 & 1 \end{pmatrix}, \quad
\begin{pmatrix} 1 & 1/3 & 1 \\ 3 & 1 & 3 \\ 1 & 1/3 & 1 \end{pmatrix}, \quad \text{and} \quad
\begin{pmatrix} 1 & 3 & 8 \\ 1/3 & 1 & 3 \\ 1/8 & 1/3 & 1 \end{pmatrix}.
\]

Let us observe that the consistency index for the matrices of the criteria
"years" and "numbers" is 0.000, while for the matrix of "reports" it is
0.0013. Thus the decision problem is well defined. Therefore, the cor-
responding final weights for each expert are (0.5145, 0.3626, 0.1229),
which gives expert A as the winner.
3
Modelling in BPMN and DMN

This specification represents the amalgamation of best practices


within the business modelling community to define the notation
and semantics of Collaboration diagrams, Process diagrams, and
Choreography diagrams.

3.1 Introduction

1. BPMN means Business Process Model and Notation. It is a standard
for business process modelling proposed by the Object Management
Group 1 and it is available in version 2.0.

2. DMN means Decision Model and Notation. It is a standard for decision
models proposed by the Object Management Group 2 and it is available
in version 1.1.

3. The commercial software Signavio can be used for implementing
BPMN and DMN models. An academic version can be found on
the link https://academic.signavio.com/p/login.

4. The free software Camunda can also be used for implementing
BPMN and DMN models. You can find the software online and for
download on the Web page https://bpmn.io too. In particular, a DMN
model can be simulated on-line on the Web page
https://camunda.org/dmn/simulator/.

5. Some examples proposed here are obtained from the documents
3 and 4. Others are obtained from real situations and adapted for
pedagogical purposes.

1 Object Management Group. Business Process Model and Notation (BPMN 2.0). Tech. rep. Available at https://www.omg.org/spec/BPMN/2.0/PDF (4.2020). Object Management Group, 2011.
2 Object Management Group. Decision Model and Notation (DMN) v1.1. Tech. rep. Available at https://www.omg.org/spec/DMN/1.1/PDF (7.2016). Object Management Group, 2016.
3 Object Management Group. BPMN 2.0 by Example. Tech. rep. Available at https://www.omg.org/cgi-bin/doc?dtc/10-06-02 (4.2020). Object Management Group, 2010.
4 Object Management Group. Decision Model and Notation (DMN) v1.1. Tech. rep. Available at https://www.omg.org/spec/DMN/1.1/PDF (7.2016). Object Management Group, 2016.

3.2 Problems and Questions of BPMN

Consider and use the Poster – BPMN 2.0 published in 5 for answering
the problems and questions of this Section. We show the English
version of the poster in Fig. 3.1 (the poster is also available translated
into several languages on the same Web page).

5 Berliner BPM-Offensive. BPMN 2.0 – Business Process Model and Notation. Available at poster-EN, poster-ES. 2020. URL: http://www.bpmb.de/

BPMN 2.0 - Business Process Model and Notation http://bpmb.de/poster

Conversations Choreographies
Activities
Events
Participant A Participant A Participant A Start Intermediate End
A Conversation defines a set of
A Task is a unit of work, the job to be logically related message exchanges. Choreography Call

Event Sub-Process

Event Sub-Process
Sub-Choreography

Non-Interrupting
performed. When marked with a symbol When marked with a symbol it Task Choreography

Boundary Non-
Task

Interrupting

Interrupting

Interrupting
it indicates a Sub-Process, an activity that can indicates a Sub-Conversation, a

Boundary

Throwing

Standard
Standard

Catching
Participant B Participant B
be refined. compound conversation element. Participant B
A Choreography Task
A Call Conversation is a wrapper for a Participant C A Call Choreography is a
represents an Interaction
A Transaction is a set of activities that logically globally defined Conversation or Sub- (Message Exchange) A Sub-Choreography contains wrapper for a globally
Transaction belong together; it might follow a specified Conversation. A call to a Sub-conversation between two Participants. a refined choreography with defined Choreography Task
transaction protocol. is marked with a symbol. several Interactions. or Sub-Choreography. A call None: Untyped events,
to a Sub-Choreography is indicate start point, state
A Conversation Link connects marked with a symbol. changes or final states.
An Event Sub-Process is placed into a Process or Conversations and Participants.
Event
Sub-Process. It is activated when its start event
gets triggered and can interrupt the higher level Multiple
Choreography Diagram Message: Receiving and
sending messages.
Sub-Process Participants Marker Participant A
process context or run in parallel (non-
interrupting) depending on the start event. Conversation Diagram denotes a set of Initiating
Timer: Cyclic timer events,
Participant A points in time, time spans or
Participants of the Message
Conversation same kind. (decorator) Choreography timeouts.
A Call Activity is a wrapper for a globally defined Task
Pool Participant B Escalation: Escalating to
Call Activity Task or Process reused in the current Process. A Participant A
(Black Box) an higher level of
call to a Process is marked with a symbol. Participant B
Message Choreography responsibility.
Task Conditional: Reacting to
a decorator depicting Participant A
the content of the Participant B changed business conditions
Activity Markers Task Types message. It can only Choreography or integrating business rules.
Pool Multi Instance Pool be attached to Task
Markers indicate execution Types specify the nature of Response Link: Off-page connectors.
(Black Box) (Black Box) Choreography Tasks. Participant C
behavior of activities: the action to be performed: Message Two corresponding link events
Sub-Conversation (decorator) equal a sequence flow.
Sub-Process Marker Send Task Participant B Error: Catching or throwing
Participant C named errors.
Loop Marker Receive Task
Collaboration Diagram Cancel: Reacting to cancelled
Parallel MI Marker User Task (Black transactions or triggering
Box) cancellation.
Pool

Sequential MI Marker Manual Task


Pool (Black Box)
Compensation: Handling or

~ Ad Hoc Marker Business Rule Task Message Flow triggering compensation.

Signal: Signalling across differ-


Compensation Marker Service Task Attached
ent processes. A signal thrown
Ad-hoc Subprocess Intermediate
Receive Task Timer Event can be caught multiple times.
Script Task Event-based Manual Task Multiple: Catching one out of
Gateway
Lane

Task a set of events. Throwing all


Collapsed End
Subprocess Event events defined
Sequence Flow Default Flow Conditional Flow Message Parallel Multiple: Catching
Start Event Task all out of a set of parallel
Link events.

defines the execution is the default branch has a condition


Timer
Intermediate
Escalation
End Event ~ Intermediate
Event
Terminate: Triggering the
immediate termination of a
Pool (White Box)

Data Object Collection


order of activities. to be chosen if all assigned that defines Event process.
other conditions whether or not the
evaluate to false. flow is used.
Subprocess Text Annotation
Signal
End
Attached Event Group
Data
Looped Intermediate
Store

Gateways Data
Subprocess Error Event
Multi Instance
Lane

Start End
Event Event condition Task (Parallel)
Exclusive Gateway When splitting, it routes the sequence flow to exactly A Data Object represents information flowing
one of the outgoing branches. When merging, it awaits Link Parallel Event Subprocess
through the process, such as business
one incoming branch to complete before triggering the Intermediate Multiple documents, e-mails, or letters.
outgoing flow. Event Intermediate
Event Call Activity Send Task
Conditional Error End
Event-based Gateway Is always followed by catching events or receive tasks. A Collection Data Object represents a
Start Event Event
Sequence flow is routed to the subsequent event/task Message collection of information, e.g., a list of order
Exclusive Parallel
which happens first. Gateway Gateway
End Event items.

Parallel Gateway When used to split the sequence flow, all outgoing A Data Input is an external input for the
branches are activated simultaneously. When merging Input
entire process.A kind of input parameter.
parallel branches it waits for all incoming branches to
Swimlanes
Pool
Lane

complete before triggering the outgoing flow. Task


A Data Output is data result of the entire
Pool

Inclusive Gateway Out- process. A kind of output parameter.


Exclusive Event-based Gateway
When splitting, one or more put
Lane

(instantiate)
Pool

Task
branches are activated. All Each occurrence of a subsequent Message Flow symbolizes A Data Association is used to associate data
active incoming branches must event starts a new process information flow across elements to Activities, Processes and Global
complete before merging. instance. organizational boundaries. Tasks.
Pools (Participants) and Lanes The order of message
Message flow can be attached
Complex Gateway Parallel Event-based Gateway represent responsibilities for exchanges can be A Data Store is a place where the process can
to pools, activities, or
Complex merging and (instantiate) activities in a process. A pool specified by combining read or write data, e.g., a database or a filing
message events. The Message
branching behavior that is not The occurrence of all subsequent or a lane can be an message flow and cabinet. It persists beyond the lifetime of the
Flow can be decorated with Data Store
captured by other gateways. events starts a new process organization, a role, or a sequence flow. process instance.
an envelope depicting the
instance. system. Lanes subdivide pools
content of the message.
or other lanes hierarchically. © 2011

Figure 3.1: This poster contains the objects and their definitions of the standard BPMN 2.0 published by the OMG Group (https://www.omg.org/). The poster was elaborated by Berliner BPM-Offensive in 2015. You can access this poster in Spanish on the link poster-ES.

Problem 3.19. Read the definitions of the poster of Fig. 3.1 for answering
this problem. Perhaps you will not understand all the definitions deeply at
first, but they should be understood by the end of this Chapter. Answer
the following questions:

1. Express in your own words the five parts (or groups of objects)
of the standard Collaboration Diagram of BPMN 2.0 (see the central
figure of the poster). What is the main difference between an event
and an activity?

2. Consider the group of objects activities in the poster. Read the


definitions of each object corresponding to this group and answer the
following questions: What are the differences between activity
markers and task types? What is the difference between a user
task and a manual task?

3. Consider the group of objects gateways in the poster. Read the


definitions of each object corresponding to this group and answer the
following questions: Why is this group named gateways? Explain
the difference between an exclusive gateway and an inclusive
gateway.

4. Consider the group of objects events in the poster. Read the def-
initions of each object corresponding to this group and answer the


following questions: What is the difference between a start event,
an intermediate event, and an end event? Why are there both white-
coloured and black-coloured events, and where are they used in the
models?

5. Consider the group of objects swimlanes in the poster. Read the


definitions of each object corresponding to this group and answer the
following questions: What types of participants can pools and lanes
describe? What is the difference between a message flow and a
sequence flow?

6. Consider the group of objects data in the poster. Read the def-
initions of each object corresponding to this group and answer the
following questions: What is the difference between a data store
and the other data types? Must a data store object be only an electronic
database?

Solution 3.19 The standard BPMN 2.0 has several types of models
such as conversations, choreographies, and collaboration dia-
grams. There is an abstract example of a collaboration diagram in
the centre of the poster of Fig. 3.1.

1. A collaboration diagram has five groups of objects: activities,


gateways, swimlanes, events, and data.
According to the standard document 7 on page 151, an activity is work
that is performed within a Business Process. An event 8, instead, is something
that "happens" during the course of a process. An event affects the flow of
the process and usually has a cause or an impact.

7 Object Management Group. Business Process Model and Notation (BPMN 2.0). Tech. rep. Available at https://www.omg.org/spec/BPMN/2.0/PDF (4.2020). Object Management Group, 2011.
8 Ibid., page 81.

2. A (task) marker indicates execution behaviour of activities, and a (task)
type specifies the nature of the action to be performed. Although the
standard document does not exactly express these definitions, we
can accept them as correct. Let us observe that a specific task
type (e.g., a send task) can have a marker (e.g., a loop marker).
That is, for example, the task send email to customer can be executed
in a loop until a condition is met.
The difference between a manual task and a user task is that the
first one is executed by a human without a technical system like
a computer, while the user task is also executed by a human,
but supported by a technical system. Typical examples are: bind
a book is a manual task, and enter data is executed by a human
supported by a computer.
Finally, let us observe that service tasks and script tasks are
completely automated tasks (we see the difference between them
later). We do not have information from the standard document


if send task, receive task, and business rule tasks are actually
automated tasks or not. However, we will consider them automatic
tasks, i.e., executed by a computer without human interaction.

3. We copy the definition of gateway because it clarifies the origin of


the name: gateways are [objects] used to control how the Process flows
(how Tokens flow) through Sequence Flows as they converge and diverge
within a Process. If the flow does not need to be controlled, then a Gate-
way is not needed. The term “gateway” implies that there is a gating
mechanism that either allows or disallows passage through the Gateway
– that is, as tokens arrive at a Gateway, they can be merged together
on input and/or split apart on output as the Gateway mechanisms are
invoked. 9

9 Ibid., page 90.

An exclusive gateway is a gateway that allows exactly one flow to
continue after it. Another name for this gateway is XOR-gateway,
because it excludes all paths except one. The inclusive gateway,
instead, allows at least one path to continue. If, for example, after
an inclusive gateway three paths can continue, then it is legal
that one, two, or all three paths continue (at least one).

4. According to the poster of Fig. 3.1, the events can be classified along
two dimensions: the event type and the position of an
event in the process model. There are thirteen event types, from
none to terminate, and there are three subgroups according to the
position in the model: start event, intermediate event, and end event.
The start events are those that trigger (or initiate) a process (e.g.,
when an email from the customer is received). The intermediate
events are those that occur during the process execution (e.g., sud-
den cancellation of an order by the customer during the process).
Finally, the end events are those that end a process (e.g., link to a
another process not modelled here).
In addition, events can be classified as catching or throwing
events depending on whether they receive or send the event infor-
mation. Catching events are white-coloured and throwing
events are black-coloured. For example, a catching intermediate
timer event named 24h (a clock inside a double circle line)
located after the task send email means that the process waits
24 hours after sending the email, i.e., the process continues only 24
hours after the email was sent.

5. According to the poster, pools and lanes are process participants


that represent responsibilities for activities in a process. A pool or
a lane can be an organisation, a role, or a system. Lanes subdivide
pools or other lanes hierarchically.

A message flow is not a sequence flow. It is a relation between ob-
jects. A message flow represents a transmission of information or,
eventually, material transportation. It is represented by a discon-
tinuous line from a sender object to a receiver object. Sender and
receiver objects can be participants or activities. A sequence flow,
instead, indicates an execution order of activities through time.
A sequence flow must connect only activities, i.e., sequence
flows between participants are not allowed.

6. According to the poster, a data store is a place where the process can
read or write data, e.g., a database or a filing cabinet. It persists beyond
the lifetime of the process instance. This last characteristic (persis-
tence) differentiates a data store from other data objects. A data
input, for example, disappears after the object is consumed.
As the definition says, data stores can be physical objects, not
only digital elements.

To answer the following questions, we will follow the descriptions of the
document BPMN by Example, Version 1.0 (see 10).

10 Object Management Group. BPMN 2.0 by Example. Tech. rep. Available at https://www.omg.org/cgi-bin/doc?dtc/10-06-02 (4.2020). Object Management Group, 2010.

Figure 3.2: Hardware Retailer. This is the Shipment Process of a Hardware Retailer. We are trying to understand the behaviour of different gateways in the process flow.

Problem 3.20. Consider the model of Fig. 3.2 and answer the following
questions:

1. The model of Figure 3.2 shows one pool and different lanes for
three people involved in this process. Do we have information
about the communication between them?

2. Is the gateway mode of delivery responsible for the decision


whether this is a special or a postal shipment? Where is this de-
cision made?

3. Describe in words the flow that occurs between the inclusive gateways.

4. Write an interpretation for the effect of the parallel gateway before


the last task add paperwork and move package to pick area.

Solution 3.20 Answers to the questions referred to Fig. 3.2.

1. No, we do not have information about the communication between
the participants in the pool. We just assume that they are
communicating with each other somehow. 11

11 If we had a process engine driving this process, that engine would assign user tasks and therefore be responsible for the communication between those people. If we do not have such a process engine, but want to model the communication between the people involved explicitly, we would have to use a collaboration diagram.

2. The gateway is not responsible for the decision whether this is a
special or a postal shipment. Instead, this decision is undertaken
in the activity before. The gateway only works as a router, which is
based on the result of the previous task, and provides alternative
paths. Remember: a task represents an actual unit of work, while
a gateway only routes the sequence flow. This gateway is called
exclusive, because only one of the following two branches can be
traversed.
traversed.

3. If an extra insurance is required, the logistics manager has to take
out that insurance. In any case, the clerk has to fill in a postal
label for the shipment. So one branch is always taken, while the
other one only if the extra insurance is required. But if it is taken,
this can happen in parallel to the first branch. Because of this
parallelism, we need the synchronising inclusive gateway right
behind fill in a post label and take out extra insurance. 12

12 In this scenario, the inclusive gateway will always wait for fill in a post label to be completed, because that is always started. If an extra insurance was required, the inclusive gateway will also wait for take out extra insurance to be finished.

4. The parallel gateway before the last task add paperwork and
move package to pick area makes sure that everything has been
fulfilled before the last task is executed. In other words, this is a
synchronising gateway of the two flows before.

Figure 3.3: Order Process for Pizza. This example is about Business-To-Business collaboration, because we want to model the interaction between a pizza customer and the vendor explicitly.

Problem 3.21. The model of Fig. 3.3 is an example of Business-To-Business
collaboration: we want to model the interaction between a pizza customer
and the vendor explicitly, providing them with dedicated pools.
Collaboration diagrams show the interaction between business partners,
but they can also zoom into one company, modelling the interaction between
different departments, teams or even single workers and software systems.
It is the modeller's decision whether a collaboration diagram with different
pools is useful, or whether one should stick to one pool with different
lanes, as shown in the previous chapter. Answer the following
questions:

1. How is the pizza vendor process triggered? Who started this
process?
2. In the model we have at least four message objects. Can they be
managed by a process engine? Why?

3. What is the role of the terminate end event (not a standard end
event) modelled at the end of the vendor process? What would
occur if this event were a standard end event?

Solution 3.21 Answers to the questions referred to Fig. 3.3.

1. It is triggered by the order of the customer, as shown with the


message start event and the message flow going from task order a
pizza to that event.

2. In this example, we use message objects not only for informational


objects, as the pizza order, but also for physical objects, like the
pizza or the money. We can do this, because those physical objects
actually act as informational objects inherently: When the pizza


arrives at the customer’s door, she will recognise this arrival and
therefore know that the pizza has arrived, which is exactly the
purpose of the corresponding message event in the customer's pool.
Of course, we can only use the model in that way because this
example is not meant to be executed by a process engine.

3. The role of the terminate end event modelled at the end of the
vendor process is to kill the complete process. Let us observe that,
because of the questions made by the customer about her order, a loop
would otherwise remain active forever. The iteration continues until
the terminate end event is reached.

Figure 3.4: Fulfilment and Procurement. This example tries to explain the behaviour of boundary events attached to an activity. Let us observe that the exclusive gateway that opens both paths is not closed by another gateway. We say that the XOR-Split is mandatory (so the XOR-Join gateway is optional).

Problem 3.22. Consider the model of Fig. 3.4 and answer the following
questions:

1. The collapsed activity procurement is a thickly bordered sub-process.
What type of activity is this? What is the difference from a
normal sub-process?

2. Another characteristic of the procurement sub-process is that it
has attached events. By using attached (or boundary) events it is
possible to handle events that can spontaneously occur during the
execution of a task or sub-process. Thereby we have to distinguish
between interrupting and non-interrupting attached events. What
would be the difference between boundary intermediate interrupting
and non-interrupting events?
would the difference between boundary intermediate interrupting
and non-interrupting events?

3. Describe in words the behaviour of the complete model.

4. Let us observe that the exclusive gateway that opens both paths is
not closed by another gateway. Is it correct?

Solution 3.22 Answers to the questions referred to Fig. 3.4.


1. The shape of this collapsed sub-process is thickly bordered which
means that it is a call activity. It is like a wrapper for a globally
defined task or, like in this case, a sub-process. This activity is defined
globally and can be reused by several processes.

2. Both of them catch and handle the occurring events, but only the
non-interrupting type (here it is the escalation event late delivery)
does not abort the activity it is attached to. When the interrupt-
ing event (here it is the error event undelivery) type triggers, the
execution of the current activity stops immediately.

3. This order fulfilment process starts after receiving an order mes-


sage and continues to check whether the ordered article is avail-
able or not. An available article is shipped to the customer fol-
lowed by a financial settlement, which is a collapsed sub-process
in this diagram. In case that an article is not available, it has to be
procured by calling the procurement call activity. This sub-process
is interrupted in case that the article is not possible to deliver. In
this case the process ends after informing customer and removing
article from catalogue. During the procurement process, the event
late delivery can happen, which does not interrupt the process,
but the customer is informed about the delay. In case that neither
undelivery nor delay occur, the process continues normally with
shipping and financial settlement.

4. Yes, it is. We say that the XOR-Split is mandatory (so the XOR-Join
gateway is optional).

Figure 3.5: Stock Level Man-


agement. This example tries
to explain the behaviour of
boundary events attached to
an activity. It is similar to the
former example.

Problem 3.23. Consider the model of Fig. 3.5 and describe in words
the behaviour of the process.

Solution 3.23 The process for the stock maintenance of Fig. 3.5 is
triggered by a conditional start event. It means that the process is
instantiated in case the condition becomes true, so in this example
when the stock level goes below a certain minimum. In order to
increase the stock level an article has to be procured. Therefore, we
use the same Procurement process as in the order fulfilment and
refer to it by the call activity procurement, indicated by the thick
border. Similar to the order fulfilment process this process handles
the error exception by removing the article from the catalogue. But
in this stock maintenance process there appears to be no need for the
handling of a late delivery escalation event. That’s why it is left out
and not handled. If the procurement sub-process finishes normally,
the stock level is above minimum and the Stock Maintenance process
ends with the end event article procured.

Figure 3.6: Procurement. We now zoom into the global sub-process procurement that is used by both order fulfilment and stock maintenance.

Problem 3.24. Consider the model of Fig. 3.6 and answer the following
questions:

1. How is this sub-process triggered? Is this process triggered by an
external event?

2. Describe in words the behaviour of the process.

Solution 3.24 Answers referred to the model of Fig. 3.6.

1. No, this process is not triggered by any external event. It is started


by the referencing top-level process. This is the reason why it is repre-
sented by a standard (plain, without any symbol) start event.

2. The first task in this sub-process is to check whether the article
to be procured is available at the supplier. If not, this sub-process
will throw the not deliverable error that is caught by both order
fulfilment and stock maintenance (the last two examples above), as
we already discussed. In case the delivery in the Procurement
process lasts more than two days, an escalation event is thrown
by the sub-process, telling the referencing top-level process that
the delivery will be late. Furthermore, the Procurement process
continues its execution by waiting for the delivery. The thrown
event is then handled by the nearest escalation event.

Using BPMN, models can show different perspectives of the same
business process. In the first step we will provide a rather simple,
easy to read diagram that shows an incident process from a high
level point of view. Later on we refine this model by moving to col-
laboration diagram. In the last step we take the organisational col-
laboration and imagine how a process engine could drive part of
the process by user task assignments. The main purpose of the next
problems is to demonstrate how you can use BPMN for creating
simple and rather abstract diagrams, but also detailed views on hu-
man collaboration and finally for technical specifications for process
execution.

Figure 3.7: High Level Incident


Management. This model de-
scribes a software manufacturer,
who is triggered by a customer
requesting help from her ac-
count manager because of a
problem in the purchased prod-
uct. This is the current process.

Problem 3.25. The shown incident management process of Fig. 3.7


describes a software manufacturer, who is triggered by a customer
requesting help from her account manager because of a problem
in the purchased product. The model shows the high level current
situation. Answer the following questions:

1. How is this process triggered? Describe in your own words the be-


haviour of the model.

2. Do you think that this diagram would be useful for a general high
level description of the process? Would it be useful for the technical
requirements of a software support application?

Solution 3.25 Answer referred to the model of Fig. 3.7

1. The customer triggers the process by sending a message. After


that, the account manager tries to handle that request on his own
and explain the solution to the customer, if possible. If not, the
account manager will hand over the issue to a 1st level support
agent, who will hand over to 2nd level support, if necessary. The
2nd level support agent should figure out if the customer can fix
the problem on her own, but if the agent is not sure about this
he can also ask a software developer for his opinion. In any case,
at the end the account manager will explain the solution to the
customer.

2. This diagram is useful, if you want to scope the process, get a


basic understanding of the flow, and talk about the main steps,
but not if you want to dig into the details for discussing process
improvements or even software driven support of the process.

Problem 3.26. The shown incident management process of Fig. 3.8


takes a closer look at the interaction of account manager, support
agents and software developer by switching from a single-pool-
model to a collaboration diagram. The model shows the low level
current situation. Answer the following questions:

1. Describe in words the behaviour of the key account manager.

2. Let us observe that all tasks are manual, which means that the
process is completely executed by humans, with no process engine
involved. Could the process be supported by a process engine?

Solution 3.26 Answers referred to the incident management process of


Fig. 3.8.
Figure 3.8: Detailed Collaboration Incident Management. The model takes a closer look at the interaction of account manager, support agents, and software developer by switching from a single-pool-model to a collaboration diagram.

1. The customer triggers the process by sending a message. After
that, the account manager tries to handle that request on his own
and explain the solution to the customer, if possible. If not, the
account manager will hand over the issue to a 1st level support
agent and will wait for an answer from him. In any case, at the
end the account manager will explain the solution to the customer.

2. Yes, of course. Every pool of the model could be supported by a
process engine independently. In fact, the next step could be to
define whether we want to drive the complete collaboration by a
process engine, or only parts of it.

Figure 3.9: Whole Collaboration. This model describes the software manufacturer described in the last problem. It contains a detailed technical description of a to-be process, i.e., an improved process.

Problem 3.27. The incident management process shown in Fig. 3.9 describes an improved version of the previous process. It contains a detailed technical description of a to-be process. Answer the following questions:

1. The new model contains some tasks supported by a computer or technical systems; which are those activities? Does this design require a high intensity of coordination? Does it require a process engine for the coordination?

2. Write a description of the trouble ticket system process.

Solution 3.27 Answers refer to the incident management process of Fig. 3.9.

1. The process assumes that only the trouble ticket system process is supported by a technical system. It contains two user tasks, one send task, two script tasks, and one service task. Only the user tasks are not fully automatic (they are semi-automatic), while the others are executed by a system and coordinated by a process engine (including the send task).
This design does demand a reasonable intensity of coordination. In fact, the whole process is executed manually, except for the trouble ticket system. Only the trouble ticket system requires a process engine for the coordination, and it can be considered an introductory step towards a process engine for the whole collaboration process.

2. The process starts when a message from the key account manager is received. It can be assumed that the message is an email addressed to the 1st level support of the trouble ticket system. When the process engine receives the issue by email, a ticket is opened automatically and shown to the 1st level support. She can edit the ticket and decide whether the issue is resolved or not. If yes, then the process engine sends an email to the key account manager and the ticket is closed automatically. If the issue is not resolved, the 2nd level support edits the ticket after receiving it from the process engine. The 2nd level support decides about the resolution of the issue. If she marks it as resolved, the process engine notifies the 1st level support, sends an email to the key account manager, and closes the ticket. If that is not the case, the process engine notifies the 2nd level support agent and the system inserts the issue into the product backlog automatically. The process then continues automatically as before.

Problem 3.28. This model is presented at the end of the Chapter. The selection of a Nobel Prize Laureate, shown in Fig. 3.55, is a lengthy and carefully executed process. The processes differ slightly for each of the six prizes; the results are the same for each of the six categories. The following is the description for the Nobel Prize in Medicine. The main actors in the processes for Nomination, Selection, and Accepting and Receiving the award are: the Nobel Committee for Medicine, the nominators, specially appointed experts, the Nobel Assembly, and the Nobel Laureates.

1. Describe the process according to the model of Fig. 3.55.



2. Discuss whether the process of the Nobel Committee for Medicine can be supported by a process engine.

Solution 3.28 Answers to the Nobel Prize Laureate selection process of Fig. 3.55.

1. Each year in September, in the year preceding the year the Prize
is awarded, around 3000 invitations or confidential nomination
forms are sent out by the Nobel Committee for Medicine to se-
lected Nominators. The Nominators are given the opportunity to
nominate one or more Nominees. The completed forms must be
made available to the Nobel Committee for Medicine for the se-
lection of the preliminary candidates. The Nobel Committee for
Medicine performs a first screening and selects the preliminary
candidates. Following this selection, the Nobel Committee for
Medicine may request the assistance of experts. If so, it sends the
list with the preliminary candidates to these specially appointed
experts with the request to assess the preliminary candidates’
work. From this, the recommended final candidate laureates and
associated recommended final works are selected and the Nobel
Committee for Medicine writes the reports with recommendations.
The Nobel Committee for Medicine submits the report with rec-
ommendations to the Nobel Assembly. This report contains the
list of final candidates and associated works. The Nobel Assembly chooses the Nobel Laureates in Medicine and the associated works through a majority vote, and the names of the Nobel Laureates and associated works are announced.
works are announced. The Nobel Assembly meets twice for this
selection. In the first meeting of the Nobel Assembly the report is
discussed. In the second meeting the Nobel Laureates in Medicine
and associated works are chosen. The Nobel Prize Award Cere-
mony is held in Stockholm.

2. Yes, absolutely. The process contains five send and receive tasks that can be automated and executed by a process engine. The other tasks are user tasks that can be coordinated by the same engine.

Problem 3.29. Model the following situations:

1. The task initiate sell order is executed if the condition the stock price drops 15% below purchase price is true.

2. Many processes are weakly structured, typically professional work and management activities. Model the following situation: The chief of the accounting department asks every day for the financial situation of the company and has a meeting with the main account manager.

3. The process starts with the user task receive confirmation. In parallel to this task, after two days, the task send reminder is executed automatically every three days. After receiving the confirmation the process ends. Ensure that the process terminates correctly.

4. What are the differences between the following process descriptions (1) and (2)? (1) The process waits 7 days before sending a reminder, which is repeated every 7 days. (2) The process starts by sending a reminder, which is repeated every 7 days. Model both situations.

Figure 3.10: Use of a Signal Start Event (Above Model). The signal start event contains information sent without an addressee; a message, in contrast, has a specific receiver. Weakly Structured Process (Below Model). These types of processes are modelled by using ad-hoc sub-processes. They contain tasks that can be executed in any order, several times, or not executed at all.

Solution 3.29 The situations are very common. We will show that
they can be modelled with enough precision:

1. The model can be found in Fig. 3.10 above. The stock price drops 15% below purchase price is a signal and not a message. A signal start event contains information sent without an addressee; a message, instead, has an organisation, person, or machine that receives the information. In this example, we do not have information on how the initiate sell order task is carried out, therefore we modelled it as a sub-process.

2. The model can be found in Fig. 3.10 below. This is a typical example of a weakly structured process. These are called ad-hoc activities; in this case, it is an ad-hoc sub-process (ad-hoc tasks do not exist). An activity that belongs to an ad-hoc sub-process may be executed several times, be executed in any order, or never be executed.

3. The model can be found in Fig. 3.11. In this example we must model two important situations. First, parallel paths, where the AND-split gateway is not mandatory; however, the start event cannot have two or more outgoing sequence flows, therefore we need a parallel gateway that opens both parallel paths. Second, we also need to kill the instances after the reception of the confirmation, therefore we model the terminate end event to solve this situation.

4. The models are shown in Fig. 3.12. Both are clearly different iterations. The first one is called a while-do iteration, the second one a repeat-until iteration.

Figure 3.11: Modelling External Decision and Iterations. We recommend a start event; the task receive confirmation can, in this case, be replaced by an intermediate event. The task send reminder is repeated every 3 days, starting after 2 days.

Problem 3.30. Write an interpretation of the following model of Fig. 3.13. Is the terminate end event necessary? Could you model the process by using some cycle activity? Note that some information is lost with this second modelling option.

Solution 3.30 The model of Fig. 3.13 describes an iteration situation. It can be interpreted as follows: every day at 6pm the task alert manager is executed, for 10 days. The use of the terminate end event is absolutely necessary; otherwise, the iteration would continue forever without ending. Another solution could be the model of Fig. 3.14, i.e. a multiple instance sequential task.

Figure 3.12: Modelling While-Do and Repeat-Until Iterations. Both cases are common iterations. The first one asks for the condition first (while-do), and the second one asks for the condition at the end (repeat-until).

Figure 3.13: A Simple Iteration Model with a Terminate End Event. This model describes a cycle activity running in parallel with the end condition.

However, this model is not correct, because the sequential iteration is understood as immediate, not every 24 hours as we need to represent. The model of Fig. 3.14 can assume that the number of iterations (ten times) is codified in the activity, but it implies an iteration without pause.

Problem 3.31. Model in BPMN 2.0 the following process: A user of a travel agency books a flight, or books a hotel, or both. After the booking, the process ends.

Solution 3.31 The solution can be found in Fig. 3.15. This model describes a typical use of an inclusive gateway. It means that at least one alternative must be selected. The inclusive gateway that synchronises both paths at the end is mandatory.

Figure 3.14: Incorrect Optional Iteration Model. Compare this model with the one of Fig. 3.13. This model also represents a sequential iteration, but without the pause of 24 hours.

Figure 3.15: Optional Iteration Model. This model describes a typical use of an inclusive gateway. The inclusive gateway that synchronises both paths at the end is mandatory.

Problem 3.32. Model in BPMN 2.0 the following process: The process
starts when a user executes the task announce issues for discussion,
then she must wait 6 days before the task e-mail discussion deadlines
warning, which is automatically executed. Next, the process ends.

Solution 3.32 The solution is in Fig. 3.16. In this case we modelled the sending as an automatic task, which is triggered by the catching timer intermediate event called 6 days. Because the whole process is driven by a human and we do not have more information, the start and end events are standard (or plain). Let us note that after the task announce issues for discussion ends, the process engine suspends all activity for 6 days, after which the following task continues.

Figure 3.16: Modelling an Automatic Send Task. In this case we are modelling the automatic send task, which is triggered by the catching timer intermediate event called 6 days.
Problem 3.33. Model in BPMN 2.0 the following process: Every two days the automatic task sending a reminder is executed until the event receive confirmation interrupts the task. If a confirmation is received, then the terminate end event kills all existing instances. Is it necessary to model the terminate end event?

Solution 3.33 The solution is in Fig. 3.17. The terminate end event is absolutely necessary, because it kills the existing instances, thus avoiding an infinite loop. Observe that, as in Problem 3.30, this model cannot be replaced by a multiple instance iteration.

Figure 3.17: Modelling a Boundary Event with a Loop. The terminate end event is absolutely necessary, because it kills the existing instances, thus avoiding an infinite loop.

Problem 3.34. Model the following process: Each year in September of the preceding year, the Nobel Committee for Medicine sends around 3,000 invitations or confidential nomination forms to selected Nominators. Model a collaboration diagram and consider the sending task as a repetitive task.

Solution 3.34 The solution is in Fig. 3.18. The collaboration diagram includes two participants: the Nobel Committee for Medicine and the nominators. The iteration is modelled by using the multiple instance activity, because the number of iterations is known. The nominators pool was optionally modelled as collapsed.

Figure 3.18: Sending 3,000 Invitations for the Nobel Prize. The iteration is modelled by using the multiple instance activity, because the number of iterations is known.

Problem 3.35. Model the following process: The Nobel Committee for Medicine collects the completed forms from the nominators during a reasonable time. Model the participants in a collaboration diagram, including the task type and marker.

Solution 3.35 The solution is in Fig. 3.19. The iteration is modelled by using the cycle activity, because the number of iterations is not known. The stopping criterion is given by a rule, for example: iterate until March the 3rd.

Figure 3.19: Collecting Com-


pleted Forms. The iteration is
modelled by using the cycle
activity, because the number of
iterations is not known.

Problem 3.36. Model the following process: The task moderate e-mail discussion will be interrupted 14 days after starting. It never finishes normally.

Solution 3.36 The solution is in Fig. 3.20. Let us observe that the task is of type user; it is triggered somehow by a person and finishes exactly 14 days after starting. It does not have a normal end.

Figure 3.20: User Task Without a Normal End. Let us observe that the task is executed by a user; it is triggered somehow by a person and finishes exactly 14 days after starting. It does not have a normal end.

Problem 3.37. Model in BPMN 2.0 the following process: Consider


two processes executed by a manufacturer and by a customer re-
spectively. The process of the manufacturer starts with the activity
ship order. This activity is a sub-process that can be interrupted by a
cancellation message received from the customer. If the cancellation
message is received, then the send confirmation task is executed by
the manufacturer and the process ends. If the sub-process ship order
does not receive the interrupting message, then the manufacturer re-
ceives payment and the process ends. Include in the model message
flows between the manufacturer and the customer.

Solution 3.37 The solution is in Fig. 3.21. Two processes, probably supported by two process engines, need to be represented by two pools. Thus, the interaction between both processes is shown in detail. The customer pool was optionally collapsed. Note that the start event is not mandatory in BPMN 2.0, but it is very good practice to use it; additionally, most process engines require start and end events for implementation.

Figure 3.21: Manufacturer and Customer. This model describes the collaboration diagram between a manufacturer and its customer. It is possible to receive a cancellation message. The customer pool is optionally collapsed.

Problem 3.38. Model in BPMN 2.0 the following process: Similarly to the previous problem, consider two processes executed by a manufacturer and by a customer, respectively. The manufacturer process starts when a request application is received. Then the task send out application form for filling out is executed. After sending the application form for filling out, the organisation waits for the answer from the customer. If the manufacturer does not receive any answer in 7 days, the process ends. If the manufacturer receives the application form from the customer, it confirms the reception and the process ends.

Solution 3.38 The solution is in Fig. 3.22. Two processes, probably supported by two process engines, need to be represented by two pools. The exclusive event-based gateway models a decision which is taken by the customer, not internally by the manufacturer.

Problem 3.39. Model in BPMN 2.0 the following process: Consider the same context between a customer and a manufacturer as in the previous problem. The organisation starts the process when it receives a request order from a customer. Then the organisation sends out a form for filling out to the customer. After this, the process stops until one of the following events happens: (1) the customer is not interested and the sub-process archive details continues; (2) the application is received and the sub-process make assessment starts; (3) both above tasks must be executed within 7 days, and if that is not the case, the task send reminder is executed just after 7 days. After all these tasks and events are executed, the process ends.

Figure 3.22: Manufacturer and


Customer (Exclusive Event-
based Gateway). This model
describes again the collab-
oration diagram between a
manufacturer and its customer.
Note the use of the exclusive
event-based gateway.

Solution 3.39 The solution is in Fig. 3.23. The model contains the same participants as the two previous examples. The exclusive event-based gateway models a decision taken by the customer, not internally by the manufacturer. Note that here the event-based exclusive gateway that opened the three threads is closed after the three activities because, according to the text, the process must finish after all activities have been executed.

Figure 3.23: Manufacturer and Customer (Closing Exclusive Event-based Gateway). Note that the exclusive event-based gateway that opened the three threads is closed by a standard exclusive gateway, according to the text.

Problem 3.40. Model the following process: Once the message receive customer flight and hotel room reservation request is received, the sub-process consists of: (1) executing search flights based on customer request, then evaluate flights within customer criteria; (2) executing search hotels based on customer request, then evaluate hotel rooms within customer criteria. Both groups of tasks can be executed in parallel, only one of them, or both can be skipped. Let us suppose that this sub-process is executed three times sequentially; add this characteristic to the model.

Solution 3.40 The solution is in Fig. 3.24. The sub-process is triggered by the message catch event. After the reception of the message, there are three options: the tasks of the sub-process can be executed in parallel, only one path is executed, or simply all paths are skipped. This logic cannot be modelled by an inclusive gateway, because the inclusive gateway models the at-least-one situation, and this is not the case. The right representation for this behaviour is an ad-hoc sub-process. Ad-hoc means a non-standardised order, i.e., a weakly structured process. Let us observe that the ad-hoc sub-process must contain at least one activity and may have sequence flows, but must contain neither start nor end events!
In addition, the sub-process is repeated sequentially three times, which is modelled by a sequential multiple instance marker.

Figure 3.24: Ad-hoc Sub-process. For modelling weakly structured processes. An ad-hoc sub-process must contain at least one task, may contain sequence flows, and must contain neither start nor end events.
Problem 3.41. Model the following process: After the user task present flights and hotel rooms alternatives to customer, three events can happen: a 24-hour timeout (first event), the customer makes a selection (second event), or the customer cancels the request (third event). Just after 24 hours (i.e., if the customer has neither cancelled the request nor made a selection), the task notify customer to start again is executed automatically, then update customer record (request cancelled) is executed automatically, and the process ends with the throwing event request cancelled. In case the customer cancels the request before the 24 hours (and before making a selection), the same task update customer record (request cancelled) is executed. In case the customer makes a selection, the send task request credit card information from customer is executed automatically, which is interrupted at 24 hours if the requested information has not been received by then. If this last task is interrupted exactly 24 hours after starting, the task notify customer to start again is executed; otherwise the process finishes.

Solution 3.41 The solution is in Fig. 3.25. Let us observe the modelling with an exclusive event-based gateway. The paths are controlled by three external events. The task request credit card information from customer can be interrupted exactly at 24 hours. We added a start and an end event; although this is not demanded by the BPMN 2.0 standard, it is required by the majority of process engines.

Figure 3.25: Event-based Exclusive Gateway. The paths are controlled by three external events. We added standard (plain) start and end events, because we do not have information on how the process is triggered and how it continues.

Problem 3.42. Model the following process in BPMN 2.0: The issue list manager executes the task review the issue list in the database every Friday; then the decision that determines whether there are issues ready must be made before going through the discussion cycle. If there are no issues ready, then the process is over for that week, to be taken up again the following Friday. If there are issues ready, then the process continues with the sub-process named discussion cycle. Consider the decision as a manual task. Could you also model the process supported by a process engine?

Solution 3.42 The solution is in Fig. 3.26. The issue list manager is modelled by a participant. The process starts every Friday, as indicated in the text. The issue list manager uses a computer, thus the reviewing is modelled by a user task. After the reviewing task, the decision follows, which is a manual task. This case is represented by the first model of Fig. 3.26.
Now, let us discuss the case in which the process is executed by a process engine. In this case, the user task review the issue list and the decision can be put together and supported by a computer, as shown in the figure in the centre. In an extreme case, all tasks can be automatic and executed by the process engine, as shown in the bottom figure.

Figure 3.26: From Manual to Automatic. We observe three situations: from manual, to supported by a computer, to fully automatic.

Problem 3.43. Model the following process: The process begins with a message start event representing the message received from the customer. Then the process engine sends out the application form and waits for the completed application form from the customer. The machine waits seven days for the application from the customer. If the machine does not receive an answer from the customer by the seventh day, then the machine sends a reminder automatically. After sending the reminder, the machine waits for an answer for the next seven days again. If the machine receives an answer from the customer, a user writes an assessment and the process ends. Consider modelling the process using an exclusive event-based gateway.
Now, regard the following problem. Let us suppose that the customer will never answer; then we obtain an infinite loop. How can this problem be solved?

Solution 3.43 This is a fully automatic process, except for the user task. The solution for the original process can be found in Fig. 3.27 above. We need an exclusive event-based gateway, because the machine waits for an external decision, not an internal one. Let us observe that we need a separate exclusive gateway for receiving the flow after the reminder task; the exclusive event-based gateway is not allowed to receive this flow.
To solve the infinite-loop problem in case the customer never answers, we need a new automatic activity that checks the number of reminders already sent. If the number of reminders reaches a limit (say two), then the process ends. The new model is shown in Fig. 3.27 below.
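As a minimal sketch of the control logic behind this solution (the two-reminder limit and all names below are illustrative assumptions, not prescribed by BPMN or by any particular process engine):

MAX_REMINDERS = 2   # assumed retry limit, as suggested in the solution
WAIT_DAYS = 7       # days to wait for the customer's answer

def handle_application(wait_for_answer, send_reminder):
    """wait_for_answer(days) returns True if the answer arrived within the given
    number of days; send_reminder() sends one automatic reminder."""
    reminders_sent = 0
    while True:
        if wait_for_answer(WAIT_DAYS):
            return "answer received: a user writes the assessment"
        if reminders_sent >= MAX_REMINDERS:
            return "retry limit reached: the process ends"
        send_reminder()
        reminders_sent += 1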

Figure 3.27: Event-based Exclusive Gateway. The above model is a fully automatic process, except for the user task. We need an exclusive event-based gateway, because the machine waits for an external decision, not an internal one. We need an exclusive gateway for receiving the flow after the reminder. The below model represents the situation that avoids an infinite loop in case the customer does not answer.

Problem 3.44. Model the following process. When the soccer team of the Universidad de Chile wins an official game, the process engine sends greetings (by SMS) automatically to the fans registered in the database. In addition to the greetings, a fan receives an invitation to participate in a betting game related to the team's next game. If the process engine does not receive an acceptance or rejection answer, the invitation to the fan is repeated automatically once. If the engine receives the answer from the fan, it archives the details of the answer in the history database.

Solution 3.44 The solution is shown in Fig. 3.28. The model needs an exclusive event-based gateway for the external decision of the fan. Let us observe that the process engine does not receive a message at the start, but a signal of every game outcome, whether the team won or lost.

Figure 3.28: Event-based Exclusive Gateway. This is a fully automatic process. We need an exclusive event-based gateway, because the machine waits for an external decision, not an internal one. We need an exclusive gateway for receiving the flow after the reminder.

Problem 3.45. Interpret the model of Fig. 3.29.

Solution 3.45 This is a typical call activity sub-process, in which a booking process is implemented. Remember that a call activity is a public activity; a (normal) sub-process, in contrast, is private and belongs to the process that contains it.

Figure 3.29: Implementation of a Booking Process. The sub-process is implemented in a call activity. It contains event sub-processes, which are triggered through the corresponding events.

The process starts with a message; then a user requests the credit card information from the customer. Next, the call activity starts with the event start booking. At least a flight or a hotel is booked automatically and the transaction ends. After the call activity, the credit card is charged automatically and the whole process ends. During the card charging, a charging error can happen and the booking is compensated. At this point, if the retry limit of the credit card is exceeded, then the booking transaction starts again. In case the retry limit of the credit card is not exceeded, the customer is automatically notified about this situation and the process ends.
In addition, we have three event sub-processes in the call activity. The first one is triggered externally, when the credit card info needs to be updated, which is carried out by a user. This event does not interrupt the call activity. The second event sub-process starts when the event compensate booking is received, which is also triggered externally, by the throwing event of the same name (after the charging error). The compensation tasks of booking flight and hotel are carried out in sequence and automatically. Then a user updates the customer record. Compensation means to move back, to go to the start of the corresponding task. The name of the compensation tasks here is cancel.
The third event sub-process is booking error 1, which catches any other error internal or external to the transaction. In this case, both booking tasks are compensated and the flow leaves the transaction through the boundary event booking error 2. The rest of the model is trivial.

3.3 Decision Modelling and Notation Correctness


The solutions of this Section are implemented in Numbers of MAC OS. You can access the sheet through the link solutions-dmn at any time.

Consider the decision table of Fig. 3.30. The Table shows a set of four rules for determining the creditworthiness of a customer according to the income and assets, measured in currency units. We will try to answer the following questions. What is the creditworthiness of a customer that (all quantities in thousands) has:

1. An income of €25 and €60 in assets?

2. An income of €25 and €120 in assets?

3. An income of €35 and €120 in assets?

4. An income of €35 and €60 in assets?

Figure 3.30: A Simple DMN Table. It describes the decision about the creditworthiness of a customer. It has four rules and two inputs. Every DMN Table requires only one value for the output.

These questions can be complex to answer, especially if the DMN Table has more conditions or rules and a larger number of inputs.
First, we will start with some definitions. A decision rule is a relation that relates a number of rule inputs with only one rule output. Each input or output takes a value or a subset of values. An input can take values of some type: Boolean, string, integer, or double.13 A DMN Table is a table (or matrix) whose rows are decision rules. Thus, a rule assigns only one output to a combination of inputs (a tuple).

13 A Boolean variable takes the values true or false. A string variable takes a combination of characters (e.g., "this is a string"). An integer is clear. A double corresponds to the real numbers.
Now, we will try to answer the above questions: input seems the Real numbers.

1. A customer with an income of €25 belongs to the set [20, 30].14 Rule 1 is applicable. In addition, the income value 25 belongs to the range denoted by ≤ 40, which can be an input of rule 2. Up to here, rules 1 and 2 can be applied to the customer. Additionally, the customer owns €60 in assets, so only rule 1 can be applied. In conclusion, the output for the customer with an income of €25 and €60 in assets has only one result: no. Perfect!

14 We change the notation a little. The DMN values denote the range as [20..30]; in the text we will instead write [20, 30].

2. Next, consider a customer that has an income of €25 and €120 in assets. We can apply rule 1 and also rule 2. This would not be a problem if the outputs were the same for both rules, but this is not the case: according to rule 1 the output is no, and according to rule 2 the output is yes! We will need to decide which rule (1 or 2) to apply in this case.

3. For the customer that has an income of €35 and €120 in assets, we can also apply two rules: rules 2 and 3. Both rules have the output no. This is no problem and we do not need an additional decision for selecting the rule to apply.

4. Now, consider a customer that has an income of €35 and €60 in assets. With an income of €35 we could apply rule 3. But this rule requires assets greater than 100 and the customer has only €60. There exists no rule for this customer! This is a very dangerous situation.

Figure 3.31: A Simple DMN Table. It describes the decision about the creditworthiness of a customer. It has four rules and two inputs. Every DMN Table requires only one value for the output.

In summary, when we apply the rules to a customer, we find the following four cases:

1. Only one rule is applicable; then we obtain only one output. This is the ideal case.

2. Multiple rules are applicable and the rules produce different outputs. We need an additional rule for selecting the rule that produces the desired output.

3. Multiple rules are applicable, but all rules produce the same output. No problem.

4. No rule is applicable. This case is dangerous and we will need to review the complete DMN Table.

The DMN standard provides a set of rules for selecting rules in each case. The decision criterion for selecting a rule is called the hit policy. Let us see. For the first case (only one rule is applicable) the hit policy is UNIQUE (U). For the second case (multiple rules with different outputs) we have two options: COLLECT (C), i.e., we accept the collection of outputs from all rules, and FIRST (F), where we obtain the output only of the first rule. For the third case (multiple rules with the same output), the name of the hit policy is ANY (A), because we can select any rule. Finally, the fourth case (no rule is applicable) needs a revision of the complete DMN Table.
To be consistent with this exposition, we wrote the hit policies, i.e., the rules for applying rules, in the DMN Table of Fig. 3.32.

Figure 3.32: DMN Table for the Hit Policies. It presents the rules for selecting rules in a DMN Table.

The DMN Table of Fig. 3.32 presents the rules for selecting rules in any DMN Table. For all cases, if only one rule is applicable, then the hit policy will be UNIQUE. If the number of desired outputs is one, then we can always use the hit policy FIRST. If there are tuples of inputs that correspond to many equal outputs, or to only one output, then the hit policy will be ANY. Finally, let us observe that the hit policy COLLECT can always be applied, but we obtain a list of outputs, including the case where only one rule is applicable. Instead, by applying the UNIQUE, FIRST, or ANY policies, we will obtain only one output, not a list of outputs.
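A minimal sketch of how these hit policies behave when evaluating a table (the rule encoding below is only illustrative, not the DMN interchange format):

def evaluate(rules, hit_policy, **inputs):
    """rules: list of (condition, output) pairs, where each condition is a
    predicate over the inputs. Applies the given hit policy to the matches."""
    outputs = [output for condition, output in rules if condition(**inputs)]
    if not outputs:
        raise ValueError("no rule applies: the table must be reviewed")
    if hit_policy == "COLLECT":        # C: keep the whole list of outputs
        return outputs
    if hit_policy == "FIRST":          # F: output of the first matching rule
        return outputs[0]
    if hit_policy == "ANY":            # A: all matching rules must agree
        assert len(set(outputs)) == 1, "ANY requires equal outputs"
        return outputs[0]
    if hit_policy == "UNIQUE":         # U: exactly one rule may match
        assert len(outputs) == 1, "UNIQUE requires a single matching rule"
        return outputs[0]
    raise ValueError("unknown hit policy")

# Two illustrative rules in the spirit of the creditworthiness example
# (they are not the exact rules of Fig. 3.30, which is not reproduced here).
rules = [
    (lambda income, assets: 20 <= income <= 30 and assets <= 100, "no"),
    (lambda income, assets: income <= 40 and assets > 100, "yes"),
]
print(evaluate(rules, "UNIQUE", income=25, assets=60))   # -> "no"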
Now, we will carry out a more systematic analysis of the DMN Table of Fig. 3.30. That is, given a DMN Table, we will determine which of the cases we obtain: only one output, multiple and different outputs, multiple and equal outputs, or no output. As we said, the last case is not admissible. The analysis consists in determining an extensive decision table, i.e. a table that classifies every case (or customer) into disjoint sets, so that the tuples of inputs are disjoint. Then, we apply the rules of the original DMN Table to each tuple of disjoint inputs and obtain the corresponding outputs, verifying each case.

Figure 3.33: A Simple DMN Table. It describes the decision about the creditworthiness of a customer. It has four rules and two inputs. Every DMN Table requires only one value for the output.

We apply the method in the following problem:

Problem 3.46. For the DMN Table of Fig. 3.33 determine the extensive decision table, apply the rules to each tuple of inputs of this table, and determine an adequate hit policy for the DMN Table, if any.
Solution 3.46 For determining an extensive decision table, we need to consider each input and write the partition of its possible values. The DMN Table of Fig. 3.33 has two inputs. The set of values of the input income is [20, ∞[ and, according to the DMN Table, the partition15 of this set is [20, 30] ∪ ]30, 40] ∪ ]40, ∞[. The set of values of the input assets is [0, ∞[ and, according to the DMN Table, the partition is [0, 100] ∪ ]100, ∞[. Let us observe that the number of subsets of the partition of the values of income is 3 and the number of subsets of the partition of the input assets is 2. Then we will have 3 × 2 = 6 combinations of tuples that classify all customers. This is shown in the extensive table on the left side of Fig. 3.34.

15 Remember: a partition of a set is a set of subsets such that they are pairwise disjoint and their union is the original set.

Now, we will apply the four rules of the original DMN Table of Fig. 3.33 to each exclusive set of values of the extensive table. We will consider a representative customer for each subset. This is shown in the right table of Fig. 3.34. Let us study this extensive table in detail. We have six representative customers and we will apply each rule of the extensive table (not of the original DMN Table!) to every customer. To customer 1 only rule 1 is applicable, obtaining the output no. To customer 2 rules 1 and 2 are applicable, obtaining contradictory outputs: yes and no, respectively. Customer 3 does not have an applicable rule. To customer 4 rules 2 and 3 are applicable, obtaining the same output yes. To customers 5 and 6 rule 4 is applicable, obtaining the output yes.

Figure 3.34: The Extensive Table With Simulation. It presents an extensive version of the original DMN Table of Fig. 3.30. The left table contains the partitions of the input values. The right one contains the calculation for representative customers.

In summary, the original DMN Table of Fig. 3.30 is not admissible, because customer 3 does not have an output. Thus, no hit policy is admissible. We need to review the complete table.

Before continuing with the next problem, we will explain in more detail how to determine the extensive table, because it is sometimes confusing. The key part is to determine the partition of the values of each input. We said that, in the case of the DMN Table of Fig. 3.33, the universe of values of the input income is [20, ∞[ and the partition of this set is [20, 30] ∪ ]30, 40] ∪ ]40, ∞[ with 3 subsets. The set of values of the input assets is [0, ∞[ and has the partition [0, 100] ∪ ]100, ∞[ with 2 subsets. Then, we will have 3 × 2 = 6 combinations of tuples that classify all customers. Now, we determine each row of the extensive table: the total number of disjoint tuples (6) divided by the number of subsets (3) of the input income gives 2 rows; in other words, the six rows are divided into 3 groups of 2 different rows. Then, each group of 2 rows is divided by the number of subsets (2) of the second input assets, that is, 2/2 = 1. Thus, the rows of the extensive table are obtained as shown in the left table of Fig. 3.34.
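A minimal sketch of this construction, enumerating the disjoint tuples as a cross product of the partitions (the interval labels are illustrative strings; any representation of the subsets would do):

from itertools import product

# Partitions of the two inputs of the DMN Table of Fig. 3.33.
income_partition = ["[20,30]", "]30,40]", "]40,inf["]
assets_partition = ["[0,100]", "]100,inf["]

# The extensive table enumerates the 3 x 2 = 6 disjoint tuples.
extensive_table = list(product(income_partition, assets_partition))
for row, (income, assets) in enumerate(extensive_table, start=1):
    # 3 groups of 2 rows: income varies slowest, assets alternates within each group.
    print(row, income, assets)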

Problem 3.47. Consider the DMN Table of Fig. 3.35. Obtain the extensive decision table and determine the best hit policy, if any.

Figure 3.35: The Applicant Risk Rating. It classifies applicants according to age and medical history.

Solution 3.47 The solution is the table of Fig. 3.36. Let us observe that we simulated the table by using a MAC OS Numbers sheet. You can use MS Excel without any difference. We show the formula for the calculation of applying rule 1 to customer 5.
Let us assume that, because of the logic of the problem, we need to obtain only one output. Because each input tuple corresponds to only one output, the best hit policy is UNIQUE.

Figure 3.36: The Applicant


Risk Rating. Extensive table
of the DMN Table of Fig. 3.35.
We show the formula in the
Numbers sheet.

Problem 3.48. Consider the DMN Table of Fig. 3.37. Obtain the
extensive decision table and determine the best hit policy, if any.

Solution 3.48 The solution is the table of Fig. 3.38. It is reasonable to assume that we need to obtain only one output from this decision table. Because there are input tuples that correspond to many equal outputs, the best hit policy is ANY.

Figure 3.37: Personal Loan Compliance. It classifies applicants for a loan according to the personal rating, card balance, and education loan balance.

Figure 3.38: Personal Loan Compliance. The extensive table of the DMN Table of Fig. 3.37. The best hit policy is ANY.

Problem 3.49. Consider the DMN Table of Fig. 3.39. Obtain the
extensive decision table and determine the best hit policy, if any.

Figure 3.39: The Applicant Risk Rating (2nd Version). It classifies applicants according to the applicant age and medical history.

Solution 3.49 The solution is the table of Fig. 3.40. It is reasonable to assume that we need to obtain only one output from this decision table. Because there is at least one input tuple (see employee 6) that corresponds to many different outputs, the best hit policy is FIRST.

Figure 3.40: The Applicant Risk Rating (2nd Version). The extensive table of the table of Fig. 3.39. Because there is at least one input tuple (see employee 6) that corresponds to many different outputs, the best hit policy is FIRST.

Problem 3.50. Consider the DMN Table of Fig. 3.41. Obtain the
extensive decision table and determine the best hit policy, if any.

Solution 3.50 The solution is the table of Fig. 3.42. It is reasonable to assume that we need to obtain all outputs, because the company will apply a total discount to its customer. The best hit policy is COLLECT.

Figure 3.41: Special Discount. It assigns discounts to a customer.

Figure 3.42: Special Discount. Extensive table of the DMN Table of Fig. 3.41. The best hit policy is COLLECT.

Problem 3.51. Consider the DMN Table of Fig. 3.43. Obtain the
extensive decision table and determine the best hit policy, if any.

Solution 3.51 The solution is the table of Fig. 3.44. It is reasonable to assume that we need to obtain all outputs, because the company will apply a sum of the total days for the holidays. The best hit policy is COLLECT.

Figure 3.43: Holidays. Rules for calculating the number of holiday days.
Problem 3.52. Consider the DMN Table of Fig. 3.45. Obtain the
extensive decision table and determine the best hit policy, if any.

Solution 3.52 The solution is the table of Fig. 3.46. It is reasonable to assume that we need to obtain all outputs, because the company will apply a sum of the total days for the holidays. The best hit policy is COLLECT.

Figure 3.44: Holidays. The extensive table of the DMN Table of Fig. 3.43. Because the company will apply a sum of the total days for the holidays, the best hit policy is COLLECT.

Figure 3.45: Holidays (2nd Version). It assigns the number of days of holidays to an employee.

Problem 3.53. Consider the DMN Table of Fig. 3.47. Obtain the
extensive decision table and determine the best hit policy, if any.

Solution 3.53 The solution is the table of Fig. 3.48. It is reasonable to assume that we need to obtain only one output, because the university will assign only one discount. Because student 11 would obtain two discounts, the best hit policy will be FIRST.

3.4 Decision Modelling and Implementation


The solutions of this Section are implemented in Numbers of MAC OS. You can access the sheet through the link solutions-dmn at any time.

Problem 3.54. Consider the following situation and answer the questions below.
For determining the final price, it is necessary to know the member type and whether the day of the purchase is a Black Friday. In fact, if the member is gold and the day is a Black Friday, then the final price is $69.99. For any other day, the normal price is $99.99, except when the member is silver, in which case the price is $89.99.

(a) Model the corresponding DMN Table and determine the best hit policy, if any. (b) Implement the DMN Table in Signavio Academic16 and verify that the selected hit policy is correct, if any.

16 We use the Cloud application of Signavio under a University license. The link to the application is signavio-academic.

Figure 3.46: Holidays (2nd Version). The extensive table of the DMN Table of Fig. 3.45. Because the company will apply a sum of the total days for the holidays, the best hit policy is COLLECT.

Figure 3.47: Student Financial Package Eligibility. It assigns a discount on study fees according to the student GPA, the student extra-curricular activities, and the student national honor society membership.

Solution 3.54 (a) The DMN Table is shown in Fig. 3.49, while the extensive table with the corresponding simulation is shown in Fig. 3.50. Let us observe that the case silver and true has no answer from the DMN Table, so it is not possible to apply any hit policy. Thus, the table must be modified for applications. (b) The implementation in Signavio Academic is not shown in this book.

Figure 3.48: Student Financial Package Eligibility. The extensive table of the DMN Table of Fig. 3.47. Assume that the university will assign only one discount. Because student 11 would obtain two discounts, the best hit policy will be FIRST.

Figure 3.49: Final Price. The DMN Table contains only three rules according to the description of the case.

Figure 3.50: Final Price. The extensive table with the corresponding simulation. Let us observe that the case silver and true has no answer from the DMN Table, so it is not possible to apply any hit policy. Thus the table must be modified for applications.

Problem 3.55. Consider the following situation and answer the further questions.
The decision of a pre-qualification of creditworthiness depends on three criteria: the credit score rating, which can take the values poor, bad, fair, good, and excellent; the back end ratio; and the front end ratio. The last two inputs can be sufficient or insufficient. There are basically three rules: (1) If the credit score rating is poor or bad, then the pre-qualification is not qualified. (2) If the back end ratio or the front end ratio is insufficient, then the pre-qualification is not qualified. (3) The pre-qualification is qualified in case the back end and the front end ratios are sufficient and the credit score rating is at least fair. Additionally, the credit score rating categories are defined as in Table 3.1.

(a) Model the corresponding DMN Table and determine the best hit policy, if any. (b) Implement the DMN Table in Signavio Academic and verify that the selected hit policy is correct, if any.

Table 3.1: Definition of the credit score rating categories. The output of this table is used as input in the DMN Table pre-qualification.

credit score    credit score rating
≥ 750           excellent
[700..750[      good
[650..700[      fair
[600..650[      poor
< 600           bad

Solution 3.55 (a) The DMN Table is shown in Fig. 3.51, while the extensive table with the corresponding simulation is shown in Fig. 3.52. Let us observe that there are cases with more than one output, but the outputs are the same. Therefore, the hit policy ANY is applicable. (b) The implementation in Signavio Academic is not shown in this book. However, we must consider a DMN diagram in which the output of Table 3.1 is the input for the Table of Fig. 3.51.

Problem 3.56. Consider the following situation and answer the fur-
ther questions.

Figure 3.51: Pre-qualification.


The DMN Table contains five
rules according to the descrip-
tion of the case.

Figure 3.52: Pre-qualification. The extensive table with the corresponding simulation. Let us observe that there are cases with more than one output, but the outputs are the same. Therefore, the hit policy ANY is applicable.

The workers and the company agreed on at least 20 vacation days yearly, plus additional days according to the worker's age and years of service. Each worker has the right to additional days according to the following rules: (a) if the age is less than 25 or greater than or equal to 60, then he will have 5 additional days. (b) If the years of service are greater than or equal to 30, he will receive 5 days too. (c) The most common case is when the worker is between 18 and 60 years old (both inclusive) and the years of service are between 15 and 30 (also inclusive); then the additional days are 2. (d) The last case is when the age is between 45 and 60 (inclusive) and the years of service are less than 30; then the worker will have 2 additional days.

(a) Model the corresponding DMN Table and determine the best hit policy, if any. (b) Implement the DMN Table in Signavio Academic and verify that the selected hit policy is correct, if any.

Solution 3.56 (a) The DMN Table is shown in Fig. 3.53, while the extensive table with the corresponding simulation is shown in Fig. 3.54. Let us observe that there are cases with more than one output, and the outputs must be added to obtain the total days of vacation. Therefore, the hit policy COLLECT is applicable, because we need to sum the total days of vacation. (b) The implementation in Signavio Academic is not shown in this book.
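A minimal sketch of the COLLECT behaviour of this table, collecting and summing the outputs of all matching rules (names are illustrative; the 20 base days are added separately):

def additional_vacation_days(age, years_of_service):
    """Collect the outputs of all matching rules and sum them (hit policy COLLECT)."""
    outputs = []
    if age < 25 or age >= 60:
        outputs.append(5)                                   # rule (a)
    if years_of_service >= 30:
        outputs.append(5)                                   # rule (b)
    if 18 <= age <= 60 and 15 <= years_of_service <= 30:
        outputs.append(2)                                   # rule (c)
    if 45 <= age <= 60 and years_of_service < 30:
        outputs.append(2)                                   # rule (d)
    return sum(outputs)

print(20 + additional_vacation_days(58, 20))   # 20 base days + 2 + 2 = 24 days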

Figure 3.53: Holidays. The


DMN Table contains five rules
according to the description of
the case.

Figure 3.54: Holidays. The extensive table with the corresponding simulation. Let us observe that there are cases with more than one output. Although they are the same, they must be summed. Therefore, the hit policy COLLECT is the best.

Figure 3.55: Nobel Prize Pro-


cess. This model describes the
process of the Medicine Nobel
Prize. Five participants are part
of the collaboration diagram.
4 Analysis and Optimisation

4.1 Stochastic Simulation of BPMN Models

A Business Process can be considered as an ordered set of queues or waiting lines. On the one hand, simple queues with some stochastic characteristics can be described mathematically very well, but sets of queues, as in business processes, are very difficult to model. For this reason, we will start with basic concepts of Queue Theory that are applicable in very simple cases; then we will continue with stochastic simulation in the case of business processes.
Waiting queues are one of the most common phenomena in people's lives. We have all experienced the inconvenience of waiting in line to shop or waiting for an order to be processed on the computer. This phenomenon is recurrent in business processes: orders queue in the order processing unit, a customer waits in the cashier line to receive service, customers wait in the shopping store. Time is a valuable resource, which is why it is an important topic of our analysis.
The emphasis on quality of service has motivated organisations to reduce waiting times. More and more companies are striving to reduce waiting times as an important component of quality improvement. However, companies can reduce waiting times by increasing their service capacity, which generally means adding more servers or processors. For example, more cashiers are added in a bank, more employees in the order processing unit, or more computational processing capacity. The consequence is clear: the increase in resources has a monetary cost. Therefore, there is a trade-off between the cost of the improved service and the cost of making customers wait.

Queuing Theory offers many models and variants of situations in which queues occur. The theory starts with the components of a Simple Queue, simple elements that we will explain below: a queue of clients, one or more servers that serve a single queue, and an arrival discipline that we will assume to be FIFO (first-in-first-out). This simple model will suffice to understand the elements of Queue Theory and business process simulation.
Queues are formed because people or clients arrive at a server faster than they can be served. However, this does not mean that the server has little processing capacity. In fact, most organisations have sufficient available service capacity, or gaps. Queues originate for two reasons: first, because customers do not arrive at a constant rate; second, because customers are also not served at a constant rate. Customers arrive randomly and the time required to serve them is also random. Therefore, the queues continually increase and decrease in length, and many times they are even empty. However, in the long run they tend toward an average length. For example, customers who reach a teller at a bank may be serviced by a sufficient number of employees to serve an average of 12 customers per hour, and at a particular hour only 2 customers may arrive, so the size of the queue is almost zero. At other times, during rush hour, queues may form because, for example, 100 customers arrive and the cashier is able to serve only 60 per hour.

Components of a Simple Queue This is the simplest form of queuing system. As such, it will be used to demonstrate the fundamentals of a queuing system and the stochastic simulation of business processes. Let us consider the following example.
A bank has a single counter and an employee who operates it. The counter and operator combination is the server (or service) in this simple queuing system. Clients lining up at the server to request service form the queue (or waiting line). The configuration of this example queuing system is shown in Figure 4.1.

Figure 4.1: Components of a


Simple Queue. The most im-
portant components we will
study for the stochastic simula-
tion are taken from the Queue
Theory.

The most important components we will study for the stochastic simulation, based on Queue Theory, are the following:

1. the queue discipline, i.e. the order in which the clients are served,

2. the source of clients,

3. the characteristics of the arrivals, and

4. the characteristics of the service.


Queue discipline is the order in which waiting customers are queue discipline
served. Bank customers are issued on a first-in-first-out basis. That
is, the first person in line at the checkout is attended first. This is the
most common type of queuing discipline. However, other disciplines
are also possible. For example, an order processing unit employee
can match requests to so that the last order is on top and then selects
the first order in the package. This discipline of the tail is known as
last-in- first-out (LIFO). The clerk could also simply select a random
order. In this case, the queue discipline is random. In many cases,
customers are served according to a pre-established schedule, re-
gardless of when they arrive at the facility. A final example of the
many types of queuing disciplines is when clients are processed ac-
cording to a unique characteristic such as ID number or age, which
is assigned according to a priority. We will consider in this book the
simplest case called first-in-first-out (FIFO). first-in-first-out discipline
The source of the clients we will suppose to be infinite. In other words, there is a very large number of potential customers in the area where the bank is located. Some queuing systems have finite calling populations. For example, the order receiving unit receives only 20 orders a day. The client population in this case is 20 and is therefore finite. However, queuing systems with a supposedly infinite calling population are more common.
The arrival rate is the rate at which clients reach the server during a specific period of time. This rate is estimated from empirical data derived from the study of the system or of a similar system, or it can be an average of these real data. For example, if a customer arrives at the bank counter every 2.5 minutes between 12 and 13 hours, the arrival rate would be 1/2.5 = 0.4 [min−1]. However, although we could determine an arrival rate by counting the number of customers paying in the market for an entire day, we would not know exactly how they are distributed during the day. In other words, it could be that no client arrives between 9 and 10 in the morning, but more clients arrive between 12 and 13 hours. In general, these arrivals are assumed to be independent of each other, varying randomly over time but with a rate that is constant over a determined period.
Given these assumptions, we will further assume that arrivals at a service facility follow a probability distribution. It has been determined (through years of research and practical experience) that the number of arrivals per unit of time at a service facility can often be described by a Poisson distribution. If this is so, it can be shown mathematically that the time between arrivals follows an exponential distribution: the number of arrivals per unit of time is Poisson distributed if and only if the time between arrivals is exponentially distributed.
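To make this equivalence concrete, the following short Python sketch (an illustration added here, not part of the original text) samples exponential interarrival times and counts how many arrivals fall into each one-minute window; for a Poisson distribution the mean and the variance of these counts coincide.

import random

random.seed(1)
rate = 1 / 2.5          # arrival rate lambda, in arrivals per minute
horizon = 100_000       # number of simulated minutes

# Sample exponential interarrival times and record the arrival instants.
t, arrivals = 0.0, []
while t < horizon:
    t += random.expovariate(rate)
    arrivals.append(t)

# Count the arrivals falling into each one-minute window.
counts = [0] * horizon
for a in arrivals:
    if a < horizon:
        counts[int(a)] += 1

mean = sum(counts) / horizon
var = sum((c - mean) ** 2 for c in counts) / horizon
print(f"mean = {mean:.3f}, variance = {var:.3f}")   # both close to 0.4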
The service rate is the average number of clients that can be served during a specific period of time. In our example, the cashier can serve 1 customer every 4 minutes. The service time, as well as the time between client arrivals, is a random variable. Both will be considered exponentially distributed. In this way, the numbers of clients that arrive and that are served per unit of time are distributed according to a Poisson distribution.
The example of a single queue in a bank is an example of a single-server queue system with the following characteristics:

1. an infinite customer population,

2. a FIFO queue discipline,

3. exponential time between arrivals, and

4. exponential service time.

Although the analytical derivation of this queuing model is simple, changing the system requires a complex analysis that most of the time cannot be carried out analytically. Stochastic simulation is used in these cases. Therefore, we will refrain from deriving the model in detail and consider only the formulas of the simple system in order to introduce the components of more complex systems. For this reason, the reader should keep in mind that the formulas below are applicable only to simple queuing systems.
Given the arrival rate λ and the service rate µ, with λ < µ, the Queue Theory provides formulas for calculating several characteristics. The probability that no customers are in the queuing system (either in the queue or being served) is given by

P0 = 1 − λ/µ.

The probability that n customers are in the queuing system (either in the queue or being served) is

Pn = (λ/µ)^n P0 = (λ/µ)^n (1 − λ/µ).

The cycle time, i.e. the average time a customer spends in the queuing system (either in the queue or being served), is

W = 1/(µ − λ).

The queue time, i.e. the average time a customer spends waiting in the queue to be served, is

Wq = λ/(µ(µ − λ)).

The system length, i.e. the average number of customers in the queuing system, is given by the Little Law

L = λW.

Observe that

L = λW = λ · 1/(µ − λ) = λ/(µ − λ).

The queue length, i.e. the average number of customers in the queue, is also given by the Little Law

Lq = λWq.

Observe that

Lq = λWq = λ · λ/(µ(µ − λ)) = λ²/(µ(µ − λ)).

The utilisation rate of the server, i.e. the probability that the server is busy (i.e., the probability that an arriving customer has to wait), is

U = λ/µ.

The idle rate of the server, i.e. the probability that the server is idle (i.e., the probability that an arriving customer can be served immediately), is

I = 1 − U = 1 − λ/µ.

Let us observe that the last term, 1 − λ/µ, is also equal to P0; that is, the probability of no customers in the queuing system is the same as the probability that the server is idle.
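These closed-form expressions can be collected in a small helper function. The following Python sketch (added as an illustration; the function name mm1_metrics is ours, not from the text) evaluates P0, W, Wq, L, Lq, U, and I for a simple single-server queue with λ < µ.

def mm1_metrics(lam, mu):
    """Closed-form measures of a simple (single-server) queue with lam < mu."""
    if lam >= mu:
        raise ValueError("the system is unstable unless lam < mu")
    rho = lam / mu                          # utilisation rate U
    return {
        "P0": 1 - rho,                      # probability of an empty system
        "W": 1 / (mu - lam),                # cycle time
        "Wq": lam / (mu * (mu - lam)),      # queue time
        "L": lam / (mu - lam),              # customers in the system (Little Law)
        "Lq": lam**2 / (mu * (mu - lam)),   # customers in the queue
        "U": rho,                           # probability the server is busy
        "I": 1 - rho,                       # probability the server is idle
    }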
We will continue with exercises and problems. You can find the original models in BPMN 2.0 and the corresponding models with data for simulation, i.e. with the resources and times, in the following link: models.

Figure 4.2: Simple Queue System. The system satisfies the assumptions of a simple queue system: FIFO customer discipline, infinite customers. The time between arrivals and the processing time are exponentially distributed.

Problem 4.1. Let us suppose a bank office with a unique cashier and queue. Between 12pm and 1pm customers arrive every 2.5 minutes on average. The cashier is able to process every customer in 2 minutes on average. The number of arrivals per unit of time is Poisson distributed and the time for serving each customer is exponentially distributed. Answer: (a) Characterise the queue system. (b) Calculate P0, W, Wq, L, Lq, U, and I by using the formulas provided by the Queue Theory. (c) Consider the BPMN model of Fig. 4.2 representing the simple queue system. Calculate P0, W, Wq, L, Lq, U, and I by using a simulator and compare the results with those obtained in (b).

Solution 4.1 (a) The bank office has a unique cashier and queue. The queue discipline is FIFO (first-in-first-out), the queue is supposed to receive infinite customers, and the time between arrivals and the processing time are exponentially distributed (remember that this is equivalent to the number of arrivals and the number of processed customers per unit of time being Poisson distributed). This characterisation corresponds to a simple queue.
(b) We are given λ = 1/2.5 [min−1] and µ = 1/2 [min−1]. It is first important to verify that λ < µ, which is true. With this information, we can calculate P0 = 1 − 2.0/2.5 = 0.2, W = 1/(1/2.0 − 1/2.5) = 10.0 [min], Wq = (1/2.5)/((1/2.0) · (1/2.0 − 1/2.5)) = 8.0 [min], L = (1/2.5) · 10.0 = 4.0 [-], Lq = (1/2.5) · 8.0 = 3.2 [-], U = (1/2.5)/(1/2.0) = 0.8, and P0 = I = 1 − 0.8 = 0.2. Let us observe that, because the service time is 2.0 [min], W = Wq + 2.0 = 8.0 + 2.0 = 10.0, i.e. the cycle time is equal to the queue time plus the processing time of each customer.
(c) The simulation was carried out with the software BIMP (see the Web Page https://bimp.cs.ut.ee/). The results are: W = 9.6 [min], Wq = 7.6 [min], and U = 0.79. L and Lq are not given by the software, so we use the Little Law: L = λW = (1/2.5) · 9.6 = 3.8 [-], Lq = λWq = (1/2.5) · 7.6 = 3.0 [-], and P0 = I = 1 − 0.79 = 0.21. Let us note that these results are not exactly equal to those obtained from the Queue Theory in part (b), but they are similar. The differences depend on the number of runs and on the construction of the simulator software. However, they are good enough for our purposes.
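As a quick cross-check of part (b), the mm1_metrics helper sketched after the Queue Theory formulas above (a hypothetical function, not part of the original text) reproduces the analytical values:

# Assumes the mm1_metrics sketch defined earlier in this section.
for name, value in mm1_metrics(lam=1 / 2.5, mu=1 / 2).items():
    print(f"{name} = {value:.2f}")
# P0 = 0.20, W = 10.00, Wq = 8.00, L = 4.00, Lq = 3.20, U = 0.80, I = 0.20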

Figure 4.3: Base Case. Each task satisfies the assumptions of a simple queue system: FIFO customer discipline, infinite customers, and exponentially distributed time between arrivals and processing time. Each task can be assumed approximately to be a simple queue system.
Problem 4.2. Model the process of Fig. 4.3 in BPMN and simulate
it. Consider the time random variables as exponentially distributed,
FIFO queue discipline, and infinite customers. By simulating the
model obtain W, Wq , L, Lq for the complete process and U and I for
each resource.

Solution 4.2 Each task satisfies the assumptions of a simple queue system: FIFO customer discipline, infinite customers, and exponentially distributed time between arrivals and processing time. Each task can be assumed approximately to be a simple queue system. The first problem we have to solve is the minimal number of resources assuring the condition λ < µ for each task. Let us see.
The only condition not yet verified is λ < µ, i.e. the processing demand must be strictly less than the processing offer. Since λ = 1/2.5 and µ = 1/4.0, the condition λ < µ is not satisfied. How to solve this? The answer is by adding resources. But remember that the condition λ < µ is valid for the case of a simple queue system, where the system has only one resource. If we had more resources, we could increase the processing capacity, but the queue system would not be the same. For this reason, we will add more resources, namely n, and accept the mathematical approximation. Therefore, the condition to satisfy will be λ < nµ, where n is the number of resources.
By assuming this approximation, we will compute the minimal number of resources n satisfying λ < nµ. Thus, for the role A in Fig. 4.3, the minimal number n_A of resources will be

1/2.5 < n_A · 1/4.0  →  n_A > 4.0/2.5 = 1.6, then we take the value n_A = 2.

The same procedure is carried out for the role B, i.e. n_B = 2.
The simulation was carried out with the software BIMP (see the Web Page https://bimp.cs.ut.ee/). The results are the following: W = 20.9 [min]; Wq is obtained by summing the queue times of each task, i.e. Wq = 6.6 + 6.4 = 13.0 [min]. L and Lq are not directly obtained from the simulator, therefore they must be calculated by using the Little Law, i.e.

L = λW = (1/2.5) · 20.9 = 8.4 [-]  and  Lq = λWq = (1/2.5) · 13.0 = 5.2 [-].

The utilisation rates are U_A = 0.79 and U_B = 0.79 for roles A and B respectively. I_A and I_B are easily computed by I_A = 1 − U_A and I_B = 1 − U_B respectively.
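The sizing step used here and in the following problems, finding the smallest integer n with λ < nµ, can be sketched in a few lines of Python (an illustration; the function name min_resources is ours, not from the text):

import math

def min_resources(lam, mu):
    """Smallest integer n such that lam < n * mu (strict inequality)."""
    n = math.ceil(lam / mu)
    if n * mu <= lam:      # lam/mu was an exact integer, so strictness needs one more
        n += 1
    return n

print(min_resources(1 / 2.5, 1 / 4.0))   # role A of the base case: 2
print(min_resources(1 / 2.5, 1 / 8.0))   # pooled role AB of Problem 4.3: 4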

Problem 4.3. In this problem we will evaluate the flexibilisation strategy, which consists in assigning different tasks to a pool of resources. Each resource is randomly assigned to a task if it is not busy. This strategy has the advantage that the resources are shared among the different tasks as they are demanded. Model the process of Fig. 4.4 in BPMN and simulate it. Consider the time random variables as exponentially distributed, FIFO queue discipline, and infinite customers. By simulating the model obtain W, Wq, L, Lq for the complete process, and U and I for each resource.

Figure 4.4: Flexibilisation Strategy. It consists in assigning different tasks to a pool of resources, which are assigned randomly if the resource is not busy.

Solution 4.3 We follow the same argument as in the above problem, i.e. each task can be assumed approximately to be a simple queue system and we need to compute the minimal number of resources n satisfying λ < nµ. Thus, for the role AB in Fig. 4.4, the minimal number n_AB of resources will be

1/2.5 < n_AB · 1/(4.0 + 4.0)  →  n_AB > 8.0/2.5 = 3.2, we take the value n_AB = 4.

The results are: W = 12.5 [min]. The queue time for the task A is 2.3 [min] and for the task B 2.3 [min]; then the queue time for the process is Wq = 2.3 + 2.3 = 4.6 [min]. According to the Little Law,

L = λW = (1/2.5) · 12.5 = 5.0 [-]  and  Lq = λWq = (1/2.5) · 4.6 = 1.8 [-].

The utilisation rate is U_AB = 0.79 for the role AB. I_AB is easily computed by I_AB = 1 − U_AB.

Figure 4.5: Parallelisation Strategy. It consists in breaking the sequence of tasks. This strategy is obviously better than the base case, but rarely possible.

Problem 4.4. In this problem we will evaluate the parallelisation strategy, which consists in breaking the sequence of tasks. This strategy is obviously better than the base case, but rarely possible. Indeed, it can be implemented only if no output of the previous task is an input for the next task. Model the process of Fig. 4.5 in BPMN and simulate it. Consider the time random variables as exponentially distributed, FIFO queue discipline, and infinite customers. By simulating the model obtain W, Wq, L, Lq for the complete process, and U and I for each resource.

Solution 4.4 We follow the same argument as in the above problems, i.e. each task can be assumed approximately to be a simple queue system and we need to compute the minimal number of resources n satisfying λ < nµ. Thus, for the roles A and B in Fig. 4.5, the minimal numbers n_A and n_B of resources will be

1/2.5 < n_A · 1/4.0  →  n_A > 4.0/2.5 = 1.6, we take the value n_A = 2.

The same procedure is carried out for the role B, i.e. n_B = 2. The results are: W = 14.7 [min]. The queue time for the task A is 6.4 [min] and for the task B 6.6 [min], but differently from the above cases the queue time for the process does not correspond to the sum of both tasks, because the tasks are not in sequence. Wq is the maximum of the queue times of both tasks, i.e. Wq = max{6.4, 6.6} = 6.6 [min]. According to the Little Law,

L = λW = (1/2.5) · 14.7 = 5.9 [-]  and  Lq = λWq = (1/2.5) · 6.6 = 2.6 [-].

The utilisation rates are U_A = 0.79 and U_B = 0.80 for roles A and B respectively. I_A and I_B are easily computed by I_A = 1 − U_A and I_B = 1 − U_B respectively.

Figure 4.6: Composition Strategy. It consists in composing several tasks into only one. This strategy is better only if the service time of the composed task is less than the sum of the processing times of the tasks that compose it.

Problem 4.5. In this problem we will evaluate the composition strategy, which consists in composing several tasks into only one. This strategy is better only if the service time of the composed task is less than the sum of the processing times of the tasks that compose it. In this example, the sum of the service times of the tasks to compose is 4.0 + 4.0 = 8.0; we will consider a new task, called AB, with service time 7.0, less than 8.0. The service time of the composed task depends on the real context and there is no general rule for determining it. Model the process of Fig. 4.6 in BPMN and simulate it. Consider the time random variables as exponentially distributed, FIFO queue discipline, and infinite customers. By simulating the model obtain W, Wq, L, Lq for the complete process, and U and I for the resource AB.

Solution 4.5 We follow the same argument as in the above problems, i.e. each task can be assumed approximately to be a simple queue system and we need to compute the minimal number of resources n satisfying λ < nµ. Thus, for the role AB in Fig. 4.6, the minimal number n_AB of resources will be

1/2.5 < n_AB · 1/7.0  →  n_AB > 7.0/2.5 = 2.8, we take the value n_AB = 3.

The results are: W = 30.4 [min]. The queue time is Wq = 23.5 [min]. According to the Little Law,

L = λW = (1/2.5) · 30.4 = 12.1 [-]  and  Lq = λWq = (1/2.5) · 23.5 = 9.4 [-].

The utilisation rate is U_AB = 0.92. The idle rate is I_AB = 1 − 0.92 = 0.08.

Figure 4.7: Triage Strategy. It consists in assigning special resources to particular cases. Triage is a standard strategy used in emergency rooms for prioritising patients according to their disease severity.

Problem 4.6. In this problem we will evaluate the triage strategy, which consists in assigning special resources to particular cases. The assignment is not flexible. Triage is a standard strategy used in emergency rooms for prioritising patients according to their disease severity. In this case, let us suppose that the task A can be divided into two task types: A simple and A complex. Simple tasks need a processing time of 2.7 [min] and have a frequency of 75%, whereas complex tasks need a processing time of 8.0 [min] and have a frequency of 25%. The task B is not subdivided. The design of the triage strategy depends on the real context and there is no general rule for determining it. As in the above problems, consider the time random variables as exponentially distributed, FIFO queue discipline, and infinite customers. By simulating the model obtain W, Wq, L, Lq for the complete process, and U and I for the resources AS, AC, and B.

Solution 4.6 We follow the same argument as in the above problems, i.e. each task can be assumed approximately to be a simple queue system and we need to compute the minimal number of resources n satisfying λ < nµ. Thus, for the role AS in Fig. 4.7, the minimal number n_AS of resources will be

(1/2.5) · 0.75 < n_AS · 1/2.7  →  n_AS > 0.81, we take the value n_AS = 1.

Similarly for the role AC,

(1/2.5) · 0.25 < n_AC · 1/8.0  →  n_AC > 0.80, we take the value n_AC = 1.

Finally, for the resource B, the minimal number n_B of resources will be

1/2.5 < n_B · 1/4.0  →  n_B > 1.6, we take the value n_B = 2.

The results are: W = 32.3 [min]. Wq for the process is more difficult to compute, because it depends on whether the task A simple or the task A complex is executed. According to the results, the queue times are: 11.1 [min] for AS, 36.9 [min] for AC, and 6.7 [min] for B. We will calculate an average, i.e. Wq = 11.1 · 0.75 + 36.9 · 0.25 + 6.7 = 24.3 [min]. According to the Little Law,

L = λW = (1/2.5) · 32.3 = 12.9 [-]  and  Lq = λWq = (1/2.5) · 24.3 = 9.7 [-].

The utilisation rates are U_AS = 0.79, U_AC = 0.80, and U_B = 0.79 for roles AS, AC, and B respectively. The idle rates can be calculated as in the above problems.
So far we have reviewed the simple Queue Theory and the stochastic simulation of simple business process models in BPMN. We have learned how to determine the minimum number of resources n to ensure the condition λ < nµ, where λ is the case arrival rate and µ is the service rate of a task executed by a resource. Remember that the condition λ < nµ is an approximation for queues with more than one server; it is strictly valid only for single-server queue systems.

In addition, in the case of simple business processes we generated improvements from a base case by proposing flexibilisation, parallelisation, composition, and triage. We have simulated all these cases and obtained the values of W, Wq, L, Lq, U, and I for the entire process given the parameters λ, µ, and the number of resources n. What the simulation yields as a result can be divided into two groups: results associated with each resource (e.g., role A and role B) and results associated with each task (e.g., A and B). The utilisation rates U are associated with each resource, while W, Wq, L, and Lq are associated with each task. Let us remember, however, that L and Lq were obtained from the Little Law (L = λW and Lq = λWq respectively). What we want now is to define how to obtain these values for the whole process. Let us see the summary of the values associated with the resources in Table 4.1.

Table 4.1: Resource Table. It shows the results associated to each resource. n_i is the number of resources for the resource i and U_i is the utilisation rate for the resource i.

nr  strategy         resource                n_i      U_i
1   base case        role A, role B          2, 2     0.79, 0.79
2   flexibilisation  role AB                 4        0.79
3   parallelisation  role A, role B          2, 2     0.79, 0.80
4   composition      role AB                 3        0.94
5   triage           role AS, role AC, role B  1, 1, 2  0.79, 0.80, 0.79
On the other hand, Table 4.2 shows the values associated with each task. In this case, the tasks are grouped into paths. The base case has only one path from start to finish: the path A-B. The model of the flexibilisation strategy has the same path A-B. The parallelisation strategy has two possible paths, A-B or B-A, depending on which task is performed first; however, for the purpose of calculating the cycle time of the whole process, the order does not matter. The composition strategy has a single task AB, while the triage strategy has two exclusive paths: AS-B or AC-B.

Table 4.2: Task Table. It shows the results associated to each task for the base case and its improvements. W_i is the cycle time of the task i in minutes.

nr  strategy         path   W_i [min]
1   base case        A-B    11.0, 10.9
2   flexibilisation  A-B    7.5, 7.5
3   parallelisation  A-B    16.3, 10.3
4   composition      AB     31.1
5   triage           AS-B   14.0, 11.1
                     AC-B   34.2, 11.1
Now we will solve the problem of finding U and W for the whole
process using the values obtained for the individual resources and
tasks shown in the tables 4.1 and 4.2 respectively.
Problem 4.7. Find U and W for the complete processes using the
values obtained for the individual resources and tasks shown in the
tables 4.1 and 4.2 respectively.

Table 4.3: Summary of Results. It summarises the results of n, W, and U for the models of Tables 4.1 and 4.2. The values Wq and Lq can be obtained in the same way, but they do not add further concepts to our objectives.

nr  strategy         n   W [min]   U
1   base case        4   21.9      0.79
2   flexibilisation  4   15.0      0.79
3   parallelisation  4   16.3      0.80
4   composition      3   31.1      0.94
5   triage           3   30.2      0.79
Solution 4.7 The utilisation rate U for the entire process is obtained by weighting the utilisation rates associated with each resource U_i by the respective number of resources n_i. The result is shown in Table 4.3 (we have repeated the data of Table 4.1 for the purpose of further explanation).
The cycle time W of the entire process is instead a little more difficult to obtain, because it depends on the paths and their frequencies. In the base case, flexibilisation, and composition there is only one path with frequency 1.0. In the model of the parallelisation strategy, the cycle time of the whole process will be max{16.3, 10.3} = 16.3 [min]. The triage strategy has two paths: the first path occurs with frequency 0.75 and has a cycle time of 14.1 + 11.1 = 25.2 [min], while the second path occurs with frequency 0.25 and has a cycle time of 34.2 + 11.1 = 45.3 [min]. This yields a weighted value of W = 0.75 · 25.2 + 0.25 · 45.3 = 30.2 [min].
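Both aggregations used in this solution can be reproduced in a few lines of Python (an illustrative sketch using the triage numbers of Tables 4.1 and 4.2; the variable names and the code are ours, not from the original text):

# Resource-weighted utilisation of the triage strategy (Table 4.1, row 5).
resources = [(1, 0.79), (1, 0.80), (2, 0.79)]        # (n_i, U_i) for roles AS, AC, B
U = sum(n * u for n, u in resources) / sum(n for n, _ in resources)
print(f"U = {U:.2f}")                                 # 0.79

# Frequency-weighted cycle time of the triage strategy (Table 4.2, row 5).
paths = [(0.75, 14.0 + 11.1), (0.25, 34.2 + 11.1)]    # (frequency, path cycle time [min])
W = sum(f * w for f, w in paths)
print(f"W = {W:.2f} [min]")                           # about 30.2, as in Table 4.3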

Problem 4.8. Table 4.3 summarises the results of n, W, and U for the models of Tables 4.1 and 4.2. The values Wq and Lq can be obtained in the same way, but they do not add further concepts to our objectives. Determine the Pareto optimal solutions.

Table 4.4: Pareto Optimal Solutions. They are obtained by a standard procedure. The alternatives 1 and 3 are dominated by the alternative 2, while the alternative 4 is dominated by the alternative 5.

nr  strategy         n   W [min]   U      dom. by
1   base case        4   20.9      0.79   2
2   flexibilisation  4   15.0      0.79
3   parallelisation  4   26.3      0.80   2
4   composition      3   31.1      0.94   5
5   triage           3   30.2      0.79
Solution 4.8 The problem consists in finding improvement alternatives (or solutions) that are the best in n, W, and U, i.e. that minimise n, W, and U simultaneously. It is easy to see from Table 4.3 that no alternative satisfies this condition, thus we will determine the Pareto optimal solutions. A Pareto optimum is a solution i such that there does not exist another solution j that is better than i, i.e. at least as good in every criterion and strictly better in at least one. The general procedure is as follows:
(1) First eliminate the worst alternatives, i.e. any alternative that is the worst in the three criteria (n, W, and U). (2) Compare the first remaining alternative pairwise with the rest. (3) Continue with step 1.
We will apply the procedure to Table 4.3 and show the result in Table 4.4. (1) First eliminate the worst alternatives; in this case, as we said, no alternative is the worst simultaneously in the three criteria. (2) Compare the first alternative pairwise with the rest, i.e. compare the alternative 1 pairwise with 2, 3, 4, and 5. Comparing 1 and 2, we can eliminate the alternative 1, because both have the same n and U, but the alternative 1 has a worse W. (3) Continue with step 1: after eliminating the alternative 1, we look for an alternative that is worst in all criteria (among the alternatives 2, 3, 4, and 5), but no alternative satisfies this condition. (4) We now compare the alternative 2 pairwise with 3, 4, and 5. Comparing 2 and 3, we can eliminate 3 (we say that 3 is dominated by 2). The alternatives 2 and 4 do not dominate each other, and neither do 2 and 5. We continue with the alternatives 4 and 5. (5) Continuing with step 1 of the general procedure, the alternative 4 is eliminated, since it is dominated by the alternative 5.
Therefore, the Pareto optimal solutions are the alternatives 2 and 5.
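The pairwise elimination described above amounts to a dominance filter, which can be automated. The sketch below (ours, using the values of Table 4.4 and assuming that all three criteria n, W, and U are to be minimised, as stated in the solution) recovers the same Pareto optimal set:

alternatives = {                    # nr: (n, W [min], U), as in Table 4.4
    1: (4, 20.9, 0.79),             # base case
    2: (4, 15.0, 0.79),             # flexibilisation
    3: (4, 26.3, 0.80),             # parallelisation
    4: (3, 31.1, 0.94),             # composition
    5: (3, 30.2, 0.79),             # triage
}

def dominates(a, b):
    """a dominates b if it is at least as good everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [i for i, a in alternatives.items()
          if not any(dominates(b, a) for j, b in alternatives.items() if j != i)]
print(pareto)                       # [2, 5]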

Problem 4.9. Table 4.5 summarises the results of n, W, and U for 6 alternative models. Determine the Pareto optimal solutions.

Table 4.5: Summary of Results. It summarises the results of n, W, and U for the alternative models.

nr  n    W [min]   U
1   8    10.5      0.82
2   10   10.5      0.75
3   7    15.5      0.60
4   4    12.3      0.80
5   8    8.3       0.67
6   10   14.2      0.90

Solution 4.9 We will follow the procedure for finding the Pareto optimal solutions of Table 4.5 and write the analysis in Table 4.6. (1) First, examine the entire list and eliminate the worst alternative; in this case the alternative 6 is eliminated. (2) We start the pairwise comparison from the alternative 1: 1 is dominated by 5, so 1 is eliminated. (3) There is no worst alternative among 2, 3, 4, and 5. (4) We continue the pairwise comparison from the alternative 2: 2 is dominated by 5, so 2 is eliminated. (5) There is no worst alternative among 3, 4, and 5. (6) The pairwise comparison for 3 (comparing 3 with 4 and 3 with 5) does not give a winner. (7) The pairwise comparison for 4 and 5 does not give a winner either.
Thus, the Pareto optima are the alternatives 3, 4, and 5.

Table 4.6: Computation of the Pareto Optima. It contains the eliminated alternatives and shows the Pareto optimal solutions: alternatives 3, 4, and 5.

nr  n    W [min]   U      dom. by
1   8    10.5      0.82   5
2   10   10.5      0.75   5
3   7    15.5      0.60
4   4    12.3      0.80
5   8    8.3       0.67
6   10   14.2      0.90   eliminated as worst

Figure 4.8: Process Model. This model contains two different paths. The first one has a frequency of 0.7 (in case <=5000 EUR), while the frequency of the second path is 0.3.

Problem 4.10. Answer the following questions related to the process model of Fig. 4.8: (a) Obtain the minimal number of resources n for the resources accounting, sales, and director. (b) By using the answer of (a), determine the average W and U for the entire process by calculating them from the W_i and U_j for all tasks i and resources j respectively.

Solution 4.10 (a) The minimal number of resources is determined by using the rule λ < nµ, i.e. the demand for processing must be less than the offer. This is an approximation obtained from the theory of simple queue systems. Let us suppose, as an approximation, that each task has its own non-flexible resource assigned; e.g. the task formal verification of invoice has n1 resources assigned, and it must satisfy λ = 1/2 < n1 µ = n1 · (1/1), hence n1 > 0.5, and we take n1 = 1. For the task trigger payment (called task 2), we obtain (1/2) · (0.7 + 0.3) < n2 · (1/2), thus n2 > 1.0, and we take n2 = 2. Thus, the number of resources for the resource accounting will be n_acc = n1 + n2 = 1 + 2 = 3. Now, we continue with the resource sales: for n3, assigned to invoice registration, 1/2 < n3 · (1/2), and we take n3 = 2. Finally, for the director, 0.7 · (1/2) < n4 · (1/2), and we take n4 = 1.
In summary, the minimal numbers of resources will be n_acc = 3, n_sal = 2, and n_dir = 1.

Table 4.7: Task Table obtained by simulating the model of Fig. 4.8.

task                             W_i [hr]
formal verification of invoice   1.2
trigger payment                  2.2
invoice registration             2.7
release invoice                  2.9
(b) The simulation yields the following utilisation rates: U_acc = 0.50, U_sal = 0.50, and U_dir = 0.30. Thus, the average utilisation rate for the entire process will be U = (3 · 0.50 + 2 · 0.50 + 1 · 0.30)/(3 + 2 + 1) = 0.47.
For calculating W for the entire process, we need to summarise the results in the task table shown in Table 4.7. We will calculate the time of each path of the process. We have two possible paths. The first one is: formal verification of invoice - invoice registration - trigger payment, and the second one is: formal verification of invoice - invoice registration - release invoice - authorisation received - trigger payment. The frequencies of the paths are 0.7 and 0.3 respectively. Each path has the following cycle time. The first path: 1.2 + 2.7 + 2.2 = 6.1 [hr], and the second: 1.2 + 2.7 + 2.9 + 6.0 + 2.2 = 15.0 [hr], where we added the time of the event authorisation received with a fixed duration of 6 [hr]. Therefore, the cycle time for the entire process will be W = 0.7 · 6.1 + 0.3 · 15.0 = 8.8 [hr].

Problem 4.11. Answer the following questions related to the process model of Fig. 4.9: (a) Obtain the minimal number of resources n for the resources A, B, and C. (b) Determine the total processing time, excluding the queue times.

Solution 4.11 (a) The minimal number of resources is determined by using the rule λ < nµ, i.e. the demand for processing must be less than the offer. This is an approximation obtained from the theory of simple queue systems. Let us suppose, as an approximation, that each task has its own non-flexible resource assigned. Thus, for the resource A, we have two tasks: 1 and 5. For the task 1 with n1 resources, the condition 0.3λ = 0.3 · (1/2) < n1 µ1 = n1 · (1/10) must be satisfied, which gives n1 > 1.5, and we take n1 = 2. For the task 5, 1/2 < n5 · (1/8) gives n5 > 4, and we take n5 = 5. Thus n_A = n1 + n5 = 2 + 5 = 7. Now, for the resource B, the task 2 must satisfy 0.3 · (1/2) + 0.7 · (1/2) + 0.2 · (1/2) < n2 · (1/8), which gives n2 = 5. For the task 6, we have 1/2 < n6 · (1/6), hence n6 = 4. Thus, n_B = n2 + n6 = 5 + 4 = 9. Finally, for the resource C, we have 1/2 < n3 · (1/6), whence n3 = 4, and 1/2 < n4 · (1/20) gives n4 = 11; therefore n_C = n3 + n4 = 4 + 11 = 15.
In summary, the minimal numbers of resources will be n_A = 7, n_B = 9, and n_C = 15.

Figure 4.9: Process Model. This model contains several different paths. The problem consists in determining them and calculating the total processing time excluding the queue times.

(b) The process model has several paths. The first path is 1-2-3-2-3-4-(5,6), where we assume that the task 2 is repeated only once. The second path is 1-2-3-4-(5,6). The third and fourth paths are the same, but without the task 1, i.e. 2-3-2-3-4-(5,6) and 2-3-4-(5,6) respectively. The frequency of the first path is 0.30 × 0.20 = 0.06, the frequency of the second path is 0.30 × 0.80 = 0.24, the frequency of the third path is 0.7 × 0.2 = 0.14, and the frequency of the fourth path is 0.7 × 0.8 = 0.56. Let us note that the sum of all frequencies is equal to 1.0. In addition, the processing time of each path is given as follows. For the path 1: 10 + 10 + 8 + 6 + 8 + 6 + 2 + 20 + max{8, 6} = 78 [min]. For the path 2: 10 + 10 + 8 + 6 + 2 + 20 + max{8, 6} = 64 [min]. For the path 3: 8 + 6 + 8 + 6 + 2 + 20 + max{8, 6} = 58 [min]. For the path 4: 8 + 6 + 2 + 20 + max{8, 6} = 44 [min].
With the above frequencies and the processing times of each path, we can compute the average processing time as follows: 0.06 · 78 + 0.24 · 64 + 0.14 · 58 + 0.56 · 44 = 52.8 [min].
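The weighted average of part (b) can be checked directly (a small Python sketch, not part of the original text):

# (frequency, processing time in minutes) of the four paths of Fig. 4.9
paths = [(0.06, 78), (0.24, 64), (0.14, 58), (0.56, 44)]

assert abs(sum(f for f, _ in paths) - 1.0) < 1e-9     # the frequencies add up to one
average = sum(f * t for f, t in paths)
print(f"average processing time = {average:.1f} [min]")   # 52.8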

Problem 4.12. Answer the following questions related to the process model of Fig. 4.10: (a) Obtain the minimal number of resources n for the resources A, B, and C. (b) By using the answer of (a), determine the average W and U for the entire process by calculating them from the W_i and U_j for all tasks i and resources j respectively.

Figure 4.10: Process Model. This model contains three different paths. The first one is 1-3-4-2, the second 1-3-4-2-5-(6,7), and the third 1-3-4-5-(6,7). The frequencies of the paths are 0.3 · 0.5, 0.3 · 0.5, and 0.7 respectively.

Solution 4.12 (a) The minimal number of resources is determined by using the rule λ < nµ, i.e. the demand for processing must be less than the offer. This is an approximation obtained from the theory of simple queue systems. Let us suppose, as an approximation, that each task has its own non-flexible resource assigned; e.g. the task 1 has n1 resources assigned, and it must satisfy λ1 = 1/1 < n1 µ1 = n1 · (1/8), hence n1 > 8.0, and we take n1 = 9. For the task 2, we obtain 0.3 · (1/1) < n2 · (1/10), thus n2 > 3.0, and we take n2 = 4. Thus, the number of resources for the resource A will be n_A = n1 + n2 = 9 + 4 = 13. Now, we continue with the resource B. For n3, 1/1 < n3 · (1/8), and we take n3 = 9. For n4, 1/1 < n4 · (1/6), and we take n4 = 7. For n5, we must be careful: 0.7 · (1/1) + 0.3 · 0.5 · (1/1) < n5 · (1/6), thus n5 > 5.1, and we take n5 = 6. For n6, 0.85 · (1/1) < n6 · (1/2), and we take n6 = 2. Finally, n_B = 9 + 7 + 6 + 2 = 24. For the resource C, 0.85 · (1/1) < n7 · (1/20), and we take n7 = 18.
In summary, the minimal numbers of resources will be n_A = 13, n_B = 24, and n_C = 18.
(b) The simulation yields the following utilisation rates: U_A = 0.83, U_B = 0.85, and U_C = 0.91. Thus, the average utilisation rate for the entire process will be U = (13 · 0.83 + 24 · 0.85 + 18 · 0.91)/(13 + 24 + 18) = 0.86.
For calculating W for the entire process, we need to summarise the results in the task table shown in Table 4.8. We will calculate the time of each path of the process. We have three possible paths: the first one is 1-3-4-2, the second 1-3-4-2-5-(6,7), and the third 1-3-4-5-(6,7). The frequencies of the paths are 0.3 · 0.5, 0.3 · 0.5, and 0.7 respectively. Each path has the following cycle time. The first path: W1 + W3 + W4 + W2 = 9.6 + 8.7 + 6.6 + 11.7 = 36.6 [min], the second: W1 + W3 + W4 + W2 + W5 + max{W6, W7} = 9.6 + 8.7 + 6.6 + 11.7 + 6.6 + max{2.6, 29.9} = 73.1 [min], and the third: W1 + W3 + W4 + W5 + max{W6, W7} = 9.6 + 8.7 + 6.6 + 6.6 + max{2.6, 29.9} = 61.4 [min]. Therefore, the cycle time for the entire process will be W = 0.3 · 0.5 · 36.6 + 0.3 · 0.5 · 73.1 + 0.7 · 61.4 = 59.4 [min].

Table 4.8: Task Table obtained by simulating the model of Fig. 4.10.

task        1     2      3     4     5     6     7
W_i [min]   9.6   11.7   8.7   6.6   6.6   2.6   29.9

4.2 Standardise Information, Automate, and Plan Capacity

Process improvement and optimisation are concepts that require precision. In fact, optimisation has a fairly precise connotation: it refers to maximising a profit function or minimising a cost function subject to feasibility constraints. However, this concept is somewhat narrow for our purposes because, on the one hand, in the practice of process optimisation we do not have a sufficiently representative mathematical model and, on the other hand, optimisation only refers to a theoretical phase (i.e., the planning phase), prior to the execution of the process, whereas we are interested in a more holistic improvement concept. In fact, it is possible to theoretically optimise a model that is then very difficult to implement or that, once implemented, may not run according to the original plans.
Therefore, we prefer to focus on process improvements rather
than optimisation. By improvement we mean the reduction of cycle
times, processing costs and the variability of the specification of
a process output. Improving a process requires thinking about or
formulating a multi-objective decision problem. In addition to the
multi-objective feature, improving a process requires consideration
of the randomness of its behaviour. It is clear that the behaviour of
the business process is random. Therefore, execution times, resource
use, and resource availability are random variables that should be
considered.
Another distinction that we must make separates structural improvements (at the level of the organisational structure) from those of a lower level. The structure of the organisation is a concept widely studied in Organisation Theory and depends on the approach or theory that supports it. We opted for the so-called Contingency Theory approach as proposed by H. Mintzberg (Henry Mintzberg, Structure in Fives: Designing Effective Organizations, Prentice-Hall, 1993). According to this theory, the organisation adapts its organisational parameters to certain environments in the search to minimise coordination costs. The theory identifies five coordination mechanisms (or archetypes) that classify each organisation into one of them. Each archetype is a combination of design parameters and a defined set of environment characteristics. In particular, we are interested in organisations whose coordination mechanism is based on the standardisation of activities. Other coordination mechanisms are direct supervision, standardisation of skills, standardisation of results, and mutual adjustment. In summary, according to this theory, structural improvements should correspond to one of the coordination mechanisms, which assures a consistent set of structural design parameters and environment characteristics.
Low-level improvements, on the other hand, refer to changes in information parameters or specifications, decision and processing rules, automation, resource processing capacity, and the structure of the information or material flow. To make low-level improvements, we assume a defined process and take the structural improvement or change as the starting point. From the point of view of Contingency Theory, low-level improvement consists of fine-tuning the coordination mechanism of activities standardisation. In fact, the other coordination mechanisms put the emphasis on other parameters that do not refer to the optimisation of information, flows, or decisions.
Another more or less common distinction is between product and process improvement. The differences between the two are clear in the case of manufactured goods, but less so in the case of services. Consider for example the invoice and payment process. The billing process has as an obvious output an invoice with certain essential characteristics such as service item and amount. But it has other added features such as delivery time or transparency. An improvement of the invoice can be, for example, a faster invoice produced by a more transparent process. We need to improve the process in order to deliver a new invoice, which is faster and more transparent than the old one. However, in order to carry out this improvement, it is necessary to accelerate the processes and make them transparent through various measures such as increasing resources or automation. Thus, the change in the processes produces a change in the product. That is why we say that it is often difficult to distinguish between product improvement and process improvement, especially in the case of services.
Process improvement needs to be evaluated according to certain criteria. We have already mentioned some of them: cycle time, processing costs, and quality. Quality is understood as the variability of a product (or service) specification. We assume that these criteria of time, costs, and quality are associated with, or cause, the improvement of the organisation's objectives through the satisfaction of the client who requires these outputs. We call all these objectives economic criteria, although only the second one refers to costs. Thus, we say that the process must satisfy an economic feasibility. Another group of criteria is that of technical feasibility, and it must answer questions such as: Does the technology that supports the redesign exist? Is there enough experience with the technology behind the process? Are the necessary experts available for the technical maintenance of the supporting technology? Are there reliable and suitable suppliers? Finally, we will consider a third criterion called empirical feasibility; it refers to the organisational and administrative capacities to sustain the new process. Empirical feasibility is related to the breadth and depth of the organisational change and how employees and managers cope with it. It should answer questions such as: How will employees cope with the new job? Are employees and managers motivated to change their procedures and work? Are there objective or subjective threats to them? What are the benefits and costs to people of changing procedures and ways of working?
In addition to the improvement criteria, we need to clarify the basic question about what, how, and when to change in the processes. We propose changes in three areas or parameters: information and decision, automation, and distribution of resources. Additionally, we propose a sequence of marginal change steps that ensures effective and controlled changes.
Information and decision form a pair we try not to separate. Improving the quality of information without improving the quality of the associated decision often does not bring any benefit. Moreover, most of the time, standardising the information is a preliminary step to improving the decision. Standardisation of information can also be useful if the information is then managed in a computer system such as a Database Management System. Consequently, this can accelerate information processing and increase transparency. Some typical improvements in information and decision standardisation are as follows:

• Creation of a form in a manual task.
• Implementation of a digital form.
• Optimising model of inventory rule.
• Use or redesign of a database.
• Design of a system of automatic decision rules.
• Creation of an optimised inventory model.
• Implementation of vehicle assignment task in a transportation system.
• Demand forecasting of an item.
• Predicting sales behaviour.
• Rules for symptomatic maintenance of mechanical equipment.
A second process design parameter relates to automation. In this case we try to transform manual tasks into semi-manual or user tasks, and manual or user tasks into fully automatic tasks. The benefit in this case is obvious: a radical reduction in processing times and costs. Some typical examples of automation are as follows:

• From manual verification of an invoice to automatic verification.
• Automate the payment order task.
• Automatically send a reply or acceptance message.
• Decide automatically about a credit using an automatic decision rule.
• Send automatically a signal about changes in inventory stocks.
• Automatically prepare and communicate an offer.
• Assign resources automatically.
• Release a production order automatically on receipt of approval.
• Assign a resource to a task.
• Trigger the payment order automatically.

The third process design parameter is the amount of resources required by a process. Note that we are addressing the problem of the quantity and allocation of resources and not the quality of resources. In other words, in the case of human resources, we do not try in principle to solve the problem of which skills are required from employees or professionals, because we assume that our design is oriented to the standardisation of activities and not of skills. Determining the amount of resources needed and their allocation to tasks is a problem typically called capacity planning. Stochastic simulation, a technique we have described in Section 4.1, is generally used to solve the capacity planning problem.
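As a rough sketch of how this capacity-planning step can be mechanised with the λ < nµ rule of Section 4.1 (the function name, the dictionary layout, and the numbers, which are read off Solution 4.10, are ours; treat this as an illustration rather than a general tool):

import math

def min_servers(lam, mu):
    """Smallest n with lam < n * mu, the sizing rule of Section 4.1."""
    n = math.ceil(lam / mu)
    return n + 1 if n * mu <= lam else n

lam = 1 / 2                       # one case every 2 hours
# resource -> list of (task frequency, service time in hours), as in Solution 4.10
tasks = {
    "accounting": [(1.0, 1.0), (1.0, 2.0)],   # formal verification, trigger payment
    "sales":      [(1.0, 2.0)],               # invoice registration
    "director":   [(0.7, 2.0)],               # release invoice
}
for resource, items in tasks.items():
    n = sum(min_servers(freq * lam, 1 / service) for freq, service in items)
    print(resource, n)            # accounting 3, sales 2, director 1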
Below we propose a systematic improvement of the business process design composed of three ordered steps: Information and Decision Standardisation (S), Automation (A), and Capacity Planning (C). The order of the improvements has practical sense. In fact, Information and Decision Standardisation (we will call it phase (S)) is an improvement that does not necessarily involve the introduction of technology, but can be understood as a housekeeping procedure. It is a preparation phase for the following improvements. Phase (S) tries to ensure that the fundamental objective of the process is effectively fulfilled: for example, that a customer's order is paid for correctly, that credit is given to the right person, or that a customer's order is fulfilled as promised. The second phase, or step (A), optimises time and costs by automating the information processing and the decision rules
designed in phase (S). Phase (S) is oriented to ensuring the business objectives, whereas phase (A) brings velocity and low costs. Finally, the last phase, phase (C), adjusts the resources released in the previous phase, since automation normally releases resources. Thus, we call this set of ordered steps the SAC method.
In summary, we say that the SAC method has a practical order that, on the one hand, allows for the introduction of marginal changes in a systematic way and, on the other hand, reduces the risks that possible radical changes may impose. Table 4.9 shows the balance between the achievement of the process objectives and the risk involved in the improvement steps of the SAC method.

Table 4.9: The SAC method allows for phase-controlled changes. Phase (S) introduces improvements in information and decisions at low cost and low risk. Phase (A), on the other hand, involves radical improvements with greater risk, less technical feasibility, and a higher investment cost. Finally, phase (C) adjusts the resources normally released in the previous step.

phase                                          what does it improve?                 technical feasibility   economic feasibility   empirical feasibility
(S) Information and Decision Standardisation   Typically quality and decisions       High                    High                   Low risk
(A) Automation                                 Radical reduction of time and costs   Low to medium           Low                    High risk
(C) Capacity Planning                          Resource optimisation                 Variable                Medium                 Medium risk

We will review some examples below.

Figure 4.11: Initial review issues list model. After an interview with the issue list manager, the initial model of the process is obtained.

Problem 4.13. After an interview with the issue list manager, the initial model of the process is obtained, as shown in Fig. 4.11. It consists of the issue list manager reviewing the issue list. Following this task, the same issue list manager identifies which issues are ready to start a discussion cycle and authorises those that are ready. This initial model is the starting point of the improvement process. Once the initial model is obtained, a more precise investigation determines how the process is executed, how the tasks are triggered, the processing times and eventually the costs. We call this new model the as-is model; it is shown in Fig. 4.12. Note that this model is triggered every Friday and is executed by a user who takes an intermediate time of 6 hours between the two tasks. Answer the following questions: (a) Determine an improved to-be model and (b) calculate the differences in cycle time from the as-is model.

Figure 4.12: As-is issue list management model. A more precise investigation determines how the process is executed, how tasks are triggered, processing times and eventually costs.

Solution 4.13 This example is very simple, but it shows how to use the SAC improvement method. The example shows how to design processes through marginal improvements, ensuring risk control of the implementation of changes by using the SAC method.
(a) The first phase of the method consists of Information and Decision Standardisation, phase (S). In practice, this is done by collecting the manual or system documentation that supports the process. The collected data are specified in a model. The most suitable technical model for this is the Entity Relationship Model (ERM); however, the ERM standard has modelling requirements that we will avoid this time. Therefore, we will simply group the data and information into a tree model that will serve as the basis for the ERM model. This model is shown in Fig. 4.13. A second sub-step of phase (S) is related to the improvement of the decisions involved. In this case, the decision is to determine which issues are ready for the discussion cycle. This can be a complex task and the decision rule can involve professional work. We will assume that, to start the discussion cycle of an issue, two conditions are required: that it was created more than 15 days ago and that it has more than 5 participants. Since it is a fairly simple decision rule, we will assume that such a rule will be embedded in the to-be model's issue list review task. We propose that the determination of those issues ready for discussion be supported by the issue management system and be done in a composed task, as shown in Fig. 4.14. Note that the fraction of issues that initiate a discussion (80%) remains the same as in the as-is model, since the decision rule remains unchanged from the as-is model. Additionally, in this case we propose an alternative improvement in phase (A): to fully automate the task that reviews the issue list and determines whether issues are ready, since it has already been sufficiently standardised in the previous phase. This solution is shown in the to-be model in Fig. 4.15, which shows how we replace the task with a fully automatic service that also executes the rule that selects the issues ready for the discussion cycle.
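The simple rule assumed above (an issue is ready when it was created more than 15 days ago and has more than 5 participants) could be embedded in the automated task roughly as follows (an illustrative Python sketch; the field names and the function are hypothetical):

from datetime import date, timedelta

def ready_for_discussion(issue, today):
    """Decision rule assumed in this solution: older than 15 days and more than 5 participants."""
    return (today - issue["creation_date"] > timedelta(days=15)
            and issue["participants"] > 5)

issue = {"creation_date": date(2023, 9, 1), "participants": 8}
print(ready_for_discussion(issue, today=date(2023, 10, 1)))   # True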
Figure 4.13: Issue list management data model. We simply group the data and information into a tree model that will serve as the basis for the ERM model.

(b) The as-is model of Fig. 4.12 has a cycle time of 7 hours. This occurs due to the interruption of the semi-manual task. The to-be model of Fig. 4.14 has a cycle time of 30 min, which is achieved by standardising the information and composing both tasks into a single task. Finally, the alternative to-be model of Fig. 4.15 is reduced to 0 min by fully automating the process and embedding the simple decision rule in the automatic task. In short, there is a reduction from 7 hours to 0.

Figure 4.14: To-be issue list management model. It is assumed that the review of the issue list and the determination of those issues ready for discussion are supported by the issue management system and will be done in a composed task.
Problem 4.14. The initial model has been obtained using Process Mining techniques and is shown in Fig. 4.16. This model is triggered approximately every two hours during office hours. Then the formal verification of the invoice and the invoice registration are done. 70% of the invoices are destined to be paid directly and the rest must be released. In the latter case, the payment is triggered once the respective authorisation is received. Note that the authorisation is received on average 6 hours after the release. As in the previous example, once the initial model is obtained, a more precise investigation determines how the process is executed, how the tasks are triggered, the processing times and eventually the costs. This results in the as-is model shown in Fig. 4.17. Answer the following questions: (a) Determine an improved to-be model and (b) calculate the differences in cycle time from the as-is model.

Figure 4.15: To-be issue list management alternative model. We propose to fully automate the task that reviews the issue list and determines whether issues are ready, since it has already been sufficiently standardised in the previous phase.

Figure 4.16: Initial invoice and payment model. The initial model of the process is obtained by using Process Mining techniques.

Solution 4.14 (a) As before, we will follow the SAC method. The data is specified in a technical data model that serves for the further construction of the ERM model. This data model is shown in Fig. 4.18. The second part of phase (S) tries to improve the decisions involved. In this example, the decision is to classify the invoice in order to decide whether the payment requires special approval or not. The decision involved in this case is more complex than in the previous example, although not enough to require the work of a professional employee. The decision model is shown in Fig. 4.19. As you can see in the figure, this rule considers the product type, customer type, and amount to classify the invoice. Thus we end phase (S) of the method and continue with phase (A). In phase (A), we automate the decision rule and the release of the invoice. We additionally model an intermediate message for sending the request for authorisation. The next task (trigger payment) remains unchanged as a user task. The resulting to-be model is shown in Fig. 4.20.

Figure 4.17: As-is invoice and payment model. After an investigation or interview, a more advanced model is obtained in which all the tasks are user tasks. The type of each event is also confirmed and obtained.

(b) We then calculate the performance in terms of cycle times of the as-is model and the respective to-be model. Table 4.10 shows the average performance of the as-is model. The average lengths of the two paths are determined and weighted accordingly. In the case of the as-is model, the first path has a cycle time of 5 hours and occurs about 70% of the time, while the second path takes 13 hours and occurs about 30% of the time. This gives a weighted average of 7.4 hours, or 7h 24' (we will use this notation for hours and minutes). On the other hand, the standardisation and automation of the process radically reduce the average cycle time to 4h 48', as shown in Table 4.11.
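The weighted averages of Tables 4.10 and 4.11 can be reproduced with a couple of lines (an illustrative Python sketch; times are in hours and the helper name is ours):

def weighted_cycle_time(paths):
    """paths: list of (frequency, cycle time in hours); returns the average as Xh YY'."""
    hours = sum(f * t for f, t in paths)
    h, m = int(hours), round((hours - int(hours)) * 60)
    return f"{h}h {m:02d}'"

print(weighted_cycle_time([(0.70, 1 + 2 + 2), (0.30, 1 + 2 + 2 + 6 + 2)]))   # as-is: 7h 24'
print(weighted_cycle_time([(0.70, 1 + 2), (0.30, 1 + 6 + 2)]))               # to-be: 4h 48'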

Figure 4.18: Simplified data model of invoice and payment. The data and information are grouped in a tree model that will serve as the basis for the ERM model.

Figure 4.19: Decision rule for invoice classification. This rule considers the product type, customer type, and amount for the invoice classification.

Figure 4.20: To-be invoice and payment model. We automate the decision rule and the release of the invoice. In addition, we create an intermediate message for sending the request for authorisation. The next task (trigger payment) remains unchanged.

Problem 4.15. The initial model has been obtained through personal interviews and is shown in Fig. 4.21. In this model, the client places an order, then a clerk enters the client's requirement and analyses the feasibility of the request. The analysis can have three alternative results; one of them requires an additional task of trying to clarify the request. Finally, the process has two possible outcomes: decline the request or make a tentative offer. As in the previous examples, once the initial model is obtained, a more precise investigation determines how the process is executed, how the tasks are triggered, the processing times and eventually the costs. This results in the as-is model shown in Fig. 4.22. In this as-is model we observe that all tasks are manual (or essentially manual if tools such as Excel or a local database are used). As each task is performed manually, additional times are added, represented by the intermediate events that occur. Such events represent the fact that the staff member does not work on each task continuously and lets a day go by before the next activity. That is why all the intermediate events last 24 hours. Additionally, the actions of the customer have not been modelled, although it has been specified that there is an interaction with the client.

Table 4.10: Average cycle time of the as-is model. The first path has a cycle time of 5 hours and occurs 70% of the time, while the second path takes 13 hours and occurs 30% of the time. This gives a weighted average of 7h 24'.

path nr   freq   time        total
1         0.70   1+2+2       5.0h
2         0.30   1+2+2+6+2   13.0h
total     1.00               7h 24'

Table 4.11: Average cycle time of the to-be model. The standardisation and automation of the process radically reduce the average cycle time to 4h 48'.

path nr   freq   time     total
1         0.70   1+2      3.0h
2         0.30   1+6+2    9.0h
total     1.00            4h 48'

Figure 4.21: Initial model of order processing. The initial model of the process is obtained through personal interviews.

On the other hand, it was also determined that the frequency of cases is every 2 hours on average during office hours, as well as the times of each task and the frequencies of each path. Answer the following questions: (a) Determine an improved to-be model and (b) calculate the differences in cycle time from the as-is model.

Solution 4.15 (a) We will follow the SAC method. As before, the data is specified in a technical data model that serves for the further construction of the ERM model. This model is shown in Fig. 4.23. In phase (S) we propose an improvement of the decision rules involved. The decision is to analyse the feasibility of the request. As before, the decision is relatively complex, although not complex enough to require professional work. The decision model is shown in Fig. 4.24. As you can see from the figure, this rule considers the total amount, product availability, and quantity to determine the feasibility of the request. Thus we end phase (S) of the method and continue with phase (A). In phase (A), we automate the decision rule and set up a user task that will clarify the order in the unclear case; the clarified order is then judged by a clerk, who in turn decides whether the request is possible or impossible. We additionally model a 24-hour intermediate timer that represents the time it takes the user to start the next task. The resulting to-be model is shown in Fig. 4.25.
(b) We then calculate the performance in terms of cycle times of the as-is model and the respective to-be model. Table 4.12 shows the average performance of the as-is process. The average lengths of all paths are determined and weighted accordingly. The model has four alternative paths with a weighted average of 3d 3h 10'.

Figure 4.22: As-is order processing model. All tasks are manual and have additional time because the staff member does not work continuously. The client's actions have not been modelled. It was determined that the frequency of arrival of cases is every 2 hours, as well as the times of each task and the frequencies of each path.

Table 4.12: Average cycle time of the as-is model. The model has four alternative paths with a weighted average of 3d 3h 10'.

path nr   freq          time                              total
1.        0.10          10'+24h+25'+24h+15'+24h           3d 50'
2.        0.10 × 0.60   10'+24h+25'+24h+15'+24h+15'+24h   4d 1h 5'
3.        0.10 × 0.40   10'+24h+25'+24h+15'+24h+15'+24h   4d 1h 5'
4.        0.80          10'+24h+25'+24h+15'+24h           3d 50'
total     1.00                                            3d 3h 10'

On the other hand, the standardisation and automation of the process
radically reduces the average cycle time to 1h 13', as shown in
Table 4.13. Note, further, that since the decision is improved,
the frequencies of the subsequent paths change, with a
consequent additional improvement in cycle time.
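The weighted averages reported in Tables 4.12 and 4.13 can be reproduced with a few lines of code. The sketch below is only a check of the arithmetic: path probabilities and durations are copied from the tables, with durations expressed in minutes.

```python
# Minimal sketch: weighted-average cycle time as in Tables 4.12 and 4.13.
def weighted_cycle_time(paths):
    """paths: list of (probability, duration in minutes); returns minutes."""
    return sum(p * t for p, t in paths)

DAY = 24 * 60  # each 24-hour intermediate timer event, in minutes

as_is = [
    (0.10,        3 * DAY + 10 + 25 + 15),       # path 1: 3d 50'
    (0.10 * 0.60, 4 * DAY + 10 + 25 + 15 + 15),  # path 2: 4d 1h 5'
    (0.10 * 0.40, 4 * DAY + 10 + 25 + 15 + 15),  # path 3: 4d 1h 5'
    (0.80,        3 * DAY + 10 + 25 + 15),       # path 4: 3d 50'
]
to_be = [(0.05, 0), (0.05 * 0.10, DAY + 15), (0.05 * 0.90, DAY + 15), (0.90, 0)]

print(weighted_cycle_time(as_is) / 60)  # about 75 hours, i.e. roughly 3d 3h
print(weighted_cycle_time(to_be) / 60)  # about 1.2 hours, i.e. about 1h 13'
```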

Figure 4.23: Simplified data model of invoice and payment. The data and information are grouped in a tree model that will serve as the basis for the ERM model.

Figure 4.24: Decision rule for the feasibility decision. This rule considers total amount, product availability, and quantity.

Table 4.13: Average cycle time of the to-be model. The model has four alternative paths with a weighted average of 1h 13'. Note that the frequencies of the paths have changed due to the improvements in the decision rule.

path nr   freq          total
1.        0.05          0'
2.        0.05 × 0.10   1d 15'
3.        0.05 × 0.90   1d 15'
4.        0.90          0'
total     1.00          1h 13'

Figure 4.25: To-be order processing model. We automate the decision rule, and a user task clarifies the order; a clerk then decides whether the request is possible or impossible. A 24-hour intermediate timer represents the time it takes the user to start the next task.
5
Process to Applications

5.1 Introduction

5.2 Workflow Implementation in Workflow Signavio

1. Consider the process of Fig. 5.1. The process is triggered (starts) with the reception of an e-mail.

Figure 5.1: Workflow respond to an e-mail

The task Write Response generates the following fields:

Table 5.1: Fields generated by the task Write Response for the workflow of Fig. 5.1.

nr.   field      type   Required (R)/Optional (O)
1.    issue      Text   R
2.    response   Text   R

2. Consider the process of Fig. 5.2.

Figure 5.2: Workflow approve a document

Table 5.2: Trigger with a form for the workflow of Fig. 5.2.

nr.   field      type   Required (R)/Optional (O)
1.    document   File   R
2.    author     User   R

Table 5.3: Roles and tasks for the workflow of Fig. 5.2.

nr.   role       tasks
1.    author     trigger, Update document
2.    reviewer   Review document

Use any Google Drive account for the task Archive approved document.
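Before entering a configuration such as the one in Tables 5.2 and 5.3 into the tool, it can help to write it down as plain data structures. The sketch below is a hypothetical, tool-independent representation of exercise 2; it is not Signavio's API, and the class and attribute names are assumptions made only for illustration.

```python
# Hypothetical, tool-independent representation of the configuration of exercise 2
# (Tables 5.2 and 5.3); field and role names are copied from the tables.
from dataclasses import dataclass, field

@dataclass
class FormField:
    name: str
    type: str        # e.g. "File", "User", "Text"
    required: bool

@dataclass
class Workflow:
    name: str
    trigger_fields: list
    roles: dict = field(default_factory=dict)  # role -> list of tasks

approve_document = Workflow(
    name="Approve a document",
    trigger_fields=[FormField("document", "File", True),
                    FormField("author", "User", True)],
    roles={"author": ["trigger", "Update document"],
           "reviewer": ["Review document"]},
)
print(approve_document)
```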

3. Consider the process of Fig. 5.3.

Figure 5.3: Workflow fill job vacancy

Table 5.4: Trigger with a form for the workflow of Fig. 5.3.

nr.   field                   type               Required (R)/Optional (O)
1.    job title               Text               R
2.    job description         Text               O
3.    maximum salary          Money              O
4.    replaces existing job   Yes/No Check Box   O
5.    requesting manager      User               R

Table 5.5: Roles and tasks for the workflow of Fig. 5.3.

nr.   role                 tasks
1.    manager              review job requisition
2.    hr employee          publish vacancy on web site, remove published vacancy
3.    requesting manager   interview next applicant

4. Consider the process of Fig. 5.4.

Figure 5.4: Workflow sign contract

Table 5.6: Manual trigger for the workflow of Fig. 5.4.

nr.   field      type   Required (R)/Optional (O)
1.    document   File   R
2.    signer     User   R

Table 5.7: Roles and tasks for the workflow of Fig. 5.4.

nr.   role        tasks
1.    requester   upload document to be signed, update document
2.    signer      review document, change signer, request changes, sign document

Table 5.8: Fields processed by tasks for the workflow of Fig. 5.4.

nr.   task              fields
1.    review document   document
2.    change signer     signer
3.    request change    change request
4.    update document   change request, document
5.    sign document     document, signed document

5. Consider the process of Fig. 5.5. Interpret, from the workflow logic, how to use the fields of Table 5.11 correctly.

Figure 5.5: Workflow process claim

Table 5.9: Form trigger for the workflow of Fig. 5.5.

nr.   field          type   Required (R)/Optional (O)
1.    claim          File   R
2.    junior claim   User   R

Table 5.10: Roles and tasks for the workflow of Fig. 5.5.

nr.   role           tasks
1.    junior claim   calculate claim value, classify claim complexity, process simple claim, calculate payment amount
2.    investigator   investigate high-value claim
3.    senior claim   process complex claim

Table 5.11: Fields processed by tasks for the workflow of Fig. 5.5.

nr.   task                           fields
1.    calculate claim value          claim, total value
2.    classify claim complexity      claim
3.    process simple claim           claim, payout
4.    calculate payment amount       claim
5.    investigate high-value claim   claim
6.    process complex claim          claim, payout

6. Consider the process of Fig. 5.6. Interpret, from the workflow logic, how to use the fields of Table 5.14 correctly.

Figure 5.6: Workflow approve and close invoice

Table 5.12: Form trigger for the workflow of Fig. 5.6.

nr.   field                     type    Required (R)/Optional (O)
1.    invoice                   File    R
2.    finance administrator 1   User    R
3.    customer email            Email   R

Table 5.13: Roles and tasks for the workflow of Fig. 5.6.

nr.   role                      tasks
1.    finance administrator 1   trigger, update invoice, chase late payment
2.    finance administrator 2   check invoice, check payment

Table 5.14: Fields processed by tasks for the workflow of Fig. 5.6.

nr.   task                 fields
1.    check invoice        invoice
2.    update invoice       invoice
3.    check payment        invoice
4.    chase late payment   invoice
6
Process Mining

6.1 Introduction

In this chapter we present the main process mining techniques. It contains simple
discovery problems for the process and organisational perspectives. Although it is
possible to use the ProM software (www.processmining.org) to solve the exercises,
we suggest doing them manually or with the spreadsheet that can be accessed at
the link link a planilla. With this guide we want to help the reader understand
better how the basic process and organisational mining algorithms work.

6.2 Process Mining Concepts

Process Perspective
The following exercises on the process perspective are intended to help understand
how the α-algorithm works. This algorithm has been extended to handle several
real-world situations in which the basic version does not work properly, namely:
invisible activities, duplicate activities, non-free-choice constructs, short loops
and, finally, noise, exceptions and incompleteness. Here we show the basic
α-algorithm and some examples of these problems.

Problem 6.1. (Process mining) Table 6.1 shows 5 cases or process instances. The order of presentation corresponds to the order of execution. This table represents an event log. Determine the process (task flow) that was executed, using the α-algorithm.

Table 6.1: Event log of Problem 6.1.

Case ID   1 2 3 3 1 1 2 4 2 2 5 4 1 3 3 4 5 5 4
Task      A A A B B C C A B D A C D C D B E D D

Solution 6.1 The goal of process mining from the process perspective is to discover the task-flow model from an event log such as the one in Table 6.1. To do so we use the α-algorithm, which determines the flow from the relations found between every pair of tasks that appear in the event log. In this problem the tasks are A, B, C, D and E. In general, for any pair of tasks x, y, the relations between them can be causality (x → y), parallelism (x ∥ y) or unrelatedness (x # y). Note that, for any pair of tasks, these relations are mutually exclusive. The information obtained from the event log is the set of direct successions between tasks (we write x > y to express that y directly follows x). Before determining the direct successions, note that the instances of the event log can be ordered as shown in Table 6.2. From that table the direct successions are easier to find.

Table 6.2: Instances obtained from the event log of Problem 6.1.

Case ID   Log events
1         (A, B, C, D)
2         (A, C, B, D)
3         (A, B, C, D)
4         (A, C, B, D)
5         (A, E, D)

Note that case 1 of Table 6.2 is the same as case 3, and case 2 is the same as case 4. We can say that the event log contains redundant cases, so we only analyse the information of cases 1, 2 and 5. From these cases the direct successions are A > B, A > C and A > E; also B > C and B > D; in addition C > D and C > B; and finally E > D. To derive the relations →, ∥ and # from the direct successions, consider that for any pair of tasks x, y:

• x → y if and only if x > y and the relation y > x does not hold;

• x ∥ y if and only if both relations x > y and y > x hold; and finally,

• x # y if and only if neither x > y nor y > x holds.

From this we obtain A → B, A → C, A # D, A → E, B ∥ C, B → D, B # E, C → D, C # E and E → D. With this information we can draw the Petri net shown in Figure 6.1.
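The relation step of this solution can be checked mechanically. The following sketch derives the direct-succession relation > from the traces of Table 6.2 and classifies each pair of tasks as →, ∥ or #; it covers only this footprint step, not the construction of the places of the Petri net.

```python
# Minimal sketch of the footprint step of the alpha-algorithm for Table 6.2.
from itertools import product

traces = [
    ["A", "B", "C", "D"],  # cases 1 and 3
    ["A", "C", "B", "D"],  # cases 2 and 4
    ["A", "E", "D"],       # case 5
]

tasks = sorted({t for trace in traces for t in trace})
direct = {(trace[i], trace[i + 1]) for trace in traces for i in range(len(trace) - 1)}

def relation(x, y):
    if (x, y) in direct and (y, x) not in direct:
        return "->"   # causality x -> y
    if (x, y) in direct and (y, x) in direct:
        return "||"   # parallelism
    if (x, y) not in direct and (y, x) not in direct:
        return "#"    # unrelated
    return "<-"       # causality in the other direction (y -> x)

for x, y in product(tasks, repeat=2):
    if x < y:
        print(x, relation(x, y), y)  # reproduces A -> B, B || C, E -> D (shown as D <- E), etc.
```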

Problem 6.2. (Process mining) Discover the process that generated the event log shown in Table 6.3.

Figure 6.1: Model discovered from the event log of Table 6.1.

Table 6.3: Event log of Problem 6.2.

Case ID   1 2 3 3 1 1 2 4 2 1 2 3 4 3 5 4 5 4
Task      A A A B B C C A B D D C C D E B F D
Solution 6.2 We follow the same procedure as in the previous exercise (α-algorithm), which yields the model of Figure 6.2.

Problem 6.3. (Process mining) Consider the event log of Table 6.4, which contains the following tasks. A: receive an item and register it, B: check item, C: check warranty, D: notify customer, E: repair item, F: receive payment, G: send cancellation letter, H: return item. Determine the process that was executed, using the α-algorithm. This problem has been adapted from Song and van der Aalst, "Towards comprehensive support for organizational mining", Decision Support Systems 46 (2008), pp. 300–317.

Solution 6.3 We follow the same procedure as in the previous exercise (α-algorithm), which yields the model of Figure 6.3.

Problem 6.4. (Invisible activities) Consider the cases (A, B, C, D), (A, C, B, D) and (A, D). Note that this is similar to Problem 6.1, except that the last instance is (A, D) rather than (A, E, D). That is, activity E is invisible. Can the α-algorithm solve this problem with invisible tasks?

Solution 6.4 One of the basic assumptions of process mining (not only of the α-algorithm) is that every executed activity is recorded in the log. For that reason the algorithm is not able to discover tasks that are not recorded.

Figure 6.2: Model discovered from the event log of Table 6.3.

Table 6.4: Event log of Problem 6.3.

Case ID   Log events
1         (A, B, C, D, E, F, H)
2         (A, B, C, D, E, F, H)
3         (A, C, B, D, E, F, H)
4         (A, C, B, D, G, H)
5         (A, C, B, D, E, F, H)
6         (A, B, C, D, G, H)

Figure 6.3: Model discovered from the event log of Table 6.4.

Problem 6.5. (Duplicate activities) Consider the cases (A, B, C, D), (A, C, B, D) and (A, B, D). Is it possible to discover the model? Now consider the model of Figure 6.4 and verify that it generates the specified cases.

Figure 6.4: Model with duplicate activities.

Solution 6.5 The problem of duplicate activities refers to the situation in which the same model has two nodes with the same activity. This is the case of activity B in Figure 6.4. Clearly, it is very difficult to automate the discovery of such a model, because it is not possible to distinguish the execution of one B from the execution of the other.

Problem 6.6. (Non-free-choice constructs) Consider the model of Figure 6.5. This is a case of a net with a non-free-choice construct that cannot be discovered. Generate a set of complete instances from the model and verify that it is not possible to discover the model.

Figure 6.5: Model with a non-free-choice construct that cannot be discovered with the α-algorithm.

Solution 6.6 The model of Figure 6.1 is of the free-choice type, because the choice between B ∥ C and E is separate from the synchronisation (AND node). In contrast, the model of Figure 6.5 is non-free-choice. After executing activity C there is a choice between activity D and activity E. However, the choice between D and E is controlled by the choice made earlier between A and B. Clearly, such constructs are difficult to discover, since the choice is non-local and the algorithm cannot remember earlier events.

Free-choice Petri nets are Petri nets in which any two transitions that share an input place share all of their input places. This excludes the possibility of mixing choice (OR) and synchronisation (AND, i.e. parallelism) within the same construct. Free-choice Petri nets are a well-known and well-studied class of Petri nets. Non-free-choice constructs arise in situations in which the choice between two activities is not determined within a single node of the process model but depends on choices made elsewhere in the model. Such models are therefore said to exhibit non-local behaviour. Unfortunately, most process mining algorithms (including the α-algorithm) assume that the models to be discovered are free-choice.

Problem 6.7. (Non-free-choice constructs) Consider the model of Figure 6.6. This is a case of a non-free-choice net that can indeed be discovered. Generate a set of complete instances from the model and verify that it is possible to discover the model.

Figure 6.6: Model with a non-free-choice construct that can be discovered with the α-algorithm.

Solution 6.7 In contrast with the previous case, there are non-free-choice constructs that can be discovered adequately with the α-algorithm, such as the one in Figure 6.6. Now the choice is detected thanks to the new activities X and Y. Note that D directly follows X, whereas E does not; E follows Y. In this way the place between X and D can be discovered.

Problem 6.8. (Short loops) In a process, it is possible for an activity to be executed many times. Figure 6.7 shows an example: after executing activity B, activity C can be executed an arbitrary number of times. Possible instances are (B, D), (B, C, D), (B, C, C, D), (B, C, C, C, D), etc. Can the basic α-algorithm handle this?

Figure 6.7: Model with a short loop. Task C can be executed an arbitrary number of times.

Solution 6.8 In a process it is possible to execute some activity many times. When this happens, it is typically modelled as a loop, as shown in Figure 6.7. Loops that involve only a single activity, as in this case, look simple, yet they are precisely one of the situations listed above in which the basic α-algorithm does not work properly; such short loops, and loops embedded in more complex constructs, are not trivial to discover and require extensions of the algorithm.

Organisational Perspective
The goal of process mining from the organisational perspective is to discover the relationships, hierarchies and centralities among originators, or groups of originators. These relationships, hierarchies and centralities can be measured according to several metrics. The problems posed below do not cover all of these metrics or relations, but they do cover the most relevant ones. A good understanding of these problems will make it possible to understand and tackle other, larger or more complex cases, which can be solved with the ProM software.
Problem 6.9. (Default mining) Consider the event log shown in Table 6.5. Determine the process that generated this event log using the α-algorithm. Then build an originator × task matrix in which each element ij is the number of times that originator i performs task j. Which groups of originators can you find?

Table 6.5: Simple case of an event log with originator data.

Case ID   Log events
1         (A, Camila), (B1, Ernesto), (C, Camila)
2         (A, Carlos), (B2, Emilia), (C, Carlos)
3         (A, Camila), (B1, Ernesto), (C, Camila)
4         (A, Camila), (B1, Ernesto), (C, Camila)
5         (A, Carlos), (B2, Emilia), (C, Carlos)
6         (A, Camila), (B1, Ernesto), (C, Camila)

Solution 6.9 Table 6.6 shows the originator × task matrix. From this matrix it is easy to see that the originators are grouped into four groups, one for each task. If we denote by t any task and by Ot the set of originators that perform that task, we obtain the following groups: OA = {Camila, Carlos}, which also coincides with OC. In addition, OB1 = {Ernesto} and OB2 = {Emilia}.

Table 6.6: Originator × task matrix of Table 6.5.

          A   B1   B2   C
Camila    4   0    0    4
Ernesto   0   4    0    0
Carlos    2   0    0    2
Emilia    0   0    2    0
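Building an originator × task matrix such as Table 6.6 is a simple counting exercise. The sketch below reproduces it from the event log of Table 6.5.

```python
# Minimal sketch: originator x task matrix (Table 6.6) from the log of Table 6.5.
from collections import Counter

log = {
    1: [("A", "Camila"), ("B1", "Ernesto"), ("C", "Camila")],
    2: [("A", "Carlos"), ("B2", "Emilia"), ("C", "Carlos")],
    3: [("A", "Camila"), ("B1", "Ernesto"), ("C", "Camila")],
    4: [("A", "Camila"), ("B1", "Ernesto"), ("C", "Camila")],
    5: [("A", "Carlos"), ("B2", "Emilia"), ("C", "Carlos")],
    6: [("A", "Camila"), ("B1", "Ernesto"), ("C", "Camila")],
}

counts = Counter((orig, task) for events in log.values() for task, orig in events)
tasks = sorted({task for events in log.values() for task, _ in events})
originators = sorted({orig for events in log.values() for _, orig in events})

for o in originators:
    print(o.ljust(8), [counts[(o, t)] for t in tasks])  # rows of Table 6.6
```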

Problem 6.10. (Default mining) Consider the event log shown in Table 6.7. Determine the process that generated this event log using the α-algorithm. Then build an originator × task matrix in which each element ij is the number of times that originator i performs task j. Which groups of originators can you find? This problem has been adapted from Song and van der Aalst, "Towards comprehensive support for organizational mining", Decision Support Systems 46 (2008), pp. 300–317.

Solution 6.10 The task flow was already discovered in the solution of Problem 6.3 and corresponds to the model of Figure 6.3. The originator matrix is shown in Table 6.8. The groups of originators per task can easily be obtained from that table.

Table 6.7: Simple case of an event log with originator data.

Case ID   Log events
1         (A, John), (B, Mike), (C, John), (D, Sue), (E, Pete), (F, Jane), (H, Sue)
2         (A, John), (B, Fred), (C, John), (D, Clare), (E, Robert), (F, Mona), (H, Clare)
3         (A, John), (C, John), (B, Pete), (D, Sue), (E, Mike), (F, Jane), (H, Sue)
4         (A, John), (C, John), (B, Fred), (D, Clare), (G, Clare), (H, Clare)
5         (A, John), (C, John), (B, Robert), (D, Clare), (E, Fred), (F, Mona), (H, Clare)
6         (A, John), (B, Mike), (C, John), (D, Sue), (G, Sue), (H, Sue)

Table 6.8: Originator × task matrix of Table 6.7.

          A   B   C   D   E   F   G   H
John      6   0   6   0   0   0   0   0
Sue       0   0   0   2   0   0   1   3
Mike      0   2   0   0   1   0   0   0
Pete      0   1   0   0   1   0   0   0
Jane      0   0   0   0   0   2   0   0
Clare     0   0   0   2   0   0   1   3
Fred      0   2   0   0   1   0   0   0
Robert    0   1   0   0   1   0   0   0
Mona      0   0   0   0   0   2   0   0

Problem 6.11. (Joint activities metric) Using the originator × task matrix determined in the previous exercise, determine the distance between the originators according to the Pearson's correlation coefficient metric for joint activities. Also determine the resulting graph in which each directed arc between originators represents a correlation coefficient greater than 0.

Solution 6.11 Table 6.8, obtained above, allows us to compute distances between the originators (distances between the vectors corresponding to its rows). This distance can be interpreted as how close the originators are in terms of performing similar activities (joint activities). The metric most commonly used to measure this distance is the Pearson's correlation coefficient, whose computation is explained in Appendix 6.3. Despite the simplicity of the calculation, the number of operations makes the solution somewhat tedious, so we use the spreadsheet shown in Figure 6.8 (it can be downloaded from the link link a planilla).

From these coefficients it is possible to obtain the network of relations between the originators shown in Figure 6.9. The arcs drawn between any pair of nodes represent distances greater than 0.

Problem 6.12. (Hierarchical organisational mining) Using the previous result as a starting point, determine the grouping hierarchies based on the Agglomerative Hierarchical Clustering method.

Solution 6.12 The hierarchies among people, or among the groupings that operate in practice, can be found with the help of the Agglomerative Hierarchical Clustering method detailed in Appendix 6.3. This method successively merges the originators or groups that are closest to each other. For example, from the table of Pearson's coefficients shown at the bottom of Figure 6.8, the following originators are close: Clare and Sue; Jane and Mona; Pete and Robert; and, finally, Fred and Mike. This yields 4 pairs plus John, a total of 5 groups. With these groups the procedure continues until a predefined number of groups is reached. The complete calculation is long and tedious, so we suggest using the spreadsheet (it can be downloaded from the link link a planilla). The grouping process can be represented as the hierarchy shown in Figure 6.10.

Problem 6.13. (Social network – betweenness) Consider the social network among 4 originators shown in Figure 6.11. The relation between the originators is the handover metric (that is, who passes the work on to whom). Compute the betweenness indicator of each originator. How is this result interpreted? Now suppose that the criterion represented in the social network is subcontracting; how is this last result interpreted?

Solution 6.13 To compute the betweenness metric of each node of the social network of Figure 6.11, we first build a table showing all shortest paths between every pair of nodes. Then we use the formula of Section 6.3 to compute the indicator. Note that the network is directed and there are no arcs from a node to itself.

Table 6.9: Shortest paths between all pairs of nodes, used to compute c_B(3). For the computation, row 3, column 3 and the diagonal are excluded.

      1    2        3      4
1     –    1−2      1−3    1−4
2     –    –        –      –
3     –    3−2      –      –
4     –    4−3−2    4−3    –

We compute the betweenness centrality of originator 3, that is c_B(3), which is given by

\[
c_B(3) = \frac{1}{6}\Bigl(\underbrace{\tfrac{0}{1}}_{1-2} + \underbrace{\tfrac{0}{1}}_{1-4} + \underbrace{0}_{2-1} + \underbrace{0}_{2-4} + \underbrace{0}_{4-1} + \underbrace{\tfrac{1}{1}}_{4-2}\Bigr) = \frac{1}{6},
\]

where each summand is the quotient between the number of shortest paths i−j (starting at i and ending at j) that pass through node 3 and the total number of shortest paths i−j. For example, between node 4 and node 2 there is one shortest path, 4−3−2, and it passes through node 3, so the corresponding fraction is 1/1 (see Table 6.9). The number 6 that divides the whole sum is a normalisation factor corresponding to the total number of comparisons, (n − 1)(n − 2), where n is the number of nodes. In the same way one can compute c_B(1) = 0, c_B(2) = 0 and c_B(4) = 0.

Problem 6.14. (Social network – betweenness) Consider the social network among 5 originators shown in Figure 6.12. The relation between the originators is the handover metric (that is, who passes the work on to whom). Compute the betweenness indicator of each originator. How is this result interpreted? Now suppose that the criterion represented in the social network is subcontracting; how is this last result interpreted?

Solution 6.14 As in the previous exercise, to compute the betweenness metric of each node of the social network of Figure 6.12 we first build a table showing all shortest paths between every pair of nodes; then we use the formula of Section 6.3 to compute the indicator. Note that the network is directed and there are no arcs from a node to itself.

Table 6.10: Shortest paths between all pairs of nodes, used to compute c_B(1). For the computation, row 1, column 1 and the diagonal are excluded.

      1                   2                    3         4       5
1     –                   1−2                  1−4−3     1−4     1−4−5
2     2−4−3−1, 2−4−5−1    –                    2−4−3     2−4     2−4−5
3     3−1                 3−1−2                –         3−1−4   3−1−4−5
4     4−3−1, 4−5−1        4−3−1−2, 4−5−1−2     4−3       –       4−5
5     5−1                 5−1−2                5−1−4−3   5−1−4   –

We compute the betweenness centrality of originator 1, that is c_B(1), which is given by

\[
c_B(1) = \frac{1}{12}\Bigl(
\underbrace{\tfrac{0}{1}}_{2-3} + \underbrace{\tfrac{0}{1}}_{2-4} + \underbrace{\tfrac{0}{1}}_{2-5} +
\underbrace{\tfrac{1}{1}}_{3-4} + \underbrace{\tfrac{1}{1}}_{3-5} + \underbrace{\tfrac{0}{1}}_{4-5} +
\underbrace{\tfrac{1}{1}}_{3-2} + \underbrace{\tfrac{2}{2}}_{4-2} + \underbrace{\tfrac{0}{1}}_{4-3} +
\underbrace{\tfrac{1}{1}}_{5-2} + \underbrace{\tfrac{1}{1}}_{5-3} + \underbrace{\tfrac{1}{1}}_{5-4}
\Bigr) = \frac{7}{12},
\]

where each summand is the quotient between the number of shortest paths i−j (starting at i and ending at j) that pass through node 1 and the total number of shortest paths i−j. For example, between node 4 and node 2 there are two shortest paths, 4−3−1−2 and 4−5−1−2, and both pass through node 1, so the corresponding fraction is 2/2. The number 12 that divides the whole sum is a normalisation factor corresponding to the total number of comparisons, (n − 1)(n − 2), where n is the number of nodes. In the same way one can compute c_B(2) = 0, c_B(3) = 1/8, c_B(4) = 7/12 and c_B(5) = 1/8.

6.3 Technical Appendix

Pearson’s Correlation Coefficient


Suppose we have two vectors (of dimension at least 2), say (x_1, ..., x_n) and (y_1, ..., y_n). The distance between these two vectors can be computed according to the Pearson's correlation coefficient (ρ), given by the formula

\[
\rho(x, y) = \frac{n \sum x_i y_i - \sum x_i \sum y_i}
{\sqrt{n \sum x_i^2 - \bigl(\sum x_i\bigr)^2}\,\sqrt{n \sum y_i^2 - \bigl(\sum y_i\bigr)^2}},
\]

where ∑ x_i is understood as ∑_{i=1}^{n} x_i. This distance is easy to compute in a spreadsheet. For example, in Numbers for macOS it is possible to use the function =COEF.DE.CORREL(), which takes two vectors of the same dimension as arguments.

It is easy to see that ρ is a number between −1 and 1. The sign indicates the direction of the correlation: if two vectors are positively correlated, the angle between them is smaller than 90 degrees, and between 90 and 180 degrees otherwise. If the vectors are perfectly correlated (ρ equal to 1), they point in the same direction (in this sense they are equal). If the vectors are uncorrelated (ρ equal to 0), they are perpendicular. If the vectors are inversely correlated (ρ equal to −1), they point in opposite directions. That is why the Pearson's correlation coefficient between two vectors is said to be a measure of the cosine of the angle formed between them (more precisely, between the mean-centred vectors).
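A minimal sketch of this formula, applied to two rows of the originator × task matrix of Table 6.8, is shown below; a spreadsheet function such as the one mentioned above gives the same result.

```python
# Minimal sketch of the Pearson's correlation coefficient formula above.
from math import sqrt

def pearson(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx, syy = sum(a * a for a in x), sum(b * b for b in y)
    return (n * sxy - sx * sy) / (sqrt(n * sxx - sx ** 2) * sqrt(n * syy - sy ** 2))

sue  = [0, 0, 0, 2, 0, 0, 1, 3]  # row "Sue" of Table 6.8
mike = [0, 2, 0, 0, 1, 0, 0, 0]  # row "Mike" of Table 6.8
print(round(pearson(sue, mike), 3))  # negative: Sue and Mike work on disjoint tasks
```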

Betweenness Centrality
In graph theory and network analysis there are several measures of the centrality of a vertex within a graph, which determine the relative importance of that vertex (for example, how important a person is in a social network or, in an urban network, how heavily used a road is). Several centrality measures are widely used in network analysis: degree centrality, betweenness, closeness and eigenvector centrality. We will not explain each of these measures here; instead, we show how to compute the betweenness centrality.

The idea is that vertices that lie on many of the shortest paths between other vertices have a higher betweenness than those that do not. Consider a graph G = (V, E) with a set V of vertices and a set E of arcs between those vertices. The betweenness centrality of a node v ∈ V is defined as

\[
c_B(v) = \sum_{\substack{s \neq v \neq t \\ s \neq t}} \frac{\sigma_{st}(v)}{\sigma_{st}},
\]

where σ_st is the number of shortest paths between nodes s and t, and σ_st(v) is the number of those shortest paths that pass through node v. Normally this value is normalised by dividing it by the number of pairs of vertices that do not include v, which is (n − 1)(n − 2) for directed graphs and (n − 1)(n − 2)/2 for undirected graphs. For the computation we assume that 0/0 takes the value 0.
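The following sketch computes this normalised betweenness for small directed graphs by enumerating all shortest paths, and checks the result of Problem 6.13; the edge list is read off from the length-one paths in Table 6.9.

```python
# Minimal sketch: normalised betweenness centrality for a small directed graph,
# taking 0/0 as 0. Suitable only for small examples such as Figs. 6.11 and 6.12.
from collections import defaultdict
from itertools import permutations

def all_shortest_paths(edges, nodes):
    """Enumerate every shortest directed path for each ordered pair of nodes."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    paths = {}
    for s, t in permutations(nodes, 2):
        found, frontier = [], [[s]]
        while frontier and not found:          # expand level by level
            nxt = []
            for path in frontier:
                for v in adj[path[-1]]:
                    if v in path:              # keep paths simple
                        continue
                    (found if v == t else nxt).append(path + [v])
            frontier = nxt
        paths[(s, t)] = found                  # [] if t is unreachable from s
    return paths

def betweenness(edges, nodes):
    paths, n, c = all_shortest_paths(edges, nodes), len(nodes), {}
    for v in nodes:
        total = sum(sum(v in p for p in sp) / len(sp)
                    for (s, t), sp in paths.items() if sp and v not in (s, t))
        c[v] = total / ((n - 1) * (n - 2))     # normalisation for directed graphs
    return c

edges = [(1, 2), (1, 3), (1, 4), (3, 2), (4, 3)]   # network of Fig. 6.11 (Table 6.9)
print(betweenness(edges, [1, 2, 3, 4]))            # c_B(3) = 1/6, the rest 0
```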

Agglomerative Hierarchical Clustering


Suppose that V is a set of n originators represented by m-dimensional vectors. For any pair x_i, x_j of originators in V a distance ρ(x_i, x_j) ∈ R is defined. We want to build k* subsets of V (1 < k* < n) such that the elements contained in each subset are at minimum distance from one another. We assume that a group of two or more nodes can be represented by the sum of the vectors that belong to that group.

Data: the set V = {x_1, ..., x_n}; obtain k* subsets
Result: k* subsets
k ← n;
while k > k* do
    find the two closest subsets, say D_i and D_j;
    merge D_i and D_j;
    k ← k − 1;
end

Algorithm 1: Algorithm of the Agglomerative Hierarchical Clustering technique for determining hierarchies according to distances.
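A minimal sketch of Algorithm 1 is shown below, applied to the rows of Table 6.8. It assumes, as stated above, that a merged group is represented by the sum of its members' vectors, and it uses the Pearson's correlation coefficient as the closeness measure, so the "closest" pair is the one with the highest coefficient.

```python
# Minimal sketch of Algorithm 1 on the originator profiles of Table 6.8.
import numpy as np

def pearson(x, y):
    return float(np.corrcoef(x, y)[0, 1])

def agglomerate(vectors, k_star):
    """vectors: dict name -> vector; returns k_star groups (tuples of names)."""
    groups = {(name,): np.asarray(vec, dtype=float) for name, vec in vectors.items()}
    while len(groups) > k_star:
        keys = list(groups)
        a, b = max(((p, q) for i, p in enumerate(keys) for q in keys[i + 1:]),
                   key=lambda pq: pearson(groups[pq[0]], groups[pq[1]]))
        groups[a + b] = groups.pop(a) + groups.pop(b)  # merged group = sum of vectors
    return list(groups)

profiles = {  # rows of Table 6.8 (tasks A..H)
    "John":   [6, 0, 6, 0, 0, 0, 0, 0],
    "Sue":    [0, 0, 0, 2, 0, 0, 1, 3],
    "Mike":   [0, 2, 0, 0, 1, 0, 0, 0],
    "Pete":   [0, 1, 0, 0, 1, 0, 0, 0],
    "Jane":   [0, 0, 0, 0, 0, 2, 0, 0],
    "Clare":  [0, 0, 0, 2, 0, 0, 1, 3],
    "Fred":   [0, 2, 0, 0, 1, 0, 0, 0],
    "Robert": [0, 1, 0, 0, 1, 0, 0, 0],
    "Mona":   [0, 0, 0, 0, 0, 2, 0, 0],
}
print(agglomerate(profiles, 5))  # the five groups of Solution 6.12
```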

6.4 Process Mining Software



Figure 6.8: Spreadsheet for computing the Pearson's correlation coefficients.

Figure 6.9: Network of relations between originators whose arcs represent Pearson's correlation coefficient distances greater than 0.

Figure 6.10: Hierarchy obtained according to the joint activities criterion, measured with the Pearson's correlation coefficient metric.

Figure 6.11: Simple network of relations among 4 originators.

Figure 6.12: Simple network of relations among 5 originators.
Bibliography

[1] Berliner BPM-Offensive. BPMN 2.0 – Business Process Model and Notation. Poster available at poster-EN, poster-ES. 2020. URL: http://www.bpmb.de/.

[2] Object Management Group. BPMN 2.0 by Example. Tech. rep. Available at https://www.omg.org/cgi-bin/doc?dtc/10-06-02 (4.2020). Object Management Group, 2010.

[3] Object Management Group. Business Process Model and Notation (BPMN 2.0). Tech. rep. Available at https://www.omg.org/spec/BPMN/2.0/PDF (4.2020). Object Management Group, 2011.

[4] Object Management Group. Decision Model and Notation (DMN) v1.1. Tech. rep. Available at https://www.omg.org/spec/DMN/1.1/PDF (7.2016). Object Management Group, 2016.

[5] John S. Hammond, Ralph L. Keeney, and Howard Raiffa. Smart Choices: A Practical Guide to Making Better Decisions. Harvard Business Review Press, 2015.

[6] Henry Mintzberg. Structure in Fives: Designing Effective Organizations. Prentice-Hall, 1993.

[7] M. E. Porter. Competitive Advantage: Creating and Sustaining Superior Performance. New York: Free Press, 1985.

[8] M. Song and W. M. P. van der Aalst. "Towards comprehensive support for organizational mining". In: Decision Support Systems 46 (2008), pp. 300–317.

[9] Roberto de la Vega, Mathias Kirchmer, and Sigifredo Laengle. "The real-time enterprise". In: Industrial Management 50.4 (2008). Available at 2008-rte (7.2015), pp. 16–19.
